\section{Introduction}
Thermodynamic properties of strongly interacting matter at nonzero
baryon density and high temperature have been quantified numerically within
lattice QCD (LQCD) \cite{L_lgt1,L_lgt2,L_lgt3}. The LQCD results
demonstrate that QCD exhibits both dynamical chiral symmetry breaking
and confinement at finite temperature and density. The LQCD
equation of state indicates a clear separation between the confined
hadronic phase and the deconfined quark--gluon plasma phase. However,
since the thermodynamics at large baryon densities and near the chiral
limit is still not accessible to first-principles LQCD
calculations, many phenomenological models and effective theories have
been developed
~\cite{Gocksch,Buballa:review,Meisinger,Fukushima,Mocsy,PNJL,CS,DLS,Megias,IK,Fukushima:strong,Schaefer:PQM,kap,sasaki,model}.
The hadronic properties at low energy, as well as the nature of the
chiral phase transition at finite temperature and density, have been
successfully explored and described in such effective models.
The physics of color deconfinement and its relation to chiral
symmetry breaking has recently been studied in terms of effective
models. The idea of extending existing chiral Lagrangians, such as
the Nambu--Jona-Lasinio or quark--meson Lagrangians, by coupling
quarks to a uniform temporal background gauge field (the Polyakov loop)
was an important step forward in these
studies~\cite{Fukushima,Schaefer:PQM}.
It was shown that the Polyakov loop extended Nambu--Jona-Lasinio
(PNJL) \cite{PNJL} and quark--meson (PQM) \cite{Schaefer:PQM} models
can reproduce essential properties of QCD thermodynamics obtained
in LQCD already within the mean-field approximation. However,
to correctly account for the critical behavior and scaling properties
near the chiral phase transition one needs to go beyond the
mean-field approximation and include quantum fluctuations and
non-perturbative dynamics. This can be achieved by using methods
based on the functional renormalization group
(FRG)~\cite{Wetterich,Morris,Ellwanger,Berges:review,Schaefer:2006ds,SFR,Herbst:2010rf}.
Following our previous work~\cite{Skokov:2010wb} we use a truncation
of the PQM model which is suitable for the functional renormalization
group analysis to describe the thermodynamics beyond the mean-field
approximation. The functional renormalization group approach in the PQM
model is used to take into account fluctuations of the meson fields,
while the Polyakov loop is treated as a background field on the
mean-field level.
In contrast to our previous work~\cite{Skokov:2010wb} we extend the
calculations to finite chemical potential. We determine the phase
diagram and the position of the critical end point (CEP) in the PQM
model by exploring the dependence of the chiral order parameter and
the quark number susceptibility on the thermal parameters. We calculate
the moments (cumulants) of the net-quark number density fluctuations, $c_n$,
at finite temperature and chemical potential in the presence
of mesonic fluctuations.
We discuss the influence of non-perturbative effects on the properties of
the first four moments $c_n$ near the chiral crossover transition. We show
that the cumulants $c_n$ exhibit a peculiar structure and, for sufficiently
large values of the chemical potential, can become negative near the
crossover transition.
We calculate the ratios $c_3/c_1$ and $c_4/c_2$ and discuss their roles as
probes of the deconfinement and chiral phase transitions.
Finally, we summarize the properties of different susceptibilities near the
chiral phase transition at finite net-quark density within Landau mean-field
and scaling theories.
\section{The Polyakov-quark-meson model}\label{sec:pqm}
The model used in this paper to explore the chiral phase
transition at finite temperature and density within the FRG approach is
the Polyakov loop-extended two-flavor quark--meson model. In general,
the PQM model, being an effective realization of the low-energy
sector of QCD, cannot describe confinement phenomena, because the
local $SU(N_c)$ invariance of QCD is replaced by a global
symmetry. However, it was argued that by combining the chiral
quark--meson (QM) model with the Polyakov loop potential the confining
properties of QCD can be approximately accounted for~\cite{Fukushima,
Fukushima:strong, Schaefer:PQM}.
The Lagrangian of the PQM model reads \cite{Schaefer:PQM}
\begin{eqnarray}\label{eq:pqm_lagrangian}
{\cal L} &=& \bar{q} \, \left[i\gamma^\mu {D}_\mu - g (\sigma + i \gamma_5
\vec \tau \vec \pi )\right]\,q
+\frac 1 2 (\partial_\mu \sigma)^2+ \frac{ 1}{2}
(\partial_\mu \vec \pi)^2
\nonumber \\
&& \qquad - U(\sigma, \vec \pi ) -{\cal U}(\ell,\ell^{*})\ .
\end{eqnarray}
The coupling between the effective gluon field and quarks is
implemented through the covariant derivative
\begin{equation}
D_{\mu}=\del_{\mu}-iA_{\mu},
\end{equation}
where $A_\mu=g\,A_\mu^a\,\lambda^a/2$. The spatial components of the
gluon field are neglected, i.e. $A_{\mu}=\delta_{\mu0}A_0$. Moreover,
${\cal U}(\ell,\ell^{*})$ is the effective potential for the gluon
field expressed in terms of the thermal expectation values of the
color trace of the Polyakov loop and its conjugate
\begin{equation}
\ell=\frac{1}{N_c}\vev{\Tr_c L(\vec{x})},\quad \ell^{*}=\frac{1}{N_c}\vev{\Tr_c
L^{\dagger}(\vec{x})},
\end{equation}
with
\begin{eqnarray}
L(\vec x)={\mathcal P} \exp \left[ i \int_0^\beta d\tau A_4(\vec x , \tau)
\right]\,,
\end{eqnarray}
where ${\mathcal P}$ stands for the path ordering, $\beta=1/T$ and
$A_4=i\,A_0$. In the $O(4)$ representation the meson field is
introduced as $\phi=(\sigma,\vec{\pi})$ and the corresponding
$SU(2)_L\otimes SU(2)_R$ chiral representation is defined by
$\sigma+i\vec{\tau}\cdot\vec{\pi}\gamma_5$.
The purely mesonic potential of the model, $U(\sigma,\vec{\pi})$, is
defined as
\begin{equation}
U(\sigma,\vec{\pi})=\frac{\lambda}{4}\left(\sigma^2+\vec{\pi}
^2-v^2\right)^2-c\sigma,
\end{equation}
while the effective potential of the gluon field is parametrized in
such a way as to preserve the $Z(3)$ invariance:
\begin{equation}
\frac{{\cal U}(\ell,\ell^{*})}{T^4}=
-\frac{b_2(T)}{2}\ell^{*}\ell
-\frac{b_3}{6}(\ell^3 + \ell^{*3})
+\frac{b_4}{4}(\ell^{*}\ell)^2\,\label{eff_potential}.
\end{equation}
The parameters,
\begin{eqnarray}
\hspace{-4ex}
b_2(T) &=& a_0 + a_1 \left(\frac{T_0}{T}\right) + a_2
\left(\frac{T_0}{T}\right)^2 + a_3 \left(\frac{T_0}{T}\right)^3\,
\end{eqnarray}
with $a_0 = 6.75$, $a_1 = -1.95$, $a_2 = 2.625$, $a_3 = -7.44$, $b_3 =
0.75$ and $b_4 = 7.5$ were chosen to reproduce the equation of state
of the pure SU$_c$(3) lattice gauge theory. Consequently, at
$T_0\simeq 270$ MeV the potential~(\ref{eff_potential}) yields a
first-order deconfinement phase transition. Several alternative
parametrizations of the effective gluon potential were
also explored in Refs.~\cite{Ratti:2007jf,Fukushima:2008wg,Schaefer:2009ui}.
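For orientation, a minimal numerical sketch of this parametrization may be
helpful (Python; the helper names and the sample scan are ours, the
parameter values are those quoted above):
\begin{verbatim}
import numpy as np

a0, a1, a2, a3 = 6.75, -1.95, 2.625, -7.44
b3, b4, T0 = 0.75, 7.5, 0.270          # T0 in GeV

def b2(T):
    """Temperature-dependent coefficient b2(T)."""
    x = T0 / T
    return a0 + a1*x + a2*x**2 + a3*x**3

def U_over_T4(ell, ellbar, T):
    """Polyakov-loop potential U(ell,ell*)/T^4, Eq. (eff_potential)."""
    return (-0.5*b2(T)*ellbar*ell
            - (b3/6.0)*(ell**3 + ellbar**3)
            + 0.25*b4*(ellbar*ell)**2)

# scan the real direction ell = ell* below and above T0
for T in (0.20, 0.30):
    print(T, [round(U_over_T4(l, l, T), 3)
              for l in np.linspace(0.0, 1.0, 5)])
\end{verbatim}
Minimizing ${\cal U}$ alone yields $\ell\simeq0$ well below $T_{0}$ and
$\ell\to1$ far above it, in accordance with the first-order transition of
the pure gauge theory.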
\subsection{The FRG method in the PQM model}\label{sec:rg}
In order to formulate non-perturbative thermodynamics within the PQM
model we adopt a method based on the functional renormalization group (FRG).
The FRG is based on an infrared regularization of the full propagator
with a momentum-scale parameter $k$, which turns the corresponding
effective action into a scale-dependent functional
$\Gamma_k$~\cite{Wetterich, Morris, Ellwanger, Berges:review}.
In the PQM model the formulation of the FRG flow equation
would require the implementation of the Polyakov loop as a dynamical
field. However, in the current calculation we treat the Polyakov loop
as a background field which is introduced self-consistently on the
mean-field level.
\begin{figure*}
\includegraphics*[width=7.5cm]{Phase_diag_MF.eps}
\includegraphics*[width=7.5cm]{Phase_diag.eps}
\caption { The phase diagrams for the PQM model in the mean-field
approximation (left panel) and in the FRG approach (right
panel). The shaded regions are defined by $5\%$-deviations of the
temperature derivative of the chiral order parameter,
$d\sigma/dT$, from its maximal value. The arrows show the lines
corresponding to different values of $\mu/T$. The dashed curves
indicate isentropes for $s/n_{q}=$ 2, 5, 10, 25, 50, 200. }
\label{fig:PD}
\end{figure*}
\begin{widetext}
Following our previous work~\cite{Skokov:2010wb} we formulate the
flow equation for the scale-dependent grand canonical potential for
the quark and mesonic subsystems
\begin{eqnarray}\label{eq:frg_flow}
\del_k \Omega_k(\ell, \ell^*; T,\mu)&=&\frac{k^4}{12\pi^2}
\left\{ \frac{3}{E_\pi} \Bigg[ 1+2n_B(E_\pi;T)\Bigg]
+\frac{1}{E_\sigma} \Bigg[ 1+2n_B(E_\sigma;T)
\Bigg] \right. \\ \nonumber && \left. -\frac{4 N_c N_f}{E_q} \Bigg[ 1-
N(\ell,\ell^*;T,\mu)-\bar{N}(\ell,\ell^*;T,\mu)\Bigg] \right\}.
\end{eqnarray}
Here $n_B(E_{\pi,\sigma};T)$ is the bosonic distribution function
\begin{equation*}
n_B(E_{\pi,\sigma};T)=\frac{1}{\exp({E_{\pi,\sigma}/T})-1}
\end{equation*}
with the pion and sigma energies given by
\begin{equation*}
E_\pi = \sqrt{k^2+\overline{\Omega}^{\,\prime}_k}\,,\qquad E_\sigma
=\sqrt{k^2+\overline{\Omega}^{\,\prime}_k+2\rho\,\overline{\Omega}^{\,\prime\prime}_k}\,,
\end{equation*}
where the primes denote derivatives with respect to $\rho$ and
$\overline{\Omega}=\Omega+c\sigma$.
The functions $N(\ell,\ell^*;T,\mu)$ and
$\bar{N}(\ell,\ell^*;T,\mu)$ defined by
\begin{eqnarray}\label{n1}
N(\ell,\ell^*;T,\mu)&=&\frac{1+2\ell^*\exp[\beta(E_q-\mu)]+\ell \exp[2\beta(E_q-\mu)]}{1+3\ell \exp[2\beta(E_q-\mu)]+
3\ell^*\exp[\beta(E_q-\mu)]+\exp[3\beta(E_q-\mu)]}, \\
\bar{N}(\ell,\ell^*;T,\mu)&=&N(\ell^*,\ell;T,-\mu)
\label{n2}
\end{eqnarray}
are fermionic distribution functions modified by the coupling to gluons.
The quark energy is given by
\begin{equation}
\label{dispertion}
E_q =\sqrt{k^2+2g^2\rho}.
\end{equation}
\end{widetext}
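The structure of Eqs.~(\ref{n1}) and~(\ref{n2}) is easily checked
numerically. A minimal sketch (Python; ours): for $\ell=\ell^{*}=1$ the
function $N$ reduces to the ordinary Fermi--Dirac distribution, while for
$\ell=\ell^{*}=0$ only the three-quark mode survives:
\begin{verbatim}
import numpy as np

def N_quark(ell, ellbar, Eq, T, mu):
    """Polyakov-loop-modified quark distribution, Eq. (n1)."""
    x = np.exp((Eq - mu) / T)
    num = 1.0 + 2.0*ellbar*x + ell*x**2
    den = 1.0 + 3.0*ellbar*x + 3.0*ell*x**2 + x**3
    return num / den

def Nbar_quark(ell, ellbar, Eq, T, mu):
    """Antiquark distribution, Eq. (n2): swap ell <-> ell*, mu -> -mu."""
    return N_quark(ellbar, ell, Eq, T, -mu)

E, T, mu = 0.4, 0.15, 0.1
assert np.isclose(N_quark(1, 1, E, T, mu),
                  1/(np.exp((E - mu)/T) + 1))    # Fermi-Dirac limit
assert np.isclose(N_quark(0, 0, E, T, mu),
                  1/(np.exp(3*(E - mu)/T) + 1))  # "statistical confinement"
\end{verbatim}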
\begin{figure*}
\includegraphics*[width=7.5cm]{c1_mf.eps}
\includegraphics*[width=7.2cm]{c1_frg.eps}
\caption {The baryon number density normalized by $T^3$, $c_1=n_q/T^3$, as a
function of temperature for different values of $\mu/T$ for the {
PQM} model in the mean-field approximation (left panel) and in
the FRG approach (right panel).}
\label{fig:c1}
\end{figure*}
The minimum of the thermodynamic potential is determined by the
stationarity condition
\begin{equation}
\left. \frac{d \Omega_k}{ d \sigma} \right|_{\sigma=\sigma_k}=\left. \frac{d
\overline{\Omega}_k}{ d \sigma} \right|_{\sigma=\sigma_k} - c =0.
\label{eom_sigma}
\end{equation}
The flow equation~(\ref{eq:frg_flow}) is solved numerically with the
initial cutoff $\Lambda=1.2$ GeV (see details in
Ref.~\cite{Skokov:2010wb}). The initial conditions for the flow are
chosen to reproduce the vacuum properties: the physical pion mass $m_{\pi}=138$
MeV, the pion decay constant $f_{\pi}=93$ MeV, the sigma mass
$m_{\sigma}=600$ MeV, and the constituent quark mass $m_q=300$ MeV at
the scale $k=0$. The symmetry-breaking term, $c=m_\pi^2 f_\pi$,
corresponds to an external field and consequently does not flow. In
this work we neglect the flow of the Yukawa coupling $g$, which is
not expected to be significant for the present studies (see, e.g.,
Refs.~\cite{Jungnickel,Palhares:2008yq}).
By solving Eq.~(\ref{eq:frg_flow}) one obtains the thermodynamic
potential of the quark and mesonic subsystems, $\Omega_{k\to0} (\ell,
\ell^*;T, \mu)$, as a function of the Polyakov loop variables $\ell$
and $\ell^*$. The full thermodynamic potential $\Omega(\ell, \ell^*;T,
\mu)$ of the PQM model, which includes the quark, meson and gluon
degrees of freedom, is obtained by adding the effective gluon potential
${\cal U}(\ell, \ell^*)$ to $\Omega_{k\to0} (\ell, \ell^*;T, \mu)$:
\begin{equation}
\Omega(\ell, \ell^*;T, \mu) = \Omega_{k\to0} (\ell, \ell^*;T, \mu) + {\cal U}(\ell, \ell^*).
\label{omega_final}
\end{equation}
At a given temperature and chemical potential, the Polyakov loop
variables, $\ell$ and $\ell^*$, are determined by the stationarity
conditions:
\begin{eqnarray}
\label{eom_for_PL_l}
&&\frac{ \partial }{\partial \ell} \Omega(\ell, \ell^*;T, \mu) =0, \\
&&\frac{ \partial }{\partial \ell^*} \Omega(\ell, \ell^*;T, \mu) =0.
\label{eom_for_PL_ls}
\end{eqnarray}
The thermodynamic potential~(\ref{omega_final}) does not contain
contributions of statistical modes with momenta larger than the cutoff
$\Lambda$. In order to obtain the correct high-temperature behavior
of thermodynamic functions we need to supplement the FRG potential with the
contribution of high-momentum states. A simplified model for
implementing such states was proposed in Ref.~\cite{Braun:2003ii},
where the contributions of the $k > \Lambda$ states to the flow are
approximated by those of a non-interacting gas of quarks and gluons.
For the PQM model we generalize this procedure by including for $k >
\Lambda$ the flow equation of quarks interacting with the Polyakov loop~\cite{Skokov:2010wb}
\begin{eqnarray}\label{eq:qcdflow}
\del_k \Omega_k^{\Lambda}(T,\mu)&=&-\frac{N_c N_f k^3}{3\pi^2}
\\
&& \hspace*{-1cm} \Big[ 1-
N(\ell,\ell^*;T,\mu)-\bar{N}(\ell,\ell^*;T,\mu)\Big],\nonumber
\end{eqnarray}
where the dynamical quark mass is neglected. In addition, since the
effective gluon potential ${\cal U}(\ell, \ell^*)$ is fitted to
reproduce the Stefan-Boltzmann limit at high temperatures, the explicit
gluon contribution is omitted for consistency.
To obtain the complete thermodynamic potential of the PQM model we
integrate Eq.~(\ref{eq:qcdflow}) from $k=\infty$ to $k=\Lambda$ where
we switch to the PQM flow equation (\ref{eq:frg_flow}). Divergent
terms in the high-momentum flow equation (\ref{eq:qcdflow}) are
independent of mesonic and gluonic fields as well as of temperature
and chemical potential. Consequently, such terms can be absorbed into an
unobservable constant shift of the thermodynamic potential.
\begin{figure*}
\includegraphics*[width=7.5cm]{c2_mf.eps}
\includegraphics*[width=7.2cm]{c2_frg.eps}
\caption{ The coefficient $c_2$ as a function of temperature for
  different values of $\mu/T$ for the PQM model in the mean-field
  approximation (left panel) and in the FRG approach (right panel).
}
\label{fig:c2}
\end{figure*}
\subsection{The mean-field approximation} \label{sec:mf}
To illustrate the influence of mesonic fluctuations
on the thermodynamics of the PQM model we will compare
the FRG results with those obtained under the mean-field
approximation. In the latter, both quantum and thermal fluctuations
are neglected and the mesonic fields are replaced by their classical
expectation values.
\begin{widetext}
In the PQM model the thermodynamic potential derived under the
mean-field approximation reads~\cite{Schaefer:PQM}:
\begin{equation}
\Omega_{MF} = {\cal U}(\ell,\ell^*) + U(\langle\sigma\rangle, \langle\pi\rangle=0) + \Omega_{q\bar{q}} (\langle\sigma\rangle,\ell,\ell^*).
\label{Omega_MF}
\end{equation}
Here, the contribution of quarks with dynamical mass
$m_q=g\langle\sigma\rangle$ is given by
\begin{equation}
\Omega_{q\bar{q}} (\langle\sigma\rangle, \ell,\ell^*) = - 2 N_f T \int \frac{d^3 p}{(2\pi)^3} \left\{
\frac{N_c E_q}{T}
+ \ln
g^{(+)}(\langle\sigma\rangle, \ell, \ell^*; T, \mu) + \ln
g^{(-)}(\langle\sigma\rangle,\ell, \ell^*; T, \mu) \right\},
\label{Omega_MF_q}
\end{equation}
where
\begin{eqnarray}
\label{g}
g^{(+)}(\langle\sigma\rangle,\ell, \ell^*; T, \mu) &=& 1 + 3 \ell
\exp[-(E_q-\mu)/T] + 3 \ell^*\exp[-2(E_q-\mu)/T] + \exp[-3(E_q-\mu)/T], \\
g^{(-)}(\langle\sigma\rangle,\ell, \ell^*; T, \mu) &=& g^{(+)} (\langle\sigma\rangle,\ell^*, \ell; T, -\mu);
\end{eqnarray}
and $E_q = \sqrt{p^2+m_q^2}$ is the quark quasi-particle energy. The
first term in Eq.~(\ref{Omega_MF_q}) is a divergent vacuum
fluctuation contribution which has to be properly
regularized. Following our previous studies~\cite{MFonVT} we use
dimensional regularization, which introduces an arbitrary scale
$M$. The renormalization procedure removes the dependence of the
physics on this scale. The finite contribution of
the vacuum term to Eq.~(\ref{Omega_MF_q}) reads~\cite{MFonVT}
\begin{equation}
\Omega_{q\bar{q}}^{vac} = - \frac{N_c N_f}{8 \pi^2} m_q^4 \ln\left(\frac{m_q}{M}\right).
\label{vacuum_term}
\end{equation}
The importance and influence of this contribution on the thermodynamics
of chiral models was demonstrated and studied in detail in Refs. \cite{MFonVT} and
\cite{Nakano:2009ps}.
\end{widetext}
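As an illustration, the thermal part of Eq.~(\ref{Omega_MF_q}) and the
vacuum term~(\ref{vacuum_term}) can be evaluated directly. A sketch
(Python; ours, with illustrative parameter values and a finite upper limit
for the momentum integral):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Nc, Nf, g, M = 3, 2, 3.2, 0.3    # M: regularization scale (GeV)

def g_plus(ell, ellbar, Eq, T, mu):
    """g^(+) of Eq. (g)."""
    x = np.exp(-(Eq - mu) / T)
    return 1 + 3*ell*x + 3*ellbar*x**2 + x**3

def Omega_qq_thermal(sigma, ell, ellbar, T, mu):
    """Thermal part of Eq. (Omega_MF_q); d^3p/(2pi)^3 = p^2 dp/(2 pi^2)."""
    mq = g * sigma
    def integrand(p):
        Eq = np.sqrt(p**2 + mq**2)
        return p**2 * (np.log(g_plus(ell, ellbar, Eq, T, mu))
                       + np.log(g_plus(ellbar, ell, Eq, T, -mu)))
    val, _ = quad(integrand, 0.0, 3.0)   # 3 GeV upper limit suffices
    return -2*Nf*T*val/(2*np.pi**2)

def Omega_vac(sigma):
    """Renormalized vacuum term, Eq. (vacuum_term)."""
    mq = g * sigma
    return -Nc*Nf/(8*np.pi**2) * mq**4 * np.log(mq/M)
\end{verbatim}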
The equations of motion for the mean fields are obtained by requiring
that the thermodynamic potential is stationary with respect to changes
of $\langle\sigma\rangle$, $\ell$ and $\ell^*$:
\begin{equation}
\frac{\partial \Omega_{MF}}{\partial \langle\sigma\rangle} = \frac{\partial \Omega_{MF}}{\partial \ell} = \frac{\partial \Omega_{MF}}{\partial \ell^*} =0.
\label{EOM_MF}
\end{equation}
The model parameters are fixed to reproduce the same vacuum physics as
in the FRG calculation.
\begin{figure*}
\includegraphics*[width=7.5cm]{c3_mf.eps}
\includegraphics*[width=7.2cm]{c3_frg.eps}
\caption {The coefficient $c_3$ as a function of temperature for
  different values of $\mu/T$ for the PQM model in the mean-field
  approximation (left panel) and in the FRG approach (right
  panel). }
\label{fig:c3}
\end{figure*}
\section{Thermo\-dynamics of the PQM model}\label{sec:thermo}
The thermodynamic potential introduced in Eqs.~(\ref{omega_final}) and
(\ref{Omega_MF}) provides the basis for exploring the critical properties
of the PQM model at finite baryon density within the FRG approach and under
the mean-field approximation.
To find the potential at finite temperature and chemical potential
one needs to solve the FRG flow equation (\ref{eq:frg_flow}).
This equation is solved numerically by means of the Taylor
series expansion method~\cite{Litim:2002cf,Skokov:2010wb}. This method has
been successful in studies of thermodynamics at finite density and
temperature~\cite{SFR,Nakano:2009ps,Skokov:2010wb}
in the regime where the system exhibits a crossover or a second-order
phase transition. For the solution of the FRG flow equations in
the regime of the first-order phase transition, where the
thermodynamic potential develops two minima, different
numerical methods are required~\cite{Adams:1995cv,Nakano:2009ps}. In
the present work we restrict our considerations to the parameter range
where the PQM model exhibits a crossover or a second-order chiral
phase transition.
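For illustration, the following sketch (Python; ours) integrates the flow
equation~(\ref{eq:frg_flow}) in a finite-order Taylor expansion around a
fixed expansion point; the Polyakov-loop background is set to
$\ell=\ell^{*}=1$, so that $N$ and $\bar{N}$ reduce to Fermi functions,
the initial couplings are toy values, and the flow is stopped at a finite
infrared scale (reaching $k\to0$ in the broken phase requires the more
careful treatment of Ref.~\cite{Skokov:2010wb}):
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import solve_ivp

T, mu, g, Nc, Nf = 0.15, 0.05, 3.2, 3, 2     # GeV units
Lam, rho0, M = 1.2, 0.5*0.093**2, 4          # cutoff, expansion point, order

def rhs(k, rho, a):
    """d_k Omega_k(rho) of Eq. (eq:frg_flow), with ell = ell* = 1."""
    d = rho - rho0
    Op  = sum(n*a[n]*d**(n-1) for n in range(1, M+1))        # Omega'
    Opp = sum(n*(n-1)*a[n]*d**(n-2) for n in range(2, M+1))  # Omega''
    Epi, Esi = np.sqrt(k**2 + Op), np.sqrt(k**2 + Op + 2*rho*Opp)
    Eq = np.sqrt(k**2 + 2*g**2*rho)
    nB = lambda E: 1.0/np.expm1(E/T)
    nF = lambda s: 1.0/(np.exp((Eq - s*mu)/T) + 1.0)
    return k**4/(12*np.pi**2)*(3/Epi*(1 + 2*nB(Epi))
           + 1/Esi*(1 + 2*nB(Esi)) - 4*Nc*Nf/Eq*(1 - nF(1) - nF(-1)))

def dadk(k, a):
    """Project the flow onto Taylor coefficients a_0..a_M by a local fit."""
    h = 0.2*rho0
    pts = rho0 + h*np.linspace(-1, 1, 2*M + 1)
    return P.polyfit(pts - rho0, [rhs(k, r, a) for r in pts], M)

a_init = np.zeros(M + 1)
a_init[1], a_init[2] = 0.3, 3.0              # symmetric-regime toy couplings
sol = solve_ivp(dadk, (Lam, 0.3), a_init, rtol=1e-6)
print("couplings at k = 0.3 GeV:", sol.y[:, -1])
\end{verbatim}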
\subsection{The phase diagram}
The PQM model is expected to belong to the same universality class as QCD
and thus should exhibit a generic phase diagram with a critical point at
non-vanishing chemical potential. In the chiral limit the phase
boundary is identified by divergences of the chiral susceptibility. At
finite quark mass the chiral transition is of the crossover type. In this
case the pseudocritical temperature and chemical potential are located by
determining the maximum of the chiral susceptibility or of the temperature
derivative of the chiral order parameter. The position of the CEP is
identified through the properties of the sigma mass: the temperature and
chemical potential at which the sigma mass vanishes determine the location
of the CEP. One can equivalently consider the net-quark number
fluctuations, which according to $Z(2)$ universality diverge at the CEP.
Fig.~\ref{fig:PD} shows the phase diagrams of the PQM model obtained
in the FRG approach and in the mean-field approximation. For the
physical pion mass and moderate values of the chemical potential the
PQM model exhibits a smooth crossover chiral transition. In
Fig.~\ref{fig:PD}
we define the transition region as the band within which the temperature
derivative of the order parameter deviates by up to $5\%$ from its
maximal value. At larger chemical potentials the crossover line
terminates at the CEP, where the transition is of second order
and belongs to the universality class of the three-dimensional
Ising model.
Comparing the phase diagrams of the PQM model obtained
within the mean-field approximation and within the FRG approach, we find a
clear shift of the chiral phase boundary to higher temperatures due to
mesonic fluctuations. Such shifts were previously found in the QM model
within the FRG approach~\cite{Schaefer:2006ds,Nakano:2009ps}. However, in
our studies, owing to the gluonic background explicitly included in the
FRG calculations, we also find a significant shift of the CEP to higher
temperature.
In the following we focus on observables that are related to the
non-vanishing
net-quark number density and consider moments of quark number fluctuations.
We discuss how such fluctuations depend on the quark chemical
potential in the presence of the gluonic background within the FRG approach.
\begin{figure*}
\includegraphics*[width=7.5cm]{c4_mf.eps}
\includegraphics*[width=7.3cm]{c4_frg.eps}
\caption{ The coefficient $c_4$ as a function of temperature for
  different values of $\mu/T$ for the PQM model in the mean-field
  approximation (left panel) and in the FRG approach (right panel).
}
\label{fig:c4}
\end{figure*}
\subsection{Quark number density fluctuations}
The fluctuations of conserved charges are observables that provide
direct information on the critical properties related to the chiral
symmetry restoration. Fluctuations related to baryon-number
conservation are of particular interest.
In an equilibrium medium a divergence of the net-quark number
susceptibility is direct evidence for the existence of the
CEP~\cite{Stephanov:1999zu}. Consequently, a non-monotonic dependence of
these fluctuations on the collision energy in heavy-ion collisions was
proposed as a phenomenological signature of the
CEP~\cite{Stephanov:1999zu}. In a non-equilibrium system, the net-quark
number susceptibility was also shown to signal the first-order chiral
phase transition through spinodal decomposition~\cite{CS}.
The fluctuations of the net-quark number density are
characterized by the generalized susceptibilities,
\begin{equation}
c_n(T)=\frac{\del^n[p\,(T,\mu)/T^4]}{\del(\mu/T)^n}.
\end{equation}
The first coefficient, $c_1=n_q/T^3$,
characterizes the response of the net-quark number density to changes
in the quark chemical potential. The second coefficient
\begin{equation}
c_2 = {\frac{\chi_q} {T^2}}= \frac{1}{V T^3} \langle(\delta N_q)^2\rangle,
\end{equation}
with $\delta N_q= N_q-\langle N_q\rangle$, is proportional to the susceptibility of the net-quark number density, $\chi_q$.
The third- and fourth-order moments can be expressed through $\delta N_q$ as
\begin{eqnarray}
c_3 &=& \frac{1}{V T^3} \langle(\delta N_q)^3\rangle, \\
c_4 &=& \frac{1}{V T^3} (\langle(\delta N_q)^4\rangle-3\langle(\delta N_q)^2\rangle^2) \label{fluctuations}.
\end{eqnarray}
The coefficients $c_n(T)$ are sensitive probes of the chiral phase
transition. They indicate the position, the order and, in the case of a
second-order phase transition, the universality class of the corresponding
transition.
The net-quark number density, $n_q$, is discontinuous at the first-order
transition, whereas the susceptibility $c_2$ and higher cumulants
diverge at the
CEP~\cite{Stephanov:2007fk,Stephanov:2008qz,Athanasiou:2010kw}.
In the chiral limit and at non-zero chemical potential, all generalized susceptibilities
$c_n(T)$ with $n>2$ diverge at the $O(4)$ chiral critical
line~\cite{Ejiri:2005wq}.
Moreover, they also diverge at the spinodal lines of the first-order phase transition~\cite{CS}.
A very particular role is attributed to the so-called kurtosis of the
net-quark number fluctuations~\cite{Ejiri:2005wq,F1,kurtosis} which is
defined as the ratio
\begin{equation}\label{eq:ratio_c42}
R_{4,2}=\frac{c_4}{c_2}.
\end{equation}
This key observable is not only sensitive to the chiral but also to
the deconfinement transition. At vanishing chemical potential, in the
asymptotic regimes of high and low temperatures, the kurtosis reflects the
quark content of the baryon-number carrying effective degrees of
freedom~\cite{Ejiri:2005wq,kurtosis}. Therefore, at low temperatures in
the confined phase, $R_{4,2}\simeq N_q^2=9$, while at high temperatures
one recovers an ideal gas of quarks with $R_{4,2}\sim 1$~\footnote{More
precisely, this number is $6/\pi^2$ due to quantum statistics.}.
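Both limiting values follow directly from the corresponding pressures;
the confined-phase estimate below is a schematic Boltzmann-gas argument.
Modeling the confined phase by a dilute gas of particles carrying quark
number $\pm3$, $p\sim\cosh(3\mu/T)$, one has $c_{n}\propto3^{n}$ and hence
\begin{equation*}
R_{4,2}=\frac{c_4}{c_2}=3^2=9,
\end{equation*}
whereas for a massless quark gas
\begin{equation*}
\frac{p}{T^4}=N_cN_f\left[\frac{7\pi^2}{180}+\frac{1}{6}\left(\frac{\mu}{T}\right)^2
+\frac{1}{12\pi^2}\left(\frac{\mu}{T}\right)^4\right],
\end{equation*}
so that at $\mu=0$
\begin{equation*}
R_{4,2}=\frac{2N_cN_f/\pi^2}{N_cN_f/3}=\frac{6}{\pi^2}\approx0.61.
\end{equation*}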
\begin{figure*}
\includegraphics*[width=7.5cm]{R_mf.eps}
\includegraphics*[width=7.3cm]{R_frg.eps}
\caption{ The kurtosis $R_{4,2}$ as a function of temperature for
  different $\mu/T$ calculated in the PQM model under the
  mean-field approximation (left panel) and in the FRG approach
  (right panel). }
\label{fig:r}
\end{figure*}
The properties of different moments of the net-quark number
fluctuations in the presence of the chiral phase transition were
studied in the literature both in LQCD~\cite{lgt2} and
in different models \cite{SFR,Fu:2009wy,Schaefer:2009ui,Skokov:2010wb,MFonVT,Karsch:2010ck}.
In particular, the
importance of the quark susceptibility and the kurtosis as signatures
of the deconfinement and the chiral phase transition as well as the
CEP was discussed~\cite{kurtosis}. The influence and dependence of
these fluctuations on the quark mass was also analyzed on the lattice
and in effective
models~\cite{SFR,Fu:2009wy,Schaefer:2009ui,Skokov:2010wb,MFonVT}.
However, little is known about the chemical-potential dependence of the
higher cumulants $c_n$, particularly for $n>2$. Such dependence can be
obtained in the PQM model from the thermodynamic potentials introduced in
Eqs.~(\ref{omega_final}) and~(\ref{Omega_MF}). In
Figs.~\ref{fig:c1}--\ref{fig:c4} we quantify the first four moments
obtained in the PQM model under the mean-field approximation and in the
FRG approach for different values of the ratio $\mu/T$. The lines of fixed
ratio $\mu/T$ are also indicated in the phase diagram in Fig.~\ref{fig:PD}.
Fig.~\ref{fig:c1} shows the net-quark number density normalized by
$T^3$, $c_1=n_q/T^3$, for different values of $\mu/T$. The cumulant
$c_1$, as well as all odd cumulants $c_{2n+1}$ with $n=1, 2, 3, \ldots$,
vanishes at zero chemical potential, $\mu=0$. At finite $\mu$
and in the chirally broken phase the net-quark number density $n_q$ is a
strongly increasing function of $\mu/T$. In the low-temperature phase,
owing to the large dynamical quark mass, $n_q$ increases approximately as
$\sinh(3\mu/T)$. This is a direct consequence of the ``statistical
confinement'' property of the PQM model, which for small expectation
values of the Polyakov loops, $\ell\ll1$, suppresses the single- and
double-quark modes in the partition function, as seen from
Eqs.~(\ref{n1}) and~(\ref{g}). At high temperatures and/or chemical
potentials $n_q$ behaves as a polynomial in $\mu/T$. For a fixed
$\mu/T$, in the pseudocritical region where the chiral symmetry is
partially restored, there is a clear change in the temperature dependence
of $n_q$. At the pseudocritical temperature the density exhibits a
kink-like structure which is particularly evident in the mean-field
calculations at larger $\mu/T$. In the FRG calculations at the
corresponding $\mu/T$ this kink-like structure in the density $n_q$ is
suppressed. Thus, the meson fluctuations quantified by the FRG method
smooth the net-quark density near the crossover transition.
\begin{figure*}
\includegraphics*[width=7.5cm]{R_mf_IS.eps}
\includegraphics*[width=7.3cm]{R_IS.eps}
\caption{ The temperature dependence of the kurtosis $R_{4,2}$
  calculated in the PQM model at fixed values of the ratio of entropy
  density to net-quark density ($s/n_q$) under the mean-field
  approximation (left panel) and in the FRG approach (right panel). }
\label{fig:ris}
\end{figure*}
The appearance of the crossover chiral transition in the PQM model is
also transparent when considering the structure of isentropic
trajectories. Because of entropy and baryon-number conservation, such
trajectories correspond to contours of constant entropy density
per net-quark number density, $s/n_q$, in the
temperature--chemical potential plane. Fig.~\ref{fig:PD} shows
isentropes for different $s/n_q$ in the PQM model obtained under the
mean-field approximation and in the FRG calculations. There is a clear
change of the slopes of the isentropes along the transition line,
indicating a change of the equation of state. The qualitative behavior of
contours of constant $s/n_q$ obtained
in the PQM model is similar to that calculated previously within the
QM model~\cite{Nakano:2009ps}. In particular, the isentropes remain smooth when
the effect of long-wavelength meson fluctuations is consistently
included in the presence of the gluon background. There are also no
indications of any focusing towards the CEP in the PQM model.
The influence of the finite chemical potential on $c_2$ (which is
proportional to the net-quark number susceptibility $c_2=\chi_q/T^2$)
is shown in Fig.~\ref{fig:c2} for the mean-field approximation and the
FRG approach. At vanishing chemical potential the cumulant $c_2$
increases monotonically with temperature.
However, at finite chemical potential, the susceptibility
$c_2$ develops a peak structure. The amplitude of this peak increases
with the chemical potential towards the CEP, where $c_2$ diverges. In the
high temperature/chemical potential phase $c_2$ converges to the
Stefan--Boltzmann limit
\begin{eqnarray}
c_{2}^{ SB} &=& \frac{N_cN_f}{3} \left[ 1 + \frac{3 }{\pi^2} \left( \frac{\mu}{T}\right)^2 \right].
\label{SB}
\end{eqnarray}
As seen in Fig.~\ref{fig:c2}, the peak structure in $c_2$ is more
pronounced in the mean-field approximation at $\mu/T=1$ than in the FRG
approach at $\mu/T=1.5$. This is in spite of the fact that the location of
the CEP in the FRG approach is closer to the line $\mu/T=1.5$ than the
mean-field CEP is to the line $\mu/T=1$ (see Fig.~\ref{fig:PD}).
This shows that the criticality of $c_2$ as a function of the
distance to the CEP appears earlier in the mean-field approximation
than in the FRG approach. This result is in agreement with the previous studies
in the QM model showing that the critical region shrinks due to
mesonic fluctuations~\cite{Schaefer:2006ds}.
The cumulants $c_1$ and $c_2$ are sensitive to
changes in the chemical potential and are influenced by the meson
fluctuations and the gluon background. However, both coefficients are
positive for all values of $\mu$ and $T$. At finite chemical potential
this positivity is not preserved for the moments $c_n$ with $n>2$.
Figs.~\ref{fig:c3} and~\ref{fig:c4} show the third- and fourth-order
cumulants of the net-quark number density for different values of
$\mu/T$. For large $\mu/T$ both susceptibilities exhibit a peculiar
structure in the transition region. There is a minimum of $c_3$ which can
reach negative values. The amplitude of this minimum is strongly
suppressed by meson fluctuations in the FRG approach.
The fourth-order cumulant is strictly positive at vanishing chemical
potential. However, for larger values of $\mu/T$, $c_4$ becomes negative
in the vicinity of the crossover transition. This reflects a broadening
of the net-quark number distribution in comparison with the Gaussian one.
Large values of $c_4$ in the broken phase indicate that the distribution
becomes narrower than the Gaussian. The chemical-potential-independent
Stefan--Boltzmann limit $c_{4}^{SB} ={2 N_cN_f}/{\pi^2}$ is reproduced at
temperatures $T\gg T_c$.
\begin{figure*}
\includegraphics*[width=7.5cm]{c3c1_mf.eps}
\includegraphics*[width=7.2cm]{c3_c1_frg.eps}
\caption {The ratio of $c_3$ to $c_1$ as a function of temperature
  for different values of $\mu/T$ obtained in the PQM model under
  the mean-field approximation (left panel) and in the FRG approach
  (right panel). }
\label{fig:c3c1}
\end{figure*}
Comparing the mean-field with the FRG results for both $c_3$ and $c_4$,
we conclude that mesonic fluctuations essentially modify the properties
of the generalized quark susceptibilities. In the transition region $c_3$
and $c_4$ are suppressed in the FRG approach relative to the mean-field
results. Thus, the mean-field approach is only an approximate method for
describing the static thermodynamics near the chiral phase transition.
We have already indicated that the ratios of different $c_n$ are
sensitive to the deconfinement and the chiral phase transitions.
Figs.~\ref{fig:r} and~\ref{fig:ris} show the kurtosis
$R_{4,2}=c_4/c_2$ calculated as a function of temperature along
different paths in the temperature--chemical potential plane quantified by fixed $\mu/T$
and $s/n_q$. In the PQM model the kurtosis drops from $R_{4,2}\simeq 9$ to
$R_{4,2} \sim 1$ in the transition region. Such a change is explicitly
attributed to the change in the quark content of the baryon-number
carrying effective degrees of freedom~\cite{Ejiri:2005wq}. In the PQM
model, effective three-quark states dominate in the chirally broken
phase, while at high temperatures single quarks prevail. In the
mean-field approximation the kurtosis exhibits a peak at the transition
temperature at $\mu=0$. The height of this peak depends not only on
the pion mass~\cite{kurtosis,karsch,Skokov:2010wb,MFonVT} but also
on the value of the chemical potential. The mesonic
fluctuations weaken the peak structure both at finite and at vanishing
quark density. At finite chemical potential the kurtosis becomes negative,
following the same trends as seen in the fourth-order cumulant.
Direct information on the quark content of the baryon-number carrying
effective degrees of freedom in the low-temperature phase is also
contained in the ratio $c_3/c_1$. At low temperatures, where the
thermodynamics is dominated by three-quark modes, $c_3/c_1=9$ for any
value of the chemical potential. At low temperatures and zero chemical
potential, $c_1$ and $c_3$ vanish but their ratio remains finite. At
asymptotically large temperatures and for $\mu\to 0$ this ratio diverges.
The ratio $c_3/c_1$, similarly to the kurtosis $R_{4,2}=c_4/c_2$,
exhibits strong variations in the phase-transition region.
It develops a peak whose height increases with $\mu/T$, and for
sufficiently large $\mu/T$ it also develops a dip structure. This is the
case for both the mean-field and the FRG calculations; however, the
variations at corresponding $\mu/T$ in the FRG approach are strongly
suppressed because of meson fluctuations.
\subsection{Scaling properties of generalized susceptibilities at
finite chemical potential}
The general trends in the behavior of the different susceptibilities
calculated in the last section can be understood by considering their
scaling properties near the chiral transition. Under the mean-field
approximation such scaling can be inferred from the Landau theory of phase
transitions, where the singular part of the thermodynamic potential is a
polynomial in the order parameter $\sigma$,
\begin{eqnarray}
\nonumber
\Omega^{sing}(T,\mu;\sigma) &=& \frac12 a(T,\mu) \sigma^2 + \frac14
b(T,\mu) \sigma^4\\ &+& \frac16 c \sigma^6 - h \sigma,
\label{LG}
\end{eqnarray}
with $h$ being a symmetry-breaking term.
For chemical potentials much smaller than the critical temperature
$T_c(\mu=0)$ of the second-order chiral phase transition, the mass term
$a(T,\mu)$ is parameterized as
\begin{equation}
a(T,\mu) = A\cdot(T-T_c) + B_2\mu^2,
\label{a}
\end{equation}
where both coefficients $A$ and $B_2$ are positive. In general, the
effective quartic coupling $b>0$ is also $T$- and $\mu$-dependent.
However, this dependence is irrelevant near the critical
line $a(T,\mu)=0$ and away from the CEP or the tricritical point
(TCP). Therefore, in this case we can also neglect the
higher-order polynomial contribution by setting $c=0$. In the chiral
limit, $h=0$, the critical properties of $c_2$ and $c_4$ are
obtained from Eq.~(\ref{LG}):
\begin{eqnarray}
c_2^{sing} &=& \frac{A B_2}{b T^2} \left( T - T_c\right) \theta( T_c -T ), \\
c_4^{sing} &=& \frac{6 B_2^2 }{b} \theta( T_c -T ).
\label{c2c4LG}
\end{eqnarray}
Thus, $c_2$ is not differentiable at the critical temperature, while
$c_4$ has a discontinuity.
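These expressions follow directly from Eq.~(\ref{LG}): for $h=0$, $c=0$
and $a<0$, the potential is minimized at $\bar\sigma^2=-a/b$, so that the
singular part of the pressure is
\begin{equation*}
p^{sing}=-\Omega^{sing}(T,\mu;\bar\sigma)=\frac{a^2}{4b}\,\theta(-a),
\end{equation*}
and with $a=A\cdot(T-T_c)+B_2\mu^2$ one finds, for $\mu\to0$,
\begin{equation*}
\frac{\partial^2 p^{sing}}{\partial\mu^2}=\frac{B_2}{b}\left(2B_2\mu^2+a\right)
\longrightarrow\frac{AB_2}{b}\,(T-T_c)\,,\qquad
\frac{\partial^4 p^{sing}}{\partial\mu^4}=\frac{6B_2^2}{b}\,,
\end{equation*}
which reproduces Eq.~(\ref{c2c4LG}) after dividing by the appropriate
powers of $T$.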
For a finite quark mass, i.e. for $h\neq0$, the second-order
transition turns into a crossover and the sharp structures in
$c_2$ and $c_4$ are smoothed. Therefore, in the PQM model the peak
structure appearing in $c_4$ is directly linked to the quark mass
and closely connected with the partial restoration of the chiral
symmetry.
From Eq.~(\ref{c2c4LG}) it is clear that the kurtosis $R_{4,2}$,
driven by $c_4$, has a discontinuity at $T_c$.
In the FRG approach the critical behavior of the generalized
susceptibilities obtained under the mean-field approximation is
modified by long-wavelength meson fluctuations. Detailed studies in the
QM model showed that the FRG method correctly accounts for
long-range correlations, resulting in the O(4) critical behavior of
thermodynamic functions~\cite{SFR}. The gluon background does
not modify the critical dynamics related to the chiral symmetry.
Therefore, in the chiral limit the PQM model belongs to the O(4)
universality class. The singular part of the thermodynamic
potential is controlled by the critical exponents of the
three-dimensional O(4)-symmetric spin system. At vanishing chemical
potential
\begin{eqnarray}
\Omega^{sing} &\sim& (T-T_c)^{2-\alpha},
\end{eqnarray}
leading to the following scaling of generalized susceptibilities
\begin{eqnarray}
c_{2n}^{sing} &\sim& (T-T_c)^{2-n-\alpha}, \\
c_{2n+1}&=&0
\label{scaling}
\end{eqnarray}
for $n=1,2,3,\ldots$.
In the mean-field approach the critical exponent is $\alpha=0$. The
quantum fluctuations included within the FRG renormalize this exponent to
$\alpha\simeq-0.21$, as expected in the O(4) universality class. Therefore,
fluctuations lead to a weakening of the singularities. The finite quark
mass further smooths the temperature dependence of $c_2$, $c_3$ and $c_4$,
as seen in Figs.~\ref{fig:c2},~\ref{fig:c3} and~\ref{fig:c4}. The
kurtosis follows the singular behavior of $c_4$ and for $\mu=0$ exhibits a
step-like structure at $T_c$.
At finite chemical potential, but still away from the CEP or
the tricritical point (TCP), the coefficient $a(T,\mu)$ in the Landau
potential can be parameterized as
\begin{equation}
a(T,\mu) = A\cdot(T-T_c) + B_1\cdot(\mu-\mu_c)
\label{a_fmu}
\end{equation}
while the quartic coupling remains $b>0$ and we still keep $c=0$.
In this case one obtains the following expressions for the susceptibilities:
\begin{eqnarray}
\label{c2LG_fmu}
c_1^{sing} &=& \frac{B_1 a }{2 T^3 b} \theta(-a), \\
c_2^{sing} &=& \frac{B_1^2}{2b T^2} \theta( -a ), \\
c_3^{sing} &=& c_4^{sing}= 0 .
\label{c4LG_fmu}
\end{eqnarray}
Thus, $c_2$ exhibits a discontinuity, while the singular part of $c_4$
vanishes along with $c_3$ and the higher-order cumulants. However,
there is a next-to-leading-order contribution to $a(T,\mu)$ owing to
the non-vanishing curvature of the transition line in the
$(T,\mu)$-plane. In order to take it into account one needs to add an
extra contribution $B_2\cdot(\mu-\mu_c)^2$ to the right-hand side of
Eq.~(\ref{a_fmu}). In this case $c_4^{sing}$ behaves as in
Eq.~(\ref{c2c4LG}), while the leading contributions to
$c_1^{sing}$ and $c_2^{sing}$ have the structure of Eq.~(\ref{c2LG_fmu})
but with the modified $a(T,\mu)$. The third-order cumulant also develops
nonzero values in the broken phase, $c_3^{sing}=\frac{3 B_2 B_1}{b T }\theta(-a)$.
As in the case of $\mu=0$, the kurtosis has a step-like behavior,
but here the curvature of the phase boundary controls whether the
kurtosis increases or decreases across the phase transition, because
of the two competing step structures in $c_2^{sing}$ and $c_4^{sing}$.
When quantum fluctuations are included, as in the FRG approach, the
above mean-field scaling is modified to the following O(4) relations:
\begin{eqnarray}
\Omega^{sing} &\sim& ( -a )^{2-\alpha} \\
c_{n }^{sing} &\sim& ( -a )^{2-n-\alpha}
\label{scaling_fmu}
\end{eqnarray}
resulting in a divergence of $c_3$, $c_4$ and all higher-order
cumulants along the O(4) critical line, with the specific-heat critical
exponent $\alpha\simeq -0.21$. The kurtosis clearly diverges at
$T_c=T(\mu_c)$, following the singularity of the $c_4^{sing}$ cumulant.
The above scaling properties of the net-quark number fluctuations at
finite chemical potential are modified when approaching the TCP in the
chiral limit or the CEP at finite quark mass.
Close to the TCP the effective Landau potential has the structure of
Eq.~(\ref{LG}), where the parameter $a(T,\mu)$ depends linearly on the
reduced temperature and chemical potential
\begin{equation}
a(T,\mu) = A_a\cdot(T-T_c) + B_a\cdot (\mu-\mu_c).
\label{aTCP}
\end{equation}
The quartic coupling tends to zero as
\begin{equation}
b(T,\mu)=A_b\cdot (T-T_c) + B_b\cdot (\mu-\mu_c).
\label{bTCP}
\end{equation}
The sixth-order coupling $c>0$, as required by the stability of
the theory. Consequently, within Landau's theory and with the above
parametrization of the potential coefficients, one obtains the following
relations for the leading contributions to the generalized susceptibilities,
\begin{eqnarray}
c_1^{sing} &=& \frac{B_a}{2 T^3} \sqrt{\frac{-a}{c}} \theta(-a), \\
c_n^{sing} &=& \frac{\Gamma(n-\frac32)}{2 \Gamma(\frac12)} \frac{(4c)^{n-2} B_a^n T^{n-4}}{(b^2-4ac)^{n-3/2}}, \ \ n>1,
\label{c2c4LGTCP}
\end{eqnarray}
resulting in a divergent kurtosis at the TCP, $R_{4,2}^{sing} \sim
(b^2 - 4ac)^{-2}$.
From Eq.~(\ref{c2c4LGTCP}) one concludes that $c_2^{sing}$ is
inversely proportional to the distance from the TCP along the
$a(T,\mu)=0$ line. When approaching the TCP from any other direction,
non-tangential to the critical line, one has $b^2\ll a$ and
$c_2^{sing}$ is inversely proportional to the square root of the
distance to the TCP~\cite{CS,Hatta:2002sj}. This demonstrates that the
critical region, defined by the properties of the $c_n$ with $n>1$, is
elongated along the O(4) critical line.
The thermodynamic properties and the critical behavior near the TCP are
well described by the mean-field theory up to logarithmic corrections,
because in this case the upper critical dimension is three.
For a non-zero external field (non-zero quark mass) the tricritical
point turns into the CEP. Following the same mean-field analysis (see
also Ref.~\cite{Hatta:2002sj}), we obtain the following leading singular
behavior along the linear continuation of the phase-coexistence line into
the crossover region, near the CEP:
\begin{eqnarray}
c_{n}^{sing}&\sim& |v|^{2-\frac32n}.
\label{c2c4LGCEP1}
\end{eqnarray}
For any other direction that is not asymptotically equivalent to
the previous one,
\begin{eqnarray}
c_{n}^{sing}&\sim& |u|^{\frac43-n},
\label{c2c4LGCEP2}
\end{eqnarray}
where the variables $v$ and $u$ are linear combinations of
$(T-T_{CEP})$ and $(\mu-\mu_{CEP})$. For the scaling (\ref{c2c4LGCEP1})
the kurtosis diverges as $R_{4,2}^{sing} \sim |v|^{-3}$, while for the
scaling (\ref{c2c4LGCEP2}) it diverges as $R_{4,2}^{sing} \sim |u|^{-2}$.
When going beyond the mean-field approximation by including quantum
and thermal fluctuations in the PQM model one expects that along the
$u=0$ line the following scaling holds (see also~\cite{Stephanov:2008qz}):
\begin{eqnarray}
c_n^{sing} &\sim& |v|^{-[(n-2)(\gamma+\beta)+\gamma]},
\label{c2c4LGCEP_sc_a}
\end{eqnarray}
and for any other direction
\begin{eqnarray}
c_n^{sing} &\sim& |u|^{-\left(n- 2 + \frac{\gamma}{\gamma+\beta} \right)}.
\label{c2c4LGCEP_sc_b}
\end{eqnarray}
Here, the critical exponents $\gamma$ and $\beta$ correspond to the
three-dimensional spin model belonging to the $Z(2)$ universality
class~\footnote{In the Z(2) universality class
$\alpha\approx0.125$, $\beta=0.312$ and $\gamma=1.25$.}. The
kurtosis is divergent at the CEP; however, the strength of the
singularity depends on the direction. Approaching the CEP along the
$u=0$ line results in
\begin{eqnarray}
R_{4,2}^{sing}&\sim& |v|^{ -2(\gamma+\beta) } = |v|^{ -2 - \gamma+\alpha } ,
\label{c2c4LGCEP_sc_R}
\end{eqnarray}
whereas for any other direction
\begin{eqnarray}
R_{4,2}^{sing} &\sim& |u|^{ -2 }.
\label{c2c4LGCEP_sc_Ru}
\end{eqnarray}
The properties of the net-quark number density fluctuations and their
higher moments obtained in the PQM model at finite chemical potential
and physical pion mass can be qualitatively understood as remnants of the
critical structure and scaling behavior related to the chiral
symmetry restoration in the limit of massless quarks.
\section{Summary and Conclusions}\label{sec:concl}
We have formulated thermodynamics of the Polyakov loop extended
quark--meson effective chiral model (PQM), including quantum
fluctuations within the functional renormalization group method
(FRG). We have solved the flow equation for the scale dependent
thermodynamic potential at finite temperature and density in the
presence of a background gluonic field.
We have shown that the non-perturbative dynamics introduced by the FRG
approach essentially modifies predictions of the model derived under
the mean-field approximation. In particular, we have demonstrated
quantitative changes of the phase diagram leading to a shift in the
position of the critical end point.
We have focused on fluctuations of the net-quark number density and
calculated the first four moments near the chiral transition for
different values of the chemical potential. We have indicated the role
and importance of ratios of different cumulant moments to identify
the deconfinement and chiral phase transitions. We have also discussed
predictions of the Landau and scaling theories on the
critical behavior of the net-quark number density fluctuations and
their higher moments in the vicinity of the chiral phase
transition.
The extension of the FRG method proposed here, which accounts for the
coupling of fermions to the background gluon field within the
quark--meson model, is relevant for an effective understanding of
QCD thermodynamics near the chiral phase transition.
\section*{Acknowledgments}
V.~Skokov acknowledges the support by the Frankfurt Institute for
Advanced Studies (FIAS). V.~Skokov thanks M.~Stephanov for valuable
discussions. K. Redlich acknowledges partial support from the Polish
Ministry of Science.
\section{Introduction: The General Concept of Superiorization}
Given an algorithmic operator $\mathcal{A}:X\rightarrow X$ on a Hilbert
space $X$, consider the iterative process
\begin{equation}
x^{0}\in X,\;x^{k+1}=\mathcal{A}\left(x^{k}\right),\;\mathrm{for}\,\mathrm{all}\;k\geqslant0,\label{eq:iter-proc-basic}
\end{equation}
and let $SOL\left(P\right)$ denote the solution set of some problem
$P$ of any kind. The iterative process is said to solve $P$ if,
under some reasonable conditions, any sequence $\left\{ x^{k}\right\} _{k=0}^{\infty}$
generated by the process converges to some $x^{*}\in SOL\left(P\right)$.
An iterative process (\ref{eq:iter-proc-basic}) that solves $P$
is called perturbation resilient if the process
\begin{equation}
y^{0}\in X,\;y^{k+1}=\mathcal{A}\left(y^{k}+v^{k}\right),\;\mathrm{for}\,\mathrm{all}\;k\geqslant0,\label{eq:iter-proc-basic-1}
\end{equation}
also solves $P$, under some reasonable conditions on the sequence
of perturbation vectors $\left\{ v^{k}\right\} _{k=0}^{\infty}\subseteq X$.
The iterative processes of (\ref{eq:iter-proc-basic}) and (\ref{eq:iter-proc-basic-1})
are called ``the basic algorithm'' and ``the superiorized version
of the basic algorithm'', respectively.
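As a toy illustration of these definitions (ours, not taken from the cited
literature), let $X=R$, let $P$ be the problem of finding a point in the
interval $[1,2]$, let $\mathcal{A}$ be the metric projection onto $[1,2]$,
and perturb with summable steps along a nonascent direction of $\phi(x)=x$:
\begin{verbatim}
import numpy as np

def A(x):
    """Basic algorithm: metric projection onto the feasible set [1, 2]."""
    return np.clip(x, 1.0, 2.0)

x = y = 5.0
for k in range(60):
    x = A(x)             # basic algorithm
    y = A(y - 0.5**k)    # superiorized version: v^k = -0.5^k is summable
                         # and points along a nonascent direction of phi
print(x, y)              # -> 2.0 and (essentially) 1.0
\end{verbatim}
Both sequences converge to points of $SOL\left(P\right)=[1,2]$, but the
superiorized iterates end at the point with the smaller value of $\phi$.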
Superiorization aims at identifying perturbation-resilient iterative
processes that allow one to use the perturbations in order to steer
the iterates of the superiorized algorithm so that, while retaining
the original property of converging to a point in $SOL\left(P\right)$,
they also do something additional that is useful for the original problem
$P$, such as converging to a point with reduced values of some given
objective function. These concepts are rigorously defined in several
recent works in the field; we refer the reader to the recent reviews
\cite{h14super}, \cite{censor15weak} and references therein. More
material about the current state of superiorization can also be found
in \cite{linsup16}, \cite{hgdc12} and \cite{reem15new}.
A special case of prime importance and significance is that in which
$P$ is a convex feasibility problem (CFP) of the form: find
a vector $x^{*}\in\cap_{i=1}^{I}C_{i}$, where the sets $C_{i}\subseteq R^{J}$,
with $R^{J}$ the $J$-dimensional Euclidean space, are closed and convex, and
the perturbations in the superiorized version of the basic algorithm
are designed to reduce the value of a given objective function $\phi$.
In this case the basic algorithm (\ref{eq:iter-proc-basic}) can be
any of the wide variety of feasibility-seeking algorithms, see, e.g.,
\cite{bb96}, \cite{cccdh10} and \cite{annotated}, and the perturbations
employ nonascent directions of $\phi$. Much work has been done on
this as can be seen in the Internet bibliography at \cite{bib-page}.
The usefulness of this approach is twofold. First, feasibility-seeking
is, on a logical basis, a less demanding task than seeking a constrained
minimization point in a feasible set. Therefore, letting efficient
feasibility-seeking algorithms ``lead'' the algorithmic effort and
modifying them with inexpensive add-ons works well in practice.
Second, in some real-world applications the choice of an objective
function is exogenous to the modeling and data acquisition which give
rise to the constraints. Thus, the limited confidence in the usefulness
of a chosen objective function sometimes leads to the recognition that,
from the point of view of the application at hand, there is neither a
need nor a justification to search for an exact constrained minimum.
For obtaining ``good results'', evaluated by how well they serve
the task of the application at hand, it is often enough to find a
feasible point that has a reduced (not necessarily minimal) objective
function value\footnote{Some support for this reasoning may be borrowed
from the American scientist and Nobel laureate Herbert Simon, who was in
favor of ``satisficing'' rather than ``maximizing''. Satisficing is a
decision-making strategy that aims for a satisfactory or adequate result,
rather than the optimal solution, because aiming for the optimal solution
may necessitate needless expenditure of time, energy and resources. The
term ``satisfice'' was coined by Herbert Simon in 1956
\cite{simon1956satisficing}; see https://en.wikipedia.org/wiki/Satisficing.}.
\section{Linear Superiorization}
\subsection{The problem and the algorithm}
Let the feasible set $M$ be
\begin{equation}
M:=\{x\in R^{J}\mid Ax\leq b,\text{ }x\geq0\}\label{eq:linear-feas-prob}
\end{equation}
where the $I\times J$ real matrix $A=(a_{j}^{i})_{i=1,j=1}^{I,J}$
and the vector $b=(b_{i})_{i=1}^{I}\in R^{I}$ are given.
For a basic algorithm we pick a feasibility-seeking projection method.
Here projection methods refer to iterative algorithms that use projections
onto sets while relying on the general principle that when a family
of, usually closed and convex, sets is present, then projections onto
the individual sets are easier to perform than projections onto other
sets (intersections, image sets under some transformation, etc.) that
are derived from the individual sets.
Projection methods may have different algorithmic structures, such
as block-iterative projections (BIP) or string-averaging projections
(SAP) (see, e.g., the review paper \cite{censor-segal2008} and references
therein) of which some are particularly suitable for parallel computing,
and they demonstrate nice convergence properties and/or good initial
behavior patterns.
This class of algorithms has witnessed great progress in recent years
and its member algorithms have been applied with success to many scientific,
technological and mathematical problems. See, e.g., the 1996 review
\cite{bb96}, the recent annotated bibliography of books and reviews
\cite{annotated} and its references, the excellent book \cite{CEG12},
or \cite{cccdh10}.
An important comment is in order here. A CFP can be translated into
an unconstrained minimization of some proximity function that measures
the feasibility violation of points. For example, using a weighted
sum of squares of the Euclidean distances to the sets of the CFP as
a proximity function and applying steepest descent to it results in
a simultaneous projection method of the Cimmino type for the CFP.
However, there is no proximity function that would yield the sequential
projection method of the Kaczmarz type for CFPs; see \cite{baillon2012}.
Therefore, the study of feasibility-seeking algorithms for the CFP
has developed independently of minimization methods, and it still
vigorously does; see the references mentioned above. Over the years
researchers have tried to harness projection methods for the convex
feasibility problem to LP in more than one way; see, e.g., Chinneck's
book \cite{C08}. The mini-review of relations between linear programming
and feasibility-seeking algorithms in \cite[Section 1]{nurminski2015single}
sheds more light on this. Our work in \cite{linsup16} and here leads us to
study whether LinSup can be useful for either feasible or infeasible LP
problems.
The objective function for linear superiorization will be
\begin{equation}
\phi(x):=\left\langle c,x\right\rangle \label{eq:lin-objective}
\end{equation}
where $\left\langle c,x\right\rangle $ is the inner product of $x$
and a given $c\in R^{J}.$
In the footsteps of the general principles of the superiorization
methodology, as presented for general objective functions $\phi$
in previous publications, we use the following linear superiorization
(LinSup) algorithm. The algorithm and its implementation details closely
follow those of \cite{linsup16}, wherein only feasible constraints were
discussed.
The input to the algorithm consists of the problem data $A,$ $b$
and $c$ of (\ref{eq:linear-feas-prob}) and (\ref{eq:lin-objective}),
respectively, a user-chosen initialization point $\bar{y}$, and a
user-chosen parameter (called here the kernel) $0<\alpha<1$ with which
the algorithm generates the step-sizes $\beta_{k,n}$ from the powers
of the kernel, $\eta_{\ell}=\alpha^{\ell}$, as well as an integer
$N$ that determines the number of objective function reduction
perturbation steps performed per feasibility-seeking iterative sweep
through all linear constraints. The perturbation direction $-\frac{{\displaystyle c}}{{\displaystyle \left\Vert c\right\Vert _{2}}}$
used in step 10 of Algorithm 1 is a nonascent direction of the linear
objective function, as required by the general principles of the superiorization
methodology; see, e.g., \cite[Subsection II.D]{hgdc12}.
\begin{algorithm}[H]
\label{alg_super}\textbf{Algorithm 1. The Linear Superiorization
(LinSup) Algorithm}
\end{algorithm}
\begin{enumerate}
\item \textbf{set} $k=0$
\item \textbf{set} $y^{k}=\bar{y}$
\item \textbf{set }$\ell_{-1}=0$
\item \textbf{while }stopping rule not met\textbf{ do}
\item $\qquad$\textbf{set} $n=0$
\item \ \ \ \ \ \ \textbf{set} $\ell=rand(k,\ell_{k-1})$
\item $\qquad$\textbf{set} $y^{k,n}=y^{k}$
\item $\qquad$\textbf{while }$n$\textbf{$<$}$N$ \textbf{do}
\item $\qquad\qquad$\textbf{set} $\beta_{k,n}=\eta_{\ell}$
\item $\qquad\qquad$\textbf{set} $z=y^{k,n}-\beta_{k,n}\frac{{\displaystyle c}}{{\displaystyle \left\Vert c\right\Vert _{2}}}$
\item $\qquad\qquad$\textbf{set }$n\leftarrow n+1$
\item $\qquad\qquad$\textbf{set }$y^{k,n}$\textbf{$=$}$z$
\item \ \ \ \ \ \ \ \ \ \ \ \ \textbf{set }$\ell\leftarrow\ell+1$
\item \ \ \ \ \ \ \textbf{end while}
\item \ \ \ \ \ \ \textbf{set }$\ell_{k}=\ell$
\item \qquad{}\textbf{set }$y^{k+1}$\textbf{$=$}$\mathcal{A}\left(y^{k,N}\right)$
\item \qquad{}\textbf{set }$k\leftarrow k+1$
\item \textbf{end while}
\end{enumerate}
All quantities in this algorithm are detailed and explained below,
except for the choice of the basic algorithm for the feasibility-seeking
operator represented by $\mathcal{A}$ in step 16 of Algorithm \ref{alg_super},
which appears in the next subsection.
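For concreteness, a compact sketch of Algorithm 1 (Python; ours). It
replaces the stopping rule of step 4 by a fixed number of sweeps and takes
the feasibility-seeking operator $\mathcal{A}$ as an argument; a Cimmino
sweep, as described in the next subsection, can be passed in:
\begin{verbatim}
import numpy as np

def linsup(A_op, c, y0, alpha=0.99, N=20, sweeps=1000):
    """Sketch of the LinSup algorithm (Algorithm 1)."""
    y, ell_prev = y0.astype(float), 0
    d = -c/np.linalg.norm(c)           # nonascent direction of <c, x>
    for k in range(sweeps):
        # step 6: ell = rand(k, ell_{k-1}); min/max guards the first sweep
        ell = np.random.randint(min(k, ell_prev), max(k, ell_prev) + 1)
        for n in range(N):             # steps 8-14: N perturbation steps
            y = y + alpha**ell*d       # beta_{k,n} = eta_ell = alpha^ell
            ell += 1
        ell_prev = ell                 # step 15
        y = A_op(y)                    # step 16: feasibility-seeking step
    return y
\end{verbatim}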
\textbf{Step-sizes of the perturbations}. The step sizes
$\beta_{k,n}$ in Algorithm 1 must satisfy $0<\beta_{k,n}\leq1$
and form a summable sequence, $\sum_{k=0}^{\infty}\sum_{n=0}^{N-1}\beta_{k,n}<\infty$;
see, e.g., \cite{censor-zas2015}. To this end Algorithm 1 assumes
that we have available a summable sequence $\{\eta_{\ell}\}_{\ell=0}^{\infty}$
of positive real numbers generated by $\eta_{\ell}=\alpha^{\ell}$,
where $0<\alpha<1$. Simultaneously with generating the iterative
sequence $\{y^{k}\}_{k=0}^{\infty}$, a subsequence of $\{\eta_{\ell}\}_{\ell=0}^{\infty}$
is used to generate the step sizes $\beta_{k,n}$ in step 9 of Algorithm
1. The number $\alpha$ is called the kernel of the sequence $\{\eta_{\ell}\}_{\ell=0}^{\infty}$.
\textbf{Controlling the decrease of the step-sizes of objective function
reduction}. If during the application of Algorithm 1 the step sizes
$\beta_{k,n}$ decrease too fast then too little leverage is allocated
to the objective function reduction activity that is interlaced into
the feasibility-seeking activity of the basic algorithm. This delicate
balance can be controlled by the choice of the index $\ell$ updates
and separately by the value of $\alpha$ whose powers $\alpha^{\ell}$
determine the step sizes $\beta_{k,n}$ in step 9. In our work we
adopt a strategy for updating the index $\ell$ that was proposed
and implemented for total variation (TV) image reconstruction from
projections by Prommegger and by Langthaler in \cite[page 38 and Table 7.1 on page 49]{prommegger2014}
and in \cite{langthaler2014}, respectively. This strategy advocates
to set $\ell$ at the beginning of every new iteration sweep (steps
5 and 6) to a random number between the current iteration index $k$
and the value of $\ell$ from the last iteration sweep, i.e., $\ell=rand(k,\ell_{k-1})$ as in step 6 of Algorithm 1.
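To make the interplay between the perturbation steps and the feasibility-seeking
operator concrete, the following Python sketch implements Algorithm 1 together
with this $\ell$-update. It is an illustration only (the experiments reported
below were run in MATLAB); the function names are ours, the callable
\texttt{feasibility\_operator} stands in for $\mathcal{A}$ and must be supplied
by the caller (e.g., one Cimmino sweep as in the next subsection), and the
stopping rule is the relative-change criterion used in our experiments.
\begin{verbatim}
import numpy as np

def linsup(c, y_bar, feasibility_operator, N=20, alpha=0.99,
           tol=1e-4, max_sweeps=5000, seed=0):
    """Linear superiorization (Algorithm 1) with the Prommegger-
    Langthaler update ell = rand(k, ell_{k-1}) at each sweep."""
    rng = np.random.default_rng(seed)
    unit_c = c / np.linalg.norm(c)        # perturbation direction -c/||c||_2
    y = np.asarray(y_bar, dtype=float).copy()
    k, ell_prev = 0, 0
    while k < max_sweeps:
        lo, hi = sorted((k, ell_prev))
        ell = int(rng.integers(lo, hi + 1))   # steps 5-6
        z = y.copy()
        for n in range(N):                    # steps 8-14
            beta = alpha ** ell               # step 9: beta_{k,n} = eta_ell
            z = z - beta * unit_c             # step 10: nonascending step
            ell += 1                          # step 13
        ell_prev = ell                        # step 15
        y_next = feasibility_operator(z)      # step 16: operator A
        if np.linalg.norm(y_next - y) <= tol * np.linalg.norm(y_next):
            return y_next                     # stopping rule met
        y, k = y_next, k + 1                  # step 17
    return y
\end{verbatim}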
\textbf{The proximity function}. To measure the feasibility-violation
(or level of disagreement) of a point with respect to the target set
$M$ we used the following proximity function
\begin{equation}
\Pr(x):=\frac{1}{2I}{\displaystyle \sum_{i=1}^{I}}\frac{\left(\left(\left\langle a^{i},x\right\rangle -b_{i}\right)_{+}\right)^{2}}{{\displaystyle \sum\limits _{j=1}^{J}}\left(a_{j}^{i}\right)^{2}}+\frac{1}{2J}{\displaystyle \sum\limits _{j=1}^{J}}\left(\left(-x_{j}\right)_{+}\right)^{2},\label{eq:prox}
\end{equation}
where the plus notation means, for any real number $d,$ that $d_{+}:=\max(d,0).$
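For reference, a direct Python transcription of (\ref{eq:prox}) reads as
follows (a minimal sketch, assuming all rows $a^{i}$ of $A$ are nonzero):
\begin{verbatim}
import numpy as np

def proximity(A, b, x):
    """Pr(x) of Eq. (eq:prox): mean squared normalized violation of the
    I row-inequalities plus mean squared violation of x >= 0."""
    I, J = A.shape
    residual = np.maximum(A @ x - b, 0.0)     # (<a^i, x> - b_i)_+
    row_norms_sq = np.sum(A * A, axis=1)      # sum_j (a^i_j)^2
    ineq_term = np.sum(residual**2 / row_norms_sq) / (2 * I)
    nonneg_term = np.sum(np.maximum(-x, 0.0)**2) / (2 * J)
    return ineq_term + nonneg_term
\end{verbatim}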
\textbf{The number }$N$\textbf{ of perturbation steps}. The number
$N$ of perturbation steps performed prior to each application
of the feasibility-seeking operator $\mathcal{A}$ (in step 16) affects
the performance of the LinSup algorithm. It influences the balance
between the amounts of computation allocated to feasibility-seeking
and those allocated to objective function reduction steps. Too large
an $N$ will make Algorithm 1 spend too many resources on the perturbations
that yield objective function reduction.
\textbf{Handling the nonnegativity constraints}. The nonnegativity
constraints in (\ref{eq:linear-feas-prob}) are handled by projections
onto the nonnegative orthant, i.e., by taking the iteration vector
in hand after each iteration of Cimmino's feasibility-seeking algorithm
applied to all $I$ row-inequalities of (\ref{eq:linear-feas-prob})
and setting its negative components to zero while keeping the others
unchanged.
\subsection{Cimmino's feasibility-seeking algorithm as the basic algorithm\label{subsec:AMS}}
We use the simultaneous projections method of Cimmino for linear inequalities,
see, e.g., \cite{LAA1985cimmino}, as the basic algorithm for the feasibility-seeking
operator represented by $\mathcal{A}$ in step 16 of Algorithm 1. Denoting
the half-spaces represented by individual rows of (\ref{eq:linear-feas-prob})
by $H_{i},$
\begin{equation}
H_{i}:=\{x\in R^{J}\mid\left\langle a^{i},x\right\rangle \leq b_{i}\},
\end{equation}
where $a^{i}\in R^{J}$ is the $i$-th row of $A$ and $b_{i}\in R$
is the $i$-th component of $b$ in (\ref{eq:linear-feas-prob}),
the orthogonal projection of an arbitrary point $z\in R^{J}$ onto
$H_{i}$ has the closed form
\begin{equation}
P_{H_{i}}(z)=\left\{ \begin{array}{ll}
z-{\displaystyle \frac{\left\langle a^{i},z\right\rangle -b_{i}}{\Vert a^{i}\Vert^{2}}a^{i},} & \text{if }\left\langle a^{i},z\right\rangle >b_{i},\\
z, & \text{if }\left\langle a^{i},z\right\rangle \leq b_{i}.
\end{array}\right.
\end{equation}
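In code, this projection vectorizes naturally over all $I$ half-spaces at
once; the sketch below (again assuming nonzero rows, with function names of
our choosing) returns an $I\times J$ array whose $i$-th row is $P_{H_{i}}(z)$:
\begin{verbatim}
import numpy as np

def project_all_halfspaces(A, b, z):
    """Row i of the result is P_{H_i}(z): z is moved along a^i
    only when <a^i, z> > b_i, otherwise left unchanged."""
    violation = np.maximum(A @ z - b, 0.0)    # zero inside H_i
    row_norms_sq = np.sum(A * A, axis=1)
    return z[None, :] - (violation / row_norms_sq)[:, None] * A
\end{verbatim}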
\newpage{}
\begin{algorithm}[H]
\label{alg:cimmino}\textbf{Algorithm 2. The Simultaneous Feasibility-Seeking
Projection Method of Cimmino}
\end{algorithm}
\textbf{Initialization}: $x^{0}\in R^{J}$ is arbitrary.
\textbf{Iterative step}: Given the current iteration vector $x^{k}$
the next iterate is calculated by
\begin{equation}
x^{k+1}=x^{k}+\lambda_{k}\left(\sum_{i=1}^{I}w_{i}\left(P_{H_{i}}(x^{k})-x^{k}\right)\right)
\end{equation}
with weights $w_{i}\geq0$ for all $i=1,2,\ldots,I,$ and $\sum_{i=1}^{I}w_{i}=1.$
\textbf{Relaxation parameters}: The parameters $\lambda_{k}$ are
such that $\epsilon_{1}\leq\lambda_{k}\leq2-\epsilon_{2}$ for all
$k\geq0,$ with some arbitrarily small, fixed $\epsilon_{1},\epsilon_{2}>0.$
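With equal weights $w_{i}=1/I$, the relaxation $\lambda_{k}=1.99$ used in our
runs, and the projections from the sketch above, one Cimmino iterative step
followed by the projection onto the nonnegative orthant (described at the end
of the previous subsection) can be sketched as:
\begin{verbatim}
import numpy as np

def cimmino_sweep(A, b, x, lam=1.99):
    """One iteration of Algorithm 2 with w_i = 1/I, followed by
    setting negative components to zero (nonnegativity handling)."""
    projections = project_all_halfspaces(A, b, x)   # P_{H_i}(x), all i
    x_next = x + lam * np.mean(projections - x[None, :], axis=0)
    return np.maximum(x_next, 0.0)
\end{verbatim}
A closure such as \texttt{lambda z: cimmino\_sweep(A, b, z)} can then serve
as the operator $\mathcal{A}$ in the LinSup sketch above.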
This Cimmino simultaneous feasibility-seeking projection algorithm
is known to generate convergent iterative sequences even if the intersection
$\cap_{i=1}^{I}H_{i}$ is empty, as the following, slightly paraphrased,
theorem tells.
\begin{theorem}
\cite[Theorem 3]{LAA1985cimmino} For any starting point $x^{0}\in R^{J}$,
any sequence $\{x^{k}\}_{k=0}^{\infty},$ generated by the simultaneous
feasibility-seeking projection method of Cimmino (Algorithm 2) converges.
If the underlying system of linear inequalities is consistent, the
limit point is a feasible point for it. Otherwise, the limit point
minimizes $f(x):=\sum_{i=1}^{I}w_{i}\left\Vert P_{H_{i}}(x)-x\right\Vert ^{2}$,
i.e., it is a weighted (with the weights $w_{i}$) least-squares solution
of the system.
\end{theorem}
\section{An Empirical Result}
Employing MATLAB 2014b \cite{matlab}, we created five test problems
each with 2500 linear inequalities in $R^{J},\:J=2000$. The entries
in 1250 rows of the matrix $A$ in (\ref{eq:linear-feas-prob}) were
uniformly distributed random numbers from the interval $(-1,1).$
The remaining 1250 rows were defined as the negatives of the first
1250 rows, i.e., $a_{j}^{1250+t}=-a_{j}^{t}$ for all $t=1,2,\ldots,1250$
and all $j=1,2,\ldots,2000.$ This guarantees that the two sets of
rows represent parallel half-spaces with opposing normals. For the
right-hand side vectors, the components of $b$ associated with the
first set of 1250 rows in (\ref{eq:linear-feas-prob}) were uniformly
distributed random numbers from the interval $(0,100).$ The remaining
1250 components of each $b$ were chosen as follows: $b_{1250+t}=-b_{t}-rand(100,200)$
for all $t=1,2,\ldots,1250$. This guarantees that the distance between
opposing parallel half-spaces is large making them inconsistent, i.e.,
having no point in common, and that the whole system is infeasible.
For the linear objective function, the components of $c$ were uniformly
distributed random numbers from the interval $(-2,1).$ All runs of
Algorithm 1 and Algorithm 2 were initialized at $\bar{y}=10\cdot\mathbf{1}$
and $x^{0}=10\cdot\mathbf{1}$, respectively, where $\mathbf{1}$
is the vector of all 1's.
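The construction of one such infeasible test problem can be sketched as
follows (a Python illustration of the recipe above; the actual experiments
used MATLAB \cite{matlab}, whose code we do not reproduce, and we read
$rand(100,200)$ as an independent uniform draw from $(100,200)$ per component):
\begin{verbatim}
import numpy as np

def make_test_problem(I_half=1250, J=2000, seed=0):
    """Opposing half-space families with no common point, plus a
    linear objective and the initialization point 10 * ones."""
    rng = np.random.default_rng(seed)
    A_top = rng.uniform(-1.0, 1.0, size=(I_half, J))
    A = np.vstack([A_top, -A_top])            # opposing normals
    b_top = rng.uniform(0.0, 100.0, size=I_half)
    b_bot = -b_top - rng.uniform(100.0, 200.0, size=I_half)
    b = np.concatenate([b_top, b_bot])        # b_{1250+t} = -b_t - rand(100,200)
    c = rng.uniform(-2.0, 1.0, size=J)
    y0 = 10.0 * np.ones(J)
    return A, b, c, y0
\end{verbatim}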
We ran Algorithm 1 on each problem until it ceased to make progress,
by using the stopping rule
\begin{equation}
\frac{{\displaystyle \left\Vert y^{k}-y^{k-1}\right\Vert }}{\left\Vert y^{k}\right\Vert }\leq10^{-4}.
\end{equation}
The same stopping rule was used for runs of Algorithm 2. The relaxation
parameters in Cimmino's feasibility-seeking basic algorithm in step
16 of Algorithm 1 were fixed with $\lambda_{k}=1.99$ for all $k\geqslant0.$
Based on our work in \cite{linsup16} we used $N=20$ and $\alpha=0.99$
in steps 8 and 9 of Algorithm 1, respectively, where $\eta_{\ell}=\alpha^{\ell}.$
The three figures, presented below, show results for the five different
(but similarly generated) families of inconsistent linear inequalities
along with nonnegativity constraints. Figures 1 and 2, in particular,
show that the perturbation steps 5-15 of the LinSup Algorithm 1 initially
take effect and reduce the objective function value markedly during the
first ca. 500 iterative sweeps (an iterative sweep consists of one
pass through steps 5-17 in Algorithm 1 or one pass through all linear
inequalities and the nonnegativity constraints in Algorithm 2). As
the iterative sweeps proceed, the perturbations in Algorithm 1 lose steam
because of the decreasing values of the $\beta_{k,n}$s, and later
the algorithm proceeds toward feasibility at the expense of some increase
of objective function values. However, even at those later sweeps
the objective function values of LinSup remain well below those of
the unsuperiorized application of the Cimmino feasibility-seeking
algorithm (Algorithm 2).
The slow increase of objective function values observed for the unsuperiorized
application of the Cimmino feasibility-seeking algorithm seems intriguing
because the feasibility-seeking algorithm is completely unaware of
the given objective function $\phi(x):=\left\langle c,x\right\rangle .$
But this is understood from the fact that the unsuperiorized algorithm
has an orbit of iterates in $R^{J}$ which, by proceeding in space
toward proximity minimizers, crosses the linear objective function's
level sets in a direction that either increases or decreases objective
function values. It would keep them constant only if the orbit were
confined to a single level set of $\phi$, which is unlikely to happen.
To clarify this we recorded in Figure 3 the values
of $\left\langle c,x\right\rangle $ and $\left\langle -c,x\right\rangle $
at the iterates $x^{k}$ produced by the Cimmino feasibility-seeking
algorithm (Algorithm 2).
\begin{figure}
\begin{raggedright} \includegraphics[scale=0.35]{Objective-Value-Vs-Sweep-13-06-16}\caption{Linear objective function values plotted against iteration sweeps.
LinSup has reduced objective function values although the effect of
objective function reducing perturbations diminishes as iterations
proceed.\label{fig: lin-obj}}
\end{raggedright}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35]{Proximities-Vs-Sweep-13-06-16}\caption{Proximity function values plotted against iteration sweeps. The unsuperiorized,
feasibility-seeking-only algorithm does a better job than LinSup here,
which is understandable. LinSup's striving for feasibility comes at
the expense of some increase in objective function values, as seen
in Figure 1.\label{fig: prox}}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35]{Objective-Value-Vs-Sweep-Unsuperiorized-13-06-16}\caption{The fact that objective function values increase to some extent under
the unsuperiorized, feasibility-seeking-only algorithm, observed in
Figure 1, is due to the relative position of the linear objective
function's level sets with respect to the location in space of the set of
proximity minimizers of the infeasible target set.\label{fig: c-c}}
\end{figure}
\section*{Concluding Comments}
We proposed a new approach to handle infeasible linear programs (LPs)
via the linear superiorization (LinSup) method. To this end we applied
the feasibility-seeking projection method of Cimmino to the original
linear infeasible constraints (without using additional variables).
This Cimmino method is guaranteed to converge to one of the points
that minimize a proximity function that measures the violation of
all constraints. We used the given linear objective function to superiorize
Cimmino's method to steer its iterates to proximity minimizers with
reduced objective function values. Further computational research
is needed to evaluate and compare the results of this new approach
to existing solution approaches to infeasible LPs.
\subsubsection*{Acknowledgments. \textmd{We thank Gabor Herman, Ming Jiang and Evgeni
Nurminski for reading a previous version of the paper and sending
us comments that helped improve it. This work was supported by Research
Grant No. 2013003 of the United States-Israel Binational Science Foundation
(BSF).}}
\bibliographystyle{plain}
|
2,877,628,091,156 | arxiv | \section{Introduction}
In linear response transport through quantum dots the spin Kondo effect shows up
as a plateau in the linear conductance $G$
when varying the level positions by an external gate voltage
$V_g$, often referred to as a {\it Kondo ridge.}\cite{Hewson,Glazman,Ng,Goldhaber,Cronenwett,Schmid,Wiel}
The width of the Kondo
ridge is set by the local Coulomb interaction $U$ on the dot, which, in the Kondo
regime, exceeds the
level-lead hybridization $\Gamma$. The latter determines the width of the Lorentzian
resonance in $G(V_g)$ at $U=0$. Breaking the two-fold Kramers-degeneracy
by a local Zeeman field of amplitude $B$ destroys the Kondo ridge; along the $V_g$-axis
the conductance plateau is split up into two Lorentzian resonances.
In contrast, spin-orbit interaction (SOI),
although breaking spin-rotational symmetry by designating a certain (spin) direction,
does not destroy the Kondo effect.\cite{Meir,thesisB,jens} In the presence of SOI spin is no longer
a good quantum number but a Kramers doublet remains as time-reversal symmetry is
conserved. This leads to a Kondo effect in the presence of a local interaction provided
the gate voltage is tuned such that the dot is filled (on average) by an odd number of
electrons (dominant spin fluctuations).
In multi-level dots increasing $B$
might lead to energetically degenerate states
(level crossings) resulting from different orbitals. If one is a spin-up state and
one a spin-down state and the gate voltage is tuned such that an electron fluctuates
between these states one might expect the emergence of a spin Kondo effect at {\it finite
magnetic field}.\cite{Pustilnik,2,Izumida}
If the orbital quantum number
is conserved in the leads in such systems additional orbital Kondo
effects\cite{Cox} and combinations of spin and orbital Kondo effects\cite{Borda} may appear.
Here we consider a setup where the orbital quantum number does not arise in the leads
and we thus concentrate on spin Kondo effects.
In contrast to the standard $B=0$ Kondo effect the one appearing at finite $B$ is not protected
by time-reversal symmetry and we show that it is {\it destroyed} in the presence of a
finite SOI.
We here study a serial double
quantum dot, with each dot hosting a single spin-degenerate level (at vanishing magnetic field),
described by a tight-binding model with two lattice sites coupled by
electron hopping of amplitude $t$ and connected to two semi-infinite
noninteracting (Fermi liquid) leads via tunnel couplings
of strength $\Gamma_L$ and $\Gamma_R$. The on-site energies of the two levels
are given by $\epsilon_{1/2}=V_g \pm \delta$.
The Rashba SOI identifies the $z$-direction of the spin space and
is modeled as an imaginary electron hopping with spin-dependent sign between the
two lattice sites.\cite{Winkler,Mireles,BM,thesisB}
We here exclusively consider the coupling of a magnetic field to the spin
degree of freedom (Zeeman term) and neglect its effect on the orbital motion.
The relevant component of the Zeeman field perpendicular to the SOI defines
(without loss of generality) the $x$-direction.
The local Coulomb repulsion is modeled as an on-site $U$ as well as a nearest-neighbor $U'$ repulsion
and treated within an approximate static functional-renormalization group (fRG)
approach.\cite{KEM} Our model is sketched in Fig.~\ref{fig:fig1}.
In the absence of SOI a finite-$B$ Kondo effect will lead to a Kondo ridge. This conductance
plateau can be detected if $G$ is computed (or measured) as a function of $V_g$ and $B$. We show
that the finite-$B$ conductance plateau develops on a line
parallel to the $V_g$-axis, the only exception being the case with broken
{\it left-right} and {\it bonding-anti-bonding state symmetry} realized
for $\Gamma_L \neq \Gamma_R$ {\it and} $\delta \neq 0$.
In this case the finite-$B$ Kondo ridges are {\it bent} with respect to the $V_g$-axis; to follow
the conductance maximum when changing $V_g$ one has to adjust $B$ to compensate
for the asymmetry-induced level renormalization.
As the two symmetries are generically broken in experimental systems our
results are important for the understanding of measurements of the
linear conductance of multi-level quantum dots as a function of $V_g$
and $B$.
This paper is organized as follows. In the next section, we introduce our double-dot
model. In Sec.~\ref{sec:RG} details of the approximate fRG treatment of the Coulomb
interaction specific to the present model are discussed. For later reference
we also give a brief account of the appearance of the Kondo ridge for a single
dot within our approximation scheme. In Sec.~\ref{sec:res} we discuss our results. We
first describe the effects of the SOI on the spectrum of the noninteracting
isolated double dot. Next we present our results for the linear conductance $G(V_g,B)$
considering the entire parameter space. Our work is summarized in Sec.~\ref{concl}.
\begin{figure}[t!]
\center{\includegraphics[width=65mm]{fig1.eps}}
\caption{The considered setup consists of a serial double quantum dot with
energies $\epsilon_{1/2}=V_g \pm \delta$ coupled by a hopping
amplitude $t$ and a SOI of strength $\alpha$. The levels are split by an external Zeeman field $B$.
The local Coulomb
interaction is $U$ and the interaction between electrons on the two sites is $U'$.
The system is coupled to noninteracting leads by hopping amplitudes $t_{L/R}$.}
\label{fig:fig1}
\end{figure}
\section{Model}\label{sec:model}
Our multi-level quantum dot model is realized by a serial double dot,
with each dot hosting a single spin-degenerate level (at $B=0$), as sketched in Fig.~\ref{fig:fig1}.
The Hamiltonian of the isolated dot contains several terms
\begin{equation}
H_{\rm dot}=H_0+H_{\rm SOI}+H_Z+H_{\rm int} \;.
\end{equation}
The free part
\begin{equation}
H_0=\sum_{\sigma} \left[\sum_{j=1,2} \epsilon_j d_{j,\sigma}^\dagger
d_{j,\sigma}-t\left(
d_{2,\sigma}^\dagger d_{1,\sigma} +\mbox{H.c.}\right)\right] \; ,
\end{equation}
with $d_{j,\sigma}^\dagger$ being the creation operator of an electron
on the dot site $j=1,2$ (Wannier states) of spin $\sigma$, contains the
conventional hopping $t>0$, and the on-site energies
\begin{equation}
\epsilon_{1/2}=V_g \pm \delta
\end{equation}
which can be tuned by an external gate voltage $V_g$.
The difference of the on-site energies is parametrized
by the level splitting $\delta$. The effect of SOI is taken into account by
an imaginary hopping amplitude of spin-dependent sign. It is the
lattice realization of a Rashba SOI resulting from spatial confinement.\cite{Winkler,Mireles,BM,thesisB}
The Rashba hopping term with amplitude $\alpha>0$ reads
\begin{eqnarray}
H_{\rm SOI}&=&\alpha\sum_{\sigma,\sigma'}\left[ d_{2,\sigma}^\dagger
\left(i\sigma_z\right)_{\sigma,\sigma'} d_{1,\sigma'} + \mbox{H.c.} \right] \; ,
\end{eqnarray}
with the third Pauli matrix $\sigma_z$. Choosing the $z$-direction in spin
space for the direction of the SOI breaks the spin-rotational invariance.
The Zeeman field can have a component parallel to the SOI (that is in
$z$-direction) and one perpendicular to it. For this we here choose (without
loss of generality) the $x$-direction such that the (local) Zeeman term reads
\begin{eqnarray}
H_Z & = & B \sum_{\sigma,\sigma'} \sum_{j=1,2} \left[
d_{j,\sigma}^\dag (\sigma_z)_{\sigma,\sigma'} d_{j,\sigma'} {\rm sin} \, \theta \right. \nonumber
\\ && + \left.
d_{j,\sigma}^\dag (\sigma_x)_{\sigma,\sigma'} d_{j,\sigma'} {\rm cos} \, \theta \right] \; .
\end{eqnarray}
For $\theta = \pm \pi/2$ the SOI and the $B$-field are (anti-)parallel. In this case
the conventional hopping and the SOI can be combined into an effective hopping
and the SOI does not have a specific effect. In particular, it does not destroy the
finite-$B$ Kondo ridges (see Sec.~\ref{aa}). The local Coulomb interaction is included by
\begin{eqnarray}
H_{\rm int}&=&U\sum_{j=1,2} \left(n_{j,\uparrow} - \frac{1}{2} \right) \left( n_{j,\downarrow} - \frac{1}{2} \right)
\nonumber \\ && +U' \left( n_{1} -1 \right) \left(n_{2}-1\right)
\end{eqnarray}
for the local $U>0$ and nearest-neighbor $U'>0$ interactions, respectively, with
$n_{j,\sigma}= d_{j,\sigma}^\dagger d_{j,\sigma}$ and $n_j= \sum_\sigma n_{j,\sigma}$.
By subtracting $1/2$ from
$n_{j,\sigma}$ in the definition of $H_{\rm int}$ the point $V_g=0$ corresponds to half-filling
of the double dot even in the presence of Coulomb repulsion.
The dot Hamiltonian is supplemented by a term describing two semi-infinite
noninteracting leads, which we here model as one-dimensional
tight-binding chains
\begin{eqnarray}
H_\mathrm{lead} =- \tau \sum_{\beta=L,R}
\sum_{j=0}^\infty \sum_{\sigma} \left[ c_{\beta,j+1,\sigma}^\dagger
c_{\beta,j,\sigma} +\mathrm{H.c.} \right]\;,
\end{eqnarray}
with lead operators $ c_{\beta,j+1,\sigma}^{(\dagger)}$ and equal band width
$4 \tau$. In the following we choose $\tau$ as the unit of energy: $\tau=1$.
The dot-lead couplings are given by a tunnel Hamiltonian
\begin{eqnarray}
\label{Hcoup}
H_\mathrm{coup}=&\sum\limits_{\sigma}&
\big( t_L d_{1,\sigma}^\dagger c_{L,1,\sigma} + t_R c_{R,1,\sigma}^\dagger
d_{2,\sigma} +\mathrm{H.c.} \big) \;,
\end{eqnarray}
with tunnel barriers set by $t_{L/R}$. We here consider the so-called wide band limit
(see e.g. Ref.~\onlinecite{KEM}) in which the tunnel barriers enter only through the combination
$\Gamma_{L/R}= \pi t_{L/R}^2\rho_{\rm leads}$, with the local lead density of states
$\rho_{\rm leads}$ (at lattice site 1) evaluated at the chemical potential.
\section{Method: functional RG}\label{sec:RG}
\subsection{Flow equation for the self-energy}
We briefly review the applied approximation scheme which is based on the fRG method
\cite{SalmhoferHonerkamp} focusing on aspects specific to the present model.
A detailed description of the implementation for quantum-dot systems in
the absence of SOI
is provided in Ref.~\onlinecite{KEM}. Recent extensions including SOI involve
inhomogeneous quantum wires.\cite{BM}
Starting point of the fRG scheme is the bare ($U=U'=0$) Matsubara frequency
propagator ${\mathcal{G}}_0$
of the double dot. The leads are projected onto the dot sites and enter via the
hybridization $\Gamma_{L/R}$.\cite{KEM} In the basis
\begin{eqnarray}
\label{basis}
\left\{ \left|1,\uparrow\right>,
\left|1,\downarrow\right>, \left|2,\uparrow\right>,
\left|2,\downarrow\right> \right\}
\end{eqnarray}
of single-particle states its inverse reads
\begin{widetext}
\begin{eqnarray}
&&\!\!\!\!\!\!\!\! {\mathcal{G}}_0^{-1}(i\omega) = \nonumber \\ && \left( \begin{array}{cccc}
i\omega-\epsilon_1-B \sin \theta + i \Gamma_L(\omega)\!\!\!\!& -B \cos \theta &t-i\alpha&0 \\
- B \cos\theta&i\omega-\epsilon_1+ B \sin \theta + i \Gamma_L(\omega)\!\!\!\! &0&t+i\alpha \\
t+i\alpha&0&i\omega-\epsilon_2-B \sin \theta+i \Gamma_R(\omega)\!\!\!\!&- B \cos \theta\\
0& t-i\alpha &-B \cos \theta& i\omega-\epsilon_2+B \sin \theta+i
\Gamma_R(\omega)\end{array} \right) \label{g0}
\;
\end{eqnarray}
\end{widetext}
with $\Gamma_{L/R}(\omega)= \Gamma_{L/R} \, \mbox{sgn}(\omega)$.
This propagator is replaced by one in which low-energy degrees of freedom below a cutoff $\Lambda$ are suppressed
\begin{equation}
\mathcal{G}_0^\Lambda(i\omega)=\Theta(|\omega|-\Lambda)\mathcal{G}_0(i\omega)\;.
\end{equation}
The cutoff $\Lambda$ is later sent from $\infty$ down to $0$.
Inserting $\mathcal{G}_0^\Lambda$ in the generating functional of the one-particle irreducible vertex
functions, an infinite hierarchy of coupled differential equations is obtained by differentiating
the generating functional with respect to $\Lambda$ and expanding it in powers of the external fields.
Practical implementations require a truncation of the flow equation hierarchy.
We restrict the present analysis to the first order in the hierarchy and only consider the flow of the
one-particle vertex, that is the self-energy.
It was previously discussed analytically\cite{AEM,KEM} that for a single dot the resulting flow equations
capture the appearance of a Kondo ridge in $G(V_g)$
of width $\sim 1.5 \, U$.
The accuracy can be improved by including the flow of the static
part of the two-particle vertex (effective interaction) which gives a Kondo plateau of width $\sim U$
in good agreement with the exact result.\cite{AEM,KEM}
Here we are not interested in such quantitative improvements and instead keep the bare vertex. This
has the advantage that the resulting flow equations for the matrix elements of the self-energy have a
simple structure which not only allows for a fast numerical solution but provides the opportunity to
gain analytical insights.
The flow equation for the self-energy reads\cite{KEM}
\begin{equation}
\frac{\partial}{\partial\Lambda}\Sigma_{a',a}^\Lambda=-\frac{1}{2\pi}\sum\limits_{\omega=\pm\Lambda}\sum\limits_{b,b'}e^{i\omega 0^+}{\mathcal{G}}_{b,b'}^\Lambda(i\omega)\Gamma_{a',b';a,b} \;, \label{DGLsigma}
\end{equation}
where the indices $a,a',b,b'$ label the quantum numbers $({j,\sigma})$, $\Gamma_{a',b';a,b}$ is
the two-particle vertex, and
the interacting Green function ${\mathcal{G}}$ is determined by the Dyson equation
\begin{equation}
\label{dyson}
{\mathcal{G}}^\Lambda(i\omega)=\left[\mathcal{G}_0^{-1}(i\omega)-\Sigma^\Lambda\right]^{-1} \; .
\end{equation}
The initial condition for $\Lambda_0 \to \infty$ is $\Sigma^{\Lambda_0} = 0$.\cite{KEM}
In the lowest-order scheme the two-particle vertex $\Gamma_{a',b';a,b}$ is given by the bare
anti-symmetrized interaction. As the bare vertex is frequency independent,
the approximate self-energy turns out to be frequency independent.
Dynamical contributions are generated only at higher orders. As the latter are
important for the conductance at temperatures $T>0$ the current approximation scheme
is restricted to $T=0$. The correct temperature dependence of the (single-dot) Kondo
ridge is only captured if the flow of a frequency dependent two-particle vertex---leading
to a flowing frequency dependent self-energy---is kept.\cite{karrasch,Severin}
Within our approximation, the matrix elements of the self-energy
at the end of the flow $\Sigma^{\Lambda=0}$ can be interpreted as
interaction-induced renormalizations to the noninteracting model parameters such
as the SOI and conventional hopping amplitudes, as well as the on-site
energies and the magnetic field.\cite{KEM}
\subsection{Computation of the linear conductance}
From the self-energy obtained at the end of the flow at $\Lambda=0$, the full propagator
including interaction effects is determined via the Dyson equation
(\ref{dyson}). From this various observables can be computed.\cite{KEM}
Here we concentrate on the linear conductance. At $T=0$ current-vertex
corrections vanish and the Kubo formula for the spin-resolved
conductance assumes a generalized
Landauer-B\"uttiker form\cite{Oguri}
\begin{equation}
G_{\sigma,\sigma'} =\frac{e^2}{h}
|\mathcal{T}_{\sigma,\sigma'}(0)|^2 \; ,
\end{equation}
with the effective transmission $\mathcal{T}_{\sigma,\sigma'}(0)$ evaluated
at the chemical potential. For the present setup the transmission is given by the
$(1,\sigma;2,\sigma')$ matrix element of the full propagator leading to\cite{KEM}
\begin{equation}
G_{\sigma,\sigma'} = \frac{e^2}{h} 4 \Gamma_L \Gamma_R
\left| \mathcal{G}_{1,\sigma;2,\sigma'}(0)\right|^2 \; .
\end{equation}
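For the noninteracting limit this evaluation can be made explicit: setting the
self-energy to zero, one inverts the matrix (\ref{g0}) at $\omega\to0^{+}$ and
sums the four spin-resolved transmissions. The following Python sketch is an
illustration of this step only (with the fRG, the converged $\Sigma^{\Lambda=0}$
would be subtracted from the matrix before inverting); function and variable
names are ours.
\begin{verbatim}
import numpy as np

def conductance_U0(Vg, delta, t, alpha, B, theta, GL, GR):
    """U = U' = 0 linear conductance in units of e^2/h, from
    Eq. (g0) at omega -> 0^+ and the Landauer-Buettiker formula."""
    e1, e2 = Vg + delta, Vg - delta
    s, c = B * np.sin(theta), B * np.cos(theta)
    Ginv = np.array([
        [-e1 - s + 1j*GL, -c,              t - 1j*alpha,    0],
        [-c,              -e1 + s + 1j*GL, 0,               t + 1j*alpha],
        [t + 1j*alpha,    0,              -e2 - s + 1j*GR, -c],
        [0,               t - 1j*alpha,   -c,              -e2 + s + 1j*GR]],
        dtype=complex)
    G = np.linalg.inv(Ginv)
    # sum over sigma, sigma' of 4 GL GR |G_{1,sigma;2,sigma'}(0)|^2
    return 4 * GL * GR * np.sum(np.abs(G[:2, 2:])**2)
\end{verbatim}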
\subsection{Single-level quantum dot}
Before analyzing the serial double dot, for later reference we
briefly discuss the single-level quantum dot within our
approximation scheme.\cite{AEM,KEM}
In this case the flow equation for the effective (flowing) level
position $V_\sigma^\Lambda = V_g + \sigma B + \Sigma_\sigma^\Lambda$ is
\begin{equation}
\label{eq:diffkondo}
\frac{d}{d\Lambda} V_\sigma^\Lambda = \frac{UV_{\bar\sigma}^\Lambda/\pi}
{(\Lambda+\Gamma)^2+(V_{\bar\sigma}^\Lambda)^2}\;,
\end{equation}
with initial condition $V_\sigma^{\Lambda=\infty} = V_g+\sigma B$, $\bar \sigma= - \sigma$,
and $\Gamma=\Gamma_L+\Gamma_R$.
At $B=0$ the level position is spin independent $V_\sigma^\Lambda = V^\Lambda$
and the differential equation can be solved
analytically.\cite{AEM,KEM} For $U \gg \Gamma$ (in the Kondo
regime) and $|V_g| \lesssim \,0.77 \, U$ the solution at $\Lambda=0$ is
\begin{align}
\label{eq:exp}
V = V_g \exp \Big( -\frac{U}{\pi \Gamma}\Big) \; .
\end{align}
It is this exponential pinning of the renormalized level position to
zero (the chemical potential) which leads to the Kondo plateau in the total
conductance $G(V_g,B)=G_\uparrow(V_g,B) +G_\downarrow(V_g,B) $ given by
\begin{align}
G(V_g,B=0) & = \frac{2e^2}{h} \, \frac{4 \Gamma_L \Gamma_R}{\Gamma^2}
\frac{\Gamma^2}{V^2 + \Gamma^2} \,.
\end{align}
For $U=0$ the level position $V=V_g$ is unrenormalized and $G$ reaches
its maximum value $G_{\rm max} = (2e^2/h) \, 4\Gamma_L
\Gamma_R/\Gamma^2$ only at the resonance voltage $V_g=0$ (Lorentzian resonance of width $\Gamma$).
For $U>0$, $V$ is pinned to zero around $V_g=0$ for a
gate-voltage range of width $\sim 1.5 \, U$, where the Kondo plateau develops in the conductance
$G\simeq G_{\rm max}$.
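The exponential pinning can be checked by integrating Eq.~(\ref{eq:diffkondo})
numerically at $B=0$ from a large initial cutoff down to $\Lambda=0$; the
following short Python sketch (an illustration only, not the code used for the
figures) reproduces Eq.~(\ref{eq:exp}) up to small corrections from the
$(V^{\Lambda})^{2}$ term in the denominator:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def renormalized_level(Vg, U, Gamma, Lambda0=1e4):
    """Integrate dV/dLambda = (U/pi) V / ((Lambda+Gamma)^2 + V^2)
    downward from Lambda0 (approximating infinity) to Lambda = 0."""
    rhs = lambda lam, V: [(U / np.pi) * V[0] / ((lam + Gamma)**2 + V[0]**2)]
    sol = solve_ivp(rhs, (Lambda0, 0.0), [Vg], rtol=1e-10, atol=1e-14)
    return sol.y[0, -1]

U, Gamma, Vg = 1.0, 0.05, 0.3     # Kondo regime: U >> Gamma, |Vg| < 0.77 U
print(renormalized_level(Vg, U, Gamma))      # numerical V at Lambda = 0
print(Vg * np.exp(-U / (np.pi * Gamma)))     # analytic estimate, Eq. (eq:exp)
\end{verbatim}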
We next consider a finite magnetic field $B \neq 0$. The flow equation for the effective
$B^\Lambda = ({V}^\Lambda_{\uparrow}-{V}_{\downarrow}^\Lambda )/2$
reduces to
\begin{align}
\frac{d}{d\Lambda} B^\Lambda = -\frac{U B^\Lambda/\pi}
{(\Lambda + \Gamma)^2+(B^\Lambda)^2}\label{bren}
\end{align}
at $V_g=0$.
The renormalized magnetic field $B_{\rm ren}$ at $\Lambda=0$ hence shows the same exponential
behavior (with prefactor $B$ instead of $V_g$) as the renormalized level
position Eq.\ (\ref{eq:exp}) except for the \textit{reversed}
sign in the exponent. This leads to a dramatic increase of the renormalized field $B_{\rm ren}$
compared to the bare one.
For $U>0$ the total conductance at $V_g=0$
\begin{align}
G(V_g=0,B) & = \frac{2e^2}{h} \, \frac{4 \Gamma_L \Gamma_R}{\Gamma^2}
\frac{\Gamma^2}{B_{\rm ren}^2 + \Gamma^2} \; ,
\end{align}
becomes exponentially sharp in the $B$ direction (on a scale set by
the Kondo scale\cite{Hewson,AEM,KEM}), instead of Lorentzian-like with width $\Gamma$ as for $U=0$.
In fact, this holds not only at $V_g=0$ but for all gate voltages within the $B=0$ conductance plateau.
Qualitatively the Kondo ridge of a single-level dot in the $V_g$-$B$ plane looks similar to
the surrounding of one of the $B=0$ Kondo ridges appearing in our double dot model as e.g. shown
in Fig.\ \ref{fig:fig2}.
\vspace{0.6cm}
\section{Results}\label{sec:res}
\subsection{The noninteracting isolated double dot}
For a detailed understanding of the Kondo ridges it is instructive to first
study the noninteracting isolated double dot.
In the basis of Eq.\ (\ref{basis}) the single-particle Hamiltonian $h_{\rm dot}$ is represented
as a complex $4 \times 4$ matrix
\begin{widetext}
\begin{equation}
h_{\rm dot}=\left( \begin{array}{cccc}
V_g+\delta+ B \sin \theta& B \cos \theta &-t+i\alpha&0 \\
B \cos \theta&V_g+\delta-B \sin \theta &0&-t-i\alpha \\
-t-i\alpha&0&V_g-\delta+B \sin\theta&B \cos \theta\\
0& -t+i\alpha & B \cos \theta& V_g-\delta-B \sin \theta\end{array} \right) \;.
\label{0}
\end{equation}
\end{widetext}
The corresponding eigenvalues are
\begin{eqnarray}
\lambda& =&V_g\pm\sqrt{B^2+t^2+\delta^2+\alpha^2\pm2B\sqrt{t^2+\delta^2+\alpha^2\sin^2\theta}}\nonumber\\
&=&V_g\pm\sqrt{\left(B\pm\sqrt{t^2+\delta^2+\alpha^2\sin^2\theta}\right)^2+\alpha^2\cos^2\theta}\;.\nonumber\\
\label{e}
\end{eqnarray}
The spectrum is invariant under the transformation $B \to -B$ and, at $V_g=0$, symmetric around zero energy.
A finite level splitting $\delta>0$ yields an effective hopping
$t_{\rm eff}=\sqrt{t^2+\delta^2}$. For vanishing SOI ($\alpha=0$) Eq.~(\ref{e})
reduces to
\begin{equation}
\label{EVform}
\lambda=V_g\pm\left(t_{\rm eff} \pm B\right) \; .
\end{equation}
Naturally, the
$\theta$-dependence drops out. For $\theta=\pm \pi/2$ the SOI and the Zeeman field
are (anti-)parallel and $\alpha$ can be absorbed into an
effective hopping $t_{\rm eff}=\sqrt{t^2+\delta^2 + \alpha^2}$. The eigenvalues
are then of the $\alpha=0$-form Eq.\ (\ref{EVform}).
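As a consistency check, Eq.~(\ref{e}) can be verified against a direct
numerical diagonalization of the matrix (\ref{0}); a brief sketch (function
names are ours):
\begin{verbatim}
import numpy as np

def dot_spectrum(Vg, delta, t, alpha, B, theta):
    """Eigenvalues of the isolated-dot Hamiltonian, Eq. (0)."""
    e1, e2 = Vg + delta, Vg - delta
    s, c = B * np.sin(theta), B * np.cos(theta)
    h = np.array([
        [e1 + s,         c,              -t + 1j*alpha,  0],
        [c,              e1 - s,          0,            -t - 1j*alpha],
        [-t - 1j*alpha,  0,               e2 + s,        c],
        [0,             -t + 1j*alpha,    c,             e2 - s]],
        dtype=complex)
    return np.linalg.eigvalsh(h)

def closed_form(Vg, delta, t, alpha, B, theta):
    """The four eigenvalues according to Eq. (e)."""
    teff2 = t**2 + delta**2 + alpha**2 * np.sin(theta)**2
    lam = [Vg + s1 * np.sqrt((B + s2 * np.sqrt(teff2))**2
                             + alpha**2 * np.cos(theta)**2)
           for s1 in (1, -1) for s2 in (1, -1)]
    return np.sort(lam)

# e.g. np.allclose(dot_spectrum(0.2, 0.3, 1, 0.6, 0.5, 1.0),
#                  closed_form(0.2, 0.3, 1, 0.6, 0.5, 1.0)) holds
\end{verbatim}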
For the appearance of a spin Kondo effect (after turning on $\Gamma_{L/R}$ as well as $U$ and $U'$)
it is necessary that two degenerate levels of opposite spin are located at zero energy
(the chemical potential). Zero energy spin-degenerate levels are obtained at $B=0$ and
$V_g=\pm t_{\rm eff}$ (bonding and anti-bonding states).
This will lead to the standard spin Kondo effect related to the presence of a (spin)
Kramers doublet when $U$ and $U'$ are switched on.
Increasing $B$ the spin-up level
of the bonding state and the spin-down level of the anti-bonding one approach each other.
For either $\alpha=0$ or $\theta=\pm \pi/2$ they become degenerate (cross) at zero
energy for $B_c=\pm t_{\rm eff}$ and $V_g=0$. Besides the two $B=0$ Kondo ridges developing
for all dot parameters, for $\alpha=0$ or $\theta=\pi/2$ one might thus expect the appearance of
two finite-$B$ Kondo ridges.
Viewed on the basis of the many-body energies of the isolated {\it interacting} double dot this situation corresponds to the crossing of a two-particle $S=1$, $S_z=-1$ state with a corresponding $S=1$, $S_z=0$ state. The resulting Kondo effect can thus also be referred to as a singlet-triplet one.
For $\alpha>0$ and an arbitrary angle $\theta$ between
the SOI and the Zeeman field the finite-$B$ level crossings turn into avoided crossings and
no zero-energy degeneracies are possible at finite $B$.
\subsection{Numerical results for the conductance}
We next present results for the linear conductance $G(V_g,B)$ obtained by numerically
solving the flow equation (\ref{DGLsigma}). For our purposes it is sufficient
to consider only a single set of $U,U'$ and $t$. The results depend only
quantitatively on the strength of the local Coulomb interaction
and the inter-dot hopping $t$, as long as $U/\Gamma_{L/R}$ and $t/t_{L/R}$ are
sufficiently large. We here focus on $U=U'=1$ and $t=1$.
\subsubsection{Vanishing SOI}
\label{aa}
We start our discussion with the case of vanishing SOI. This is trivially
realized for $\alpha=0$. As discussed above, for
$\theta=\pi/2$ the SOI can be absorbed into an effective hopping.
In this case the Zeeman field and the SOI are (anti-)parallel
and only a single Pauli matrix enters the Hamiltonian. The physics is thus similar
to the $\alpha=0$ case.
In Fig.~\ref{fig:fig2} we show $G(V_g,B)$ for a left-right symmetric
setup with $t_{L/R}=0.3$ and $\delta=0$ in the absence of SOI ($\alpha=0$).
The conductance shows the expected two pairs of $B=0$ and $V_g=0$ Kondo ridges.
Due to the renormalization of the inter-dot hopping the
$B=0$ plateaus are not centered around $V_g = \pm t_{\rm eff}=\pm t=\pm1$ but
around renormalized gate voltages. For the present parameters the renormalization of
the inter-dot hopping and the magnetic field (almost) cancel each other such that the
finite-$B$ ridges are located at $B_c \approx \pm t_{\rm eff}=\pm t=\pm1$.
For $U' < U$, $B_c$ is renormalized to smaller values.
All Kondo plateaus exhibit the same maximal height of $2 e^2/h$.
This can be understood from transforming the dot-lead
coupling Eq.~(\ref{Hcoup}) into the basis of bonding ($b$) and anti-bonding ($\bar b$)
states of the noninteracting isolated dot.
The couplings between the leads and the two states are
\begin{equation}
\Gamma_{b,L/R}=\frac{\sqrt{t^2+\delta^2}\mp \delta}{2\sqrt{t^2+\delta^2}}\,\Gamma_{L/R}\;,
\label{leadcoup}
\end{equation}
and $\Gamma_{\bar b,L/R}=\Gamma_{b,R/L} \,t^2_{L/R} / t^2_{R/L}$.
For the considered left-right symmetric case and $\delta=0$
the bonding and anti-bonding states have the same total coupling
$\Gamma_{b/\bar b}=\Gamma_{b/\bar b,L}+\Gamma_{b/\bar b,R}$
and the couplings are left-right symmetric implying unitary
conductance. Increasing $|B|$ from $B=0$ the Kondo ridges are suppressed on the
exponential Kondo scale and resonance peaks of height $e^2/h$ and
width $\Gamma$ develop. The position of the resonance peaks varies linearly
with $V_g$. Eventually the peaks corresponding to the spin-up level of the bonding
state and the spin-down level of the anti-bonding one merge with the
finite-$B$ Kondo ridges. Figure \ref{fig:fig3} shows $G(V_g)$ at
different fixed values of $B>0$ to further exemplify this. The shoulders appearing in the
$B=0$ bonding (anti-bonding) state Kondo ridges are linked to the
presence of the anti-bonding (bonding) one. Increasing the Coulomb interaction has the two obvious
effects of broadening the Kondo ridges and increasing the distance between the
centers of the plateaus; the latter also holds for increasing $t$.
In the lower panel of Fig. \ref{fig:fig3} the respective dot fillings are shown
for various values of $B$.
For $B=0$ the dot occupation exhibits plateaus at odd fillings. Their width corresponds
to the Kondo ridges observed in the conductance. Similarly, for a finite magnetic field $B=1$ the plateau around $V_g=0$ in the conductance is reflected in the filling.
\begin{figure}[t!]
\center{\includegraphics[clip=true,width=8.75cm]{fig2.eps}}
\caption{\label{fig:fig2} (Color online)
Conductance $G(V_g,B)$ for $t=1$, $U=U'=1$, $t_{L/R}=0.3$, and $\delta=0$ in the
absence of SOI ($\alpha=0$).}
\end{figure}
\begin{figure}[t!]
\center{\includegraphics[clip=true,width=9.25cm]{fig3.eps}}
\caption{\label{fig:fig3} (Color online)
Gate-voltage dependence of the conductance $G$ and dot occupation $\left< n_1+n_2
\right>$ for different values of $B$ and the same parameters as in Fig.~\ref{fig:fig2}.}
\end{figure}
We note that $G(V_g,B)$ is symmetric
with respect to $B \to -B$ (see the eigenvalues Eq.\ (\ref{e})) and
$V_g \to -V_g$. While the former symmetry is given by the Hamiltonian and
holds for {\it all} parameter sets, the latter is specific to the parameters
chosen here ($t_L=t_R$ and $\delta=0$).
The conductance remains symmetric under $V_g \to -V_g$ if at least either $\delta=0$
or $t_L = t_R$ holds. In these cases the four Kondo ridges no longer reach
the unitary value $2 e^2/h$ but exhibit an equally reduced conductance plateau as the bonding
and anti-bonding states are coupled with the same asymmetry and the same total coupling
to the leads.
The most interesting situation arises if the left-right symmetry and the bonding-anti-bonding
state symmetries are simultaneously broken. This is achieved for $t_L \neq t_R$ {\it and}
$\delta \neq 0$. In this case the $V_g \to -V_g$ symmetry of $G(V_g,B)$ is broken as shown
in Fig.~\ref{fig:fig4}. For our parameters the couplings of the anti-bonding state have a
strong left-right asymmetry leading to a $B=0$
Kondo plateau with significantly reduced conductance (around $V_g =2.2$).
The bonding state has a weaker asymmetry such that the $B=0$ Kondo ridge
centered around $V_g = -2.2$ almost reaches the
unitary conductance. The total coupling $\Gamma_{\bar b}$ is larger than $\Gamma_{b}$.
With the breaking of the $V_g \to -V_g$ symmetry, manifest already from the comparison
of the two $B=0$ Kondo ridges, the finite-$B$ Kondo ridges (centered
around $V_g=0$) are no longer necessarily parallel to the $V_g$-axis.
In fact they are {\it bent} with respect to this axis as becomes apparent
from Fig.~\ref{fig:fig4}. It turns out that the direction of bending is always
away from the state with stronger total level-lead coupling. For our model this
is the state with larger left-right asymmetry.
This bending cannot be predicted from the isolated double dot, even when
including the Coulomb interaction, as it results from a level renormalization
associated with the dot-lead couplings.
Related level renormalizations are discussed
in Refs.\ \onlinecite{Holm} and \onlinecite{Hauptmann}.
In experiments on multi-level quantum
dots left-right symmetry is difficult to realize and the states at different
energies will have different level-lead couplings. Therefore finite-$B$ spin
Kondo ridges appearing in measurements are expected to be generically bent with
respect to the $B=0$ ones. This result is relevant for the understanding
of finite-$B$ Kondo ridges observed in conductance measurements on multi-level
carbon nanotube quantum dots.\cite{kasper}
\begin{figure}[t!]
\center{\includegraphics[clip=true,width=8.75cm]{fig4.eps}}
\caption{\label{fig:fig4} (Color online)
Conductance $G(V_g,B)$ for $t=1$, $U=U'=1$, $\alpha=0$
with asymmetric couplings to the leads $t_L=0.3$, $t_R=0.5$ and finite level
splitting $\delta=-0.3$.}
\end{figure}
\subsubsection{Effect of the SOI}
For the discussion of the effect of the SOI on the Kondo ridges we focus on the symmetric case
with $t_L = t_R$ and $\delta=0$. Figure~\ref{fig:fig5} shows $G(V_g,B)$
for $\alpha = 0.6$ and an angle $\theta = 1.56$ close to the parallel configuration
with $\theta = \pi/2$. Thus the effective SOI given by the component perpendicular to
the direction of the Zeeman field is small. For the noninteracting single-particle levels of
the isolated double dot this implies a small minimal distance between the levels avoiding the
crossing at finite $B$. Therefore remnants of the finite-$B$ Kondo ridges are still observable. The inset
shows $G$ at $V_g=0$ as a function of $\theta$ and $B$. Increasing the SOI component
perpendicular to the direction of the Zeeman field by deviating from $\theta=\pi/2$ obviously
destroys the finite-$B$ Kondo effect as expected. This provides a way to probe the presence of SOI
in multi-level dots: after observing finite-$B$ spin Kondo ridges one can probe their robustness
by changing the direction of the magnetic field.
Remarkably, the SOI introduces an intriguing angular dependence even
for a magnetic field coupling exclusively to the spin degree of freedom.\cite{jens}
From the angular dependence of the gap opening in the single-particle
energy spectrum the strength of the SOI can be extracted by spectroscopic measurements.
\begin{figure}[t!]
\center{\includegraphics[clip=true,width=8.75cm]{fig5.eps}}
\caption{\label{fig:fig5} (Color online)
Conductance $G(V_g,B)$ for $t=1$, $U=U'=1$, $t_{L/R}=0.3$, and
$\delta=0$ with $\theta=1.56$ and finite SOI
$\alpha=0.6$. Inset: Conductance $G(\theta,B)$ at $V_g$=0.}
\end{figure}
\subsection{Analytical insights}
We next provide analytical insights to our findings in the absence of SOI
by analyzing the fRG flow equation (\ref{DGLsigma}).
To simplify the analysis we focus
on the case of purely local Coulomb interactions with $U'=0$. Due to the absence of
Fock terms $t$ remains unrenormalized, but this only yields
quantitative changes compared to the results shown in the last subsection.
In the absence of SOI $\theta$ does not play any role and can be chosen arbitrarily. For $\theta=\pm \pi/2$
the matrix (\ref{g0}) is block diagonal and we can restrict the analysis to a single spin sector.
Introducing $V_{j\sigma}^{\Lambda}=\epsilon_j+\sigma B+\Sigma_{j\sigma}^{\Lambda}$
the full propagator including the self-energy reads
\begin{eqnarray}
{\mathcal{G}}_{\sigma}^{\Lambda}(i\omega)&=&\\&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\frac{1}{D_{\sigma}^{\Lambda}(i\omega)}\left( \begin{array}{cc}
\!i\omega\!+\!i\Gamma_R{\rm sgn}(\omega)-\!{V}_{2\sigma}^{\Lambda}&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!-t\\
-t &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!i\omega\!+\!i\Gamma_L{\rm sgn}(\omega)-\!{V}_{1\sigma}^{\Lambda}\end{array}\! \right)\;,\nonumber
\label{p}
\end{eqnarray}
with the determinant
\begin{eqnarray}
D_{\sigma}^{\Lambda}(i\omega)&=&\\&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\![i\omega\!+\!i\Gamma_R{\rm sgn}(\omega)-\!{V}_{2\sigma}^{\Lambda}][i\omega\!+\!i\Gamma_L{\rm sgn}(\omega)-\!{V}_{1\sigma}^{\Lambda}]-t^2\;.\nonumber\phantom{\frac{1}{1}}
\end{eqnarray}
The zeros of $D_{\sigma}^{\Lambda=0}(0)$ for $\Gamma_{L/R}=0$
determine the zero-energy levels. For degenerate levels these are responsible for the development of
the Kondo ridges in the conductance.
Inserting the above expression in Eq.~(\ref{DGLsigma}), the flow equations for the local potential are
\begin{widetext}
\begin{eqnarray}
\frac{d}{d\Lambda}{V}_{1\sigma}^{\Lambda}&=&\frac{U}{\pi}\frac{({V}^{\Lambda}_{1\bar{\sigma}}{V}^{\Lambda}_{2\bar{\sigma}}-t^2){V}^{\Lambda}_{2\bar{\sigma}}+(\Lambda+\Gamma_R)^2{V}^{\Lambda}_{1\bar{\sigma}}}{{[{V}^{\Lambda}_{1\bar{\sigma}}{V}^{\Lambda}_{2\bar{\sigma}}-(\Lambda+\Gamma_L)(\Lambda+\Gamma_R)-t^2]^2+[(\Lambda+\Gamma_R){V}^{\Lambda}_{1\bar{\sigma}}+(\Lambda+\Gamma_L){V}^{\Lambda}_{2\bar{\sigma}}]^2}}\label{locpot}
\end{eqnarray}
\end{widetext}
with an analogous equation for ${V}_{2\sigma}^{\Lambda}$ obtained by the replacements $(1\leftrightarrow2)$ and
$(L\leftrightarrow R)$.
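For completeness, this coupled system can be integrated numerically in the
same way as for the single dot; the following sketch (illustrative only, with
function names of our choosing) returns the renormalized level positions at
$\Lambda=0$, from which the degeneracy conditions analyzed below can be checked:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def flow_double_dot(Vg, B, t, U, GL, GR, delta=0.0, Lambda0=1e4):
    """Integrate Eq. (locpot) and its (1 <-> 2, L <-> R) partner for
    V_{1,s}, V_{2,s}, s = up/down, from Lambda0 down to Lambda = 0."""
    def rhs(lam, y):
        V1u, V1d, V2u, V2d = y
        def den(V1b, V2b):   # common denominator (symmetric under 1<->2, L<->R)
            return ((V1b*V2b - (lam+GL)*(lam+GR) - t**2)**2
                    + ((lam+GR)*V1b + (lam+GL)*V2b)**2)
        def dV1(V1b, V2b):   # flow of V_{1,s}, driven by the opposite spin
            return (U/np.pi) * ((V1b*V2b - t**2)*V2b
                                + (lam+GR)**2 * V1b) / den(V1b, V2b)
        def dV2(V1b, V2b):   # (1 <-> 2, L <-> R)
            return (U/np.pi) * ((V1b*V2b - t**2)*V1b
                                + (lam+GL)**2 * V2b) / den(V1b, V2b)
        return [dV1(V1d, V2d), dV1(V1u, V2u), dV2(V1d, V2d), dV2(V1u, V2u)]
    y0 = [Vg + delta + B, Vg + delta - B, Vg - delta + B, Vg - delta - B]
    sol = solve_ivp(rhs, (Lambda0, 0.0), y0, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]   # V_{1,up}, V_{1,down}, V_{2,up}, V_{2,down} at 0
\end{verbatim}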
We first analyze the symmetric case for $\delta=0$ ($\epsilon_1=\epsilon_2$) and $\Gamma_L=\Gamma_R=\Gamma$. The condition for zero-energy levels
$V_{\sigma}^2-t^2=0$ implies degenerate solutions for
either $B=0$ and $V=\pm t$ or $B_{\rm ren}=\pm t$ and $V=0$, where the renormalized level position or
Zeeman field replace the bare ones with respect to the noninteracting case.
The flow equation for the local potential (\ref{locpot}) reduces to
\begin{eqnarray}
\frac{d}{d\Lambda}{V}_{\sigma}^{\Lambda}&=&\frac{U{V}^{\Lambda}_{\bar{\sigma}}}{\pi}\frac{({V}^{\Lambda}_{\bar{\sigma}})^2-t^2+(\Lambda+\Gamma)^2}{[({V}^{\Lambda}_{\bar{\sigma}})^2-t^2-(\Lambda+\Gamma)^2]^2+4({V}^{\Lambda}_{\bar{\sigma}})^2(\Lambda+\Gamma)^2}\nonumber\\
\label{eq:flow}
&=& \frac{UV_{\bar\sigma}^\Lambda}{\pi}
{\rm Re}\,\frac{1}
{(\Lambda+\Gamma+it)^2+(V_{\bar\sigma}^\Lambda)^2}
\end{eqnarray}
resembling Eq.~(\ref{eq:diffkondo}) for the single-level quantum dot, except for the presence of
the finite inter-dot hopping $t$ and a factor $1/2$ in the definition of $\Gamma$.
At $B=0$ the flow equation for $\tilde{V}^{\Lambda}={V^{\Lambda}}- t\, {\rm sgn} (V^{\Lambda})$
is characterized by an exponential pinning to zero for
$|\tilde{V}|\lesssim\, 0.77 U$, inducing a splitting of the central plateau for $\tilde{V}$
into two plateaus of half the width for ${V}$, ranging from $V_g=\pm t$ to $\pm (t+0.77 \,U)$.
These are shifted to larger values in Fig.~\ref{fig:fig3} as a consequence of the renormalization of $t$.
The same substitution around the $B=0$ Kondo ridges $V_\sigma^{\Lambda}= t \, {\rm sgn} (V^{\Lambda})+\sigma B^{\Lambda}$ leads to an exponential suppression of the renormalized field as described by
Eq.~(\ref{bren}) for the single-level dot.
A similar behavior in the $V_g$-$B$ plane is found for the finite-$B$ Kondo ridges. Here the renormalization of $B$ leads to a shift of the position.
Due to the enhancement of the renormalized magnetic field, the finite-$B$ Kondo ridge
develops at a reduced field for which $B_{\rm ren}=\pm t$. This effect is superposed by the
renormalization of $t$ in Fig.~\ref{fig:fig3} and hardly visible.
We now consider the general asymmetric situation. With
${V}_{1/2\sigma}^{\Lambda}={\bar V}_{\sigma}^{\Lambda}\pm\delta$ and introducing
the asymmetry parameter $\beta$ for the coupling to the leads $\Gamma_{L/R}=\Gamma\pm\beta$,
the flow equation for $\bar{V}_{\sigma}^{\Lambda}$ is
\begin{widetext}
\begin{eqnarray}
\frac{d}{d\Lambda}\bar{V}_{\sigma}^{\Lambda}&=&\frac{U}{\pi}\frac{
\bar{V}^{\Lambda}_{\bar{\sigma}}[(\bar{V}^{\Lambda}_{\bar{\sigma}})^2-\delta^2-t^2+\beta^2]+(\Lambda+\Gamma)[(\Lambda+\Gamma)\bar{V}^{\Lambda}_{\bar{\sigma}}-2\beta\delta]}
{[(\bar{V}^{\Lambda}_{\bar{\sigma}})^2-\delta^2-t^2+\beta^2-(\Lambda+\Gamma)^2]^2+4[(\Lambda+\Gamma)\bar{V}^{\Lambda}_{\bar{\sigma}}-\beta\delta]^2}\label{b}\;.
\end{eqnarray}
\end{widetext}
Analogously, a flow equation for the renormalization of $\delta$ can be derived, which will not be considered
in the following as it turns out not to significantly affect the results.
For $\delta\neq 0$ and symmetric couplings to the reservoirs $\beta=0$
($\Gamma_L=\Gamma_R$), the above equation reduces to
(\ref{eq:flow}). Kondo plateaus of width $0.77\, U$ are hence obtained in correspondence of degenerate
energy levels at
$\bar{V}=\pm t_{\rm eff}$ for $B=0$ and at $V_g=0$ for $B_{\rm ren}=\pm t_{\rm eff}$.
For asymmetric couplings to the leads ($\beta\neq0$), Eq.~(\ref{b}) can, in the vicinity of the degenerate energy levels, be simplified to
\begin{eqnarray}
\frac{d}{d\Lambda}\bar{V}^\Lambda_{\sigma}&=&\frac{U}{\pi}\frac{\bar{V}^\Lambda_{\bar{\sigma}}-2\beta\delta/\Gamma}{(\Lambda+\Gamma)^2+4(\bar{V}^{\Lambda}_{\bar{\sigma}}-\beta\delta/\Gamma)^2}\;
\end{eqnarray}
in the limit of small asymmetric couplings $\beta\ll\Gamma$.
The terms proportional to $\beta\delta/\Gamma$ induce a \textit{shift} in the effective level position responsible for the $V_g\to -V_g$ symmetry breaking.
For $V_g\gtrsim 0$ an effective magnetic field of strength $B_c+2\beta\delta/\Gamma$ is required to compensate for the shift (corresponding to a larger effective field for $\beta\delta>0$ in Fig.~\ref{fig:fig4}).
This implies that the finite-$B$ Kondo plateaus in the $V_g$-$B$ plane are not parallel to the $B=0$ axis for
$\beta\delta\neq0$, see Fig.~\ref{fig:fig4}.
\section{Conclusion}\label{concl}
We studied the linear conductance of a serial double quantum dot as a minimal model to describe the influence of the SOI on the spin Kondo effect in the presence of a Zeeman field.\cite{galpin}
Without SOI, the linear conductance as a function of an applied gate voltage
exhibits characteristic Kondo plateaus at finite magnetic field $B$, in addition to the
ones at $B=0$. Interestingly the finite-$B$ Kondo ridges are bent with respect to the
$V_g$-axis if the left-right symmetry and the symmetry between the bonding and
anti-bonding states are broken simultaneously.
This finding is of importance for the understanding of measurements of
the linear conductance of multi-level quantum dots as a function of
$V_g$ and $B$.
In the presence of SOI the finite-$B$ Kondo ridges disappear; in contrast to the ridges
at $B=0$, they are not protected by time-reversal symmetry.
\section*{Acknowledgments}
We are grateful to K. Grove-Rasmussen for inspiration and numerous discussions, and to
C. Karrasch for a critical reading of the manuscript.
This work was supported by the Deutsche Forschungsgemeinschaft
(FOR 912).
|
2,877,628,091,157 | arxiv |
\section{Introduction}
\label{sec:intro}
This work studies to what extent voice can hint at face geometry, motivated by recent studies on voice-face matching and cross-modal learning \cite{nagrani2018seeing, Wen_2021_CVPR, zheng2021adversarial}.
Many physiological attributes are embedded in voices. For example, speech is produced by articulatory structures, such as vocal folds, facial muscles, and facial skeletons, which are all densely connected.
Such a fact intuitively indicates potential correlations between voices and face shapes \cite{harrington2010acoustic}.
Experiments in cognitive science point out that audio cues are associated with visual cues in human perception, especially in recognizing a person's identity \cite{belin2004thinking}.
Recent neuroscience research further shows that two parallel processing streams of low-level auditory and visual cues are integrated in the cortex, where voice processing affects facial structural analysis for perception \cite{young2020face}.
Traditional research in the voice domain focuses on utilizing voice inputs for predicting more conspicuous attributes which include speaker identity \cite{bull1983voice, maguinness2018understanding, ravanelli2018speaker}, age \cite{ptacek1966age, singh2016relationship, grzybowska2016speaker}, gender \cite{li2019improving}, and emotion \cite{wang2017learning, zhang2019attention}.
A novel direction in recent development goes beyond predicting these attributes and tries to reconstruct \textit{2D face images from voice} \cite{NEURIPS2019_eb9fc349, oh2019speech2face, choi2020inference}. Their research is built on an observation that one can approximately envision how an unknown speaker looks when listening to the speaker's voice. Attempts towards validating this assumptive observation include the work \cite{oh2019speech2face} for image reconstruction and works \cite{NEURIPS2019_eb9fc349, choi2020inference} using generative adversarial networks (GANs). They aim to output face images from only a speaker's voice.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/teaser.pdf}
\end{center}
\vspace{-12pt}
\caption{\textbf{Cross-Modal Perceptionist.} We study the correlations between voices and face geometry under both supervised and unsupervised learning settings. This work targets more explainable human-centric cross-modal learning for biometric applications.}
\label{purpose}
\vspace{-6pt}
\end{figure}
However, generating face images from voices is inherently ill-posed: the task involves predicting extraneous attributes that voices cannot hint at, including image backgrounds, hairstyles, headgear, or beards. These are attributes that one can apparently choose without changing one's voice.
Similar concerns arise regarding the correlations between voices and facial textures or ethnicity.
\cite{oh2019speech2face} demonstrates a t-SNE plot in which ethnicity is scattered across all samples, indicating its low correlation with voices.
As a result, quantifying the differences between an output face image and a reference is hard and less grounded.
Instead of producing face images, our analysis moves to the 3D domain with mesh representations and \textbf{predicts one's face geometry or skull structures from voices}, which is free from the above issues.
Working on 3D meshes is less ambiguous than working on images because the former involves fewer nuisance variations unrelated to a speaker's voice, such as stylistic variations, hairstyles, background, and facial textures.
Moreover, meshes enable more straightforward quantification of differences between prediction and ground truth in Euclidean space, unlike the case of face images, where sources of differences involve backgrounds and hairstyles.
From the perspective of 3D faces, much research attention has been paid to 3D reconstruction from monocular images \cite{shang2020self,guo2020towards,zhu2016face, wu2021synergy} or video sequences \cite{garrido2016reconstruction, kim2018deep} for 3D face animation or talking face synthesis. In contrast, we are the first to investigate the correlations between one's 3D face geometry and voices, and we focus on the analysis of the face geometry gleaned from one's voices.
Our goal is to validate the correlations between voices and face geometry towards more explainable human-centric cross-modal learning with neuroscience support.
The analysis inevitably involves acquiring large-scale 3D face scans with paired voices, which is expensive and subject to privacy concerns. To deal with this issue, we propose a novel \textit{Voxceleb-3D} dataset that includes paired voices and 3D face models.
Voxceleb-3D is inherited from two widely used datasets: Voxceleb \cite{nagrani2017voxceleb} and VGGFace \cite{BMVC2015_41}, which include voices and face images of celebrities, respectively.
The approach \cite{Zhu_2015_CVPR} we adopt to create Voxceleb-3D is inspired by 300W-LP-3D \cite{zhu2016face}, the most widely used 3D face dataset, and we describe the details in Sec.~\ref{sec:methods/supervised}.
Our analysis framework, \textbf{Cross-Modal Perceptionist} (CMP), investigates the feasibility of predicting face meshes from voices using 3D Morphable Models (3DMM, Sec.~\ref{sec:methods/3dmm}) in the following two scenarios (Fig.~\ref{purpose}).
We first train neural networks directly on Voxceleb-3D in a \textit{supervised learning} manner using the paired voices and 3DMM parameters (Sec.~\ref{sec:methods/supervised}).
We further investigate an \textit{unsupervised learning} setting to inspect whether face geometry can still be gleaned without paired voices and 3D faces, which is a more realistic scenario. In this case, we use \textit{knowledge distillation} (KD) \cite{hinton2015distilling} to transfer knowledge from the state-of-the-art method for 3D faces from images, SynergyNet \cite{wu2021synergy}, into our student network and jointly train speech-to-image and image-to-3D blocks (Sec.~\ref{sec:methods/unsupervised}).
We design a set of metrics to measure the geometric fitness based on points, lines, and regions for both the supervised and the unsupervised scenarios.
The evaluation attempts to show correlations between 3D faces and voices with straightforward neural network-based approaches. The analysis with CMP enables us to comprehend the correlations between face geometry and voices. Our research lays explainable foundations for human-centric cross-modal learning and biometric applications using voice-face correlations, such as security and surveillance when only voice is given.
Our goal is not to recover high-quality 3D face meshes from voices comparable to synthesis from visual modalities such as image or video inputs, but we try to answer the core question under our CMP framework: can face geometry be gleaned from voice? We break down the question into four parts and will answer them through experiments.
Q1. Is it feasible to predict visually reasonable face meshes from voice?
Q2. How stable is the mesh prediction from different utterances of the same person?
Q3. Can a joint training flow, in which mesh prediction is trained with voice information, outperform the face meshes produced by cascading separately trained speech-to-image and image-to-3D-face methods? By how much?
Q4. What is the major improvement that voice information can bring in the joint training flow?
Our contributions are summarized.
\begin{enumerate}
\item Towards explainable human-centric cross-modal learning, we are the first to study the correlations between face geometry and voices.
\item We devise an analysis framework, Cross-Modal Perceptionist, which studies both supervised and unsupervised approaches to learn face meshes from voices.
\item We present extensive analysis and discussion and answer four breakdown questions to validate the correlations between voices and face shapes.
\end{enumerate}
\section{Related Work}
\subsection{Audio: Learning Personal Traits from Voice}
The human voice is embedded with a wide range of personal information and has long been exploited for recognizing personal traits,
such as speaker identity \cite{bull1983voice, maguinness2018understanding, ravanelli2018speaker}, age \cite{ptacek1966age, singh2016relationship, grzybowska2016speaker}, gender \cite{li2019improving}, and emotion status \cite{wang2017learning, zhang2019attention}.
Voices can also be used to monitor health conditions \cite{ali2017automatic} or applied to other medical applications \cite{han2021exploring}.
Most existing works focus on predicting personal traits that are intuitively related to voice. Our work addresses a much more challenging task: learning implicit facial or skull structure from voices.
\subsection{Visual: 2D/3D Face Synthesis}
Face-related synthesis has been an active research area in recent years.
Generating 2D face images using GANs \cite{goodfellow2014generative, abdal2019image2stylegan, nie2020semi, Karras_2020_CVPR, richardson2021encoding, karras2019style} has been a prevalent task, and recent progress includes more realistic synthesis with diverse styles. The task of face reenactment \cite{garrido2014automatic, nirkin2019fsgan, thies2016face2face} focuses on transferring facial features from a source to a target.
Some works focus on the 3D domain: synthesizing 3D face models from monocular images \cite{zhu2016face, guo2020towards, tran2018nonlinear, wu2021synergy}, synthesizing 3D face motion from videos \cite{kim2018deep, garrido2016reconstruction} using 3DMM \cite{egger20203d, tewari2021learning}, or implicit fields \cite{Yenamandra_2021_CVPR}.
\subsection{Audio-Visual Learning}
\textbf{Cross-Modal Face Matching} \cite{nagrani2018seeing,kim2018learning, Wen_2021_CVPR,ning2021disentangled,zheng2021adversarial} covers tasks where voices are used as queries to retrieve faces or vice versa.
These tasks are inherently \textit{selection} problems in which the best fit of a voice-face pair from the dataset is desired.
Another similar task is cross-modal verification \cite{Nawaz_2021_CVPR, tao2020audio,sari2021multi}, which tells whether input faces and voices belong to the same person; this is simply a \textit{classification} problem for paired inputs. Our work addresses the root question and \textit{explains} the success of voice-face matching or verification by verifying the correlations between voices and face geometry.
\textbf{Talking face synthesis} aims at generating coherent and natural lip movements.
Some works drive template images \cite{jamaludin2019you, zhou2019talking, guo2021adnerf} or template face meshes \cite{cudeiro2019capture} to talk by speech inputs.
Some replace lip movements in a video with movements inferred from another video or speech \cite{chen2018lip, wiles2018x2face}. Their focus is coherent lip movement, which differs from our goal of studying holistic facial structure.
\textbf{Voice to Face} is the task closest to our work, recently introduced to synthesize face images from voice inputs alone. \cite{NEURIPS2019_eb9fc349} and \cite{choi2020inference} adopt GANs to generate face images from audio clips, and \cite{oh2019speech2face} uses an encoder-decoder structure to reconstruct face images. However, 2D representations contain many variations irrelevant to facial geometry, such as hairstyles, beards, backgrounds, and facial textures, and the learned correlations lack physiological support. Besides, face reconstruction errors can be ambiguous because two images of the same person can contain different hairstyles and backgrounds.
Our analysis framework circumvents the issues raised by 2D face representations. 3D face models do not contain hairstyles, backgrounds, or texture variations. Geometric representation of meshes enables us to analyze the correlations between voices and 3D shapes and further directly measure gains and errors in the Euclidean space. In this way, we can focus on face geometry gleaned from voices.
\section{Method}
\label{sec:methods}
Our goal is to analyze how a person's voice relates to one's face geometry in 3D space. Thus, we learn 3D face meshes using 3D Morphable Models (3DMM) from input speech and analyze the correlations under supervised and unsupervised learning settings. The supervised setting learns the correlation from a paired voice and 3D face dataset. The unsupervised setting studies a more realistic case: when such a paired dataset is unavailable, is it still possible to predict face geometry from voice?
\subsection{3D Morphable Models (3DMM)}
\label{sec:methods/3dmm}
3DMM \cite{egger20203d} is a popular method for 3D face modeling using principal component analysis (PCA). By estimating the weights of basis matrices, a 3D face can be constructed.
We can decompose a face into two components: the average face and face shape variation.
That is, for a face $A$,
\begin{equation}
\vspace{-4pt}
A = \bar{A} + V\alpha,
\label{3dmm_modeling}
\vspace{-4pt}
\end{equation}
where $\bar{A} \in \mathbb{R}^{3N}$ is the average face with $N$ three-dimensional vertices,
$V \in \mathbb{R}^{3N\times P}$ is a basis matrix for the face shape variation,
$\alpha \in \mathbb{R}^{P}$ is the coefficient vector.
Note that we can reshape $A$ into $A_r \in \mathbb{R}^{3\times N}$,
a matrix representation suitable for 3D rotation and translation.
We set $N=53490$ vertices following BFM \cite{paysan20093d}, a particular form of 3DMM.
For the dimensionality of the shape variation basis, we choose $P=50$ following SynergyNet \cite{wu2021synergy}, the state-of-the-art 3D face reconstruction method from \textit{images} using BFM.
There are 12 additional pose parameters in SynergyNet used to align reconstructed 3D faces to its 2D image inputs: a rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and a translation vector $t \in \mathbb{R}^{3}$, i.e., $A_p = RA_r+t$.
In our analysis, we only use these pose parameters for visualizing how well a predicted face mesh fits a 2D shape outline.
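To make the modeling concrete, the sketch below reconstructs a mesh from 3DMM parameters following Eq. \ref{3dmm_modeling}; the storage layout of $A$ (stacked $x$/$y$/$z$ rows) and the function name are illustrative assumptions rather than SynergyNet's actual API.
\begin{verbatim}
import numpy as np

def reconstruct_face(alpha, A_bar, V, R=None, t=None):
    """Reconstruct a 3D face from 3DMM coefficients.

    A_bar: (3N,) mean face; V: (3N, P) shape-variation basis;
    alpha: (P,) coefficients. The optional pose (R: 3x3, t: (3,))
    aligns the mesh to its 2D image as A_p = R A_r + t.
    """
    A = A_bar + V @ alpha            # average face + shape variation
    A_r = A.reshape(3, -1)           # (3, N); assumes stacked x/y/z rows
    if R is not None:
        A_r = R @ A_r + t.reshape(3, 1)
    return A_r
\end{verbatim}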
\subsection{Supervised Learning with Voice/Mesh Pairs}
\label{sec:methods/supervised}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/iccv21-supervised.pdf}
\end{center}
\vspace{-9pt}
\caption{\textbf{Supervised learning framework.} Given a speech input, voice embedding is extracted by $\phi_v$. $\phi_{dec}$ then estimates 3DMM parameters $\alpha$ for 3D face modeling. The supervision is computed with groundtruth $\alpha^*$.}
\label{supervised_fig}
\vspace{-7pt}
\end{figure}
We first describe the supervised learning setting, illustrated in Fig. \ref{supervised_fig}. Given a paired speech sequence and 3DMM parameters for an identity, we build an encoder-decoder structure that first extracts a voice embedding $v\in \mathbb{R}^{64}$ from a mel-spectrogram \cite{grochenig2001foundations} of the input speech, a commonly used time-frequency representation. Following \cite{NEURIPS2019_eb9fc349}, the voice encoder $\phi_v$ is pretrained on a large-scale speaker recognition task. Then, we train a decoder $\phi_{dec}$ to estimate 3DMM parameters $\alpha$. We use groundtruth 3DMM parameters to supervise the training with an $L_2$ loss:
\begin{equation}
\mathcal{L}_{reg} = \|\alpha-\alpha^*\|^2,
\label{supervised_loss}
\end{equation}
where $\alpha^*$ denotes the groundtruth 3DMM parameters.
In addition, we adopt a triplet loss on the estimated 3DMM parameters $\alpha$. The triplet loss encourages the (anchor, positive) distance to be smaller than the (anchor, negative) distance by a soft margin:
\begin{equation}
\vspace{-3pt}
\mathcal{L}_{tri} = \max\{\|\alpha- \alpha_p\|_2-\|\alpha- \alpha_n\|_2+1,0\},
\label{triplet_loss}
\vspace{-3pt}
\end{equation}
where $\alpha$ serves as the anchor, $\alpha_p$ is a positive sample for the anchor, representing the same identity but regressed from a different image, and $\alpha_n$, coming from a different identity, is a negative sample for the anchor. The triplet loss encourages coherent 3DMM parameters for the anchor and positive samples, which share an identity, while contrasting them with the negative sample. The overall loss function is $\mathcal{L}_{sup} = \mathcal{L}_{reg}+\mathcal{L}_{tri}$.
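A minimal PyTorch sketch of this combined objective follows; batching the parameters and reducing by the mean are our assumptions beyond what the equations specify.
\begin{verbatim}
import torch

def supervised_loss(alpha, alpha_star, alpha_p, alpha_n, margin=1.0):
    """L_sup = L_reg + L_tri over (B, P) batches of 3DMM parameters."""
    l_reg = ((alpha - alpha_star) ** 2).sum(dim=1).mean()  # squared L2
    d_pos = (alpha - alpha_p).norm(dim=1)  # anchor-positive distance
    d_neg = (alpha - alpha_n).norm(dim=1)  # anchor-negative distance
    l_tri = torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
    return l_reg + l_tri
\end{verbatim}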
The challenge of this supervised learning problem is how to obtain $\alpha^*$. Most large voice datasets, such as Voxceleb \cite{nagrani2017voxceleb}, only contain speech for celebrities, and most large face datasets, such as VGGFace \cite{BMVC2015_41}, only consist of publicly scraped face images. We first follow \cite{NEURIPS2019_eb9fc349} to fetch the intersection of voice and image data from Voxceleb and VGGFace. Then, we propose to fit 3D faces from 2D to create a novel dataset, \textbf{Voxceleb-3D}, using an optimization-based approach adopted by 300W-LP-3D \cite{zhu2016face}, the most-used 3D face dataset. In detail, we use an off-the-shelf 3D landmark detector \cite{bulat2017far} to extract facial landmarks from collected face images and then optimize 3DMM parameters to fit in the extracted landmarks. Our Voxceleb-3D contains paired voice and 3D face data to fulfill our supervised learning.
\subsection{Unsupervised Learning with KD}
\label{sec:methods/unsupervised}
Obtaining real 3D face scans is very expensive and limited by privacy, and the workaround of optimization-based 3DMM fitting with facial landmarks is time-consuming.
An unsupervised framework may serve real-world scenarios.
As a result, we propose an unsupervised framework with knowledge distillation. Leveraging a well-pretrained expert, it validates whether face geometry can still be gleaned with neither real 3D face scans nor optimized 3DMM parameters.
Our unsupervised framework, illustrated in Fig. \ref{kd_pipeline}, has two stages: (1) synthesizing 2D face images from voices with a GAN and (2) 3D face modeling from the synthesized face images. The motivation is that we first use the GAN to generate 2D faces from voices to obtain the speaker's appearance. However, 2D images contain variations of backgrounds, textures, and hairstyles that are irrelevant to voice. Thus, the second-stage image-to-3D-face module disentangles geometry from these other variations.
\textbf{Synthesizing face images from voices with GANs.}
Previous research develops a GAN-based speech-to-image framework \cite{NEURIPS2019_eb9fc349}. A voice encoder $\phi_v$ extracts voice embeddings from input speech. Then a generator $\phi_g$ synthesizes face images from the voice embeddings, and a discriminator $\phi_{dis}$ decides whether the synthesis is indistinguishable from a real face image. Last, a face classifier $\phi_c$ learns to predict the identity of an incoming face, ensuring that the generator produces face images truly close to the identity of interest. We reuse the notation $\phi_v$ (and that of components introduced later for 3D face modeling) in both Sec.\ref{sec:methods/supervised} and \ref{sec:methods/unsupervised} since they serve the same functionalities.
In detail, given a speech input $S$, its corresponding speaker ID $id$, and real face images $I_r$ for the speaker, the image synthesized from the generator is $I_f = \phi_g(\phi_v(S))$. The loss formulation is divided into two parts: real and fake images. For real images, the discriminator learns to assign them to "real" ($r$) and the classifier learns to assign them to $id$. The loss for real images is $\mathcal{L}_r = \mathcal{L}_d(\phi_{dis} (I_r), r)+\mathcal{L}_c(\phi_c (I_r), id)$ showing the discriminator and classifier losses respectively. For fake images, after producing $I_f$ from $\phi_g$, the discriminator learns to assign them to "fake" ($\bar{r}$) and the classifier also learns to assign them to $id$. The loss counterpart for fake images is $\mathcal{L}_f = \mathcal{L}_d(\phi_{dis} (I_f), \bar{r})+\mathcal{L}_c(\phi_c (I_f), id)$.
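The two branches can be sketched in PyTorch as below, assuming $\phi_{dis}$ outputs a real/fake logit and $\phi_c$ outputs identity logits; the concrete choices of binary cross-entropy for $\mathcal{L}_d$ and cross-entropy for $\mathcal{L}_c$ are our assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_losses(phi_v, phi_g, phi_dis, phi_c, S, I_r, ids):
    """Return L_r (real branch) and L_f (fake branch) for one batch."""
    I_f = phi_g(phi_v(S))                       # fake faces from voice
    real = torch.ones(I_r.size(0), 1, device=I_r.device)
    fake = torch.zeros(I_f.size(0), 1, device=I_f.device)
    # real images: discriminator -> "real", classifier -> speaker id
    L_r = F.binary_cross_entropy_with_logits(phi_dis(I_r), real) \
        + F.cross_entropy(phi_c(I_r), ids)
    # fake images: discriminator -> "fake", classifier -> speaker id
    L_f = F.binary_cross_entropy_with_logits(phi_dis(I_f), fake) \
        + F.cross_entropy(phi_c(I_f), ids)
    return L_r, L_f
\end{verbatim}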
\textbf{3D face modeling from synthesized images.} After image synthesis by the GAN, we build a network to estimate 3DMM parameters from the fake images. The parameter estimation consists of an encoder $\phi_{I}$ and a decoder $\phi_{dec}$ that obtain 3DMM parameters $\alpha = \phi_{dec}(\phi_{I}(I_f))$. 3D face meshes are then reconstructed by Eq. \ref{3dmm_modeling}.
\textbf{Knowledge distillation for unsupervised learning.}
\label{sec:unsuper}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/iccv21-unsupervised.pdf}
\end{center}
\vspace{-11pt}
\caption{\textbf{Unsupervised learning with KD.} The unsupervised framework contains a GAN for face image synthesis with voice encoder $\phi_v$, generator $\phi_g$, discriminator $\phi_{dis}$, and classifier $\phi_c$. Then, knowledge distillation is used to achieve unsupervised learning, where information on the image-to-3D-face mapping distilled from the expert network (yellow block) is exploited to train the student network (blue block). The 2D face serves as a latent representation in this fashion. Besides using pseudo-groundtruth $\alpha^E$ to train the student, we also distill knowledge at intermediate layers using conditional probability distributions.}
\vspace{-7pt}
\label{kd_pipeline}
\end{figure}
To fulfill the unsupervised training, we distill the knowledge of image-to-3D-face reconstruction from a pretrained expert network. The expert, consisting of encoder $\phi_{I}^E$ and decoder $\phi_{dec}^E$, reconstructs 3D face models from synthesized face images and produces pseudo-groundtruth of 3DMM parameters $\alpha^E$. $\alpha^E$ is used to train the student network by $L_2$ loss:
\begin{equation}
\mathcal{L}_{p-gt}=\|\alpha^E-\alpha\|^2.
\label{loss_pgt}
\end{equation}
This KD strategy \textit{circumvents the need for paired voice and 3D face data} and helps us achieve unsupervised learning.
In addition to pseudo-groundtruth, we also distill knowledge at intermediate layers and minimize the distribution divergence between the expert and the student. We measure the distributions in feature space via the extracted image embeddings $z^E \in \mathbb{R}^{B \times \nu}$ and $z \in \mathbb{R}^{B \times \nu}$ of the expert and the student network, where we maintain the batch dimension $B$ and collapse the rest to $\nu$. Then, as in \cite{passalis2020probabilistic}, we calculate conditional probabilities between feature points as follows.
\begin{equation}
\footnotesize
z_{i|j}=\frac{K(z_i,z_j)}{\sum_{k,k\ne j}K(z_k,z_j)},
z^E_{i|j}=\frac{K(z_i^E,z_j^E)}{\sum_{k,k\ne j}K(z^E_k,z^E_j)},
\label{eq:cond_prob}
\end{equation}
where $K(\cdot, \cdot)$ is a scaled and shifted cosine similarity whose outputs lie in $[0,1]$. The Kullback-Leibler (KL) divergence between the two conditional distributions is then minimized.
\begin{equation}
\footnotesize
\mathcal{L}_{div}=\sum_i \sum_{j\ne i} z^E_{j|i} \text{log}\left( \frac{z^E_{j|i}}{z_{j|i}}\right).
\label{eq:kl_div}
\end{equation}
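For concreteness, a minimal PyTorch sketch of the conditional probabilities and the divergence loss follows; the epsilon terms for numerical stability are our additions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cond_prob(z, eps=1e-8):
    """Conditional probabilities z_{i|j} from scaled cosine similarity."""
    zn = F.normalize(z, dim=1)
    k = (zn @ zn.t() + 1.0) / 2.0                # K in [0, 1], (B, B)
    k = k - torch.diag_embed(torch.diagonal(k))  # exclude k == j terms
    return k / (k.sum(dim=0, keepdim=True) + eps)

def pkt_loss(z_student, z_expert, eps=1e-8):
    """L_div: KL divergence between expert and student conditionals."""
    p, q = cond_prob(z_student), cond_prob(z_expert)
    return (q * torch.log((q + eps) / (p + eps))).sum()
\end{verbatim}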
The KD loss is $\mathcal{L}_{\textit{KD}}=\mathcal{L}_{p-gt}+\mathcal{L}_{div}$. The overall unsupervised learning loss combines it with the GAN losses and the triplet loss in Eq.\ref{triplet_loss}:
\begin{equation}
\mathcal{L}_{unsuper}=\mathcal{L}_{f}+\mathcal{L}_{r}+\mathcal{L}_{\textit{KD}}+ \mathcal{L}_{tri}.
\label{eq:unsuper}
\vspace{-6pt}
\end{equation}
\section{Experiments and Results}
\label{sec:exper}
\textbf{Datasets.}
We use our created Voxceleb-3D dataset described in Sec. \ref{sec:methods/supervised}.
There are about 150K utterances and 140K frontal face images from 1225 subjects.
The train/test split for Voxceleb-3D is the same as \cite{NEURIPS2019_eb9fc349}:
Names starting with A-E are used for testing, and the others are for training.
We manually pick the best-fit 3D face models for each identity as reference models for evaluations. We display samples of face meshes in Fig. \ref{voxceleb-3d}.
\textbf{Data Processing and Training.} We follow \cite{NEURIPS2019_eb9fc349} and extract 64-dimensional log mel-spectrograms with a window size of 25 ms, and perform normalization by mean and variance of each frequency bin for each utterance. In the unsupervised setting, we adopt SynergyNet \cite{wu2021synergy} as the expert. Face images from the generator are 64$\times$64, and we bilinearly upsample them to 120$\times$120 to fit the input size of the expert for 3D face reconstruction from images.
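As a reference for this preprocessing, a log mel-spectrogram with per-bin normalization can be computed as sketched below; the sample rate and hop length are assumptions, since only the 25 ms window and 64 mel bins are specified above.
\begin{verbatim}
import librosa
import numpy as np

def log_mel(path, sr=16000, n_mels=64, win_ms=25, hop_ms=10):
    """64-dim log mel-spectrogram, normalized per frequency bin."""
    y, sr = librosa.load(path, sr=sr)
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=win,
                                       win_length=win, hop_length=hop,
                                       n_mels=n_mels)
    logm = np.log(m + 1e-6)
    mu = logm.mean(axis=1, keepdims=True)    # per-bin mean
    std = logm.std(axis=1, keepdims=True)    # per-bin std
    return (logm - mu) / (std + 1e-6)
\end{verbatim}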
Our framework is implemented in PyTorch \cite{paszke2019pytorch}. We use Adam optimizer \cite{kingma2014adam} and set the learning rate to 2$\times$10$^{-4}$, batch size to 64, and a total number of training steps to 50,000, which consumes about 16 hours to train on a machine with a GeForce RTX 2080 GPU.
To train with the triplet loss, for each sample in a batch, we uniformly sample one utterance of the same person as the positive sample and one utterance of a different person as the negative sample. We illustrate the network architectures in the supplementary.
\textbf{Metrics.}
We design several metrics to evaluate 3D face deformation based on $\alpha$. Here we introduce a line-based metric, ARE, and present point-based and region-based metrics using iterative closest point registration and facial landmarks in the supplementary.
Absolute Ratio Error (\textbf{ARE}, line-based): Distances between facial points are commonly used as measures for aesthetic or surgical purposes \cite{sarver2007aesthetic, pallett2010new, abdullah2002inner}.
We pick point pairs (shown in Fig. \ref{p2pDistance}) that are most representative for evaluation and calculate the distance ratios to outer-interocular distance (OICD). For example, ear ratio (ER) is $\overline{AB}/\overline{EF}$, and the same for forehead ratio (FR), midline ratio (MR), and cheek ratio (CR).
We evaluate our models by the absolute ratio error (ARE) between the predicted and the reference face meshes
because these ratios can capture face deformation.
As an example, ARE of ER is $|\text{ER} - \text{ER}^*|$, where $^*$ denotes the ratios of reference models.
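The computation can be sketched as below; the vertex indices of the point pairs (A-J in Fig. \ref{p2pDistance}) are hypothetical placeholders, as the actual BFM vertex indices are not listed here.
\begin{verbatim}
import numpy as np

def pair_ratio(verts, i, j, e, f):
    """Ratio of the (i, j) distance to the OICD pair (e, f);
    verts is a (3, N) vertex array."""
    d = np.linalg.norm(verts[:, i] - verts[:, j])
    oicd = np.linalg.norm(verts[:, e] - verts[:, f])
    return d / oicd

def are(pred, ref, i, j, e, f):
    """Absolute Ratio Error, e.g., |ER - ER*| for the ear pair."""
    return abs(pair_ratio(pred, i, j, e, f)
               - pair_ratio(ref, i, j, e, f))
\end{verbatim}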
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=0.35\linewidth]{Figures/face_template.png}
\end{center}
\vspace{-12pt}
\caption{\textbf{Distance illustration for our ARE metric.} $\overline{AB}$: ear-to-ear distance. $\overline{CD}$: forehead width. $\overline{EF}$: outer-interocular distance. $\overline{GH}$: midline distance. $\overline{IJ}$: cheek-to-cheek distance.}
\vspace{-10pt}
\label{p2pDistance}
\end{figure}
\textbf{Baseline}.
We build a straightforward baseline by directly cascading two separately pretrained methods without joint training: the GAN-based speech-to-image block \cite{NEURIPS2019_eb9fc349} and SynergyNet \cite{wu2021synergy} as the image-to-3D-face block (illustrated in Fig. \ref{baseline}), producing 3D meshes from voices.
In addition, 3DDFA-V2 \cite{guo2020towards} is another method for 3D face modeling from monocular images using BFM and performs comparably to SynergyNet. Thus, we experiment with the combinations speech-to-image block + 3DDFA-V2 (Base-1) and speech-to-image block + SynergyNet (Base-2).
Aside from network-based approaches, we also devise simple oracles that use mean shapes of labels, such as male/female, as predictions, and provide the results as references in the supplementary.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/iccv21-baseline.pdf}
\end{center}
\vspace{-15pt}
\caption{\textbf{Baseline framework.} The baseline is a direct cascade of two pre-trained state-of-the-art modules: speech-to-image \cite{NEURIPS2019_eb9fc349} and image-to-3D-face modeling \cite{guo2020towards,wu2021synergy}.}
\setlength{\belowcaptionskip}{0pt}
\vspace{-5pt}
\label{baseline}
\end{figure}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/VGGFace_compare.pdf}
\end{center}
\vspace{-17pt}
\caption{\textbf{Evidence for positive response to Q1}. Our unsupervised framework predicts intermediate 2D images and 3D meshes. This answers Q1: 3D face models exhibiting similar \textit{face shapes} to the references can be predicted from voice inputs alone.}
\vspace{-12pt}
\label{VGGFace_compare}
\end{figure}
\begin{figure*}[hbt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/additional_results.png}
\end{center}
\vspace{-15pt}
\caption{\textbf{A collection of results supports our positive response to Q1.} This figure extends Fig.\ref{VGGFace_compare}. From top to bottom in each two-row chunk: predicted intermediate face images, predicted 3D models, and real faces for reference.}
\vspace{-15pt}
\label{additional}
\end{figure*}
\subsection{Analysis}
\label{sec:vis}
We attempt to answer Q1-Q4 raised in Sec.\ref{sec:intro} in this section and respond to each respective question in A1-A4. In A1-A3, we show predictions using our unsupervised learning setting since the by-product intermediate images help explain 3D mesh prediction for better comprehension of the mechanism. We show visuals from the supervised version in the supplementary.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/coherencev2.pdf}
\end{center}
\vspace{-18pt}
\caption{\textbf{Illustration for our positive response to Q2.} Consistent intermediate images and 3D faces can be predicted from different utterances of the same speaker.}
\vspace{-10pt}
\label{coherence}
\end{figure}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/consistency.pdf}
\end{center}
\vspace{-15pt}
\caption{\textbf{Shape variation statistics in response to Q2.} Mean and std of per-vertex variation w.r.t. the center frame are shown, calculated in frontal pose. 3D shapes recovered from different utterances are consistent with only sub-pixel differences.}
\vspace{-9pt}
\label{statistics}
\end{figure}
\textbf{A1: Meshes and intermediate images.}
In Fig. \ref{VGGFace_compare}, we display intermediate 2D images, 3D meshes, and real faces. Note that the real faces should be used only to identify \textbf{face shapes}, because those images include backgrounds and hairstyle variations that may differ from the references. Our end targets are the 3D face meshes, which are free from these factors.
Our framework generates wider meshes in Column 2 and thinner meshes in Columns 3 and 4, reflecting the width of the real faces.
All the generated 3D meshes fit the 2D facial outlines well.
These results exemplify the ability to convert voices into plausible 3D face meshes. Although the meshes are rough compared with 3D synthesis from image or video modalities, the results conform to the intuition that, upon hearing unfamiliar speech, one can roughly envision whether the speaker's face is overall wider or thinner; we cannot, however, picture subtle details such as bumps or wrinkles on faces. The same trends can be observed in a vast collection of results in Fig. \ref{additional}. The results are not cherry-picked.
\textbf{A2: Prediction coherence of the same speaker.}
To address Q2, we showcase in Fig. \ref{coherence} and \ref{statistics} the coherence of the predicted face shapes across different utterances of the same speakers.
The 2D predictions exhibit \textit{face shape and outline consistency},
though they are still plagued by stylistic variations that are geometrically unrelated to our task.
This not only confirms the ability to produce coherent face meshes but also underlines why predicting face meshes from voices is regarded as less noisy than face image synthesis.
\textbf{A3: Gain from cross-modal joint training.}
For Q3, we compare results from our unsupervised framework against those from the baseline in Fig. \ref{baseline_comp}.
Joint training of the speech-to-image and image-to-3D sub-networks attains higher and more stable image synthesis quality, which benefits 3D mesh prediction. In contrast, results from the baseline (Base-2) include more artifacts. This justifies CMP's cross-modal joint training strategy: letting the network learn to predict 3D faces with voice information during training improves over the separately trained baseline.
To this end, we understand that voices can help 3D face prediction and produce visually reasonable meshes that are close to real face shapes.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/baseline_comp.pdf}
\end{center}
\vspace{-15pt}
\caption{\textbf{Comparison of intermediate images and meshes in response to Q3.} The cross-modal joint training strategy in our unsupervised CMP produces better-quality images than the baseline. More reliable images as latent representations from our CMP can facilitate the mesh prediction. We include real faces for \textit{face shape} references.}
\vspace{-10pt}
\label{baseline_comp}
\end{figure}
\textbf{A3-Quantification+A4}.
We numerically compare supervised and unsupervised settings of our analysis framework, CMP, against the baseline (Fig. \ref{baseline}) using the ARE proposed in Sec.\ref{sec:exper}.
Both supervised and unsupervised settings improve the line-based ARE over the baseline around \textit{20\%}, as exhibited in Table \ref{ARE_metric}.
The results show that cross-modal joint training achieves better results than the direct cascade of pretrained blocks.
These improvements reveal underlying correlations between voices and face shapes: training face mesh prediction jointly with voice information is helpful.
Among all metrics, ear ratio (ER) shows the most prominent improvement,
indicating that the attribute voice hints at most strongly is head width, which answers Q4.
This analysis aligns with the findings in Sec.\ref{sec:vis} that voice can indicate wider/thinner faces, which corresponds to our intuition that we can roughly envision a speaker's face width from voices.
Through this study, we quantify the improvements of cross-modal learning from voice inputs, and the findings echo human perception intuitively.
\begin{table}[tb!]
\begin{center}
\setlength{\abovecaptionskip}{3pt}
\caption{\textbf{ARE metric study.} Compared with the baseline in Fig. \ref{baseline}, results from CMP show that cross-modal joint training with voice input obtains around 20\% improvement. We also highlight the largest improvement, ER, which answers Q4.}
\small
\label{ARE_metric}
\begin{tabular}[c]
{|
p{1.10cm}<{\centering\arraybackslash}|
p{1.0cm}<{\centering\arraybackslash}|
p{1.0cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|}
\hline
ARE & Base-1 & Base-2 & CMP-supervised & CMP-unsupervised \\
\hline
ER & 0.0319 & 0.0311 & \cellcolor{yellow!30}\textbf{0.0152} & \cellcolor{yellow!30}\textbf{0.0181} \\
FR & 0.0184 & 0.0173 & 0.0186 & 0.0169\\
MR & 0.0177 & 0.0173 & 0.0169 & 0.0174 \\
CR & 0.0562 & 0.0551 & 0.0457 & 0.0480 \\
\hline
Mean & 0.0311 & 0.0302 & \textbf{0.0241} & \textbf{0.0251} \\
Gain & - & 0\% & \cellcolor{blue!25}\textbf{-20.2}\% & \cellcolor{blue!25}\textbf{-16.9}\% \\
\hline
\end{tabular}
\vspace{-11pt}
\end{center}
\end{table}
\newcommand {\Hnull} {\mathcal{H}_0}
\newcommand {\Halt} {\mathcal{H}_1}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.92\linewidth]{Figures/preference-test.pdf}
\small
\begin{tabular}[c]
{|
p{1.0cm}<{\centering\arraybackslash}|
p{0.95cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|
p{1.1cm}<{\centering\arraybackslash}|}
\hline
& Image & 3D Model & Image + 3D Model & Overall \\
\hline
$p$-value& \begin{small}$\sim$10$^{-16}$ \end{small} & \begin{small}$\sim$10$^{-14}$ \end{small} & \begin{small}$\sim$10$^{-10}$ \end{small} & \begin{small}$\sim$10$^{-16}$ \end{small} \\
\hline
\end{tabular}
\end{center}
\vspace{-12pt}
\caption{\textbf{Results of subjective preference tests.}
The blue bars are the preference for our method, while the red bars are the preference for the baseline method.
The percentages are labeled on the bar, and the total number of votes is enclosed in the parentheses.
The $x$-axis on the bottom labels the total number of responses, and that on the top denotes the percentage. The $p$-values of the statistical significance tests are provided under the bar. $\sim$ shows the value's order of magnitude.}
\label{fig:pref-test}
\vspace{-10pt}
\end{figure}
\subsection{Subjective Evaluations}
\label{sec:human_judgement}
We further conduct subjective preference tests over the outputs to quantify the difference of preference.
The test was divided into three sections, considering \textit{images}, \textit{3D models}, and \textit{joint materials}. Though we favor face meshes over images because the former are free from irrelevant textures or backgrounds, we included intermediate images from our unsupervised setting in the test and asked subjects to focus on face shapes since better-outlined shapes on images lead to better-shaped meshes, as indicated in Fig. \ref{baseline_comp}.
\textbf{Evaluation design.} Thirty questions were included in the test, and 154 subjects with no prior knowledge of our work were invited.
In the first section, each of the ten questions consisted of three images: a reference face image, a face image from our unsupervised CMP, and a face image generated by Base-2 (\cite{NEURIPS2019_eb9fc349}+\cite{wu2021synergy}).
The order of the generated images was randomized.
The subjects were asked to select the face image "whose shape is geometrically more similar to the reference face?".
In the second section (10 questions), a similar design was laid out, but 3D face models from Base-2 and our CMP were used instead of images.
Finally, in the third section (10 questions), each of the two options comprised a face image and a 3D face model; the subject was asked to consider the two materials jointly: "overall, whose shape geometrically fits the given reference image better?"
\textbf{Statistical significance test.} Fig. \ref{fig:pref-test} summarizes our subjective evaluation.
We conduct a statistical significance test with the following formulation.
A subject's response to a question is considered as a Bernoulli random variable with a parameter $p$.
The null hypothesis ($\Hnull$) assumes $p \le 0.5$, meaning that the subjects do not prefer our model.
The alternative hypothesis $\Halt$ assumes $p > 0.5$, meaning that the subjects prefer our model.
For each section, there are 154 subjects and ten responses per subject.
For a significance level $\gamma=0.001$, let $b_{n,p}(\gamma)$ denote the quantile of order $\gamma$ for the binomial distribution with parameters $p$ and $n$.
We can decide whether the subjects prefer our model by
\begin{equation}
\begin{split}
&\mathrm{Reject ~\Hnull ~versus~ \Halt \Leftrightarrow} ~np \ge b_{n,p}(1 - \gamma) \\
& \Hnull: p \le 0.5, \Halt: p > 0.5.
\end{split}
\end{equation}
As shown in Fig. \ref{fig:pref-test}, $np$ is well above the threshold $b_{n=1540,p=0.5}(1 - \gamma) = 831$, rejecting $\Hnull$ and suggesting that the subjects significantly prefer our model over the baseline.
The single-sided $p$-values are displayed under the bar chart. A lower $p$-value means stronger rejection of $\Hnull$. The $p$-values from our tests are much lower than the level 0.001, showing high statistical significance.
In conclusion, the hypothesis test verifies that the subjects indeed favor the predictions from our method.
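The threshold and $p$-values above can be reproduced with scipy as sketched below; the vote count \texttt{k} is a hypothetical example, not one of our measured tallies.
\begin{verbatim}
from scipy.stats import binom

n, gamma = 1540, 0.001                    # 154 subjects x 10 responses
threshold = binom.ppf(1 - gamma, n, 0.5)  # b_{n,p=0.5}(1 - gamma) = 831
k = 1100                                  # hypothetical votes for ours
p_value = binom.sf(k - 1, n, 0.5)         # one-sided P(X >= k) under H0
reject_H0 = k >= threshold
\end{verbatim}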
\section{Conclusion and Discussion}
\label{sec:conclusion}
In this work, we investigate a root question in human perception: can face geometry be gleaned from voices? We first point out shortcomings of previous studies that predict 2D faces: such synthesis contains variations in hairstyles, backgrounds, and facial textures whose correlations to voices are controversial. We instead focus on 3D faces, whose correlations to voices have been supported by neuroscience and cognitive science studies. As a pioneering work in this direction, we introduce a way to construct Voxceleb-3D, which includes paired voices and 3D face models, devise and test baseline methods and oracles, and propose a set of evaluation metrics. Our main framework, CMP, learns 3DMM parameters from voices under both supervised and unsupervised settings. Based on CMP, we answer the core question through a four-part breakdown with detailed analyses and subjective evaluations. We conclude that 3D faces can be roughly reconstructed from voices.
Our study is far from complete, but hopefully, it lays a foundation for speech and 3D cross-modal studies in the future.
\textbf{Ethical statement.}
There are arguably implicit factors: for example, voices after smoking or drinking might differ. The data of Voxceleb contains speech from interviews, where interviewees usually speak in normal voices. More implicit and subtle factors such as drug use or health conditions might affect voices, but validating them requires clinical studies from a physiological perspective.
The results shown in this work only aim to point out that the correlation between voice and face (skull) structure exists; they make no assumptions on race/ethnic origin, and this work does not address relations between race and voice or race and face structure. As mentioned in the Introduction, correlations involving race/ethnicity cannot be easily resolved. Besides, the reconstructed meshes do not contain skin color, facial textures, or hairstyles that could explicitly correspond to one's true identity, and thus anonymity can be preserved.
\section{Overview}
This supplementary document is organized as follows.
In Sec. \ref{sec:net_arch}, we show detailed network architectures for both supervised and unsupervised learning scenarios.
In Sec. \ref{sec:dataset}, we describe details about our Voxceleb-3D training and evaluation split.
In Sec. \ref{sec:vis_supervised}, we respond to Q1-Q4 using illustrations from our CMP-supervised learning setting.
In Sec. \ref{sec:point-based}, we introduce both point-based and region-based metrics and compare results produced from our CMP with those from baselines.
In Sec. \ref{sec:oracles}, we provide two simple oracles as non-network-based solutions for references using averaged face meshes in the training data as the predictions.
In Sec. \ref{sec:pose}, we show the robustness of head pose estimation of the expert network.
In Sec. \ref{sec:study_unsupervised}, we present more results from our unsupervised setting.
In Sec. \ref{sec:applications}, we describe more on the applications of the cross-modal learning from voices to 3D faces.
In Sec. \ref{sec:limitation}, we describe limitations of this work.
The numbering of figures, tables, and equations in this supplementary document goes with the prefix 'S-'.
\vspace{-3pt}
\section{Network Architecture}
\label{sec:net_arch}
Here we exhibit detailed network architectures for both supervised and unsupervised settings in our CMP in Fig. \ref{network_arch}.
\section{Voxceleb-3D Training/Evaluation Split}
\label{sec:dataset}
We display the details of the training/evaluation split in Table \ref{tab:split}. As described in our paper Sec. 3.2 and Sec. 4-Datasets, Voxceleb-3D, inherited from Voxceleb and VGGFace, contains 1225 people. Names starting with 'A' - 'E' are included in the evaluation set, and the others are in the training set.
The training set contains as many utterances as we can fetch from Voxceleb, and the evaluation set contains three utterances per person, amounting to 0.9K utterances in total.
Face images are not included in the evaluation set because they cannot be used to calculate 3D face modeling errors, and thus we put a '-' mark in the table.
For 3D faces, we fit landmarks from images and obtain the optimized 3DMM parameters and reconstructed 3D faces, as described in our paper Sec. 3.2 and Sec. 4-Datasets. There are several images associated with each person in VGGFace. We fit 3DMM parameters and reconstruct 3D meshes from these images. To create \textit{reference face meshes} for quantitative evaluation, we manually select for each person one neutral 3D face from the pool that best fits the 2D facial outlines on images. Therefore, there are 301 reference 3D face meshes, one per person, represented in 3DMM parameters.
At test time, three utterances per identity are used as inputs to reconstruct 3D faces. The three predicted models are then compared against the selected reference model for each identity to compute quantitative results.
Note that Voxceleb collects speech clips of interviews or talks for celebrities scraped from the web, and only gender labels are available in Voxceleb. Other features may require self-disclosure or are hard to trace, such as ages at the time of speaking, and thus are unavailable.
\begin{table}[htb]
\begin{center}
\caption{\textbf{Voxceleb-3D training/evaluation split.} We provide data split details including number of utterances, number of face images, number of 3DMM parameters (equivalent to the number of 3D meshes), number of male and female, and number of people. Images for the evaluation set are not used for quantitative evaluation, and thus we mark the number '-'. We also display gender pie charts below the table.}
\label{tab:split}
\vspace{-10pt}
\begin{tabular}[c]
{|
p{4.5cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|}
\hline
Dataset & Training & Evaluation \\
\hline
\# of utterances & 113K & 0.9K \\
\# of face images & 107K & - \\
\# of 3DMM param (face mesh) & 107K & 301 \\
\# of male/female & 485/439 & 182/119 \\
\# of people & 924 & 301 \\
\hline
\end{tabular}
\vspace{-20pt}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/train-gender.pdf}
\vspace{-20pt}
\caption{Training split}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/test-gender.pdf}
\vspace{-20pt}
\caption{Evaluation split}
\label{fig:sub2}
\end{subfigure}
\end{center}
\end{table}
\section{Response to Q1-Q4 using Supervised CMP}
\label{sec:vis_supervised}
Following main paper Sec.4.1-Analysis, we present the counterpart of A1-A4 using our supervised framework.
\textbf{A1-Face meshes from our supervised learning.} In Fig. \ref{supervised_gt}, we present four types of face shapes -- \textit{skinny}, \textit{wide}, \textit{regular}, and \textit{slim} -- and show the reference images.
The produced face meshes from our supervised learning setting exhibit the model's ability to produce various types of face shapes and are also consistent with the reference images, which are provided for shape identification purposes.
This illustration also validates our findings in Table 1 of the paper: the lowest absolute ratio error is ear-to-ear ratio (ER) distance, which is associated with overall face shapes, indicating wider or thinner faces.
To further investigate the proximity of the four illustrated face types, we calculate Euclidean distances in the parameter space and show the confusion matrix in Fig. \ref{confusion}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/confusion.png}
\end{center}
\vspace{-8pt}
\caption{\textbf{Proximity for four face types.} We show (A) a confusion matrix and (B) distances with mean face shape in 3DMM parameter space to help comprehension for the face type variation in Fig. \ref{supervised_gt}.}
\label{confusion}
\vspace{-6pt}
\end{figure}
\textbf{A2-Mesh prediction coherence from our supervised learning.} In Fig. \ref{coherence_supp}, we display the coherence of test-time face mesh predictions from our supervised learning setting. We use utterances from the same speaker at different time steps as inputs to produce the meshes.
In Fig. \ref{coherence_supp}, one can observe from, for example, jaw widths that the output meshes differ between the two speakers;
by contrast, meshes for the same speaker are highly coherent.
These results demonstrate that our training strategy successfully predicts coherent geometry for the same speaker and can predict different topologies for different identities.
Finally, this coherence illustration also implies advantages over previous voice-to-face methods that work in the image domain \cite{NEURIPS2019_eb9fc349, oh2019speech2face}: their generation includes variations of background, hairstyles, and so on. In contrast, our results exclude these variations and focus on facial geometry to validate the correlation between face shapes and voices.
\textbf{A3+A4-Comparison against the baseline and the major improvement from our supervised learning.} We further compare against Base-2 (See Sec.4-Baseline in the paper: the cascaded pretrained blocks). One reference face, one 3D face mesh produced by our method, and one by Base-2 are presented in Fig. \ref{supervised_comp}.
For the example on the left, the person of interest has a wider jawbone, and the mesh produced by our method shows a similar trait. On the right, the image shows a wider face shape and apparent cheeks. Our 3D model displays a similar shape, but Base-2 shows a much thinner face. Our meshes reflect face wideness, which corresponds to our finding in paper Table 1 that the strongest hint voice provides is ER (ear-to-ear ratio).
In summary, we use the above visual results to show that the supervised learning of the analysis framework is effective.
Fig. \ref{supervised_gt} shows the output face meshes have similar overall face shapes to the reference images, which shows the model's ability for various types of faces and is validated in Fig. \ref{confusion}.
Fig. \ref{coherence_supp} shows our supervised method can predict coherent face shapes.
Fig. \ref{supervised_comp} shows the output face models are more similar to the reference than Base-2 in terms of overall face shapes, which again validates the ER improvements shown in the paper Table 1.
\clearpage
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.75\linewidth]{Figures/supervised_gt.png}
\end{center}
\vspace{-12pt}
\caption{\textbf{Visualization of predicted 3D face meshes from our supervised learning}. We display four face shapes, skinny, wide, regular, and slim, and their reference images to show the shape correspondence. References are provided to identify face shapes of the person of interest.}
\label{supervised_gt}
\vspace{-15pt}
\end{figure*}
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.75\linewidth]{Figures/coherence_supp.png}
\end{center}
\vspace{-8pt}
\caption{\textbf{Inference coherence of meshes produced from our CMP-supervised learning.} }
\label{coherence_supp}
\vspace{-6pt}
\end{figure*}
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.70\linewidth]{Figures/supervised_comp.png}
\end{center}
\vspace{-16pt}
\caption{\textbf{Comparison of output face meshes from our CMP of the supervised learning and Base-2}. In case (a), our mesh shows a more squared face with a wider jawbone, but Base-2 only shows a slim face. Reference face in (b) is wider and bears apparent cheeks, and our result is much more similar to the reference.}
\label{supervised_comp}
\vspace{-7pt}
\end{figure*}
\clearpage
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.8\linewidth]{Figures/LMR_kpts.png}
\end{center}
\vspace{-18pt}
\caption{\textbf{Illustration of commonly-used 68-point 3D facial landmarks.}}
\label{landmark}
\end{figure}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=0.50\linewidth]{Figures/term_explanation.png}
\end{center}
\vspace{-15pt}
\caption{\textbf{Illustration of physiology terms in Sec. \ref{sec:point-based}}. }
\label{terms}
\end{figure}
\section{Point-based and Region-based Metrics}
\label{sec:point-based}
Normalized Mean Error (\textbf{NME}, point-based) of facial landmarks. BFM annotates 68 3D facial landmark points that lie on the eyes, nose, mouth, and face outline (shown in Fig. \ref{landmark}). We calculate the NME of the landmark point set between the predicted and reference 3D face meshes, i.e., we first calculate the Euclidean distance between the two landmark sets and then normalize the distance by the face size (square root of face width $\times$ length).
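A sketch of the computation follows, assuming face width and length are taken from the bounding box of the reference landmarks:
\begin{verbatim}
import numpy as np

def nme(pred_lmk, ref_lmk):
    """Normalized Mean Error over 68 3D landmarks; (68, 3) arrays."""
    err = np.linalg.norm(pred_lmk - ref_lmk, axis=1).mean()
    span = ref_lmk.max(axis=0) - ref_lmk.min(axis=0)
    face_size = np.sqrt(span[0] * span[1])  # sqrt(width x length)
    return err / face_size
\end{verbatim}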
Results in Table~\ref{NME_metric} show NME for 3D facial landmark alignment. Quantitative results under this metric show improvements, but the gains are smaller. This is because most facial landmarks concentrate on the eyes, nose, and mouth, parts that naturally show smaller deformations across people. For example, the nose tip and mid-dorsum usually lie on the centerline of faces, and the alar base and columella are located closely around them. (See Fig. \ref{terms} for the definitions of these physiology terms.)
Point-to-Plane Root Mean Square Error (\textbf{Point-to-Plane RMSE}, region-based). We follow the surface registration for 3D models using the popular iterative closest point (ICP) \cite{pomerleau2015review} algorithm to align the predicted and reference meshes. We then calculate point-to-plane RMSE. Registration for the holistic face and for facial parts (illustrated in Fig.~\ref{regions}) are considered and shown in Table~\ref{P2PRMSE_metric} and Table~\ref{part_metric}. Both supervised and unsupervised CMP outperform the baselines in both holistic and part-based registrations. These evaluations indicate the capability of cross-modal learning, from voice inputs to 3D face outputs.
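Given an ICP alignment (not shown), the point-to-plane RMSE can be sketched as below, assuming unit vertex normals of the reference mesh are available:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_rmse(pred_pts, ref_pts, ref_normals):
    """pred_pts: (M, 3); ref_pts, ref_normals: (N, 3), unit normals."""
    _, idx = cKDTree(ref_pts).query(pred_pts)  # nearest reference vertex
    diff = pred_pts - ref_pts[idx]
    d = np.einsum('ij,ij->i', diff, ref_normals[idx])  # plane distance
    return np.sqrt((d ** 2).mean())
\end{verbatim}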
\begin{table}[tb!]
\begin{center}
\caption{\textbf{NME for point-based metric study.} 68 facial landmarks annotated in BFM Face \cite{paysan20093d} are used for measurements.}
\vspace{-7pt}
\small
\label{NME_metric}
\begin{tabular}[c]
{|
p{1.5cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|
p{1.45cm}<{\centering\arraybackslash}|}
\hline
Landmark Alignment & Base-1 & Base-2 & CMP-supervised & CMP-unsupervised \\
\hline
NME & 0.2979 & 0.2969 & 0.2723 & 0.2904 \\
\hline
\end{tabular}
\vspace{-18pt}
\end{center}
\end{table}
\begin{table}[tb!]
\begin{center}
\caption{\textbf{Point-to-Plane RMSE study.} ICP is used to align the predicted and reference meshes. We calculate point-to-plane RMSE after ICP.}
\vspace{-12pt}
\label{P2PRMSE_metric}
\small
\begin{tabular}[c]
{|
p{1.6cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|
p{1.45cm}<{\centering\arraybackslash}|}
\hline
Holistic Registration & Base-1 & Base-2 & CMP-supervised & CMP-unsupervised \\
\hline
RMSE & 1.357 & 1.348 & 1.210 & 1.312 \\
\hline
\end{tabular}
\vspace{-10pt}
\end{center}
\end{table}
\begin{table}[tb!]
\begin{center}
\caption{\textbf{Part-based point-to-plane RMSE study.}
}
\label{part_metric}
\vspace{-10pt}
\small
\begin{tabular}[c]
{|
p{1.7cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.35cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|}
\hline
Part Registration & Base-1 & Base-2 & CMP-supervised & CMP-unsupervised \\
\hline
Left Eye & 0.3961 & 0.3945 & \textbf{0.3517} & 0.3779 \\
Right Eye & 0.3667 & 0.3656 & \textbf{0.3349} & 0.3488 \\
Nose & 0.5258 & 0.5250 & \textbf{0.5141} & 0.5177 \\
Mouth & 0.3466 & 0.3435 & \textbf{0.2958} & 0.3149 \\
Left Cheek & 0.4748 & 0.4735 & \textbf{0.4654} & 0.4711 \\
Right Cheek & 0.5078 & 0.5061 & \textbf{0.4916} & 0.4919 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.3\linewidth]{Figures/regions.png}
\end{center}
\vspace{-14pt}
\caption{\textbf{Illustration of regions.} We show regions of the left eye, right eye, nose, mouth, left cheek, and right cheek that are used in the part-based point-to-plane evaluation using ICP in our paper Table 3.}
\label{regions}
\vspace{-5pt}
\end{figure}
\section{Simple Oracles}
\label{sec:oracles}
We provide numerical results of simple oracles as additional baselines. Oracle (1): we take the average 3D face over the Voxceleb-3D training set and use this mean shape directly as the prediction for all test data. Oracle (2): we take the mean 3D faces of the male/female groups in the training set and use the corresponding mean shape as the prediction at test time. Table \ref{tab:oracles} shows results using the line-based (Mean ARE), point-based (NME), and region-based (RMSE) metrics, compared with Base-2 described in paper Sec.4-Baseline. The simple oracles perform worse than Base-2, which means directly taking average faces is naive and weaker than network-based solutions. This validates our baseline construction, and our CMP framework can further predict more accurate face geometry from voices for each person of interest.
\begin{table}[ht]
\caption{\textbf{Oracle results}. We show quantitative evaluations of simple oracles explained in Sec. \ref{sec:oracles}.}
\begin{center}
\vspace{-5pt}
\footnotesize
\begin{tabular}[c]
{|
p{1.9cm}<{\centering\arraybackslash}|
p{1.0cm}<{\centering\arraybackslash}|
p{1.2cm}<{\centering\arraybackslash}|
p{1.2cm}<{\centering\arraybackslash}|
p{1.25cm}<{\centering\arraybackslash}|}
\hline
Metrics & Type & Oracle(1) & Oracle(2) & Base-2 \\
\hline
Mean ARE & line & 0.0319 & 0.0311 & 0.0302 \\
NME & point & 0.3058 & 0.3021 & 0.2969\\
RMSE & region & 1.540 & 1.529 & 1.348 \\
\hline
\end{tabular}
\vspace{-5pt}
\label{tab:oracles}
\end{center}
\end{table}
\section{Pose from the Pretrained Expert}
\label{sec:pose}
Here we study the robustness of the head poses estimated by the expert, which supports our visualizations (Fig. 7-10 of the paper) that overlay 3D face meshes onto images to show the fit.
SynergyNet \cite{wu2021synergy}, the expert used in the unsupervised framework, predicts 3DMM parameters ($\alpha_s$ and $\alpha_e$) as pseudo-groundtruth based on images synthesized by the GAN. Here we verify the robustness of its pose estimation. As illustrated in our paper Fig. 8, faces synthesized by the GAN are almost frontal because the face images in VGGFace \cite{BMVC2015_41} used for GAN training have small poses. We adopt the widely used AFLW2000-3D \cite{zhu2016face}, which includes 2K in-the-wild face images, to examine the performance of head pose estimation. Then, we calculate the mean absolute error (MAE) of the three estimated Euler angles (yaw, pitch, roll). The MAE is 1.49 degrees for faces whose yaw angle (left/right turns) lies in [-15$^{\circ}$, 15$^{\circ}$]. This result justifies the robustness of head pose estimation from the pretrained expert.
\section{Ablation Study on Knowledge Distillation}
\label{sec:study_unsupervised}
\begin{table*}[htb]
\begin{center}
\caption{\textbf{Study on performances of different KD strategies.} ARE is used as the metric for comparison.}
\label{KD_study}
\footnotesize
\begin{tabular}[c]
{|
p{1.20cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.50cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|}
\hline
ARE & Vanilla KD & Attention & SP & Correlation & RKD & CRD & VID & PKT\\
\hline
ER & 0.0306 & 0.0318 & 0.0230 & 0.0227 & 0.0172 & 0.0198 & 0.0213 & 0.0184\\
FR & 0.0173 & 0.0172 & 0.0169 & 0.0173 & 0.0171 & 0.0172 & 0.0172 & 0.0172\\
MR& 0.0173 & 0.0173 & 0.0179 & 0.0179 & 0.0195 & 0.0177 & 0.0178 & 0.0176\\
CR & 0.0540 & 0.0551 & 0.0471 & 0.0471 & 0.0474 & 0.0481 & 0.0471 & 0.0484\\
\hline
Mean & 0.0298 & 0.0304 & 0.0262 & 0.0263 & \textbf{0.0254} & 0.0255 & 0.0259 & \textbf{0.0254}\\
\hline
\end{tabular}
\end{center}
\end{table*}
We conduct an extensive survey for the performance of various recent KD strategies on our unsupervised framework.
We include vanilla KD \cite{hinton2015distilling}, Attention \cite{zagoruyko2016paying}, SP \cite{tung2019similarity}, Correlation \cite{peng2019correlation}, RKD \cite{park2019relational}, CRD \cite{tian2019crd}, VID \cite{ahn2019variational}, PKT \cite{passalis2020probabilistic}, and train our unsupervised framework with different $\mathcal{L}_{\textit{KD}}$.
Here we show the results in Table \ref{KD_study}.
We find that more recent and advanced KD methods attain similar results. For example, RKD, CRD, and PKT perform very closely, compared with earlier methods such as the vanilla version or attention-map similarity. Therefore, the study validates our adoption of conditional probabilities in our paper Eq.(4)\footnote{The scaled and shifted cosine similarity is $K_{cosine}(z_i,z_j)=\frac{1}{2} \left(\frac{z_i^{\top} z_j}{\|z_i\|_2 \|z_j\|_2} +1 \right).$}, introduced in PKT \cite{passalis2020probabilistic}.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/baseline_comp_supp.pdf}
\end{center}
\caption{\textbf{Qualitative comparison between our unsupervised CMP and the baseline}. This figure presents more results that extend Fig. 10 of the main paper.}
\label{unsupervised_comp_supp}
\end{figure}
Further, we exhibit more qualitative comparisons in Fig. \ref{unsupervised_comp_supp} extending Fig. 10 of the main paper.
\section{Applications of Voice-to-3D-Face Task}
\label{sec:applications}
We focus on the analysis that aims to validate the correlation between voice and 3D face geometry. Here we describe application areas where our work has potential:
1. Our work can be used for public security, such as recovering the face shape of a suspect or a masked robber from unheard speech.
2. Our work can generate personal avatars in gaming or virtual reality systems: it is helpful to create a rough 3D face model from voices as the initialization, and users can refine its shape based on one's preference.
3. 3D faces from voice can potentially provide another verification mode for person identification other than speech and face image verification.
\section{Limitations}
\label{sec:limitation}
Our work focuses on the analysis between voices and 3D faces, and generating high-quality meshes is not our aim. In fact, using voices as inputs to produce face meshes has inherent limitations: face wideness might be gleaned from voices, as our intuition suggests, but more subtle details, such as bumps or wrinkles, cannot be hinted at by this modality.
We target the analysis between one's normal voice and face shape and utilize Voxceleb as the speech source, which primarily contains interviews and talks. As pointed out in main paper Sec.5-Ethical statement, more implicit factors like talking after drinking or abnormal health conditions may affect the analysis, but validating these effects requires large data corpora from a medical or physiological view.
\section{Overview}
This supplementary document is organized as follows.
In Sec. \ref{sec:net_arch}, we show detailed network architectures for both supervised and unsupervised learning scenarios.
In Sec. \ref{sec:dataset}, we describe details about our Voxceleb-3D training and evaluation split.
In Sec. \ref{sec:vis_supervised}, we respond to Q1-Q4 using illustrations from our CMP- supervised learning setting.
In Sec. \ref{sec:point-based}, we introduce both point-based and region-based metrics and compare results produced from our CMP with those from baselines.
In Sec. \ref{sec:oracles}, we provide two simple oracles as non-network-based solutions for references using averaged face meshes in the training data as the predictions.
In Sec. \ref{sec:pose}, we show the robustness of head pose estimation of the expert network.
In Sec. \ref{sec:study_unsupervised}, we present more results from our unsupervised setting.
In Sec. \ref{sec:applications}, we describe more on the applications of the cross-modal learning from voices to 3D faces.
In Sec. \ref{sec:limitation}, we describe limitations of this work.
The numbering of figures, tables, and equations in this supplementary document go with the prefix 'S-'.
\vspace{-3pt}
\section{Network Architecture}
\label{sec:net_arch}
Here we exhibit detailed network architectures for both supervised and unsupervised settings in our CMP in Fig. \ref{network_arch}.
\section{Voxceleb-3D Training/Evaluation Split}
\label{sec:dataset}
We display the details of the training/evaluation split in Table \ref{statistics}. As described in our paper Sec. 3.2 and Sec. 4-Datasets, Voxceleb-3D inherited from Voxceleb and VGGFace contains 1225 people. Names starting with 'A' - 'E' are included in the evaluation set, and the others are in the training set.
The training set contains as many utterances we can fetch from Voxceleb, and the evaluation set contains three utterances for each person, amounting to a total of 0.9K utterances.
Face images are not included in the evaluation set because they cannot be used to calculate 3D face modeling errors, and thus we put a '-' mark in the table.
For 3D faces, we fit landmarks from images and obtain the optimized 3DMM and reconstructed 3D faces, as described in our paper Sec. 3.2 and Sec. 4-Datasets. There are several images associated with a person in VGGFace. We fit 3DMM parameters and reconstruct 3D meshes from these images. To create \textit{reference face meshes} for a person to fulfill quantitative evaluation, we manually select one neutral 3D face from the pool that best fits 2D facial outlines on images. Therefore, there are 301 3D face meshes represented in 3DMM parameters for each person as the reference.
At test time, three utterances for each identity are used as inputs to reconstruct 3D faces. Those three predicted models are then used to compute quantitative results against the selected reference model for each identity.
Note that Voxceleb collects speech clips of interviews or talks for celebrities scraped from the web, and only gender labels are available in Voxceleb. Other features may require self-disclosure or are hard to trace, such as ages at the time of speaking, and thus are unavailable.
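As a minimal illustration of this alphabetical split rule, the following Python sketch partitions a list of identities (the names and variables below are hypothetical placeholders, not the actual Voxceleb-3D loader):
\begin{verbatim}
# Identities whose names start with 'A'-'E' form the evaluation
# set; the rest form the training set.
people = ["Aamir_Khan", "Bruce_Lee", "Emma_Stone", "Tom_Hanks"]

eval_set = [p for p in people if p[0].upper() in "ABCDE"]
train_set = [p for p in people if p[0].upper() not in "ABCDE"]

print(len(train_set), "training identities:", train_set)
print(len(eval_set), "evaluation identities:", eval_set)
\end{verbatim}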
\begin{table}[htb]
\begin{center}
\caption{\textbf{Voxceleb-3D training/evaluation split.} We provide data split details including the number of utterances, number of face images, number of 3DMM parameters (equivalent to the number of 3D meshes), numbers of males and females, and number of people. Images for the evaluation set are not used for quantitative evaluation, and thus we mark the entry with '-'. We also display gender pie charts below the table.}
\label{statistics}
\vspace{-10pt}
\begin{tabular}[c]
{|
p{4.5cm}<{\centering\arraybackslash}|
p{1.5cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|}
\hline
Dataset & Training & Evaluation \\
\hline
\# of utterances & 113K & 0.9K \\
\# of face images & 107K & - \\
\# of 3DMM param (face mesh) & 107K & 301 \\
\# of male/female & 485/439 & 182/119 \\
\# of people & 924 & 301 \\
\hline
\end{tabular}
\vspace{-20pt}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/train-gender.pdf}
\vspace{-20pt}
\caption{Training split}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Figures/test-gender.pdf}
\vspace{-20pt}
\caption{Evaluation split}
\label{fig:sub2}
\end{subfigure}
\end{center}
\end{table}
\section{Response to Q1-Q4 using Supervised CMP}
\label{sec:vis_supervised}
Following main paper Sec.4.1-Analysis, we present the counterpart of A1-A4 using our supervised framework.
\textbf{A1-Face meshes from our supervised learning.} In Fig. \ref{supervised_gt}, we present four types of face shapes -- \textit{skinny}, \textit{wide}, \textit{regular}, and \textit{slim} -- and show the reference images.
The produced face meshes from our supervised learning setting exhibit the model's ability to produce various types of face shapes and are also consistent with the reference images, which are provided for shape identification purposes.
This illustration also validates our findings in Table 1 of the paper: the lowest absolute ratio error is attained by the ear-to-ear ratio (ER), which is associated with overall face shape, indicating wider or thinner faces.
To further investigate the proximity of the four illustrated face types, we calculate Euclidean distances in the 3DMM parameter space and show the confusion matrix in Fig. \ref{confusion}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/confusion.png}
\end{center}
\vspace{-8pt}
\caption{\textbf{Proximity for four face types.} We show (A) a confusion matrix and (B) distances with mean face shape in 3DMM parameter space to help comprehension for the face type variation in Fig. \ref{supervised_gt}.}
\label{confusion}
\vspace{-6pt}
\end{figure}
\textbf{A2-Mesh prediction coherence from our supervised learning.} In Fig. \ref{coherence}, we display the coherence of test-time face mesh predictions from our supervised learning setting. We use different utterances from the same speaker at different time steps as inputs to produce the meshes.
In Fig. \ref{coherence}, one can observe from, for example, jaw widths that the output meshes are different for the two speakers;
by contrast, meshes for the same speaker are highly coherent.
These results demonstrate that our training strategy successfully predicts coherent geometry for the same speaker and can predict different geometries for different identities.
Finally, this coherence illustration also implies advantages over previous voice-to-face methods that work in the image domain \cite{NEURIPS2019_eb9fc349, oh2019speech2face}. Their generated images include variations in background, hairstyle, and so on. In contrast, our results exclude these variations and focus on facial geometry to validate the correlation between face shapes and voices.
\textbf{A3+A4-Comparison against the baseline and the major improvement from our supervised learning.} We further compare against Base-2 (See Sec.4-Baseline in the paper: the cascaded pretrained blocks). One reference face, one 3D face mesh produced by our method, and one by Base-2 are presented in Fig. \ref{supervised_comp}.
For the example on the left, the person of interest has a wider jawbone, and the mesh produced by our method also shows a similar trait. On the right, the image shows a wider face shape and apparent cheeks. Our 3D model also displays a similar shape, but Base-2 shows a much thinner face. Our mesh can reflect the wideness of faces, which corresponds to our findings in paper Table 1 that ER (ear-to-ear ratio) is the trait on which voice provides the largest improvement.
In summary, we use the above visual results to show that the supervised learning of the analysis framework is effective.
Fig. \ref{supervised_gt} shows that the output face meshes have overall face shapes similar to the reference images, demonstrating the model's ability to produce various types of faces, which is further validated in Fig. \ref{confusion}.
Fig. \ref{coherence} shows our supervised method can predict coherent face shapes.
Fig. \ref{supervised_comp} shows the output face models are more similar to the reference than Base-2 in terms of overall face shapes, which again validates the ER improvements shown in the paper Table 1.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.8\linewidth]{Figures/LMR_kpts.png}
\end{center}
\vspace{-18pt}
\caption{\textbf{Illustration of commonly-used 68-point 3D facial landmarks.}}
\label{landmark}
\end{figure}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=0.50\linewidth]{Figures/term_explanation.png}
\end{center}
\vspace{-15pt}
\caption{\textbf{Illustration of physiology terms in Sec. \ref{sec:point-based}}. }
\label{terms}
\end{figure}
\clearpage
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.75\linewidth]{Figures/supervised_gt.png}
\end{center}
\vspace{-12pt}
\caption{\textbf{Visualization of predicted 3D face meshes from our supervised learning}. We display four face shapes, skinny, wide, regular, and slim, and their reference images to show the shape correspondence. References are provided to identify face shapes of the person of interest.}
\label{supervised_gt}
\vspace{-15pt}
\end{figure*}
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.75\linewidth]{Figures/coherence_supp.png}
\end{center}
\vspace{-8pt}
\caption{\textbf{Inference coherence of meshes produced from our CMP- supervised learning.} }
\label{coherence}
\vspace{-6pt}
\end{figure*}
\begin{figure*}[bt!]
\begin{center}
\includegraphics[width=0.70\linewidth]{Figures/supervised_comp.png}
\end{center}
\vspace{-16pt}
\caption{\textbf{Comparison of output face meshes from our CMP of the supervised learning and Base-2}. In case (a), our mesh shows a more squared face with a wider jawbone, but Base-2 only shows a slim face. The reference face in (b) is wider and bears apparent cheeks, and our result is much more similar to the reference.}
\label{supervised_comp}
\vspace{-7pt}
\end{figure*}
\clearpage
\section{Point-based and Region-based Metrics}
\label{sec:point-based}
Normalized Mean Error (\textbf{NME}, point-based) of facial landmarks. BFM Face annotates 68 3D facial landmark points that lie on the eyes, nose, mouth, and face outlines (shown in Fig. \ref{landmark}). We calculate the NME of the landmark point set between the predicted and reference 3D face meshes, i.e., we first calculate the Euclidean distance between the two landmark sets and then normalize the distance by the face size (square root of face width $\times$ length).
Results in Table~\ref{NME_metric} show NME for 3D facial landmark alignment. Quantitative results under this metric show improvements, but the gains are smaller. This is because most facial landmarks concentrate on the eyes, nose, and mouth, which naturally exhibit smaller deformations across people. For example, the nose tip and mid-dorsum usually lie on the centerline of faces, and the alar base and columella are located close to them. (See Fig. \ref{terms} for the definition of these physiology terms.)
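As a minimal sketch of this metric (assuming $(68, 3)$ landmark arrays and externally supplied face width and length; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def nme(pred_lmk, ref_lmk, face_w, face_l):
    """Normalized Mean Error between two (68, 3) landmark sets:
    mean Euclidean distance over corresponding landmarks,
    normalized by the face size sqrt(width * length)."""
    dists = np.linalg.norm(pred_lmk - ref_lmk, axis=1)
    return dists.mean() / np.sqrt(face_w * face_l)

# Toy usage with random landmarks (placeholders, not BFM data).
rng = np.random.default_rng(0)
ref = rng.normal(size=(68, 3))
pred = ref + 0.01 * rng.normal(size=(68, 3))
print(nme(pred, ref, face_w=1.2, face_l=1.5))
\end{verbatim}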
Point-to-Plane Root Mean Square Error (\textbf{Point-to-Plane RMSE}, region-based). We follow the surface registration for 3D models using the popular iterative closest point (ICP) \cite{pomerleau2015review} algorithm to align the predicted and reference meshes. We then calculate the point-to-plane RMSE. Registration for the holistic face and for facial parts (illustrated in Fig.~\ref{regions}) is considered, with results shown in Table~\ref{P2PRMSE_metric} and Table~\ref{part_metric}. Both supervised and unsupervised CMP outperform the baselines in either holistic or part-based registrations. These evaluations indicate the capability of cross-modal learning, from voice inputs to 3D face outputs.
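A minimal sketch of the point-to-plane residual, assuming the meshes have already been aligned by ICP and that unit vertex normals of the reference mesh are available (the nearest-neighbour formulation below is our assumption, not necessarily the exact implementation used here):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_rmse(pred_v, ref_v, ref_n):
    """Point-to-plane RMSE between ICP-aligned meshes.
    pred_v: (P, 3) predicted vertices
    ref_v : (R, 3) reference vertices
    ref_n : (R, 3) unit normals at the reference vertices"""
    tree = cKDTree(ref_v)
    _, idx = tree.query(pred_v)         # nearest reference vertex
    diff = pred_v - ref_v[idx]          # point-to-point residual
    d = np.einsum("ij,ij->i", diff, ref_n[idx])  # normal projection
    return np.sqrt(np.mean(d ** 2))
\end{verbatim}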
\begin{table}[tb!]
\begin{center}
\caption{\textbf{NME for point-based metric study.} 68 facial landmarks annotated in BFM Face \cite{paysan20093d} are used for measurements.}
\vspace{-7pt}
\small
\label{NME_metric}
\begin{tabular}[c]
{|
p{1.5cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|
p{1.45cm}<{\centering\arraybackslash}|}
\hline
Landmark Alignment & Base-1 & Base-2 & CMP- supervised & CMP- unsupervised \\
\hline
NME & 0.2979 & 0.2969 & 0.2723 & 0.2904 \\
\hline
\end{tabular}
\vspace{-18pt}
\end{center}
\end{table}
\begin{table}[tb!]
\begin{center}
\caption{\textbf{Point-to-Plane RMSE study.} ICP is used to align the predicted and reference meshes. We calculate point-to-plane RMSE after ICP.}
\vspace{-12pt}
\label{P2PRMSE_metric}
\small
\begin{tabular}[c]
{|
p{1.6cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|
p{1.45cm}<{\centering\arraybackslash}|}
\hline
Holistic Registration & Base-1 & Base-2 & CMP- supervised & CMP- unsupervised \\
\hline
RMSE & 1.357 & 1.348 & 1.210 & 1.312 \\
\hline
\end{tabular}
\vspace{-10pt}
\end{center}
\end{table}
\begin{table}[tb!]
\begin{center}
\caption{\textbf{Part-based point-to-plane RMSE study.}
}
\label{part_metric}
\vspace{-10pt}
\small
\begin{tabular}[c]
{|
p{1.7cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{0.9cm}<{\centering\arraybackslash}|
p{1.35cm}<{\centering\arraybackslash}|
p{1.4cm}<{\centering\arraybackslash}|}
\hline
Part Registration & Base-1 & Base-2 & CMP- supervised & CMP- unsupervised \\
\hline
Left Eye & 0.3961 & 0.3945 & \textbf{0.3517} & 0.3779 \\
Right Eye & 0.3667 & 0.3656 & \textbf{0.3349} & 0.3488 \\
Nose & 0.5258 & 0.5250 & \textbf{0.5141} & 0.5177 \\
Mouth & 0.3466 & 0.3435 & \textbf{0.2958} & 0.3149 \\
Left Cheek & 0.4748 & 0.4735 & \textbf{0.4654} & 0.4711 \\
Right Cheek & 0.5078 & 0.5061 & \textbf{0.4916} & 0.4919 \\
\hline
\end{tabular}
\vspace{-18pt}
\end{center}
\end{table}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.3\linewidth]{Figures/regions.png}
\end{center}
\vspace{-14pt}
\caption{\textbf{Illustration of regions.} We show regions of the left eye, right eye, nose, mouth, left cheek, and right cheek that are used in the part-based point-to-plane evaluation using ICP in our paper Table 3.}
\label{regions}
\vspace{-5pt}
\end{figure}
\section{Simple Oracles}
\label{sec:oracles}
We provide numerical results of simple oracles as additional baselines. Oracle (1): We take the average 3D face in the Voxceleb-3D training set and use this mean shape directly as the prediction for testing data; Oracle (2): We take the mean 3D face of the male/female group in the training set and use the male/female mean shape as the prediction at test time. Table \ref{tab:oracles} shows results using line-based (Mean ARE), point-based (NME), and region-based (RMSE) metrics, compared with Base-2 described in the paper Sec.4-Baseline. The simple oracles perform worse than Base-2, which means that directly taking average faces is naive and weaker than the network-based solutions. This validates our baseline construction, and our CMP framework can further predict more accurate face geometry from voices for each person of interest.
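For concreteness, the two oracles can be sketched as below, assuming an $(N, K)$ array of training 3DMM parameters and gender label arrays (all names are hypothetical):
\begin{verbatim}
import numpy as np

def oracle_predictions(train_params, train_gender, test_gender):
    """Oracle (1): global mean 3DMM shape for every test identity.
    Oracle (2): per-gender mean shape, chosen by the test gender.
    train_params: (N, K) array; gender arrays hold 'm'/'f'."""
    mean_all = train_params.mean(axis=0)
    mean_m = train_params[train_gender == "m"].mean(axis=0)
    mean_f = train_params[train_gender == "f"].mean(axis=0)
    oracle1 = np.tile(mean_all, (len(test_gender), 1))
    oracle2 = np.where((test_gender == "m")[:, None], mean_m, mean_f)
    return oracle1, oracle2
\end{verbatim}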
\begin{table*}[htb]
\begin{center}
\caption{\textbf{Study on performances of different KD strategies.} ARE is used as the metric for comparison.}
\vspace{-8pt}
\label{KD_study}
\footnotesize
\begin{tabular}[c]
{|
p{1.20cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.50cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|
p{1.30cm}<{\centering\arraybackslash}|}
\hline
ARE & Vanilla KD & Attention & SP & Correlation & RKD & CRD & VID & PKT\\
\hline
ER & 0.0306 & 0.0318 & 0.0230 & 0.0227 & 0.0172 & 0.0198 & 0.0213 & 0.0184\\
FR & 0.0173 & 0.0172 & 0.0169 & 0.0173 & 0.0171 & 0.0172 & 0.0172 & 0.0172\\
MR& 0.0173 & 0.0173 & 0.0179 & 0.0179 & 0.0195 & 0.0177 & 0.0178 & 0.0176\\
CR & 0.0540 & 0.0551 & 0.0471 & 0.0471 & 0.0474 & 0.0481 & 0.0471 & 0.0484\\
\hline
Mean & 0.0298 & 0.0304 & 0.0262 & 0.0263 & \textbf{0.0254} & 0.0255 & 0.0259 & \textbf{0.0254}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table}[ht]
\caption{\textbf{Oracle results}. We show quantitative evaluations of simple oracles explained in Sec. \ref{sec:oracles}.}
\begin{center}
\vspace{-18pt}
\footnotesize
\begin{tabular}[c]
{|
p{1.9cm}<{\centering\arraybackslash}|
p{1.0cm}<{\centering\arraybackslash}|
p{1.2cm}<{\centering\arraybackslash}|
p{1.2cm}<{\centering\arraybackslash}|
p{1.25cm}<{\centering\arraybackslash}|}
\hline
Metrics & Type & Oracle(1) & Oracle(2) & Base-2 \\
\hline
Mean ARE & line & 0.0319 & 0.0311 & 0.0302 \\
NME & point & 0.3058 & 0.3021 & 0.2969\\
RMSE & region & 1.540 & 1.529 & 1.348 \\
\hline
\end{tabular}
\vspace{-19pt}
\label{tab:oracles}
\end{center}
\end{table}
\section{Pose from the Pretrained Expert}
\label{sec:pose}
Here we study the robustness of the head poses estimated by the expert, which supports our visualization (in Fig.7-10 of the paper) of overlaying 3D face meshes onto images to show the fit.
SynergyNet \cite{wu2021synergy}, the expert used in the unsupervised framework, predicts 3DMM parameters ($\alpha_s$ and $\alpha_e$) as pseudo-groundtruth based on images synthesized from the GAN. Here we verify the robustness of the pose estimation from the expert. As illustrated in our paper Fig. 8, synthesized faces from the GAN are almost frontal because the face images in VGGFace \cite{BMVC2015_41} used for GAN training have small poses. We adopt the widely-used AFLW2000-3D \cite{zhu2016face}, which includes 2K in-the-wild face images, to examine the performance of head pose estimation. We then calculate the mean absolute error (MAE) for the three estimated Euler angles (yaw, pitch, roll). The MAE is 1.49 degrees for faces whose yaw angle (left/right turn) lies in [-15$^{\circ}$, 15$^{\circ}$]. This result justifies the robustness of head pose estimation from the pretrained expert.
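A sketch of this evaluation protocol (assuming $(N, 3)$ angle arrays in degrees, ordered as yaw, pitch, roll; dataset loading and the expert's forward pass are omitted):
\begin{verbatim}
import numpy as np

def pose_mae(pred_angles, gt_angles, yaw_limit=15.0):
    """MAE over (yaw, pitch, roll), restricted to faces whose
    ground-truth yaw lies in [-yaw_limit, yaw_limit] degrees."""
    mask = np.abs(gt_angles[:, 0]) <= yaw_limit
    return np.abs(pred_angles[mask] - gt_angles[mask]).mean()
\end{verbatim}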
\section{Ablation Study on Knowledge Distillation}
\label{sec:study_unsupervised}
We conduct an extensive survey of the performance of various recent KD strategies within our unsupervised framework.
We include vanilla KD \cite{hinton2015distilling}, Attention \cite{zagoruyko2016paying}, SP \cite{tung2019similarity}, Correlation \cite{peng2019correlation}, RKD \cite{park2019relational}, CRD \cite{tian2019crd}, VID \cite{ahn2019variational}, PKT \cite{passalis2020probabilistic}, and train our unsupervised framework with different $\mathcal{L}_{\textit{KD}}$.
Here we show the results in Table \ref{KD_study}.
We find that more recent and advanced KD methods attain similar results. For example, RKD, CRD, and PKT have very close performance, compared with earlier methods such as the vanilla version or attention-map similarity. Therefore, the study validates our adoption of conditional probability in our paper Eq.(4)\footnote{The scaled and shifted cosine similarity is $K_{cosine}(z_i,z_j)=\frac{1}{2} \left((z_i^E z_j / \|z_i^E\|_2 \|z_j\|_2) +1 \right).$}, introduced in PKT \cite{passalis2020probabilistic}.
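A minimal PyTorch sketch of a probabilistic KD loss in the spirit of PKT (matching conditional distributions built from scaled and shifted cosine similarities; details such as batch composition or the exact kernel arguments may differ from the actual implementation):
\begin{verbatim}
import torch

def pkt_loss(f_s, f_t, eps=1e-7):
    """Match conditional probabilities induced by pairwise
    scaled/shifted cosine similarities of student (f_s) and
    teacher (f_t) embeddings, both of shape (B, D)."""
    f_s = torch.nn.functional.normalize(f_s, dim=1)
    f_t = torch.nn.functional.normalize(f_t, dim=1)
    k_s = (f_s @ f_s.t() + 1.0) / 2.0         # kernel in [0, 1]
    k_t = (f_t @ f_t.t() + 1.0) / 2.0
    p_s = k_s / k_s.sum(dim=1, keepdim=True)  # conditional probs
    p_t = k_t / k_t.sum(dim=1, keepdim=True)
    return (p_t * torch.log((p_t + eps) / (p_s + eps))).mean()
\end{verbatim}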
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=1.0\linewidth]{Figures/baseline_comp_supp.pdf}
\end{center}
\vspace{-15pt}
\caption{\textbf{Qualitative comparison between our unsupervised CMP and the baseline}. This figure presents more results that extend Fig. 10 of the main paper.}
\label{unsupervised_comp_supp}
\vspace{-12pt}
\end{figure}
Further, we exhibit more qualitative comparisons in Fig. \ref{unsupervised_comp_supp} extending Fig. 10 of the main paper.
\section{Applications of Voice-to-3D-Face Task}
\label{sec:applications}
We focus on the analysis that aims to validate the correlation between voice and 3D face geometry. Here we describe the application side, where our work has potential:
1. Our work can be used for public security, such as recovering the face shape of a suspect or a masked robber from speech alone.
2. Our work can generate personal avatars in gaming or virtual reality systems: it is helpful to create a rough 3D face model from voices as the initialization, and users can then refine its shape based on their preferences.
3. 3D faces from voice can potentially provide another verification mode for person identification other than speech and face image verification.
\section{Limitations}
\label{sec:limitation}
Our work focuses on the analysis between voices and 3D faces, and generating high-quality meshes is not our aim. In fact, using voices as inputs to produce face meshes has inherent limitations: face wideness might intuitively be gleaned from voices, but more subtle details, such as bumps or wrinkles, cannot be hinted at from this modality.
We target the analysis between one's normal voice and face shape and utilize Voxceleb as the speech source, which consists primarily of interviews or talks. As pointed out in main paper Sec.5- Ethical statement, more implicit factors like talking after drinking or abnormal health conditions may affect the analysis, but validating these effects would require large data corpora from a medical or physiological point of view.
|
2,877,628,091,158 | arxiv | \section{Introduction}
Usually, in quantum field theory, the Wilson loop operator is defined as $$W(C)=\dfrac{1}{N}TrPe^{i\oint_{C}A},$$
where $C$ denotes a closed loop in space-time and the trace is over
the fundamental representation of the gauge field $A$ with $SU(N)$ symmetry.
In the particular case of a rectangular
loop (of sides $T$ and $L$), it is possible to calculate
(in the limit $T\rightarrow\infty$) the expectation value for the Wilson loop:
$$<W(C)>={\cal A}(L)e^{-TE(L)},$$ where $E(L)$ can be identified with the energy of the quark-antiquark pair in the static limit.
Soon after the conjecture about the duality between M/string theory in $AdS$ spaces and conformal gauge
field theories \cite{maldacena1,GKP,Witten1,Review1,Review2},
Maldacena \cite{maldacena2} and Rey and Yee \cite{Rey:1998ik} (MRY)
proposed a method to calculate expectation values of the Wilson loop for the large $N$ limit of field theories.
This limit is calculated from a string theory in a given background using the gauge/gravity duality.
In this method, the expectation value of the Wilson loop is related
to the worldsheet area $S$ of a string whose boundary is the loop in
question such that $$<W(C)>\sim e^{S}.$$
Maldacena used this approach to calculate the quark-antiquark potential for the string in the
$AdS_5\times S^5$ background \cite{maldacena2} obtaining a non-confining potential for the
infinitely massive quark-antiquark pair, consistent with the conformal symmetry of the dual super Yang-Mills theory.
In other backgrounds the quark-antiquark potential can be confining as shown for instance in
\cite{kinar}, where a confinement criterion was obtained.
This approach can also be extended to the finite temperature case
\cite{finiteT,brandhuber} by considering an AdS Schwarzschild background. In this case,
the temperature of the conformal dual theory is identified with the Hawking
temperature of the black hole \cite{Witten2}.
This situation also leads to a non-confining potential for the quark-antiquark
interaction.
The thermodynamics of D-brane probes in a black hole background was treated in \cite{Mateos:2007vn}. These systems are holographically dual
to a small number of flavours in a finite-temperature gauge theory. First-order phase transitions were found, characterised by a
confinement/deconfinement transition of quarks.
A phenomenological approach was also considered calculating the Wilson loop for the string in some
holographic AdS/QCD models. For instance, the hard-wall model exhibits a confining behaviour
\cite{henrique2,Andreev:2006ct} reproducing the Cornell potential.
At finite temperature, this calculation gives a second order phase transition describing qualitatively a
confinement/deconfinement phase transition \cite{henrique3}.
Then, it was shown that a Hawking-Page phase transition \cite{HP} should occur for the hard- and
soft-wall models at finite temperature
\cite{Herzog,henrique4,Andreev:2006eh,Andreev:2006nw,Kajantie:2006hv}. In particular,
for the soft-wall model, an interesting estimate of the deconfinement temperature was found \cite{Herzog},
compatible with QCD expectations.
In a recent paper, some geometric configurations of a static string on a D3-brane background \cite{henrique}, as well as of a string-like object on M2- and M5-brane backgrounds \cite{Quijada:2015tma}, were studied. These geometric configurations correspond to a gauge theory which describes the quark-antiquark interaction on the branes. For some specific geodesic regimes confining interactions were found, while for others the potentials were non-confining.
In this paper we perform a systematic analytical and numerical study of the quark-antiquark potentials in D3-, M2-, and M5-brane backgrounds, analysing their confining/non-confining behaviours in different situations, always at zero temperature.
In particular, at the near horizon geometry the potentials are non-confining in agreement with conformal field theory expectations.
On the other side, far from horizon, the dual field theories are no longer conformal and the potentials present confinement.
This is in agreement with the expected behaviour of strings in flat space, where the string mimics the flux tube model of QCD. In the cases of M2- and M5-branes in M-theory we choose a cigar-shaped membrane configuration such that the stringy picture of the dual flux tube also holds. We also focus on searching for the point in the geodesics at which the zero-temperature confinement/deconfinement transition takes place.
\section{Wilson loops in D3-, M2- and M5-brane spaces}
We start this study by considering the Wilson loop on the background generated by a large number of coincident D3-branes in string theory in 10 dimensional spacetime.
The Nambu-Goto string action \cite{Becker:2007zj}:
\begin{equation}
S=\frac{1}{2\pi}\int d\sigma d\tau\sqrt{\det(g_{NM}\partial_{\alpha}X^N\partial_{\beta}X^M)}
\end{equation}
is employed on this background, where the scale was set to $\alpha'=1$, $X^N(\sigma,\tau)$ are the coordinates of the string worldsheet and $g_{NM}$ is the background metric. The specific form of the metric is given in the next section.
The quark-antiquark pair is contained in the D3-brane worldvolume and attached to the ends of the open string
that lives in 10 dimensions. For simplicity, we work with a static string configuration, which is represented in
figure \ref{fig:one}. The Wilson loop corresponds to a rectangle with sides $L$ and $T$, where $T$ is some time interval. This
rectangle is associated with the worldsheet surface as shown in figure \ref{fig:two}.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=6cm]{./figure1.png}
\caption{Position of quark $q$ and anti-quark $\bar q$ on the D3-brane (represented here by the $x$-axis)
together with the static string as the curve which connects $q$ and $\bar q$ through $r_0$.}
\label{fig:one}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=6cm]{./folha-do-mundo.png}
\caption{Curved worldsheet surface of the static string corresponding to a time interval $T$. The associated Wilson loop is the plane rectangle of sides $T$ and $L$.}
\label{fig:two}
\end{figure}
Thus the distance separation $L$ between the quark-antiquark pair may be computed starting from the geodesic of the static string on this
D3-brane background. This distance turns out to be an expression in terms of $r_0$ and $r_1$ (respectively, the minimum and maximum values of the $r$ coordinate on the worldsheet; see \cite{kinar, henrique}):
\begin{equation} \label{L}
L=2\int \frac{g(r)}{f(r)}\frac{f(r_0)}{\sqrt{f^2(r)-f^2(r_0)}}dr,
\end{equation}
where
\begin{align*}
f^2(r)&=(2\pi)^{-2}g_{00}(r)g_{ii}(r)\,,\\
g^2(r)&=(2\pi)^{-2}g_{00}(r)g_{rr}(r)\,.
\end{align*}
According to the MRY proposal the worldsheet area ($S$) is proportional to the energy interaction ($V$) between the
quark-antiquark pair, so it may also be written down in terms of $r_0$ and $r_1$:
\begin{equation}\label{V}
V=2\int_{r_0}^{r_1}\frac{g(r)f(r)}{\sqrt{f^2(r)-f^2(r_0)}}dr-2m_q \,.
\end{equation}
Note that it is in general necessary to subtract the masses of the quarks $m_q$ in order to obtain a finite result for the energy interaction.
We continue our study analysing the cases of the M2- and M5-brane backgrounds. Since these backgrounds of 11-dimensional SUGRA correspond to M-theory objects, it is not possible to start from the Nambu-Goto action. Instead we should start from an 11-dimensional membrane action in those backgrounds \cite{Duff:1990xz, Becker:2007zj}:
\begin{eqnarray}
S=\frac{1}{(2\pi)^2l_{11}^3}\int d^3\sigma\Big(\frac{(-\gamma)^{1/2}}{2}\left[\gamma^{ij}\partial_i X^{M}\partial_j X^{N}
G_{MN}(X)-1\right] \nonumber\\
+\epsilon^{ijk}\partial_i X^{M}\partial_j X^{N}\partial_k X^{P}A_{MNP}(X)\Big)\,,
\end{eqnarray}
where $i,j=0,1,2$ are world-volume indices with $\gamma^{ij}$ as the induced metric, $M,N,P=0,...,10$ are space-time
indices with $G_{NM}$ as the space-time metric, $X^N(\sigma^0,\sigma^1,\sigma^2)$ are the membrane coordinates, $A_{MNP}$ is
a three-form field with strength $F=dA$, and $l_{11}$ sets the scale for the membrane (see \cite{Duff:1990xz}).
After compactification of one spatial dimension of the membrane wrapped along the 11-th dimension of space-time we are able to reduce
the membrane in 11 dimensions to a string-like object in 10 dimensions (see \cite{duff}). As a result we are able to work with string-like objects
and similarly to the case of strings on D3 backgrounds, we utilize the static configuration and the MRY proposal to get
the distance separation and energy interaction between a pair of quark-antiquark on M2-(M5-)branes (see \cite{Quijada:2015tma}).
\section{D3-brane}
The solitonic solution of 10-dimensional supergravity that we are going to study is a space geometry generated by $N$ coincident D3-branes. This solution is usually written down as \cite{Horowitz:1991cd,GKP}:
\begin{equation}\label{0}
ds^2 =\left(1+\frac{R^4}{r^4}\right)^{-1/2}(-dt^2 + dx_3^2) +\left(1+\frac{R^4}{r^4}\right)^{1/2}(dr^2 + r^2 d\Omega^2_5)\,,
\end{equation}
where $R$ is a constant defined by $ R^4 = 4\pi g Nl_s^4 $.
Following the MRY approach, the distance separation $L$, Eq. \eqref{L}, and the static potential interaction $V$, Eq. \eqref{V},
between a pair of quarks on the D3-brane were obtained in \cite{henrique}:
\begin{equation}\label{1}
L=\frac{2r_0^3}{R^2}I_1\left(\frac{r_1}{r_0}\right)+\frac{2R^2}{r_0}I_2\left(\frac{r_1}{r_0}\right)\,,
\end{equation}
\begin{equation}\label{2}
V=\frac{2r_0\sqrt{r_0^4+R^4}}{2\pi R^2l_s^2}I_1\left(\frac{r_1}{r_0}\right)-2m_q\,,
\end{equation}
where
\begin{equation}\label{2.1}
I_1\left(\frac{r_1}{r_0}\right)=\int_1^{r_1/r_0}dy\frac{y^2}{\sqrt{y^4-1}}\,,
\end{equation}
\begin{equation}\label{2.2}
I_2\left(\frac{r_1}{r_0}\right)=\int_1^{r_1/r_0}dy\frac{1}{y^2\sqrt{y^4-1}}\,.
\end{equation}
\noindent
Following \cite{kinar} we have that the quark mass must be:
\begin{equation}
2m_q= \frac{r_1}{\pi l_s^2}\,,
\end{equation}
which diverges in the limit $r_1 \to \infty$.
In the following we are going to study the distance separation $L$, Eq. \eqref{1}, and the potential energy $V$, Eq. \eqref{2}, of the quark-antiquark pair in various different situations in the D3-brane solution.
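The integrals \eqref{2.1} and \eqref{2.2} have integrable singularities at $y=1$ and are easily evaluated by standard quadrature. As a minimal numerical sketch in Python (with $R=l_s=1$ and modest, purely illustrative values of $r_0$ and $r_1$; the figures below use much larger ratios, and the function names are ours), Eqs. \eqref{1} and \eqref{2} can be computed as follows:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I1(y1):
    return quad(lambda y: y**2 / np.sqrt(y**4 - 1.0), 1.0, y1)[0]

def I2(y1):
    return quad(lambda y: 1.0 / (y**2 * np.sqrt(y**4 - 1.0)),
                1.0, y1)[0]

def L_D3(r0, r1):                 # quark separation, R = 1
    return 2.0 * r0**3 * I1(r1 / r0) + 2.0 / r0 * I2(r1 / r0)

def V_D3(r0, r1):                 # potential, R = l_s = 1
    V_str = (2.0 * r0 * np.sqrt(r0**4 + 1.0)
             / (2.0 * np.pi)) * I1(r1 / r0)
    return V_str - r1 / np.pi     # subtract the quark masses 2 m_q

r1 = 100.0
for r0 in (0.05, 10.0):           # near vs. far from the horizon
    print(r0, L_D3(r0, r1), V_D3(r0, r1))
\end{verbatim}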
\subsection{Non-confining behaviour}
Let us start our study by considering the regime defined by $r_1>>r_0$, which means that the quark is very massive. We also take $r_0<<R$,
which means that we are in the near-horizon geometry, corresponding approximately to the AdS$_5$ space.
We take $r_0$ as the independent variable, with $R$ fixed. Then, the behaviour of the distance separation $L$, Eq. \eqref{1}, against $r_0$ is analysed. The numerical result is shown in figure \ref{fig3}, where we plot $L/R$ vs. $r_0/R$. This plot shows a monotonically decreasing behaviour of $L/R$ against $r_0/R$.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=5cm]{./D3-1.pdf}
\caption{Monotonic decreasing behaviour of $L/R$ vs. $r_0/R$. Here $r_1/R=10^5$ and $r_0/R<5 \times 10^{-4}$. }
\label{fig3}
\end{figure}
The next step is to analyse the behaviour of the potential $V$, Eq. \eqref{2}, against the separation $L$, Eq. \eqref{1}. The numerical result is shown in figure \ref{fig4}, where
we plot $l_s^2V/R$ versus $L/R$.
This plot shows an increasing function which approaches zero from below as $L$ increases. So one can conclude that this plot corresponds to a non-confining potential which is essentially Coulomb-like, as the one found by Maldacena in \cite{maldacena2} for the case of the pure AdS space. This result is also in agreement with
\cite{henrique}, where a non-confining potential $V\sim-1/L$ was obtained in the regime $r_1>>r_0$ with $r_0<<R$. The dual field theory in this case is the well-known $\cal N$=4 SYM, which is a superconformal field theory. Thus the non-confining behaviour found for the Wilson loop is in agreement with the conformal property of the dual theory.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=5cm]{./D3-2.pdf}
\caption{Non-confining potential interaction $l_s^2V/R $ vs. $L/R$. Here $r_1/R=10^5$ and $r_0/R<5 \times 10^{-4}$.}
\label{fig4}
\end{figure}
\subsection{Confining behaviour}
Our next step is to analyse the regime $r_1>>r_0$ (very massive quark) but with $r_0>>R$ which corresponds to the region far from the horizon which is approximately a flat space geometry. First we perform a numerical study of the distance separation $L$, Eq. \eqref{1}, against the minimum position of the string $r_0$. The result of this analysis is presented in figure \ref{fig5}, where we plot $L/R$ vs. $r_0/R$.
This figure shows a monotonic increasing behaviour of $L/R$ against $r_0/R$.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./D3-3.pdf}
\caption{Increasing behaviour of $L/R$ vs. $r_0/R$. Here $r_1/R=10^5$ and $r_0/R>10^3$.}
\label{fig5}
\end{figure}
Then, the next step is to study the shape of the potential $V$, Eq. \eqref{2}, against the separation distance $L$, Eq. \eqref{1}. We did this numerical study and the result is presented in figure \ref{fig6}, where
we plot the behaviour of $V/R$ against $L/R$.
\begin{figure}[h]
\centering
\includegraphics[width=6cm,height=8cm]{./D3-4.pdf}
\caption{Plot of $V/R$ vs. $L/R$ with $r_1/R=10^5$ and $r_0/R>10^3$ which shows a confining potential.}
\label{fig6}
\end{figure}
Looking at figure \ref{fig6} we see an almost straight line with positive derivative, indicating a confining potential. This result is in agreement with ref. \cite{henrique}, where a linear confining potential was obtained in this regime for the quark-antiquark pair in D3-brane space. The dual theory in this case is no longer conformal, since we are far from the horizon. Here, we can understand this picture as a string in flat space which mimics the confining flux tube of QCD.
\subsection{Deconfinement/Confinement transition}
In previous sections we obtained confining and non-confining behaviours for the potential energy $V$, Eq. \eqref{2}, against the separation distance $L$, Eq. \eqref{1}, in D3-brane space for different regimes of $r_0$ compared with $R$. So, we expect that a transition should occur between the regimes $r_0>>R$ (far from the horizon) and $r_0<<R$ (near the horizon).
In this section we work with $r_1>>r_0$ for $r_0$ values in the regime $r_0\sim R$ such that we may find some deconfinement/confinement transition. Note that this is not a thermal phase transition since we are working at zero temperature. Instead, the expected transition should be related to the geometry of the D3-brane space.
First we present in figure \ref{fig7} a plot showing how $L/R$ varies against $r_0/R$. This picture shows that $L/R$ attains a minimum at a value of $r_0$ which we call $r*$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=5cm]{./D3-5.pdf}
\caption{ $L/R$ vs. $r_0/R$. Here $r_1/R=10^5$ and $r_0/R<0.2$ .}
\label{fig7}
\end{figure}
Note that it is also possible to define $r*$
from equations (\ref{1}), (\ref{2.1}) and (\ref{2.2}). Using these equations we get an equation whose root is precisely $r*$:
\begin{equation}\label{r*D3}
\frac{6r*^2}{R^2}I_{1}\left(\frac{r_1}{r*}\right)-\frac{2r_1^3}{R^2r*\sqrt{(r_1/r*)^4-1}}
-\frac{2R^2}{r*^2}I_{2}\left(\frac{r_1}{r*}\right)-\frac{2R^2}{r_1r*\sqrt{(r_1/r*)^4-1}} =0
\end{equation}
Some numerical solutions for this equation are shown in table \ref{my-label}.
\begin{table}[h]
\centering
\caption{Some values of $r*/R$ in eq. \eqref{r*D3} for different values of $r_1/R$.}
\label{my-label}
\begin{tabular}{|l|l|ll}
\cline{1-2}
$r*/R$ & $r_1/R$ & & \\ \cline{1-2}
$9.87\times 10^{-3}$ & $10^6$ & & \\ \cline{1-2}
$ 3.015 \times 10^{-2}$ & $10^5 $ & & \\ \cline{1-2}
$6.687 \times 10^{-2}$ & $10^4 $ & & \\ \cline{1-2}
$1.42 \times 10^{-1}$ & $10^3 $ & & \\ \cline{1-2}
\end{tabular}
\end{table}
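Since $r*$ is by construction the minimiser of $L(r_0)$ at fixed $r_1$, the entries of this table can alternatively be reproduced by a direct numerical minimisation instead of solving eq. \eqref{r*D3}. A Python sketch (with $R=1$; quadrature accuracy limits the number of significant digits):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def L_D3(r0, r1):      # quark separation with R = 1
    y1 = r1 / r0
    I1 = quad(lambda y: y**2 / np.sqrt(y**4 - 1.0), 1.0, y1)[0]
    I2 = quad(lambda y: 1.0 / (y**2 * np.sqrt(y**4 - 1.0)),
              1.0, y1)[0]
    return 2.0 * r0**3 * I1 + 2.0 / r0 * I2

res = minimize_scalar(lambda r0: L_D3(r0, 1.0e3),
                      bounds=(1.0e-2, 1.0), method="bounded")
print("r*/R ~", res.x)  # compare with the r1/R = 10^3 row above
\end{verbatim}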
The potential interaction $V$, Eq. \eqref{2}, against the separation distance $L$, Eq. \eqref{1}, in this regime is presented in figure \ref{fig8}. In this figure we can notice
that there are two branches: the lower one is a non-confining Coulomb-like potential, and the upper one is a confining potential that has
a monotonically increasing behaviour as $L/R$ is increased.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./D3-6.pdf}
\caption{Transition of the potential interaction. Here $r_1/R=10^5$.}
\label{fig8}
\end{figure}
From fig. \ref{fig8}, our analysis shows that the condition $r_0<r*$ corresponds to the non-confining behaviour and the condition $r_0>r*$ to the confining one. This is the expected transition in the confinement/deconfinement behaviour of the quark-antiquark pair potential in D3-brane space. The transition seems to occur near the region $r_0 \sim r*$. It is important to remark that this is not a thermal phase transition, since we are working at zero temperature, and the transition is of a geometrical nature.
\section{M2-brane}
In the previous section, we presented an analysis of the Wilson loop for the D3-brane background. Here in this section and in the following we are going to present a similar discussion for other backgrounds such as M2- and M5-brane spaces.
Although these background spaces belong to 11-dimensional M-theory, which must correspond to higher-dimensional objects like membranes, it is possible to perform a dimensional reduction. This consists in compactifying one dimension of the membrane along one spatial direction, in order to have a string-like configuration in 10-dimensional
background spaces. For details see \cite{duff, Quijada:2015tma}.
We start the study of confinement with the MRY method in SUGRA-backgrounds with the case of the space generated
by $N$ coincident M2-branes.
The 11-dimensional supergravity M2-brane solution is given by the metric (see \cite{Duff:1990xz,Review2,Review1}):
\begin{equation}
ds^2_{\text{M2}}=\left(1+\frac{R_2^6}{r^6}\right)^{-2/3}dx_3^2+\left(1+\frac{R_2^6}{r^6}\right)^{1/3}(dr^2+r^2d^2\Omega_7^2),
\end{equation}
where $R_2$ is a constant defined by $R_2=(32\pi Nl^6_{11})^{1/6}$,
$N$ is the number of coincident branes, $l_{11}$ is the Planck length in eleven dimensions and $d\Omega_7$ is the differential solid angle of the seven-sphere.
In a previous work \cite{Quijada:2015tma} the distance separation ($L$) and static potential (V) for a pair of quarks in a M2-brane
space were obtained:
\begin{equation}\label{3.1}
L=2r_0\int_{1}^{r_1/r_0}dy\frac{\left(1+\frac{\epsilon}{y^6}\right)y^3}{\sqrt{y^8-y^6+\epsilon(y^8-1)}},
\end{equation}
\begin{equation}\label{3.2}
V=\frac{2r_0^2\sqrt{1+\epsilon}}{2\pi l_{11}^3}\int_1^{y_1}dy\frac{y^5}{\sqrt{y^8-y^6+\epsilon(y^8-1)}}-2m_q,
\end{equation}
where $r_0$ ($r_1$) is the minimum (maximum) value of the coordinate $r$ associated with the string-like object obtained by dimensional reduction and $\epsilon=(R_2/r_0)^6$. Again following ref. \cite{kinar},
we can compute the quark mass $m_q$ as:
\begin{equation}
2m_q=\frac{r_1^2}{2\pi l_{11}^3}\,,
\end{equation}
which diverges if we let $r_1 \to \infty$.
\subsection{Non-confining behaviour}
In this section we work in the geometric regime $r_0<<r_1$, which corresponds to very massive quarks, with $r_0<<R_2$, meaning that we are in the near-horizon geometry, which is approximately AdS$_4$.
First we plot in figure \ref{fig:10} the distance $L/R_2$, eq. (\ref{3.1}), against $r_0/R_2$. We can notice from this plot
that $L$ has a monotonic decreasing behaviour as $r_0$ is increased.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./M2-1.pdf}
\caption{Distance $L/R_2$ between quarks in a M2-brane space vs. $r_0/R_2$. Here $r_1/R_2=10^4$, $r_0/R_2<5\times 10^{-3}$.}
\label{fig:10}
\end{figure}
Next, we plot in figure \ref{fig:11} the potential interaction $l_{11}^3V/R_2^2$, eq.(\ref{3.2}), against the distance of the quark-antiquark pair
$L/R_2$, eq.(\ref{3.1}). As we can note from this plot, the potential interaction in this case turns out to have a
Coulomb-like non-confining behaviour. This is in agreement with the result obtained in this same regime in ref. \cite{Quijada:2015tma} and with the fact that the dual field theory is conformal, since we are in the near horizon geometry which is approximately AdS$_4$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=6cm]{./M2-2.pdf}
\caption{Potential $l_{11}^3V/R_2^2$ between quarks in a M2-brane space vs. $L/R_2$. Here $r_1/R_2=10^4$, $r_0/R_2<5\times 10^{-3}$.}
\label{fig:11}
\end{figure}
\subsection{Confining behaviour}
Here we still work in the regime $r_1>>r_0$, but with $r_0>>R_2$, which corresponds to the region far from the horizon, where the space is approximately flat. Now we plot in figure \ref{fig:12} the normalised distance
$L/R_2$, eq.(\ref{3.1}), against $r_0/R_2$. From this plot we can see that the distance $L$ has an increasing behaviour as $r_0$ is increased. Notice that this behaviour is almost linear.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./M2-3.pdf}
\caption{Distance $L/R_2$ between quarks in a M2-brane space vs. $r_0/R_2$. Here $r_1/R_2=10^4$ and $r_0/R_2>10^2$.}
\label{fig:12}
\end{figure}
Next, continuing in the same geometric regime, we plot in figure \ref{fig:13} the potential $l_{11}^3V/R_2^2$, eq.(\ref{3.2}), against $L/R_2$, eq.(\ref{3.1}). We can
see from this plot that the potential interaction $V$ has a positive derivative, which means a confining behaviour. This behaviour is expected since we are working in the region far from the horizon of the M2-brane geometry, where the dual field theory is non-conformal.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./M2-4.pdf}
\caption{Potential $l_{11}^3V/R_2^2$ between quarks in a M2-brane space vs. $L/R_2$ for $r_0>>R_2$. Here $r_1/R_2=10^4$ and $r_0/R_2>10^2$.}
\label{fig:13}
\end{figure}
\subsection{Deconfinement/Confinement transition}
In the last subsections we found a non-confining behaviour in the regime $r_0<<R_2$ and a confining one for $r_0>>R_2$. So
in this section we look for a transition in the intermediate regime $r_0\sim R_2$.
First we plot the distance $L/R_2$ between quarks, eq.(\ref{3.1}), against $r_0/R_2$, which is shown in figure \ref{fig:14}. From this plot we notice that there is a minimum at $r_0=r*$.
\begin{figure}[h]
\centering
\includegraphics[width=6cm,height=8cm]{./M2-5.pdf}
\caption{Distance $L/R_2$ between quarks in a M2-brane space vs. $r_0/R_2$. Here $r_1/R_2=10^4$, $r_0/R_2\leq1$ and $r*/R_2=0.500$.}
\label{fig:14}
\end{figure}
Also we can get this value $r*$ as a root of an equation that can be derived from (\ref{3.1}):
\begin{equation}\label{r*M2}
\frac{d}{dr_0}L(r_0/R_2)=0\,.
\end{equation}
Some solutions of this equation are presented
on table \ref{table2} for different values of $r_1/R_2$.
\begin{table}[h]
\centering
\caption{Some values of $r*/R_2$ in eq. \eqref{r*M2} for different values of $r_1/R_2$.}
\label{table2}
\begin{tabular}{|l|l|ll}
\cline{1-2}
$r*/R_2$ & $r_1/R_2$ & & \\ \cline{1-2}
0.500 & $1\times 10^4$ & & \\ \cline{1-2}
0.503 & $8\times 10^3$ & & \\ \cline{1-2}
0.510 & $4\times 10^3$ & & \\ \cline{1-2}
0.524 & $1\times 10^3$ & & \\ \cline{1-2}
\end{tabular}
\end{table}
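The same numerical strategy applies here: minimising eq. (\ref{3.1}) over $r_0$ should reproduce, up to quadrature accuracy, the entries of Table \ref{table2}. A Python sketch with $R_2=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def L_M2(r0, r1):      # quark separation on M2, with R_2 = 1
    eps = r0 ** (-6.0)
    f = lambda y: ((1.0 + eps / y**6) * y**3
                   / np.sqrt(y**8 - y**6 + eps * (y**8 - 1.0)))
    return 2.0 * r0 * quad(f, 1.0, r1 / r0)[0]

res = minimize_scalar(lambda r0: L_M2(r0, 1.0e3),
                      bounds=(0.1, 2.0), method="bounded")
print("r*/R_2 ~", res.x)  # compare with the r1/R_2 = 10^3 row
\end{verbatim}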
Next we plot in figure \ref{fig:15} the potential $l_{11}^3 V/R^2_2$, eq.(\ref{3.2}), against the distance $L/R_2$, eq.(\ref{3.1}).
We notice from this plot that there are two branches: the lower one corresponding to a non-confining Coulomb-like potential and
the upper one corresponding to a confining potential. Also, from our analysis of the last plots, we can conclude that for $r_0<r*$ the potential is non-confining while for $r_0>r*$ it is confining.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=8cm]{./M2-6.pdf}
\caption{Potential $l_{11}^3 V/R^2_2$ between quarks in a M2-brane space vs $L/R_2$. }
\label{fig:15}
\end{figure}
\section{M5-brane}
Now we analyse the confinement behaviour of a quark-antiquark pair using the MRY method in the 11-dimensional SUGRA background space generated by
$N$ coincident M5-branes. The metric solution is \cite{Review2,Review1}:
\begin{equation}
ds^2_{\text{M5}}=\left(1+\frac{R_5^3}{r^3}\right)^{-1/3}dx_6^2+\left(1+\frac{R_5^3}{r^3}\right)^{2/3}(dr^2+r^2d\Omega^2_4),
\end{equation}
where $R_5$ is a constant given by $R_5=(\pi N l_{11}^3)^{1/3}$.
According to \cite{Quijada:2015tma}, the distance separation and the potential interaction of a pair of quarks are given by:
\begin{equation}\label{L5}
L=2r_0\int_{1}^{r_1/r_0}dy\frac{(1+\epsilon/y^3)^{1/2}}{(y^2-1)^{1/2}}
\end{equation}
\begin{equation}\label{V5}
V=\frac{r_0^2}{\pi l_{11}^3}\int_1^{r_1/r_0}dy\frac{y^2(1+\epsilon/y^3)^{1/2}}{(y^2-1)^{1/2}}-2m_q
\end{equation}
where $r_0$ ($r_1$) is the minimum (maximum) value of the coordinate $r$ of the string-like object obtained from dimensional reduction and $\epsilon=R_5^3/r_0^3$.
Following ref. \cite{kinar} we can compute the quark mass:
\begin{equation}
2m_q= \frac{1}{\pi l_{11}^3}\int_0^{r_1}dr\, r\, \sqrt{1+\left(\frac{R_5}{r}\right)^3}\,,
\end{equation}
which is divergent in the limit $r_1\to\infty$.
\subsection{Non-confining behaviour}
We work here in the regime $r_1>>r_0$, which means that the quarks are very massive, and with $r_0<<R_5$, corresponding to the near-horizon region, which in this case is approximately an AdS$_7$ geometry.
For this regime the distance $L/R_5$ between the pair of quarks, eq.(\ref{L5}), against
$r_0/R_5$ is plotted in figure \ref{fig:17}. This plot shows that the distance $L$ has a
monotonic decreasing behaviour as $r_0$ is increased.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=7cm]{./m5-1.pdf}
\caption{Distance $L/R_5$ between quarks vs. $r_0/R_5$. Here $r_1/R_5=10^2$, $r_0/R_5<5\times 10^{-5}$.}
\label{fig:17}
\end{figure}
Next we plot in figure \ref{fig:18} the potential interaction $l_{11}^3V/R_5^2$, eq.(\ref{V5}), against the distance
separation between quarks $L/R_5$, eq.(\ref{L5}). We can see from this plot that the potential $V$ shows a non-confining behaviour: it has a negative slope and it goes to minus infinity as $L$ increases. This is the expected behaviour since we are in the near-horizon region, where the metric is approximately AdS$_7$, compatible with a conformal field theory.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=7cm]{./m5-2.pdf}
\caption{Potential $l_{11}^3V/R_5^2$ between quarks in a M5-brane space vs. the distance $L/R_5$. Here $r_1/R_5=10^2$ and $r_0/R_5<5\times 10^{-5}$.}
\label{fig:18}
\end{figure}
\subsection{Confining behaviour}
In this subsection we still work in the regime $r_1>>r_0$ but with $r_0>>R_5$ (far from the horizon) which corresponds to an approximately flat space geometry. In
this regime we plot in figure \ref{fig:18a} the distance separation $L/R_5$, eq.(\ref{L5}), against $r_0/R_5$.
We can see from this plot that $L$ shows an almost linear behaviour as $r_0$ increases.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=8cm]{./m5-3.pdf}
\caption{Distance $L/R_5$ between quarks in a M5-brane space vs. $r_0/R_5$. Here $r_1/R_5=10^6$ and $r_0/R_5>10^3$.}
\label{fig:18a}
\end{figure}
Next we plot in figure \ref{fig:18b} the
potential interaction $l_{11}^3V/R_5^2$ between the pair of quarks, eq.(\ref{V5}), against the distance between quarks $L/R_5$, eq.(\ref{L5}). From this plot we can notice that the potential $V$ shows a confining behaviour: it has a positive slope as $L$ is increased. This behaviour is expected since we are working in the region far from the brane which approaches asymptotically a flat space so that the dual field theory is no longer conformal.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=8cm]{./m5-4.pdf}
\caption{Potential interaction $l_{11}^3V/R_5^2$ between quarks in a M5-brane space vs. the distance $L/R_5$. Here $r_1/R_5=10^6$ and $r_0/R_5>10^3$.}
\label{fig:18b}
\end{figure}
\subsection{Deconfinement/Confinement transition}
In the last subsections we found non-confining potential behaviour at $r_0<<R_5$ and confining potential behaviour at $r_0>>R_5$.
In this section we work in the regime $r_0\sim R_5$ and look for a confinement/deconfinement transition.
First we plot in figure \ref{fig:19} the distance separation between quarks $L/R_5$, eq.(\ref{L5}), against $r_0/R_5$. From this plot we can notice
that there is a minimum at the position $r_0=r*$.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./m5-5.pdf}
\caption{Distance $L/R_5$ between quarks in a M5-brane space vs. $r_0/R_5$. Here $r_1/R_5=10^3$, $r_0/R_5<0.7$ and
$r*/R_5=0.19$.}
\label{fig:19}
\end{figure}
We can also obtain $r*$ as a root of an equation obtained by differentiating equation (\ref{L5}):
\begin{equation}\label{r*M5}
\frac{d }{dr_0}L(r_0/R_5)=0\,.
\end{equation}
Some solutions of this equation are shown in table \ref{table3} for some values of $r_1/R_5$.
\begin{table}[h]
\centering
\caption{Some values of $r*/R_5$ in eq. \eqref{r*M5} for different values of $r_1/R_5$.}
\label{table3}
\begin{tabular}{|l|l|ll}
\cline{1-2}
$r*/R_5$ & $r_1/R_5$ & & \\ \cline{1-2}
0.19 & $1\times 10^3$ & & \\ \cline{1-2}
0.21 & $5\times 10^2$ & & \\ \cline{1-2}
0.25 & $1 \times 10^2$ & & \\ \cline{1-2}
0.42 & $1\times 10^1$ & & \\ \cline{1-2}
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./m5-6.pdf}
\caption{Potential interaction $l_{11}^3V/R_5^2$ between quarks in a M5-brane space vs. $L/R_5$. Here $r_1/R_5=10^3$, $r_0/R_5<0.5 $ and
$r*/R_5=0.19$. }
\label{fig:21}
\end{figure}
Finally we plot in figures \ref{fig:21} and \ref{fig:22} the potential interaction $l_{11}^3V/R_5^2$, eq.(\ref{V5}), against the distance
separation between quarks $L/R_5$,
eq.(\ref{L5}).
From these plots we can see that we have two branches:
The upper one in figure \ref{fig:21}, whose continuation for larger values of $L/R_5$ is magnified in figure \ref{fig:22}, corresponds to a
confining potential interaction, since we can observe that it has a positive derivative as $L$ increases.
On the other side, the lower one,
which appears only in figure \ref{fig:21}, corresponds to a non-confining potential.
Also we can conclude from these plots that values with $r_0<r*$ correspond to a non-confining behaviour, and values with $r_0>r*$ correspond to a confining behaviour.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{./m5-7.pdf}
\caption{Potential interaction $l_{11}^3V/R_5^2$ between quarks in a M5-brane space vs. $L/R_5$. Here $r_1/R_5=10^2$, $0.5<r_0/R_5$ and
$r*/R_5=0.19$. }
\label{fig:22}
\end{figure}
\section{Conclusions}
We have analysed the Wilson loops for D3-, M2-, and M5-brane backgrounds using the MRY approach.
As was discussed previously in refs. \cite{henrique,Quijada:2015tma}, these backgrounds imply confining and non-confining quark-antiquark potentials depending on the geometric regime considered. Here we investigated these situations further, focusing mainly on the transition between the two confinement behaviours.
In general, for the three geometries that we have studied, we notice that when the distance separation $L/R_i$ is a monotonically decreasing function of $r_0$, one finds a non-confining potential interaction. This situation occurs in the regime $r_1>>r_0$ and $r_0<<R_i$, which corresponds to heavy quark masses in AdS geometries ($R_i$ assumes the values $R$, $R_2$, and $R_5$ for the D3-, M2-, and M5-brane geometries, respectively).
On the other hand, when the distance separation $L/R_i$ is a monotonic increasing function of $r_0$, one finds a confining potential interaction. This situation occurs at the regime $r_1>>r_0$ and $r_0>>R_i$, which corresponds to heavy quark masses in flat space geometries. This confining behaviour can be understood looking at the metric in the region far from the brane. In this case the metric approaches a flat spacetime so that the dual field theory is no longer conformal. This situation is analogous to a string in flat space which mimics the flux tube model of QCD showing confinement.
We found that the confinement/deconfinement transition occurs at a point $r*$ in the regime $r_0\sim R_i$ for the D3-, M2-, and M5-brane backgrounds. The point $r*$ is the location of the minimum of the non-monotonic distance $L$ as a function of $r_0$.
The value of $r*$ depends on $r_1$ and $R_i$, and we have tabulated possible values in tables \ref{my-label}, \ref{table2} and \ref{table3}, for each geometry. This situation occurs in the regime $r_1>>r_0$ (heavy quark) and corresponds to a transition between the AdS and flat space geometries. All these situations were analysed at zero temperature, so that the nature of the transitions is purely geometrical and not thermodynamical.
It would be interesting to analyse if this discussion can be extended to other Wilson loop configurations where $1/N$ corrections are present \cite{Farquet:2014bda,Gomis:2006sb, Gomis:2006im, Buchbinder:2014nia}.
\acknowledgments We would like to thank C. Hoyos for interesting discussions at the {\it Strings at Dunes} conference in Natal, Brazil, 2016, where a previous version of this work was presented. We would like also to thank CNPq - Brazilian agency - for financial support.
|
2,877,628,091,159 | arxiv | \section{Introduction}
Microalgae have recently received more and more attention in the frameworks
of CO$_2$ fixation and renewable energy \cite{Huntley2007, Chisti2007}. Their
high actual photosynthetic yield compared to terrestrial plants (whose growth
is limited by CO$_2$ availability) leads to large potential algal biomass
productions of several tens of tons per hectare and per
year \cite{Chisti2007}. Also, they have a potential for numerous high added value
commercial applications \cite{Spolaore200687} as well as wastewater treatment
capabilities including nutrients and heavy metal removal
\cite{Hoffmann1998,Oswald1988}.
We focus on the industrial production of microalgae in so-called photobioreactors. In this work, photobioreactor is meant in a broad sense, encompassing many possible types of culturing devices, including the simplest open raceway systems.
Concentrating on the bioenergy applications of
microalgae, one central feature is the energy balance of the process; the only
conceivable light source ({\it i.e} the primary energy source) is natural
sunlight, which varies during the day. This variation might have an important
impact on the microalgae productivity; it is therefore necessary to take it
into account in order to design control laws that will be applied to the
bioreactor.
The objective of this paper is to develop an optimal control law that would
maximize the yield of a photobioreactor operating in continuous mode, while
taking into account that the light source that will be used is the natural
light. The light source is therefore periodic with a light phase (day) and a
dark phase (night). In addition to this time-varying periodic light source, we
will take the auto-shading in the photobioreactor into account: the pigment
concentration (mainly chlorophyll) affects the light distribution and thus the
biological activity within the reactor. As a consequence, for too high a
biomass, light in the photobioreactor is strongly attenuated and per-capita
growth is low.
Optimal control of bioreactors has been studied for many years whether it was
for metabolites production \cite{Tartakovsky1995}, ethanol fermentation
\cite{Wang1997}, baker yeast production \cite{Wu1985} or, more generally,
optimal control of fed-batch processes taking kinetics uncertainties into
account \cite{Smets2004}. The control of photobioreactors is however a lot
more scarce in the literature, though the influence of self-shading on the
optimal setpoint \cite{Masci2010} or on an MPC control algorithm
\cite{Becerra2008} for productivity optimization has already been studied. The
light-variation was mostly absent \cite{Masci2010,Becerra2008} or considered
to be an input that could be manipulated in order to impose the physiological
state of the microalgae \cite{Marxen2005} or maximize productivity as one of
the parameters of bioreactor design \cite{Suh2003}. The present problem has
however not been tackled yet in the literature.
We therefore developed a model that takes both the light variation and the
self-shading features into account in order to develop the control law, where
the substrate concentration in the input (marginally) and the dilution rate
(mainly) will be used. This model should not be too complicated in order to be
tractable and should present the main features of the process. Since we want
to develop a control strategy that will be used on the long run, we could
choose an infinite time-horizon measure of the yield. However, we rather take
advantage of the observation that, in the absence of a discount rate in the
cost functional, the control should be identical everyday and force the state
of the system to be identical at the beginning of the day and 24 hours
later. We therefore opted for optimizing a cost over one day with the
constraint that the initial and terminal states should be identical.
The paper is structured as follows: in Section~\ref{sec:model} we develop a
photobioreactor model presenting all the aforementioned features; in Section
\ref{sec:constant} we identify the optimal operating mode in constant light
environment; in Section \ref{sec:varying} we develop our main result, that is
the form of the optimal control law in a day \& night setting; in Section
\ref{sec:large} we identify the consequence if the control constraint is
generous; we conclude by a simulation study and bifurcation analysis in
Section \ref{sec:simulations}.
\section{A photobioreactor model with light attenuation}\label{sec:model}
Micro-algae growth in a photobioreactor is often modeled through one of two
models, the Monod model \cite{Monod1942} or the Droop Model
\cite{Droop1968}. The latter is more accurate as it separates the process of
substrate uptake and growth of the microalgae. However, the former already
gives a reasonable representation of reality by coupling growth and uptake,
and is more convenient for building control laws since it is simpler. The
Monod model writes:
\begin{equation}\label{monod}
\left\{
\begin{array}{lll}
\dot S& =& D(S_{in}-S)-k\mu(S,I)X\;,\\
\dot X& =& \mu(S,I)X-DX\;,
\end{array}
\right.
\end{equation}
where $S$ and $X$ are the substrate concentration (e.g. nitrate) and the biomass concentration (measured in carbon, gC.L$^{-1}$) in the medium,
which is supposed to be homogeneous through constant stirring, while $D$ is the dilution rate, $S_{in}$ is the substrate input concentration and $k$ is the substrate/biomass yield coefficient; $\mu(S,I)$ is the microalgae biomass growth rate, depending on the substrate and the light intensity $I$, and often taken to be of the Michaelis-Menten form with respect to both $S$ and $I$:
\begin{equation}\label{michaelis-menten}
\mu(S,I)=\frac{\bar\mu S}{S+K_S}\frac{ I}{I+K_I} \;,
\end{equation}
where $K_S$ and $K_I$ are the associated half-saturation constants.
We will however not focus on this specific form and simply consider that $\mu(S,I)=\mu_S(S)\mu_I(I)$ is the product of a light-related function $\mu_I(I)$ and a substrate-related function $\mu_S(S)$, such that $\mu_S(0)=0$ and $\mu_S$ is increasing and bounded (with $\lim_{S\rightarrow\infty}\mu_S(S)=\bar \mu$). A generalization of the function $\mu_I(I)$ will also be proposed later on.
We will now provide extensions to this model so that it better fits the production problem in the high-density photobioreactors under study.
\subsection{Adding respiration}
The central role played by respiration in the computation and optimization of
the productivity of a photobioreactor has been known for a long time (see
e.g. \cite{GroSoe85}), since it tends to reduce the biomass. We therefore
introduce respiration by the microalgae into our model. In contrast to
photosynthesis, this phenomenon takes place with or without light: from a
carbon point of view, it converts biomass into carbon dioxide, so that we
account for it through a $-rX$ term representing a loss of biomass. The biomass
dynamics then become
\[
\dot{X} = \mu(S,I) X -rX -D X\;.
\]
Remark that mortality is also included in the respiration term; mortality can be high in photobioreactors, due to their high biomass density.
\subsection{Adding light attenuation}
When studying high concentration reactors, light
cannot be considered to have the same intensity at every point of the reactor. We consider a horizontal planar
photobioreactor (or raceway) with constant horizontal section $A$ over the
height of the reactor and vertical incoming light. We then represent
light attenuation following an exponential Beer-Lambert law \cite{Lambert1760},
where the attenuation at some depth $z$ comes from the total biomass $Xz$ per
surface unit contained in the layer of depth $[0,z]$:
\begin{equation} \label{eq:Iz}
I(Xz) = I_0\mathrm e^{-aXz}\;,
\end{equation}
where $I_0$ is the incident light and $a$ is a light attenuation
coefficient. In microalgae, chlorophyll is the main cause of this shading
effect and, in model (\ref{monod}), it is best represented by a fixed portion
of the biomass \cite{Bernard2009}, which yields the direct dependence on $X$
in model (\ref{eq:Iz}).
With such a hypothesis on the light intensity that reaches depth $z$, growth
rates vary with depth: in the upper part of the reactor, higher light causes
higher growth than in the bottom part. Supposing that light attenuation
directly affects the growth rate \cite{Huismann2002}, the growth rate
in the form (\ref{michaelis-menten}) at a given depth $z$ can then be written
as
\[
\displaystyle \mu_z(S,I(Xz)) = \frac{ I(Xz)}{I(Xz)+K_I} \mu_S(S)\;.
\]
We see that this results in a depth-differentiated biomass growth rate in the
reactor, which could possibly yield spatial heterogeneity in the biomass
concentration; however, constant stirring is supposed to keep the
concentrations of $S$ and $X$ homogeneous. We can then compute the mean growth rate in the reactor:
\[
\mu(S,I_0,X) = \frac{1}{L} \int_0^L \mu_z(S,I(Xz))\mathrm dz\;,
\]
where $L$ is the depth of the reactor. It is this average growth rate that will be used
in the lumped model that we develop.
We then have:
$$
\begin{array}{lll}
\mu(S,I_0,X)&=& \displaystyle\mu_S(S)\,\frac{1}{L}\int_0^L\frac{I_0\mathrm e^{-aXz}}{I_0\mathrm e^{-a Xz}+K_I}\>\mathrm dz \\
&=& \displaystyle\frac{1}{aXL}\ln\left(\!\frac{I_0+K_I}{I_0\mathrm e^{-aXL}+K_I}\!\right)\!\mu_S(S)\;.
\end{array}
$$
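As a sanity check, the closed form above can be compared against a direct numerical quadrature of the depth integral. The following Python sketch does so; the parameter values are purely illustrative assumptions, and the common factor $\mu_S(S)$ is omitted since it cancels in the comparison.
\begin{verbatim}
# Minimal numerical check of the depth-averaged growth-rate formula.
import numpy as np
from scipy.integrate import quad

I0, K_I, a, X, L = 1500.0, 20.0, 0.5, 10.0, 0.1   # assumed values

integrand = lambda z: I0*np.exp(-a*X*z) / (I0*np.exp(-a*X*z) + K_I)
numeric, _ = quad(integrand, 0.0, L)
numeric /= L                                      # (1/L) * integral

closed = np.log((I0 + K_I) / (I0*np.exp(-a*X*L) + K_I)) / (a*X*L)
print(numeric, closed)                            # the two values coincide
\end{verbatim}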
\subsection{Considering varying light}
In order to specify the model further, we should now indicate what the varying light looks like. Classically, daylight is considered to
vary as the square of a sinusoidal function, so that
\[
I_0(t)=\left(\max\left\{\sin\left(\frac{2\pi t}{{T}}\right),0\right\}\right)^2\;,
\]
where $T$ is the length of the day. The introduction of such a varying light would however render the computations analytically intractable. Therefore, we approximate the light source by a step function:
$$
I_0(t)=\left\{\begin{array}{lllll}
\bar I_0,&\ & 0\le t < \bar {T}&\mbox{\quad ---\quad light phase}\;,\\
0,&\ & \bar {T}\le t < {T}&\mbox{\quad ---\quad dark phase}\;.
\end{array}\right.
$$
In a model where the time-unit is the day, ${T}$ will be equal to $1$. At the equinoxes, we have $\bar {T}=\frac{{T}}{2}$, but this quantity obviously depends on the time of the year.
\subsection{Reduction and generalization of the model}
The system for which we want to build an optimal controller is therefore
\begin{equation}
\label{our_monod}
\left\{
\begin{array}{lll}
\dot S& =&\displaystyle D(S_{in}-S)-k \frac{1}{aXL}\ln\left(\!\frac{I_0(t)+K_I}{I_0(t)\mathrm e^{-aXL}+K_I}\!\right)\! \mu_S(S) X\;,\\
\dot X& =& \displaystyle\frac{1}{aXL}\ln\left(\!\frac{I_0(t)+K_I}{I_0(t)\mathrm e^{-aXL}+K_I}\!\right)\!\mu_S(S)X-rX-DX\;.
\end{array}
\right.
\end{equation}
However, in order to maximize productivity, it is clear that the larger $S$
the better, since it translates into larger growth rates of the biomass
$X$. Hence, the inflow substrate concentration $S_{in}$
should always be kept very large, so as to keep the substrate in the
region where $\mu_S(S)\approx \bar \mu$. We can then concentrate on the reduced
model
\[
\dot X = \frac{\bar \mu}{a L}\ln\frac{I_0(t)+K_I}{I_0(t)\mathrm e^{-aXL}+K_I}-rX-DX\;,
\]
where the only remaining control is the dilution $D$ and which then
encompasses all the relevant dynamics for the control problem.
As was seen in \cite{Masci2010}, the relevant concentration in the photobioreactor with
Beer-Lambert light attenuation is not the volumic density $X$ but rather the surfacic
density $x=XL$: the evolution of this quantity is independent of the reactor's
depth, whether we consider a deep and very diluted reactor or a shallow
high-concentration reactor with identical surfacic density:
\begin{equation}\label{reduced-monod}
\begin{array}{lll}
\dot x&=& \frac{\mathrm d(XL)}{\mathrm dt}=L\left(\frac{\bar\mu}{aL}\ln\frac{I_0(t)+K_I}{I_0(t)\mathrm e^{-aXL}+K_I}-rX-DX\right)\\
&=& \displaystyle\frac{\bar\mu}{a}\ln\displaystyle\frac{I_0(t)+K_I}{I_0(t)\mathrm e^{-ax}+K_I}-rx-Dx\;,
\end{array}
\end{equation}
which is indeed independent of the depth $L$.
The reduced model (\ref{reduced-monod}) can be rewritten, taking advantage of the special form of the varying light source, as
\begin{equation}\label{reduced-monod-bang}
\dot x= \left\{\begin{array}{llll}
\displaystyle f(x)-rx-Dx,&\ & 0\le t < \bar {T}\;,&\\
-rx-Dx,&\ & \bar{T}\le t<{T}\;,&
\end{array}\right.
\end{equation}
with
\[
f(x)= \frac{\bar \mu}{a}\ln\frac{\bar I_0+K_I}{\bar I_0\mathrm e^{-ax}+K_I}\;.
\]
This model is quite simple except for the only nonlinear term, which directly
comes from the form of $\mu_I$ and from the Beer-Lambert law (\ref{eq:Iz}). In order to generalize our approach to other light responses ({\it e.g.} for high-density photobioreactors with possible high-light inhibition, see \cite{Bernard2011}) and not to restrict ourselves to the Beer-Lambert law, we simply retain that the function
$f(x)$ is zero at zero,
increasing at $0$, bounded and strictly concave in $x$. Such a choice of $f(x)$ yields the following somewhat trivial property, which will prove important in the following, and which is linked to $f(x)$ being concave with $f(0)=0$ and $f'(0)>0$:
\begin{Property}\label{PropTRIVIALE}
With $f:I \! \! R^+\rightarrow I \! \! R$ strictly concave and such that $f(0)=0$ and $f'(0)>0$, we have that, for any $x>0$:
\[
f'(0)>\frac{f(x)}{x}>f'(x)\;,
\]
which also shows that $\liminf_{x\rightarrow+\infty}f'(x)\leq 0$.
Moreover \[
\frac{\mathrm d}{\mathrm dx}\left(\frac{f(x)}{x}\right)<0\;.
\]
\end{Property}
\begin{proof} We have $f(x)=f(0)+\int_0^x f'(s)\mathrm ds$ with $f(0)=0$ and $f'(0)>f'(s)>f'(x)$ for all $0<s<x$ due to the concavity, so that $f(x)<\int_0^x f'(0)\mathrm ds=f'(0)x$ and $f(x)>\int_0^x f'(x)\mathrm ds=f'(x)x$.
The decrease of $\frac{f(x)}{x}$ follows from the explicit computation $\frac{\mathrm d}{\mathrm dx}\left(\frac{f(x)}{x}\right)=\frac{f'(x)x-f(x)}{x^2}$, which is negative thanks to the previous property.
\end{proof}
The four properties that we impose on $f(x)$ should
be expected from the net growth in a photobioreactor: no growth should take place in the absence of biomass; $f$ should be increasing
because additional biomass should lead to more growth (at least at low densities); it should be bounded
because, when $x$ is very large, the bottom of the reactor is in the dark, so
that adding new biomass simply increases the dark zone without allowing additional growth; and the per-capita growth rate $\frac{f(x)}{x}$ should be decreasing because additional biomass slightly degrades the environment
for all through the shading it causes.
As a generalization of the model (\ref{reduced-monod}), we consider that, during the day of length $\overline{T}<T$, the system is written as
\begin{equation}\label{day}
\dot x=f(x)-rx-ux\;,
\end{equation}
with $f(0)=0, f'(x)>0, f''(x)< 0$, $f(x)$ bounded, $r>0$ and $x,u\,\in I \! \! R^+$ and, during the night of length $T-\overline{T}$, as
\begin{equation}\label{night}
\dot x=-rx-ux\;.
\end{equation}
In order to couple these two systems, we define $h(t)$ as a step function whose value is $1$ for $t<\overline{T}$ and $0$ for $t\geq \overline{T}$, which allows us to synthesize (\ref{day})-(\ref{night}) in the form
\begin{equation}\label{total}
\dot x=f(x)h(t)-rx-ux\;.
\end{equation}
As we will see, the central property that will be exploited is the strict concavity of $f(x)$; in the following, we will use the simpler term ``concavity'' to denote that property.
Lastly, in practice $u$ cannot take any value: it should be positive, but also upper-bounded since its value is determined by the maximum capacity of some pumps. In the following, we will consider that $0\leq u(t)\leq \bar{u}$ at all times (with $\bar u>0$).
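To fix ideas, the following Python sketch integrates (\ref{total}) with a simple forward Euler scheme, using the Beer-Lambert $f$ of (\ref{reduced-monod-bang}); the parameter values, the constant admissible dilution and the discretization step are assumptions chosen for illustration only.
\begin{verbatim}
# Forward-Euler simulation of xdot = f(x) h(t) - r x - u(t) x.
import numpy as np

mu_bar, a, I0, K_I = 1.7, 0.5, 1500.0, 20.0   # assumed values
r, T, Tbar = 0.07, 1.0, 0.5

f = lambda x: (mu_bar/a)*np.log((I0 + K_I)/(I0*np.exp(-a*x) + K_I))
h = lambda t: 1.0 if (t % T) < Tbar else 0.0  # day/night switch

def simulate(x0, u, days=30, dt=1e-3):
    x, t = x0, 0.0
    for _ in range(int(days*T/dt)):
        x += dt*(f(x)*h(t) - r*x - u(t)*x)
        t += dt
    return x

print(simulate(5.0, lambda t: 0.4))  # constant admissible dilution
\end{verbatim}
Starting from any admissible initial biomass, such a trajectory settles on a $T$-periodic regime after a few simulated days; this is precisely the kind of regime that will be optimized below.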
\section{Productivity optimization in constant light environment}\label{sec:constant}
In a constant light environment ({\it i.e.} $h(t)$ is constant), we will be able to exploit the system at a
steady state. In that case, the dynamics are strictly imposed by (\ref{day}),
and the constant $u$ and corresponding equilibrium value $x^*(u)$ are chosen
in such a way that the maximum of the biomass flow rate at the output of the
reactor with upper surface $A$ is reached. This flow rate is defined as $\max_{u}
(uA) x^*(u)$ and is the object of this productivity analysis.
Since $ux=f(x)-rx$ at equilibrium, maximizing $uA x$ amounts to maximizing $f(x)-rx$. Since $f(x)$ is concave, this is equivalent to finding the unique solution of
\[
f'(x)=r\;,
\]
when it exists. Noting that $\lim_{x\rightarrow+\infty}f'(x)\leq 0$ and that
$f'(\cdot)$ is a decreasing function of $x$ because of the concavity of $f$ (see Property \ref{PropTRIVIALE}), this
last equality has a positive solution if $f'(0)>r$ is satisfied. This
condition is crucial since, without it, the only equilibrium of the system is
$x=0$, independently of the choice of $u$. Under this assumption, the biomass
density that maximizes the productivity in a constant
light environment is
\begin{equation}\label{xsigma}
x_\sigma=(f')^{-1}(r)\;,
\end{equation}
so that $f(x)-rx$ is increasing for $x<x_\sigma$ and decreasing for $x>x_\sigma$, a fact that we will use extensively in the following. We then obtain
\begin{equation}\label{usigma}
u_\sigma=\frac{f(x_\sigma)}{x_\sigma}-r=\frac{f\left[(f')^{-1}(r)\right]}{(f')^{-1}(r)}-r\;,
\end{equation}
which is positive by definition of $x_\sigma$. In that case, the optimal instantaneous surfacic productivity is $u_\sigma x_\sigma= f(x_\sigma)-r x_\sigma$.
Taking the actuator constraint into account, if $u_\sigma\leq \bar{u}$, this control yields the optimal productivity. Otherwise, since the equilibrium $x^*(u)$ satisfies
\[
\frac{f(x^*(u))}{x^*(u)}-r=u\;,
\]
with $\frac{f(x)}{x}$ decreasing because of Property \ref{PropTRIVIALE}, $x^*(u)$ is a decreasing function of $u$ which is larger than $x_\sigma$ for $u<u_\sigma$. The productivity $ux^*(u)=f(x^*(u))-rx^*(u)$ is then an increasing function of $u$, because $f(x)-rx$ is decreasing in $x$ beyond $x_\sigma$, and the optimal productivity is obtained with $u=\bar{u}$.
For convenience, we will define two more equilibria beyond $x_\sigma$: the equilibria of (\ref{day}) for $u=0$ and $u=\bar{u}$, which we will denote $\bar x^0$ and $\bar x^{\bar{u}}$ respectively.
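For a concrete $f$, the quantities $x_\sigma$ and $u_\sigma$ are easily computed numerically. The following sketch does so for the Beer-Lambert $f$ of (\ref{reduced-monod-bang}), with the {\it Isochrysis galbana} parameters used later in Section~\ref{sec:simulations}; the bracketing interval passed to the root finder is an assumption.
\begin{verbatim}
# Solving f'(x) = r for x_sigma, then u_sigma = f(x_sigma)/x_sigma - r.
import numpy as np
from scipy.optimize import brentq

mu_bar, a, I0, K_I, r = 1.7, 0.5, 1500.0, 20.0, 0.07

f  = lambda x: (mu_bar/a)*np.log((I0 + K_I)/(I0*np.exp(-a*x) + K_I))
fp = lambda x: mu_bar*I0*np.exp(-a*x) / (I0*np.exp(-a*x) + K_I)  # f'(x)

x_sigma = brentq(lambda x: fp(x) - r, 1e-6, 100.0)  # assumed bracket
u_sigma = f(x_sigma)/x_sigma - r
print(x_sigma, u_sigma)   # ~14.93 and ~0.9066
\end{verbatim}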
Note that the study of the present section is in line with our work in
\cite{Masci2010}, where we considered productivity optimization in
constant light for a Droop model with light attenuation. In that study, the
analysis was considerably more complicated by the link between shading and the nitrogen content
of the algae, so that both $S_{in}$ and $D$ had to be taken into account.
\section{Productivity optimization in day/night environment}\label{sec:varying}
In an environment with varying light, a non-constant input must be considered. Here we consider the case where the photobioreactor is operated on the long term, with a daily biomass production from the reactor outlet. The problem that we thus consider is the
maximization of the biomass production over a single day:
\[
\max_{u(t)\,\in\,[0,\bar{u}]} \int_0^T (u(t)A) x(t)\mathrm dt\;.
\]
We are then looking for a periodic regime where the photobioreactor is operated identically each day. This means that the initial condition at the beginning of the day should equal the final condition at the end of the day. This then requires
that we add the constraint
\[
x(T)=x(0)\;.
\]
In actual applications, the length of the bright phase will change slightly
from one day to the next. This will probably impose a slight change of biomass
at the beginning of the next day but, in this preliminary study, we will
consider that such a phenomenon has little effect on the qualitative form of
the solutions.
We therefore are faced with the following Optimal Control Problem:
\begin{equation}\label{OCP}
\begin{array}{l}
\displaystyle\max_{u(t)\,\in\,[0,\bar{u}]} \int_0^T u(t) x(t)\mathrm dt\\
\hspace*{1cm}\mbox{with}\hspace*{0.5cm}\dot x = f(x)h(t)-rx-ux\;,\\
\hspace*{2.15cm}x(T)=x(0)\;,\\
\end{array}
\end{equation}.
\subsection{Dealing with the T-periodicity}
In order to solve this problem, it is convenient to observe that $x(T)=x(0)=x_0$
cannot be achieved for all values of $x_0$, even without requiring
optimality. To see this, we consider the best-case scenario, the one where $u=0$ for all $t$, which yields the
largest value of $x(T)$ since no biomass is taken out of the system (this is
also seen by comparing the system with $u=0$ to systems with any $u=u(t)$); the value of
$x(T)$ obtained in the closed photobioreactor must then be larger than $x_0$
for (\ref{OCP}) to have a chance to have a solution starting at that $x_0$.
In this section, we will give a condition that guarantees that the set of initial conditions $x_0$ that yield $x(T)>x_0$ in a closed photobioreactor is contained in an interval $\mathcal I=[0,x_{0max}]$, inside which the initial condition of the solution of (\ref{OCP}) must lie.
The first observation that we can make is that, for $x_0\geq \bar x^0$ (the equilibrium of system (\ref{day}) with $u=0$), we have $x(T)<x_0$ because $x(t)$ is then decreasing all along the solution. The upper-bound $x_{0max}$, if it exists, is therefore smaller than $\bar x^0$.
In order to guarantee that the interval is non-empty (or, equivalently, that $x_{0max}>0$), we concentrate on what happens for $x_0$
in a small neighborhood of $0$. The dynamics (\ref{day}) with
$u=0$ can then be approximated by the linearization
\[
\dot x=( f'(0)-r)x\;,
\]
so that $x(\overline{T})=x_0\mathrm e^{(f'(0)-r)\overline{T}}$ and $x(T)=x(\overline{T})\mathrm e^{-r(T-\overline{T})}= x_0\mathrm e^{( f'(0)-r)\bar T}\mathrm e^{-r(T-\overline{T})}$. We then have $x(T)>x_0$ for $x_0$ small if the exponential factor is larger than $1$, that is if the following assumption holds:
\begin{assumption}\label{fprimemin} The growth function $f(x)$ is $\mathcal{C}^1$, bounded, satisfies $f(0)=0$, $f'(0)>0$, $f''(x)<0$ for all $x\geq 0$ and
\begin{equation}\label{mumin}
f'(0) \bar{T}>rT\;.
\end{equation}
\end{assumption}
This condition is quite natural since it imposes that, when the population is
small, that is when the per-capita growth rate is the largest, growth during
the daylight period exceeds the net effect of the respiration that takes place over the whole day.
We have shown that this condition is sufficient, but it is also necessary for $x_{0max}>0$ since Property \ref{PropTRIVIALE} imposes that
\[
\dot x=f(x)h(t)-rx-ux\leq (f'(0)h(t)-r-u)x
\]
so that, for a given $x_0$, the value of $x(T)$ in a closed photobioreactor is always smaller for the nonlinear system than the value $x^l(T)$ obtained with its linear upper approximation. A necessary condition for $x(T)>x_0$ is therefore $x^l(T)>x_0$, which amounts to Assumption \ref{fprimemin}; the latter is therefore necessary for $x_{0max}>0$.
Further properties are summed up in the following proposition.
\begin{prop}
If Assumption \ref{fprimemin} is satisfied
\begin{itemize}
\item System (\ref{total}) has a unique initial condition $x_0=x^f_0$ which is such that $x(T)=x_0$ if $u(t)=0$ for all times. It is the unique fixed point of
\[
\int_{x_0}^{x_0\mathrm e^{r(T-\overline{T})}}\frac{1}{f(\xi)-r\xi}\>\mathrm d\xi=\overline{T}\;.
\]
\item This value $x^f_0$ is such that, with $u(t)=0$ at all times, $x(T)>x_0$ for all $x_0<x^f_0$, and $x(T)<x_0$ for all $x_0>x^f_0$ so that $x_{0max}=x^f_0$.
\item If
\begin{equation}\label{x0max}
f'(0)\overline{T}>(r+\bar{u})T\;,
\end{equation}
there is a unique $x_0=x_{0min}$ such that $x(T)=x_0$ if $u(t)=\bar u$ at all times. Moreover, the initial condition $x_0$ of the solution of (\ref{OCP}) belongs to the interval $[x_{0min},x_{0max}]$.
\end{itemize}
\end{prop}
All this is obtained by analyzing the dependence on $x_0$ in
\[
\int_{x_0}^{x(T)\mathrm e^{r(T-\overline{T})}}\frac{1}{f(\xi)-r\xi}\>\mathrm d\xi=\overline{T}\;,
\]
where $x(T)$ is seen as a function of $x_0$.
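Numerically, $x^f_0=x_{0max}$ is easily located by simulating one day of the closed reactor ($u=0$) and root-finding $x(T)-x_0$; a sketch with the same assumed parameters as before:
\begin{verbatim}
# Locating x_0^f = x_0max: root of x(T) - x_0 for the closed reactor.
import numpy as np
from scipy.optimize import brentq

mu_bar, a, I0, K_I = 1.7, 0.5, 1500.0, 20.0
r, T, Tbar = 0.07, 1.0, 0.5

f = lambda x: (mu_bar/a)*np.log((I0 + K_I)/(I0*np.exp(-a*x) + K_I))

def x_after_one_day(x0, dt=1e-4):
    x = x0
    for k in range(int(T/dt)):
        h = 1.0 if k*dt < Tbar else 0.0
        x += dt*(f(x)*h - r*x)       # u = 0: closed photobioreactor
    return x

x0f = brentq(lambda x0: x_after_one_day(x0) - x0, 1e-3, 200.0)
print(x0f)   # the largest admissible initial biomass x_0max
\end{verbatim}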
Finally, since $x_0<\bar x^0$ in the solution of (\ref{OCP}), $x(t)<\bar
x^0$ at all times. Indeed, in the bright phase, $x(t)$ cannot go through
$\bar x^0$ since the choice of $u$ that maximizes $\dot x$ is $u=0$, which
simply imposes convergence in infinite time toward $\bar x^0$; in the dark
phase, $\dot x<0$, which also prevents $x(t)$ from going through $\bar x^0$.
\subsection{Maximum principle}
In this section, we will show that the solution of (\ref{OCP}) can have one of
three patterns. In order to solve problem (\ref{OCP}), we will use
Pontryagin's Maximum Principle (PMP, \cite{Pont}) in looking for a control law
maximizing the Hamiltonian
\[
H(x,u,\lambda,t)\triangleq \lambda\left(f(x)h(t)-rx-ux\right)+ux\;,
\]
with the constraint
\begin{equation}\label{x-lambda}
\left\{\begin{array}{lll}
\dot x&=& f(x)h(t)-rx-ux\;,\\
\dot \lambda&=&\lambda\left(-f'(x)h(t)+r+u\right)-u\;.
\end{array}\right.
\end{equation}
In addition, we should add the constraint
\[
\lambda(T)=\lambda(0)\;,
\]
as shown in \cite{Gilbert}. We see from the form of the Hamiltonian that
\[
\frac{\partial H}{\partial u}=(1-\lambda)x\;,
\]
so that, when $\lambda>1$, we have $u=0$; when $\lambda<1$, we have $u=\bar{u}$; and when $\lambda=1$ over some time interval, an intermediate singular control might be applied.
Of paramount importance in the proofs will be the constancy of the
Hamiltonian. Indeed, it is known that the Hamiltonian is constant along
optimal solutions as long as time does not explicitly intervene in the dynamics or
the payoff. We then have that the Hamiltonian is constant on the interval
$[0,\overline{T})$ (with $h(t)=1$) and on the interval $(\overline{T},T]$
(with $h(t)=0$). The Hamiltonian presents a discontinuity at $\overline{T}$.
We can first give some general statements about where and when switches can occur:
\begin{proposition}\label{prop1} If Assumption \ref{fprimemin} is satisfied then, in the solution of problem (\ref{OCP})
\begin{itemize}
\item[(i)] No switch from $u=0$ to $u=\bar{u}$ can take place in the dark phase;
\item[(ii)] Switches from $u=0$ to $u=\bar{u}$ in the bright phase take place with $x\leq x_\sigma$;
\item[(iii)] Switches from $u=\bar{u}$ to $u=0$ in the bright phase take place with $x\geq x_\sigma$.
\end{itemize}
\end{proposition}
\begin{proof}
All these results come from the analysis of $\dot \lambda$ at the switching instant. Indeed, at a switching instant, we have $\lambda=1$ so that:
\begin{equation}\label{atswitch}
\dot \lambda=\lambda\left(
-f'(x)h(t)+r\right)\;.
\end{equation}
The form of $\frac{\partial H}{\partial u}$ indicates that a switch from $u=0$ to $u=\bar{u}$ (resp. from $u=\bar{u}$ to $u=0$) can only occur if $\dot \lambda\leq 0$ (resp. $\dot \lambda\geq 0$).
In the dark phase, (\ref{atswitch}) becomes $\dot \lambda=r\lambda>0$; no switch from $u=0$ to $u=\bar{u}$ can therefore take place (this shows (i)).
In the bright phase, $\dot \lambda\leq0$ at a switching instant if $f'(x)\geq r$, which is equivalent to having $x\leq x_\sigma$; this is therefore a condition for a switch from $u=0$ to $u=\bar{u}$ (this shows (ii)).
Conversely, in the bright phase, $\dot \lambda\geq 0$ at a switching instant if $f'(x)\leq r$, which is equivalent to having $x\geq x_\sigma$; this is therefore a condition for a switch from $u=\bar{u}$ to $u=0$ (this shows (iii)).
\end{proof}
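The two ingredients of this proof, the switching function $\frac{\partial H}{\partial u}$ and the value (\ref{atswitch}) of $\dot \lambda$ at a switching instant, can be checked symbolically; a small sympy sketch:
\begin{verbatim}
# Symbolic check of dH/du and of lambda_dot at lambda = 1.
import sympy as sp

x, lam, u, r, h = sp.symbols('x lambda u r h')
f = sp.Function('f')

H = lam*(f(x)*h - r*x - u*x) + u*x
print(sp.expand(sp.diff(H, u)))           # x - lambda*x = (1 - lambda) x
lam_dot = -sp.diff(H, x)                  # costate equation
print(sp.simplify(lam_dot.subs(lam, 1)))  # r - h*f'(x)
\end{verbatim}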
In the following, we will propose candidate solutions to the PMP by making various hypotheses on the value of $\lambda(0)=\lambda_0$.
\begin{theorem}\label{thm1} If Assumption \ref{fprimemin} is satisfied then three forms of solutions of problem (\ref{OCP}) are possible:
\begin{itemize}
\item Bang-bang with $u(0)=0$, a single switch to $u(t)=\bar{u}$ taking place strictly before $\overline{T}$ and a single switch back to $u(t)=0$ taking place strictly after $\overline{T}$;
\item Bang-singular-bang with $u(0)=0$, a switch to $u(t)=u_\sigma$ taking
place first strictly before $\overline{T}$, a single switch to
$u(t)=\bar{u}$ also taking place strictly before $\overline{T}$ and a
single switch back to $u(t)=0$ taking place strictly after $\overline{T}$.
\item Constant control at $u(t)=\bar{u}$
\end{itemize}
In the first two cases, the switch back to $u(t)=0$ takes place with $x$ strictly smaller than it was at the moment of switch to $u(t)=\bar{u}$.
\end{theorem}
The proof of this result is detailed in the appendix and is obtained by considering all possible situations that are consistent with Proposition \ref{prop1} and eventually eliminating all possibilities but the three cases detailed in Theorem \ref{thm1}.
\subsection{Synthesis: the three possible optimal solutions}
Without needing an explicit form of $f(x)$, we have been able to obtain the
qualitative form of the optimal solution analytically. In the bang-bang case,
it is made of four phases:
\begin{solution}{Bang-bang}\label{sol1}
\begin{enumerate}
\item Growth with a closed photobioreactor until a sufficient biomass level is
reached ($u=0$, $\dot x>0$, $\lambda>1$, $\dot \lambda<0$);
\item Maximal harvesting of the photobioreactor with simultaneous growth ($u=\bar u$, $\dot x$ not determined, $\lambda<1$, $\dot \lambda<0$);
\item Maximal harvesting of the photobioreactor with no growth until a low level of
biomass is reached ($u=\bar u$, $\dot x < 0$, $\lambda<1$, $\dot \lambda>0$);
\item Passive photobioreactor: no harvesting, no growth, only respiration ($u=0$, $\dot x < 0$, $\lambda>1$, $\dot \lambda>0$).
\end{enumerate}
\end{solution}
The first two phases take place in the presence of light, the other two in the dark. In phase~3, we keep harvesting as much of the biomass produced during the light phase as possible, while not going below the level where the residual biomass is sufficient to efficiently start again the next day.
If the optimal solution contains a singular phase, the analytical approach has helped us to identify the qualitative form of the optimal productivity solution. It now contains five phases:
\begin{solution}{Bang-singular-bang}\label{sol2}
\begin{enumerate}
\item Growth with a closed photobioreactor until the $x_\sigma$ biomass level is reached ($u=0$, $x<x_{\sigma}$, $\dot x>0$, $\lambda>1$, $\dot \lambda<0$);
\item Maximal equilibrium productivity rate on the singular arc ($u=u_\sigma$, $x=x_{\sigma}$, $\dot x=0$, $\lambda=1$, $\dot \lambda=0$);
\item Maximal harvesting of the photobioreactor with simultaneous growth ($u=\bar u$, $x<x_{\sigma}$, $\dot x<0$, $\lambda<1$, $\dot \lambda<0$);
\item Maximal harvesting of the photobioreactor with no growth until a low level of biomass is reached ($u=\bar u$, $x<x_{\sigma}$, $\dot x<0$, $\lambda<1$, $\dot \lambda>0$);
\item Passive photobioreactor: no harvesting, no growth, only respiration ($u=0$, $x<x_{\sigma}$, $\dot x<0$, $\lambda>1$, $\dot \lambda>0$);
\end{enumerate}
\end{solution}
For this form of solution, we see that maximal instantaneous productivity is
achieved during the whole second phase, when the singular solution occurs
(Figure 3B). The solution with a singular arc then naturally seems to be the
most efficient one. It can however not always be achieved, for two reasons:
\begin{itemize}
\item if $\bar{u}<u_\sigma$, the singular control is not
admissible. The solution should then be bang-bang, and the
application of $u=\bar{u}$ plays the same role as $u_\sigma$ in the solution
with a singular arc, since $u=\bar{u}$ is then the optimal solution to the
instantaneous productivity optimization problem (Figure 3C);
\item if growth is not sufficiently stronger than respiration (with (\ref{mumin}) satisfied however), a bang-bang solution that stays below $x_\sigma$ is optimal, because there is not enough time for the state to reach $x_\sigma$ (Figure 3A).
\end{itemize}
The only remaining solution is the one with $u$ constant at $\bar u$; it is characterized by
\begin{solution}{$u=\bar u$}\label{sol3}
\begin{enumerate}
\item Maximal harvesting of the photobioreactor with simultaneous growth ($u=\bar u$, $x<x_{\sigma}$, $\dot x>0$, $\lambda<1$, $\dot \lambda<0$);
\item Maximal harvesting of the photobioreactor with no growth ($u=\bar u$, $x<x_{\sigma}$, $\dot x<0$, $\lambda<1$, $\dot \lambda>0$);
\end{enumerate}
\end{solution}
Since this solution relies on the satisfaction of condition (\ref{x0max}), it is clear that it can mainly occur if the actuator has been under-dimensioned and when the dark phase does not last too long.
\section{Discussion}\label{sec:large}
\subsection{Existence and uniqueness of the optimal solution}
Existence of an optimal solution is obvious since the achieved yield for
solutions satisfying $x(T)=x(0)$ is bounded between $0$ (obtained for
$x_0=x_{0max}$ and $u=0$) and $\bar x^0\bar u A T$ (not achievable, but an
upper bound nonetheless because $x(t)\leq \bar x^0$ and $u(t)\leq \bar u$ at
all times). The productivity level therefore has a finite supremum, which
translates into a maximum since the set of admissible control
laws $u(t)$ with values in $[0,\bar u]$ is closed. The optimal control problem therefore has
a solution.
We have also carried out a tedious analysis of the impossibility of the existence
of two different solutions of the maximum principle, by showing that, once one
solution has been exhibited, variations of the switching times
cannot produce a second solution of the PMP satisfying both periodicity
conditions (on $x$ and $\lambda$). Once we have found a
solution of the PMP, it is therefore optimal. This uniqueness result is not
crucial in the following discussions; we therefore do not detail it.
\subsection{Large but limited $\bar u$}
Too small an upper bound $\bar u$ for $u(t)$ gives rise to two kinds of
optimal solutions where the constraint on the control really limits the
productivity. First, when $\bar u<u_\sigma$, there could exist optimal
solutions that go through $x_\sigma$ but cannot stay at this optimal level
because the required control value is not admissible. Secondly, an optimal
solution (Solution 3) could require the actuator to always be open.
In order to prevent the first case, it suffices to design the actuator and
chemostat so that $\bar u>u_\sigma$; note however that this does not imply
that the solution of (\ref{OCP}) goes through a singular phase: growth could
very well be too slow for the solution to reach the $x_\sigma$ level during
the interval $[0,\bar{T}]$. In the second case, the constant control
$u(t)=\bar u$ for all times also indicates that the actuator is not strong
enough since the only way the dilution can be efficient enough is by being
active during the whole dark phase; a solution where the
dilution succeeds in harvesting the chemostat at the beginning of the night is
certainly to be preferred since it prevents respiration from taking too big a
role. Condition (\ref{x0max}) should then not be satisfied, so that no periodic
solution with $u=\bar u$ exists. In this section, we thus require that
\[
\bar u>\max\left(\frac{f\left[(f')^{-1}(r)\right]}{(f')^{-1}(r)}-r,\frac{f'(0)\overline{T}}{T}-r\right)\;.
\]
It is however convenient to remember from Remark~1 that $\frac{f\left[(f')^{-1}(r)\right]}{(f')^{-1}(r)}<f'(0)$, and that we trivially have $\frac{f'(0)\overline{T}}{T}<f'(0)$. We will therefore make the simpler hypothesis:
\begin{equation}\label{bigubar}
\bar u>f'(0)-r\;.
\end{equation}
Note first that this implies that, when applying $u=\bar u$, $\dot x=f(x)-rx-\bar u x<f'(0)x-rx-\bar u x<0$, so that $\bar x^{\bar u}=0$ and the biomass is always decreasing when the maximal dilution is applied.
When (\ref{bigubar}) is satisfied, only two forms of solutions of the PMP are possible: the bang-bang solution that never reaches the $x_\sigma$ level because the net biomass growth is too weak (respiration included) and the bang-singular-bang solution.
\subsection{Unconstrained dilution rate}
When considering that $u$ can be unbounded, there is the possibility for $\delta$ impulses to occur in the solution of the PMP. We will not give any further mathematical developments, but the picture readily seems clear.
In the limit, bang-singular-bang solutions would have the following form: the
chemostat would be closed in the dark phase and at the beginning of the light phase until
$x=x_\sigma$ is reached. The solution would stay on $x=x_\sigma$ until
$t=\overline{T}$ is reached. Any earlier impulse would force the solution to
have more bang phases than possible, as demonstrated in the proof of Theorem \ref{thm1}. At $t=\overline{T}$,
a Dirac impulse is applied to bring $\lambda$ to $1$ so that $u=0$ is then
applied during the whole dark phase. The reactor then has three modes: a batch
mode when $x$ is different from $x_\sigma$, a continuous mode, on the singular
arc and an instantaneous harvest at the transition between the bright and dark
phases. The harvest consists in instantaneously replacing the medium with
biomass-free, but substrate-rich, medium, to bring the biomass concentration
to the mandated level.
The bang-bang solutions that stay below $x_\sigma$ here become pure batch solutions with instantaneous harvest, the reactor being always closed except at $t=\overline{T}$ (where $\lambda$ is brought to $1$ through an impulse).
We see from this analysis that, depending on the chosen microalgae and chemostat design, the optimal productivity is either obtained in batch mode or through the introduction of some continuous mode between batch phases.
\section{Simulations}\label{sec:simulations}
\subsection{{\it Isochrysis galbana}}\label{sub:low}
We will now show the forms of Solutions 1, 2, and 3 in the $(t,x)$ space. For that, we start with the dynamical model (\ref{reduced-monod-bang}) for the growth of {\it Isochrysis galbana}, with the parameters taken as in \cite{Masci2010} and summarized in Table~\ref{Galbana}.
\begin{table}
\begin{center}$ \begin{array}{|c|c|c|}
\hline
\mbox{Parameter}&\mbox{Value}&\mbox{Units}\\
\hline
\hline
\bar\mu & 1.7 &day^{-1}\\
a&0.5&m^2/g[C]\\
\bar I_0&1500& \mu \mbox{mol quanta }m^{-2}s^{-1}\\
K_I&20&\mu \mbox{mol quanta }m^{-2}s^{-1}\\
r&0.07 &day^{-1}\\
\overline{T}&0.5&day\\
\bar u&2&day^{-1}\\
\hline
\end{array}$
\end{center}
\caption{Growth and bioreactor parameters for {\it Isochrysis galbana}}\label{Galbana}
\end{table}
With such parameters, the critical values $x_\sigma=14.93$ and
$u_\sigma=0.9066$ are easily computed, and the optimal solution is represented
by the blue curves in Figure~\ref{fig_case1}. It presents the
bang-singular-bang structure of Solution 2, with $u=0$ until time $t=0.282$,
then $u=u_\sigma$ until $t=0.420$, $u=\bar u$ until $t=0.584$ and finally
$u=0$. The corresponding daily surfacic productivity is then $6.33\,
g[C]/m^2$ for a total cumulated flow $\int_0^1 u(\tau)\>\mathrm d\tau$ equal
to $0.453$; that is, $45\%$ of the medium has been renewed during the $24$ hours. We then considered the application of a
constant control during the $24$ hours and optimized the level of this control
numerically. The optimum was achieved for $\hat u=0.461$, which yields a daily
productivity equal to $6.26\, g[C]/m^2$ and a cumulated flow also equal to
$0.461$. Three things should be noticed from this comparison: (i) the two optimal
solutions are quite different in Figure~\ref{fig_case1}, though the $x$ values
stay in the same
range; (ii) the productivity increase generated by the optimal solution is
very weak ($1.11\%$); (iii) the total flows required by the two
strategies are very similar. Actually, the small improvement of the
productivity is not surprising: the necessity of shutting down the
chemostat at night is linked to the respiration that would consume the
biomass. In the present case, respiration is weak, so that, during one
night, only $3.44\%$ of the biomass is consumed. This phenomenon is here
marginal, so that the optimal control approach developed to limit it provides
little gain, and what really matters is the total flow that goes through the
photobioreactor.
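These results are easy to reproduce numerically; the following Python sketch integrates the reported bang-singular-bang schedule with a forward Euler scheme (the scheme, the step and the burn-in horizon are assumptions of the sketch) and iterates the one-day map until the periodic regime is reached.
\begin{verbatim}
# Daily surfacic productivity of the reported bang-singular-bang schedule.
import numpy as np

mu_bar, a, I0, K_I, r = 1.7, 0.5, 1500.0, 20.0, 0.07
Tbar, u_bar, u_sigma = 0.5, 2.0, 0.9066

f = lambda x: (mu_bar/a)*np.log((I0 + K_I)/(I0*np.exp(-a*x) + K_I))

def u_opt(t):                     # switching times reported above
    if t < 0.282: return 0.0
    if t < 0.420: return u_sigma
    if t < 0.584: return u_bar
    return 0.0

dt, x = 1e-4, 10.0
for day in range(50):             # iterate the day map to convergence
    harvest = 0.0
    for k in range(int(1.0/dt)):
        t = k*dt
        h = 1.0 if t < Tbar else 0.0
        u = u_opt(t)
        harvest += u*x*dt         # integral of u(t) x(t) over the day
        x += dt*(f(x)*h - r*x - u*x)
print(harvest)   # should approach the reported ~6.33 g[C]/m^2
\end{verbatim}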
\begin{figure}[t]
\centerline{\includegraphics[width=0.7\columnwidth]{fig_case1.pdf}}
\caption{Bang-singular-bang optimal solution (in blue) confronted with the
most productive constant dilution rate scenario (in red) for the microalgae {\it Isochrysis
galbana} and the parameters of Table \ref{Galbana}. At the top is the
evolution of the biomass and at the bottom that of the control. The black
dash-dotted lines represent the values of $x_\sigma$ and $u_\sigma$
respectively.}\label{fig_case1}
\end{figure}
A second case needs to be studied: the one where $\bar u$ is smaller than $u_\sigma$, so that the singular phase is
not possible. This yields Solution 1, the bang-bang case, when we take $\bar
u=0.8$. However, we expect that the productivity will not be degraded much,
since we should be able to do better than the aforementioned case with $\hat
u=0.461$, which is an admissible solution for the present control problem and
which gives a productivity barely below the optimal one with $\bar
u=2$. It is indeed the case: with switching times at $t=0.222$ and
$t=0.790$, a productivity of $6.30\, g[C]/m^2$ is achieved with a total flow of
$0.457$, still similar to the one observed in both previous cases.
If we now reduce $\bar u$ to $0.1$, a significant performance degradation
should be expected since there is an upper bound on the total flow. Indeed,
we obtained numerically that the optimal control here is the constant
control $u=0.1$, with a productivity of $4.35\, g[C]/m^2$, which is $68.7 \%$ of
the one obtained when $\bar u=2$.
\subsection{Importance of the respiration}\label{sub:high}
In this section, we explore the impact of a large value of the parameter $r$, which can be due to increased respiration or to high mortality, as is often the case in high-density cultures (we will however stick to the respiration terminology).
If we now consider a species that has all the characteristics of {\it Isochrysis
galbana} except a very large $r$ equal to $0.7\, day^{-1}$, we
expect the optimal strategy to have a much bigger impact on the
outcome. Indeed, respiration matters much more at night than in
the previous case, since it consumes $29.5\%$ of the biomass; hence the
importance of limiting the biomass level at night. We see in
Figure~\ref{fig_case3} that the optimal solution is here bang-bang, with a
short opening window at the end of the day and the beginning of the night
to harvest the produced biomass ($u=\bar u$ for $t\,\in[0.479,0.527]$). The
optimal production is here $0.607\, g[C]/m^2$ for a total flow of $0.096$, that
is a very small daily medium renewal, while the best constant control $\hat
u=0.095$ yields $0.519\, g[C]/m^2$. We see here that the daily total flows are
again almost equivalent, but that the productivity is improved by $17\%$
through the bang-bang approach, which is far from
negligible, especially since it is obtained with almost exactly the same hydraulic
effort as the constant dilution strategy. Though the two $x(t)$ solutions in
Figure~\ref{fig_case3} look similar, the larger population at night with
the constant control strategy explains why there is more respiration when the
control is constant than when it is bang-bang, hence less productivity.
\begin{figure}[t]
\centerline{\includegraphics[width=0.7\columnwidth]{fig_case3}}
\caption{Bang-bang optimal solution (in blue) confronted with the most productive constant dilution rate scenario (in red) for a high-respiration species ($r=0.7$) and $\bar u=2$}\label{fig_case3}
\end{figure}
\subsection{Bifurcation analysis}\label{sub:bif}
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\columnwidth]{bifurcation_rubar}}
\caption{Bifurcation diagram defining the four $(r,\bar u)$-parametric
regions where the different patterns of optimal solutions are encountered:
bang-singular-bang control (above the black line), bang-bang control
(between the black, red, and magenta lines), constant control (below the red
line) and no optimal solution (to the right of the magenta line)
}\label{bifurcation_rubar}
\end{figure}
In this section, we study more quantitatively the outcomes of Sections \ref{sub:low}
and \ref{sub:high}. We draw a 2D bifurcation diagram in
the parameters $r$ and $\bar u$ (the other parameters being fixed at the
values of Table~\ref{Galbana}) by identifying in which regions the different optimal solution
patterns appear; these regions are delimited by solid lines.
We first see in Figure~\ref{bifurcation_rubar} that, when $r>\frac{f'(0)\bar
T}{T}$, no solution is possible because condition (\ref{mumin}) is not
satisfied. Only the wash-out of the photobioreactor can occur, whatever the
control strategy. In the region below the red curve, the optimal solution is a
constant control $u(t)=\bar u$ during the whole day and night; this was
expected for these small values of $\bar u$ since the photobioreactor produces
a quantity of biomass during the day that cannot be taken out if the actuator
is not always open. The region where the solution is bang-bang is split into
two by the $\bar u=u_\sigma(r)$ curve (dashed-dotted blue line of Figure
\ref{bifurcation_rubar}): when $\bar u<u_\sigma(r)$, no singular phase is
possible and when $\bar u>u_\sigma(r)$ in that region, the singular phase is
theoretically possible but does not occur because the biomass does not reach
the $x_\sigma$ level. We also see that the bang-singular-bang pattern is
concentrated in a region where $r$ is
small and $\bar u$ large: the former allows for a reduced natural decrease of
the biomass concentration, so that little effort is needed to bring it
back up to $x_\sigma$ at the beginning of the light phase; the latter
allows for a very short phase of maximal harvesting, which leaves plenty
of time for a singular phase. Finally, we illustrated condition
(\ref{bigubar}) by a dashed-dotted cyan line; the figure confirms what we
evidenced earlier: when $\bar u$ is above that line, the optimal control does
not suffer from control limitations that intrinsically prevent a singular
phase (because $\bar u<u_\sigma$) or force the optimal control to be
constant.
\subsection{Importance of the open-reactor phase}
In the high-respiration case, we have identified a short opening window of
the bioreactor around the end of the day and the beginning of the night. In
this subsection, we show a simulation evidencing that the
main characteristic of this window is its length, and not so much the exact
time at which it takes place. We considered the case where $r=0.7$ and
computed the value of the productivity for switching times $t_1$ (from $u=0$ to
$\bar u$) between $0.44$ and $0.5$, and $t_2$ (from $\bar u$ to $0$) between $0.5$
and $0.56$. The productivity is illustrated by different color levels in
Figure~\ref{contour_prod}, dark blue corresponding to zero (wash-out of
the reactor or almost closed reactor) and purple to values above $0.6$. A
definite pattern appears in this figure: the productivity level is roughly
constant along lines of the form $t_2=\delta+t_1$. The productivity level thus
mainly depends on $t_2-t_1$, that is, on the opening duration of the reactor or,
equivalently, on the
total flow that goes through the reactor.
\begin{figure}[t]
\centerline{\includegraphics[width=0.7\columnwidth]{contour_prod}}
\caption{Productivity level contours for bang-bang solutions with the first switching time in abscissa and the second switching time in ordinate. The optimal productivity level computed in the previous section is obtained at the black star.}\label{contour_prod}
\end{figure}
\subsection{Near-optimal strategies}\label{sec:near}
We have seen that the daily flow that goes through the chemostat has great
importance for the productivity level of a solution. In order to confirm
this, we propose a strategy of the bang-bang type, which consists in
having the dilution equal to $\bar u$ on the interval
$[\overline{T}-\displaystyle\frac{\tilde u}{2\bar
u},\overline{T}+\displaystyle\frac{\tilde u}{2\bar u} ]$ and $0$ outside of
this interval. That way, the total flow that goes through the reactor is equal
to $\tilde u$, so that we can compare the productivity
obtained with that strategy and with constant control strategies that have the same
daily total flow. We did the computations for $\bar u=2$ and values of $\tilde
u$ that do not lead to the wash-out of the reactor, for both species of
subsections \ref{sub:low} (on the left of Figure \ref{near}) and
\ref{sub:high} (on the right of Figure \ref{near}). We see that, in both
cases, the constant control (in red) yields a less productive process than the
corresponding bang-bang strategy. This is especially true in the
high-respiration case, but is
also valid in the low-respiration case, so that, when the exact optimal
control law is not computed, it is advisable to choose a bang-bang law rather
than a constant control. This strategy is strongly advisable since the optimal
productivity level, represented by a dashed line in both figures, is almost
achieved by the near-optimal strategy in both cases for an appropriate
daily flow. This was expected in the high-respiration case, where the optimal
solution is bang-bang, because we had evidenced in Figure \ref{contour_prod}
that the actual timing of the beginning of the max-control window had little
influence on the productivity level; but it also holds in the
low-respiration case, where the optimal solution is bang-singular-bang: even in
this case, our proposed near-optimal strategy can almost achieve the optimal
productivity level. This last property might however not hold for a high-respiration
species whose optimal pattern is bang-singular-bang: in that case,
the corresponding best bang-bang near-optimal strategy might lead to large
values of $x(t)$, hence more respiration, at the times where the control should
be singular.
\begin{figure}[t]
\centerline{\includegraphics[width=0.45\columnwidth]{nearlow}\includegraphics[width=0.45\columnwidth]{nearhigh}}
\caption{Productivity levels of species of subsections \ref{sub:low} (on the
left) and \ref{sub:high} (on the right ): in red, the productivity attained
with the constant control $u=\tilde u$ and in blue the one with the
near-optimal strategy of Section \ref{sec:near}. The dashed-lines represent
the optimal productivity levels with $\bar u=2$}\label{near}
\end{figure}
\subsection{Beyond the photobioreactor}
Photobioreactors are not the only ecological systems that undergo natural periodic forcing. We will now derive an example that relies more on seasonality and that is built on the famous fishing-stock model and the Maximum-Sustainable-Yield (MSY, \cite{Clark}). The question here is therefore to determine how fishermen can best exploit a fishing stock while allowing it to survive. When basing the study on the logistic growth model
\[
\dot x=\alpha x\left(1-\frac{x}{K}\right)-qEx
\]
where $x$ is the size of the stock, $E$ is the fishing effort and $q$ is the fish catchability, it is determined that the maximization of the long-run catch is obtained by keeping the stock at $x=\frac{K}{2}$ with $E=\frac{\alpha}{2q}$. These are our $x_\sigma$ and $u_\sigma$ values. If we now consider that the fishing stock has a limited growing season (of length $\bar T$) during which it satisfies the logistic growth, and a season (of length $T-\bar T$) where only natural mortality and predation take place, we can set ourselves in the setting of the present paper. Indeed, we can then define $f(x)-rx=\alpha x\left(1-\frac{x}{K}\right)$, with $r$ the mortality rate during the non-growing season, and $u=qE$. Taking $\alpha=6$, $K=10$, $r=1$, $\bar T=0.2$, $T=1$ and $\bar u=2$, we first notice in Figure \ref{fig_fishing} (black curve) that, in the absence of fishing, the fishing stock settles at a level that is way below $K$, and even below $\frac{K}{2}$, the MSY value, which shows that no singular phase is possible in the optimal solution. The bang-bang optimal solution is then computed (blue curve), with switching times at $t=0.188$ and $0.288$, and compared with the best constant harvesting solution (red curve). In both cases, the fishing stocks are very similar during the growing season, but the optimal harvesting method reduces the stock at the beginning of the non-growing season, so that mortality does not have time to do a lot of damage; in the end, it improves the total catch over the season by 37\%.
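The following Python sketch reproduces this seasonal harvesting experiment with the reported switching times; the integration scheme, the time step and the burn-in horizon are assumptions of the sketch.
\begin{verbatim}
# Seasonal fishing-stock model with bang-bang harvesting u = qE.
import numpy as np

alpha, K, r = 6.0, 10.0, 1.0
Tbar, T, u_bar = 0.2, 1.0, 2.0
t1, t2 = 0.188, 0.288            # reported switching times

def xdot(x, t, u):
    if (t % T) < Tbar:           # growing season: logistic growth
        return alpha*x*(1 - x/K) - u*x
    return -r*x - u*x            # off-season: mortality only

dt, x = 1e-4, 5.0
for year in range(30):           # iterate to the periodic regime
    catch = 0.0
    for k in range(int(T/dt)):
        t = k*dt
        u = u_bar if t1 <= t < t2 else 0.0
        catch += u*x*dt
        x += dt*xdot(x, t, u)
print(catch)                     # seasonal catch on the periodic orbit
\end{verbatim}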
\begin{figure}[t]
\centerline{\includegraphics[width=0.7\columnwidth]{fig_fishing}}
\caption{Comparison of the fishing stock when not fished (black curve), constantly fished (red curve) and using the bang-bang optimal control (blue curve).}\label{fig_fishing}
\end{figure}
\section{Conclusions}
In this paper, we have shown how the day-night constraint influences the
optimal control strategy that achieves maximal daily biomass productivity in a
photobioreactor and reduces the optimal productivity level. We have identified
three families of strategies that can achieve optimality: bang-singular-bang,
bang-bang, and constant maximal control; the first two are characterized by a
harvesting of the biomass at the end of the light phase and the beginning of the
night, which limits the negative effects of respiration, while the latter amounts
to permanent harvesting because the maximal dilution rate is too weak compared
with the growth rate of the biomass. These families of control strategies have
been derived for a large class of nonlinear one-dimensional models with light-dark
phases, so that they can be applied beyond their motivation in this paper, that
is a photobioreactor with Monod-like growth and Beer-Lambert light
attenuation. Through simulations, we have shown that the necessity of applying
an optimal control
strategy strongly depends on the respiration rate: if the latter is weak, a
constant control can achieve almost the same performance
since the night phase consumes little
biomass. However, we have also shown that a better choice is probably to apply
a bang-bang control law which, in the presented simulations, always yields
better productivity than comparable strategies with constant dilution. This is
particularly supported by the fact that a bang-bang law with a proper timing
can almost achieve the same productivity level as the optimal control law
developed in the present paper; moreover, this better productivity is achieved
with a cumulated effort very similar to the one necessary for a constant dilution
rate.
\bibliographystyle{ieeetr}
\section*{The first negative Fourier coefficient of an Eisenstein series newform}
\section{Introduction}
For a Dirichlet character $\chi$ and a positive integer $N$, we will denote by $ M_k(N,\chi)$ the vector space of modular forms on $\Ga_0(N)$ of weight $k$, level $N$ and character $\chi$. Let $E_k(N,\chi)$ be the subspace of Eisenstein series and $S_k(N,\chi)$ the subspace of cusp forms. For a prime $p$, we let $T_p$ be the $p$th Hecke operator.
Let $H_k^{*}(N)$ be the set of newforms in $S_k(N,\chi_0)$, where $\chi_0$ is the trivial character. Given a newform $f\in H_k^{*}(N)$, let $\lambda_f(p)$ be the eigenvalue of $f$ with respect to the Hecke operator $T_p$. The restriction to the trivial character ensures that the sequence $\{\lambda_f(p)\}$ is real. Many authors have studied the sequence of signs of the Hecke eigenvalues of $f$. For example, one could pose questions such as:
\begin{enumerate}[{\rm (i)}]
\item Are there infinitely many primes $p$ such that $\lambda_f(p)\gt 0$ (or $\lambda_f(p)\lt 0$)?
\item What is the first change of sign? More specifically, what is the smallest $n\ge 1$ (or prime $p$) such that $\lambda_f(n)\lt 0$ (or $\lambda_f(p)\lt 0$)? This is an analogue of the least quadratic non-residue problem.
\item Given an arbitrary sequence of signs $\ve_p\in\{\pm 1\}$, what is the number of newforms $f$ (in some family) such that $\sgn\lambda_f(p)=\ve_p$ for all $p\le x$?
\end{enumerate}
In the cusp form setting, questions (i) and (ii) are answered in \cite{Kohnen}, \cite{Kowalski}, and \cite{Matomaki}. In this paper, we focus on (iii). Kowalski, Lau, Soundararajan and Wu \cite{Kowalski} obtained a lower bound for the proportion of newforms $f\in H_k^{*}(N)$ whose sequence of eigenvalues $\lambda_f(p)$ has signs coinciding with a prescribed sequence $\{\ve_p\}$:
\begin{theo}[Kowalski, Lau, Soundararajan, Wu, 2010]
Let $N$ be a squarefree number, $k\ge 2$ an even integer, and $\{\ve_p\}$ a sequence of signs. Then, for any $0\lt \ve \lt \mf12$, there exists some $c\gt 0$ such that
\[
\frac{1}{|H_k^{*}(N)|}\# \{f\in H_k^{*}(N) \; : \; \sgn\lambda_f(p)=\ve_p \mbox{ for } p\le z,\; p\nmid N\} \ge \Big(\f12 - \ve\Big)^{\pi(z)}
\]
for $z=c\sqrt{\log{kN}\log\log{kN}}$ provided $kN$ is large enough.
\end{theo}
Now, let $\chi_1$, $\chi_2$ be Dirichlet characters modulo $N_1, N_2$ and define the following variant of the sum of divisors function:
\begin{equation}\label{eq: sigma(n)}
\sigma_{\chi_1,\chi_2}^{k-1}(n)=\sum_{d|n} \chi_1\big(\f{n}{d}\big)\chi_2(d)d^{k-1}.
\end{equation}
We also define the function
\[
E(\chi_1,\chi_2,k)=\frac{\de(\chi_1)}{2}L(1-k,\chi_2) + \sum_{n\ge 1}\sigma_{\chi_1,\chi_2}^{k-1}(n)q^n,
\]
where $q=\me^{2\pi i z}$ and
\[
\de(\chi_1)=\begin{cases}
1, & \mbox{if } \chi_1 \mbox{ is principal} \\
0, & \mbox{otherwise}.
\end{cases}
\]
Assume that $\chi_1$ and $\chi_2$ are not simultaneously principal modulo $1$ with $k=2$. It is well known (see, for example, \cite{Diamond}) that if $\chi_1\chi_2(-1)=(-1)^k$, then
\[
E(\chi_1,\chi_2,k)\in E_k(N_1N_2, \chi_1\chi_2).
\]
In 1977, Weisinger \cite{Weisinger} developed a newform theory for $E_k(N,\chi)$ analogous to the one developed by Atkin and Lehner \cite{Atkin} for cusp forms. In this theory, we have:
\begin{itemize}
\item The newforms of $E_k(N,\chi)$ are functions of the form $E(\chi_1,\chi_2,k)$ for which $N=N_1N_2$, $\chi=\chi_1\chi_2$, and $\chi_1,\chi_2$ are primitive.
\item The eigenvalue of $E(\chi_1,\chi_2,k)$ with respect to the Hecke operator $T_p$ is $\sigma_{\chi_1,\chi_2}^{k-1}(p)$. In other words, the eigenvalues of this type of Eisenstein series coincide with its Fourier coefficients.
\end{itemize}
By exploiting the analytical properties of $\sigma_{\chi_1,\chi_2}^{k-1}(n)$, Linowitz and Thompson \cite{Lola} answered the three questions mentioned at the beginning of this article for Eisenstein series newforms.
Note that by \eqref{eq: sigma(n)}, $\sigma_{\chi_1,\chi_2}^{k-1}(n)\in\bb{R}$ only if $\chi_1,\chi_2$ are quadratic Dirichlet characters. Therefore, counting Eisenstein series newforms of level $N\le x$ is equivalent to counting fundamental discriminants $D_1, D_2$ with $|D_1D_2|\le x$. Let
\[
\cs{D}\defeq \{(D_1, D_2) \; : \; |D_1D_2|\le x\}.
\]
Taking all of these facts into consideration, Linowitz and Thompson \cite{Lola} showed:
\begin{theo}[Linowitz, Thompson, 2015]\label{th: LolaMainTheorem}
Let $\{p_1,\ldots, p_k\}$ be a sequence of primes and $\{\ve_{p_1},\ldots, \ve_{p_k}\}\in\{-1,0,1\}$ a sequence of signs. Then,
\begin{align*}
\frac{1}{|\cs{D}|}\#\{(D_1,D_2)\in\cs{D} \; :\; &{}\sgn\sigma_{\chi_1,\chi_2}^{k-1}(p_i)=\ve_{p_i},\; 1\le i \le k\} \\ &{}\xrightarrow[x\to \infty]{}\prod_{\substack{\ve_{p_i}=0 \\ 1\le i \le k}}\frac{1}{(p_i+1)^2}\prod_{\substack{\ve_{p_i}\neq 0 \\ 1\le i \le k}}\frac{p_i(p_i+2)}{2(p_i+1)^2}\cdot
\end{align*}
\end{theo}
Now, let $\eta(D_1,D_2)$ denote the smallest prime $p$ such that $\sgn(\sigma_{\chi_1,\chi_2}^{k-1}(p))=-1$. Linowitz and Thompson \cite{Lola} then conjectured:
\begin{conjecture}\label{con: ConLola}
We have
\[
\frac{\sum_{|D_1D_2|\le x}\eta(D_1,D_2)}{\sum_{|D_1D_2|\le x} 1} \xrightarrow[x\to \infty]{} \theta,
\]
where
\begin{equation}\label{eq: theta}
\theta\defeq \sum_{k=1}^{\infty}\frac{p_k^2(p_k+2)}{2(p_k+1)^2}\prod_{j=1}^{k-1}\frac{2+p_j(p_j+2)}{2(p_j+1)^2}\approx 3.9750223902\ldots
\end{equation}
\end{conjecture}
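For reference, $\theta$ can be evaluated by truncating the series \eqref{eq: theta}; a short Python sketch in plain floating point (the truncation at $k=200$ is an assumption, amply sufficient since the terms decay geometrically):
\begin{verbatim}
# Numerical evaluation of theta by truncating the series.
from sympy import prime

theta, partial = 0.0, 1.0
for k in range(1, 201):
    p = prime(k)                          # the k-th prime
    theta += p**2*(p + 2)/(2*(p + 1)**2) * partial
    partial *= (2 + p*(p + 2))/(2*(p + 1)**2)
print(theta)                              # 3.9750223902...
\end{verbatim}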
They gave a heuristic argument as evidence towards their conjecture, showing:
\begin{align*}
\frac{\sum_{|D_1D_2|\le x}\eta(D_1,D_2)}{\sum_{|D_1D_2|\le x} 1} ={}&\sum_{k=1}^{\infty}p_k\, \mbox{Prob}(\eta(D_1,D_2)=p_k) \\
={} & \sum_{k=1}^{\infty}p_k\, \mbox{Prob}(\ve_{p_k}=-1)\prod_{i=1}^{k-1}\mbox{Prob}(\ve_{p_i}=0\mbox{ or }1) \\
={}& \sum_{k=1}^{\infty}\frac{p_k^2(p_k+2)}{2(p_k+1)^2}\prod_{i=1}^{k-1}\bigg(\frac{1}{(p_i+1)^2}+\frac{p_i(p_i+2)}{2(p_i+1)^2}\bigg),
\end{align*}
where the last equality follows from Theorem \ref{th: LolaMainTheorem}. The problem with this argument is that Theorem \ref{th: LolaMainTheorem} fixes a set of primes and then lets $x\to\infty$, whereas in this argument we need to allow the primes to tend to infinity with $x$. The authors stated: ``[W]e have a good understanding of the effect of the small primes, but one would need to argue that the primes after some cutoff point do not make much of an impact on the average. Presumably, this would require using the large sieve''.
The goal of the present article is to correct their conjecture by proving the following result:
\begin{theo}\label{th: Main Theorem}
We have
\[
\frac{\sum_{|D_1D_2|\le x}\eta(D_1,D_2)}{\sum_{|D_1D_2|\le x} 1} \xrightarrow[x\to \infty]{} \Theta\cdot(1-\be)+\al,
\]
where
\[
\Theta =\sum_{k=1}^{\infty}\frac{p_k^2}{2(p_k+1)}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)},
\]
\[
\al= \sum_{k=1}^{\infty}\frac{p_k^2}{2(p_k+1)^2}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)},
\]
and
\[
\be= \sum_{k=1}^{\infty}\frac{p_k}{2(p_k+1)^2}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)}\cdot
\]
Numerically,
\[
\Theta\cdot(1-\be)+\al\approx 4.63255603509332\ldots
\]
\end{theo}
The numerical computation was done using Sage. We used RIF for interval arithmetic and we truncated at $k=1000$.
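The same computation can also be reproduced in plain Python (ordinary floating point rather than interval arithmetic), with the same truncation point:
\begin{verbatim}
# Numerical evaluation of Theta, alpha, beta and Theta*(1-beta)+alpha.
from sympy import prime

Theta = alpha = beta = 0.0
partial = 1.0      # running product of (p_j+2)/(2(p_j+1)) over j < k
for k in range(1, 1001):
    p = prime(k)
    Theta += p**2/(2*(p + 1))    * partial
    alpha += p**2/(2*(p + 1)**2) * partial
    beta  += p   /(2*(p + 1)**2) * partial
    partial *= (p + 2)/(2*(p + 1))
print(Theta*(1 - beta) + alpha)  # 4.63255603509332...
\end{verbatim}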
\section{Main Tools}
First we will need asymptotic estimates for some sets of fundamental discriminants. It is well known (see, for example, \cite{Cohen}) that
\begin{equation}\label{eq: D le x}
\sum_{|D|\le x} 1 \sim \frac{x}{\ze(2)},
\end{equation}
where $D$ runs over all fundamental discriminants with $|D|\le x$. Now, let $n_1(m)$ be the smallest integer $n\ge 1$ relatively prime to $m$ such that the congruence $x^2\equiv n\mod{m}$ has no solutions. Even though Vinogradov's conjecture that $n_1(p)\ll_{\ve} p^{\ve}$ remains open, it is possible to show that large values of $n_1(p)$ are rare. More specifically, using the large sieve, Linnik \cite{Linnik} showed that for all $\ve\gt 0$, we have
\[
\#\{p\le x \; : \; n_1(p)\gt x^{\ve}\}\ll_{\ve} 1.
\]
Using similar ideas to the ones from Linnik's paper, Erd\H{o}s \cite{Erdos} obtained a result concerning the average of $n_1(p)$ as $p$ varies over prime numbers less than or equal to $x$:
\begin{equation}\label{eq: Average n2(p)}
\frac{1}{\pi(x)}\sum_{p\le x}n_1(p)\xrightarrow[x\to \infty]{}\sum_{k=1}^{\infty}\frac{p_k}{2^k},
\end{equation}
where $p_k$ is the $k$th prime. In a similar fashion, Pollack \cite{Pollack} considered a variation of \eqref{eq: Average n2(p)}. We summarize his result in the following theorem:
\begin{theo}[Pollack, 2012]\label{th: Pollack}
For each fundamental discriminant $D$, let $\chi_D$ be the associated Dirichlet character, i.e., $\chi_D(m)\defeq \big(\frac{D}{m}\big)$. For each character $\chi$, let $n_{\chi}$ denote the least $n$ for which $\chi(n)\notin \{0,1\}$. Finally, let $n(D)\defeq n_{\chi_D}$. Then
\begin{enumerate}[{\rm (i)}]
\item Uniformly in $k$ such that the $k$th prime satisfies $p_k\le (\log{x})^{\f13}$, we have
\begin{equation*}\label{eq: D le x, n(D)=p_k}
\# \{|D|\le x \; : \; n(D)=p_k\}=\frac{p_k}{2(p_k+1)}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)}\frac{x}{\ze(2)}+ O(x^{\f23}).
\end{equation*}
\item \begin{equation*}\label{eq: n(D) gt (log{x})^{1/3}}
\sum_{\substack{|D|\le x \\ n(D)\gt (\log{x})^{\f13}}}n(D)=o(x).
\end{equation*}
\end{enumerate}
Therefore, using \eqref{eq: D le x}, we have
\begin{equation}\label{eq: Average of n(D)}
\frac{\sum_{|D|\le x}n(D)}{\sum_{|D|\le x} 1} \xrightarrow[x\to \infty]{} \Theta,
\end{equation}
where
\begin{equation*}\label{eq: ThetaDefinition}
\Theta\defeq \sum_{k=1}^{\infty}\frac{p_k^2}{2(p_k+1)}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)}\approx 4.9809473396\ldots
\end{equation*}
\end{theo}
We will also need the following lemma from Linowitz and Thompson \cite{Lola}:
\begin{lem}
Let $\bb{P}(\ve, p)$ denote the proportion of fundamental discriminants $D$ with $\big( \tf{D}{p}\big)=\ve$. Then, we have
\[
\bb{P}(\ve, p)=\begin{cases}
\mfrac{p}{2p+2}, & \mbox{if } \ve=1 \\
\mfrac{p}{2p+2}, & \mbox{if } \ve=-1 \\
\mfrac{1}{p+1}, & \mbox{if } \ve=0.
\end{cases}
\]
\end{lem}
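The lemma is easy to test empirically. For instance, the following throwaway SymPy script (ours, purely illustrative) tallies the symbol $\big(\tf{D}{p}\big)$ for $p=3$ over all fundamental discriminants with $|D|\le 10^{4}$; the observed frequencies should be roughly $3/8$, $1/4$ and $3/8$:
\begin{verbatim}
from sympy import factorint, jacobi_symbol
from collections import Counter

def is_fundamental(D):
    squarefree = lambda n: all(e == 1 for e in factorint(n).values())
    if D % 4 == 1 and D != 1:
        return squarefree(abs(D))
    if D % 4 == 0:
        m = D // 4
        return m % 4 in (2, 3) and squarefree(abs(m))
    return False

p, X = 3, 10**4
counts, total = Counter(), 0
for D in range(-X, X + 1):
    if is_fundamental(D):
        total += 1
        counts[jacobi_symbol(D % p, p)] += 1
print({e: round(counts[e] / total, 3) for e in (-1, 0, 1)})
# roughly {-1: 0.375, 0: 0.25, 1: 0.375}, i.e. p/(2p+2), 1/(p+1), p/(2p+2)
\end{verbatim}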
\section{Proof of Theorem \ref{th: Main Theorem}}
Let $\chi_1, \chi_2$ be Dirichlet characters associated with the fundamental discriminants $D_1$ and $D_2$. For a prime $p$,
\[
\sigma_{\chi_1,\chi_2}^{k-1}(p)=\sum_{d|p} \chi_1\big(\f{p}{d}\big)\chi_2(d)d^{k-1}=\chi_1(p)+\chi_2(p)p^{k-1},
\]
so that
\begin{equation}\label{eq: sgn(sigma)}
\sgn{\sigma_{\chi_1,\chi_2}^{k-1}(p)}=\begin{cases}
\chi_1(p), & \mbox{if } p|D_2 \\
\chi_2(p), & \mbox{otherwise}.
\end{cases}
\end{equation}
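For concreteness, $\eta(D_1,D_2)$ can be computed directly from this case distinction; the helper below (a hypothetical illustration of ours, using SymPy plus a hand-rolled Kronecker symbol to cover $p=2$) is not needed for the proof:
\begin{verbatim}
from sympy import jacobi_symbol, nextprime

def kronecker(D, p):          # (D/p) for a prime p, including p = 2
    if p == 2:
        return 0 if D % 2 == 0 else (1 if D % 8 in (1, 7) else -1)
    return jacobi_symbol(D % p, p)

def eta(D1, D2):              # least p with sgn(sigma) = -1, via the
    p = 2                     # case distinction displayed above
    while True:
        chi = kronecker(D1, p) if D2 % p == 0 else kronecker(D2, p)
        if chi == -1:
            return p
        p = int(nextprime(p))

print(eta(5, -3))             # -> 2
\end{verbatim}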
\begin{proof}[Proof of Theorem \ref{th: Main Theorem}]
By \eqref{eq: D le x},
\[
\sum_{|D_1D_2|\le x} 1=\sum_{|D_1|\le x}\sum_{|D_2|\le \frac{x}{|D_1|}}1\sim \f{x}{\ze(2)}\sum_{|D_1|\le x}\frac{1}{|D_1|}\cdot
\]
Let $A(x)\defeq \sum_{|D|\le x} 1$ and $f(x)\defeq \mf{1}{x}$. Since $A(x)\sim \mf{x}{\ze(2)}$, partial summation gives
\begin{align}
\nonumber \sum_{|D_1|\le x}\frac{1}{|D_1|}={}&A(x)f(x)-A(1)f(1)-\int_{1}^{x}A(t)f'(t)\dt \\
\nonumber \sim{}& \frac{1}{\ze(2)}-1+\int_{1}^{x}\frac{\dt}{\ze(2)t} \\
\sim{}& \frac{\log{x}}{\ze(2)}\cdot \label{eq: D le x 1/D}
\end{align}
Hence
\begin{equation}\label{eq: denominator}
\sum_{|D_1D_2|\le x} 1\sim \frac{x\log{x}}{\ze(2)^2}\cdot
\end{equation}
Now let us estimate the numerator. For the sake of simplicity, let $\eta\defeq \eta(D_1,D_2)$. Then,
\[
\sum_{|D_1D_2|\le x}\eta=\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}\eta+\sum_{\substack{|D_1D_2|\le x \\ \eta\nmid D_2}}\eta.
\]
If $\eta | D_2$, then by \eqref{eq: sgn(sigma)}, $\eta$ is the smallest prime $p$ such that $\chi_1(p)\notin \{0,1\}$, and with the notation of Theorem \ref{th: Pollack}, this means that $\eta=n(D_1)$. Similarly, if $\eta\nmid D_2$, then $\eta=n(D_2)$. Therefore,
\[
\sum_{|D_1D_2|\le x}\eta=\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_1)+\sum_{\substack{|D_1D_2|\le x \\ \eta\nmid D_2}}n(D_2).
\]
Now,
\[
\sum_{\substack{|D_1D_2|\le x \\ \eta\nmid D_2}}n(D_2)=\sum_{|D_1D_2|\le x }n(D_2)-\sum_{\substack{|D_1D_2|\le x \\ \eta| D_2}}n(D_2),
\]
so that
\begin{equation}\label{eq: Numerator}
\sum_{|D_1D_2|\le x}\eta=\sum_{|D_1D_2|\le x }n(D_2)+\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_1)-\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_2).
\end{equation}
By \eqref{eq: Average of n(D)}, we have
\begin{align}
\nonumber \sum_{|D_1D_2|\le x }n(D_2)=&{}\sum_{|D_1|\le x}\sum_{|D_2|\le \frac{x}{|D_1|}}n(D_2) \\
\nonumber \sim&{}\Theta \frac{ x}{\ze(2)}\sum_{|D_1|\le x}\frac{1}{|D_1|} \\
\sim &{} \Theta\frac{x\log{x}}{\ze(2)^2}, \label{eq: S1}
\end{align}
where the final estimate follows from \eqref{eq: D le x 1/D}. Now, by \cite[Lemma 4.2]{Lola}, the proportion of fundamental discriminants such that $p | D$ is $\mf{1}{p+1}\cdot$ Hence,
\begin{align*}
\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_1)={}&\sum_{|D_1|\le x}n(D_1)\sum_{\substack{|D_2|\le\frac{x}{|D_1|} \\ n(D_1)|D_2}}1 \\ \sim{}&\sum_{|D_1|\le x}\f{n(D_1)}{n(D_1)+1}\sum_{|D_2|\le\frac{x}{|D_1|}}1 \\ \sim{}&\frac{x}{\ze(2)}\sum_{|D_1|\le x}\frac{n(D_1)}{|D_1|(n(D_1)+1)}\cdot
\end{align*}
To find an asymptotic for the last sum we again use partial summation. Let
\[
B(x)\defeq \sum_{|D_1|\le x}\frac{n(D_1)}{n(D_1)+1}\cdot
\]
Then, by (i) of Theorem \ref{th: Pollack},
\[
\sum_{\substack{|D_1|\le x\\ n(D_1)\le (\log{x})^{\f13}}}\frac{n(D_1)}{n(D_1)+1}\sim \al\frac{x}{\ze(2)},
\]
where
\[
\al= \sum_{k=1}^{\infty}\frac{p_k^2}{2(p_k+1)^2}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)},
\]
and by (ii) of Theorem \ref{th: Pollack},
\[
\sum_{\substack{|D_1|\le x\\ n(D_1)\gt (\log{x})^{\f13}}}\frac{n(D_1)}{n(D_1)+1}\le \sum_{\substack{|D_1|\le x\\ n(D_1)\gt (\log{x})^{\f13}}}n(D_1)=o(x).
\]
Hence,
\begin{equation*}\label{eq: B(x)}
B(x)= \sum_{\substack{|D_1|\le x\\ n(D_1)\le (\log{x})^{\f13}}}\frac{n(D_1)}{n(D_1)+1} + \sum_{\substack{|D_1|\le x\\ n(D_1)\gt (\log{x})^{\f13}}}\frac{n(D_1)}{n(D_1)+1}
\sim\al\frac{x}{\ze(2)}\cdot
\end{equation*}
Therefore,
\[
\sum_{|D_1|\le x}\frac{n(D_1)}{|D_1|(n(D_1)+1)}=B(x)f(x)-B(1)f(1)-\int_{1}^{x}B(t)f'(t)\dt\sim \al\frac{\log{x}}{\ze(2)},
\]
so that
\begin{equation}\label{eq: S2}
\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_1)\sim \al\frac{x\log{x}}{\ze(2)^2}\cdot
\end{equation}
Finally,
\[
\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_2)=\sum_{|D_2|\le x}n(D_2) \sum_{|D_1|\le\f{x}{|D_2|}}\frac{1}{n(D_1)+1}\cdot
\]
To get an estimate for the inner sum we once again use Theorem \ref{th: Pollack} to obtain
\[
\sum_{|D_1|\le\f{x}{|D_2|}}\frac{1}{n(D_1)+1}\sim \be\frac{x}{\ze(2)|D_2|},
\]
where
\[
\be= \sum_{k=1}^{\infty}\frac{p_k}{2(p_k+1)^2}\prod_{j=1}^{k-1}\frac{p_j+2}{2(p_j+1)}\cdot
\]
From this we see that
\begin{equation}\label{eq: S_3}
\sum_{\substack{|D_1D_2|\le x \\ \eta|D_2}}n(D_2)\sim \sum_{|D_2|\le x}\be\frac{n(D_2)x}{\ze(2)|D_2|}\sim \Theta\be\frac{x\log{x}}{\ze(2)^2},
\end{equation}
where the last estimate follows from partial summation and applying Theorem \ref{th: Pollack}. Therefore, plugging \eqref{eq: S1}, \eqref{eq: S2} and \eqref{eq: S_3} into \eqref{eq: Numerator} shows that
\[
\sum_{|D_1D_2|\le x}\eta\sim (\Theta+\al-\Theta\be)\frac{x\log{x}}{\ze(2)^2}\cdot
\]
This together with \eqref{eq: denominator} completes the proof.
\end{proof}
\begin{remark}
We can give the following explanation of why Linowitz and Thompson's Conjecture \ref{con: ConLola} was slightly off from the correct number: the result from Theorem \ref{th: LolaMainTheorem} is not uniform in $k$ for the choice of the $p_k$ (we fix a set of primes beforehand), while the result from Theorem \ref{th: Pollack} is uniform in $k$ satisfying $p_k\le (\log{x})^{\f13}$. In order to make Linowitz and Thompson's heuristic argument rigorous we would first need to show that Theorem \ref{th: LolaMainTheorem} holds uniformly in $k$ such that $p_k\le f(x)$ for some function $f$ with $f(x) \xrightarrow[x\to \infty]{} \infty$. Then,
\begin{align*}
\frac{\sum_{|D_1D_2|\le x}\eta(D_1,D_2)}{\sum_{|D_1D_2|\le x} 1} =\sum_{p_k\le f(x)}{}&p_k\, \mbox{Prob}(\eta(D_1,D_2)=p_k) \\ {}& + \sum_{p_k\gt f(x)}p_k\, \mbox{Prob}(\eta(D_1,D_2)=p_k) \\
\xrightarrow[x\to \infty]{}{}&\theta + \mu,
\end{align*}
where $\theta$ is the conjectured constant \eqref{eq: theta} and
\[
\mu=\lim_{x\to\infty} \sum_{p_k\gt f(x)}p_k\, \mbox{Prob}(\eta(D_1,D_2)=p_k).
\]
Linowitz and Thompson conjectured that $\mu=0$, but according to Theorem \ref{th: Main Theorem}, $\mu$ does make a small contribution.
\end{remark}
\textbf{Acknowledgments} I would like to thank my PhD supervisor Lola Thompson for guiding me throughout this work and taking the time to give me suggestions about the paper.
\printbibliography
\end{document} |
2,877,628,091,161 | arxiv | \section{Introduction}
Detecting neutrino oscillations \cite{key-1} in atmospheric and solar
neutrino fluxes (and then observing the same phenomenon in reactor
and accelerator experiments) stands as a milestone in particle physics
of the last decade. This compelling experimental evidence proves that
a suitable theory must rely - among many other ingredients - on the
fact that lepton flavor can not be conserved. Therefore, one must
take into account that neutrinos mix (as quarks do). In other words
they can oscillate into one another. That is: when the neutrino flavor
is subject of a measurement in a neutrino flux, one obtaines different
results if a macroscopic distance separates the detectors interacting
with that flux. On the other hand, the Quantum Field Theory suggests
that the neutrinos have to be represented by fermion massive fields,
let them be of Majorana or Dirac type. At the same time, the unified
theory design to describe the interactions must explain why neutrino
masses are so tiny when compared to the charged lepton ones. A striking
feature also arises since the data favor surprisingly large atmospheric
and solar mixing angles, in contrast with the quark mixing pattern.
All these features complicate the attempts to precisely establish
the structure of the neutrino mass matrix able to fit the available
data supplied by global analysis \cite{key-2}. Specific textures
have been proposed taking into consideration certain discrete symmetries
that could govern the lepton families. Some of these symmetries are
compatible with the elegant seesaw mechanism \cite{key-3} designed
to predict the observed small order of magnitude for the masses of
physical neutrinos.
A different approach consists of advancing certain gauge models that
give rise to specific Yukawa sectors able to supply a concrete mass
matrix. Encouraged by the success of such a strategy in the case of
a particular 3-3-1 gauge model, we propose here an analytical diagonalization
of a general neutrino mass matrix just by taking into account arbitrary
diagonal entries instead of the particular ones considered in the
author's previous papers on the 3-3-1 model \cite{key-4}, which were based on the
general method of exactly solving gauge models with high symmetries
\cite{key-5}.
The paper is organized as follows. In Section 2 the theoretical framework
\cite{key-6} of the neutrino mixing is briefly presented with the
standard notations of the field. Section 3 deals with the exact neutrino
mass eigenstates and eigenvalues obtained by solving a set of equations
corresponding to the diagonalization of the general mass matrix. Certain
phenomenological restrictions are introduced in Section 4 where the
main results of the paper are presented. The last section is devoted to
conclusions and comments on the obtained results.
\section{The neutrino mass matrix}
We start by assuming the neutrino mixing formula: $\nu_{\alpha L}(x)=\sum_{i=1}^{3}U_{\alpha i}\nu_{iL}(x)$,
where $\alpha=e,\mu,\tau$ labels the flavor space (flavor gauge eigenstates),
while $i=1,2,3$ denote the massive physical eigenstates. We consider
throughout this paper the physical neutrinos as Majorana fields, \emph{i.e.}
$\nu_{iL}^{c}(x)=\nu_{iL}(x)$. The mass term in the Yukawa sector
of any gauge unified theory that generate Majorana neutrinos stands:
\begin{equation}
-\mathcal{L}_{Y}=\frac{1}{2}\bar{\nu}_{L}M\nu_{L}^{c}+\mathrm{H.c.}\label{Eq. 1}\end{equation}
with $\nu_{L}=\left(\begin{array}{ccc}
\nu_{e} & \nu_{\mu} & \nu_{\tau}\end{array}\right)_{L}^{T}$ where the superscript $T$ denotes ``transposed''. The complex
mixing matrix $U$ that diagonalizes the mass matrix $M$ in the manner
$\left(U^{+}MU\right)_{ij}=m_{i}\delta_{ij}$ has in the standard parametrization the
form:
\begin{equation}
U=\left(\begin{array}{ccc}
c_{2}c_{3} & s_{2}c_{3} & s_{3}e^{-i\delta}\\
-s_{2}c_{1}-c_{2}s_{1}s_{3}e^{i\delta} & c_{1}c_{2}-s_{2}s_{3}s_{1}e^{i\delta} & c_{3}s_{1}\\
s_{2}s_{1}-c_{2}c_{1}s_{3}e^{i\delta} & -s_{1}c_{2}-s_{2}s_{3}c_{1}e^{i\delta} & c_{3}c_{1}\end{array}\right)\label{Eq. 2}\end{equation}
with natural substitutions: $\sin\theta_{23}=s_{1}$, $\sin\theta_{12}=s_{2}$,
$\sin\theta_{13}=s_{3}$, $\cos\theta_{23}=c_{1}$, $\cos\theta_{12}=c_{2}$,
$\cos\theta_{13}=c_{3}$ for the mixing angles, and $\delta$ for
the CP Dirac phase.
Let us assume the most general symmetric mass matrix for the neutrino
sector as:
\begin{equation}
M=\left(\begin{array}{ccc}
A & D & E\\
D & B & F\\
E & F & C\end{array}\right)\label{Eq. 3}\end{equation}
and try to obtain its eigenvalues. More specifically, this reduces
to solving the set of equations:
\begin{equation}
M\mid\nu_{i}>=m_{i}\mid\nu_{i}>\label{Eq. 4}\end{equation}
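Numerically, the eigenvalue problem (4) for a given matrix (3) is of course immediate; for instance, with NumPy (the entries below are arbitrary placeholders, not fitted values):
\begin{verbatim}
import numpy as np

A, B, C, D, E, F = 0.1, 0.5, 0.5, 0.02, -0.02, 0.3  # placeholder entries
M = np.array([[A, D, E],
              [D, B, F],
              [E, F, C]])
m, U = np.linalg.eigh(M)  # columns of U are the eigenvectors |nu_i>
print(m)                  # mass eigenvalues (possibly negative, see below)
assert np.allclose(U @ np.diag(m) @ U.T, M)
\end{verbatim}
The analytical challenge addressed below is, instead, to express these eigenvalues in closed form through the mixing angles and the diagonal entries.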
\section{Neutrino mass eigenvalues}
The eigenvalue problem (4) amounts to diagonalizing the matrix (3)
in order to get the eigenstates basis of the physical neutrinos, and
thus their mass eigenvalues. The procedure will lead to the following
generic solution:
\begin{equation}
m_{i}=m_{i}\left(A,B,C,\theta_{12},\theta_{23},\theta_{13}\right)\label{Eq. 5}\end{equation}
with $i=1,2,3$. In these expressions the $m_{i}$'s are analytical functions
depending only on the mixing angles and the diagonal entries in the
general mass matrix. At this stage, we do not make any assumption
on the specific textures that can occur in the mass matrix when particular
symmetries are added or \emph{ad hoc} hypothesis are enforced.
The concrete forms of the $m_{i}$'s remain to be determined by solving
the following set of equations:
\begin{equation}
\begin{cases}
\begin{array}{c}
m_{1}=c_{2}^{2}A+c_{1}^{2}s_{2}^{2}B+s_{1}^{2}s_{2}^{2}C-2c_{1}c_{2}s_{2}D+2s_{1}s_{2}c_{2}E-2c_{1}s_{1}s_{2}^{2}F\\
0=c_{2}s_{2}A-c_{1}^{2}c_{2}s_{2}B-s_{1}^{2}s_{2}c_{2}C-(1-2s_{2}^{2})s_{1}E+2s_{1}s_{2}c_{1}c_{2}F\\
0=-c_{1}s_{1}s_{2}B+c_{1}s_{1}s_{2}C+c_{2}s_{1}D+c_{1}c_{2}E-(1-2s_{1}^{2})s_{2}F\\
m_{2}=s_{2}^{2}A+c_{1}^{2}c_{2}^{2}B+s_{1}^{2}c_{2}^{2}C+2c_{1}c_{2}s_{2}D-2s_{1}s_{2}c_{2}E-2c_{1}s_{1}c_{2}^{2}F\\
0=s_{1}c_{1}c_{2}B-s_{1}c_{1}c_{2}C+s_{1}s_{2}D+c_{1}s_{2}E+(1-2s_{1}^{2})c_{2}F\\
m_{3}=s_{1}^{2}B+c_{1}^{2}C+2c_{1}s_{1}F\end{array}\end{cases}\label{Eq. 6}\end{equation}
Since the actual data are not sensitive to any CP-phase violation
in the lepton sector, we have set $\sin^{2}\theta_{13}\simeq0$ from
the very beginning - as can be easily observed by inspecting
the shape of Eqs. (6) - but the proposed values for the other two
mixing angles will be embedded only in the resulting formulas for
the neutrino masses (7).
Furthermore, after a few manipulations one obtains the following
analytical equations:
\[
m_{1}=\frac{C\sin^{2}\theta_{12}\sin^{2}\theta_{23}-B\sin^{2}\theta_{12}\left(1+\sin^{2}\theta_{23}\right)}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}+\frac{A\left(1-\sin^{2}\theta_{12}\right)}{\left(1-2\sin^{2}\theta_{12}\right)},\]
\[
m_{2}=\frac{B(1-\sin^{2}\theta_{12}-\sin^{2}\theta_{23}+3\sin^{2}\theta_{12}\sin^{2}\theta_{23})-C\sin^{2}\theta_{23}\left(1-\sin^{2}\theta_{12}\right)}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}\]
\[
+\frac{A\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)},\]
\begin{equation}
m_{3}=\frac{C\left(1-\sin^{2}\theta_{23}\right)-B\sin^{2}\theta_{23}}{1-2\sin^{2}\theta_{23}}.\label{Eq. 7}\end{equation}
Assuming the available data concerning the mixing angles \cite{key-2}
and the mass matrix diagonal entries, one can proceed to a detailed
investigation of the resulting equations. They can reveal some interesting
features, not only with respect to the type of the mass hierarchy
(normal, inverted or degenerate) but also regarding the minimal absolute
value in the neutrino mass spectrum and the mass splitting ratio (which
imposes finally a certain relation between the diagonal entries).
Note that some of the masses could come out negative (for certain
combinations of angles), but this is not an impediment since for any
fermion field a $\gamma_{5}\psi$ transformation can be performed
at any time, without altering the physical content of the theory.
As a result of this manipulation the mass sign changes or, equivalently,
some neutrinos have opposite CP-phases.
Let us observe that the analytical mass equations (7) strictly impose
$\sin^{2}\theta_{12}\neq0.5$ and $\sin^{2}\theta_{23}\neq0.5$, yet
this does not forbid any closer approximation to the bi-maximal neutrino
mixing. However, in the case of the solar mixing angle this behaviour
does not seem problematic, since data confirm a large but not
maximal mixing. Eventually, some radiative corrections can also be
employed in order to get a more precise account of these angles,
but let us observe that no particular mixing case is excluded \emph{ab
initio}.
These equations do not contradict the trace condition which requires
indeed a finite neutrino mass sum independently of the values of the
mixing angles. As a matter of fact, if one sums the three masses
in Eqs. (7), then the troublesome denominators disappear and the value
required by Eq. (3) is recovered.
The particular shape of the analytical neutrino masses is due to both
the choice of the $\theta_{13}=0$ and the nonzero diagonal entries
in the mixing matrix. Any other choice - as one can observe in subsequent
section - definitely leads to a different set of equations to be solved
and, thus, to a different form of the solution.
\section{Phenomenological restrictions}
We will analyze - in the following subsections - some particular cases
of the analytical solution presented above and emphasize the most
appealing setting. We are guided in our choice by the need to obtain
plausible predictions, and even a rough estimate regarding the absolute
masses in the spectrum.
\subsection{Conserving the global lepton number $L=L_{e}-L_{\mu}-L_{\tau}$}
One of the most invoked symmetries in the lepton sector was the total
lepton number $L=L_{e}+L_{\mu}+L_{\tau}$, which still holds when one
deals with Dirac neutrinos, while Majorana neutrinos violate this
symmetry by two units. Therefore, it had to be abandoned in scenarios
with Majorana neutrinos, as is the case here.
In the particular case of conserving the global lepton number $L=L_{e}-L_{\mu}-L_{\tau}$
\cite{key-7} the shape of the mass matrix (3) becomes:\begin{equation}
M=\left(\begin{array}{ccc}
0 & D & E\\
D & 0 & 0\\
E & 0 & 0\end{array}\right)\label{Eq. 8}\end{equation}
The concrete forms of the $m_{i}$'s remain to be computed by solving the
modified set of equations:
\begin{equation}
\begin{cases}
\begin{array}{c}
m_{1}=-2c_{1}c_{2}s_{2}D+2s_{1}c_{2}s_{2}E\\
0=-c_{1}s_{2}^{2}D+c_{1}s_{2}^{2}D-s_{1}c_{2}^{2}E+s_{1}s_{2}^{2}E\\
0=c_{2}s_{1}{}D+c_{2}c_{1}E{}\\
0=c_{1}c_{2}^{2}D-c_{1}s_{2}^{2}D+s_{1}s_{2}^{2}E-s_{1}c_{2}^{2}E\\
m_{2}=2c_{1}c_{2}s_{2}D-2s_{1}c_{2}s_{2}E\\
0=s_{1}s_{2}{}D+c_{1}s_{2}E{}\\
0=s_{1}c_{2}{}D+c_{1}c_{2}E{}\\
0=s_{1}s_{2}{}D+c_{1}s_{2}E{}\\
m_{3}={}0\end{array}\end{cases}\label{Eq. 9}\end{equation}
obtained straightforwardly from Eqs. (6) if one puts $A=B=C=F=0$.
The lines 3, 6, 7 and 8 in Eqs. (9) express the same condition, namely:
$D=-E\cot\theta_{23}$ giving rise to a $\mu-\tau$ interchange symmetry
if $\cot\theta_{23}=-1$. The lines 2 and 4 in the set of equations
(9) are fulfilled simultaneously if and only if $\cos^{2}\theta_{12}=\sin^{2}\theta_{12}$
(maximal solar mixing angle).
Under these circumstances, taking into consideration the maximal atmospheric
mixing angle too, the solution reads:
\begin{equation}
\left|m_{1}\right|=\left|m_{2}\right|=\sqrt{2}D\label{Eq.9}\end{equation}
\begin{equation}
m_{3}=0\label{Eq.10}\end{equation}
If the lepton number $L=L_{e}-L_{\mu}-L_{\tau}$ is rigorously conserved
the mass spectrum exhibits an inverted mass hierarchy with two degenerate
nonzero masses and bi-maximal mixing. The minimal neutrino mass is
identically zero.
\subsection{Mass matrix with $\mu-\tau$ interchange symmetry}
Many papers \cite{key-8} develop scenarios with the $\mu-\tau$ interchange
symmetry. It seems more appealing, since the mass matrix of the neutrino
sector
\begin{equation}
M=\left(\begin{array}{ccc}
A & D & D\\
D & B & F\\
D & F & B\end{array}\right)\label{Eq. 11}\end{equation}
can predict interesting results.
We have to insert in Eqs. (7) the restrictive condition $B=C$ and
simply express the resulting masses. They read: \[
m_{1}=-B\frac{\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}+A\frac{\left(1-\sin^{2}\theta_{12}\right)}{\left(1-2\sin^{2}\theta_{12}\right)},\]
\begin{equation}
m_{2}=B\frac{\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}+A\frac{\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)},\label{Eq. 12}\end{equation}
\[
m_{3}=B.\]
Evidently, a $\gamma^{5}$ transformation must be performed
on the first neutrino field in order to change the sign of its
mass ($m_{1}$), if we assume that $A$ and $B$ have the same order
of magnitude and a suitable close-to-maximal atmospheric mixing is
invoked. In case $A\gg B$ and the atmospheric angle has a reasonable
value, no mass could need a chiral transformation to get positive
values.
The mass spectrum in the neutrino sector becomes:
\[
\left|m_{1}\right|=\left[\frac{B\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}-\frac{A\left(1-\sin^{2}\theta_{12}\right)}{\left(1-2\sin^{2}\theta_{12}\right)}\right],\]
\begin{equation}
m_{2}=\left[\frac{B\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{23}\right)\left(1-2\sin^{2}\theta_{12}\right)}+\frac{A\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)}\right],\label{Eq. 13}\end{equation}
\[
m_{3}=B.\]
The physically relevant magnitudes in neutrino oscillation experiments
are the mass squared differences for solar and atmospheric neutrinos,
defined as: $\Delta m_{12}^{2}=m_{2}^{2}-m_{1}^{2}$ and $\Delta m_{23}^{2}=m_{3}^{2}-m_{2}^{2}$
respectively. They result from the above expressions (Eqs. (13)):
\begin{equation}
\Delta m_{12}^{2}\cong\frac{2AB\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)^{2}\left(1-2\sin^{2}\theta_{23}\right)}\label{Eq. 14}\end{equation}
\begin{equation}
\Delta m_{23}^{2}\cong\frac{B^{2}\sin^{4}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)^{2}\left(1-2\sin^{2}\theta_{23}\right)^{2}}\label{Eq. 15}\end{equation}
The mass splitting ratio defined as $r_{\Delta}=\Delta m_{12}^{2}/\Delta m_{23}^{2}$
yields in our scenario:
\begin{equation}
r_{\Delta}=2\frac{A}{B}\left(\frac{1-2\sin^{2}\theta_{23}}{\sin^{2}\theta_{12}}\right)\label{Eq. 16}\end{equation}
It is natural to presume that $A$ and $B$ have the same order of
magnitude and consequently $A/B\simeq1$. Under these circumstances
one needs $\sin^{2}\theta_{23}\simeq0.497$ in order to fulfil the phenomenological
requirement $r_{\Delta}\simeq0.033$.
Regarding the neutrino mass sum
\begin{equation}
\sum_{i=1}^{3}m_{i}\simeq\frac{2AB\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)\left(1-2\sin^{2}\theta_{23}\right)}\label{Eq. 17}\end{equation}
this is experimentally restricted to $\sum_{i=1}^{3}m_{i}\sim1$~eV,
if we take into consideration the Troitsk \cite{key-9} and Mainz
\cite{key-10} experiments. On the other hand, combining Eqs. (13)
and (17) one obtains:
\begin{equation}
\sum_{i=1}^{3}m_{i}=\frac{2\sin^{2}\theta_{12}}{\left(1-2\sin^{2}\theta_{12}\right)\left(1-2\sin^{2}\theta_{23}\right)}m_{0}\label{Eq. 18}\end{equation}
with minimal neutrino mass $m_{0}=m_{3}$. This leads to
\begin{equation}
m_{0}=\frac{\left(1-2\sin^{2}\theta_{12}\right)\left(1-2\sin^{2}\theta_{23}\right)}{2\sin^{2}\theta_{12}}\sum_{i=1}^{3}m_{i}\label{Eq. 19}\end{equation}
Assuming the phenomenological values for the sum of the neutrino masses
and the solar mixing angle $\sin^{2}\theta_{12}\simeq0.31$ one can
analyze the behaviour of $m_{0}$ in terms of the atmospheric
mixing angle by studying the function:
\begin{equation}
m_{0}(\sin^{2}\theta_{23})=0.613\left(1-2\sin^{2}\theta_{23}\right)\sum_{i=1}^{3}m_{i}\label{Eq. 20}\end{equation}
A plausible value (with the above considered values for mixing angles)
can now be inferred: $m_{0}\simeq0.0035$~eV. It is very close to the
value obtained by the author (second reference in \cite{key-4}) in
a particular 3-3-1 model where the diagonal entries of the neutrino
mass matrix were obtained in a specific manner, without resorting
to any additional symmetry.
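As a quick arithmetic cross-check (ours, not a fit to data), Eqs. (16) and (19) can be evaluated directly with the inputs quoted above:
\begin{verbatim}
s12, s23 = 0.31, 0.497     # sin^2(theta_12), sin^2(theta_23)
AB, msum = 1.0, 1.0        # assumed ratio A/B and mass sum (eV)

r_delta = 2 * AB * (1 - 2 * s23) / s12                  # Eq. (16)
m0 = (1 - 2 * s12) * (1 - 2 * s23) / (2 * s12) * msum   # Eq. (19)
print(r_delta, m0)         # ~0.039 and ~0.0037 eV
\end{verbatim}
The output is of the same order as the values $r_{\Delta}\simeq0.033$ and $m_{0}\simeq0.0035$~eV quoted above.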
\section{Concluding remarks}
In this paper we have proved that the mass eigenstates of a
general neutrino mass matrix with no CP-phase violation can be computed
exactly. The results accommodate the observed solar mixing angle and
exclude the exact maximal mixing for the atmospheric angle, but do
not forbid any closer approximation for such a setting. Therefore,
they could be in good agreement with the data and can predict the
correct mass splitting ratio. Our predictions also include the inverted
mass hierarchy in the neutrino sector and the minimal absolute mass
- $m_{0}\simeq0.0035$ eV - since the $\mu-\tau$ interchange symmetry
is employed. The global lepton symmetry $L=L_{e}-L_{\mu}-L_{\tau}$ supplies
two degenerate nonzero masses and one identical to zero, within the
inverted hierarchy as well. However, the general case can be regarded
as a perturbation that softly breaks this lepton symmetry by introducing
small nonzero diagonal entries. The amazing feature seems to be the
unexpected similarity of these general results with the ones obtained
by the author in a particular 3-3-1 model with specific diagonal entries
(proportional to charged lepton masses).
|
2,877,628,091,162 | arxiv | \section*{\large{\textsf{Acknowledgement}}}
The author is grateful to Ingemar Bengtsson, Brett McInnes, Michael Good, Sabine Hossenfelder, and Carmen Li for useful comments and discussions. He also thanks Baocheng Zhang for pointing out a crucial mistake in the previous version of this work.
|
2,877,628,091,163 | arxiv | \section{\sc Introduction}
A finite permutation group $(G,\, \Omega)$ is {\it minimally
transitive } if $G$ is transitive on $\Omega$ while all proper
subgroups of $G$ are intransitive on $\Omega.$ Evidently, any
transitive permutation group contains minimally transitive
subgroups acting on the same set and so this concept occurs
naturally in reduction arguments. Closely related is the notion of
minimally irreducible linear groups, namely those linear groups
which act irreducibly on a vector space $V$ while all proper
subgroups leave some proper subspace of $V$ invariant.
Solvable minimally transitive groups were first considered by
Suprunenko~\cite{Sup1} and Kopylova~\cite{Kopy1} who studied the
groups of degree $pq$ with $p$ and $q$ primes. More recently
Lucchini~\cite{Lucch1} studied minimal generating sets in
minimally transitive groups, in connection with the asymptotic
properties of permutation groups considered in
Pyber~\cite{Pyber1}. In Ngo~\cite{Dak} non-regular metabelian
minimally transitive groups are investigated, and
Miller-Praeger~\cite{Praeg1} mention such groups in the
context of vertex transitive graphs which are not Cayley graphs. A
list of minimally transitive groups up to degree $30$ is available
in Hulpke~\cite{Hulp1}, see also Conway, Hulpke and
McKay~\cite{Hulp2}.
In this paper we consider solvable groups. Here in particular it happens
frequently that a group action is not faithful. Therefore we study more generally arbitrary {\it minimally transitive representations} which may or may not be faithful. This language requires technical detail which could detract from the main matter; wherever possible we therefore try to stay close to the language of permutation groups which may appear more natural.
Any transitive permutation group contains minimally
transitive subgroups and therefore it would be unreasonable to expect full classifications in general. However, under suitable restrictions some general results can be expected. For instance, for nilpotent groups there is a simple description of all their minimally transitive representations.
For a faithful action the primes dividing the order of a solvable group must divide the degree, see Theorem~\ref{1012}. In Sections 2 and 3 we prove some reduction theorems for subgroups and factor groups. In particular, a construction is given to reduce a general minimally transitive action to the case where the degree contains only two primes. A good result is also available for actions of square-free degree, extending the work of Suprunenko and Kopylova.
\section{\sc Minimally Transitive Groups}
Let $G\subseteq {\rm Sym\,}\Omega$ be a transitive permutation group
on a finite set $\Omega.$ Then $G$ is {\it minimally transitive} on
$\Omega$ if every proper subgroup of $G$ is intransitive on $\Omega.$
In the following we consider
more generally an abstract finite group $G$ together with all its
transitive actions, faithful or not. Thus if $A\subset G$ is a
subgroup of $G$ let $G$ act on the cosets $G\!:\!A$ of
$A$ in $G.$ The kernel of this action is the core
$K_{G:A}:=\bigcap_{g\in G}\,A^{g}$ of $A$ in $G.$
Thus $G$ acts {\it minimally transitively } on $G\!:\!A$ if and
only if every subgroup $H$ with $K_{G:A}\subseteq H \subset G$ acts
intransitively on $G\!:\!A.$ It will be convenient to call such a
subgroup $A$ an {\it mt-stabilizer} in $G$; we denote this as $A\subset_{\!\!{ \rm m}}
G.$ Therefore $A\subset_{\!\!{ \rm m}} G$ if and only if the following holds:
Whenever $H\subseteq G$ is transitive on $G\!:\!A$ then $HK_{G:A}=G.$
Evidently, if $A$ is an arbitrary subgroup of $G$ then $G/K_{G:A}$ always acts faithfully on $G\!:\!A$
and hence is a permutation group on $G\!:\!A.$ This permutation group then is minimally transitive
if and only if $A$ is an {\rm mt}-stabilizer.
For instance, if $A=1$ then $G$ is regular on $G\!:\!A$ and so $1\subset_{\!\!{ \rm m}}
G.$ For another example suppose that $|G|=pq$ with
distinct primes and Sylow subgroups $A\unlhd G$ and $B\,\,\,\,
/\!\!\!\!\!\!\unlhd G.$ Then $K_{G:A}=A$ and $A\subset_{\!\!{ \rm m}} G$ while
$K_{G:B}=1$ and $B\not\subset_{\!\!{ \rm m}} G.$
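The smallest instance of this example ($G=S_{3}$, $p=3$, $q=2$) can be verified by brute force, using the elementary fact (made explicit in Lemma \ref{0009} below) that $H$ is transitive on $G\!:\!A$ precisely when $AH=G.$ The following self-contained script (an illustration of ours, not part of the formal development) enumerates all subgroups and checks the two claims:
\begin{verbatim}
from itertools import combinations, permutations

def compose(f, g):              # permutations as tuples: (f*g)(x)=f(g(x))
    return tuple(f[g[x]] for x in range(len(g)))

G = set(permutations(range(3)))          # S_3, so p = 3 and q = 2

def is_subgroup(S):             # nonempty + closed suffices (finite group)
    return all(compose(a, b) in S for a in S for b in S)

subgroups = [set(S) for r in range(1, 7)
             for S in combinations(sorted(G), r) if is_subgroup(set(S))]

def inverse(g):
    return tuple(sorted(range(len(g)), key=lambda x: g[x]))

def core(A):                    # intersection of all conjugates of A
    return {a for a in A
            if all(compose(compose(g, a), inverse(g)) in A for g in G)}

def product(A, B):
    return {compose(a, b) for a in A for b in B}

def is_mt(A):                   # A is an mt-stabilizer iff AH=G => HK=G
    K = core(A)
    return all(product(H, K) == G
               for H in subgroups if product(A, H) == G)

A = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # the normal Sylow 3-subgroup
B = {(0, 1, 2), (1, 0, 2)}               # a (non-normal) Sylow 2-subgroup
print(is_mt(A), is_mt(B))                # -> True False
\end{verbatim}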
\vspace{5mm}
\subsection{\sc Preliminaries}
We begin by listing general properties of groups with minimally transitive action.
For the remainder let $G$ be a finite group and let $A$ be a
subgroup of $G.$ The property of being an {\rm mt}-stabilizer in $G$ is
quite special as it relates to the subgroup as well as its embedding in $G.$ Let
${\cal L}(G)$ denote the lattice of all subgroups of $G.$ We will
begin by describing some general properties of groups in ${\cal
L}(G)$ which are {\rm mt}-stabilizers in $G.$ The next lemma is technical
but essential; the first part we use later on without further
mention.
\begin{lem}
\label{0009}
(i) \,\,\, Let $A\subseteq G.$ Then $A\subset_{\!\!{ \rm m}} G$ if and only if
$AH=G$ for a subgroup $H\subseteq G$ implies that $HK_{G:A}=G.$\newline (ii) \,\,
Let $A \subset_{\!\!{ \rm m}} G$ and let $B\subseteq A.$ Then \,\,(a):\, $B\subset_{\!\!{ \rm m}} G$ or
\,\,(b):\, $K_{G:B}\neq K_{G:A},$ $BK_{G:A}\subset_{\!\!{ \rm m}} G$ and there exists a
subgroup $H\subseteq G$ with $HK_{G:B}\neq HK_{G:A}=G.$ In particular,
if
$A\subset_{\!\!{ \rm m}} G$ and $K_{G:A}\subseteq B\subseteq A$ then $B\subset_{\!\!{ \rm m}}
G.$
\end{lem}
\smallskip{\it Proof:} \,\,\, (i) Suppose that $A\subseteq G$ and that also $H\subseteq G.$
Then $H$ is transitive on $G\!:\!A$ if and only if $AH=G.$
Therefore by definition, if $A\subset_{\!\!{ \rm m}} G$ and if $AH=G$ then $HK_{G:A}=G.$
Conversely, if $AH=G$ implies that $HK_{G:A}=G$ then $H$ being
transitive on $G\!:\!A$ means that $AH=G$ and so $HK_{G:A}=G.$ Hence
$A\subset_{\!\!{ \rm m}} G.$\newline
(ii) Assume that $A \subset_{\!\!{ \rm m}} G$ and $B\subseteq A.$ If $B\not\subset_{\!\!{ \rm m}} G$ then there exists some $H$
such that $G=BHK_{G:B}$ but $G\neq HK_{G:B}.$ As $K_{G:B}\unlhd
K_{G:A}$ we have $G=AHK_{G:A}.$ If $HK_{G:A}\neq G$ then $A\not\subset_{\!\!{ \rm m}} G.$ Next
we compute the core $\bar K$ of $BK_{G:A}$ in $G.$ Evidently,
$K_{G:A}\subseteq \bar K\subseteq BK_{G:A}\subseteq A$ so that $\bar
K=K_{G:A}.$ If $BK_{G:A}\not\subset_{\!\!{ \rm m}} G$ then there exists a subgroup $\bar H$
such that $(BK_{G:A} )\bar H=G$ but $\bar H\bar K\neq G.$ But then
$A(K_{G:A} \bar H)=G$ and $\bar HK_{G:A}\neq G,$ a contradiction. If
$K_{G:A}\subseteq B\subseteq A$ then $K_{G:B}=K_{G:A}$ and hence the
second alternative cannot happen. \hfill $\Box$
\bigskip
When dealing with the set of all {\rm mt}-stabilizers in
the subgroup lattice of $G$ the following is a useful notion: If
$({\cal L},\,\leq)$ is any partially ordered set then a subset
${\cal M}$ of ${\cal L}$ is an {\it order ideal\,} in ${\cal L}$
if $X\in {\cal M}$ and $Y\leq X$ with $Y\in{\cal L}$ implies that
$Y\in {\cal M}.$
\begin{rem}
\label{5001} From the last part of the lemma we
deduce that the core-free {\rm mt}-stabilizers in $G$ form an
order ideal in the subgroup lattice ${\cal L}(G).$
\end{rem}
\medskip
It is therefore often sufficient to know the `top'
{\rm mt}-stabilizers, that is those which are maximal subject to being
an {\rm mt}-stabilizer. For instance, if $G$ is simple then the
{\rm mt}-stabilizers form an order ideal and this is described
completely by its top elements. We may also ask when such top
elements are maximal subgroups of $G.$ Evidently, $A$ is maximal
in $G$ precisely when $G$ acts primitively on $G\!:\!A.$ More
generally, $G$ acts {\it quasi-primitively} on $G\!:\!A$ if and
only if any subgroup $N$ with $K_{G:A}\neq N\unlhd G$ acts transitively on
$G\!:\!A.$ In particular, a transitive permutation group is quasi-primitive if all its
normal subgroups $\neq 1$ are transitive.
\begin{prop}
\label{1000}
Let $G$ be quasi-primitive on $G\!:\!A.$ If $A\subset_{\!\!{ \rm m}} G$ then
$G/K_{G:A}$ is simple. Equivalently, if $(G,\Omega)$ is a
quasi-primitive minimally transitive permutation group then $G$ is simple.
\end{prop}
\smallskip{\it Proof:} \,\,\, Suppose that $A\subset_{\!\!{ \rm m}} G.$ If $G\unrhd N\supset K_{G:A}$ then $N$ is
transitive on $G\!:\!A$ as $G$ is quasi-primitive on $G\!:\!A.$
Hence $N=G$ as $A\subset_{\!\!{ \rm m}} G.$\hfill $\Box$
\vspace{5mm}
\subsection{\sc A Reduction Theorem}
When studying minimal transitivity it is obviously useful to reduce a minimally transitive action $A\subset_{\!\!{ \rm m}} G$ to one of a smaller group or to one of smaller degree. Minimal transitivity lends itself to good reduction arguments of this kind for
normal subgroups. For this let $G$ be an arbitrary finite group
with an {\rm mt}-stabilizer $A\subset_{\!\!{ \rm m}} G$ of index $n$ in $G.$ Let $H$ be a
normal subgroup of $G$ with $K_{G:A}\subset H$ and $K_{G:A}\neq H\neq G.$
Then $H$ is not transitive on $\Omega:=G\!:\!A$ and the orbits of
$H$ on $\Omega$ are a system of imprimitivity for $G.$ So these
are of the shape $\Omega_{1},...,\Omega_{n^{*}}$ with
$|\Omega_{i}|=s$ and $n^{*}:=\frac{n}{s}.$ Let therefore
$\Omega^{*}:=\{\,\Omega_{i}\,\,|\,\,i=1,...,n^{*}\,\}.$
Let $\Omega_{1}$ be such that it contains the coset $1A$ and let
$B$ be the set-stabilizer of $\Omega_{1}.$ In other words, $B=AH$
and in particular $H\subseteq K_{G:B}.$ Now note that $B\subset_{\!\!{ \rm m}} G.$
For if $M\subseteq G$ satisfies $BM=G$ then $AHM=G.$ As $A\subset_{\!\!{ \rm m}} G$
we have $G=MHK_{G:A}.$ But by choice, $K_{G:A}\subseteq H$ so that $G=MH.$
As $H\subseteq K_{G:B}$ we get $G=MK_{G:B}$ and so $B\subset_{\!\!{ \rm m}} G.$
Equivalently, $G$ acts minimally transitively on $\Omega^{*}.$
Therefore we have the following:
\begin{thm} \label{reduct}
Let $A\subset_{\!\!{ \rm m}} G$ and suppose $H\neq G$ is normal in $G$ with
$K_{G:A}\subseteq H$ and $H\neq K_{G:A}.$ Then $A\neq AH\subset_{\!\!{ \rm m}} G.$ \end{thm}
It is worthwhile to formulate this statement in terms of permutation
groups. In conjunction with Proposition~\ref{1000} we have:
\begin{thm} \label{reductperm}
Let $G$ be a minimally transitive permutation group on $\Omega.$ If $G$ is
quasi-primitive on $\Omega$ then $G$ is simple. Otherwise, if $H$
is a proper normal subgroup of $G$ then $G$ acts minimally transitively
on the set of $H\!$-orbits. \end{thm}
In other words, a minimally transitive permutation group is either simple or it induces a minimally transitive action on the orbits of any proper normal subgroup. Another kind of reduction occurs for the action of
quotient groups, and this will be used later.
\begin{lem} \label{reduct2}
Let $N$ be a normal subgroup of $G$ and let $N\subseteq
A\subseteq G.$ Then $A/N\subset_{\!\!{ \rm m}} G/N$ if and only if $A\subset_{\!\!{ \rm m}} G.$
\end{lem}
\smallskip{\it Proof:} \,\,\, If $N\subseteq A\subseteq G$ then $K_{G/N:A/N}=K_{G:A}/N.$
Suppose that $A/N\subset_{\!\!{ \rm m}} G/N$ but $A\not \subset_{\!\!{ \rm m}} G.$ So there exist
$H\subseteq G$ with $G=AH$ and $G\neq HK_{G:A}.$ Consider
$G/N=A/N\,\cdot\,HK_{G:A}/N$ and evaluate $HK_{G:A}/N\,\cdot
\,K_{G/N:A/N}=HK_{G:A}/N\,\cdot\, K_{G:A}/N=HK_{G:A}/N\neq G/N,$ a
contradiction. Conversely, suppose that $A\subset_{\!\!{ \rm m}} G$ but that
$A/N\not \subset_{\!\!{ \rm m}} G/N.$ So there is a subgroup $N\subseteq H\subseteq
G$ with $G/N=A/N\,\cdot H/N$ and $H/N\,\cdot \,K_{G:A}/N=HK_{G:A}/N\neq
G/N.$ So $G=AH$ with $HK_{G:A}\neq G,$ a contradiction.\hfill $\Box$
\vspace{15mm}
\section{\sc Solvable Groups}
For the remainder of the paper we shall restrict ourselves to
minimally transitive representations of solvable groups. If $n$ is
an integer let $\pi(n)$ be the set of primes dividing $n.$ Similarly, $\pi(G)$ and $\pi(G\!:\!H)$ are the prime divisors in $|G|$ and $|G\!:\!H|$ respectively. Also, $|n|_{p}$ is the highest $p\!$-power dividing $n.$
The following theorem states the basic relation between $\pi(G)$ and the degree of any faithful minimally transitive action when $G$ is solvable. For nilpotent groups it
completely characterizes all minimally transitive
actions.
\begin{thm}
\label{1012}
\quad (i) Let $A\subset_{\!\!{ \rm m}} G$ such that $G/K_{G:A}$ is solvable.
Then $\pi(G\!:\!A)=\pi(G/K_{G:A}).$ In particular,
for a solvable minimally transitive permutation group $G$ of
degree $n$ we have $\pi(G)=\pi(n).$
(ii) Let $A\subset G.$ If $A/K_{G:A}$ is contained in the Frattini
subgroup of $G/K_{G:A}$ then $A\subset_{\!\!{ \rm m}} G.$ Conversely, if $G/K_{G:A}$ is
nilpotent and $A\subset_{\!\!{ \rm m}} G$
then $A/K_{G:A}$ is contained in the Frattini subgroup of $G/K_{G:A}.$
(iii) If $A\subset_{\!\!{ \rm m}} G$ and
$|G\!:\!A|=p^{i}$ for some prime $p$ then $G/K_{G:A}$ is a $p\!$-group
and $A/K_{G:A}$ is contained in the Frattini subgroup of $G/K_{G:A}.$
\end{thm}
\smallskip{\it Proof:} \,\,\, (i) Let $H$ be a Hall $\pi(G\!:\!A)\!$-subgroup of $G$. Then $AH=G$ as the
left hand side has order $|G|.$ As $A\subset_{\!\!{ \rm m}} G$ therefore $HK_{G:A}=G.$ As
$K_{G:A}\subseteq A$ therefore $\pi(G\!:\!A)=\pi(G:K_{G:A}).$
(ii) Suppose that $A\not\subset_{\!\!{ \rm m}} G.$ Then there exists some maximal subgroup $H'\supseteq K_{G:A}$ with $G=AH'$ and $H'\neq G.$ If in addition $A/K_{G:A}$ is contained in the Frattini subgroup of $G/K_{G:A}$ we have $A\subseteq H',$ a contradiction.
Conversely, if $G/K_{G:A}$ is nilpotent and if $H\supseteq K_{G:A}$ is a maximal subgroup of $G$ then $H$ is normal in $G.$ Therefore $AH\supseteq K_{G:A}$ is a group
and if $A\subset_{\!\!{ \rm m}} G$ then $AH\neq G,$ and hence
$A\subseteq H.$
(iii) This follows from (i) and (ii). \hfill $\Box$
\medskip
The next result is
a general splitting principle reducing representations of non-nilpotent groups to representations of subgroups, generally involving fewer primes. We denote the Fitting subgroup of $X$ by $F(X).$
\begin{thm}
\label{3001}
Let $G$ be a solvable group, suppose that $A\subset_{\!\!{ \rm m}} G$ is core-free and that $A$ is contained in $F=F(G).$ Let $\pi^{*}:=\pi(G:F)$ and let $Q$ be a Hall $\pi^{*}\!$-subgroup of $G.$ Suppose that $P$ is a normal Sylow $p\!$-subgroup of $G,$ let $A_{P}:=A\cap P$ and $A_{Q}:= A\cap Q.$ Then $p$ does not belong to $\pi^{*}$. Furthermore, $A_{Q}\times A_{P}$ is core-free in $Q^{*}P$ and $A_{Q}\times A_{P}\subset_{\!\!{ \rm m}} Q^{*}P$ for any conjugate $Q^{*}$ of $Q.$
Conversely, let $P_{1},\,P_{2},..,\,P_{t}$ be the normal Sylow $p_{i}\!$-subgroups of $G.$ Suppose there exists a subgroup $A_{Q}$ of $F\cap Q$ and subgroups $A_{P_{i}}\subseteq P_{i}$ such that $A_{Q}\times A_{P_{i}}\subset_{\!\!{ \rm m}} Q^{*}P_{i}$ is core-free in $Q^{*}P_{i},$
for all conjugates $Q^{*}$ of $Q$ and all $i=1,..,\,t.$ Then $A_{Q}\times A_{P_{1}}\times \cdots \times A_{P_{t}}\subset_{\!\!{ \rm m}} G$ is core-free in $G.$
\end{thm}
\medskip For instance, in the simplest case when $\pi^{*}=\{q\}$ we may take $p$ to be any prime in $\pi(n)\setminus \{q\}$ where $n=|G:A|.$ Then $Q\cap F$ is the Sylow $q\!$-subgroup of $F$ so that $A_{Q}$ is the Sylow $q\!$-subgroup of $A.$ Similarly, $A_{P}$ is the Sylow $p\!$-subgroup of $A$ and hence $A_{Q}\times A_{P}\subset_{\!\!{ \rm m}} QP$ is minimally transitive of degree $|n|_{q}|n|_{p}.$ Note, for at least one choice of $p$ the group $QP$ is not nilpotent, and evidently, groups of this type are at the basis of any induction in this case.
\medskip
\smallskip{\it Proof:} \,\,\, Evidently, as $P$ is a normal Sylow subgroup of $G$ we have $P\subseteq F.$ Put $K:=K_{QP:(A_{Q}\times A_{P})}.$ Then $K$ is centralized by every Sylow $r\!$-subgroup of $G$ for $r\neq p$ not dividing the order of $Q.$ Further, it is normalized by $QP$ and hence $K\subseteq A$ is a normal subgroup of $G.$ As $A$ is core-free, $K$ is trivial.
Now, suppose $A_{Q}\times A_{P}\not\subset_{\!\!{ \rm m}} QP.$ Then there exists a subgroup $Y\subseteq
QP$ such that $(A_{Q}\times A_{P})Y=QP$ but
$Y\neq QP.$ Let $S$ be the direct product of all Sylow $r\!$-subgroups of $F$ for $r\neq p$ not dividing the order of $Q.$ This group is characteristic in $F$ and so normal in $G.$ Therefore, $YS$ is a group and $A(YS)=(AY)S\supseteq(QP)S=QF=G.$ However, $YS\neq G.$ This is a contradiction, since $A\subset_{\!\!{ \rm m}} G.$ Finally note that $Q\cap F$ is normal in $F.$ Thus if $Q$ is replaced by $Q^{f}$ then $A_{Q^{f}}\times A_{P}\subset_{\!\!{ \rm m}} Q^{f}P.$ But as $A\subseteq F$ we have $A\cap Q=A\cap Q^{f}.$
Conversely, let $A= A_{Q}\times A_{P_{1}}\times \cdots \times A_{P_{t}}.$ Since the $A_{Q}\times A_{P_{i}}$ are core-free in $QP_{i}$, also $A$ is core-free in $G.$ To show that $A\subset_{\!\!{ \rm m}} G$ suppose that this is not the case. Let therefore $Y$ be a subgroup such that $G=AY$ but $Y\neq G.$ Thus $Y=Y^{*}(Y_{1}\times \cdots \times Y_{t})$ for $Y_{i}:=Y\cap P_{i}$ and $Y^{*}$ a Hall $\pi^{*}\!$-subgroup of $Y.$
Therefore $Y^{*}\subseteq Q^{f}$ for some $f\in F.$
Further, $G=(P_{1}\times \cdots \times P_{t})Q^{f}$ and since $Y=(Y_{1}\times \cdots \times Y_{t})Y^{*}$ we have $G=AY=(A_{Q}\times A_{P_{1}}\times \cdots \times A_{P_{t}})(Y_{1}\times \cdots \times Y_{t})Y^{*}.$ As $A_{Q}$ is a $\pi^{*}\!$-subgroup of $F$ it centralizes all terms other than $Y^{*}.$ Similarly, $Y_{i}$ centralizes all terms $A_{P_{j}}$ with $i\neq j.$ Therefore we can rewrite this as $(P_{1}\times \cdots \times P_{t})Q^{f}= (A_{P_{1}}Y_{1})\times \cdots \times (A_{P_{t}}Y_{t})(A_{Q}Y^{*}).$ For order reasons we have $A_{Q}Y^{*}=Q^{f}$ and
$A_{P_{i}}Y_{i}=P_{i}$ for $i=1,..,\,t.$
As $Y \neq G$ we have $Y\cap Q^{f}P_{r}=Y^{*}Y_{r}\neq Q^{f}{P_{r}}$ for at least one $r,$ say $r=1.$
Now consider $(A_{P_{1} }\times A_{Q})(Y^{*}Y_{1})
=(A_{P_{1}}Y_{1})(A_{Q}Y^{*})=P_{1}Q^{f}.$
This is a contradiction, since $A_{P_{1}}\times A_{Q}\subset_{\!\!{ \rm m}} P_{1}Q^{f}.$ \hfill $\Box$
\vspace{7mm}
{\sc Representations of Square-Free Degree:} \, Next we turn to representations of square-free degree. Here we get precise information on the Fitting subgroup.
\begin{thm}\label{square}
For a solvable group $G$ suppose that $A\subset_{\!\!{ \rm m}} G$ is core-free and has
square-free index $n$ in $G.$ Let $F$ be the Fitting subgroup of $G.$ Then $|F|$ is coprime to $|G:F|$ and all Sylow subgroups of $F$ are elementary abelian. In particular, $G$ is nilpotent if and only if $G$ is cyclic of order $n,$ with $A=1.$
Let $\pi^{*}=\pi(n)\setminus \pi(F)$ and let $n^{*}$ be the product of the primes in $\pi^{*}.$ If $C$ is a Hall $\pi^{*}\!$-subgroup of $A$ and if $Q$ is a Hall $\pi^{*}\!$-subgroup of $G$
containing $C$ then $|Q\!:\!C|= n^{*} $ and the action of $Q$ on $Q\!:\!C$ is permutationally equivalent to the action of $G$ on $G\!:\!AF.$
\end{thm}
\medskip
\smallskip{\it Proof:} \,\,\, If $n=p_{1}p_{2}\cdots p_{t}$ with pairwise distinct primes
$p_{i}$ then $\pi(G)=\{p_{1},\,p_{2},\,\dots, p_{t}\}$ by
Theorem~\ref{1012}. Let $N\neq 1$ be a $p\!$-subgroup of $G,$
say $p=p_{1},$ which is normal in $G.$ We claim that $N$ is a Sylow subgroup of $G.$ To prove this note that $N$ has $m:=\frac{n}{p}$ orbits
$\Omega_{1},..,\Omega_{s},..,\Omega_{m}$ on $\Omega:=G\!:\!A,$ all of length $|\Omega_{s}|=|N\!:\!N\cap A|=p.$
Let $S$ be a Sylow $p\!$-subgroup of $AN.$ As $AN$ is the setwise
stabilizer of the orbit $\Omega_{s}$ that contains $1A$ we have that
$p$ does not divide $|G\!:\!AN|.$ Hence $S$ is a Sylow
$p\!$-subgroup of $G.$ If $Q$ is a Hall $p'\!$-subgroup of $G$ then
$SQ=G$ so that in particular $G=(AN)Q=A(NQ)$ for order reasons.
As $A\subset_{\!\!{ \rm m}} G$ we have $(NQ)K_{G:A}=G$ but as $A$ is core-free we have
$NQ=G.$ Therefore $N=S$ is a Sylow subgroup of $G.$ For any $p\in \pi(F)$ let now $N$ be the unique Sylow $p\!$-subgroup of $F.$ Thus $N$ is normal in $G$ and hence is a Sylow $p\!$-subgroup of $G.$ It follows that $|F|$ is co-prime to $|G\!:\!F|.$ By the same argument $N$ is characteristically simple and hence elementary abelian. Evidently, if $G=F$ is nilpotent then $G$ is abelian, hence regular on $\Omega$ and so cyclic of order $n=|\Omega|.$
As $|F|$ is co-prime to $|G\!:\!F|$ we may assume for the remainder that $Q$ is a $\pi^{*}\!$-subgroup of $G$ complementing $F,$ with the further property that $Q\cap A=C$ is a $\pi^{*}\!$-subgroup of $A.$ Then $G=QF$ with $Q\cap F=1$ and $AF=CF$ with
$C\cap F=1$ implies that the action of $G$ on the $n^{*}$
cosets of $AF$ in $G$ is permutationally equivalent to the action of $Q$
on the cosets of $C.$ Hence $C\subset_{\!\!{ \rm m}} Q$ by Theorem~\ref{reduct}.
\hfill $\Box$
\bigskip
Some comments are in order. (1) \, While the
theorem could be formulated for permutation groups the resulting
action of $G$ on the cosets of $AN$ is not faithful, and the same
may be true for the action of $Q$ on the cosets of $C.$\newline
(2) \, As $G$ is solvable there is at least one normal
$p\!$-subgroup $N,$ as in the proof, and for this $p$ it is the unique normal
$p\!$-subgroup. This subgroup is elementary abelian, and $G$ acts
irreducibly on it. \newline (3) \, The basis of induction for square-free degrees occurs when $n$ is the product of two
distinct primes. A
complete analysis of the possibilities for $G$ can be found in
Suprunenko~\cite{Sup1} and Kopylova~\cite{Kopy1}. For the reader's benefit we collect their results here.
\begin{thm}\label{sk}
(Suprunenko~\cite{Sup1}) The permutation group $G$ is minimally transitive of degree $pq,$ where $q<p$ are prime numbers with $q$ not dividing
$(p-1),$ if and only if $G$ is isomorphic to one of the following\newline
(i) \,\,\, the cyclic group of order $pq,$\newline
(ii) \,\, a minimal non-abelian group $G=PQ,$ where $|Q|=q$ and $P$ is normal in $G,$ with $|P|=p^{m}$ where $m$ is the exponent of $p$ mod $q,$ or\newline
(iii) \, a minimal non-abelian group $G=PQ$ with $Q$ normal, $|P|=p$ and $|Q|=q^{r}$ where $r$ is the exponent of $q$ mod $p.$
\end{thm}
The remaining case where $q^{r}$ with $r>0$ is the highest power of $q$ dividing $(p-1)$ is analyzed in Kopylova~\cite{Kopy1}. Here a similar description is obtained and it is shown that $G$ is
(i) a group of order $q^tp$ with $0<t\leq r$; \,\, (ii) a group of order $q^{r+1}p^q$; or \,\, (iii) a group of order $pq^l$ where $l$ is the exponent of $q$ mod $p.$
\vspace{7mm}
{\sc Representations of Degree Involving Two Primes:} \,From the discussion so far it is clear that $\{p,\,q\}\!$-groups and their minimally transitive representations play a special role. So let $A\subset_{\!\!{ \rm m}} G$ be core-free with $\pi(G)=\{p,\,q\}.$ From Theorem~\ref{reduct} it is clear that any normal subgroup $N$ in $G$ gives rise to a minimally transitive representation $AN/N\subset_{\!\!{ \rm m}} G/N$ of degree $\leq |G:A|.$ Our first observation is the following
\begin{lem} \,
\label{4009}
Let $G$ be solvable and let $A\subset_{\!\!{ \rm m}} G$ be core-free in $G.$ Suppose that the prime $q$ divides $|G\!:\!A|$ to the first power only. Let
$N$ be a $q\!$-group which is normal in $G.$ Then $N$ is elementary abelian and is a Sylow subgroup, with $G$ acting irreducibly on $N.$ \end{lem}
\medskip
\smallskip{\it Proof:} \,\,\, Let $Q$ be a Sylow
$q\!$-subgroup containing $N$ and let $P$ be a $q'\!$-complement in $G=PQ.$ Note that the $N\!$-orbits on
$G\!:\!A$ are blocks of imprimitivity, all of equal size $q$
and $AN$ is the stabilizer of the orbit containing $1A.$ Therefore $q$ and
$|G\!:\!AN|$ are co-prime so that $AN$ contains some Sylow $q\!$-subgroups
of $G.$ From this we have $(AN)P=G,$ for order reasons. Since
$A\subset_{\!\!{ \rm m}} G$ and $A(NP)=G$ we have that $(NP)K_{G:A}=G$ but $K_{G:A}=1$ means
$NP=G.$ This says that $N$ is a Sylow $q\!$-subgroup of $G$ and hence $N=Q.$
Next replace $N$ by a minimal normal subgroup of $G.$ This group is elementary abelian. By the same argument $N$ has to be a Sylow $q\!$-subgroup of $G.$\hfill $\Box$
\bigskip The lemma suggests that the natural choice for a normal subgroup is indeed the Fitting subgroup of $G.$ We follow through with this process when
$A$ has index $p^{i}q.$ In this case, if either Sylow subgroup $S$ of $G$ is normal then $AS\subset_{\!\!{ \rm m}}
G$ gives a minimally transitive representation of a nilpotent group, and this situation is known from Theorem~\ref{1012}.
Otherwise none of the Sylow subgroups are normal and by Lemma~\ref{4009} $F_{1}=F(G)$ is a $p\!$-group. By Theorem~\ref{reduct} we have $AF_{1}\subset_{\!\!{ \rm m}} G$ and if $K\supseteq F_{1}$ is the core of $AF_{1}$ in $G$ then $AF_{1}/K\subset_{\!\!{ \rm m}} G/K$ is minimally transitive, faithful of degree $p^{j}q$ for $j<i$ and with $|G/K|_{p}<|G|_{p}.$ If $F_{2}$ is the pre-image of $F(G/F_{1})$ in $G$ then $F_{2}/F_{1}$ is a $q\!$-group. Thus, if $F_{2}$ is not contained in $K$ then Lemma~\ref{4009} shows that $F_{2}K/K$ is a normal Sylow $q\!$-subgroup of $G/K.$ In this case we are reduced to the nilpotent case.
Otherwise $K\supseteq F_{2}$ so that $|G/K|_{q}<|G|_{q}.$ The process stops when the group becomes nilpotent or when it is of Suprunenko-Kopylova type.
|
2,877,628,091,164 | arxiv |
\section{Introduction}
In today's digital society, protecting Internet privacy has become more important than ever.
The growing demand for online anonymity has resulted in advanced technical solutions to satisfy this need.
With around 2.2~million daily users~\cite{tor-metrics},
the Tor network~\cite{dingledine2004tor} is by far the most widely used anonymization network, as of today.
Tor builds upon the idea of \emph{onion routing}~\cite{goldschlag1996onionrouting}.
It consists of an overlay network connecting so-called relay nodes,
which can be used to establish anonymous connections.
To this end, the Tor client software builds a cryptographically-secured \emph{circuit},
a path over three relays, where each relay knows its immediate neighbors only.
This has several performance implications.
Due to re-routing the traffic multiple times through the overlay network,
an extra delay is inevitable to gain anonymity.
The performance---in terms of latency, data rates, and fairness---is however
suboptimal~\cite{DBLP:conf/uss/ReardonG09,DBLP:conf/p2p/DhungelSRHR10}.
One of the major shortcomings is the lack of
effective congestion control~\cite{DBLP:conf/pet/AlSabahBGGMSV11,DBLP:conf/p2p/DhungelSRHR10}
that minimizes network load and optimizes the user-perceivable performance.
While congestion control is a nontrivial task even for single connections,
relaying data over a series of nodes, as in Tor, amplifies the problem,
especially when rising delays occur in the network.
In particular, Tor relays are unable to react to congestion,
for example by signaling upstream to throttle sending rates.
Moreover, a growing demand for the Tor network will also result in
a growing need for effective congestion control to satisfy the users' expectations
as far as latency, throughput, and fairness are concerned.
Therefore, one also has to consider novel research approaches to meet these challenges.
With \emph{PredicTor}~\cite{PredicTor},
we introduce a new research direction towards congestion control
in multi-hop overlay networks like the Tor network.
PredicTor is the first system to apply
\emph{distributed Model Predictive Control}~(MPC)~\cite{Mota2012} to congestion control
in the Tor network.
MPC in general is a modern technique from the field of control theory
that uses predictions about the future system state as well as the repeated solving
of a formal optimization problem to achieve optimal behavior.
Predictions are deduced from a mathematical system model
that is instantiated with real-world measurements.
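As a toy illustration of the receding-horizon principle (for a single queue, so deliberately much simpler than PredicTor itself; all parameter values are made up), consider a controller that predicts the queue evolution $s_{t+1}=s_t+a_t-r_t$ over $T$ steps and picks send rates that trade throughput against queue length:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

T, C, lam = 5, 10.0, 0.5   # horizon, link capacity, queue-length weight

def mpc_step(s0, arrivals):
    """One receding-horizon step: plan send rates r_1..r_T for a queue
    with initial backlog s0 and predicted arrivals a_1..a_T."""
    # s_t = s0 + sum_{k<=t} (a_k - r_k) >= 0 turns into the cumulative
    # constraint sum_{k<=t} r_k <= s0 + sum_{k<=t} a_k.
    A_ub = np.tril(np.ones((T, T)))
    b_ub = s0 + np.cumsum(arrivals)
    # Minimize lam * sum_t s_t - sum_t r_t; eliminating the s_t, the
    # cost coefficient of r_k is -(1 + lam * (T - k + 1)).
    c = np.array([-(1.0 + lam * (T - k + 1)) for k in range(1, T + 1)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, C)] * T)
    return res.x

print(mpc_step(s0=25.0, arrivals=np.full(T, 2.0)))
# roughly [10 10 10 3 2]: the backlog is drained as early as possible
\end{verbatim}
Only the first rate of the returned plan would be applied before the problem is solved again with fresh measurements.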
For applying MPC in the context of multi-hop congestion control,
we \emph{distribute} it among relays.
That is, each relay solves the optimization problem with its \emph{local} view of the network,
but controllers%
\footnote{The term \enquote{controller} refers to the local application of control techniques.
It does not imply a centralized entity.}
cooperate by exchanging their predictions to establish network-wide behavior.
In contrast to the current behavior of Tor,
PredicTor avoids congestion by generating \emph{backpressure}.
By relying on a formal definition of the optimization goal,
it becomes possible to optimize the congestion control
within the network for specific optimization objectives.
In PredicTor, we put a special emphasis on low latency and fairness in the network,
because these have previously been identified
to be especially problematic in the Tor network~\cite{tschorsch11maxmin,DBLP:conf/uss/ReardonG09}.
While optimization-based rate allocation has been researched before,
with equivalent formulations for TCP and other methods~\cite{He2007},
we introduce a novel optimization-based \emph{max-min fairness} formulation.
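For reference, the notion of max-min fairness that PredicTor targets can be illustrated with the textbook progressive-filling algorithm for a single shared resource (the sketch below is ours and is not PredicTor's optimization-based formulation from Section~\ref{sec:mpc_formulation}):
\begin{verbatim}
def max_min_fair(demands, capacity):
    """Progressive filling: allocate capacity max-min fairly."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    left = capacity
    while active:
        share = left / len(active)
        # flows whose demand fits within the equal share are capped
        capped = {f for f in active if demands[f] <= alloc[f] + share}
        if not capped:
            for f in active:
                alloc[f] += share
            break
        for f in capped:
            left -= demands[f] - alloc[f]
            alloc[f] = demands[f]
        active -= capped
    return alloc

print(max_min_fair({"a": 1.0, "b": 4.0, "c": 10.0}, 9.0))
# {'a': 1.0, 'b': 4.0, 'c': 4.0}
\end{verbatim}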
In addition to presenting PredicTor,
the goal of this paper is to bring the underlying
control-theoretic approach to the network community.
We pinpoint the merits and the potential of applying distributed MPC to congestion control,
but also point out current shortcomings thereof.
Our work should be understood as a concept study for this novel field
rather than a ready-to-deploy finished technical solution.
We envision opening up new directions and fostering the development
of novel, innovative techniques for congestion control.
Our evaluation reveals that PredicTor is able to clearly reduce latency:
In a small model scenario, it achieves a latency reduction from 553~ms (vanilla Tor) to 94~ms.
In larger, random networks, the advantage becomes even more apparent because,
in contrast to traditional approaches,
latency does not significantly grow with growing congestion.
At the same time, PredicTor consistently realizes near-perfect max-min fairness.
However, we show that this comes at the cost of lower throughput and more signaling overhead. %
The contributions are summarized as follows.
\begin{itemize}
\item We introduce \emph{PredicTor} for congestion control in the Tor network based on distributed MPC.
Comparing to~\cite{PredicTor}, we add an important change
to its optimization problem that strengthens its robustness.
\item We present a novel, optimization-based formulation of max-min fairness
and leverage it as an optimization goal in PredicTor.
\item We implement a prototype of PredicTor to enable experimental assessment of its behavior.
Our implementation is made available as an open source software project.\footnote{\texttt{https://github.com/cdoepmann/PredicTor}}
\end{itemize}
PredicTor was first introduced in~\cite{PredicTor}.
In this journal paper, we especially focus on PredicTor's significance for networking research by covering the following additional aspects.
\begin{itemize}
\item We introduce optimization-based predictive congestion control to the networking community.
\item We discuss possible security and privacy implications of PredicTor.
\item We provide a simulation study of PredicTor's performance in complex network scenarios,
analyzing whether its optimization goals can be realized in non-trivial environments.
\item We leverage the results as well as a thorough investigation of PredicTor's
underlying assumptions to identify the benefits as well as potential drawbacks
of this new approach towards congestion control.
\end{itemize}
This paper is structured as follows:
We start by presenting the preliminaries for our approach,
including related terminology and mathematical notation, in Section~\ref{sec:preliminaries}.
Section~\ref{sec:mpc_formulation} introduces PredicTor itself in full detail,
including our novel optimization-based method for obtaining max-min fairness
and the dynamic system model.
We discuss security and privacy implications of PredicTor in Section~\ref{sec:security}.
In Section~\ref{sec:eval}, we present our evaluation of PredicTor,
focusing first on small scenarios to demonstrate its functioning
before we leverage larger-scale simulations of complex networks for deeper insights.
In this section, we also discuss the implications for further research on congestion control using MPC.
We complete our work by presenting related work in Section~\ref{sec:related-work}
and summarizing this contribution in Section~\ref{sec:conclusion}.
\section{Preliminaries} \label{sec:preliminaries}
\subsection{The Tor Network}
Facing today's growing need for online privacy,
Tor constitutes an essential tool for applications that require anonymity on the Internet.
It is an \emph{overlay network} that makes use of
the principle of \emph{onion routing}~\cite{goldschlag1996onionrouting}.
It achieves anonymity by tunneling users' data through the network,
over a series of relays, called a \emph{circuit}.
Onion routing ensures that each relay in the circuit only knows its immediate predecessor and successor.
As a consequence, destinations cannot identify the origin of streams of communication they receive.
Clients typically choose a sequence of three random relays for constructing a circuit.
The necessary resources (service and bandwidth) are contributed by volunteers
and are not subject to a central authority.
More formally, we introduce Tor as an overlay network graph~$G(N,E)$
where $N$ denotes the set of nodes and $E$ the set of overlay links.
The network has a total of $|N|=n$~nodes and $|E|=e$~connections.
We denote by~$P$ the set of Tor circuits, with $i \in P$
being the $i$-th circuit of this set of cardinality $|P|=p$.
$P_{\alpha} \in P$ denotes the subset of circuits traversing node~$\alpha \in N$.
Generally, we refer to circuits with Roman letters and to nodes with Greek letters.
When considering the network at the circuit level,
we denote the data rate of circuit~$i$ with $r_i$ (in packets per second).
Furthermore, each node~$\alpha \in N$ of the overlay network
has a limited capacity~$C_{\alpha}$, since overlay connections share the same physical connection.
Each node~$\alpha \in N$ can receive, store, and send data from each circuit~$i \in P_{\alpha}$.
We denote by $s_{\alpha, i}$ the circuit queue (storage in number of packets)
in node~$\alpha$ for circuit~$i$
and the vector with all queues for each circuit
in node~$\alpha$ as $s_{\alpha} \in \mathbb{N}^{|P_{\alpha}|}$.
A useful metric to measure the congestion in the network is given by the \emph{backlog}, which captures the amount of data that is on its way through the network. In terms of our formal description, we define it as follows:
\begin{definition}
The data backlog~$b$ of a network~$G(N,E)$ is computed for all nodes~$\alpha \in N$ and all circuits~$i\in P$ as:
\begin{equation}
b = \sum_{\alpha \in N} \sum_{i \in P} s_{\alpha,i}.
\end{equation}
\end{definition}
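To make the definition concrete, the following minimal Python sketch transcribes it directly; the data layout (a mapping from node--circuit pairs to queue sizes) is our own illustration and not part of any implementation.
\begin{verbatim}
def backlog(queues):
    # queues maps (node, circuit) -> s_{alpha,i} in packets;
    # the backlog b is the sum over all nodes and circuits.
    return sum(queues.values())

# Example: two circuits queued at node 'alpha', one empty queue at 'beta'.
print(backlog({('alpha', 1): 4, ('alpha', 2): 7, ('beta', 1): 0}))  # -> 11
\end{verbatim}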
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\tikzset{
written/.style={text=black,font=\scriptsize},
circuit/.style={ultra thick},
}
\def\includegraphics[height=3.0em]{figures/onion2}{\includegraphics[height=3.0em]{figures/onion2}}
\def0.6{1.0}
\def\drawtopology{%
\node (bottleneck) {\includegraphics[height=3.0em]{figures/onion2}};
\node[left=0.6*7em of bottleneck] (sender 1) {\includegraphics[height=3.0em]{figures/onion2}};
\node[below=0.5em of sender 1] (sender 2) {\includegraphics[height=3.0em]{figures/onion2}};
\node[right=0.6*7em of bottleneck] (receiver 2) {\includegraphics[height=3.0em]{figures/onion2}};
\node[above=0.5em of receiver 2] (receiver 1) {\includegraphics[height=3.0em]{figures/onion2}};
\node[below=0.5em of receiver 2] (receiver 3) {\includegraphics[height=3.0em]{figures/onion2}};
\begin{scope}[on background layer]
\draw[->,circuit,color=color0,transform canvas={yshift=-6pt}]
($(sender 1.center) + (-4em*0.6,+6pt)$) --
($(bottleneck.center) + (0,+6pt)$) --
node[above,sloped,written] {circuit 1}
(receiver 1.center) --
++(4em*0.6,0);
\draw[->,circuit,color=color1,transform canvas={yshift=-6pt}]
($(sender 1.center) + (-4em*0.6,0)$) --
(sender 1.center) --
(bottleneck.center) --
node[above,sloped,written] {circuit 2}
(receiver 2.center) --
++(4em*0.6,0);
\draw[->,circuit,color=color2,transform canvas={yshift=-6pt}]
($(sender 2.center) + (-4em*0.6,0)$) --
(sender 2.center) --
($(bottleneck.center) + (0,-6pt)$) --
node[above,sloped,written] {circuit 3}
(receiver 3.center) --
++(4em*0.6,0);
\end{scope}
}
\begin{figure}
\centering
\begin{tikzpicture}
\drawtopology
\node[above=-5pt of bottleneck,written] {bottleneck};
\node[fit={($(sender 2.south west) + (-0.5em,0.5em)$) ($(receiver 1.north east) + (0.5em,0)$)}] (container) {};
\draw[dashed,thick] (container.north west) -- (container.south west);
\draw[dashed,thick] (container.north east) -- (container.south east);
\node[below left=3pt of container.north west,written] {\bfseries senders};
\node[below right=3pt of container.north east,written] {\bfseries receivers};
\end{tikzpicture}
\caption{Example Tor topology (toy scenario).}
\label{fig:example-topology}
\end{figure}
For various parts of this paper, we focus on a small example topology of circuits,
depicted in Figure~\ref{fig:example-topology}.
It consists of two sending relays that carry three circuits,
whose traffic streams meet in a shared node, constituting a bottleneck.
Afterwards, the three circuits go to distinct destination relays.
We found this simple topology to be useful
for evaluating the behavior of congestion control in Tor
because it represents a typical situation in which a single relay is overloaded by several circuits.
The scenario is still simple enough to understand the decisions made by congestion control.
\subsection{Fairness}
When considering performance in networks,
the immediate measures that come into mind are throughput and latency.
However, another important metric is fairness.
Especially in overlay networks like Tor, where many data transfers compete for the available resources,
fairness is important to guarantee an adequate user experience to the majority of users.
Different fairness measures have been put forward~\cite{bertsekas1992data}.
We here focus on the strong notion of \emph{max-min fairness}
as it has been proposed as a fairness goal for the Tor network.
To define it formally, we first introduce the notion of \emph{feasibility}---%
that is, distributions of data rates that can actually be realized in the network~\cite{bertsekas1992data}.
\begin{definition}
\label{def:feasible_rate}
A rate vector $r=[r_1, r_2, \dots, r_p]$ is feasible if:
\begin{align}
\forall i \in P:& \quad 0\leq r_i \quad \text{and}\\
\forall \alpha \in N:& \quad \sum_{i\in P_\alpha} r_i \leq C_{\alpha}.
\label{eq:feasible_rate_capacity_limit}
\end{align}
We denote by $R_f$ the set of feasible rate vectors.
\end{definition}
This allows us to define max-min fairness as follows:
\begin{definition}
\label{def:max_min_fair}
A feasible rate vector~$r^f\in R_f$ is called max-min fair,
if for all circuits~$i\in P$ and for all other feasible rates~$\bar{r}\in R_f$ it holds that:
\begin{equation}
\bar{r}_i > r_i^f \Rightarrow \exists \, j\in P: r_{j}^f \leq r_i^f \land \bar{r}_j < r_j^f.
\end{equation}
\end{definition}
This definition means that if a rate vector~$r^f$ is max-min fair,
any other feasible rate vector that increases the rate of some circuit~$i$
necessarily reduces the rate of a circuit~$j$
whose rate is already no larger than that of circuit~$i$.
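An equivalent operational view of Definition~\ref{def:max_min_fair} is the classic progressive-filling procedure~\cite{bertsekas1992data}: all rates are increased uniformly until some node saturates, the circuits crossing it are frozen at their current rate, and the process repeats. The following Python sketch (our own illustration; the data layout is hypothetical, and every circuit is assumed to traverse at least one node listed in \texttt{capacities}) computes a max-min fair rate vector this way:
\begin{verbatim}
def max_min_fair(capacities, circuits, eps=1e-9):
    # capacities: node -> C_alpha; circuits: circuit id -> set of nodes.
    rates = {i: 0.0 for i in circuits}
    active = set(circuits)            # circuits not yet bottlenecked
    residual = dict(capacities)       # remaining capacity per node
    while active:
        # number of active circuits crossing each node
        load = {a: sum(1 for i in active if a in circuits[i])
                for a in residual}
        # largest uniform increment any loaded node can still grant
        inc = min(residual[a] / load[a] for a in residual if load[a] > 0)
        for i in active:
            rates[i] += inc
        for a in residual:
            residual[a] -= inc * load[a]
        # freeze all circuits that cross a saturated node
        saturated = {a for a in residual
                     if residual[a] <= eps and load[a] > 0}
        active -= {i for i in active if circuits[i] & saturated}
    return rates

# Toy scenario of Figure 1: three circuits share a bottleneck of capacity 9.
print(max_min_fair({'bottleneck': 9.0},
                   {1: {'bottleneck'}, 2: {'bottleneck'},
                    3: {'bottleneck'}}))   # -> {1: 3.0, 2: 3.0, 3: 3.0}
\end{verbatim}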
\subsection{Model Predictive Control}\label{ssec:MPC}
In this subsection, we briefly review the concept of model predictive control~(MPC),
which is used to formulate our proposed congestion controller.
Fundamental for this approach is the notion of a \emph{dynamic system}:
\begin{equation}\label{eq:dynamic_system}
x^{k+1} = f(x^k, u^k, p^k),
\end{equation}
which relates a state $x^k \in \mathbb{R}^n$,
input $u^k \in \mathbb{R}^m$,
and parameter~$p^k\in \mathbb{R}^p$ at sampling time~$k$
to the state at the next sampling time $k+1$.
Under the assumption that the model~$f(x^k,u^k, p^k)$ accurately describes the system,
Equation~\eqref{eq:dynamic_system} can be used to compute the future states of the system,
given the initial state and the sequences of inputs and parameters.
Based on the dynamic system in \eqref{eq:dynamic_system}, we introduce the finite horizon optimal control problem (OCP):
\begin{subequations}\label{eq:MPC_problem_general}
\begin{align}
\label{eq:MPC_problem_general_01}
\min_{\textbf{u}, \textbf{x}}\quad &\sum_{k=0}^{N_{\text{horz}}}
l(x^k,u^k,p^k)\\
\text{\textbf{subject to}:}\quad && \nonumber\\
\text{state dynamics:} \quad & x^{k+1} = f(x^k, u^k, p^k),
\label{eq:MPC_problem_general_02}\\
\text{constraints:}\quad & g(x^k,u^k,p^k) \leq 0 \quad \forall k =0,\dots, N_{\text{horz}}
\label{eq:MPC_problem_general_03}\\
\text{initial conditions:}\quad & x^0 = x_{\text{init}}.
\end{align}
\end{subequations}
In this problem, we are optimizing over finite sequences of inputs $\mathbf{u}=[u^0,\dots, u^{N_{\text{horz}}}]$
and states $\mathbf{x}=[x^0,\dots, x^{N_{\text{horz}}+1}]$.
The optimal solution is obtained for a given initial state $x_{\text{init}}$
and a sequence of parameters $\mathbf{p}=[p^0,\dots, p^{N_{\text{horz}}}]$.
Note that we use bold letters to denote trajectories.
The objective is to minimize an arbitrary cost function under the consideration of additional constraints.
Typically, this cost function consists of individual contributions for each step of the horizon,
as shown in~\eqref{eq:MPC_problem_general_01}.
The previously introduced dynamic system \eqref{eq:dynamic_system} is considered in~\eqref{eq:MPC_problem_general_02}
as an equality constraint.
Additionally, we have in~\eqref{eq:MPC_problem_general_03} inequality constraints on states and inputs, possibly under consideration of the parameters.
This possibility to explicitly formulate constraints
is a major advantage of MPC over alternative advanced control techniques.
As the solution of the OCP (see~\eqref{eq:MPC_problem_general}),
we obtain the predicted future sequence of states and the respective sequence of inputs.
For the control application, the first element of the sequence of inputs, i.e.\@\xspace $u^0$, is applied to the system,
typically in the form of a constant value over a finite sampling time.
After this sampling time, the new state of the system is obtained and, together with the updated sequence of parameters,
the OCP in~\eqref{eq:MPC_problem_general} is solved again.
Feedback through this \emph{closed-loop} application allows the controller to react robustly to disturbances
and mitigates potential mismatches between the model and the controlled system.
MPC is also a popular method to deal with distributed control systems.
In this application, multiple controllers make local decisions and attempt to achieve global control goals
by communicating their decisions.
In particular, the distributed MPC controllers can exchange their predicted future states and inputs
which are obtained as a byproduct when computing the current input to the system.
Connected controllers can consider this information as additional parameters in~\eqref{eq:MPC_problem_general}.
Knowledge of the future actions of connected controllers has the significant advantage that the effect of communication delay can be mitigated.
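To illustrate the receding-horizon principle, the following self-contained Python sketch controls a single toy queue with dynamics in the form of \eqref{eq:dynamic_system}: at every step it solves a small OCP numerically and applies only the first input. All numbers, the cost weights, and the use of a generic solver are hypothetical choices for illustration and are unrelated to the controller proposed in Section~\ref{sec:mpc_formulation}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

dt, N_horz = 1.0, 10
r_in = np.full(N_horz, 3.0)        # predicted inflow, playing the role of p^k

def rollout(s0, u):                # queue dynamics s^{k+1} = s^k + dt*(r_in - u)
    s = [s0]
    for k in range(N_horz):
        s.append(s[-1] + dt * (r_in[k] - u[k]))
    return np.array(s)

def cost(u, s0):                   # keep the queue small, penalize large rates
    return np.sum(rollout(s0, u)[1:] ** 2) + 0.01 * np.sum(u ** 2)

s = 20.0                           # measured initial state x_init
for step in range(5):              # closed loop: solve, apply first input, repeat
    res = minimize(cost, x0=np.full(N_horz, 3.0), args=(s,),
                   bounds=[(0.0, 10.0)] * N_horz)   # 0 <= u <= capacity
    u0 = res.x[0]                  # only the first element is applied
    s = s + dt * (r_in[0] - u0)    # plant evolves, then we re-measure
    print(f"step {step}: u0 = {u0:.2f}, queue = {s:.2f}")
\end{verbatim}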
\section{PredicTor}\label{sec:mpc_formulation}
In this section, we introduce PredicTor, our newly proposed congestion controller for the Tor network.
PredicTor is developed with the following objectives in mind:
Primarily, we are aiming to avoid congestion by limiting the data backlog of circuits,
and secondly, we want to achieve \emph{max-min fairness} of the network.
To this end, we first present an optimization-based method to obtain max-min fairness
of an overlay network (Theorem~\ref{theo:max_min_optim}) in Subsection~\ref{ssec:optim_fairness},
which we have previously derived in~\cite{PredicTor}.
However, the presented Theorem~\ref{theo:max_min_optim} cannot directly be used for congestion control,
as it would require global knowledge and control authority of the Tor network.
Instead, it serves as the basis for our proposed distributed MPC formulation
for which we establish the preliminaries in Subsection~\ref{ssec:feedback_distr_mpc}.
In particular, we introduce the states, inputs, and system dynamics as well as the
concept of information exchange between adjacent nodes.
In comparison to our previous work~\cite{PredicTor},
this concept has been extended to address several shortcomings in previously unconsidered situations.
Most importantly, PredicTor nodes are now capable of requesting an exact rate increase from their successor nodes.
The full optimal control problem is then stated in Subsection~\ref{ssec:ocp}.
Finally, we discuss the interaction of PredicTor and a Tor relay in Subsection~\ref{ssec:controller_tor_interact}.
\subsection{Optimization-based Fairness}\label{ssec:optim_fairness}
We present an optimization-based method (Theorem~\ref{theo:max_min_optim}) to achieve max-min fairness.
For this, we first introduce the formal notion of a \emph{bottleneck}.
\begin{definition}
\label{def:bottleneck}
For a circuit~$i\in P_{\alpha}$ and a rate vector~$r$, we denote node~$\alpha \in N$ a bottleneck of circuit~$i$ if:
\begin{equation}
\sum_{j \in P_{\alpha}} r_j = C_{\alpha} \quad \text{and} \quad \forall j \in P_{\alpha}: \ r_i \geq r_j.
\end{equation}
\end{definition}
\begin{lemma}
\label{lemma:min_max_bottleneck}
Let $r^f$ be a max-min fair rate vector. Each circuit~$i \in P$ has exactly one bottleneck.
This bottleneck is the global rate-limiting factor of the circuit under stationary conditions.
\end{lemma}
\begin{proof}
The proof is shown in~\cite{bertsekas1992data}.
\end{proof}
This allows us to state the following theorem, which we previously introduced in~\cite{PredicTor}.
\begin{theorem}
\label{theo:max_min_optim}
An overlay network achieves max-min fairness with rate~$r = r^{\text{max}} - \Delta r$ as the optimal solution of:
\begin{equation}
\label{eq:opt_max_min_fairness}
\begin{aligned}
c= \min_{\Delta r} \sum_{i \in P}& \Delta r_i^2\\
\text{subject to:}\quad
r^{\text{max}} - \Delta r&\in R_f,\\
0\leq \Delta r &\leq r^{\text{max}},
\end{aligned}
\end{equation}
where $\Delta r$ is an auxiliary variable that can be interpreted as the unused rate with respect to the arbitrary upper limit~$r^{\text{max}}$, which must satisfy
$r^{\text{max}}\geq\max(C_1, C_2, \dots, C_n)$.
\end{theorem}
\begin{proof}
The proof is presented in~\cite{PredicTor}.
\end{proof}
Theorem~\ref{theo:max_min_optim} thus allows us
to obtain the global \emph{max-min fair} rate $r$ of an overlay network
as the solution of a convex optimization problem.
Intuitively, the formulation in Theorem~\ref{theo:max_min_optim} works because we are minimizing the rate that is not allocated, with respect to some arbitrary upper limit and under consideration of feasible rates. The quadratic term results in fairness because it is always desirable to allocate a higher rate (i.e.\@\xspace, reduce $\Delta r$) to the circuit with the smallest rate (i.e.\@\xspace, with the highest $\Delta r$).
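Problem~\eqref{eq:opt_max_min_fairness} is a small convex program and can be solved with off-the-shelf tools. As an illustration, the following sketch formulates it with CasADi and IPOPT (the stack we also use for our prototype, cf.\ the implementation notes in Section~\ref{sec:eval}); the topology matrix and capacities are made-up toy values:
\begin{verbatim}
import numpy as np
import casadi as ca

# Rows = nodes, columns = circuits: A[a, i] = 1 if circuit i traverses node a.
A = ca.DM([[1, 1, 1],    # bottleneck:        r_1 + r_2 + r_3 <= 9
           [0, 1, 1]])   # shared sender:           r_2 + r_3 <= 12
C = [9.0, 12.0]
r_max = 15.0             # any value >= max(C_1, ..., C_n) is admissible

dr = ca.SX.sym('dr', 3)                          # unused rate, Delta r
nlp = {'x': dr, 'f': ca.sumsqr(dr), 'g': ca.mtimes(A, r_max - dr)}
solver = ca.nlpsol('fairness', 'ipopt', nlp)
sol = solver(lbx=0, ubx=r_max, lbg=-np.inf, ubg=C)
print('max-min fair rates:', (r_max - sol['x']).T)
# -> [3, 3, 3]: the bottleneck of capacity 9 is split evenly.
\end{verbatim}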
\subsection{Distributed MPC}\label{ssec:feedback_distr_mpc}
In this subsection, we present the preliminaries for the statement of the PredicTor optimal control problem.
In particular, we define states, inputs, and the dynamic system equation
and introduce our concept for distributed MPC.
This includes the question of which information is exchanged and how it is incorporated
into the optimal control problem to achieve our previously defined control goals.
For the interaction of multiple nodes, we denote by $\alpha \in N$ the currently considered node, with connections to predecessor ($\beta$) and successor ($\gamma$) nodes.
For the current node~$\alpha$, it is irrelevant whether the incoming data comes from several nodes or only from a single node.
To simplify the notation, we assume that all incoming data (even for different circuits)
comes from a single predecessor node~$\beta$ and is forwarded to a single successor node~$\gamma$.
The interaction of multiple controllers is illustrated in Figure~\ref{fig:feedback_exchange}
and will be discussed in the following.
\begin{figure}
\footnotesize
\centering
\includegraphics[width=1\linewidth]{./graphics/feedback_exchange_04}
\caption{Information exchange (predicted future trajectories in bold font) of node~$\alpha$ with adjacent nodes.
From the perspective of $\alpha$ all predecessor nodes are summarized as $\beta$ and all successor nodes as $\gamma$,
for the sake of a concise notation.
The rate at which data is sent at time~$k$ for circuit~$i$ at node~$\alpha$ is $r_{\text{out},\alpha, i}^k$.
}
\label{fig:feedback_exchange}
\end{figure}
The controller at node $\alpha$ takes local decisions regarding incoming and outgoing rates,
under consideration of predicted future actions from the adjacent nodes.
Predictions are obtained on the basis of dynamic models in the form of \eqref{eq:dynamic_system}.
We first introduce the state~$s_{\alpha}^k$ that denotes the queue size
for all circuits $i\in P_{\alpha}$ in node~$\alpha$ and at time step $k$.
The dynamic model equation can be written as:
\begin{align}
\label{eq:mpc_model}
s_{\alpha}^{k+1} &= s_{\alpha}^k + \Delta t(r_{\text{in}, \alpha}^k - r_{\text{out}, \alpha}^k),
\end{align}
where $\Delta t$ denotes the sampling time.
As inputs in \eqref{eq:mpc_model}, we introduce the incoming $r_{\text{in},\alpha}$ and outgoing $r_{\text{out},\alpha}$ rate.
For the optimal control problem, we first introduce trivial constraints for the queue size:
\begin{equation}\label{eq:cons_queue_size}
0\leq s_{\alpha}^k \leq s_{\alpha}^{\text{max}},
\end{equation}
for the rates:
\begin{subequations}
\begin{align}
\label{eq:predictor_r_in_positive}
0 &\leq r_{\text{in},\alpha}^k\\
\label{eq:predictor_r_out_positive}
0 &\leq r_{\text{out},\alpha}^k,
\end{align}
\end{subequations}
and for the link capacities:
\begin{subequations}\label{eq:predictor_link_capacity_constraints}
\begin{align}
\label{eq:predictor_link_capacity_constraints_in}
\sum_{i\in P_{\alpha}} &r_{\text{in},\alpha,i}^k \leq C_{\alpha}^{\text{in}}\\
\label{eq:predictor_link_capacity_constraints_out}
\sum_{i\in P_{\alpha}} & r_{\text{out},\alpha,i}^k \leq C_{\alpha}^{\text{out}}.
\end{align}
\end{subequations}
Furthermore, we have additional constraints regarding the incoming and outgoing rates which depend
on the actions of the adjacent nodes.
From the successor node $\gamma$, we receive information about $\mathbf{r}_{\text{in},\gamma}^k$
which limits the local outgoing rate as follows:
\begin{equation}
\label{eq:mpc_pred_r_out_max}
{r}_{\text{out},\alpha}^{k} \leq
{r}_{\text{out},\alpha}^{\text{max},k}
={r}_{\text{in},\gamma}^{k-1}.
\end{equation}
Note that we consider $\mathbf{r}_{\text{in},\gamma}^{k-1}$
(the successor's information from the \emph{previous} time step $k-1$)
since the information is delayed.
The constraints for the incoming rate $\mathbf{r}_{\text{in}, \alpha}^k$ are considerably more complex.
From the predecessor node~$\beta$,
we receive the predicted outgoing rate $\mathbf{r}_{\text{out},\beta}^k$,
the predicted circuit queue $\mathbf{s}_{\beta}^k$
and a virtual outgoing rate $\mathbf{\hat r}_{\text{out},\beta}^k$.
With this information, we constrain $\mathbf{r}_{\text{in}, \alpha}^k$ in two ways.
First, we introduce an additional variable $\tilde{s}_{\alpha|\beta}^{k}$, which denotes
the estimated queue size of node $\beta$ from the perspective of node $\alpha$ at time $k$.
To express $\tilde{s}_{\alpha|\beta}^{k}$, we introduce the state variable $\Delta s_{\alpha|\beta}$, such that
\begin{subequations}
\label{eq:mpc_pred_predecessor}
\begin{align}
\tilde{s}_{\alpha|\beta}^{k} &= s_{\beta}^k - \Delta s_{\alpha|\beta}^{k},
\intertext{with the dynamic system equation in the form of \eqref{eq:dynamic_system}:}
\Delta s_{\alpha|\beta}^{k+1} &= \Delta s_{\alpha|\beta}^{k} +\Delta t( r_{\text{in},\alpha}^k - r_{\text{out},\beta}^k).
\label{eq:mpc_update_eq_delta_s}
\end{align}
\end{subequations}%
Note that in contrast to \eqref{eq:mpc_pred_r_out_max}, we consider information from the source node $\beta$ at the current time-step $k$,
as both the control action and the information are delayed.
Equation~\eqref{eq:mpc_pred_predecessor} states that any value $r_{\text{in},\alpha}^k\neq r_{\text{out},\beta}^k$ will adjust the predicted circuit queue at the predecessor node.
This way, we can indirectly constrain the incoming rate by enforcing
\begin{equation}\label{eq:queue_cons_s_tilde}
\tilde{\mathbf{s}}_{\alpha|\beta}^{k}\geq 0,
\end{equation}
which ensures that the incoming rate can only be increased as long as data is available in the source node.
A problem arises if the availability of data is not the rate-limiting factor at the source node.
To cope with this situation, we propose a new mechanism for the source node to request a rate increase,
which is incorporated as the second constraint on $\mathbf{r}_{\text{in}, \alpha}^k$.
For this mechanism, we introduce an additional state $\hat s_{\alpha}^k$ and input $\hat r_{\text{out}, \alpha}^k$
with the dynamic model equation:
\begin{align}
\label{eq:mpc_model_s_hat}
\hat s_{\alpha}^{k+1} &= \hat s_{\alpha}^k + \Delta t(r_{\text{in}, \alpha}^k - \hat r_{\text{out}, \alpha}^k).
\end{align}
State, input and dynamics are reminiscent of \eqref{eq:mpc_model}, with the difference that
$\hat s_{\alpha}^k$ denotes the virtual queue size and
$\hat r_{\text{out}, \alpha}^k$ the virtual outgoing rate.
These variables are virtual in the sense that
they \emph{do not} respect the rate constraint
imposed by the successor node in \eqref{eq:mpc_pred_r_out_max},
but only the local constraints:
\begin{equation}\label{eq:predictor_rate_larger_zero_r_out_hat}
0 \leq \hat r_{\text{out},\alpha}^k,
\end{equation}
\begin{equation}\label{eq:predictor_link_capacity_constraints_r_out_hat}
\sum_{i\in P_{\alpha}} \hat r_{\text{out},\alpha,i}^k \leq C_{\alpha}^{\text{out}},
\end{equation}
and
\begin{equation}\label{eq:cons_queue_size_virtual}
0\leq \hat s_{\alpha}^k \leq \hat s_{\alpha}^{\text{max}}.
\end{equation}
In this regard, the virtual outgoing rate $\hat r_{\text{out}, \alpha,i}^k$
can be interpreted as the potential of node $\alpha$ to increase the rate for circuit $i$.
Node $\alpha$ receives the respective information from source node $\beta$
as $\mathbf{\hat r}_{\text{out},\beta}^{k}$.
The most straightforward way to consider this information at node $\alpha$ is to enforce:
\begin{equation}
\label{eq:mpc_pred_r_in_max}
{r}_{\text{in},\alpha}^{k} \leq
{r}_{\text{in},\alpha}^{\text{max},k}
={\hat r}_{\text{out},\beta}^{k},
\end{equation}
which complements \eqref{eq:mpc_pred_r_out_max} for the incoming rates.
In practice, however, we found that, while keeping
$
{r}_{\text{in},\alpha}^{\text{max},k}
={\hat r}_{\text{out},\beta}^{k}
$,
the incoming rate is better limited with:
\begin{equation}
\label{eq:mpc_pred_r_in_max_sum}
0\leq \sum_{l=0}^{k} \left[r_{\text{in},\alpha}^{\text{max},l} - {r}_{\text{in},\alpha}^{l} \right].
\end{equation}
In most cases, the effect of constraint \eqref{eq:mpc_pred_r_in_max_sum}
is similar to constraint~\eqref{eq:mpc_pred_r_in_max}.
The difference is that~\eqref{eq:mpc_pred_r_in_max}
enforces a concrete upper limit for the incoming rate at time $k$,
whereas~\eqref{eq:mpc_pred_r_in_max_sum} balances the limit over the horizon.
This is beneficial as the sequence $\mathbf{\hat r}_{\text{out},\beta}^{k}$
often contains single elements with very high data rates
which cannot necessarily be fully utilized by the successor at that exact point in time.
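A small numerical example (with made-up numbers) illustrates the difference between \eqref{eq:mpc_pred_r_in_max} and \eqref{eq:mpc_pred_r_in_max_sum}:
\begin{verbatim}
# Per-step limits announced by the predecessor: a single spike at k = 2.
r_in_max = [0, 0, 12, 0, 0]
# Under (eq:mpc_pred_r_in_max), r_in[k] <= r_in_max[k] must hold pointwise,
# so the whole spike has to be consumed at k = 2 or is lost.
# Under (eq:mpc_pred_r_in_max_sum), only the running budget must stay
# non-negative, so the spike may be spread over the remaining horizon:
r_in = [0, 0, 4, 4, 4]
assert all(sum(r_in_max[:k + 1]) - sum(r_in[:k + 1]) >= 0 for k in range(5))
\end{verbatim}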
\subsection{Optimization Problem}\label{ssec:ocp}
In the following, we propose an optimal control problem for congestion control
with fairness formulation for node~$\alpha$, predecessor node~$\beta$ and successor node $\gamma$.
As optimization variables we introduce the states~$\mathbf{s}_\alpha$, $\Delta \mathbf{s}_{\alpha|\beta}$ and
$\mathbf{\hat s}_{\alpha}$ with their respective dynamics in~\eqref{eq:mpc_model}, \eqref{eq:mpc_update_eq_delta_s} and
\eqref{eq:mpc_model_s_hat}.
Furthermore, we optimize the inputs~$\Delta \mathbf{r}_{\text{in},\alpha}$ and $\Delta \mathbf{r}_{\text{out},\alpha}$ from which rates are determined with:
\begin{align}
\label{eq:r_in_from_delta_r_in}
\mathbf{r}_{\text{in},\alpha} &= \mathbf{r}^{\text{max}}-\Delta \mathbf{r}_{\text{in},\alpha}\\
\label{eq:r_out_from_delta_r_out}
\mathbf{r}_{\text{out},\alpha} &= \mathbf{r}^{\text{max}}-\Delta \mathbf{r}_{\text{out},\alpha}.
\end{align}
To express the newly introduced variable $\mathbf{\hat r}_{\text{out}, \alpha}$ we introduce
two optimization variables $\Delta \mathbf{\hat r}_{\text{out,extra},\alpha}$ and $\mathbf{\hat r}_{\text{out,minus},\alpha}$,
such that:
\begin{equation}
\label{eq:r_hat_out_from_r_in_extra_r_in_minus}
\mathbf{\hat r}_{\text{out}, \alpha} = \mathbf{r}_{\text{out}, \alpha} + \underbrace{(\mathbf{r}^{\text{max}}-\Delta \mathbf{\hat r}_{\text{out,extra}, \alpha})}_{\mathbf{\hat r}_{\text{out,extra}, \alpha}} - \mathbf{\hat r}_{\text{out,minus}, \alpha}.
\end{equation}
The virtual outgoing rate $\mathbf{\hat r}_{\text{out}, \alpha}$ is thus not chosen independently
but as a deviation from the outgoing rate $\mathbf{r}_{\text{out}, \alpha}$.
This mathematical construction was found to aid the convergence to the global max-min fair solution.
Considering~\eqref{eq:r_in_from_delta_r_in}, \eqref{eq:r_out_from_delta_r_out} and \eqref{eq:r_hat_out_from_r_in_extra_r_in_minus}, we state the OCP:
\begin{samepage}
\begin{small}
\begin{subequations}
\label{eq:mpc_full_optim}
\begin{equation}
\min_{
\begin{array}{c}
\mathbf{s}_{\alpha},
\Delta \mathbf{s}_{\alpha|\beta},
\mathbf{\hat s}_{\alpha},
\Delta \mathbf{r}_{\text{out},\alpha},
\Delta \mathbf{r}_{\text{in},\alpha},\\
\Delta \mathbf{\hat r}_{\text{out,extra}},
\mathbf{\hat r}_{\text{out,minus}}
\end{array}
}\quad
\sum_{k=0}^{N_{\text{horz}}} d^k \left(
(\Delta r_{\text{in},\alpha}^k)^2+ (\Delta r_{\text{out},\alpha}^k)^2
+ (\Delta \hat{r}_{\text{out,extra},\alpha}^k)^2
+ ( \hat r_{\text{out,minus},\alpha}^k)^2
\right)
\tag{\ref{eq:mpc_full_optim}}
\end{equation}
\begin{align}
\text{\textbf{subject to}:}\quad & & \nonumber\\
\text{queue dynamics
\eqref{eq:mpc_model}, \eqref{eq:mpc_update_eq_delta_s}, \eqref{eq:mpc_model_s_hat}:
} \quad &s_{\alpha}^{k+1} = s_{\alpha}^k + \Delta t(r_{\text{in},\alpha}^k - r_{\text{out}, \alpha}^k), & \forall k = 0 ,\dots, N_{\text{horz}}\\
&\Delta s_{\alpha|\beta}^{k+1} = \Delta s_{\alpha|\beta}^{k} +\Delta t( r_{\text{in},\alpha}^k - r_{\text{out},\beta}^k) & \forall k = 0 ,\dots, N_{\text{horz}}\\
&\hat s_{\alpha}^{k+1} = \hat s_{\alpha}^k + \Delta t(r_{\text{in}, \alpha}^k - \hat r_{\text{out}, \alpha}^k) & \forall k = 0 ,\dots, N_{\text{horz}}\\
\text{queue constraints \eqref{eq:cons_queue_size}, \eqref{eq:queue_cons_s_tilde}, \eqref{eq:cons_queue_size_virtual}:
} \quad
&0 \leq s_{\alpha}^k \leq s_{\alpha}^{\text{max}}
\label{eq:mpc_full_optim_sc_max} & \forall k = 0 ,\dots, N_{\text{horz}}\\
&0 \leq \tilde{s}_{\alpha|\beta}^{k} & \forall k = 0 ,\dots, N_{\text{horz}}\\
&0 \leq \hat s_{\alpha}^k \leq \hat s_{\alpha}^{\text{max}}
\label{eq:mpc_full_optim_shat_max} & \forall k = 0 ,\dots, N_{\text{horz}}\\
\text{rate in constraints \eqref{eq:predictor_r_in_positive}, \eqref{eq:predictor_link_capacity_constraints_in}, \eqref{eq:mpc_pred_r_in_max_sum}:
} \quad
&0\leq r_{\text{in},\alpha}^k & \forall k = 0 ,\dots, N_{\text{horz}}\\
&\sum_{i\in P_{\alpha}} r_{\text{in},\alpha,i}^k \leq C_{\alpha}^{\text{in}} & \forall k = 0 ,\dots, N_{\text{horz}}\\
&0\leq \sum_{l=0}^{k} \left[r_{\text{in},\alpha}^{\text{max},l} - {r}_{\text{in},\alpha}^{l} \right] & \forall k = 0 ,\dots, N_{\text{horz}}\\
\text{rate out constraints \eqref{eq:predictor_r_out_positive}, \eqref{eq:mpc_pred_r_out_max}, \eqref{eq:predictor_link_capacity_constraints_out}:
} \quad
&0\leq r_{\text{out},\alpha}^k \leq r_{\text{out},\alpha}^{\text{max},k} & \forall k = 0 ,\dots, N_{\text{horz}}\\
&\sum_{i\in P_{\alpha}} r_{\text{out},\alpha,i}^k \leq C_{\alpha}^{\text{out}} & \forall k = 0 ,\dots, N_{\text{horz}}\\
\text{virt. rate out constraints \eqref{eq:predictor_link_capacity_constraints_r_out_hat}, \eqref{eq:predictor_rate_larger_zero_r_out_hat}:
} \quad
&\sum_{i\in P_{\alpha}} \hat r_{\text{out},\alpha,i}^k \leq C_{\alpha}^{\text{out}} & \forall k = 0 ,\dots, N_{\text{horz}}\\
&0\leq \hat r_{\text{out},\alpha}^k & \forall k = 0 ,\dots, N_{\text{horz}}\\
\text{initial conditions:
} \quad
&s_{\alpha}^0 = s_{\alpha}^{\text{init}},\ \Delta s_{\alpha|\beta}^{0} = 0,\ \hat s_{\alpha}^0=s_{\alpha}^{\text{init}} &
\end{align}
\end{subequations}
\end{small}
\end{samepage}%
The optimization problem~\eqref{eq:mpc_full_optim} is solved at each time-step and in each node of the network under consideration of the predicted trajectories of adjacent nodes~$\mathbf{r}_{\text{in},\gamma}^{k-1}$,
$\mathbf{r}_{\text{out},\beta}^k$, $\mathbf{s}_{\beta}^k$,
and $\mathbf{\hat r}_{\text{out},\beta}^{k}$,
as well as the current size of the circuit queue in the current
node~$s_{\alpha}^{\text{init}}$.
Note that according to \eqref{eq:mpc_pred_r_out_max},
we set $\mathbf{r}_{\text{out},\alpha}^{\text{max},k}
=\mathbf{r}_{\text{in},\gamma}^{k-1}$
and according to \eqref{eq:mpc_pred_r_in_max}
we have $\mathbf{r}_{\text{in},\alpha}^{\text{max},k}=\mathbf{\hat r}_{\text{out},\beta}^{k}$.
The objective in~\eqref{eq:mpc_full_optim} is motivated by the presented Theorem~\ref{theo:max_min_optim},
but with some important adaptations:
Most notably, we introduced~$\Delta r$ variables for both the incoming and outgoing rates.
Introducing the control variable~$\Delta r_{\text{in},\alpha}$ allows us to control the incoming rate.
This is of significant importance for the desired congestion control, as it realizes \emph{backpressure}: data is stopped from entering the network if it cannot be forwarded.
The quadratic term in~$\Delta r_{\text{out},\alpha}$ ensures that the circuit queue is emptied even if there are no new packets entering the node.
Moreover, to further avoid congestion, PredicTor is \emph{explicitly} designed to limit the circuit queues through constraint~\eqref{eq:mpc_full_optim_sc_max}.
The objective in~\eqref{eq:mpc_full_optim} is further modified by introducing a discount factor~$d$.
This is necessary because naively implementing the presented fairness formulation also enforces fairness along the prediction horizon,
where it is always preferable to increase the smallest element of the rate sequence of a given circuit.
In practice, however, we want to send and receive as early as possible, as long as \emph{instantaneous} fairness is achieved.
In the appendix of~\cite{PredicTor}, we present a guideline on how to choose~$d$ to obtain the desired behavior.
Note that the theoretical analysis of the feasibility and stability of the proposed MPC controller
in~\eqref{eq:mpc_full_optim} is beyond the scope of this work.
We found, however, that the proposed controller remained stable and feasible
in all investigated simulation studies.
\subsection{Controller Integration}\label{ssec:controller_tor_interact}
The proposed controller is implemented on the application layer of each node in the Tor network.
At each time step, problem~\eqref{eq:mpc_full_optim} is solved
with the most recent measurement of the circuit queue~$s_{\alpha}^{\text{init}}$
of the current node~$\alpha \in N$
and with the received information from adjacent nodes.
The optimal solution of~\eqref{eq:mpc_full_optim} is converted to trajectories
of incoming ($\mathbf{r}_{\text{in},\alpha}$) and outgoing ($\mathbf{r}_{\text{out},\alpha}$) rates,
where the first element of~$\mathbf{r}_{\text{out},\alpha}$
is used to control at which rate data is sent.
In particular, we employ a token-bucket method~\cite{bertsekas1992data}
to shape the outgoing traffic.
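The following minimal Python sketch shows the idea of such a token-bucket shaper; it is our own illustration and not the code used in our prototype. The rate would be updated after every MPC step with the first element of $\mathbf{r}_{\text{out},\alpha}$:
\begin{verbatim}
import time

class TokenBucket:
    """Tokens accrue at `rate` (packets/s) up to `burst`; each sent
    packet consumes one token, which shapes the outgoing traffic."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def set_rate(self, rate):          # called after each MPC step
        self.rate = rate

    def try_send(self, n_packets=1):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n_packets:
            self.tokens -= n_packets
            return True                # packet may leave the relay
        return False                   # hold back: rate shaping in action
\end{verbatim}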
Note that we are controlling the data rates
on a per-circuit basis, which is similar to
how PCTCP~\cite{alsabah2013pctcp} changes Tor's circuit handling.
In order to exchange the trajectories between relays,
we extend the Tor protocol with respective control messages.
This is technically possible due to the extensibility of the Tor~protocol
that allows the definition of new cell types.
For the edge nodes of each circuit, which lack a predecessor or successor to exchange trajectories with,
we provide reasonable, synthetic trajectories to bootstrap the data transfer.
For example, the first node in a circuit reads data from its source
according to its computed incoming rate.
While PredicTor is generally agnostic to the underlying transport protocol,
we implement it using TCP as a reliability mechanism,
to avoid packet loss and packet reordering.
\section{Security and Privacy Considerations} \label{sec:security}
Extending a widespread anonymity network like Tor in a fundamental way, as PredicTor does,
requires great care to avoid reducing the security level
and putting users' privacy at risk.
While this section does not constitute a formal, complete security analysis,
it aims to give an intuition of how PredicTor would interfere
with the existing security guarantees and privacy level in Tor.
In order to do so, we first recall Tor's adversary model
and afterwards review typical attacks on Tor, analyzing how PredicTor relates to them.
For this, we make use of the categorization of Tor attacks
presented in~\cite{DBLP:journals/csur/AlSabahG16}.
Still, PredicTor was primarily designed
for exploring new directions towards multi-hop congestion control
and does not claim to fully satisfy the security requirements needed for production use.
\paragraph{Adversary Model}
The Tor network aims to protect the privacy of its users.
More specifically, it obfuscates their IP addresses
by relaying the traffic over circuits of intermediate hops.
In this setting, Tor assumes the following adversary~\cite{dingledine2004tor}:
An attacker may control a certain fraction of the network, including underlay links and relays.
However, Tor does not protect against a global adversary that can monitor the whole network.
Consequently, Tor aims to protect against local adversaries,
but cannot currently avoid end-to-end traffic confirmation attacks.
One design aspect that goes hand-in-hand with this assumption
is that Tor does not rely on any specific trust relationship between relays.
In particular, the Tor protocol tries to make it as hard as possible
for cooperating malicious relays to de-anonymize users.
One essential building block for this is the use of telescope-like onion encryption,
such that each relay within a circuit can only see its immediate predecessor and successor.
Payload data, including the target address,
is only visible end-to-end between the client and the exit.
All other information for building and maintaining circuits is only visible hop-by-hop,
so the insight every single relay can gain about the overall circuit is minimized.
The design of PredicTor's network protocol is in line with this general concept:
feedback data is only exchanged between adjacent relays,
utilizing Tor's existing cryptography and trust assumptions.
\paragraph{Information Leakage from Feedback Messages}
Although feedback messages themselves are only exchanged between directly neighboring relays,
one can argue that parts of the information they contain
actually trickle over more than one hop.
This is because feedback information is taken into account by the optimizer
and therefore implicitly influences the feedback information
which is, in turn, sent to other relays after the optimization step.
We cannot see how such information could be exploited by malicious relays,
but cannot prove this kind of information leakage irrelevant, either.
This would require a more in-depth analysis, which is subject to future work.
Our initial assessment, however, can be summarized as follows:
The potential danger would be that local relays
could learn more about the overall circuit than before,
e.g.\@\xspace, inferring parts of the circuit topology.
For such inference, obtaining the exact number of circuits handled by another relay
may be useful to the adversary.
Feedback messages in PredicTor contain per-circuit trajectories,
but only for the circuits that are multiplexed between any two adjacent relays.
The same piece of information is already available to vanilla Tor relays for circuit handling.
Obtaining the overall number of circuits handled by another relay would, just as before,
require heuristics that make use of additional information,
such as the data rate advertised in the network consensus.
We therefore do not think that PredicTor would facilitate such inference.
The same is true for the exchanged data rates.
Since these are propagated between relays after optimization,
malicious relays could try to infer characteristics of relays at the far end of the circuit,
e.g.\@\xspace, ruling out relay candidates that would lead to different bottleneck values.
However, such inference could also be carried out as of today,
simply by locally measuring the throughput achieved per circuit.
\paragraph{Traffic Confirmation Attacks}
The most fundamental vulnerability of Tor is
its susceptibility to timing-based traffic confirmation attacks
that could either be carried out by a global adversary or by an adversary
that controls both the entry and the exit relay.
In general, low-latency anonymity networks cannot prevent traffic confirmation attacks.
However, even if such attacks cannot be prevented entirely,
their difficulty depends largely on the temporal correlation
between data entering and leaving the network.
Consequently, better and more predictable performance, as is achieved by PredicTor,
may facilitate traffic confirmation attacks.
This is an inherent challenge every performance enhancement for Tor has to face.
On the other hand, better performance bears the potential to attract a broader user base,
strengthening the anonymity set,
so it is not entirely clear whether the overall privacy level would be harmed.
The precise interdependencies between these factors are one of our main future research directions.
\paragraph{Routing and Circuit Selection}
Adversaries may be able to increase their chances
of successfully attacking a larger portion of the user base
by taking into account the circuit selection process that is carried out by the Tor clients.
Likewise, if adversaries control not only individual relays but entire Autonomous Systems~(AS),
this can be problematic.
PredicTor, however, is fully orthogonal to these mechanisms and does not touch circuit selection or routing.
Therefore, it does not increase vulnerability to these attacks.
\paragraph{Website Fingerprinting and Watermarking}
One attack vector that has extensively been researched in the past
works by inferring communication contents by analyzing the timing of the encrypted data traffic.
Closely related, there are watermarking techniques
that actively insert traffic pattern peculiarities
in order to make flows more easily recognizable.
We argue that both attack strategies are either not touched by PredicTor
or even become more difficult because the explicit traffic handling defined by PredicTor
results in smoother and more homogeneous traffic patterns on the wire,
as we will show in Section~\ref{sec:eval-small-network}.
\paragraph{Congestion Attacks and Denial of Service~(DoS)}
Attackers can try to divert target traffic from relays they do not control
to their own ones by artificially congesting parts of the network
up to the point where benign relays cannot continue operation~(DoS).
As far as traffic congestion is concerned,
PredicTor is likely more resilient to such attacks than vanilla Tor,
because it handles situations of congestion or load much more explicitly.
Attackers cannot as easily flood the network,
because the controllers running at each relay optimize the traffic flows
to keep queues low.
On the other hand, we suspect PredicTor to be prone to DoS~attacks
that specifically target the optimizer.
Although the underlying optimization problem can be solved efficiently,
the computational effort still depends on the number of circuits involved.
Therefore, an attacker could trigger increased resource usage
by constructing tailored circuits that increase the difficulty of PredicTor's optimization problem.
\vspace{\baselineskip}
All in all, our initial security and privacy assessment
shows that future work would be necessary to ensure
that PredicTor does not introduce new attack vectors.
On the other hand, PredicTor can also help mitigate several existing attacks.
\section{Evaluation and Discussion} \label{sec:eval}
Evaluating PredicTor on the live Tor network is prohibitive
because such experiments would touch users' anonymity.
Instead, we here investigate its behavior by carrying out simulation studies.
It should be noted that, in its current form,
PredicTor is not meant for immediate real-world application on the Tor network.
Instead, it constitutes a concept study opening a novel research direction
towards realizing congestion control in complex networks.
Therefore, all of our experimentation has the goal
of maximizing the \emph{understanding} of this new approach,
applying distributed model predictive control for congestion control.
We aim to clearly point out benefits and potential drawbacks of such strategies.
In particular, our evaluation covers the PredicTor controller's detailed behavior and its implications.
We look at the isolated behavior of single controllers as well as
the overall system behavior that emerges from the cooperative interaction
of multiple controllers.
To put the results into context, we compare PredicTor to vanilla Tor
as well as PCTCP~\cite{alsabah2013pctcp}, an alternative circuit handling strategy for Tor.
From these observations, we deduce insight about the benefits, inherent limitations,
as well as the expected applicability of such approaches.
Our evaluation strategy is twofold: First, we analyze PredicTor's behavior
in small toy scenarios that allow us to better understand its behavior in situations
that are simple enough to be investigated by hand.
By doing so, we establish an intuition of its behavior.
To this end, we first investigate the working of a single, isolated PredicTor controller
before setting up multiple controllers to cooperate,
which allows us to analyze both the isolated decision making and the interaction between controllers.
In a second step, we take our evaluation further by scaling up our experiments to more complex networks.
This serves as a means of exploring the applicability of PredicTor in more realistic scenarios.
On the one hand, we thus verify whether PredicTor's claimed benefits
do also exist in scenarios that are not as easy to understand as the toy scenarios before.
On the other hand, we investigate to what extent the underlying assumptions made in PredicTor
collide with reality and what implications this has for further research in the field.
Lastly, we evaluate how PredicTor handles different traffic patterns (bulk and web traffic).
\paragraph{Implementation} \label{sec:implementation}
As laid out before, PredicTor comprises, on the one hand, the controller logic
that uses model predictive control to find the optimal data rates
based on the implemented system model and optimization objectives.
On the other hand, these control decisions have to be realized in the network
and the predicted trajectories have to be exchanged with other relays.
The structure of our prototype implementation of PredicTor also exhibits these two main tasks.
We implement the PredicTor core model in Python, utilizing CasADi~\cite{Andersson2018}
in combination with IPOPT~\cite{Andreas2006} and the MA27\footnote{HSL.
A collection of Fortran codes for large scale scientific computation. \texttt{http://www.hsl.rl.ac.uk/}}
linear solver for fast state-of-the-art optimization.
For implementing the network behavior, we embed it into \texttt{nstor}~\cite{tschorsch16bktap},
an implementation of Tor for the ns-3~network simulator.\footnote{\texttt{https://www.nsnam.org/}}
Our implementation thus covers PredicTor, vanilla Tor and PCTCP.
PCTCP differs from vanilla~Tor in that it establishes a separate connection per circuit,
instead of multiplexing them.
While the overall network simulation is carried out by \texttt{nstor},
each simulated relay has access to the controller code library for carrying out its local optimizations.
The results are then used as the Tor relay's scheduling strategy in the network simulation.
More specifically, this means throttling data transmission
based on the controller's optimization output and exchanging the predicted trajectories between relays.
Our implementation is publicly available online.\footnote{\texttt{https://github.com/cdoepmann/PredicTor}}
The results presented in the following are obtained
with a discount factor of $d = \frac{1}{3}$,
as discussed in the appendix of~\cite{PredicTor}.
For the prediction horizon, we choose $N_{\text{horz}}=10$.
\paragraph{Metrics} \label{sec:metrics}
In order to assess PredicTor's performance,
we quantify different parameters that are relevant for comparing it to existing approaches.
For each of these parameters, we initially evaluate the steady-state behavior.
First of all, we consider the \emph{latency} of the data transfer.
Apart from the physical underlay latency, which denotes a natural lower bound,
latency stems primarily from the existence of buffers and queues in the network.
Since the reduction of queue sizes is an explicit optimization goal of PredicTor,
we expect a considerable enhancement in this regard.
We define our notion of latency as follows:
For each data transfer through the network (running over a circuit of multiple relays),
we define latency as the difference in time between
when it reaches its destination and when it entered the (overlay) network.
We therefore focus explicitly on the Tor network itself
and disregard additional latency that may occur, e.g.\@\xspace,
for the communication between the exit relay and an outside webserver.
This approach ensures that we capture the two important consequences of latency:
Firstly, the impact on the user who experiences additional waiting time
before seeing the response to her request.
Secondly, large queue sizes also mean a higher load on the network itself.
Therefore, small queues---and thus, lower latency---are desirable
also from a network perspective in reducing congestion.
We refer to the total amount of data that is present in the network at any point in time as \emph{backlog}.
For latency, we employ a byte-wise perspective.
That is, we precisely track for each payload byte the times of sending and receiving
and aggregate them into an overall value by taking the average.
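As an illustration of this byte-wise aggregation, the mean latency can be computed as follows (a sketch with hypothetical timestamps):
\begin{verbatim}
def mean_byte_latency(chunks):
    # chunks: list of (n_bytes, t_sent, t_received) per payload chunk;
    # every byte contributes its own sojourn time to the average.
    total = sum(n for n, _, _ in chunks)
    return sum(n * (t_rx - t_tx) for n, t_tx, t_rx in chunks) / total

print(mean_byte_latency([(1000, 0.0, 0.5), (3000, 0.1, 0.2)]))  # -> 0.2 s
\end{verbatim}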
Another relevant metric is the \emph{throughput} that is achieved by each of the circuits.
On the one hand, it expresses how well the network is utilized.
On the other hand, we can use these characteristics to define a notion of \emph{fairness} in the network.
As explained before, PredicTor aims at establishing max-min fairness for all circuits,
which Tor is not currently capable of.
Given a specific topology of circuits and relays,
we calculate the optimal max-min fair data rate distribution for this scenario.
We can then compare the observed values from our simulations to this optimum in two ways:
For an in-depth analysis of the resulting data rate distributions,
we visualize them as cumulative distribution functions in a CDF plot.
On the other hand, however, if we only want a single value to express the degree of fairness,
we make use of the following construction:
Let $r^f_1, \dots, r^f_n$ be the max-min fair data rate distribution,
and $r_1, \dots, r_n$ be the observed data rates.
We then define the fairness index $F$ as follows:
\[
F = 1 - \frac{\sum_{i=1}^n \vert r^f_i - r_i \vert}{\sum_{i=1}^n r^f_i}
\]
Put differently, $F$ denotes the share of traffic that behaves according to max-min fairness.
While other established fairness measures exist,
most notably Jain's fairness index~\cite{jains-fairness-index},
they are not applicable here.
Jain defines fairness as a uniform allocation of a resource.
Since we make use of max-min fairness, however,
a uniform data rate distribution is not necessarily the optimal choice.
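For reference, the fairness index can be transcribed directly (the example allocations are made up):
\begin{verbatim}
def fairness_index(r_fair, r_obs):
    # F = 1 - sum |r^f_i - r_i| / sum r^f_i
    return 1 - sum(abs(f - o) for f, o in zip(r_fair, r_obs)) / sum(r_fair)

print(fairness_index([3, 3, 3], [3, 3, 3]))  # 1.0 (perfectly max-min fair)
print(fairness_index([3, 3, 3], [5, 2, 2]))  # 0.555..., 4/9 of traffic deviates
\end{verbatim}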
\paragraph{Limitations} \label{sec:limitations}
One of our main simplifications is that
we do not simulate the transmission of feedback messages over the wire,
but emulate their exchange \enquote{out of band}.
Since feedback traffic is independent of the network speed,
one could achieve arbitrary goodput-overhead ratios
by running the simulations at different simulated data rates, which we refrain from doing.
Moreover, our intention is to evaluate the scheduling behavior itself as a baseline
instead of capturing artifacts that stem purely from implementation details such as packet format.
However, we note (and later discuss) that, in real networks,
feedback overhead would constitute a severe issue due to its linear growth with the number of circuits.
Moreover, for this prototype, we base the data transfer on TCP (like Tor and PCTCP)
instead of introducing a tailored transport protocol.
PredicTor thus merely takes the role of a scheduler.
We will later discuss that this approach can still be beneficial for robustness.
\subsection{Single Controller}
We first present an investigation of the decision-making process of PredicTor's
proposed controller from~\eqref{eq:mpc_full_optim} for a single node.
In order to highlight several interesting aspects of its behavior,
we investigate an \emph{open-loop prediction}.
This means that we solve the optimal control problem~\eqref{eq:mpc_full_optim} once
and display the resulting predicted future trajectories.
In the \emph{closed-loop control} application, these predictions will change repeatedly, as new information becomes available.
For the investigation, we consider the small-scale topology presented in Figure~\ref{fig:example-topology}.
The topology consists of six relays handling a total of three circuits.
All of the three circuits meet in a shared bottleneck.
Additionally, two of these circuits originate from the same sending relay.
The scenario therefore demonstrates a simple congestion situation.
We focus on the controller at the bottleneck and
denote by~$\alpha$ the current node, by~$\beta$ its predecessors, and by~$\gamma$ its successors.
\begin{figure}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\begin{tikzpicture}[font=\scriptsize]
\node (v) {circuit 1};
\draw[color=color0,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {circuit 2};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {circuit 3};
\draw[color=color2,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par
\tikzset{
circuit/.style={line width=3.5pt},
written/.style={text opacity=0},
}
\def0.6{0.6}
\hfill
\scalebox{0.4}{
\begin{tikzpicture}[]
\drawtopology
\node[draw,fit=(sender 1) (sender 2),dashed,ultra thick] {};
\end{tikzpicture}
}
\hfill
\scalebox{0.4}{
\begin{tikzpicture}[]
\drawtopology
\coordinate (incoming) at ($(sender 1.center)!0.5!(bottleneck.center)$);
\draw[dashed,ultra thick] ($(incoming) + (-1.5em,+1.5em)$) rectangle ($(incoming) + (+1.5em,-4.5em)$);
\end{tikzpicture}
}
\hfill
\scalebox{0.4}{
\begin{tikzpicture}[]
\drawtopology
\node[draw,fit=(bottleneck),dashed,ultra thick] {};
\end{tikzpicture}
}
\hfill
\scalebox{0.4}{
\begin{tikzpicture}[]
\drawtopology
\coordinate (outgoing) at ($(receiver 2.center)!0.5!(bottleneck.center)$);
\draw[dashed,ultra thick] ($(outgoing) + (-1.5em,+4.0em)$) rectangle ($(outgoing) + (+1.5em,-4.5em)$);
\end{tikzpicture}
}
\hfill
\par\vspace{0.5em}
\pgfplotsset{width=0.29\textwidth, height=0.35\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
y label style={yshift = -0.01\textwidth},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\tiny},
group/horizontal sep=1.1cm,
group/vertical sep = 0.8cm,
}
\input{figures/single_controller_open_loop_pred.tex}
\caption{Open-loop simulation of the MPC predictions
made by the controller in the bottleneck relay of our sample topology.}
\label{fig:open_loop_pred}
\end{figure}
We investigate a synthetic scenario
with manually-defined trajectories from the adjacent nodes.
In particular, we define for the predecessor node
$\mathbf{r}_{\text{out},\beta}^k$,
$\mathbf{\hat r}_{\text{out},\beta}^k$,
and $\mathbf{s}_{\beta}^k$.
For the successor node we define
$\mathbf{r}_{\text{in},\gamma}^{k-1}$.
We set $\mathbf{r}_{\text{out},\alpha}^{\text{max},k}
=\mathbf{r}_{\text{in},\gamma}^{k-1}$
and $\mathbf{r}_{\text{in},\alpha}^{\text{max},k}=\mathbf{\hat r}_{\text{out},\beta}^k$.
Additionally, the initial buffer size $s_{\alpha}^{\text{init}}$ is required.
Based on this information, the solution of \eqref{eq:mpc_full_optim} allows us to compute the trajectories
$\mathbf{r}_{\text{in},\alpha}^k$,
$\mathbf{r}_{\text{out},\alpha}^k$,
$\mathbf{s}_{\alpha,i}^k$,
$\mathbf{\hat{s}}_{\alpha}^k$ and
$\mathbf{\tilde{s}}_{\alpha|\beta}^k$.
The trajectories are displayed in Figure~\ref{fig:open_loop_pred} and will be discussed in the following.
The most important decision of the PredicTor controller is with respect to the outgoing rates~$\mathbf{r}_{\text{out},\alpha}$.
The first element of this sequence determines the rate at which data is sent until the next sampling instance.
In Figure~\ref{fig:open_loop_pred},
we can see at~\circled{1} that all circuits obtain the same rate at the beginning of the sequence.
At this point in time the rate is only limited by the node capacity~$C_{\alpha}^{\text{out}}$, which can be seen in~\circled{2}.
Over the course of the prediction horizon, the outgoing rate for circuit~2 is increasing,
whereas the rates for circuits~1 and 3 are decreasing.
This behavior is due to the buffer sizes for the circuits, shown in~\circled{3}.
Here we can see that even with an increasing rate,
the buffer for circuit~2 is growing, whereas those for circuits~1 and 3 are approaching zero.
With the buffer size for circuits~1 and 3 close to zero, the outgoing rate for these circuits
approaches the incoming rates~$\mathbf{r}_{\text{in},\alpha}$, shown in~\circled{4},
at the end of the horizon.
Under stationary conditions, the incoming rates~$\mathbf{r}_{\text{in},\alpha}$ are typically equivalent
to the predicted outgoing rates~$\mathbf{r}_{\text{out},\beta}$
of the source node $\beta$, which can also be seen in~\circled{4} for circuit~3.
For circuit~1, however, the algorithm decides to
increase
$\mathbf{r}_{\text{in},\alpha,1}$
with respect to
$\mathbf{r}_{\text{out},\beta,1}$.
This can be seen in~\circled{5}.
When increasing the incoming rate, PredicTor needs to consider
$\mathbf{r}_{\text{in},\alpha}^{\text{max}}$
and the respective constraint in~\eqref{eq:mpc_pred_r_in_max_sum}.
Note that this constraint is not enforced point-wise at each time-step via
$\mathbf{r}_{\text{in},\alpha}\leq \mathbf{r}_{\text{in},\alpha}^{\text{max}}$;
instead, the cumulative budget in~\eqref{eq:mpc_pred_r_in_max_sum} grows at each time-step by
$r_{\text{in},\alpha}^{\text{max},k}\geq 0$.
This can also be seen for circuit~1 in~\circled{5}.
Since the algorithm decides to increase the incoming rate
$\mathbf{r}_{\text{in},\alpha,1}$
with respect to the prediction of the source node
$\mathbf{r}_{\text{out},\beta,1}$,
it also predicts a deviation in the buffer size at the source node.
In \circled{6}, we can see that originally the buffer size $\mathbf{s}_{\beta,1}$ for circuit 1
is predicted to be constant over the horizon.
The obtained trajectory $\mathbf{\tilde{s}}_{\alpha|\beta,1}$ takes into consideration
the increased $\mathbf{r}_{\text{in},\alpha,1}$ and predicts
that the buffer for circuit 1 at node $\beta$ will be emptied at time-step $k=5$.
Consequently, we can see at \circled{4} that the rate $r_{\text{in},\alpha,1}^{k+5}$ is zero.
As a final aspect, we want to mention $\mathbf{\hat r}_{\text{out},\alpha}$.
This virtual outgoing rate differs from $\mathbf{ r}_{\text{out},\alpha}$ in that it does not consider the
constraint $\mathbf{r}_{\text{out},\alpha}\leq \mathbf{ r}_{\text{out},\alpha}^{\text{max}}$.
This can be seen in \circled{7} for circuit 2.
The purpose of this virtual rate is to request from the successor node an increase in
$\mathbf{ r}_{\text{out},\alpha}^{\text{max}}$, such that at a later iteration the true rate can be increased.
In summary, we find that the solution of the PredicTor problem
(see~\eqref{eq:mpc_full_optim})
leads to sound and interpretable behavior in terms of its predicted future trajectories.
For the synthetic scenario, it can be seen that the controller attempts to achieve fairness and to avoid congestion,
while utilizing the available resources of the network.
These results allow no conclusions regarding performance, however,
as only a single controller is investigated in an open-loop setting.
\subsection{Interaction of Multiple Controllers} \label{sec:eval-small-network}
As a next step, we carried out a full simulation of the sample network.
This now includes not only the isolated controllers,
but also the interaction between them as well as the application logic and network stack behavior.
To this end, ns-3 allows us to achieve a high degree of realism
by simulating the network down to the physical layer,
including queuing effects, packet loss, etc.
We refer to this kind of simulation as \emph{closed-loop},
because each MPC step takes into account the current state of the system,
determined from measurements and information exchange between relays.
For this simple setup, we define all underlay links to have the same, constant latency.
We will lift this assumption in later experiments.
We compare the performance of PredicTor to vanilla Tor and PCTCP.
We investigate two scenarios that differ slightly by circuit behavior:
The three circuits start at slightly different times, purely for easier visualization.
In scenario~1, circuits~1 and~3 have an infinite source of packets to forward,
whereas circuit~2 stops and restarts twice during the simulation.
This allows us to better investigate how PredicTor assigns data rates to each of the circuits.
In scenario~2, all circuits have an infinite source of packets.
We use this setup for comparing the absolute values of achieved data rates more easily.
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\begin{figure}
\pgfplotsset{width=.38\textwidth,height=.275\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\begin{tikzpicture}[font=\scriptsize]
\node (v) {circuit 1};
\draw[color=color0,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {circuit 2};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {circuit 3};
\draw[color=color2,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par
\input{figures/acm-toit-single-run-data-rates.tex}
\caption{Data rates of the circuits in the sample topology (scenario~1) over time.}
\label{fig:multiple-controllers-data-rate}
\end{figure}
Figure~\ref{fig:multiple-controllers-data-rate} shows the data rates of the individual circuits
in scenario~1 over the course of the simulation time.
These per-circuit values were measured in ns-3 by recording the outgoing rates at the bottleneck.
We can see that PredicTor exhibits a desirable behavior with constant,
sustainable rates and smooth transitions when circuit~2 stops and restarts.
Fair behavior can be observed in these transitions:
All circuits share the same rate during activity
and circuits~1 and~3 are allocated the same, higher rate when circuit~2 stops sending.
The sum of all rates is visibly constant over time.
On the other hand, vanilla Tor and PCTCP show erratic, oscillatory behavior
where bursts are followed by very low rates,
while the individual circuits take turns sending data.
Fairness cannot be assessed visually for Tor and PCTCP,
which is why we quantify it later. %
\begin{figure}
\pgfplotsset{width=.38\textwidth,height=.275\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\begin{tikzpicture}[font=\scriptsize]
\node (v) {circuit 1};
\draw[color=color0,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {circuit 2};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {circuit 3};
\draw[color=color2,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par
\input{figures/acm-toit-single-run-backlog.tex}
\vspace{-0.5em}
\caption{Backlog of the circuits in the sample topology (scenario~1) over time.}
\label{fig:multiple-controllers-backlog}
\end{figure}
\begin{figure}
\pgfplotsset{width=.38\textwidth,height=.275\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\input{figures/acm-toit-single-run-latency.tex}
\vspace{-1.5em}
\caption{Histogram of per-byte latency in the sample topology (scenario~1) for all three circuits.}
\label{fig:multiple-controllers-latency}
\end{figure}
We further compare PredicTor, Tor, and PCTCP
in Figure~\ref{fig:multiple-controllers-backlog} and~\ref{fig:multiple-controllers-latency},
where we display the backlog and latency.
PredicTor succeeds at its primary goal of sustaining a manageable backlog,
especially compared to vanilla Tor and PCTCP.
The importance of this effective congestion control becomes apparent
in Figure~\ref{fig:multiple-controllers-latency},
where we compare histograms for the latencies of received packets.
With an average latency of 93~ms, PredicTor significantly improves on vanilla Tor (553~ms),
and PCTCP (635~ms).
Based on the underlay link latency, the theoretical minimum is 80~ms.
\begin{table}
\footnotesize
\caption{Comparison of latency and throughput in the sample topology.}
\begin{tabular}{ccccccccc}
\toprule
& \multicolumn{3}{c}{\textbf{Mean latency}} && \multicolumn{4}{c}{\textbf{Throughput}} \\
& \multicolumn{3}{c}{[ms]} && \multicolumn{4}{c}{[KB/s]} \\
\cmidrule{2-4}\cmidrule{6-9}
circuit & PredicTor & Vanilla & PCTCP &\hspace*{.5cm}& PredicTor & Vanilla & PCTCP & \emph{max-min fair} \\
\midrule
1 & 91.7 & 534.7 & 601.6 & & 134.2 & 102.5 & 130.9 & \emph{136.7} \\
2 & 98.7 & 545.4 & 697.5 & & 134.1 & 102.5 & 143.9 & \emph{136.7} \\
3 & 93.7 & 572.1 & 654.2 & & 134.1 & 197.3 & 130.5 & \emph{136.7} \\
\midrule
\textbf{Total} & 93.5 & 553.1 & 635.5 & & 402.4 & 402.3 & 405.3 & \emph{410.1} \\
\bottomrule
\end{tabular}
\label{tab:multiple-controllers-comparison}
\end{table}
To summarize our findings from this simple network topology,
we present the achieved latency and data rate values per circuit
in Table~\ref{tab:multiple-controllers-comparison}.
Please note that, as mentioned before, the data rates stem from a slightly different setup (scenario~2),
in which circuit~2 does not stop sending.
Otherwise, the data rates would not be easily comparable.
Regarding throughput, the three methods perform very similarly,
with the difference that PredicTor achieves near perfect fairness~($F=0.98$).
Vanilla Tor clearly discriminates circuits~1 and~2 which share a connection~($F=0.68$).
On the other hand, PCTCP, as expected, manages to counteract this effect to some extent
in this simple setup~($F=0.95$).
We conclude that PredicTor bears the potential to provide a clear advantage
with respect to latency and fairness.
However, it should be noted that PredicTor also introduces significant complexity
compared to the previous methods.
On the other hand, the optimization problem~\eqref{eq:mpc_full_optim} is convex,
which guarantees that a globally optimal solution can be found in polynomial time.
For the given scenario, obtaining a solution takes around 10~ms (laptop-grade CPU).
The problem complexity (number of optimization variables and constraints)
grows linearly with the number of circuits per node.
It is therefore expected that also larger topologies can be tackled with this approach in real-time.
However, scaling the network excessively may well render the approach unusable at some point.
This also becomes apparent when considering the overhead induced by the exchange of feedback messages.
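To make the structure of this computation concrete, the following is a minimal, hypothetical sketch in the spirit of \eqref{eq:mpc_full_optim}, written with the \texttt{cvxpy} modeling library. It is not the actual formulation: the max-min fairness objective, the feedback trajectories, and most constraints are omitted, and all sizes and weights are invented for illustration.
\begin{verbatim}
import cvxpy as cp
import numpy as np

n, H, dt = 3, 8, 0.1              # circuits, horizon steps, step length [s]
C_out = 410.0                     # outgoing node capacity [KB/s]
r_in = np.full((n, H), 150.0)     # predicted incoming rates [KB/s]
s0 = np.array([40.0, 5.0, 5.0])   # initial per-circuit buffers [KB]

r_out = cp.Variable((n, H), nonneg=True)
# Buffer dynamics: s_i = s_0 + cumulative (inflow - outflow) * dt
s = np.tile(s0[:, None], (1, H)) + cp.cumsum((r_in - r_out) * dt, axis=1)

constraints = [s >= 0,                           # cannot send data we do not hold
               cp.sum(r_out, axis=0) <= C_out]   # shared node capacity
# Penalize backlog (low latency) with a mild reward for utilization.
prob = cp.Problem(cp.Minimize(cp.sum(s) - 1e-3 * cp.sum(r_out)), constraints)
prob.solve()
print(np.round(r_out.value[:, 0], 1))  # rates applied until the next sample
\end{verbatim}
A problem of this shape is a linear program, which off-the-shelf convex solvers handle in milliseconds at this scale; this is consistent with the runtimes reported above.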
\subsection{Impact of Network Complexity} \label{sec:network-complexity}
In the previous subsections, we have demonstrated the general utility of PredicTor
for congestion control in the Tor network.
While this constitutes an important precondition for applying such approaches,
it is not sufficient for thoroughly assessing its potential.
We therefore now go a step further and investigate the extent
to which the observed benefits and drawbacks also apply to more realistic network situations.
In order to do so, we now focus on more complex networks
that are not trivial to comprehend in every simulated detail.
In the course of this evaluation, we put a special focus on the assumptions and trade-offs made in PredicTor.
Note that this evaluation is still explorative in nature.
We first analyze PredicTor's behavior in networks that are significantly larger
and more complex than in Section~\ref{sec:eval-small-network},
but otherwise do not differ much from the simulation assumptions.
In particular, the underlay links still have a uniform, constant latency.
We construct random networks of different size.
In particular, we fix the number of relays at 50,
but vary the number of circuits, ranging from 10 to 1,000.
Each circuit consists of a random sequence of three relays.
We chose this simple network model for several reasons.
First of all, it allows us to easily realize different levels of congestion in the network.
Since we want to compare PredicTor against existing
congestion control mechanisms for the Tor network,
it is desirable to investigate the influence of network load.
And secondly, we do not want to make overly strict assumptions on the precise topology.
Instead, we regard a completely random network as a suitable baseline to compare against.
Much more elaborate models of the Tor network do exist~\cite{jansen2012methodically,neverenough-sec2021}.
In fact, our methodology is still influenced by~\cite{neverenough-sec2021},
e.g.\@\xspace in that we pay attention to generating a completely new random network
for every single simulation run to avoid statistical bias.
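For illustration, the topology generation itself can be summarized by the following sketch (the function name and the use of Python are ours; the actual tooling around ns-3 is more involved):
\begin{verbatim}
import random

def random_topology(n_relays=50, n_circuits=100, hops=3, seed=None):
    # A fresh random network per run: each circuit is a random
    # sequence of three distinct relays.
    rng = random.Random(seed)
    relays = list(range(n_relays))
    return [rng.sample(relays, hops) for _ in range(n_circuits)]
\end{verbatim}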
Our focus is mainly on bulk traffic, that is, each circuit transfers an infinite stream of data.
After a lead time, we evaluate the steady state (identified manually),
i.e.\@\xspace, the last two seconds of simulation time.
For this time span, we again analyze the following metrics:
byte-wise end-to-end latency, throughput, and fairness.
For each data point, we carry out the simulation 25~times (with different random seeds)
and report mean values.
In Section~\ref{sec:eval-traffic-patterns},
we lift the assumption of having only bulk traffic
and consider a mix of bulk and interactive web traffic instead.
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
legend style={font=\footnotesize},
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par \vspace{-1em}
\subfloat[\textbf{Latency}\label{fig:eval-latency}]{%
\input{figures/acm-toit-complex-network-latency.tex}}\quad
\subfloat[\textbf{Throughput}\label{fig:eval-throughput}]{%
\input{figures/acm-toit-complex-network-throughput.tex}%
}
\caption{Performance in random networks with 50~relays and variable number of circuits.
(\ref{fig:eval-latency})~Average byte-wise latency, and
(\ref{fig:eval-throughput})~average overall throughput (per-run sum of all circuits).} \label{fig:eval-complexity-latency-throughput}
}
\end{figure}
\paragraph{Latency and Throughput}
We first focus on the latency and throughput that are achieved
by each of the three algorithms with growing congestion in the network.
Figure~\ref{fig:eval-complexity-latency-throughput} presents our results.
With respect to latency, we can see that PredicTor offers great potential
to heavily improve on the status quo (see Figure~\ref{fig:eval-latency}).
In particular, by explicitly requiring small queues during optimization,
PredicTor achieves low latency independently of the number of circuits.
In contrast, latency grows indefinitely for denser networks in the case of vanilla Tor and PCTCP.
This is because PredicTor does not \enquote{blindly} send data into the network,
but only if the controller's optimization result allows it to do so,
based on local measurements and feedback from adjacent relays.
The resulting lower backlog leads to much lower latencies, even for heavily crowded networks.
In contrast, vanilla Tor and PCTCP have to rely solely
on the state of their local TCP connections
that cannot take into account the state in the network more than one hop down the circuit.
Therefore, they send too much data, leading to significant backlog and latency.
The only way that vanilla Tor and PCTCP could react to congestion on the overall circuit
would be Tor's end-to-end \texttt{SENDME} window mechanism.
However, this window has previously been identified to be too coarse-grained and rigid
to help with efficient congestion control~\cite{DBLP:conf/pet/AlSabahBGGMSV11}.
In contrast to vanilla Tor, PCTCP can deal with extreme congestion slightly better
due to its avoidance of head-of-line blocking in the case of packet loss
that becomes more relevant in these scenarios.
When looking at the achieved throughput, however,
PredicTor cannot fully compete with the traditional approaches.
Over the whole parameter range, it achieves considerably lower overall data rates,
with an average disadvantage of around 20\%.
While this is an insight
that was not apparent from the toy scenarios we examined in the previous section,
we attribute it to two root causes:
Firstly, the controller behaves conservatively as far as data rate assignment is concerned.
A circuit is only given a share of the available bandwidth
\emph{after} this was decided to be beneficial in the sense of the MPC optimization problem.
This lack of aggressiveness differentiates PredicTor from the other approaches.
And secondly, just like vanilla Tor and PCTCP,
PredicTor currently uses TCP as the underlying transport protocol
and acts as an additional scheduler on top of it.
In the general case, it will therefore not be able to outperform
approaches that only use TCP without an additional sending limit.
Either way, we can state that the lower average throughput
constitutes a clear trade-off that PredicTor makes in favor of lower latency.
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par %
\input{figures/acm-toit-complex-network-throttled_50_100.tex} \quad
\input{figures/acm-toit-complex-network-throttled_50_500.tex}
\caption{Impact of application-layer throttling on latency in networks with 50 relays.
Note that reducing the data rate does not enable vanilla Tor and PCTCP
to achieve latency values as low as PredicTor.}
\label{fig:eval-complexity-throttled}
}
\end{figure}
Putting this relationship into perspective, one might argue
that the lower latency is not an achievement of PredicTor itself,
but simply an artifact and a consequence of the lower throughput,
because the lower data rates result in smaller queues.
However, this is not the case as another experiment shows:
Out of the previously explored parameter space,
we chose two scenarios that are representative
for a low and high degree of congestion in the network,
respectively (100 and 500 circuits, with 50 relays).
For both of these scenarios, we introduced an artificial,
\emph{application-layer} throttling mechanism,
reducing the amount of available bandwidth each relay can use.
We varied this throttling factor between 0.0 (no throttling at all)
and 0.9 (only 10\% of the bandwidth remains) and recorded the achieved latency.
The assumption was that artificially lowering the data rates of vanilla Tor and PCTCP
would already be enough to also achieve lower latency for these mechanisms.
Figure~\ref{fig:eval-complexity-throttled} reveals that the opposite is true,
for both the heavily and less congested networks.
In fact, lowering the data rates mostly even leads to
an \emph{increase} in latency with vanilla Tor and PCTCP.
To understand this behavior, we have to emphasize that each data transfer through the Tor network
does not consist of only one single TCP connection, but denotes a multi-hop data transfer.
Lowering the data rates therefore does not automatically
lead to a substantial reduction of backlog.
Instead, the packets are queued at the application layer
and experience the same throttling when being forwarded.
We can thus conclude that the low latency is in fact an achievement of PredicTor
and not only a side effect of the lower data rates.
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
legend style={font=\footnotesize},
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of n] (f) {max-min fairness};
\draw[color=black,very thick] ($(f.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par \vspace{-1em}
\subfloat[\textbf{Data rate distribution} (750 circuits)\label{fig:eval-fairness-cdf}]{%
\input{figures/acm-toit-complex-network-fairness-representative-r50-c750.tex}}
\subfloat[\textbf{Fairness index} (variable number of circuits)\label{fig:eval-fairness-index}]{%
\input{figures/acm-toit-complex-network-fairness-index.tex}}\quad
\caption{Throughput fairness in random networks with 50~relays.
(\ref{fig:eval-fairness-cdf})~Data rate CDF plot of an example run (750~circuits), and
(\ref{fig:eval-fairness-index})~fairness index over varying number of circuits.} \label{fig:eval-complexity-fairness}
}
\end{figure}
\paragraph{Fairness}
Another central promise of PredicTor is the achievement of much better fairness,
based on the notion of max-min fairness.
We now evaluate the degree to which PredicTor can realize fairness also in complex networks.
Our results are based on the same simulation runs as for the latency and throughput evaluation.
In Figure~\ref{fig:eval-fairness-cdf},
we show an individual simulation run with 750~concurrent circuits
as a CDF plot of the data rates to visually inspect the fairness.
We also included the max-min fair rate distribution as a baseline.
As can be seen, vanilla Tor and PCTCP give most circuits either too low or too high data rates.
In contrast, PredicTor very closely approximates max-min fairness.
The only deviation that can visually be identified
is that several circuits use less bandwidth than optimal max-min would allow them to.
This is in line with our previous observation
that the traffic generated by PredicTor is rather conservative
and the overall data rate tends to be lower than with the traditional approaches.
We validated and generalized the insight gained from the single simulation runs
by calculating the fairness index~$F$ for varying circuit numbers.
The plot in Figure~\ref{fig:eval-fairness-index} reveals that PredicTor is highly effective at ensuring fairness,
even in situations in which the network is heavily congested.
The explicit max-min fairness formulation in PredicTor's optimization goal
consistently causes around 90\% of the network traffic to adhere to max-min fairness.
In contrast, vanilla Tor and PCTCP generally generate much less fair traffic.
This becomes especially apparent the more congested the network is.
Again, PCTCP performs slightly better than vanilla Tor,
but still cannot clearly surpass the threshold of around $F=0.4$
if there is considerable congestion in the network.
We can also see that, if there is only a little congestion,
both of the traditional approaches generate fairer traffic.
This, however, is not because they can \emph{ensure} this behavior in any way.
Instead, the lack of congestion also implies that
more circuits can fully utilize the available bandwidth on their path.
Therefore, a larger share of circuits can be regarded as behaving in a fair way.
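For reference, the max-min fair baseline used throughout this evaluation can be computed by standard progressive filling. The following sketch is our own illustrative code (it assumes that every circuit crosses at least one capacity-limited link):
\begin{verbatim}
def max_min_fair(capacities, circuits):
    # capacities: link -> capacity; circuits: circuit -> set of links.
    # Raise all unfrozen rates equally until a link saturates, then
    # freeze the circuits crossing that link.
    rates = {c: 0.0 for c in circuits}
    frozen = set()
    cap = dict(capacities)
    while len(frozen) < len(circuits):
        active = {l: [c for c, links in circuits.items()
                      if l in links and c not in frozen] for l in cap}
        inc = min(cap[l] / len(active[l]) for l in cap if active[l])
        for c in rates:
            if c not in frozen:
                rates[c] += inc
        for l in cap:
            cap[l] -= inc * len(active[l])
            if active[l] and cap[l] <= 1e-9:
                frozen.update(active[l])
    return rates

# Three circuits sharing one 410 KB/s bottleneck: 136.7 KB/s each,
# matching the max-min fair column of the earlier comparison table.
print(max_min_fair({"b": 410.0}, {c: {"b"} for c in ("1", "2", "3")}))
\end{verbatim}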
\subsection{Impact of Model Assumptions} \label{sec:eval-model-assumptions}
As shown in the previous subsection, PredicTor is able to improve on both, latency and fairness,
even in complex, \enquote{crowded} networks.
However, even if the concurrent data transmissions of many circuits in these simulations
added a considerable degree of randomness to the network behavior,
the scenario is still specific to the assumptions made in PredicTor's system model.
Most importantly, the underlay link latencies exactly match the values
that are used for calculation in the model.
In reality, this would not be the case
as many influences outside our model affect the connection.
Such factors might include cross traffic on the Internet, routing topology changes and others.
The system model describing the expected network behavior is crucial
to the functioning of model predictive control approaches like PredicTor.
It is therefore important to investigate to which degree the overall system
is susceptible to deviations from the model.
In this section, we thus focus on the robustness of PredicTor
in the face of network behavior differing from PredicTor's expectations drawn from its system model.
We note that the strongest assumption made by PredicTor is
that the latency of the underlay links can reliably be known in advance.
Therefore, we now evaluate PredicTor's behavior if this assumption is violated.
For this, we employ a similar setup as in Section~\ref{sec:network-complexity}.
However, we no longer simulate uniform link latency.
Instead, we introduce a fuzziness factor $f$.
The fuzziness $f$ defines the uncertainty in link latency as follows:
If PredicTor's system model expects a link latency of $l$,
the link latency is instead chosen uniformly at random from the interval
$[\max(l \cdot (1-f), \epsilon ) ; l \cdot (1+f)]$.
As a consequence of this construction, the larger the fuzziness value $f$,
the larger will be the average deviation of the underlay link latencies from PredicTor's system model.
This gives us a concise parameter to evaluate the robustness of PredicTor against a system model mismatch.
As a technical detail, we introduce a lower bound
of some arbitrarily small value $\epsilon > 0$ for the latencies
to avoid negative and zero-valued latencies.
For fuzziness values~$f > 1$, this by design shifts the distribution towards higher latencies.
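In code, the sampling rule is a direct transcription of the interval above (the function name and the default for $\epsilon$ are ours):
\begin{verbatim}
import random

def fuzzed_latency(l, f, eps=1e-6):
    # Sample the realized link latency around the model value l.
    return random.uniform(max(l * (1.0 - f), eps), l * (1.0 + f))
\end{verbatim}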
We now fix the number of relays and circuits
to evaluate PredicTor's robustness by varying the link fuzziness.
Since the analysis without link latency deviation has revealed
that the performance differs depending on how crowded the network is,
we carry out the following evaluation twice:
Firstly, with a circuit number of~100, representing a network situation with little congestion,
and secondly, with 500~circuits, which induces much more congestion in the network.
Again, each trial is repeated 25~times with newly generated network topologies.
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par %
\input{figures/fuzziness-latency_50_100.tex} \quad
\input{figures/fuzziness-latency_50_500.tex}
\caption{Impact of link latency deviation from the system model on achieved latency, in random networks with 50~relays.}
\label{fig:eval-fuzziness-latency}
}
\end{figure}
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par %
\input{figures/fuzziness-throughput_50_100.tex} \quad
\input{figures/fuzziness-throughput_50_500.tex}
\caption{Impact of link latency deviation from the system model on achieved throughput, in random networks with 50~relays. The throughput values are obtained as the sum of all circuits.}
\label{fig:eval-fuzziness-throughput}
}
\end{figure}
\begin{figure}
\pgfplotsset{width=.45\textwidth,height=.3\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\centering{%
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\end{tikzpicture} \par %
\input{figures/fuzziness-fairness_50_100.tex} \quad
\input{figures/fuzziness-fairness_50_500.tex}
\caption{Impact of link latency deviation from the system model on achieved fairness, in random networks with 50~relays. Fairness is measured in terms of the fairness index $F$.}
\label{fig:eval-fuzziness-fairness}
}
\end{figure}
We first consider the achieved latency, shown in Figure~\ref{fig:eval-fuzziness-latency}.
Recall that in both cases (more and less congestion),
PredicTor achieved much lower latency than vanilla Tor and PCTCP
if the link latencies exactly matched the system model, as presented in Section~\ref{sec:network-complexity}.
Introducing link fuzziness now creates a more differentiated picture.
The first observation that can be made is that, for all of the considered Tor variants,
the overall latency grows with a growing link fuzziness factor.
This is not surprising due to the aforementioned
shift of the latency distribution for large fuzziness values.
A more significant observation, however, is that
PredicTor is affected by an increasing fuzziness much more
than the traditional approaches in vanilla Tor and PCTCP.
Despite the fact that PredicTor still performs better in this regard,
it loses much of its advantage.
This insight is important for evaluating the suitability of approaches
based on model predictive control for congestion handling:
Such approaches, like PredicTor, heavily rely on the assumption that
their internal system model gives a suitable representation of the real system behavior.
What happens in the case of latency is that PredicTor's predictions about
when data will arrive and what size the buffers will have in the future
become less accurate with growing link fuzziness.
As a consequence, its effectiveness in reducing latency in the network also declines.
To a certain degree, this issue could be tackled by extending the system model,
including a more complex model for latency
as well as an explicit notion of dealing with latency variations in the controller.
However, on a conceptual level,
the issue of a mismatch between the modeled system behavior and the real behavior cannot fully be avoided.
In this regard, traditional approaches may prove more robust against unexpected external influences.
However, when looking at the achieved throughput and fairness,
presented in Figures~\ref{fig:eval-fuzziness-throughput} and~\ref{fig:eval-fuzziness-fairness},
we can see that a deviation from the system model
does not necessarily degrade performance in every regard.
As can be concluded from the plots,
the achieved throughput and fairness remain relatively unaffected even for large fuzziness values;
in fact, they are affected less than those of the traditional approaches.
On the one hand, this may be due to the fact that we simulate
the distribution of feedback trajectories out-of-band, as discussed before.
On the other hand, however, we attribute this to the fact that
PredicTor operates on the notion of \emph{data~rates}
instead of absolute numbers of packets to be transferred.
These rates can be realized by Tor even if data is available later than expected due to higher latency
or if there are temporary peaks of data due to inaccurate latency prediction.
The controller is only called at distinct time intervals to compute a data transfer schedule.
In the meantime, Tor can adhere to the calculated plan
and benefit from the enforced properties such as max-min fairness,
even if the system model was partly inaccurate.
As a result, PredicTor even proves more robust than traditional approaches in these specific regards.
Please also note that, in its current form,
PredicTor makes use of TCP for realizing the underlying data transfer.
While this design choice was primarily made for simplicity reasons,
we now see that it also helps with robustness:
If PredicTor also ran its own transport protocol based on its system model,
the impact of a system model mismatch might have been more severe.
We thus think that it may constitute a promising strategy to combine predictive control approaches
with traditional algorithms in a way similar to what we did in PredicTor.
\subsection{Impact of Traffic Patterns} \label{sec:eval-traffic-patterns}
In order to better understand PredicTor's behavior with regard to its traffic dynamics,
we now focus on how it deals with other \emph{traffic patterns}.
The behavior and effectiveness of every congestion control algorithm
clearly also depends on the kind of traffic it is supposed to handle.
In the previous subsections, we only considered \emph{bulk traffic}.
That is, the data to be transferred denotes an infinite stream of bytes.
The intention thereof was to analyze the steady state behavior,
which provided general insights on PredicTor's mechanics.
At the same time, it enabled us to accurately measure the achieved throughput.
This approach, however, is not sufficient for establishing an understanding
of PredicTor's dynamic behavior.
We therefore consider another scenario with more dynamic traffic,
where data streams come and go.
\begin{figure}
\pgfplotsset{width=.8\textwidth,height=.4\textwidth}
\pgfplotsset{
tick label style={font=\scriptsize\sffamily},
label style={font=\scriptsize\sffamily},
title style={font=\scriptsize\bfseries\sffamily},
legend style={font=\footnotesize},
group/horizontal sep=0.5cm,
}
\begin{tikzpicture}[font=\scriptsize]
\node (v) {PredicTor (web)};
\draw[color=color2,very thick] ($(v.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[below=1em of v.west,anchor=west] (v2) {PredicTor (bulk)};
\draw[color=color2,very thick,dashed] ({$(v.east) + (0.15cm,0cm)$} |- {v2.east}) -- +(0.7cm,0cm);
\node[right=1.75cm of v] (s) {Vanilla Tor (web)};
\draw[color=color1,very thick] ($(s.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[below=1em of s.west,anchor=west] (s2) {Vanilla Tor (bulk)};
\draw[color=color1,very thick,dashed] ({$(s.east) + (0.15cm,0cm)$} |- {s2.east}) -- +(0.7cm,0cm);
\node[right=1.75cm of s] (n) {PCTCP (web)};
\draw[color=color0,very thick] ($(n.east) + (0.15cm,0cm)$) -- +(0.7cm,0cm);
\node[below=1em of n.west,anchor=west] (n2) {PCTCP (bulk)};
\draw[color=color0,very thick,dashed] ({$(n.east) + (0.15cm,0cm)$} |- {n2.east}) -- +(0.7cm,0cm);
\end{tikzpicture} \par %
\includegraphics{paper-figure41.pdf}
\caption{Circuit latencies with mixed bulk/web traffic (50~relays,
300~circuits, 90\%~web circuits).}
\label{fig:webratio}
\end{figure}
In order to do so, we follow a simple methodology that was put forward in~\cite{jansen2012methodically}
and since then has been applied by a series of publications in this field~%
\cite{DBLP:conf/ndss/WacekTBS13,DBLP:conf/uss/JansenGWSS14,tschorsch16bktap}.
The general approach is to divide the circuits into two groups:
On the one hand, a certain fraction of the circuits carries out bulk data transfers,
similarly to our previous approach.
On the other hand, the other circuits are regarded as \emph{interactive web} circuits.
Their behavior is meant to mimic that of a client interactively browsing the web.
More specifically, such circuits transfer an object 320~KB in size
and wait for a random amount of time (between 1~and 2~seconds) before they repeat.
This way, a certain amount of traffic volatility is created.
Although this model is very simplistic in nature,
we rely on it to establish comparability with previous work in the field.
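For reference, the behavior of a single interactive web circuit then reduces to a loop of the following form (an illustrative sketch; the send callback stands in for the actual client logic):
\begin{verbatim}
import random
import time

OBJECT_SIZE = 320 * 1024  # one web object: 320 KB

def web_circuit(send, n_objects=10):
    # Transfer an object, think for 1-2 seconds, repeat.
    for _ in range(n_objects):
        send(OBJECT_SIZE)
        time.sleep(random.uniform(1.0, 2.0))
\end{verbatim}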
In our implementation, the bulk circuits start first and the web circuits join later at random times.
We focus on one representative example for explaining various behavioral aspects that can be observed.
In particular, we continue to set the number of relays to~50 and the number of circuits to~300,
which corresponds to the mean value between the previous examples for low and high degrees of congestion.
Considering even more congested networks
would run counter to our goal of simulating interactive circuits
because even smaller per-circuit data rates would make bulk and interactive traffic more similar
due to the increased transfer times.
Moreover, we set the share of interactive web circuits to~90\%
to approximate the estimate from previous work~\cite{DBLP:conf/pet/McCoyBGKS08,tschorsch16bktap}.
We carry out 25~random repetitions of this simulation scenario
and measure the byte-wise latency of data through the network, as before.
Figure~\ref{fig:webratio} presents a CDF plot of the achieved per-circuit latencies over all runs,
differentiating between bulk and web circuits, for PredicTor as well as vanilla Tor and PCTCP.
The main observation that can be made is that PredicTor achieves low latency for its circuits
even in this scenario with 90\%~web circuits.
In contrast, the majority of circuits handled by the traditional congestion control algorithms
exhibit clearly worse byte-wise latency.
Also, for web circuits, they lead to a long tail of extremely high latency.
This is due to the fact that the web circuits join the network
when it is already overloaded and contains large queues.
One might have expected PredicTor to perform worse,
because the short flows give it less opportunity to apply its predictive behavior.
However, there are two main reasons why this is not the case:
Firstly, even these short flows cover several optimization time steps of PredicTor
so it can in fact apply its predictions to some extent.
Secondly, and even more importantly,
this experiment clearly visualizes PredicTor's second characteristic behavioral trait,
apart from predictiveness---its cautiousness or \emph{pessimistic scheduling}.
While vanilla Tor and PCTCP send as much data into the network as possible,
PredicTor only does so when the circuits are assigned an appropriate data rate by the optimizer.
This, in return, only happens if the network is in fact
able to promptly process and forward the data.
Put differently, we can again see the trade-off made in PredicTor:
It optimizes the latency that payload bytes in the network experience,
at the cost of sacrificing throughput, as we have shown in Section~\ref{sec:network-complexity}.
In the following, we discuss, among others, this relationship more in-depth.
\subsection{Discussion} \label{sec:discussion}
The different steps of our evaluation convey the following overall picture:
PredicTor is highly effective at realizing what is explicitly defined
within its formal optimization objective.
In particular, the improvements it achieves with regard to latency and fairness
compared to vanilla~Tor and PCTCP, are considerable.
While this already becomes apparent by looking at absolute measurement values from selected runs,
the most remarkable difference lies in its asymptotic behavior:
Unlike with vanilla~Tor and PCTCP, both metrics do not degrade with growing levels of congestion
but stay nearly constant, due to the created backpressure.
However, we have also seen that PredicTor achieves lower throughput than the traditional approaches.
It therefore makes a clear trade-off that is defined by the optimization goal in the controller.
Not being able to simultaneously optimize throughput and latency
is not a shortcoming specific to PredicTor,
but has long been known as an inherent limitation of congestion control algorithms~%
\cite{DBLP:journals/tcom/Jaffe81a}.
While the traditional approaches we considered (vanilla Tor and PCTCP) optimize for throughput,
PredicTor puts the emphasis on latency instead.
Other strategies would include, e.g.\@\xspace, alternating between these goals over time,
as is done by BBR~\cite{DBLP:journals/queue/CardwellCGYJ16}.
We also showed that the latency improvement is not an artifact of the lower throughput,
but an achievement of the controller itself.
The explicit queue constraints~%
\eqref{eq:mpc_full_optim_sc_max}--\eqref{eq:mpc_full_optim_shat_max}
in the optimization problem enforce low backlog
and thus a reduced aggressiveness of the generated traffic.
As a consequence, we consider PredicTor and similar approaches based on distributed MPC
to bear strong potential as the base for novel congestion control mechanisms
that achieve performance values and trade-offs
that are not yet covered by existing traditional approaches.
Apart from this trade-off, we have seen two major disadvantages:
Firstly, like all MPC-based approaches,
PredicTor is dependent on the mathematical system model
it utilizes for making predictions of the network state.
Our evaluation reveals that deviations from this model can severely degrade the performance.
Moreover, the consequences of such model mismatches are not necessarily easy to foresee.
As an example, we saw that throughput and fairness remained relatively unaffected
by growing link latency model errors.
We have also seen that the combination of MPC-based approaches like PredicTor
with traditional underlay transport protocols like TCP can be beneficial with regard to robustness.
The second factor that may potentially prove disadvantageous concerns scalability.
In our naive implementation, computational effort and communication overhead
grow linearly with the number of participating relays.
A multitude of improvement steps could be considered to alleviate or overcome this issue:
For instance, we imagine solving only individually reduced optimization problems at each relay
instead of the complete optimization problem we presented here.
Also, parameters like data resolution, horizon length, and data representation%
---including data compression---should be taken into account.
Continuing to research such refinements can make MPC-based congestion control schemes
very interesting alternatives to existing algorithms.
\section{Related Work} \label{sec:related-work}
Efficiently transferring the circuits' data through the Tor network is far from trivial.
There is a multitude of factors that are known to contribute to
performance issues in Tor~\cite{DBLP:journals/csur/AlSabahG16}.
These include the circuit selection~\cite{congestion-tor12},
local handling of connections at each relay~\cite{DBLP:conf/uss/JansenGWSS14}
and the transport protocol~\cite{udp-or}.
Congestion control touches each of these fields.
Research has shown that insufficient congestion control is a major factor
for Tor's performance problems~\cite{DBLP:journals/csur/AlSabahG16}.
Despite years of research on the topic,
many of these problems remain as of today,
also due to the challenging deployment process
for fundamental changes to the Tor network
\cite{DBLP:journals/corr/abs-1709-01044,doepmann18deployingonions}.
Since Tor is an overlay network,
there generally are multiple conceivable approaches towards congestion control.
In particular, congestion control could either be carried out
\emph{end-to-end} or \emph{hop-by-hop}.
Operating end-to-end more closely matches the classical notion of congestion control
as it is commonly understood for underlay networks.
In Tor, this would mean that only the endpoints (client and exit) are involved.
In fact, this is how vanilla Tor currently operates.
Contrary to many other IP~networks, though,
reliability is currently implemented in a hop-by-hop manner between Tor relays.
Several previous proposals have decided to stick to the paradigm of end-to-end congestion control.
For example, UDP-OR~\cite{udp-or} tunnels a single TCP connection through Tor
using UDP as an underlay.
IPPriv~\cite{DBLP:conf/icc/KiralyC09} follows a similar approach using IPsec.
Taking the idea of \enquote{stateless} intermediate relays one step further,
one could even consider applying active queue management techniques like CoDel~\cite{codel}
or quality~of~service approaches (e.g.\@\xspace DiffServ~\cite{rfc2474} or DPS~\cite{stoica1999dps}).
The anonymization functionality could even be moved completely to the network stack,
as LAP~\cite{hsiao2012lap}, HORNET~\cite{chen2015hornet}, and TARANET~\cite{chen2018taranet} demonstrate.
However, there is a common drawback for all of these approaches:
Since the relays within a circuit may be located all over the world,
the resulting round-trip times between the endpoints become very large.
As a consequence, the longer feedback loop
results in degraded performance~\cite{DBLP:conf/dsn/AmirD03,tschorsch12transport}.
PredicTor therefore takes advantage of the fact
that the intermediate relays operate on the application layer anyway,
so they can be taken into account for congestion control,
keeping the feedback loop small and potentially enabling better performance.
This strategy has been followed before, e.g.\@\xspace, by replacing Tor's very coarse-grained
end-to-end congestion window with a more flexible scheme~\cite{DBLP:conf/pet/AlSabahBGGMSV11},
or even integrating multi-hop congestion control
into a tailored transport protocol~\cite{tschorsch16bktap}.
On the other hand, PCTCP~\cite{alsabah2013pctcp},
which uses a dedicated TCP connection between each pair of adjacent relays for every circuit,
has the potential to be actually deployed in Tor.
While PCTCP provides some improvements, e.g.\@\xspace, in fairness,
it still does not provide sufficient congestion control.
Other approaches often require changes to the network infrastructure
and are therefore not directly applicable.
These approaches have focused primarily on re-using existing techniques
from networking research for this specific use case.
In contrast, our work leverages recent advances in the field of control technology.
Thereby, we open a new perspective for advancing congestion control using interdisciplinary research.
Multi-hop congestion control has been an active research topic in other fields as well.
For example, numerous scientific contributions apply suitable schemes
in the context of (wireless) mesh networks~%
\cite{hop-by-hop-wireless,DBLP:journals/adhoc/ScheuermannLM08,DBLP:conf/mobicom/JiangZW09}.
These, however, have slightly different use cases and premises.
For example, in Tor, it would not be acceptable to route around
congested areas of the network, as this would put the user's anonymity at risk.
However, in special situations,
our approach may also be applicable to these scenarios in some similar form.
The problem of congestion in networks has also been studied extensively
from a control theoretical perspective in the past.
Previous works include classic linear control~\cite{Mascolo1999},
PID~\cite{Yanfie2003}, and state-feedback LQR control~\cite{Azuma2006}.
It is well understood
that delay is among the main challenges of controlling the network.
More recently, especially optimization-based methods have been applied
to the problem with promising results~\cite{He2007,Mota2012}.
Model predictive control (MPC), as applied in~\cite{Mota2012},
is an advanced control technique that can deal with non-linear systems and
explicitly take constraints into consideration.
Its predictive control action is particularly suited for systems with significant delay.
Furthermore, MPC has received significant attention
as a method for distributed control~\cite{Negenborn2014, CHRISTOFIDES2013},
where local controllers interact to jointly control an interconnected system.
Distributed MPC is often applied to systems with a complex network character,
such as transportation systems~\cite{Dunbar2012},
energy management~\cite{Patel2016}
or process industry applications~\cite{CHRISTOFIDES2013},
where a centralized solution is prohibitive due to the size of the system
or privacy concerns.
In order to obtain global properties, the local action is often coordinated by exchanging information about predicted future behavior~\cite{Negenborn2014}.
\section{Conclusion and Future Work} \label{sec:conclusion}
In this work, we proposed a refined version of PredicTor,
a novel model predictive control formulation to tackle the challenge of
congestion control in multi-hop overlay networks like Tor.
PredicTor is a distributed approach that relies on exchanging information
about the predicted network state between adjacent nodes.
We presented a thorough evaluation of PredicTor's performance in complex networks.
Our results indicate that approaches like PredicTor
that build upon distributed model predictive control for congestion control
can, in fact, achieve clear improvements in various regards.
The flexibility of tailoring the optimization goal to the exact requirements of a use case
makes it possible to realize custom trade-offs.
In PredicTor, we chose to prioritize a strict notion of max-min fairness
as well as low latency in the network.
As our evaluation shows, PredicTor is able to clearly improve on the status quo in these regards.
On the other hand, these benefits are traded for lower throughput.
Our work on using model predictive control for congestion handling
shows the potential of bringing together traditional networking research
and modern control theoretical approaches.
However, our work only denotes a starting point for this research direction.
Several open issues remain:
One important current drawback is the dependency on an accurate system model.
If this underlying description of the system does, e.g.\@\xspace, not capture an unexpected external influence,
the advantage of model predictive control can easily be lost.
Another research question that remains unanswered for now is scalability:
Our implementation of PredicTor was not optimized in this regard
and we did not take communication overhead into account
because the resulting goodput ratio is too dependent on the real network conditions
(i.e.\@\xspace the absolute data rate).
However, the computational effort and communication
necessary for exchanging the feedback information
currently grow linearly in the number of relays for each node.
This puts a natural upper limit on the size of networks that can efficiently be handled.
However, for overlay networks that are limited in size, it may prove viable.
Future research will have to show
whether these disadvantages can be alleviated to enable real-world application.
\begin{acks}
This work has been partially funded by the
Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation, TS 477/1-1).
\end{acks}
\printbibliography
\end{document}
A classical problem in mechanics is the study of the rotation of a heavy rigid body with a fixed point. An important special case is that of the symmetric top, or Lagrangian top, see \cite{arnold}, for which the inertia ellipsoid at the fixed point is an ellipsoid of revolution and whose center of gravity lies on the axis of symmetry. The rotation of the heavy top is governed by the differential system
\begin{equation}\label{top}
\left\{
\begin{array}{ll}
\dot{\vec{M}}=\vec{M}\times \mathbb{I}^{-1}\vec{M} +mg\vec{\gamma}\times\vec{r}_G \\
\dot{\vec{\gamma}}=\vec{\gamma}\times\mathbb{I}^{-1}\vec{M},
\end{array}
\right.
\end{equation}
where $m$ is the mass of the symmetric top, $g$ is the gravitational acceleration, $\vec{r}_G$ is the vector with the initial point in the fixed point $O$ and the terminal point in the center of gravity $G$, $\mathbb{I}$ is the moment of inertia tensor at the point $O$, $\vec{M}$ is the angular momentum vector and $\vec{\gamma}$ is the unit vector of the direction of the gravitational field. Also, one can use the equivalent description with the state parameters $\vec{\omega}$ and $\vec{\gamma}$, where $\vec{\omega}$ is the angular velocity vector and $\vec{M}=\mathbb{I}\vec{\omega}$. We denote by $A=B$ and $C$ the principal moments of inertia. For the following considerations we use a body frame for which the axes are principal axes of inertia and $G$ has the components $(0,0,z)$ with $z>0$. The matrix of the moment of inertia tensor in this body frame has the form $\mathbb{I}=\hbox{diag}(A,A,C)$. In the above frame the angular momentum vector $\vec{M}$ has the components $M_1, M_2, M_3$ and the unit vector of the direction of the gravitational field $\vec{\gamma}$ has the components $\gamma_1,\gamma_2,\gamma_3$.
We have four conserved quantities:
$$H=\frac{1}{2}(\frac{M_1^2}{A}+\frac{M_2^2}{A}+\frac{M_3^2}{C})+mgz\gamma_3,\,\,\,C_1=\gamma_1^2+\gamma_2^2+\gamma_3^2,\,\,\,C_2=M_1\gamma_1+M_2\gamma_2+M_3\gamma_3,
\,\,\,\text{and}\,\,\,F=M_3.$$
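Conservation of $F=M_3$, for instance, can be read off directly from the third component of the first equation in \eqref{top}: since $\vec{r}_G=(0,0,z)$,
\[
\dot{M}_3=\left(\vec{M}\times \mathbb{I}^{-1}\vec{M}\right)_3+mg\left(\vec{\gamma}\times\vec{r}_G\right)_3
=\left(\frac{M_1M_2}{A}-\frac{M_2M_1}{A}\right)+mg\left(\gamma_1\cdot 0-\gamma_2\cdot 0\right)=0.
\]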
It is easy to see that a vertical uniform rotation $(0,0,\mathfrak{M}_{3},0,0,1)$ of the top is an equilibrium point for the system \eqref{top}.
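Indeed, at such a point all the vectors involved are parallel to the third axis of the body frame:
\[
\vec{M}\times \mathbb{I}^{-1}\vec{M}=(0,0,\mathfrak{M}_{3})\times\left(0,0,\tfrac{\mathfrak{M}_{3}}{C}\right)=\vec{0},\quad
\vec{\gamma}\times\vec{r}_G=(0,0,1)\times(0,0,z)=\vec{0},\quad
\vec{\gamma}\times \mathbb{I}^{-1}\vec{M}=\vec{0},
\]
so both right-hand sides of \eqref{top} vanish.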
It is well known, see \cite{chetaev}, \cite{holm-marsden-ratiu-weinstein}, \cite{marsden-ratiu} and \cite{rouche}, that the condition $C^2\omega^2>4Amgz$ is a sufficient condition for stability of the equilibrium point $(0,0,\omega,0,0,1)$ when we use the state parameters $\vec{\omega}$ and $\vec{\gamma}$. This condition implies the following sufficient condition for the stability of the vertical uniform rotation $(0,0,\mathfrak{M}_{3},0,0,1)$,
\begin{equation}\label{condition}
\mathfrak{M}_{3}^2>4Amgz.
\end{equation}
The method used by N.G. Chetaev (see \cite{chetaev}) and presented in the paper \cite{rouche} constructs a Lyapunov function of the form $\lambda_1 H+\lambda_2 C_1+\lambda_3 C_2+\lambda_4 F+\mu F^2$. In the papers \cite{holm-marsden-ratiu-weinstein} and \cite{marsden-ratiu} the energy-Casimir method is used, which also constructs a Lyapunov function from the conserved quantities $H,C_1,C_2$ and $F$.
In this paper we study the possibility to construct a Lyapunov function using the conserved quantities $H,C_1,C_2$ and $F$. We apply an algebraic method also used in the papers \cite{comanescu} and \cite{comanescu-1}. We prove that it is possible to construct in a neighborhood of the vertical uniform rotation $(0,0,\mathfrak{M}_{3},0,0,1)$ a Lyapunov function using the conserved quantities $H,C_1,C_2,F$ if and only if we have $\mathfrak{M}_{3}^2\geq 4Amgz$.
We recover the sufficient condition \eqref{condition} for the Lyapunov stability of the vertical uniform rotation. We prove that the condition
\begin{equation}\label{condition-equality}
\mathfrak{M}_{3}^2=4Amgz
\end{equation}
is also a sufficient condition for the Lyapunov stability.
In the papers \cite{holm-marsden-ratiu-weinstein} and \cite{marsden-ratiu} it is noted that the condition $\mathfrak{M}_{3}^2<4Amgz$ implies the instability of the uniform rotation $(0,0,\mathfrak{M}_{3},0,0,1)$; more precisely, the uniform rotation is not spectrally stable (the linearization has an eigenvalue with strictly positive real part).
The stability problem of a vertical uniform rotation of a heavy top is completely solved by using the conserved quantities $H,C_1,C_2,F$ and the linearization method.
\section{Stability of the vertical uniform rotations}
The stability of an equilibrium point with respect to a set of conserved quantities is a sufficient condition for Lyapunov stability. If an equilibrium point is not stable with respect to a set of conserved quantities, then we cannot construct a Lyapunov function using this set of conserved quantities. We recall some theoretical considerations from the paper \cite{comanescu}. We consider an open set $D\subset\mathbb{R}^n$ and a locally Lipschitz function $f:D\rightarrow \mathbb{R}^n$ which generates the differential equation
\begin{equation}\label{ecuatie-diferentiala}
\dot{x}=f(x)
\end{equation}
Let $x_e$ be an equilibrium point. A continuous function $V:D\rightarrow \mathbb{R}$ which satisfies $V(x_e)=0$ and $V(x)>0$ for every $x\neq x_e$ in a neighborhood of $x_e$ is called a positive definite function at the equilibrium point $x_e$.
{\it The equilibrium point $x_e$ of \eqref{ecuatie-diferentiala} is stable with respect to the set of conserved quantities $\{F_1,...,F_k\}$ if there exists a continuous function $\Phi:\mathbb{R}^k\rightarrow \mathbb{R}$ such that $x\rightarrow \Phi(F_1,...,F_k)(x)-\Phi(F_1,...,F_k)(x_e)$ is a positive definite function at $x_e$.}
Under the conditions of the above definition, the function $x\rightarrow \Phi(F_1,...,F_k)(x)-\Phi(F_1,...,F_k)(x_e)$ is a Lyapunov function at the equilibrium point $x_e$ and we have the following results.
\begin{thm}\label{implication-stability}
If the equilibrium point $x_e$ of \eqref{ecuatie-diferentiala} is stable with respect to the set of conserved quantities $\{F_1,...,F_k\}$ then it is stable in the sense of Lyapunov.
\end{thm}
\begin{thm}\label{stability}
Let $x_e$ be an equilibrium point of \eqref{ecuatie-diferentiala} and $\{F_1,...,F_k\}$ a set of conserved quantities. The following statements are equivalent:
\begin{itemize}
\item[(i)] $x_e$ is stable with respect to the set of conserved quantities $\{F_1,...,F_k\}$;
\item [(ii)] $x\rightarrow ||(F_1,...,F_k)(x)-(F_1,...,F_k)(x_e)||$ is a positive definite function at $x_e$;
\item [(iii)] the system of equations $F_1(x)=F_1(x_e),...,F_k(x)=F_k(x_e)$ has no root besides $x_e$ in some neighborhood of $x_e$.
\end{itemize}
\end{thm}
Theorem \ref{stability} $(iii)$ offers an algebraic method for proving the Lyapunov stability of an equilibrium point. This method has been used in \cite{comanescu} and \cite{comanescu-1} to study the stability problem for the uniform rotations of a torque-free gyrostat and for the equilibrium states of a heavy gyrostat (the Zhukovskii case).
In our case the algebraic system at the equilibrium point $(0,0,\mathfrak{M}_{3},0,0,1)$ is
\begin{equation}\label{algebraic}
\frac{1}{2}(\frac{M_1^2}{A}+\frac{M_2^2}{A}+\frac{M_3^2}{C})+mgz\gamma_3=\frac{\mathfrak{M}_3^2}{2C}+mgz,\,\,\gamma_1^2+\gamma_2^2+\gamma_3^2=1,
\,\,M_1\gamma_1+M_2\gamma_2+M_3\gamma_3=\mathfrak{M}_3,\,\,M_3=\mathfrak{M}_3.
\end{equation}
This system is equivalent to the following system:
\begin{equation}\label{algebraic-1}
M_1^2+M_2^2-2Amgz(1-\gamma_3)=0,\,\,\gamma_1^2+\gamma_2^2+\gamma_3^2=1,
\,\,M_1\gamma_1+M_2\gamma_2-\mathfrak{M}_3(1-\gamma_3)=0.
\end{equation}
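For clarity, we record the elimination behind this equivalence: substituting $M_3=\mathfrak{M}_3$ into the first equation of \eqref{algebraic} and multiplying by $2A$ gives
$$ M_1^2+M_2^2=2A\left(\frac{\mathfrak{M}_3^2}{2C}+mgz-\frac{\mathfrak{M}_3^2}{2C}-mgz\gamma_3\right)=2Amgz(1-\gamma_3), $$
while the third equation becomes $M_1\gamma_1+M_2\gamma_2=\mathfrak{M}_3(1-\gamma_3)$; the sphere equation is unchanged.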
We introduce $u,\varphi,v,\theta$ which satisfy
$M_1=u\cos\varphi,\,\,M_2=u\sin\varphi,\,\,\gamma_1=v\cos\theta,\,\,\gamma_2=v\sin\theta$.
The algebraic system for $u,\varphi,v,\theta$ and $\gamma_3$ is
\begin{equation}\label{algebraic-2}
u^2=2Amgz(1-\gamma_3),\,\,v^2=1-\gamma_3^2,
\,\,uv\cos(\theta-\varphi)=\mathfrak{M}_3(1-\gamma_3).
\end{equation}
\begin{lem}\label{lema}
The solution $(0,0,\mathfrak{M}_{3},0,0,1)$ of the system \eqref{algebraic} is isolated in the set of the solutions if and only if
$\mathfrak{M}_{3}^2\geq 4Amgz$.
\end{lem}
\begin{proof}
Let $(M_1,M_2,\mathfrak{M}_3,\gamma_1,\gamma_2,\gamma_3)$ be a solution of \eqref{algebraic} in the ball centered at $(0,0,\mathfrak{M}_{3},0,0,1)$ with radius $R<1$; then $0<\gamma_3\leq 1$ (see \eqref{algebraic-2}). If $\gamma_3=1$ then $u=v=0$ and consequently the solution is $(0,0,\mathfrak{M}_{3},0,0,1)$. If $0<\gamma_3<1$ then, by using \eqref{algebraic-2}, we have
$$\frac{|\mathfrak{M}_3|}{\sqrt{2Amgz}}=\sqrt{1+\gamma_3}\cdot |\cos(\varphi-\theta)|<\sqrt{2}.$$
We deduce that a necessary condition for a solution of \eqref{algebraic} other than $(0,0,\mathfrak{M}_{3},0,0,1)$ to be situated in the ball centered at $(0,0,\mathfrak{M}_{3},0,0,1)$ with radius $R<1$ is $\mathfrak{M}_{3}^2< 4Amgz$. Consequently, if $\mathfrak{M}_{3}^2\geq 4Amgz$, then $(0,0,\mathfrak{M}_{3},0,0,1)$ is isolated in the set of solutions of \eqref{algebraic}.
We now suppose that $\mathfrak{M}_{3}^2< 4Amgz$ and consider a sequence $((\gamma_{3})_n)$ which satisfies $0<(\gamma_{3})_n<1$, $(\gamma_{3})_n\rightarrow_{n\rightarrow\infty}1$ and $\mathfrak{M}_{3}^2\leq 2Amgz(1+(\gamma_{3})_n)$; the last condition holds once $(\gamma_{3})_n$ is sufficiently close to $1$, because $\mathfrak{M}_{3}^2<4Amgz$. Then there exist sequences $(\varphi_n)$ and $(\theta_n)$ such that $\sqrt{1+(\gamma_{3})_n}\cos(\varphi_n-\theta_n)=\frac{\mathfrak{M}_3}{\sqrt{2Amgz}}$. We obtain a sequence $(u_n,\varphi_n,v_n,\theta_n,(\gamma_{3})_n)$ of solutions of \eqref{algebraic-2} with $0<u_n \rightarrow_{n\rightarrow\infty}0$ and $0<v_n \rightarrow_{n\rightarrow\infty}0$.
Consequently, we obtain a nonconstant sequence $((M_{1})_n,(M_{2})_n,\mathfrak{M}_{3},(\gamma_{1})_n,(\gamma_{2})_n,(\gamma_{3})_n)$ of solutions of \eqref{algebraic} which converges to $(0,0,\mathfrak{M}_{3},0,0,1)$. We deduce that the solution $(0,0,\mathfrak{M}_{3},0,0,1)$ is not isolated in the set of solutions of \eqref{algebraic}.
\end{proof}
Using Lemma \ref{lema}, Theorems \ref{implication-stability} and \ref{stability}, and the linearization method (see \cite{holm-marsden-ratiu-weinstein} and \cite{marsden-ratiu}) we obtain the following results.
\begin{thm} Let $(0,0,\mathfrak{M}_{3},0,0,1)$ be a vertical uniform rotation of the system \eqref{top}.
\begin{itemize}
\item [(i)] The vertical uniform rotation is stable with respect to the set of conserved quantities $\{H,C_1,C_2,F\}$ if and only if $\mathfrak{M}_{3}^2\geq 4Amgz$.
\item [(ii)] The inequality $\mathfrak{M}_{3}^2\geq 4Amgz$ is a necessary and sufficient condition for the Lyapunov stability of the vertical uniform rotation.
\end{itemize}
\end{thm}
\begin{rem}
If we use the angular velocity vector $\vec{\omega}$ and the unit vector $\vec{\gamma}$ of the direction of the gravitational field to describe the rotation of the top, then the necessary and sufficient condition for the Lyapunov stability of the vertical uniform rotation $(0,0,\omega,0,0,1)$ is $C^2\omega^2\geq 4Amgz$.
\end{rem}
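For a concrete reading of this condition, note that it can be restated as a lower bound on the spin rate: the vertical uniform rotation is Lyapunov stable if and only if
$$ |\omega|\geq \omega_c:=\frac{2\sqrt{Amgz}}{C}, $$
which is the classical condition for a ``sleeping'' top: spun at least at the rate $\omega_c$, the top remains stably upright, while below $\omega_c$ the upright rotation is unstable.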
\medskip
\noindent {\bf Acknowledgments.} This work was supported by a grant of the Romanian National Authority for
Scientific Research, CNCS UEFISCDI, project number PN-II-RU-TE-2011-3-0006.
\section{Introduction}
\label{sec:intro}
In this paper we study the critical focusing wave equation
\begin{equation}
\label{eq:mainintro}
(-\partial_t^2 +\Delta)u(t,x)+u(t,x)^5=0
\end{equation}
for $u: I\times \mathbb{R}^3\to\mathbb{R}$, $I\subset \mathbb{R}$ an interval.
It is well-known that the Cauchy problem for Eq.~\eqref{eq:mainintro}
is well-posed for data in the energy space $\dot{H}^1\times L^2(\mathbb{R}^3)$, see
e.g.~\cite{LS95}, \cite{Sog08}.
Furthermore, Eq.~\eqref{eq:mainintro} admits a static solution $W$, the \emph{ground state
soliton} given by $W(x)=(1+\frac13 |x|^2)^{-\frac12}$, which indicates the presence of interesting
dynamics.
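Indeed, a direct computation confirms that $W$ is a static solution: for radial functions $\Delta=\partial_r^2+\frac{2}{r}\partial_r$, and with $W(r)=(1+\frac13 r^2)^{-\frac12}$ one finds
\[ \Delta W=-\big(1+\tfrac13 r^2\big)^{-\frac52}=-W^5, \]
so that Eq.~\eqref{eq:mainintro} holds with $\partial_t^2 W=0$.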
Our main result is the following.
\begin{theorem}
\label{thm:main}
There exists an $\varepsilon_0>0$ such that for any $\delta>0$ and $\mu \in \mathbb{R}$ with $|\mu|\leq \varepsilon_0$ there exists
a $t_0\geq 1$ and an energy class solution $u: [t_0,\infty)\times \mathbb{R}^3\to \mathbb{R}$ of Eq.~\eqref{eq:mainintro}
of the form
\[ u(t,x)=t^{\frac{\mu}{2}}W(t^\mu x)+\eta(t,x),\quad |x|\leq t,\: t\geq t_0 \]
and
\begin{align*}
\|\partial_t u(t,\cdot)\|_{L^2(\mathbb{R}^3\backslash B_t)}+\|\nabla u(t,\cdot)\|_{L^2(\mathbb{R}^3\backslash B_t)}&\leq \delta,
\\
\|\partial_t \eta(t,\cdot)\|_{L^2(B_t)}+\|\nabla \eta(t,\cdot)\|_{L^2(B_t)}&\leq \delta
\end{align*}
for all $t\geq t_0$ where $B_t:=\{x\in\mathbb{R}^3: |x|<t\}$.
\end{theorem}
The Cauchy problem for Eq.~\eqref{eq:mainintro} has attracted a lot of interest in the recent past
and we briefly review the most important contributions.
Eq.~\eqref{eq:mainintro} is invariant under the scaling transformation
\begin{equation}
\label{eq:scaling}
u(t,x)\mapsto u^\lambda(t,x):=\lambda^\frac12 u(\lambda t, \lambda x),\quad \lambda>0
\end{equation}
and its conserved energy
\[ E(u(t,\cdot), u_t(t,\cdot))=\tfrac12 \|(u(t,\cdot),u_t(t,\cdot))\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}^2
-\tfrac16 \|u(t,\cdot)\|_{L^6(\mathbb{R}^3)}^6 \]
satisfies $E(u^\lambda(t/\lambda,\cdot), u_t^\lambda(t/\lambda, \cdot))=E(u(t,\cdot),u_t(t,\cdot))$
which is why Eq.~\eqref{eq:mainintro} is called \emph{energy critical}.
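For instance, the invariance of the first term of the energy under \eqref{eq:scaling} is a one-line computation: $\nabla u^\lambda(t,x)=\lambda^{\frac32}(\nabla u)(\lambda t,\lambda x)$, whence
\[ \|\nabla u^\lambda(t/\lambda,\cdot)\|_{L^2(\mathbb{R}^3)}^2=\lambda^{3}\int_{\mathbb{R}^3}|(\nabla u)(t,\lambda x)|^2\,dx=\|\nabla u(t,\cdot)\|_{L^2(\mathbb{R}^3)}^2, \]
and the terms $\|u_t\|_{L^2}^2$ and $\|u\|_{L^6}^6$ behave in the same way.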
Historically, the investigation of the global Cauchy problem for
energy critical wave equations started with the \emph{defocusing case},
\[ (-\partial_t^2+\Delta)u(t,x)-u(t,x)^5=0, \]
where the sign of the nonlinearity is reversed compared to Eq.~\eqref{eq:mainintro}.
After the pioneering works \cite{Rau81} and \cite{Pec84}, the development culminated in a proof of global existence
and scattering for arbitrary data \cite{Str88}, \cite{Gri90}, \cite{GSV92}, \cite{SS93}, \cite{SS94},
\cite{Kap94}, \cite{BS98}, see also \cite{Tao06}.
However, the dynamics in the focusing case are much more complicated.
For instance, it is well-known that there exist solutions with compactly supported smooth initial data which
blow up in finite time.
This is most easily seen by observing that
\[ u(t,x)=(\tfrac34)^\frac14 (1-t)^{-\frac12} \]
is an explicit solution which, by finite speed of propagation, can be used to construct a blowup
solution of the aforementioned type.
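This is verified in one line: for $u(t,x)=(\tfrac34)^{\frac14}(1-t)^{-\frac12}$ we have $\Delta u=0$ and
\[ \partial_t^2 u(t,x)=\tfrac34(\tfrac34)^{\frac14}(1-t)^{-\frac52}=(\tfrac34)^{\frac54}(1-t)^{-\frac52}=u(t,x)^5, \]
so that $\Box u+u^5=0$.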
This kind of breakdown is referred to as \emph{ODE blowup} and it is conjectured to comprise the
``generic'' blowup
scenario \cite{BCT04}.
We remark in passing that (parts of) this conjecture have been proved for the \emph{subcritical case}
\[ (-\partial_t^2+\Delta ) u+u|u|^{p-1}=0,\quad p\in (1,3], \]
see \cite{MZ03}, \cite{MZ05}, \cite{DS12},
but for $p>3$ the problem is largely open (see, however, \cite{DonSch12a}).
Another (less explicit but classical) argument to obtain finite time blowup for focusing wave equations
is due to Levine \cite{Lev74}.
Recently, Krieger, Schlag and Tataru \cite{KST09} constructed, in the energy critical case $p = 5$, more ``exotic'' blowup solutions
of the form \footnote{The existence of ground state solitons, i.e., positive static solutions with finite
energy such as $W$ requires $p=5$ in spatial dimension 3, cf.~\cite{GS81}, \cite{JPY93}.}
\[ u(t,x)=(1-t)^{-\frac12(1+\nu)} W((1-t)^{-1-\nu}x)+\eta(t,x),\quad |x|\leq 1-t \]
where $\nu>\frac12$ can be prescribed arbitrarily and
$\eta$ is small in a suitable sense.
As a matter of fact, the proof of Theorem \ref{thm:main} makes extensive use of the techniques
developed in \cite{KST09}, see also \cite{KST08} and \cite{KST09a}
for analogous results in the case of critical wave maps and Yang-Mills equations.
In this respect we also mention another construction of blowup solutions for the critical wave equation by
Hillairet and Rapha\"el \cite{HR10}, albeit
for the higher dimensional case $\mathbb{R}^{1+4}$.
Furthermore, Duyckaerts, Kenig and Merle \cite{DKM11}, \cite{DKM12}
showed that any type II blowup solution \footnote{
A type II blowup solution stays bounded in the energy space. The solutions constructed in
\cite{KST09} are of this type.}
which satisfies a suitable smallness condition decomposes
into a rescaled ground state soliton plus a small remainder.
Apart from the construction of blowup solutions, it is of interest to obtain conditions on the initial
data under which the solution exists globally.
As a consequence of Strichartz estimates it is relatively easy to establish global existence
and scattering for data with small energy, see \cite{Pec84}, \cite{Sog08}.
However, for energies close to the ground state the situation becomes much more involved.
Krieger and Schlag \cite{KS07} proved the existence of a small codimension one manifold in the
space of initial data, containing $(W,0)$, which leads to solutions of the form
\[ u(t,x)=\lambda(t)^\frac12 W(\lambda(t)x)+\eta(t,x) \]
where $\lambda(t)\to a>0$ as $t\to \infty$ and $\eta$ scatters like a free wave.
In other words, the solutions arising from data on this manifold exist globally
and scatter to a rescaling of the ground state soliton, see also \cite{Spa01} for numerical
work in this direction.
A different line of investigation was pursued by Kenig and Merle \cite{KM08} who
established the following celebrated dichotomy.
\begin{theorem}[Kenig-Merle \cite{KM08}]
\label{thm:KM}
Let $u$ be an energy class solution to Eq.~\eqref{eq:mainintro} with
\[ E(u(0,\cdot),u_t(0,\cdot))<E(W,0). \]
\begin{itemize}
\item
If $\|u(0,\cdot)\|_{\dot{H}^1(\mathbb{R}^3)}<\|W\|_{\dot{H}^1(\mathbb{R}^3)}$ then the solution $u(t,x)$ exists
for all $t\in\mathbb{R}$ and scatters like a free wave as $t\to \pm \infty$.
\item If
$\|u(0,\cdot)\|_{\dot{H}^1(\mathbb{R}^3)}>\|W\|_{\dot{H}^1(\mathbb{R}^3)}$ then the solution $u(t,x)$ blows up
in finite time in both temporal directions.
\end{itemize}
\end{theorem}
Theorem \ref{thm:KM} was extended by Duyckaerts and Merle \cite{DM07} to include the case
$E(u(0,\cdot),u_t(0,\cdot))=E(W,0)$ which, in addition to the possibilities of Theorem \ref{thm:KM}, entails
solutions which scatter towards (a rescaling of) $W$.
We also refer the reader to the recent works by Krieger, Nakanishi and Schlag \cite{KNS10}, \cite{KNS11} where they
consider
data with energies slightly above the ground state.
Based on the results in \cite{KST09}, \cite{KS07}, \cite{KM08} and \cite{DM07} it seemed plausible
to expect a strong version of the \emph{soliton resolution conjecture} to hold.
Roughly speaking, this conjecture states that the long time evolution
splits into a finite sum of solitons plus radiation, see \cite{Sof06}.
\begin{conjecture}[Strong soliton resolution at energies close to the ground state]
\label{conj:solres}
Any radial energy class solution of Eq.~\eqref{eq:mainintro} with energy close to $E(W,0)$
either blows up in finite time or scatters to zero like a free wave or scatters towards a rescaling of $W$.
\end{conjecture}
Our Theorem \ref{thm:main}, however, shows that Conjecture \ref{conj:solres} is wrong.
In addition to the already known dynamics
\begin{itemize}
\item $\lambda(t)\to \infty$ as $t\to 1-$ (exotic blowup \cite{KST09})
\item $\lambda(t)\to a>0$ as $t\to \infty$ (scattering towards (a rescaling of) $W$ \cite{DM07},
\cite{KS07})
\end{itemize}
for solutions of the form
$u(t,x)=\lambda(t)^\frac12 W(\lambda(t)x)+\eta(t,x)$ with $\eta$ small,
our result adds the two new possibilities
\begin{itemize}
\item $\lambda(t)\to 0$ as $t\to \infty$ (``vanishing'')
\item $\lambda(t)\to \infty$ as $t\to\infty$ (``blowup at infinity'')
\end{itemize}
either of which contradicts Conjecture \ref{conj:solres}.
Furthermore, there exists a continuum of rates at which the blowup (or vanishing) occurs.
It has to be remarked that the blowup does not take place in the energy norm (which stays bounded)
but in a type II fashion (i.e., only higher order norms blow up).
We also note that, although Conjecture \ref{conj:solres} in this sharp form does not hold,
a weaker version in the radial context was proved after the submission of the present paper \cite{DuyKenMer12b}.
The result in \cite{DuyKenMer12b} shows in particular that the solutions we
construct are in a sense the only global solutions which do not scatter.
We also remark in passing that we expect the solutions of Theorem \ref{thm:main} to be smooth and therefore,
unlike in the case of the exotic blowup in \cite{KST09}, there is no conjectured ``quantization''
of blowup rates as one passes to smooth solutions.
The intuitive reason for this is that we do not encounter a singularity at the lightcone as in \cite{KST09}
since we cut off our approximate solutions in such a way that they are supported
inside the smaller cone $r\leq t-c$ for some constant $c$.
At the moment, however, we
cannot rigorously prove the smoothness of the full solutions since parts of our construction rely on a ``soft'' argument which only
yields energy class regularity.
Furthermore, it is worth mentioning that the technique used to prove Theorem~\ref{thm:main} appears to have much
wider applicability for similar critical nonlinear problems, as did the construction in \cite{KST09}.
We also note that the restrictions on $\mu$ in our Theorem~\ref{thm:main} seem to be in part
technical and nonessential. Thus, it appears to be natural to expect at least the range
$-1<\mu<\varepsilon_0$ to be allowable, for suitable $\varepsilon_0>0$.
While this paper was being written up, T.~Duyckaerts informed the authors of the following result, obtained jointly with C. Kenig and F. Merle,
which nicely combines with our Theorem \ref{thm:main} in a similar way as
\cite{DKM11} and \cite{DKM12} are related to \cite{KST09}.
\begin{theorem}[Duyckaerts-Kenig-Merle \cite{DKM12b}, private communication by T. Duyckaerts]
If $u: [0,\infty)\times \mathbb{R}^3\to\mathbb{R}$ is a radial energy class solution of Eq.~\eqref{eq:mainintro} with
\[ \limsup_{t\to\infty}\left (\|\partial_t u(t,\cdot)\|_{L^2(\mathbb{R}^3)}+\|\nabla u(t,\cdot)\|_{L^2(\mathbb{R}^3)} \right )<
2 \|\nabla W\|_{L^2(\mathbb{R}^3)} \]
then (up to a change of sign)
\begin{align*}
u(t,x)&=\lambda(t)^\frac12 W(\lambda(t)x)+v(t,x)+o_{\dot{H}^1(\mathbb{R}^3)}(1) \\
\partial_t u(t,x)&=\partial_t v(t,x)+o_{L^2(\mathbb{R}^3)}(1)
\end{align*}
with $v$ a free wave and $t\lambda(t)\to \infty$ as $t \to \infty$.
\end{theorem}
Theorem \ref{thm:main} should also be contrasted to previous works on other dispersive systems such as the
nonlinear Schr\"odinger equation.
We cannot do justice to the vast literature on this subject but
as an example we mention Tao's result \cite{Tao04} on the cubic focusing Schr\"odinger equation
in $\mathbb{R}^{1+3}$
which states that radial solutions which exist globally decouple into a smooth function localized near the origin, a
radiative term, and an error that goes to zero as $t \to \infty$.
Unlike Theorem \ref{thm:main}, this result is in concordance with the soliton resolution conjecture.
We refer the reader to
\cite{Sof06} and the references therein for more positive results in this direction.
Furthermore, the only system (to our knowledge) of ``wave type'' (i.e.,
either nonlinear wave or Schr\"odinger equation) for which nonscattering solutions similar to ours
are known is the $L^2$-critical nonlinear Schr\"odinger equation in $\mathbb{R}^{1+1}$.
For this system nonscattering solutions can be constructed by combining
the ``log log'' blowup of \cite{Per01} with the pseudo-conformal symmetry (the ``log log'' blowup
exists in other dimensions as well, see \cite{MerRap04}, \cite{MerRap05}, \cite{MerRap06}).
However, it is evident that the mechanism which furnishes these solutions is completely different and not related
to the situation here.
\subsection{A roadmap of the proof}
We give a brief overview of the proof of Theorem \ref{thm:main} without going into technical details.
As already mentioned, the construction is in parts based on the techniques developed in \cite{KST08}, \cite{KST09},
\cite{KST09a}.
However, in order to deal with the present situation, the method has to be modified and extended considerably.
In the following we restrict ourselves to radial functions and the symbols $r$ and $|x|$ are used
interchangeably.
By a slight abuse of notation we also write $u(t,r)$ instead of $u(t,x)$ meaning that $u(t,\cdot)$
is a radial function.
Furthermore, throughout this paper we set
\footnote{We use this
convention for ``historical'' reasons, cf.~\cite{KST09}.}
$\lambda(t):=t^{-(1-\nu)}$ where $\nu$
is assumed to be close to $1$.
Roughly speaking, the proof splits into three main parts which we now describe in more detail.
\begin{enumerate}
\item Construction of ``elliptic profile modifiers''.
An obvious idea is to
insert the naive ansatz $u(t,r)=\lambda(t)^\frac12 W(\lambda(t)r)+\eta(t,r)$ into Eq.~\eqref{eq:mainintro}
and to derive an equation for $\eta$.
By doing so, however, one produces an error $\partial_t^2 [\lambda(t)^\frac12 W(\lambda(t)r)]$ which
decays roughly like
$t^{-2}$ and this turns out to be insufficient.
Consequently, we first modify the profile $\lambda(t)^\frac12 W(\lambda(t)r)$ by a nonperturbative
procedure.
This is done in two steps where we solve suitable (linear) approximations to
Eq.~\eqref{eq:mainintro} and thereby improve the error at the center and, in the second step,
near the lightcone $r \approx t$.
This does not yield an actual solution but a function $u_2$ which solves Eq.~\eqref{eq:mainintro}
only up to an error.
However, this error now decays approximately like $t^{-4}$ and thus, we have gained two powers which
is sufficient to proceed.
This is in contrast to the analogous procedure in \cite{KST09} where a very large number of
modifications are added to the ground state. In our situation, it turns out that additional
modifications do not improve the error further.
The improvement in decay comes at the expense of differentiability at the lightcone which has to be accounted
for by using suitable cut-offs. Thus, in this first stage of the construction, we only obtain an
approximate solution in a smaller forward light cone (i.e., of the form $|x|\leq t-c$ and hence
strictly contained inside the standard light cone $|x|\leq t$), in contrast to the procedure in \cite{KST09}.
\item In a second step we insert the ansatz $u=u_2+\varepsilon$ into Eq.~\eqref{eq:mainintro}
and derive an equation for $\varepsilon$ which is of the form
\begin{equation}
\label{eq:epsintro}
(-\partial_t^2+\Delta) \varepsilon+5W_{\lambda(t)}^4\varepsilon=\mbox{nonlinear terms+error}
\end{equation}
where $W_\lambda(x):=\lambda^\frac12 W(\lambda x)$.
Due to the aforementioned lack of smoothness, the right-hand side of the equation is only defined in a forward lightcone
and for the moment we restrict ourselves to this region.
In order to obtain a time-independent potential on the left-hand side we use
$R:=\lambda(t)r$ as a spatial coordinate and, with an appropriate new time coordinate $\tau$,
Eq.~\eqref{eq:epsintro} transforms into
\begin{equation}
\label{eq:eps2intro}
\mathcal D^2 \varepsilon+c\tau^{-1}\mathcal{D}\varepsilon+(-\Delta+V)\varepsilon=\mbox{nonlinear terms+error}
\end{equation}
where $\mathcal{D}$ is a first order transport-type operator and $V=-5W^4$.
In order to solve Eq.~\eqref{eq:eps2intro} we apply the ``distorted Fourier transform'' relative
to the self-adjoint operator $-\Delta+V$.
This requires a careful spectral analysis which is only feasible since we are in the radial case.
The existence of a zero energy resonance plays a prominent role here.
As a result we obtain a transport-type equation for the Fourier coefficients which is then solved
by the method of characteristics combined with a fixed point argument.
The treatment of the error terms on the right-hand side of the transport equation is delicate
and requires a good amount of harmonic analysis. In particular, the functional framework employed differs from that in \cite{KST09}.
\item The last step consists of a partially ``soft'' argument which is used to extend the solution
to the whole space and, second, to extract suitable initial data that lead to the desired solution
(recall that we have solved the equation in a forward lightcone where the Cauchy problem
is not well-posed).
For this we rely on a concentration-compactness approach based on the celebrated
Bahouri-G\'erard decomposition \cite{BG99}.
\end{enumerate}
\subsection{Notation}
We write $\mathbb{N}$ for the natural numbers $\{1,2,3,\dots\}$ and set $\mathbb{N}_0:=\{0\}\cup \mathbb{N}$.
We use standard Lebesgue and (fractional) Sobolev spaces denoted by
$L^p(\Omega)$, $W^{s,p}(\Omega)$ and $H^s:=W^{s,2}$ with $\Omega \subset \mathbb{R}^d$.
Our sign convention for the wave operator is $\Box:=-\partial_t^2+\Delta$.
Unless otherwise stated, the letter $C$ (possibly with indices) denotes a positive constant which may change from line to line.
As usual, we write $a\lesssim b$ if $a\leq Cb$ and if the constant $C$ has to be sufficiently large,
we indicate this by $a \ll b$. Similarly, we use $a \gtrsim b$ and $a\simeq b$ means $a\lesssim b$ and $b\lesssim a$.
Furthermore, we reiterate that $\lambda(t):=t^{-(1-\nu)}$ with $\nu$ a real number close to $1$.
Throughout the paper, $\nu$ is supposed to be fixed and sufficiently close to $1$.
Note also that Theorem \ref{thm:main} is trivial if $\nu=1$ since in this case $u(t,x)=W(x)$.
Thus, whenever convenient we exclude the case $\nu=1$ without further notice.
For $x\in\mathbb{R}^d$ we set $\langle x \rangle:=\sqrt{1+|x|^2}$ and write $O(f(x))$ to denote a generic
\emph{real-valued} function which satisfies $|O(f(x))|\lesssim |f(x)|$ in a domain of $x$ that
is either specified explicitly or follows from the context. If the function attains
complex values as well we indicate this by a subscript, e.g.~$O_\mathbb{C}(x)$.
An $O$-term $O(x^\gamma)$, where $x,\gamma \in \mathbb{R}$, is said to behave like a symbol if
$|\partial_x^k O(x^\gamma)|\leq C_k |x|^{\gamma-k}$ for all $k\in\mathbb{N}$.
A similar definition applies to symbol behavior of $O(\langle x \rangle^\gamma)$ with $|\cdot|$
substituted by $\langle \cdot \rangle$.
\section{Construction of an approximate solution}
\label{sec:approx}
Our intention is to construct a solution $u$ of the form $u(t,r)=\lambda(t)^\frac12
W(\lambda(t)r)+\eta(t,r)$ with $\lambda(t)=t^{-(1-\nu)}$ where $\nu$ is sufficiently close
to $1$ and $\eta$ is small in a suitable sense.
We first improve the approximate solution $W_{\lambda(t)}(r):=\lambda(t)^\frac12 W(\lambda(t)r)$ by successively adding
two correction terms $v_0$ and $v_1$.
These corrections are obtained by approximately solving the equation in a way we describe in
the following.
\subsection{Improvement at the center}
We set $u_0(t,r):=W_{\lambda(t)}(r)$ and define the first error $e_0$ by
$$ e_0:=\Box u_0+u_0^5=-\partial_t^2 u_0 $$
with $\Box=-\partial_t^2+\Delta$.
\begin{lemma}
\label{lem:e0}
The error $e_0$ is of the form
$$ t^2 e_0(t,r)=c_\nu \lambda(t)^\frac12 \langle \lambda(t)r\rangle^{-1}+t^2 e_0^*(t,r) $$
where $c_\nu$ is a real constant and $e_0^*$ satisfies the bounds
$$ |\partial_t^\ell \partial_r^k
t^2 e_0^*(t,r)|\leq C_{k,\ell} \lambda(t)^{\frac12+k+\ell} [\lambda(t)t]^{-\ell}
\langle \lambda(t)r\rangle^{-3-k} $$
for all $t \geq t_0>0$, $r \geq 0$ and $k,\ell \in \mathbb{N}_0$.
In addition, we have
$$\partial_r^{2k+1}e_0(t,r)|_{r=0}=\partial_r^{2k+1}e_0^*(t,r)|_{r=0}=0. $$
\end{lemma}
\begin{proof}
Note first that $W(R)=\sqrt{3}\langle R \rangle^{-1}+W^*(R)$ where
$|\partial_R^k W^*(R)|\leq C_k \langle R \rangle^{-3-k}$
for all $R\geq 0$ and $k \in \mathbb{N}_0$.
Consequently, the claim follows from
$$e_0(t,r)=-\partial_t^2 u_0(t,r)=-\partial_t^2 [\lambda(t)^\frac12 W(\lambda(t)r)]$$
by the chain rule.
\end{proof}
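For the reader's convenience we record the computation behind the leading term. Since $\frac{\lambda'(t)}{\lambda(t)}=-(1-\nu)t^{-1}$, the chain rule gives
\[ \partial_t[\lambda(t)^{\frac12}W(\lambda(t)r)]=\tfrac{\lambda'(t)}{\lambda(t)}\,\lambda(t)^{\frac12}\big[\tfrac12 W(R)+RW'(R)\big]\big|_{R=\lambda(t)r}, \]
and a second application of $\partial_t$ produces $t^2\partial_t^2 u_0=\lambda(t)^{\frac12}\,O(\langle\lambda(t)r\rangle^{-1})$, since $W(R)$, $RW'(R)$ and $R^2W''(R)$ are all $O(\langle R\rangle^{-1})$; inserting the splitting $W(R)=\sqrt{3}\langle R\rangle^{-1}+W^*(R)$ then isolates the stated term $c_\nu\lambda(t)^{\frac12}\langle\lambda(t)r\rangle^{-1}$.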
Note that the decay of $e_0(t,r)$ near the lightcone $r=t$ is better than at the
center.
Consequently, we first attempt to improve the approximation near $r=0$.
Ideally, we would like to add a correction $v_0$ such that $u_1:=u_0+v_0$ becomes an exact solution,
i.e.,
$$ 0=\Box u_1+u_1^5=\Box v_0+5 u_0^4 v_0+N(u_0,v_0)+e_0 $$
where
$$ N(u_0,v_0):=10 u_0^3 v_0^2+10 u_0^2 v_0^3 + 5 u_0 v_0^4 +v_0^5. $$
Near the center $r=0$ we expect the time derivative to be less important and therefore we neglect
it altogether and also drop the nonlinearity to obtain the approximate equation
\begin{equation}
\label{def:v0}
\Delta v_0+5 u_0^4 v_0=-e_0
\end{equation}
and the next error $e_1$ is defined as
\begin{equation}
\label{def:e1}
e_1:=\Box u_1+u_1^5.
\end{equation}
We solve Eq.~\eqref{def:v0} for $v_0$ and subsequently show that $e_1$ decays faster than $e_0$.
\begin{lemma}
\label{lem:e1}
There exists a function $v_0$ satisfying Eq.~\eqref{def:v0} such that
$$ v_0(t,r)=c_\nu \lambda(t)^{\frac12}[\lambda(t)t]^{-2}\lambda(t)r+d_\nu \lambda(t)^\frac12 [\lambda(t)t]^{-2}+v_0^*(t,r) $$
where $c_\nu, d_\nu$ are real constants and
$$ |\partial_t^\ell \partial_r^k v_0^*(t,r)|\leq C_{k,\ell}
\lambda(t)^{\frac12+k+\ell}
[\lambda(t)t]^{-2-\ell}\langle \lambda(t) r \rangle^{-1-k}\left \langle \log \langle \lambda(t)r \rangle \right \rangle$$
for all $t\geq t_0>0$, $r \geq 0$ and $k,\ell \in \mathbb{N}_0$.
As a consequence, the error $e_1$ defined by Eq.~\eqref{def:e1} is of the form
$$ t^2 e_1(t,r)=c_\nu \lambda(t)^{\frac12}[\lambda(t)t]^{-2}\lambda(t)r+d_\nu \lambda(t)^\frac12 [\lambda(t)t]^{-2}+t^2 e_1^*(t,r) $$
with (different) real constants $c_\nu, d_\nu$ and $e_1^*$ satisfies the bounds
$$ |\partial_t^\ell \partial_r^k t^2 e_1^*(t,r)|\leq C_{k,\ell}\lambda(t)^{\frac12+k+\ell}
[\lambda(t)t]^{-2+\epsilon-\ell}
\langle \lambda(t)r\rangle^{-1-k} $$
for all $t\geq t_0>0$, $0\leq r\lesssim t$, any (fixed) $\epsilon>0$, and $k,\ell \in \mathbb{N}_0$.
In addition, we have $\partial_r^{2k+1}w(t,r)|_{r=0}=0$ where $w \in \{v_0,v_0^*,e_1,e_1^*\}$.
\end{lemma}
\begin{proof}
Setting $\tilde{v}_0(t,R):=Rv_0(t,\lambda(t)^{-1}R)$ and $R:=\lambda(t)r$, Eq.~\eqref{def:v0} reads
\begin{equation}
\label{eq:tildev0}
\partial_R^2 \tilde{v}_0(t,R)+5 W(R)^4 \tilde{v}_0(t,R)=-\lambda(t)^{-2}Re_0(t,\lambda(t)^{-1}R)
\end{equation}
which is an inhomogeneous ODE in $R$, in which $t$ can be treated as a parameter.
Explicitly, the potential reads
$$ 5W(R)^4=\frac{5}{(1+\frac{R^2}{3} )^2} $$
and the homogeneous equation has the fundamental system $\{\phi_0,\theta_0\}$ given by
\begin{align*}
\phi_0(R)&=R(1-\tfrac{R^2}{3})(1+\tfrac{R^2}{3})^{-\frac32}=-\sqrt{3}+\phi_0^*(R) \\
\theta_0(R)&=(1+\tfrac{R^2}{3})^{-\frac32}(1-2 R^2+\tfrac{R^4}{9})=\tfrac{1}{\sqrt{3}}R+\theta_0^*(R)
\end{align*}
where
$$ |\partial_R^k \phi_0^*(R)|\leq C_k \langle R \rangle^{-2-k},\quad
|\partial_R^k \theta_0^*(R)|\leq C_k \langle R \rangle^{-1-k} $$
for all $R\geq 0$ and $k\in \mathbb{N}_0$.
Furthermore, for the Wronskian we
obtain $W(\theta_0,\phi_0)=\theta_0 \phi_0'-\theta_0' \phi_0=1$.
According to the variation of constants formula, a solution $\tilde{v}_0$ of Eq.~\eqref{eq:tildev0}
is therefore given by
\begin{align*}
\tilde{v}_0(t,R)=&-\lambda(t)^{-2}\phi_0(R)\int_0^R \theta_0(R')R'e_0(t,\lambda(t)^{-1}R')dR' \\
&+\lambda(t)^{-2}\theta_0(R)\int_0^R \phi_0(R')R'e_0(t,\lambda(t)^{-1}R')dR'
\end{align*}
and by using Lemma \ref{lem:e0}
we obtain the claim concerning $v_0$.
The assertion for $e_1$ now follows from $e_1=-\partial_t^2 v_0+N(u_0,v_0)$.
We have
$$ e_1(t,r)=c_\nu \lambda(t)^\frac12 [\lambda(t)t]^{-2}t^{-2}\lambda(t)r+d_\nu \lambda(t)^\frac12[\lambda(t)t]^{-2}t^{-2}
-\partial_t^2 v_0^*(t,r)
+N(u_0,v_0) $$
and $|\partial_t^2 v_0^*(t,r)|\lesssim \lambda(t)^{\frac12}[\lambda(t)t]^{-2+\epsilon}t^{-2}
\langle \lambda(t)r\rangle^{-1}$ where the $\epsilon$-loss comes from the logarithm.
The nonlinear contributions are of higher order and belong to $e_1^*$ since
$$ |N(u_0,v_0)(t,r)|\lesssim \lambda(t)^\frac52 [\lambda(t)t]^{-4}\langle \lambda(t)r\rangle^{-1} $$
where we use $[\lambda(t)t]^{-1}\lesssim \langle \lambda(t)r\rangle^{-1}$ for $0\leq r\lesssim t$.
The derivative bounds follow from the corresponding bounds on $v_0^*$ by the Leibniz rule.
\end{proof}
\subsection{Improvement near the lightcone}
We have to go one step further and continue improving our approximate solution.
Thus, we add another correction $v_1$ to $u_1$ and set $u_2:=u_1+v_1=u_0+v_0+v_1$.
This yields
\begin{align*}
\Box u_2+u_2^5&=\Box v_1+5 u_1^4 v_1+N(u_1,v_1)+\Box u_1+u_1^5 \\
&=\Box v_1+5u_1^4 v_1+N(u_1,v_1)+e_1.
\end{align*}
This time we intend to improve the approximate solution near the lightcone $r=t$ since the decay of the error
$e_1$ near the center is already good enough.
Of course, near the lightcone we cannot ignore the temporal derivative but thanks to
the decay of
$u_1(t,t)$ it turns out that we may safely neglect the
potential and the
nonlinearity.
Furthermore, we also ignore the higher order error $e_1^*$ which already decays fast enough.
Thus, we arrive at
the approximate equation
\begin{equation}
\label{eq:defv1}
\Box v_1(t,r)=-c_\nu \lambda(t)^{-\frac32}t^{-4} \lambda(t)r-d_\nu \lambda(t)^{-\frac32}t^{-4}
\end{equation}
and the next error $e_2$ is given by
\begin{equation}
\label{eq:defe2}
e_2:=\Box u_2+u_2^5.
\end{equation}
Note carefully in Lemma \ref{lem:v1} below that in order to gain decay in $t$ we sacrifice differentiability at
the lightcone.
\begin{lemma}
\label{lem:v1}
There exists a solution $v_1=v_{11}+v_{12}$ of Eq.~\eqref{eq:defv1} and a decomposition
$v_{1j}=v_{1j}^g+v_{1j}^b$, $j=1,2$, such that
\begin{align*}
|v_{11}(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-4}r^3, &
|v_{12}(t,r)|&\lesssim \lambda(t)^{-\frac32}t^{-4}r^2 & (r&\leq \tfrac12 t) \\
|v_{11}^g(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-1}, &
|v_{12}^g(t,r)|&\lesssim \lambda(t)^{-\frac32}t^{-2} & (r&\leq 2t) \\
|v_{11}^b(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-1}(1-\tfrac{r}{t})^{\frac12(1-\nu)}, &
|v_{12}^b(t,r)|&\lesssim \lambda(t)^{-\frac32}t^{-2}(1-\tfrac{r}{t})^{\frac12(1-3\nu)} &(r&<t)
\end{align*}
for all $t\geq t_0>0$ and estimates for the derivatives follow by formally differentiating these bounds.
As a consequence, the error $e_2$ as defined by Eq.~\eqref{eq:defe2}, satisfies the bound
\begin{align*}
|t^2 e_2(t,r)|&\lesssim \lambda(t)^\frac12
[\lambda(t)t]^{-2+\epsilon} t^{\frac52|1-\nu|}\langle \lambda(t)r\rangle^{-1}
\end{align*}
for $0<r<t-c$, all $t\geq t_0>c_0\geq c$ (where $c_0$ is a fixed constant), and for any (fixed) $\epsilon>0$.
Finally, we have $\partial_r^{2k}v_{11}(t,r)|_{r=0}=\partial_r^{2k+1}v_{12}(t,r)|_{r=0}=0$, $k\in \mathbb{N}_0$.
\end{lemma}
\begin{proof}
Instead of solving Eq.~\eqref{eq:defv1} directly, we set $v_1=v_{11}+v_{12}$ and consider
\[ \Box v_{11}=-c_\nu \lambda(t)^{-\frac32}t^{-4} \lambda(t)r,\quad
\Box v_{12}=-d_\nu \lambda(t)^{-\frac32}t^{-4} \]
separately.
In order to reduce these equations to ODEs, we use the self-similar coordinate $a=\frac{r}{t}$.
We start with
$$ t^2 \Box v_{11}(t,r)=-c_\nu \lambda(t)^{-\frac12} t^{-1}\tfrac{r}{t} $$
and make the self-similar ansatz
$$ v_{11}(t,r)=\lambda(t)^{-\frac12} t^{-1}\tilde{v}_{11}(\tfrac{r}{t}) $$
which yields
\begin{equation}
\label{eq:odea}
(1-a^2)[\tilde{v}_{11}''(a)+\tfrac{2}{a}\tilde{v}_{11}'(a)]-(1+\nu)a\tilde{v}_{11}'(a)
-[\tfrac12(1-\nu)-1][\tfrac12(1-\nu)-2]\tilde{v}_{11}(a)=-c_\nu a.
\end{equation}
The homogeneous equation has the fundamental system $\{\theta_\pm\}$ given by
$$ \theta_\pm(a)=\tfrac{1}{a}(1\pm a)^{\frac12(1-\nu)} $$
with the Wronskian
$$ W(\theta_+,\theta_-)(a)=\frac{\nu-1}{a^2(1-a^2)^{\frac12(1+\nu)}}. $$
Furthermore, $\psi:=\theta_--\theta_+$ is another solution of the homogeneous equation which is
smooth at $a=0$ and clearly, $W(\theta_+,\psi)=W(\theta_+,\theta_-)$.
Consequently, a solution $\tilde{v}_{11}$
of Eq.~\eqref{eq:odea} is given by
\begin{equation}
\label{eq:tildev1}
\tilde{v}_{11}(a)=\frac{c_\nu \theta_+(a)}{\nu-1}\int_0^a b^3(1-b^2)^{\frac12(\nu-1)}
\psi(b)db
-\frac{c_\nu \psi(a)}{\nu-1}\int_0^a b^3(1-b^2)^{\frac12(\nu-1)}
\theta_+(b)db
\end{equation}
and, since $\theta_+(a)\sim a^{-1}$ and $\psi(a)=(\nu-1)+O(a^2)$ as $a\to 0$, it follows that
$\tilde{v}_{11}(a)=O(a^3)$ as $a\to 0$.
Thus, we obtain $v_{11}(t,r)=v_{11}^g(t,r)+v_{11}^b(t,r)$ with
the stated bounds.
Next, we turn to the equation $t^2\Box v_{12}=-d_\nu\lambda(t)^{-\frac32}t^{-2}$.
Here, we make the ansatz $v_{12}(t,r)=\lambda(t)^{-\frac32}t^{-2}\tilde v_{12}(\frac{r}{t})$
which yields Eq.~\eqref{eq:odea} but with $\nu$ replaced by $3\nu$.
Thus, we obtain $\tilde v_{12}(a)=O(a^2)$ as $a\to 0$ and $v_{12}=v_{12}^g+v_{12}^b$ with the claimed bounds.
By construction, the error $e_2$ is given by
$$ e_2=5u_1^4 v_1+N(u_1,v_1)+e_1^* $$
and from Lemma \ref{lem:e1} we recall the bounds
\begin{align*}
|u_1(t,r)|&\leq |u_0(t,r)|+|v_0(t,r)|\lesssim \lambda(t)^{\frac12}\langle \lambda(t)r\rangle^{-1}
+\lambda(t)^{\frac12}
[\lambda(t)t]^{-2}\lambda(t)r \\
&\lesssim \lambda(t)^\frac12 \langle \lambda(t)r\rangle ^{-1}\\
|e_1^*(t,r)|&\lesssim \lambda(t)^{\frac12}
[\lambda(t)t]^{-2+\epsilon}t^{-2}\langle \lambda(t)r \rangle^{-1}.
\end{align*}
From above we have the bounds
\[ |v_1(t,r)|\lesssim \lambda(t)^\frac12 [\lambda(t)t]^{-4}\langle \lambda(t)r\rangle^3 \]
for $0\leq r <\frac12 t$ and
\[ |v_{11}^b(t,r)|\lesssim \lambda(t)^\frac12 [\lambda(t)t]^{-1}(1-\tfrac{r}{t})^{\frac12(1-\nu)}
\lesssim \lambda(t)^\frac12 [\lambda(t)t]^{-1}t^{\frac12|1-\nu|} \]
for $\frac12 t\leq r\leq t-c$.
Furthermore,
\[ |v_{12}^b(t,r)|\lesssim \lambda(t)^\frac12 [\lambda(t)t]^{-2}(1-\tfrac{r}{t})^{\frac12(1-3\nu)}\lesssim
\lambda(t)^\frac12 [\lambda(t)t]^{-1}t^{\frac12|1-\nu|} \]
for $\frac12 t\leq r\leq t-c$ and the stated bounds for $e_2$ follow.
\end{proof}
\begin{remark}
There is a slight nuisance associated with the function $v_{11}$ constructed in Lemma \ref{lem:v1}:
its odd derivatives with respect to $r$ do not vanish at the origin, i.e., $v_{11}$ is not smooth
at the center when viewed as a radial function on $\mathbb{R}^3$.
This inconvenient fact, however, is easily remedied if we replace $v_{11}$ by
$\theta v_{11}$ where $\theta$ is a smooth function with $\theta(r)=r$ for, say, $r\in [0,\frac12]$ and
$\theta(r)=1$ for $r\geq 1$.
This modification does not affect the bounds given in Lemma \ref{lem:v1} (provided that $t_0>1$)
and it yields
the desired behavior near the center.
Furthermore, since
$\Box (\theta v_{11})(t,r)=\Box v_{11}(t,r)$ for $r \geq 1$, the stated estimates for the corresponding
error $e_2$ are not altered either.
Consequently, we may equally well assume from the onset that
$v_{11}$ and $e_2$ are smooth at the center (as functions on $\mathbb{R}^3$).
This remark will be useful later on.
\end{remark}
\section{The transport equation}
\label{sec:exact}
For the sake of clarity we outline the main results of this section.
\begin{enumerate}
\item We make the ansatz $u=u_2+\varepsilon$ and perform the change of variables $(t,r)\mapsto (\tau,R)$
where $\tau=\frac{1}{\nu}\lambda(t)t$, $R=\lambda(t)r$. From the requirement $\Box u+u^5=0$ we derive
an equation for $v(\tau,R):=R\varepsilon(\nu \tilde\lambda(\tau)^{-1}\tau,\tilde\lambda(\tau)^{-1}R)$ of the form
\begin{equation}
\label{eq:voutl} \mathcal D^2 v+\beta_\nu(\tau)\mathcal D v+\mathcal L v=\tilde \lambda(\tau)^{-2}[\mbox{r.h.s.}]
\end{equation}
where $\mathcal D=\partial_\tau+\beta_\nu(\tau)(R\partial_R-1)$, $\beta_\nu(\tau)=(1-\frac{1}{\nu})\tau^{-1}$, and
$\mathcal L=-\partial_R^2-5W(R)^4$.
\item Next, we discuss the spectral theory of the Schr\"odinger operator $\mathcal L$. We derive the
asymptotics of the spectral measure $\mu$ associated to $\mathcal L$ by applying Weyl-Titchmarsh theory.
As a consequence, we obtain a precise description of the spectral transformation (the ``distorted Fourier transform'')
$\mathcal U: L^2(0,\infty)\to L^2(\sigma(\mathcal L),d\mu)$, the unitary map satisfying $\mathcal U \mathcal L f(\xi)=\xi \mathcal U f(\xi)$.
\item We apply this map to Eq.~\eqref{eq:voutl} in order to ``transform away'' the potential $-5W(R)^4$.
This yields an equation for \footnote{In fact, we have to work with a vector-valued function $x$ since
$\mathcal L$ has a negative eigenvalue. This is not essential for the argument but complicates the notation.
Thus, for the moment we ignore this issue.}
$x(\tau,\xi):=[\mathcal U v(\tau,\cdot)](\xi)$ of the form
\[ [\hat{\mathcal D}_c^2 +\beta_\nu(\tau) \hat{\mathcal D}_c+\xi]x(\tau,\xi)=\tilde \lambda^{-2}(\tau)[\mbox{r.h.s.}]
+\mbox{``$\mathcal K$-terms''} \]
where $\hat{\mathcal D}_c=\partial_\tau-2\beta_\nu \xi \partial_\xi+O(\tau^{-1})$.
The expression ``$\mathcal K$-terms'' stands for nonlocal error terms which arise from the application
of $\mathcal U$ to $R\partial_R$ (in $\mathcal D$).
\item Then we study the inhomogeneous equation
\[ [\hat {\mathcal D}_c^2+\beta_\nu(\tau) \hat{\mathcal D}_c+\xi]x(\tau,\xi)=b(\tau,\xi) \]
and solve it by the method of characteristics; a heuristic sketch of the characteristic curves is given after this list.
We obtain suitable pointwise bounds for the kernel of the solution operator (the ``parametrix'').
\item Finally, we introduce the basic solution spaces we work with and prove bounds for
the parametrix in these spaces.
\end{enumerate}
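To fix ideas, we record the characteristic curves of the principal part of $\hat{\mathcal D}_c$; this is only a heuristic since the $O(\tau^{-1})$ corrections are ignored. Along solutions of
\[ \dot\xi(\tau)=-2\beta_\nu(\tau)\xi(\tau)=-2(1-\tfrac1\nu)\tau^{-1}\xi(\tau), \qquad\text{i.e.,}\qquad \xi(\tau)=\xi(\tau_0)\Big(\frac{\tau}{\tau_0}\Big)^{-2(1-\frac1\nu)}, \]
the operator $\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi$ reduces to $\frac{d}{d\tau}$. Equivalently, the quantity $\tilde\lambda(\tau)^{2}\xi$ is constant along the characteristics, which reflects the rescaling $R=\lambda(t)r$ on the Fourier side.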
\subsection{Change of variables}
The function $u_2$ constructed in Section \ref{sec:approx} satisfies the critical wave equation
only up to an error $e_2$.
In this section we aim at constructing an \emph{exact} solution to
$$ \Box u+u^5=0 $$
of the form $u=u_2+\varepsilon$ where
$\varepsilon(t,r)$
decays sufficiently fast as $t \to \infty$.
Recall that $u_2$ is nonsmooth at the lightcone and thus, we restrict our construction to a truncated
forward lightcone
$$ K_{t_0,c}^\infty:=\{(t,x) \in \mathbb{R}\times \mathbb{R}^3: t\geq t_0, |x| \leq t-c\} $$
for $t_0>c>0$.
Consequently, we have to solve the equation
$$ \Box \varepsilon+5u_2^4 \varepsilon+N(u_2,\varepsilon)+e_2=0 $$
which we rewrite as
\begin{equation}
\label{eq:eps}
\Box \varepsilon+5u_0^4 \varepsilon=5(u_0^4-u_2^4)\varepsilon-N(u_2,\varepsilon)-e_2.
\end{equation}
As before, in order to obtain a time-independent potential, we use the new space variable
$R(t,r)=\lambda(t)r$.
Furthermore, it is convenient to introduce the new time variable $\tau(t)=\frac{1}{\nu}t^\nu$.
We write $\lambda(t)=\tilde{\lambda}(\tau(t))$ and note that
$$ \tau'(t)=\lambda(t),\quad \tilde{\lambda}'(\tau(t))=\frac{\lambda'(t)}{\lambda(t)}. $$
Consequently, the derivatives transform according to
$$ \partial_t=\tilde{\lambda}(\tau)\left (\partial_\tau+\tilde{\lambda}'(\tau)
\tilde{\lambda}(\tau)^{-1}R\partial_R \right ),\quad
\partial_r=\tilde{\lambda}(\tau)\partial_R
$$
and by setting $\tilde{v}(\tau(t),R(t,r))=\varepsilon(t,r)$ we obtain from
Eq.~\eqref{eq:eps} the problem
\begin{align}
\label{eq:tildev}
[\partial_\tau&+\beta_\nu(\tau)R\partial_R ]^2 \tilde{v}
+\beta_\nu(\tau)
[\partial_\tau+\beta_\nu(\tau)R\partial_R]\tilde{v}
-\left [\partial_R^2+\tfrac{2}{R}\partial_R+5W(R)^4 \right ]\tilde{v} \\
&=\tilde{\lambda}(\tau)^{-2}\left [5(u_2^4-u_0^4)\tilde{v}+N(u_2,\tilde{v})+e_2 \right ]
\nonumber
\end{align}
with
\begin{equation}
\label{eq:defbeta}
\beta_\nu(\tau)=\tilde{\lambda}'(\tau)\tilde{\lambda}(\tau)^{-1}=-(\tfrac{1}{\nu}-1) \tau^{-1}
\end{equation}
where it is understood, of course, that the functions $u_0$, $u_2$ and $e_2$ are evaluated in the new coordinates $(\tau,R)$.
Finally, the standard substitution $\tilde{v}(\tau,R)=R^{-1}v(\tau,R)$ transforms the radial
$3d$ Laplacian into the radial $1d$ Laplacian and, by noting that
\[ [\partial_\tau+\beta_\nu(\tau)R\partial_R]\tfrac{v(\tau,R)}{R}=
\tfrac{1}{R}[\partial_\tau+\beta_\nu(\tau)R\partial_R-\beta_\nu(\tau)]v(\tau,R), \]
we end up with the main equation
\begin{align}
\label{eq:main}
\mathcal{D}^2 v+\beta_\nu(\tau)\mathcal{D} v+\mathcal{L}v
=\tilde{\lambda}(\tau)^{-2}\left [5(u_2^4-u_0^4)v+R N(u_2,R^{-1}v)+Re_2\right ]
\end{align}
where $\mathcal{D}=\partial_\tau+\beta_\nu(\tau)(R\partial_R-1)$ and $\mathcal{L}=-\partial_R^2-5W(R)^4$.
Our goal is to solve Eq.~\eqref{eq:main} backwards in time
with zero Cauchy data at $\tau=\infty$.
Roughly speaking, the idea is to perform a distorted Fourier transform with respect to the self-adjoint
operator $\mathcal{L}$ and to solve the remaining transport-type equation on the Fourier side
by the method of characteristics.
\subsection{Spectral theory of $\mathcal{L}$ and the distorted Fourier transform}
In the following we recall some standard facts about the spectral theory of $\mathcal{L}$, see
e.g.~ \cite{GZ06}, \cite{W03II}, \cite{T09}, \cite{DS63II}.
We write $V:=-5W^4$ and emphasize that $V(R)$ depends smoothly on $R$ and decays
like $R^{-4}$ as $R \to \infty$.
The Schr\"odinger operator $\mathcal{L}=-\partial_R^2+V$ is self-adjoint in $L^2(0,\infty)$ with domain
$$ \mathrm{dom}(\mathcal{L})=\{f \in L^2(0,\infty): f,f' \in AC[0,R]\:\forall R>0, f(0)=0, \mathcal{L}f \in L^2(0,\infty)\} $$
since the endpoint $0$ is regular whereas $\infty$ is in the limit-point case.
Furthermore, there exists a zero energy resonance which is induced by the scaling symmetry
of the wave equation.
More precisely, the function
\begin{equation}
\label{eq:res}
\phi_0(R)=2R \partial_\lambda|_{\lambda=1}\lambda^\frac12 W(\lambda R)=\frac{R(1-\frac{1}{3}R^2)}
{(1+\frac{1}{3}R^2)^{3/2}}
\end{equation}
belongs to $L^\infty(0,\infty)$ and (formally) satisfies $\mathcal{L}\phi_0=0$.
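To make the relation with the scaling symmetry explicit, note that differentiating the static equation satisfied by $W_\lambda(x):=\lambda^{\frac12}W(\lambda x)$ with respect to $\lambda$ at $\lambda=1$ yields $(\Delta+5W^4)\,\partial_\lambda W_\lambda|_{\lambda=1}=0$, and
\[ \partial_\lambda\big|_{\lambda=1}\lambda^{\frac12}W(\lambda R)=\tfrac12 W(R)+RW'(R)=\frac{1-\frac13 R^2}{2(1+\frac13 R^2)^{3/2}}, \]
so that multiplication by $2R$ (the factor $R$ stemming from the reduction to the half-line) gives precisely the right-hand side of \eqref{eq:res}.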
Since $\phi_0$ has precisely one zero on $(0,\infty)$, it follows by Sturm oscillation theory
(see e.g.~ \cite{DS63II})
that $\mathcal{L}$ has exactly one simple negative eigenvalue $\xi_d<0$.
The corresponding eigenfunction $\phi_d$ is smooth and positive on $(0,\infty)$ and decays
exponentially towards $\infty$.
Thus, the spectrum of $\mathcal{L}$ is given by $\sigma(\mathcal{L})=\{\xi_d\}\cup [0,\infty)$ and
thanks to the decay of $V$, the continuous part is in fact absolutely continuous.
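Although it is not needed for the argument, the eigenvalue $\xi_d$ is easily located numerically by a shooting method. The following minimal sketch illustrates this; the cutoff \texttt{R\_max}, the search grid and the tolerances are ad hoc choices made for illustration only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(xi, R_max=40.0):
    # Integrate -f'' - 5 W(R)^4 f = xi f with f(0) = 0, f'(0) = 1,
    # where W(R)^4 = (1 + R^2/3)^(-2), and return f(R_max).
    rhs = lambda R, y: [y[1], -(5.0 / (1.0 + R**2 / 3.0)**2 + xi) * y[0]]
    sol = solve_ivp(rhs, [0.0, R_max], [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Since V attains its minimum -5 at R = 0, the eigenvalue lies in (-5, 0).
# A generic solution grows like exp(sqrt(-xi) R), so shoot(xi) changes sign
# exactly at the eigenvalue, which we bracket on a coarse grid.
grid = np.linspace(-4.9, -0.05, 100)
signs = np.sign([shoot(x) for x in grid])
idx = np.nonzero(signs[:-1] * signs[1:] < 0)[0][0]
xi_d = brentq(shoot, grid[idx], grid[idx + 1])
print("xi_d ~", xi_d)
\end{verbatim}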
We denote by $\{\phi(\cdot,z),
\theta(\cdot,z)\}$, $z\in \mathbb{C}$, the standard fundamental system of
\begin{equation}
\label{eq:spec}
\mathcal{L}f=zf
\end{equation}
satisfying
$$ \phi(0,z)=\theta'(0,z)=0,\quad \phi'(0,z)=\theta(0,z)=1. $$
In particular we have $W(\theta(\cdot,z), \phi(\cdot,z))=1$ for all $z \in \mathbb{C}$.
Furthermore, $\psi(\cdot,z)$ for $z\in\mathbb{C}\backslash \mathbb{R}$
denotes the Weyl-Titchmarsh solution of Eq.~\eqref{eq:spec}, i.e.,
the unique solution of Eq.~\eqref{eq:spec} which belongs to $L^2(0,\infty)$ and
satisfies
$\psi(0,z)=1$.
Consequently, for each $z \in \mathbb{C}\backslash \mathbb{R}$ there exists a number $m(z)$ such
that
$$ \psi(\cdot,z)=\theta(\cdot,z)+m(z)\phi(\cdot,z) $$
and we obtain
$m(z)=W(\theta(\cdot,z), \psi(\cdot,z))$.
The Weyl-Titchmarsh $m$-function is of crucial importance since it determines the spectral
measure.
\begin{proposition}
\label{prop:Fourier}
\begin{enumerate}
\item For any $\xi>0$ the limit
$$ \rho(\xi):=\tfrac{1}{\pi}\lim_{\varepsilon \to 0+}\Im m(\xi+i\varepsilon) $$
exists but $\rho(\xi) \to \infty$ as $\xi \to 0+$.
\item
Let $\mu$ be the Borel measure defined by
$$ d\mu(\xi)=\frac{d\Theta_{\xi_d}(\xi)}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2}+\rho(\xi)d\xi $$
where $d\Theta_{\xi_d}$ denotes the Dirac measure at $\xi_d$.
Then there exists a unitary operator $\mathcal{U}: L^2(0,\infty) \to L^2(\sigma(\mathcal{L}),d\mu)$,
the ``distorted Fourier transform'', which diagonalizes $\mathcal{L}$, i.e.,
$$ \mathcal{U}\mathcal{L}=M_{\mathrm{id}}\mathcal{U} $$
where $M_\mathrm{id}$ is the (maximally defined) operator of multiplication
by the identity function. \footnote{In other words, $M_\mathrm{id}f(\xi)=\xi f(\xi)$.}
\item The distorted Fourier transform is explicitly given by
$$ \mathcal{U}f(\xi)=\lim_{b\to \infty}\int_0^b \phi(R,\xi)f(R)dR, \quad \xi\in \sigma(\mathcal{L}) $$
where the limit is understood with respect to $\|\cdot\|_{L^2(\sigma(\mathcal{L}),d\mu)}$.
\item The inverse transform $\mathcal{U}^{-1}$ reads
$$ \mathcal{U}^{-1}\hat{f}(R)=\frac{\phi(R,\xi_d)}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2}
\hat{f}(\xi_d)+\lim_{b\to \infty}\int_0^b
\phi(R,\xi)\hat{f}(\xi)\rho(\xi)d\xi $$
where the limit is understood with respect to $\|\cdot\|_{L^2(0,\infty)}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\Im z>0$ (and $\Im \sqrt{z}>0$).
The Weyl-Titchmarsh solution is given by $\psi(\cdot,z)=c_0(z)f_+(\cdot,z)$ where
the Jost function $f_+(\cdot,z)$ is defined by
$\mathcal{L}f_+(\cdot,z)=zf_+(\cdot,z)$ and $f_+(R,z)\sim e^{i\sqrt{z}R}$ as $R \to \infty$.
The coefficient $c_0(z)$ is chosen such that $\psi(0,z)=1$, i.e.,
\begin{equation}
\label{eq:defc0}
c_0(z)=\frac{1}{W(f_+(\cdot,z),\phi(\cdot,z))}.
\end{equation}
The Jost function
satisfies the Volterra equation
\begin{equation}
\label{eq:Volterraf+}
f_+(R,z)=e^{i\sqrt{z}R}+\frac{1}{\sqrt{z}}\int_R^\infty \sin(\sqrt{z}(R'-R))
V(R')f_+(R',z)dR'
\end{equation}
and from this representation it follows immediately that
$$ f_+(R,\xi)=\lim_{\varepsilon\to 0+}f_+(R,\xi+i\varepsilon)$$
exists provided that $\xi>0$ and moreover, $f_+(R,\xi)$ satisfies Eq.~\eqref{eq:Volterraf+} with $z=\xi$, cf.~\cite{DT79}.
From Eq.~\eqref{eq:Volterraf+} we also have $W(f_+(\cdot,z),\overline{f_+(\cdot,z)})=-2i\sqrt{z}$
and thus, by expanding
$$ f_+(\cdot,\xi)=a(\xi)\phi(\cdot,\xi)+b(\xi)\theta(\cdot,\xi) $$
we obtain (recall that $\phi(\cdot,\xi)$ and $\theta(\cdot,\xi)$ are real-valued)
$$ -2i\sqrt{\xi}=W(f_+(\cdot,\xi),\overline{f_+(\cdot,\xi)})=2i\Im(\overline{a(\xi)}b(\xi)) $$
which in particular implies $b(\xi)\not=0$ if $\xi>0$.
Consequently, we infer
$$ W(f_+(\cdot,\xi),\phi(\cdot,\xi))=b(\xi)\not=0 $$
and this shows that $c_0(\xi)$ (and therefore $\psi(R,\xi)$) is well-defined and finite
provided that $\xi>0$.
However, due to the zero energy resonance we clearly have $|c_0(\xi)| \to \infty$ as $\xi \to 0+$.
The connection between the Weyl-Titchmarsh $m$-function and the spectral measure $\mu$
is provided by the classical formula
$$ \tilde{\mu}(\xi)=\tfrac{1}{\pi}\lim_{\delta \to 0+}\lim_{\varepsilon \to 0+}
\int_\delta^{\xi+\delta}\Im m(t+i\varepsilon)dt $$
where the distribution function $\tilde{\mu}$ determines $\mu$ in the sense of Lebesgue-Stieltjes.
The statements about the distorted Fourier transform are well-known and classical, see e.g.~
\cite{W03II},
\cite{T09}, \cite{DS63II}, \cite{GZ06}.
\end{proof}
\subsection{Asymptotics of the spectral measure for small $\xi$}
In order to be able to apply the distorted Fourier transform, we require more detailed information
on the behavior of the spectral measure. We start with the asymptotics as $\xi \to 0+$ where
the spectral measure blows up due to the existence of the zero energy resonance.
We also obtain estimates for the fundamental system $\{\phi(\cdot,\xi),\theta(\cdot,\xi)\}$ which
will be relevant later on.
As before, $\phi_0(R)=\phi(R,0)$ is the resonance function given in Eq.~\eqref{eq:res} and we write
$\theta_0(R):=\theta(R,0)$. Explicitly, we have
$$ \theta_0(R)=\frac{1-2R^2+\frac19 R^4}{(1+\frac13 R^2)^{3/2}}. $$
\begin{lemma}
\label{lem:Phi}
There exists a (complex-valued) function $\Phi(\cdot,\xi)$ satisfying
$\mathcal{L}\Phi(\cdot,\xi)=\xi \Phi(\cdot,\xi)$ such that
$$ \Phi(R,\xi)=[\phi_0(R)+i\theta_0(R)][1+a(R,\xi)] $$ where
$a(0,\xi)=a'(0,\xi)=0$ and
\begin{align*}
|\partial_\xi^\ell \partial_R^k [\Re a(R,\xi)]|&\leq C_{k,\ell}\langle R \rangle^{2-k}\xi^{1-\ell} \\
|\partial_\xi^\ell \partial_R^k [\Im a(R,\xi)]|&\leq C_{k,\ell}\left [\langle R \rangle^{1-k}\xi^{1-\ell}
+\langle R\rangle^{4-k}\xi^{2-\ell} \right ]
\end{align*}
for all $R \in [0,\xi^{-\frac12}]$, $0<\xi\lesssim 1$ and $k,\ell \in \mathbb{N}_0$.
In particular, we have $\phi=\Re \Phi$ and $\theta=\Im \Phi$.
\end{lemma}
\begin{proof}
We write $\Phi_0:=\phi_0+i\theta_0$ and note that $\Phi_0$ does not vanish anywhere on $[0,\infty)$.
Inserting the ansatz $\Phi(\cdot,\xi)=\Phi_0[1+a(\cdot,\xi)]$ into $\mathcal{L}\Phi(\cdot,\xi)=\xi \Phi(\cdot,\xi)$
yields the Volterra equation
\begin{equation}
\label{eq:volterraa}
a(R,\xi)=-\xi \int_0^R \int_{R'}^R \Phi_0(R'')^{-2}dR''\;\Phi_0(R')^2 [1+a(R',\xi)]dR'
\end{equation}
which is of the form
$$ a(R,\xi)=\int_0^R K(R,R',\xi)[1+a(R',\xi)]dR' $$
with a kernel satisfying $|K(R,R',\xi)|\lesssim \langle R'\rangle \xi$ for all $0\leq R' \leq R$ and $\xi>0$.
Consequently, we have
$$ \int_0^{\xi^{-\frac12}}\sup_{R \in (R',\xi^{-\frac12})}|K(R,R',\xi)|dR' \lesssim 1 $$
and a standard Volterra iteration yields the existence of $a(\cdot,\xi)$ with
$|a(R,\xi)|\lesssim \langle R\rangle^2 \xi$ for all $R \in [0,\xi^{-\frac12}]$ and $0<\xi\lesssim 1$.
Obviously, we have $a(0,\xi)=a'(0,\xi)=0$ and this immediately yields $\phi=\Re \Phi$ and
$\theta=\Im \Phi$.
Now observe that, for $0\leq R' \leq R$,
$$ \int_{R'}^R \Phi_0(R'')^{-2}dR''=\int_{R'}^R \left [O(\langle R'' \rangle^{-2})
+iO(\langle R'' \rangle^{-3})\right ]dR''=O(\langle R'\rangle^{-1})+iO(\langle R'\rangle^{-2}) $$
which implies
$$ \int_{R'}^R \Phi_0(R'')^{-2}dR''\;\Phi_0(R')^2=O(\langle R'\rangle)
+iO(1). $$
Consequently, with $|a(R,\xi)|\lesssim \langle R \rangle^2 \xi$ from above and
Eq.~\eqref{eq:volterraa} we infer
$$ \Im a(R,\xi)=O(\langle R \rangle \xi)+\xi \Im \int_0^R O_\mathbb{C}(\langle R'\rangle )a(R',\xi)dR'
=O(\langle R \rangle \xi)+O(\langle R \rangle^4 \xi^2). $$
The derivative bounds follow inductively from Eq.~\eqref{eq:volterraa}
by symbol calculus.
\end{proof}
Next, we consider the Jost function $f_+(\cdot,\xi)$.
\begin{lemma}
\label{lem:Jost}
The Jost function $f_+(\cdot,\xi)$ of the operator $\mathcal{L}$ is of the form
$$ f_+(R,\xi)=e^{i\sqrt{\xi} R}[1+b(R,\xi)] $$
where $b(\cdot,\xi)$ satisfies the bounds
$$ |\partial_\xi^\ell \partial_R^k b(R,\xi)|\leq C_{k,\ell}\langle R \rangle^{-3-k}\xi^{-\frac12-\ell} $$
for all $R\geq \xi^{-\frac16}$, $0<\xi\lesssim 1$ and $k,\ell \in \mathbb{N}_0$.
\end{lemma}
\begin{proof}
The function $b(\cdot,\xi)$ satisfies the Volterra equation
\begin{align}
\label{eq:volterrab}
b(R,\xi)&=\frac{1}{2i\sqrt{\xi}}\int_R^\infty \left [e^{2i\sqrt{\xi}(R'-R)}-1 \right ]V(R')
[1+b(R',\xi)]dR' \\
&=\int_R^\infty K(R,R',\xi)[1+b(R',\xi)]dR'. \nonumber
\end{align}
Thanks to the strong decay of $V$ we have
$|K(R,R',\xi)|\lesssim \langle R' \rangle^{-4}\xi^{-\frac12}$ for $R\leq R'$ and thus,
$$ \int_{\xi^{-\frac16}}^\infty \sup_{R \in (\xi^{-\frac16},R')}|K(R,R',\xi)|dR' \lesssim 1. $$
Consequently, a standard Volterra iteration yields the claim. The derivative bounds follow
inductively by symbol calculus.
\end{proof}
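For completeness, we recall the quantitative statement behind the ``standard Volterra iteration'' used here and in the proof of Lemma \ref{lem:Phi}: writing the equation as $a=K(1+a)$ with $(Ka)(R)=\int K(R,R')a(R')dR'$ (the integration being over $R'\geq R$ or $R'\leq R$, as appropriate), the iterates $a_1:=K1$, $a_{n+1}:=Ka_n$ obey
\[ |a_n(R)|\leq \frac{1}{n!}\Big(\int \sup_R|K(R,R')|\,dR'\Big)^n, \]
so that $a:=\sum_{n\geq 1}a_n$ converges absolutely with $|a|\leq e^{\int\sup_R|K|\,dR'}-1$ whenever the integral is finite; see e.g.~\cite{DT79}.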
The information provided by Lemmas \ref{lem:Phi} and \ref{lem:Jost} already suffices to obtain
the asymptotics of the spectral measure for $\xi \to 0+$.
\begin{lemma}
\label{lem:smxi}
For the functions $c_0$ and $\rho$ given in Eq.~\eqref{eq:defc0} and Proposition \ref{prop:Fourier},
respectively, we have
$$
c_0(\xi)=-\tfrac{i}{\sqrt{3}}\xi^{-\frac12}[1+O_\mathbb{C}(\xi^\frac15)],\quad
\rho(\xi)=\tfrac{1}{3\pi}\xi^{-\frac12}[1+O(\xi^\frac15)]
$$
for $0<\xi\ll 1$ where the $O$-terms behave like symbols under differentiation.
\end{lemma}
\begin{proof}
For the Weyl-Titchmarsh solution $\psi(\cdot,\xi)$ we have $\psi(\cdot,\xi)=c_0(\xi)f_+(\cdot,\xi)$
where the coefficient $c_0(\xi)$ is given by
$$ c_0(\xi)=\frac{1}{W(f_+(\cdot,\xi),\phi(\cdot,\xi))}, $$
cf.~the proof of Proposition \ref{prop:Fourier}.
From Lemma \ref{lem:Phi} we have
\begin{align*}
\phi(R,\xi)&=\Re \Phi(R,\xi)=\phi_0(R)[1+\Re a(R,\xi)]-\theta_0(R)\Im a(R,\xi) \\
&=\phi_0(R)[1+O(\langle R\rangle^2 \xi)+O(\langle R\rangle^5\xi^2)],\quad R\geq 1
\end{align*}
and, by noting that $|\phi_0'(R)|\simeq \langle R\rangle^{-3}$ for $R\geq 1$,
$$\phi'(R,\xi)=O(\langle R\rangle^{-3})[1+O(\langle R \rangle^4 \xi)+O(\langle R\rangle^7 \xi^2)],
\quad R\geq 1.$$
We evaluate the Wronskian at $R=\xi^{-\frac{3}{10}}$.
Thus, we use
\begin{align*}
\phi(\xi^{-\frac{3}{10}},\xi)&=\phi_0(\xi^{-\frac{3}{10}})[1+O(\xi^\frac25)]=-\sqrt{3}[1+O(\xi^\frac25)] \\
\phi'(\xi^{-\frac{3}{10}},\xi)&=O(\xi^{\frac{9}{10}})[1+O(\xi^{-\frac{1}{5}})]=O(\xi^{\frac{7}{10}})
\end{align*}
and, from Lemma \ref{lem:Jost},
\begin{align*}
f_+(\xi^{-\frac{3}{10}},\xi)&=e^{i\xi^{1/5}}[1+O_\mathbb{C}(\xi^\frac25)]=1+O_\mathbb{C}(\xi^\frac15) \\
f_+'(\xi^{-\frac{3}{10}},\xi)&=i\xi^\frac12 e^{i\xi^{1/5}}[1+O_\mathbb{C}(\xi^\frac15)]
=i\xi^\frac12 [1+O_\mathbb{C}(\xi^\frac15)]
\end{align*}
and all $O$-terms behave like symbols under differentiation.
Consequently, we obtain
$$ W(f_+(\cdot,\xi),\phi(\cdot,\xi))=\sqrt{3}i\xi^\frac12[1+O_\mathbb{C}(\xi^{\frac{1}{5}})] $$
and thus,
$c_0(\xi)=-\frac{i}{\sqrt{3}}\xi^{-\frac12}[1+O_\mathbb{C}(\xi^{\frac15})]$ where the $O$-term
behaves like a symbol.
The Weyl-Titchmarsh $m$-function is given by
$$m(\xi)=W(\theta(\cdot,\xi),\psi(\cdot,\xi))
=c_0(\xi)W(\theta(\cdot,\xi),f_+(\cdot,\xi)).$$
From Lemma \ref{lem:Phi} we have
$$ \theta(\xi^{-\frac{3}{10}},\xi)=O(\xi^{-\frac{3}{10}}),\quad \theta'(\xi^{-\frac{3}{10}},\xi)
=\tfrac{1}{\sqrt{3}}+O(\xi^\frac15) $$
and thus, $W(\theta(\cdot,\xi),f_+(\cdot,\xi))=-\frac{1}{\sqrt{3}}+O_\mathbb{C}(\xi^\frac15)$.
This yields $\rho(\xi)=\tfrac{1}{\pi}\Im m(\xi)=\frac{1}{3\pi} \xi^{-\frac12}[1+O(\xi^\frac15)]$ with an $O$-term that
behaves like a symbol.
\end{proof}
\subsection{Asymptotics of the spectral measure for large $\xi$}
In this section we study the behavior of $\rho(\xi)$ as $\xi \to \infty$.
This is considerably easier than the limit $\xi \to 0+$.
In order to get a small factor in front of the potential, it is convenient to rescale the equation
$\mathcal{L}f=\xi f$ by setting $f(R)=\tilde{f}(\xi^\frac12 R)$ which yields
\begin{equation}
\label{eq:specrescaled}
\tilde{f}''(y)+\tilde{f}(y)=\xi^{-1}V(\xi^{-\frac12}y)\tilde{f}(y)
\end{equation}
for $y \geq 0$, $\xi \gtrsim 1$ and this form already suggests to treat the right-hand side
perturbatively.
\begin{lemma}
\label{lem:Jostlgxi}
The Jost function $f_+(\cdot,\xi)$ of $\mathcal{L}$ is of the form
$$ f_+(R,\xi)=e^{i\sqrt{\xi}R}[1+b(R,\xi)] $$
where
$$ |\partial_\xi^\ell \partial_R^k b(R,\xi)|\leq C_{k,\ell}\langle R\rangle^{-3-k}
\xi^{-\frac12-\ell} $$
for all $R \geq 0$, $\xi\gtrsim 1$ and $k,\ell \in \mathbb{N}_0$.
\end{lemma}
\begin{proof}
We start by constructing a solution $\tilde{f}_+(\cdot,\xi)$ to Eq.~\eqref{eq:specrescaled}
of the form $\tilde{f}_+(y,\xi)=e^{iy}[1+\tilde{b}(y,\xi)]$.
Inserting this ansatz into Eq.~\eqref{eq:specrescaled} yields the Volterra equation
\begin{align}
\label{eq:volterraf+}
\tilde{b}(y,\xi)&=\tfrac{1}{2i}\xi^{-1}\int_y^\infty [e^{2i(y'-y)}-1]V(\xi^{-\frac12}y')
[1+\tilde{b}(y',\xi)]dy' \\
&=\int_y^\infty K(y,y',\xi)[1+\tilde{b}(y',\xi)]dy' \nonumber
\end{align}
where $|K(y,y',\xi)|\lesssim \xi^{-1}\langle \xi^{-\frac12}y'\rangle^{-4}$ for all $0\leq y\leq y'$
and $\xi \gtrsim 1$.
Consequently, we have
$$ \int_0^\infty \sup_{y \in (0,y')}|K(y,y',\xi)|dy'\lesssim \xi^{-\frac12}\lesssim 1 $$
and a Volterra iteration yields $|\tilde{b}(y,\xi)|\lesssim \xi^{-\frac12}\langle \xi^{-\frac12}
y\rangle^{-3}$.
Furthermore, by introducing the new variable $u=y'-y$, we rewrite Eq.~\eqref{eq:volterraf+} as
$$ \tilde{b}(y,\xi)=\tfrac{1}{2i}\xi^{-1}\int_0^\infty [e^{2iu}-1]V(\xi^{-\frac12}(u+y))
[1+\tilde{b}(u+y,\xi)]du $$
and with
$$ |\partial_\xi^\ell \partial_y^k V(\xi^{-\frac12}(u+y))|\leq C_{k,\ell}
\xi^{-\frac12 k-\ell}\langle \xi^{-\frac12}(u+y)\rangle^{-4-k}, \quad k,\ell \in \mathbb{N}_0 $$
for all $y, u\geq 0$, $\xi \gtrsim 1$,
we obtain inductively the bounds
$$ |\partial_\xi^\ell \partial_y^k \tilde{b}(y,\xi)|\leq C_{k,\ell}\xi^{-\frac12-\frac12 k-\ell}
\langle \xi^{-\frac12}y \rangle^{-3-k} $$
for all $y \geq 0$, $\xi \gtrsim 1$ and $k,\ell \in \mathbb{N}_0$.
We have $\tilde{f}_+(\xi^\frac12 R,\xi)\sim e^{i\sqrt{\xi}R}$ as $R \to \infty$ and thus,
as already suggested by the notation,
the Jost solution is given by $f_+(R,\xi)=\tilde{f}_+(\xi^\frac12 R,\xi)$ and by setting
$b(R,\xi)=\tilde{b}(\xi^\frac12 R,\xi)$ we obtain the stated form of $f_+(\cdot,\xi)$ and
the bounds for $b$ follow from the ones for $\tilde{b}$ by the chain rule.
\end{proof}
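As a quick numerical illustration of Lemma \ref{lem:Jostlgxi} (not part of the proof), the
following minimal Python sketch integrates $f''=(V-\xi)f$ backwards from the free asymptotics
for the \emph{model} potential $V(R)=(1+R^2)^{-2}$, which has the assumed
$\langle R\rangle^{-4}$ decay; the actual potential of $\mathcal{L}$ is not used here.
The printed quantity $\sqrt{\xi}\,|b(0,\xi)|$ should stay of order one, consistent with
$|b(0,\xi)|\lesssim \xi^{-\frac12}$.
\begin{verbatim}
# Minimal sketch: Jost function for the model potential V(R) = (1+R^2)^(-2)
# (an assumption standing in for the true V, which decays like <R>^(-4)).
# We integrate f'' = (V - xi) f from R_max down to 0 with the free data
# f(R_max) = exp(i k R_max), f'(R_max) = i k exp(i k R_max), k = sqrt(xi),
# and read off b(0, xi) = f(0) - 1.
import numpy as np

def V(R):
    return (1.0 + R**2) ** -2

def jost_b0(xi, R_max=40.0, n_steps=80000):
    k = np.sqrt(xi)
    h = -R_max / n_steps                     # negative step: integrate inwards
    def rhs(R, y):                           # y = (f, f')
        return np.array([y[1], (V(R) - xi) * y[0]])
    R = R_max
    y = np.array([np.exp(1j * k * R), 1j * k * np.exp(1j * k * R)])
    for _ in range(n_steps):                 # classical RK4, complex-valued
        k1 = rhs(R, y)
        k2 = rhs(R + h / 2, y + h / 2 * k1)
        k3 = rhs(R + h / 2, y + h / 2 * k2)
        k4 = rhs(R + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        R += h
    return y[0] - 1.0                        # b(0, xi)

for xi in [1.0, 4.0, 16.0, 64.0]:
    print(xi, np.sqrt(xi) * abs(jost_b0(xi)))  # should remain O(1)
\end{verbatim}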
\begin{lemma}
\label{lem:lgxi}
The functions $c_0$ and $\rho$ are of the form
$$ c_0(\xi)=1+O_\mathbb{C}(\xi^{-\frac12}),\quad \rho(\xi)=\tfrac{1}{\pi}\xi^\frac12 [1+O(\xi^{-\frac12})] $$
for $\xi \gtrsim 1$ where the $O$-terms behave like symbols under differentiation.
\end{lemma}
\begin{proof}
By evaluation at $R=0$ we obtain from Lemma \ref{lem:Jostlgxi}
$$ W(f_+(\cdot,\xi),\phi(\cdot,\xi))=f_+(0,\xi)=1+b(0,\xi)=1+O_\mathbb{C}(\xi^{-\frac12}) $$
which, by Eq.~\eqref{eq:defc0}, implies $c_0(\xi)=1+O_\mathbb{C}(\xi^{-\frac12})$
where the $O$-term
behaves like a symbol.
Furthermore, we have
$$ W(\theta(\cdot,\xi),f_+(\cdot,\xi))=f_+'(0,\xi)=i\xi^\frac12[1+b(0,\xi)]
+b'(0,\xi)=i\xi^\frac12[1+O_\mathbb{C}(\xi^{-\frac12})] $$
and we infer
$$ \rho(\xi)=\tfrac{1}{\pi}\Im\left [c_0(\xi)W(\theta(\cdot,\xi),f_+(\cdot,\xi))\right ]
=\tfrac{1}{\pi}\xi^{\frac12}[1+O(\xi^{-\frac12})] $$
with an $O$-term that behaves like a symbol.
\end{proof}
It is now a simple matter to obtain a convenient representation of $\phi(\cdot,\xi)$ in terms
of the Jost function $f_+(\cdot,\xi)$.
\begin{corollary}
\label{cor:phi}
The function $\phi(\cdot,\xi)$ has the representation
$$ \phi(R,\xi)=a(\xi)f_+(R,\xi)+\overline{a(\xi)f_+(R,\xi)} $$
where
\begin{align*}
a(\xi)&=-\tfrac{\sqrt{3}}{2}+O_\mathbb{C}(\xi^\frac15),\quad 0 < \xi \ll 1 \\
a(\xi)&=\tfrac{1}{2i}\xi^{-\frac12}[1+O_\mathbb{C}(\xi^{-\frac12})],\quad \xi \gtrsim 1
\end{align*}
and the $O$-terms behave like symbols.
\end{corollary}
\begin{proof}
Since $W(f_+(\cdot,\xi),\overline{f_+(\cdot,\xi)})=-2i\sqrt{\xi}$ it is clear
that there exist coefficients $a(\xi)$, $b(\xi)$ such that $\phi(\cdot,\xi)=a(\xi)f_+(\cdot,\xi)
+b(\xi)\overline{f_+(\cdot,\xi)}$ provided that $\xi>0$.
From the fact that $\phi(\cdot,\xi)$ is real-valued it follows that $b(\xi)=\overline{a(\xi)}$.
Consequently, we obtain
$$ \tfrac{1}{c_0(\xi)}=W(f_+(\cdot,\xi),\phi(\cdot,\xi))=-2i\xi^\frac12 \overline{a(\xi)} $$
and Lemmas \ref{lem:smxi}, \ref{lem:lgxi} yield the claim.
\end{proof}
\subsection{The transference identity}
Unfortunately,
it is not straightforward to apply the distorted Fourier transform to Eq.~\eqref{eq:main}
due to the presence
of the derivative $R\partial_R$.
The idea is to replace the operator $R\partial_R$ by a suitable derivative on the Fourier
side.
This cannot be done exactly, and the resulting error is encoded in an operator
$\mathcal{K}$ which we now define.
For the following it is convenient to distinguish between the continuous and the discrete part
of the spectrum of $\mathcal{L}$.
This is most effectively done by introducing vector notation.
Consequently, we interpret the distorted Fourier transform, now denoted by $\mathcal{F}$, as a
vector-valued map
$\mathcal{F}: L^2(0,\infty) \to \mathbb{C} \times L^2((0,\infty),\rho(\xi)d\xi)$ given by
$$ \mathcal{F}f:=\left ( \begin{array}{c}\mathcal{U}f(\xi_d) \\ \mathcal{U}f|_{[0,\infty)} \end{array}
\right ) $$
with $\mathcal{U}$ from Proposition \ref{prop:Fourier}.
By Proposition \ref{prop:Fourier} the inverse map
\[ \mathcal{F}^{-1}: \mathbb{C}\times L^2((0,\infty),\rho(\xi)d\xi) \to L^2(0,\infty) \] reads
\begin{equation}
\label{eq:defF-1}
\mathcal{F}^{-1}\left (\begin{array}{c} a \\ f \end{array} \right )=
a\frac{\phi(\cdot,\xi_d)}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2}+\lim_{b\to\infty} \int_0^b
\phi(\cdot,\xi)f(\xi)\rho(\xi)d\xi.
\end{equation}
We define the error operator $\mathcal{K}$ by
\begin{equation}
\label{eq:K}
\mathcal{F}((\cdot)f'-f)=\mathcal{A}\mathcal{F}f +\mathcal{K}\mathcal{F}f
\end{equation}
for $f\in C^\infty_c(0,\infty)$ where
$$ \mathcal{A}:=\left (\begin{array}{cc}
0 & 0 \\ 0 & \mathcal A_c\end{array} \right )
,\quad \mathcal A_c g(\xi):=-2 \xi g'(\xi)-(\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)})g(\xi) $$
and we write
$$ \mathcal{K}=\left (\begin{array}{cc}\mathcal{K}_{dd} & \mathcal{K}_{dc} \\
\mathcal{K}_{cd} & \mathcal{K}_{cc} \end{array} \right )$$
for the matrix components of $\mathcal{K}$.
We call Eq.~\eqref{eq:K}
the ``transference identity'' since it allows us to transfer derivatives with respect
to $R$ to derivatives with respect to the Fourier variable $\xi$.
In order to motivate the definitions of $\mathcal{A}$ and $\mathcal{K}$, let us take
the free case as a model problem, i.e., assume for the moment that $V=0$ and
$\mathcal{L}=-\partial_R^2$.
Note, however, that the free case with a Dirichlet condition at zero is not a good model for
our problem since it is not resonant.
Consequently, we assume a Neumann condition instead.
In the free case there is no discrete spectrum and the corresponding $\phi(\cdot,\xi)$ can be given explicitly
and reads $\phi(R,\xi)=-\cos(\xi^\frac12 R)$.
Furthermore, the spectral measure is $\rho(\xi)=\frac{1}{\pi}\xi^{-\frac12}$.
For the transference identity we obtain
\begin{align*}
\mathcal{U}((\cdot)f'-f)(\xi)&=\int_0^\infty \phi(R,\xi)Rf'(R)dR-\mathcal{U}f(\xi) \\
&=-\int_0^\infty R\xi^{\frac12}\sin(\xi^\frac12 R)f(R)dR-2\int_0^\infty \phi(R,\xi)f(R)dR \\
&=-2\xi\partial_\xi \int_0^\infty\phi(R,\xi)f(R)dR-2\mathcal{U}f(\xi)
=-2 \xi (\mathcal{U}f)'(\xi)-(\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)})\mathcal{U}f(\xi)
\end{align*}
for $f \in C_c^\infty(0,\infty)$, where in the last step we used that
$\rho(\xi)=\frac{1}{\pi}\xi^{-\frac12}$ gives $\frac{\xi\rho'(\xi)}{\rho(\xi)}=-\frac12$ and
hence $\frac52+\frac{\xi\rho'(\xi)}{\rho(\xi)}=2$.
Thus, we recover the operator $\mathcal{A}_c$ and the corresponding error operator
vanishes identically.
Due to the strong decay of $V(R)$ as $R \to \infty$ it is reasonable
to expect that the transference identity Eq.~\eqref{eq:K} is well approximated by the above model case,
at least to leading order.
Therefore, $\mathcal{K}$ should be ``small'' in a suitable sense.
We will make this idea rigorous in Section \ref{sec:K} where we prove appropriate mapping
properties of $\mathcal{K}$ which exhibit a certain smoothing effect that turns out to be crucial
for the whole construction.
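The model computation can also be checked numerically; the following minimal Python sketch
(an illustration only, using a smooth bump $f$ that is compactly supported up to negligible
tails) compares $\mathcal{U}((\cdot)f'-f)(\xi)$ with $-2\xi(\mathcal{U}f)'(\xi)-2\,\mathcal{U}f(\xi)$
at a sample frequency.
\begin{verbatim}
# Minimal sketch of the free transference identity with
# phi(R, xi) = -cos(sqrt(xi) R) and rho(xi) = xi^(-1/2)/pi, for which
# 5/2 + xi rho'(xi)/rho(xi) = 2.  The bump f below is an arbitrary test
# function (an assumption; any f in C_c^infty(0, infty) would do).
import numpy as np

R = np.linspace(0.0, 12.0, 120001)
dR = R[1] - R[0]
f = np.exp(-((R - 3.0) ** 2))
fp = -2.0 * (R - 3.0) * f

def integrate(g):                      # trapezoidal rule
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dR

def U(g, xi):                          # distorted (here: cosine) transform
    return integrate(-np.cos(np.sqrt(xi) * R) * g)

xi, dxi = 2.0, 1e-4
lhs = U(R * fp - f, xi)
dU = (U(f, xi + dxi) - U(f, xi - dxi)) / (2.0 * dxi)
rhs = -2.0 * xi * dU - 2.0 * U(f, xi)
print(lhs, rhs)                        # agree up to discretization error
\end{verbatim}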
\subsection{Application of the distorted Fourier transform}
Now we intend to apply the distorted Fourier transform to Eq.~\eqref{eq:main}.
In order to be able to do so, however, we have to deal with the fact that the functions on the
right-hand side of Eq.~\eqref{eq:main} are only defined in a forward lightcone and we have to
extend them smoothly to all $r\geq 0$.
To this end we use a smooth cut-off $\chi$ satisfying $\chi(x)=0$ for $x \leq \frac12$ and
$\chi(x)=1$ for $x\geq 1$.
Then $\chi(\frac{t-r}{c})$ is identically $1$ in the truncated cone $r\leq t-c$ and identically
$0$ if $r\geq t-\frac{c}{2}$. Of course, we assume here that $t \geq c$.
Furthermore, in terms of the new variables $\tau=\frac{1}{\nu}t^\nu=\frac{1}{\nu}\lambda(t)t$ and
$R=\lambda(t)r$, the cut-off reads
$$ \tilde{\chi}(\tau,R):=\chi\left (\tilde{\lambda}(\tau)^{-1}\tfrac{\nu \tau-R}{c}\right). $$
Consequently, the equation we really want to solve is given by
\begin{equation}
\label{eq:main2}
\mathcal{D}^2 v+\beta_\nu(\tau)\mathcal{D} v+\mathcal{L}v
=\tilde{\lambda}(\tau)^{-2}\tilde{\chi}\left [5(u_2^4-u_0^4)v+R N(u_2,R^{-1}v)+Re_2\right ],
\end{equation}
cf.~Eq.~\eqref{eq:main}, and we
recall that $\mathcal{D}=\partial_\tau+\beta_\nu(\tau)(R\partial_R-1)$.
Thus, we have
$$ \mathcal{F}\mathcal{D}=\partial_\tau \mathcal{F}+\beta_\nu(\mathcal{A}+\mathcal{K})\mathcal{F}=:\hat{\mathcal{D}}\mathcal{F} $$
and this yields
$
\mathcal{F}\mathcal{D}^2=\hat{\mathcal{D}}^2 \mathcal{F}
$
where
\begin{align*}
\hat{\mathcal{D}}^2&=\partial_\tau^2 +2\beta_\nu(\mathcal{A}+\mathcal{K})\partial_\tau+\beta_\nu^2
(\mathcal{A}^2+\mathcal{K}\mathcal{A}+\mathcal{A}\mathcal{K}+\mathcal{K}^2)+\beta_\nu'(\mathcal{A}+\mathcal{K}) \\
&=(\partial_\tau+\beta_\nu \mathcal{A})^2+2\beta_\nu \mathcal{K}\partial_\tau
+\beta_\nu^2 (2\mathcal{KA}+[\mathcal{A,K}]+\mathcal{K}^2)+\beta_\nu' \mathcal{K}.
\end{align*}
We conclude that
$$ \hat{\mathcal{D}}^2+\beta_\nu \hat{\mathcal{D}}=(\partial_\tau+\beta_\nu \mathcal{A})^2+\beta_\nu(2\mathcal{K}+1)
(\partial_\tau+\beta_\nu\mathcal{A})+\beta_\nu^2(\mathcal{K}^2+[\mathcal{A},\mathcal{K}]+\mathcal{K}+\tfrac{\beta_\nu'}{\beta_\nu^2}\mathcal{K}). $$
In the following we write $(x_d(\tau),x(\tau,\xi))=\mathcal{F}v(\tau,\cdot)(\xi)$.
Consequently, by applying $\mathcal{F}$ to Eq.~\eqref{eq:main2}, we end up with the system
\begin{align}
\label{eq:sysFourier}
&\left ( \begin{array}{cc}\partial_\tau^2+\xi_d & 0 \\
0 & (\partial_\tau+\beta_\nu(\tau)\mathcal A_c)^2+\xi \end{array} \right )
\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\xi) \end{array} \right )
=\sum_{j=1}^5 \mathcal{N}_j\left ( \begin{array}{c}x_d \\ x \end{array} \right)
(\tau,\xi) \\
&\quad \quad -\beta_\nu(\tau)(2\mathcal{K}+1)(\partial_\tau+\beta_\nu(\tau)\mathcal{A})\left (\begin{array}{c}x_d(\tau)\\
x(\tau,\cdot) \end{array} \right )(\xi) \nonumber \\
&\quad \quad -\beta_\nu(\tau)^2 \left ( \mathcal{K}^2+[\mathcal{A},\mathcal{K}]+\mathcal{K}+\tfrac{\beta_\nu'(\tau)}{\beta_\nu(\tau)^2}
\mathcal{K} \right )\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\cdot) \end{array} \right )(\xi) \nonumber \\
&\quad \quad +\left ( \begin{array}{c}\hat{e}_2(\tau,\xi_d) \\ \hat{e}_2(\tau,\xi) \end{array}
\right )\nonumber
\end{align}
where the operators $\mathcal{N}_j$, $j \in \{1,2,3,4,5\}$, are given by
\begin{align}
\label{eq:defR}
\mathcal{N}_j \left (\begin{array}{c}x_d \\ x \end{array} \right )(\tau,\xi)&:=\mathcal{F}
\left (|\cdot|\varphi_j(\tau,\cdot)
\left [|\cdot|^{-1}\mathcal{F}^{-1}
\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\cdot) \end{array} \right ) \right]^j \right )(\xi)
\end{align}
with
\begin{align*}
\varphi_1(\tau,R)&=5\tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)[
u_2(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^4
-u_0(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^4] \\
\varphi_2(\tau,R)&=10 \tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)
u_2(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^3 \\
\varphi_3(\tau,R)&=10 \tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)
u_2(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^2 \\
\varphi_4(\tau,R)&=5\tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)
u_2(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R) \\
\varphi_5(\tau,R)&=\tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)
\end{align*}
and
\begin{equation}
\label{eq:defhate2}
\hat{e}_2(\tau,\xi)=\tilde{\lambda}(\tau)^{-2}\int_0^\infty \phi(R,\xi)
\chi\left (\tilde{\lambda}(\tau)^{-1}\tfrac{\nu \tau-R}{c}\right)
R e_2\left (\nu \tilde{\lambda}(\tau)^{-1}\tau,
\tilde{\lambda}(\tau)^{-1}R\right )dR.
\end{equation}
\subsection{Solution of the transport equation}
Our goal is to treat the entire right-hand side of Eq.~\eqref{eq:sysFourier} perturbatively.
To this end it is necessary to be able to solve the two decoupled equations
\begin{align}
\label{eq:xd}
x_d''(\tau)+\xi_d x_d(\tau)&=b_d(\tau) \\
\label{eq:x}
\left [\partial_\tau-\beta_\nu(\tau)\left (2\xi\partial_\xi+\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)}
\right )
\right ]^2
x(\tau,\xi)+\xi x(\tau,\xi)&=b(\tau,\xi)
\end{align}
for some given functions $b_d$ and $b$.
Recall that we are interested in decaying solutions as $\tau \to \infty$ and by variation of constants
it is readily seen
that
$$ x_d(\tau)=\int_{\tau_0}^\infty H_d(\tau,\tau')b_d(\tau')d\tau',\quad H_d(\tau,\tau'):=-\tfrac12
|\xi_d|^{-\frac12} e^{-|\xi_d|^{1/2}|\tau-\tau'|} $$
for some constant $\tau_0$ is a solution to Eq.~\eqref{eq:xd} which behaves well at infinity.
For future reference we denote by $\mathcal{H}_d$ the solution operator, i.e.,
\begin{equation}
\label{eq:defHd}
\mathcal{H}_d f(\tau):=\int_{\tau_0}^\infty H_d(\tau,\tau')f(\tau')d\tau'.
\end{equation}
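As a sanity check (an illustration only, assuming the sample values $\xi_d=-1$, $\tau_0=1$
and a test inhomogeneity), the following minimal Python sketch applies $\mathcal{H}_d$ by
quadrature and verifies the equation $x_d''+\xi_d x_d=b_d$ at an interior point.
\begin{verbatim}
# Minimal sketch: apply H_d by quadrature and check x'' + xi_d x = b.
# xi_d = -1, tau_0 = 1 and b are sample choices, not the paper's data.
import numpy as np

xi_d, tau0 = -1.0, 1.0
tau = np.linspace(tau0, 40.0, 8001)
h = tau[1] - tau[0]
b = 1.0 / (1.0 + tau**2)                       # test right-hand side
k = np.sqrt(abs(xi_d))

def Hd_apply(t):                               # (H_d b)(t) by quadrature
    return np.sum(-0.5 / k * np.exp(-k * np.abs(t - tau)) * b) * h

i = 500                                        # interior grid point
x_m, x_0, x_p = Hd_apply(tau[i-1]), Hd_apply(tau[i]), Hd_apply(tau[i+1])
xpp = (x_p - 2.0 * x_0 + x_m) / h**2
print(xpp + xi_d * x_0, b[i])                  # should agree approximately
\end{verbatim}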
In order to solve Eq.~\eqref{eq:x} we first define a new variable $y(\tau,\xi)$ by
$$ x(\tau,\xi)=\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}y(\tau,\xi). $$
Then, by recalling that
$\beta_\nu(\tau)=\tilde{\lambda}'(\tau)\tilde{\lambda}(\tau)^{-1}$, we observe that
\[ \left [\partial_\tau-\tfrac52\beta_\nu(\tau)-\beta_\nu(\tau) \left
(2\xi\partial_\xi+\tfrac{\xi\rho'(\xi)}{\rho(\xi)}
\right ) \right]x(\tau,\xi)=\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}
[\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi]y(\tau,\xi)
\]
and thus, Eq.~\eqref{eq:x} is equivalent to
\begin{equation}
\label{eq:y}
[\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi]^2 y(\tau,\xi)+\xi y(\tau,\xi)=\tilde{\lambda}(\tau)^{-\frac52}
\rho(\xi)^\frac12 b(\tau,\xi).
\end{equation}
Now we solve Eq.~\eqref{eq:y} by the method of characteristics, i.e., we compute
$$ \tfrac{d}{d\tau}y(\tau,\xi(\tau))=\partial_\tau y(\tau,\xi(\tau))+\xi'(\tau)\partial_\xi
y(\tau,\xi(\tau)) $$
and by comparison with the differential operator in Eq.~\eqref{eq:y} we
obtain the characteristic equation
$\xi'(\tau)=-2\beta_\nu(\tau)\xi(\tau)$
which, by recalling that $\beta_\nu(\tau)=-(\frac{1}{\nu}-1)\tau^{-1}$ from Eq.~\eqref{eq:defbeta},
is readily solved as $\xi(\tau)=\gamma \tau^{2(\frac{1}{\nu}-1)}$ for some constant $\gamma$.
Thus, along the characteristic $\tau \mapsto (\tau,\gamma\tau^{2(\frac{1}{\nu}-1)})$, Eq.~\eqref{eq:y}
takes the form
\begin{equation}
\label{eq:tildex}
\tilde{y}''(\tau;\gamma)+\gamma \tau^{2(\frac{1}{\nu}-1)}\tilde{y}(\tau;\gamma)=\tilde{b}(\tau;\gamma)
\end{equation}
where $\tilde{y}(\tau;\gamma)=y(\tau,\gamma\tau^{2(\frac{1}{\nu}-1)})$ and $\tilde{b}(\tau;\gamma)
=\tilde{\lambda}(\tau)^{-\frac52}\rho(\gamma\tau^{2(\frac{1}{\nu}-1)})^\frac12 b(\tau,\gamma\tau^{2(\frac{1}{\nu}-1)})$.
By setting $\tilde{y}(\tau;\gamma)=\tau^{-\frac12(\frac{1}{\nu}-1)}w(\nu \gamma^\frac12
\tau^{\frac{1}{\nu}})$ we infer that the homogeneous
version of Eq.~\eqref{eq:tildex} is equivalent to
$$ w''(z)+\left (1-\frac{(\frac{\nu}{2})^2-\frac14}{z^2}\right )w(z)=0 $$
where $z=\nu \gamma^\frac12 \tau^{\frac{1}{\nu}}$; this identifies Eq.~\eqref{eq:tildex}
as a Bessel equation in normal form, solved by $w(z)=z^\frac12 C_{\nu/2}(z)$ with
$C_{\nu/2}\in\{J_{\nu/2},Y_{\nu/2}\}$.
Consequently, a fundamental system $\{\phi_j(\cdot;\gamma): j=0,1\}$ for the homogeneous version of
Eq.~\eqref{eq:tildex} is given
by
\begin{equation}
\label{eq:phi01}
\begin{aligned}
\phi_0(\tau;\gamma)&=a_{\nu}\tau^\frac12 J_{\nu/2}(\nu \gamma^\frac12 \tau^{\frac{1}{\nu}}) \\
\phi_1(\tau;\gamma)&=b_{\nu}\tau^\frac12 Y_{\nu/2}(\nu \gamma^\frac12 \tau^{\frac{1}{\nu}})
\end{aligned}
\end{equation}
where $J_{\nu/2}$, $Y_{\nu/2}$ are
the standard Bessel functions,
see e.g.~\cite{Olv74}, \cite{DLMF}, and $a_\nu$, $b_\nu$ are chosen such that
\begin{equation}
\label{eq:Besselasym}
a_\nu J_{\nu/2}(z)=\nu^{-\frac{\nu}{2}}z^{\frac{\nu}{2}}[1+O(z^2)],\quad
b_\nu Y_{\nu/2}(z)=\nu^{\frac{\nu}{2}}z^{-\frac{\nu}{2}}[1+O(z^\nu)]
\end{equation}
as $z \to 0+$.
This yields the asymptotics
\begin{equation}
\label{eq:phiasymsm}
\begin{aligned}
\phi_0(\tau;\gamma)&=\gamma^{\frac{\nu}{4}}\tau[1+O(\gamma \tau^{\frac{2}{\nu}})] \\
\phi_1(\tau;\gamma)&=\gamma^{-\frac{\nu}{4}}[1+O(\gamma^\frac{\nu}{2}\tau)]
\end{aligned}
\end{equation}
for, say, $0<\gamma^\frac12 \tau^\frac{1}{\nu}\leq 1$ and the $O$-terms behave like symbols.
Since the Wronskian is independent of $\tau$, evaluation in the limit $\tau\to 0+$ also yields $W(\phi_0(\cdot;\gamma),\phi_1(\cdot;\gamma))=-1$.
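From the standard small-argument expansions $J_\mu(z)\sim (z/2)^\mu/\Gamma(\mu+1)$ and
$Y_\mu(z)\sim -\frac{\Gamma(\mu)}{\pi}(2/z)^\mu$, one admissible choice of the normalizing
constants is $a_\nu=\nu^{-\frac{\nu}{2}}2^{\frac{\nu}{2}}\Gamma(\frac{\nu}{2}+1)$ and
$b_\nu=-\pi\nu^{\frac{\nu}{2}}2^{-\frac{\nu}{2}}/\Gamma(\frac{\nu}{2})$; any choice with the
asymptotics \eqref{eq:Besselasym} works equally well. The following minimal Python sketch
(an illustration with sample values of $\nu$ and $\gamma$) verifies the Wronskian numerically.
\begin{verbatim}
# Minimal sketch: check W(phi_0, phi_1) = -1 for sample nu, gamma, using
# the explicit normalizations a_nu, b_nu stated above (one admissible
# choice; any constants with the asymptotics (eq:Besselasym) would do).
import numpy as np
from scipy.special import jv, yv, gamma as Gamma

nu, gam = 0.9, 0.3                             # sample parameters
a_nu = nu ** (-nu / 2) * 2 ** (nu / 2) * Gamma(nu / 2 + 1)
b_nu = -np.pi * nu ** (nu / 2) / (2 ** (nu / 2) * Gamma(nu / 2))

def phi0(t):
    return a_nu * np.sqrt(t) * jv(nu / 2, nu * np.sqrt(gam) * t ** (1 / nu))

def phi1(t):
    return b_nu * np.sqrt(t) * yv(nu / 2, nu * np.sqrt(gam) * t ** (1 / nu))

t, h = 2.0, 1e-6                               # finite-difference Wronskian
d0 = (phi0(t + h) - phi0(t - h)) / (2 * h)
d1 = (phi1(t + h) - phi1(t - h)) / (2 * h)
print(phi0(t) * d1 - d0 * phi1(t))             # prints approximately -1.0
\end{verbatim}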
Furthermore, from the Hankel asymptotics we have $|H_{\nu/2}^{(j)}(z)|\lesssim z^{-\frac12}$
for $z\geq 1$, $j=1,2$ (see \cite{Olv74}, \cite{DLMF}) and thus, the relations
$J_{\nu/2}=\frac12 (H_{\nu/2}^{(1)}+H_{\nu/2}^{(2)})$ as well as $Y_{\nu/2}=\frac{1}{2i}
(H_{\nu/2}^{(1)}-H_{\nu/2}^{(2)})$ immediately yield the bound
\begin{equation}
\label{eq:phiasymlg}
|\phi_j(\tau;\gamma)|\lesssim \gamma^{-\frac14}\tau^{-\frac12(\frac{1}{\nu}-1)},\quad j=0,1
\end{equation}
for $\gamma^\frac12 \tau^\frac{1}{\nu}\geq 1$.
Consequently, assuming sufficient decay of $\tilde{b}(\cdot;\gamma)$, a decaying solution to Eq.~\eqref{eq:tildex} is given by
$$ \tilde{y}(\tau;\gamma)=\int_\tau^\infty [\phi_1(\tau;\gamma)\phi_0(\sigma;\gamma)
-\phi_0(\tau;\gamma)\phi_1(\sigma;\gamma)
]\tilde{b}(\sigma;\gamma)d\sigma. $$
In order to obtain an expression for $x(\tau,\xi)$, we set $\gamma=\xi \tau^{-2(\frac{1}{\nu}-1)}$ and
this yields
\begin{equation}
\label{eq:H}
x(\tau,\xi)=\int_\tau^\infty H_c(\tau,\sigma,\xi)b(\sigma,(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}
\xi)d\sigma=:\mathcal{H}_c b(\tau,\xi)
\end{equation}
with
\begin{align}
\label{eq:defH}
H_c(\tau,\sigma,\xi)=&\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}
\Big[\phi_1\left (\tau;\xi \tau^{-2(\frac{1}{\nu}-1)} \right)
\phi_0\left (\sigma;\xi \tau^{-2(\frac{1}{\nu}-1)} \right)\\
&-\phi_0\left (\tau;\xi \tau^{-2(\frac{1}{\nu}-1)}\right)
\phi_1\left (\sigma;\xi \tau^{-2(\frac{1}{\nu}-1)}\right) \Big]\tilde{\lambda}(\sigma)^{-\frac52}
\rho \left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right)^{\frac12}. \nonumber
\end{align}
Now we establish bounds for $H_c(\tau,\sigma,\xi)$.
\begin{lemma}
\label{lem:boundsH}
The function $H_c$ defined by Eq.~\eqref{eq:defH} satisfies the bounds
$$ |H_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|} \left \{ \begin{array}{ll}
\xi^{-\frac12} & \tau \xi^\frac12\geq 1,\: \sigma \xi^\frac12 \geq 1 \\
\tau^{\frac12(1-\nu)}\xi^{-\frac14(1+\nu)} & 0<\tau \xi^\frac12 \leq 1,\: \sigma \xi^\frac12 \geq 1 \\
\sigma & 0<\tau \xi^\frac12 \leq 1,\: 0<\sigma \xi^\frac12 \leq 1
\end{array} \right.
$$
for all $1\leq \tau \leq \sigma$ and $\xi > 0$.
\end{lemma}
\begin{proof}
Recall that $\tilde{\lambda}(\tau)=(\nu\tau)^{-(\frac{1}{\nu}-1)}$ and thus,
$\tilde{\lambda}(\tau)^\frac52 \tilde{\lambda}(\sigma)^{-\frac52}=(\tfrac{\sigma}{\tau})^{\frac52(\frac{1}{\nu}-1)}$.
Furthermore, if $\xi\geq 1$ we have $|\rho(\xi)|^{-\frac12}\lesssim \xi^{-\frac14}$ by Lemma
\ref{lem:lgxi} and, if $(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi\geq 1$, we infer
\[ \left |\rho(\xi)^{-\frac12}\rho\left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right )^\frac12
\right |\lesssim \xi^{-\frac14}(\tfrac{\sigma}{\tau})^{\frac12(\frac{1}{\nu}-1)}\xi^\frac14 \lesssim
(\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|}\]
again by Lemma \ref{lem:lgxi}.
Note that we always have $(\tfrac{\sigma}{\tau})^{\frac12(\frac{1}{\nu}-1)}\lesssim
(\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|}$ regardless of the sign of $\frac{1}{\nu}-1$
since $1\leq \tau\leq \sigma$ is assumed throughout this proof.
If, on the other hand, $0<(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)} \xi\leq 1$ we obtain
\[ \left |\rho(\xi)^{-\frac12}\rho\left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right )^\frac12
\right |\lesssim \xi^{-\frac14}(\tfrac{\sigma}{\tau})^{-\frac12(\frac{1}{\nu}-1)}\xi^{-\frac14}
\lesssim (\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|}\]
since $|\rho(\xi)|^\frac12 \lesssim \xi^{-\frac14}$ for $0<\xi\leq 1$ by Lemma \ref{lem:smxi}.
In the case $0<\xi\leq 1$ we have $|\rho(\xi)|^{-\frac12}\lesssim \xi^\frac14$ and thus,
either
\[ \left |\rho(\xi)^{-\frac12}\rho\left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right )^\frac12
\right |\lesssim\xi^\frac14
(\tfrac{\sigma}{\tau})^{-\frac12(\frac{1}{\nu}-1)}\xi^{-\frac14} \lesssim
(\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|} \]
or
\[ \left |\rho(\xi)^{-\frac12}\rho\left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right )^\frac12
\right |\lesssim \xi^\frac14
(\tfrac{\sigma}{\tau})^{\frac12(\frac{1}{\nu}-1)}\xi^\frac14 \lesssim
(\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|} \]
by Lemmas \ref{lem:smxi} and \ref{lem:lgxi}, depending on whether $0<(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi\leq 1$
or $(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \geq 1$.
We conclude that
\begin{equation}
\label{eq:factor}
\left |\tilde{\lambda}(\tau)^\frac52 \tilde{\lambda}(\sigma)^{-\frac52}
\rho(\xi)^{-\frac12}\rho\left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right )^\frac12
\right | \lesssim (\tfrac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|}
\end{equation}
for all $1\leq \tau\leq \sigma$ and $\xi>0$.
It remains to estimate the terms involving $\phi_j$.
We have different asymptotic descriptions of $\phi_j(\tau;\gamma)$ depending on whether
$0<\gamma^\frac12 \tau^{\frac{1}{\nu}}\leq 1$ or $\gamma^\frac12 \tau^{\frac{1}{\nu}}\geq 1$.
For $\gamma=\xi \tau^{-2(\frac{1}{\nu}-1)}$ this distinction reads
$0<\tau\xi^\frac12\leq 1$ or $\tau \xi^\frac12 \geq 1$.
Thus, in principle we have to deal with the four cases
\begin{enumerate}
\item $\tau \xi^\frac12 \geq 1$ and $\sigma \xi^\frac12\geq 1$
\item $0<\tau \xi^\frac12 \leq 1$ and $\sigma \xi^\frac12 \geq 1$
\item $0<\tau \xi^\frac12 \leq 1$ and $0<\sigma \xi^\frac12 \leq 1$
\item $\tau \xi^\frac12 \geq 1$ and $0<\sigma \xi^\frac12 \leq 1$.
\end{enumerate}
However, since we only consider $\tau \leq \sigma$, case $(4)$ cannot occur.
\begin{enumerate}
\item We use the bound from the Hankel asymptotics stated in Eq.~\eqref{eq:phiasymlg} to obtain
$$ |\phi_0(\tau;\gamma)\phi_1(\sigma;\gamma)|\lesssim \gamma^{-\frac12}\tau^{-\frac12(\frac{1}{\nu}-1)}
\sigma^{-\frac12(\frac{1}{\nu}-1)}\lesssim (\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|}\xi^{-\frac12} $$
by evaluation at $\gamma=\xi
\tau^{-2(\frac{1}{\nu}-1)}$.
This bound is symmetric in $\tau$ and $\sigma$ and thus, by Eq.~\eqref{eq:factor}, we infer
$$ |H_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|} \xi^{-\frac12}. $$
\item From Eqs.~\eqref{eq:phiasymsm} and \eqref{eq:phiasymlg} we have
\begin{align*}
|\phi_1(\tau;\gamma)\phi_0(\sigma;\gamma)|&\lesssim \gamma^{-\frac14(1+\nu)}\sigma^{-\frac12(\frac{1}{\nu}-1)}\\
|\phi_0(\tau;\gamma)\phi_1(\sigma;\gamma)|&\lesssim \gamma^{-\frac14(1-\nu)}\tau \sigma^{-\frac12(\frac{1}{\nu}-1)}
\lesssim \gamma^{-\frac14(1+\nu)}\sigma^{-\frac12(\frac{1}{\nu}-1)}
\end{align*}
and thus,
\begin{align*}|H_c(\tau,\sigma,\xi)|&\lesssim (\tfrac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|} \tau^{\frac12(\frac{1}{\nu}-\nu)}
\sigma^{-\frac12(\frac{1}{\nu}-1)}\xi^{-\frac14(1+\nu)} \\
&\lesssim
(\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|} \tau^{\frac12(1-\nu)}\xi^{-\frac14(1+\nu)}
\end{align*}
by Eq.~\eqref{eq:factor}.
\item Eq.~\eqref{eq:phiasymsm} yields
$$ |\phi_0(\tau;\gamma)\phi_1(\sigma;\gamma)|\lesssim \tau,\quad
|\phi_1(\tau;\gamma)\phi_0(\sigma;\gamma)|\lesssim \sigma $$
and, by Eq.~\eqref{eq:factor}, this implies
$|H_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|} (\tau+\sigma)
\lesssim (\tfrac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|} \sigma$.
\end{enumerate}
\end{proof}
We shall also require bounds for the differentiated kernel
\begin{equation}
\label{eq:defdiffH}
\hat{H}_c(\tau,\sigma,\xi):=
[\partial_\tau-\beta_\nu(\tau)(2\xi \partial_\xi+\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)})]
H_c(\tau,\sigma,\xi)
\end{equation}
where, as always, $\beta_\nu(\tau)
=-(\frac{1}{\nu}-1)\tau^{-1}$.
These are established next.
\begin{lemma}
\label{lem:diffH}
The differentiated kernel satisfies the bounds
\[ |\hat{H}_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|}
\left \{ \begin{array}{ll}
1 & \tau \xi^\frac12\geq 1,\: \sigma \xi^\frac12 \geq 1 \\
\tau^{-\frac12(1-\nu)}\xi^{-\frac14(1-\nu)} & 0<\tau \xi^\frac12 \leq 1,\: \sigma \xi^\frac12 \geq 1 \\
1 & 0<\tau \xi^\frac12 \leq 1,\: 0<\sigma \xi^\frac12 \leq 1
\end{array} \right.
\]
for all $1\leq \tau \leq \sigma$ and $\xi > 0$.
\end{lemma}
\begin{proof}
Note that for any differentiable function $f$ of two variables we have
\begin{align*}
&[\partial_\tau-\beta_\nu(\tau)(2\xi \partial_\xi+\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)})]
\left (\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}f(\tau,\xi)
\right ) \\
&\quad =\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}[\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi]
f(\tau,\xi).
\end{align*}
By Eq.~\eqref{eq:defH} and
\begin{align*} [\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi]\phi_j\left (\tau; \xi\tau^{-2(\frac{1}{\nu}-1)}
\right )&=\partial_1 \phi_j\left (\tau; \xi\tau^{-2(\frac{1}{\nu}-1)}
\right ) \\
[\partial_\tau-2\beta_\nu(\tau)\xi\partial_\xi]\phi_j\left (\sigma; \xi\tau^{-2(\frac{1}{\nu}-1)}
\right )&=0
\end{align*}
for $j=0,1$ we therefore obtain
\begin{align*}
\hat{H}_c(\tau,\sigma,\xi)=&\tilde{\lambda}(\tau)^\frac52 \rho(\xi)^{-\frac12}
\Big[\partial_1 \phi_1\left (\tau;\xi \tau^{-2(\frac{1}{\nu}-1)} \right)
\phi_0\left (\sigma;\xi \tau^{-2(\frac{1}{\nu}-1)} \right)\\
&-\partial_1 \phi_0\left (\tau;\xi \tau^{-2(\frac{1}{\nu}-1)}\right)
\phi_1\left (\sigma;\xi \tau^{-2(\frac{1}{\nu}-1)}\right)\Big ]
\tilde{\lambda}(\sigma)^{-\frac52}
\rho \left ((\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi \right)^{\frac12}.
\end{align*}
From Eq.~\eqref{eq:phiasymsm} we immediately infer
\begin{equation}
\label{eq:pphism}
\begin{aligned}
\partial_1 \phi_0 (\tau;\gamma)&=\gamma^{\frac{\nu}{4}}[1+O(\gamma \tau^{\frac{2}{\nu}})] \\
\partial_1 \phi_1(\tau;\gamma)&= O(\gamma^{\frac{\nu}{4}})
\end{aligned}
\end{equation}
for $0<\gamma^\frac12 \tau^{\frac{1}{\nu}} \leq 1$.
Furthermore, since
\begin{align*}
\partial_1 \phi_0(\tau;\gamma)&=\tfrac12 \tau^{-\frac12}a_\nu J_{\nu/2}(\nu \gamma^\frac12
\tau^{\frac{1}{\nu}})+\gamma^\frac12 \tau^{\frac{1}{\nu}-\frac12}a_\nu J'_{\nu/2}
(\nu \gamma^\frac12 \tau^{\frac{1}{\nu}}) \\
\partial_1 \phi_1(\tau;\gamma)&=\tfrac12 \tau^{-\frac12}b_\nu Y_{\nu/2}(\nu \gamma^\frac12
\tau^{\frac{1}{\nu}})+\gamma^\frac12 \tau^{\frac{1}{\nu}-\frac12}b_\nu Y'_{\nu/2}
(\nu \gamma^\frac12 \tau^{\frac{1}{\nu}}),
\end{align*}
see Eq.~\eqref{eq:phi01},
the identity $C'_{\nu/2}=\frac12 (C_{\nu/2-1}-C_{\nu/2+1})$, $C \in \{J,Y\}$ \cite{DLMF},
and the asymptotics of the Hankel functions yield $|C'_{\nu/2}(z)|\lesssim z^{-\frac12}$ for
$z \gtrsim 1$ and thus,
\begin{equation}
\label{eq:pphilg}
|\partial_1\phi_j(\tau;\gamma)|\lesssim \gamma^{-\frac14}\tau^{-\frac12(1+\frac{1}{\nu})}
+\gamma^{\frac14}\tau^{\frac12(\frac{1}{\nu}-1)}\lesssim \gamma^{\frac14}\tau^{\frac12(\frac{1}{\nu}-1)}
\end{equation}
for $\gamma^\frac12 \tau^{\frac{1}{\nu}}\geq 1$ and $j=0,1$.
As in the proof of Lemma \ref{lem:boundsH} we now distinguish three cases and we always assume
$1\leq \tau\leq \sigma$.
\begin{enumerate}
\item If $\tau \xi^\frac12\geq 1$ and $\sigma \xi^\frac12 \geq 1$ we use
Eqs.~\eqref{eq:phiasymlg} and \eqref{eq:pphilg} to conclude
\[ |\partial_1 \phi_1(\tau; \gamma)\phi_0(\sigma; \gamma)|+
|\partial_1 \phi_0(\tau; \gamma)\phi_1(\sigma; \gamma)|\lesssim (\tfrac{\sigma}{\tau})^{\frac12|\frac{1}{\nu}-1|} \]
which, by Eq.~\eqref{eq:factor}, implies
$|\hat{H}_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{\frac72 |\frac{1}{\nu}-1|}$.
\item If $0<\tau \xi^\frac12\leq 1$ and $\sigma \xi^\frac12\geq 1$ we obtain
\[ |\partial_1 \phi_1(\tau;\gamma)\phi_0(\sigma;\gamma)|+|\partial_1 \phi_0(\tau;\gamma)
\phi_1(\sigma;\gamma)|\lesssim \gamma^{-\frac14(1-\nu)}\sigma^{-\frac12(\frac{1}{\nu}-1)} \]
by Eqs.~\eqref{eq:pphism} and \eqref{eq:phiasymlg}. Hence,
upon setting $\gamma=\xi \tau^{-2(\frac{1}{\nu}-1)}$, we conclude
\[ |\hat{H}_c(\tau,\sigma,\xi) |\lesssim (\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|}
\tau^{-\frac12(1-\nu)}\xi^{-\frac14(1-\nu)}. \]
\item In the case $0<\tau \xi^\frac12\leq 1$ and $0<\sigma \xi^\frac12\leq 1$ we have, by
Eqs.~\eqref{eq:phiasymsm} and \eqref{eq:pphism},
\[ |\partial_1 \phi_1(\tau;\gamma)\phi_0(\sigma;\gamma)|+|\partial_1 \phi_0(\tau;\gamma)
\phi_1(\sigma;\gamma)|\lesssim 1+\gamma^\frac{\nu}{2} \sigma\lesssim 1 \]
which yields
$|\hat{H}_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|}$
by Eq.~\eqref{eq:factor}.
\end{enumerate}
\end{proof}
\subsection{Estimates for the solution operator}
In order to set up our main contraction argument for Eq.~\eqref{eq:sysFourier} we have to introduce
appropriate function spaces.
\begin{definition}
\label{def:spaces}
For $\delta,\alpha \in \mathbb{R}$ and $p \in [1,\infty)$ we define norms $\|\cdot\|_{X^{p,\alpha}_\delta}$ and
$\|\cdot\|_{Y^{p,\alpha}}$ by
\begin{align*} \|f\|_{X^{p,\alpha}_\delta}&:=\left (\int_0^\infty \left |f(\xi)\left (\xi \langle \xi\rangle^{-1}
\right)^{\frac12-\delta}
\right |^p
d\xi \right )^{1/p}+\left (\int_0^\infty |f(\xi)|^2 \xi \langle \xi \rangle^{2\alpha}
\rho(\xi)d\xi \right )^{1/2}, \\
\|f\|_{Y^{p,\alpha}}&:=\|f\|_{L^p(0,\infty)}+\left (\int_0^\infty |f(\xi)|^2 \langle \xi \rangle^{2\alpha}
\rho(\xi)d\xi \right )^{1/2}.
\end{align*}
Furthermore, for a function $b$ of two variables and a Banach space $X$ we write
$$ \|b\|_{L^{\infty,\beta}_{\tau_0}X}:=\sup_{\tau \geq \tau_0}\tau^\beta
\|b(\tau,\cdot)\|_{X} $$
where $\beta\geq 0$ and $\tau_0>0$ (in what follows we always assume
$\tau_0$ to be sufficiently large).
\end{definition}
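To fix ideas, the following minimal Python sketch (an illustration only; it uses a model
density interpolating the asymptotics of Lemmas \ref{lem:smxi} and \ref{lem:lgxi} in place of
the true spectral measure, together with sample values of $p$, $\alpha$, $\delta$) evaluates
the two components of $\|\cdot\|_{X^{p,\alpha}_\delta}$ for a test function.
\begin{verbatim}
# Minimal sketch of the norm ||.||_{X^{p,alpha}_delta}.  The density rho
# below is a model with rho(xi) ~ xi^(-1/2)/(3 pi) for small xi and
# xi^(1/2)/pi for large xi (an assumption standing in for the true
# spectral measure); p, alpha, delta are sample admissible values.
import numpy as np

xi = np.linspace(1e-6, 100.0, 1_000_001)
dxi = xi[1] - xi[0]
rho = np.where(xi <= 1.0, xi ** -0.5 / (3.0 * np.pi), xi ** 0.5 / np.pi)

def X_norm(f, p=4.0, alpha=0.125, delta=0.25):
    w = (xi / np.sqrt(1.0 + xi**2)) ** (0.5 - delta)   # weight xi<xi>^(-1)
    lp = (np.sum(np.abs(f * w) ** p) * dxi) ** (1.0 / p)
    l2 = np.sqrt(np.sum(np.abs(f) ** 2 * xi * (1.0 + xi**2) ** alpha * rho) * dxi)
    return lp + l2

print(X_norm(np.exp(-xi)))  # finite despite the xi^(-1/2) singularity of rho
\end{verbatim}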
For the following it is convenient to introduce
the notation
\[ \mathcal B_{c,\nu}b(\tau,\xi):=[\partial_\tau-\beta_\nu(\tau)(2\xi \partial_\xi
+\tfrac52+\tfrac{\xi\rho'(\xi)}{\rho(\xi)})]b(\tau,\xi). \]
\begin{proposition}
\label{prop:H}
Fix a $\delta$ with $2|\frac{1}{\nu}-1|<\delta<\frac12$
and let $p \in (1,\infty)$ be so large that
\[ p'(1-\delta+2|\tfrac{1}{\nu}-1|)<1 \]
where $p'$ is the H\"older
conjugate of $p$, i.e., $\frac{1}{p}+\frac{1}{p'}=1$.
Suppose further that $\tau_0\geq 1$, $\beta\geq \frac52$, and
$\alpha \in [0,1]$ are fixed.
Then we have the bounds
\begin{align*}
\|\mathcal{H}_c b\|_{L^{\infty,\beta-1-2\delta}_{\tau_0}{X^{p,\alpha}_\delta}} &\lesssim \|b\|_{L^{\infty,\beta}_{\tau_0}{Y^{p,\alpha}}} \\
\|\mathcal B_{c,\nu}\mathcal{H}_c b\|_{L^{\infty,\beta-1}_{\tau_0}{Y^{p,\alpha}}} &\lesssim \|b\|_{L^{\infty,\beta}_{\tau_0}{Y^{p,\alpha}}}.
\end{align*}
\end{proposition}
\begin{proof}
Let $q\in (1,\infty)$.
By H\"older's inequality we have
\begin{align*} |\mathcal{H}_cb(\tau,\xi)|&\leq \int_\tau^\infty |H_c(\tau,\sigma,\xi)
b(\sigma,(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi)|d\sigma \\
&\leq A_{\mu,q'}(\tau,\xi)\left (\int_\tau^\infty |\sigma^\mu
b(\sigma,(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi)|^q d\sigma \right )^{1/q}
\end{align*}
with $\mu \in \mathbb{R}$ and
\[ A_{\mu,q'}(\tau,\xi):=\left ( \int_\tau^\infty |\sigma^{-\mu}H_c(\tau,\sigma,\xi)|^{q'}d\sigma
\right )^{1/q'}. \]
We claim that
\begin{equation}
\label{eq:A}
A_{\mu,q'}(\tau,\xi)\lesssim \left \{ \begin{array}{ll}
\tau^{-\mu+\frac{1}{q'}}\xi^{-\frac12} & \tau\xi^\frac12 \geq 1 \\
\tau^{-\mu+\frac{1}{q'}+1} & 0<\tau\xi^\frac12\leq 1 \end{array} \right .
\end{equation}
provided that $\mu>1+\frac{1}{q'}+5|\frac{1}{\nu}-1|$.
Indeed, if $\tau\xi^\frac12\geq 1$ we have from Lemma \ref{lem:boundsH} the bound
$|H_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{\frac72|\frac{1}{\nu}-1|}\xi^{-\frac12}$ and
this implies the first estimate in Eq.~\eqref{eq:A}.
In order to prove the second bound in Eq.~\eqref{eq:A} we use the estimate
$|H_c(\tau,\sigma,\xi)|\lesssim (\tfrac{\sigma}{\tau})^{5|\frac{1}{\nu}-1|}\sigma$,
which follows from Lemma \ref{lem:boundsH} in all three regimes (for $\nu$ sufficiently
close to $1$), and this yields
$A_{\mu,q'}(\tau,\xi)\lesssim \tau^{-\mu+\frac{1}{q'}+1}$ in the case $0<\tau \xi^\frac12\leq 1$,
as claimed.
Now note that Eq.~\eqref{eq:A} implies
\[ A_{\mu,q'}(\tau,\xi)\lesssim \tau^{-\mu+\frac{1}{q'}+2\delta} \xi^{-\frac12+\delta} \lesssim
\tau^{-\mu+\frac{1}{q'}+2\delta}(\xi \langle \xi \rangle^{-1})^{-\frac12+\delta} \]
for all $\xi>0$ and thus,
by interchanging the order of integration, we obtain
\begin{align*} \int_0^\infty \left |\mathcal H_c b(\tau,\xi) (\xi \langle \xi \rangle^{-1})^{\frac12-\delta}
\right |^p d\xi&\lesssim \int_\tau^\infty \int_0^\infty
\left |A_{\mu,p'}(\tau,\xi)(\xi\langle \xi \rangle^{-1})^{\frac12-\delta}\right |^p
\left |\sigma^\mu
b(\sigma,(\tfrac{\sigma}{\tau})^{2(\frac{1}{\nu}-1)}\xi)\right |^pd\xi d\sigma \\
&\lesssim \tau^{p(-\mu+\frac{1}{p'}+2\delta+2(\frac{1}{\nu}-1))}\int_\tau^\infty \int_0^\infty
|\sigma^{\mu-2(\frac{1}{\nu}-1)} b(\sigma,\eta)|^p d\eta d\sigma.
\end{align*}
Now we set $\mu=\beta-\delta$ which is admissible since $\beta-\delta>1+\frac{1}{p'}+5|\frac{1}{\nu}-1|$
provided $\nu$ is sufficiently close to $1$ which we may safely assume.
Hence, we infer
\begin{align*} \int_0^\infty \left |\mathcal H_c b(\tau,\xi) (\xi \langle \xi \rangle^{-1})^{\frac12-\delta}
\right |^p d\xi &\lesssim \tau^{p(-\beta+\frac{1}{p'}+3\delta+2(\frac{1}{\nu}-1))}\|b\|_{L^{\infty,\beta}_{\tau_0}L^p(0,\infty)}^p
\int_\tau^\infty \sigma^{-p(\delta+2(\frac{1}{\nu}-1))}d\sigma \\
&\lesssim \tau^{p(-\beta+1+2\delta)}\|b\|_{L^{\infty,\beta}_{\tau_0}{Y^{p,\alpha}}}^p
\end{align*}
where the last step is justified since $1-\delta-2(\frac{1}{\nu}-1)<\frac{1}{p'}=1-\frac{1}{p}$ and this implies
$-p(\delta+2(\frac{1}{\nu}-1))<-1$.
For the $L^2$ based part in $\|\cdot\|_{X^{p,\alpha}_\delta}$ we proceed similarly and obtain from Eq.~\eqref{eq:A}
the bound
$A_{\mu,2}(\tau,\xi)\lesssim \tau^{-\mu+\frac12}\xi^{-\frac12}$
for all $\xi>0$.
This shows
\begin{align*}
\int_0^\infty |\mathcal{H}_cb(\tau,\xi)|^2 \xi \langle \xi \rangle^{2\alpha}\rho(\xi)d\xi
&\lesssim \tau^{2(-\mu+\frac12)}\int_\tau^\infty \int_0^\infty
\left |\sigma^\mu b(\sigma,\omega(\tau,\sigma)^{-1}\xi) \right |^2
\langle \xi \rangle^{2\alpha}\rho(\xi)d\xi d\sigma \\
&=\tau^{2(-\mu+\frac12)}\int_\tau^\infty \int_0^\infty
\left |\sigma^\mu b(\sigma,\eta) \right |^2
\langle \omega(\tau,\sigma)\eta \rangle^{2\alpha}\omega(\tau,\sigma)\rho(\omega(\tau,\sigma)\eta)
d\eta d\sigma \\
\end{align*}
where we write $\omega(\tau,\sigma)=(\tfrac{\sigma}{\tau})^{-2(\frac{1}{\nu}-1)}$.
We clearly have
$\langle \omega(\tau,\sigma)\eta \rangle^{2\alpha}\lesssim (\frac{\sigma}{\tau})^{4|\frac{1}{\nu}-1|}
\langle \eta \rangle^{2\alpha}$ for all
$\eta>0$ and also, $\omega(\tau,\sigma)\rho(\omega(\tau,\sigma)\eta)\lesssim (\frac{\sigma}{\tau})^{3|\frac{1}{\nu}-1|}
\rho(\eta)$ provided that
$\omega(\tau,\sigma)\eta\geq 1$, cf.~Lemma \ref{lem:lgxi}.
In the case $0<\omega(\tau,\sigma)\eta\leq 1$, Lemma \ref{lem:smxi} implies
\[ \omega(\tau,\sigma)\rho(\omega(\tau,\sigma)\eta)\lesssim \omega(\tau,\sigma)^{\frac12}
\eta^{-\frac12}\lesssim (\tfrac{\sigma}{\tau})^{|\frac{1}{\nu}-1|}\rho(\eta). \]
Consequently, by choosing $\mu=\beta-\frac58$ we infer
\begin{align*}\int_0^\infty |\mathcal{H}_cb(\tau,\xi)|^2 \xi \langle \xi \rangle^{2\alpha}\rho(\xi)d\xi
&\lesssim \tau^{2(-\beta+\frac98-\frac72|\frac{1}{\nu}-1|)}\|b\|_{L^{\infty,\beta}_{\tau_0}{Y^{p,\alpha}}}^2
\int_\tau^\infty \sigma^{-\frac54+7|\frac{1}{\nu}-1|}d\sigma \\
&\lesssim \tau^{2(-\beta+1)}\|b\|_{L^{\infty,\beta}_{\tau_0}{Y^{p,\alpha}}}^2
\end{align*}
and this finishes the proof of the first estimate.
For the second bound note that the operator $\mathcal B_{c,\nu}\mathcal{H}_c$ has the kernel
$\hat{H}_c(\tau,\sigma,\xi)$ from Lemma
\ref{lem:diffH} and based on the bounds given there it is straightforward to prove the claimed
estimate by repeating the above arguments.
\end{proof}
It is also an easy exercise to prove an appropriate bound for the discrete part $\mathcal{H}_d$.
\begin{lemma}
\label{lem:Hd}
Let $\beta>0$ and suppose $b_d \in L^{\infty,\beta}_{\tau_0}$. Then
$$ \|\mathcal{H}_db_d\|_{L^{\infty,\beta}_{\tau_0}}\lesssim \|b_d\|_{L^{\infty,\beta}_{\tau_0}},\quad
\|(\mathcal{H}_db_d)'\|_{L^{\infty,\beta}_{\tau_0}}\lesssim \|b_d\|_{L^{\infty,\beta}_{\tau_0}}. $$
\end{lemma}
\begin{proof}
By definition (Eq.~\eqref{eq:defHd}) we have
\begin{align*}
\mathcal{H}_d b_d(\tau)&=-\tfrac12 |\xi_d|^{-\frac12} e^{-|\xi_d|^{1/2}\tau}\int_{\tau_0}^\tau
e^{|\xi_d|^{1/2}\sigma}
b_d(\sigma)d\sigma \\
&\quad -\tfrac12 |\xi_d|^{-\frac12} e^{|\xi_d|^{1/2}\tau}\int_\tau^\infty e^{-|\xi_d|^{1/2}\sigma}
b_d(\sigma)d\sigma \\
&=:I_1(\tau)+I_2(\tau).
\end{align*}
It is evident that $|I_2(\tau)|\lesssim \langle \tau \rangle^{-\beta}$ and in order to estimate
$I_1(\tau)$
we note that
\[ |I_1(\tau)|\lesssim \sup_{\sigma>\tau_0}\sigma^\beta |b_d(\sigma)|
e^{-|\xi_d|^{1/2}\tau}\int_{\tau_0}^\tau
e^{|\xi_d|^{1/2}\sigma}
\sigma^{-\beta}d\sigma \]
and the first assertion follows by performing one integration by parts.
The proof of the second bound is identical.
\end{proof}
\section{Estimates for the nonlinear and inhomogeneous terms}
\label{sec:nonlinhom}
We provide estimates (in terms of the spaces in Definition \ref{def:spaces}) for the various
contributions on the right-hand side of our main equation \eqref{eq:sysFourier} that do not
involve the operator $\mathcal{K}$ from the transference identity.
Thus, this section is mainly concerned with the nonlinear contributions.
In order to treat the nonlinearity, we first discuss mapping properties of the distorted Fourier transform $\mathcal F$.
These allow us to transfer the problem to the physical side where the nonlinearity can be estimated
using standard tools.
The main ingredients are basic inequalities and interpolation theory of Sobolev
spaces as well as the fractional Leibniz rule.
As a consequence, we infer the crucial contraction property of the nonlinearity on our
spaces.
\subsection{The inhomogeneous term}
\label{sec:inhom}
We start with the inhomogeneous term $\hat{e}_2$ as defined in Eq.~\eqref{eq:defhate2}.
\begin{lemma}
\label{lem:Fe2}
For any fixed $\epsilon>0$ we have
$$ \hat{e}_2 \in L^{\infty,3-\epsilon-3|\frac{1}{\nu}-1|}{Y^{p,\alpha}},\quad
\hat{e}_2(\cdot,\xi_d)\in L^{\infty,3-\epsilon-3|\frac{1}{\nu}-1|} $$
for all $p > 1$ and $\alpha \in [0,\frac14)$.
\end{lemma}
\begin{proof}
We distinguish between $\xi\lesssim 1$ (including $\xi=\xi_d$) and $\xi \gtrsim 1$.
If $\xi\lesssim 1$ we have from Corollary \ref{cor:phi} the bound $|\phi(R,\xi)|\lesssim 1$ and
from Lemma \ref{lem:v1} we recall
$$ \left |\tilde{\lambda}(\tau)^{-2}e_2\left (\nu \tilde{\lambda}(\tau)^{-1}\tau,
\tilde{\lambda}(\tau)^{-1}R\right ) \right |\lesssim \tilde{\lambda}(\tau)^{\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|}\langle
R \rangle^{-1} $$
in the truncated cone $r\leq t-c$ which corresponds to $R\leq \nu\tau-\tilde{\lambda}(\tau)c$.
Thus, by Eq.~\eqref{eq:defhate2} we obtain
\begin{align*} |\hat{e}_2(\tau,\xi)|&\lesssim \tilde{\lambda}(\tau)^{\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|}
\int_0^\infty \chi\left (\tilde{\lambda}(\tau)^{-1}\tfrac{\nu \tau-R}{c}\right)dR \\
&\lesssim \tilde{\lambda}(\tau)^{\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|} \int_0^{\tau}dR\lesssim
\tau^{-3+\epsilon+3 |\frac{1}{\nu}-1|}
\end{align*}
by recalling that $\tilde{\lambda}(\tau)\simeq \tau^{-(\frac{1}{\nu}-1)}$
since the cut-off localizes to the lightcone $R\leq \nu \tau\lesssim \tau$.
If $\xi\gtrsim 1$ we use
$$ \phi(R,\xi)=O_\mathbb{C}(\xi^{-\frac12})e^{i\sqrt{\xi}R}[1+O_\mathbb{C}
(\langle R\rangle^{-3}\xi^{-\frac12})]
+O_\mathbb{C}(\xi^{-\frac12})e^{-i\sqrt{\xi}R}[1+O_\mathbb{C}
(\langle R\rangle^{-3}\xi^{-\frac12})]$$
with symbol behavior of all $O$-terms (Lemma \ref{lem:Jostlgxi} and Corollary \ref{cor:phi})
and perform one integration by parts to obtain
\begin{align*}
|\hat{e}_2(\tau,\xi)|&\lesssim \tilde{\lambda}(\tau)^{\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|}
\xi^{-1}\int_0^{\tau}\langle R \rangle^{-1}dR \\
&\quad +\tilde{\lambda}(\tau)^{-\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|}\xi^{-1}\int_0^\infty \left |\chi' \left (
\tilde{\lambda}(\tau)^{-1}\tfrac{\nu \tau-R}{c}\right)\right |dR \\
&\lesssim \tilde{\lambda}(\tau)^{\frac12}\tau^{-3+\epsilon+\frac52|\frac{1}{\nu}-1|}\xi^{-1}+
\tilde{\lambda}(\tau)^{-\frac12}\tau^{-4+\epsilon+\frac52|\frac{1}{\nu}-1|}\xi^{-1}
\int_{\nu \tau-\tilde{\lambda}(\tau)c}^{\nu \tau-\frac12 \tilde{\lambda}(\tau)c}dR \\
&\lesssim \tau^{-3+\epsilon+3 |\frac{1}{\nu}-1|}\xi^{-1}.
\end{align*}
\end{proof}
\subsection{Mapping properties of $\mathcal{F}$}
It is of fundamental importance to understand the action of the distorted Fourier transform
on the spaces ${X^{p,\alpha}_\delta}$.
In the following we use the standard Sobolev
spaces $W^{s,p}(\mathbb{R}^3)$ and in order to make sense of $\|f\|_{W^{s,p}(\mathbb{R}^3)}$ for a function
$f: [0,\infty)\to \mathbb{C}$, we identify $f$ with the radial function $x \mapsto f(|x|)$ on $\mathbb{R}^3$.
In this sense we have, for instance, $\|f\|_{L^2(0,\infty)}\simeq \||\cdot|^{-1}f\|_{L^2(\mathbb{R}^3)}$.
Note that if $f: [0,\infty) \to \mathbb{C}$ is smooth and $f^{(2k-1)}(0)=0$ for
all $k \in \mathbb{N}$ then $x \mapsto f(|x|)$ is a smooth function on $\mathbb{R}^3$.
To begin with, we write
$$ \|f\|_{L^{2,\alpha}_\rho}^2:=\int_0^\infty |f(\xi)|^2 \langle \xi\rangle^{2\alpha}\rho(\xi)d\xi $$
and recall a fundamental result from \cite{KST09}.
\begin{lemma}
\label{lem:H2alpha}
Let $\alpha\geq 0$. Then
$$ \|(a,f)\|_{\mathbb{C}\times L^{2,\alpha}_\rho}\simeq \||\cdot|^{-1}\mathcal{F}^{-1}(a,f)\|_{H^{2\alpha}(\mathbb{R}^3)} $$
for all $(a,f) \in \mathbb{C} \times L^{2,\alpha}_\rho$ or, equivalently,
$$ \|\mathcal{F}(|\cdot|\,g)\|_{\mathbb{C}\times L^{2,\alpha}_\rho}\simeq \|g\|_{H^{2\alpha}(\mathbb{R}^3)} $$
for all radial $g \in H^{2\alpha}(\mathbb{R}^3)$.
\end{lemma}
\begin{proof}
This is a consequence of the unitarity of $\mathcal{U}$, see \cite{KST09}.
\end{proof}
\begin{lemma}
\label{lem:F-1}
Fix a small $\delta >0$ and let $p>1$ be so large that $p'(1-\delta)<1$. Furthermore, denote by $\chi$ a smooth
cut-off function which satisfies $\chi(\xi)=0$ for $\xi \in [0,1]$ and $\chi(\xi)=1$ for $\xi\geq 2$.
Then, for any $\alpha\geq 0$, the following estimates hold:
\begin{enumerate}
\item $\||\cdot|^{-1}\mathcal{F}^{-1}(a,\chi f)\|_{H^{2\alpha+1}(\mathbb{R}^3)}\lesssim \|(a,f)\|_{\mathbb{C}\times {X^{p,\alpha}_\delta}}$,
\item $\|\mathcal{F}^{-1}(a,(1-\chi)f)\|_{L^\infty(0,\infty)}\lesssim \|(a,f)\|_{\mathbb{C}\times {X^{p,\alpha}_\delta}}$,
\item $\||\cdot|^{-1}\mathcal{F}^{-1}(a,(1-\chi)f)\|_{L^q(\mathbb{R}^3)}\lesssim \|(a,f)\|_{\mathbb{C}\times {X^{p,\alpha}_\delta}}$
for any $q\in (3,\infty]$,
\item $\||\cdot|^{-1}\mathcal{F}^{-1}(a,(1-\chi)f)\|_{\dot{W}^{2\theta,\frac{2}{\theta+\frac{2}{q}(1-\theta)}}(\mathbb{R}^3)}\lesssim
\|(a,f)\|_{\mathbb{C}\times {X^{p,\alpha}_\delta}}$ for any $q \in (3,\infty)$ and $\theta \in [0,1]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that $\mathcal{F}^{-1}(a,f)$ is given by
$$ \mathcal{F}^{-1}(a,f)=a\frac{\phi(\cdot,\xi_d)}{\|\phi(\cdot,\xi_d)\|^2_{L^2(0,\infty)}}
+\int_0^\infty \phi(\cdot,\xi)f(\xi)\rho(\xi)d\xi. $$
Since $|\cdot|^{-1}\phi(|\cdot|,\xi_d) \in C^\infty(\mathbb{R}^3)$ with exponential decay towards infinity,
the estimates for the discrete part follow immediately.
Thus, we focus on the integral term.
\begin{enumerate}
\item By Lemma \ref{lem:H2alpha} it suffices to note that
$$ \|\chi f\|_{L^{2,\alpha+1/2}_\rho}\lesssim \|\chi f\|_{X^{p,\alpha}_\delta}. $$
\item \label{item:Linf}
By Corollary \ref{cor:phi} we have $|\phi(R,\xi)|\lesssim 1$ for all $R,\xi \geq 0$ and thus,
H\"older's inequality yields
$$ \left |\int_0^\infty \phi(R,\xi)[1-\chi(\xi)]f(\xi)\rho(\xi)d\xi \right |
\lesssim \left (\int_0^2 |f(\xi)\xi^{\frac12-\delta}|^p d\xi\right )^{1/p}
\left ( \int_0^2 |\xi^{-\frac12+\delta} \rho(\xi)|^{p'}d\xi \right )^{1/p'}$$
and the right-hand side of this inequality is finite and controlled by $\|f\|_{X^{p,\alpha}_\delta}$ since
$|\rho(\xi)\xi^{-\frac12+\delta}|\lesssim \xi^{-1+\delta}$ for $\xi \in (0,2)$
(Lemma \ref{lem:smxi}) and $p'(-1+\delta)>-1$ by assumption on $p$.
\item \label{item:Lq}
From Lemma \ref{lem:Phi} and Corollary \ref{cor:phi} we obtain
$$ \sup_{\xi \in (0,2)}\left |\frac{\phi(R,\xi)}{R} \right| \lesssim \langle R \rangle^{-1} $$
for all $R \geq 0$ and by repeating the argument in (\ref{item:Linf}) this implies
$$ |R^{-1}\mathcal{F}^{-1}(a,(1-\chi)f)(R)|\lesssim \langle R \rangle^{-1}\|(a,(1-\chi)f)\|_{\mathbb{C}\times
{X^{p,\alpha}_\delta}} $$
which yields
$$ \||\cdot|^{-1}\mathcal{F}^{-1}(a,(1-\chi)f)\|_{L^q(\mathbb{R}^3)}\lesssim \|(a,(1-\chi)f)\|_{\mathbb{C}\times
{X^{p,\alpha}_\delta}} $$
for any $q \in (3,\infty]$.
\item
From Lemma \ref{lem:H2alpha} we have
$$ \||\cdot|^{-1}\mathcal{F}^{-1}(a,(1-\chi)f)\|_{\dot{H}^2(\mathbb{R}^3)}\lesssim \|(a,(1-\chi)f)\|_{\mathbb{C}\times
{X^{p,\alpha}_\delta}} $$
and the claim follows from this and (\ref{item:Lq}) by complex interpolation since
$[L^q,\dot{H}^2]_\theta=
\dot{W}^{2\theta,\frac{2}{\theta+\frac{2}{q}(1-\theta)}}$, see \cite{BL1976}.
\end{enumerate}
\end{proof}
\subsection{The operator $\mathcal{N}_1$}
In order to estimate the operator $\mathcal{N}_1$, defined in Eq.~\eqref{eq:defR}, we
first prove the following auxiliary result.
\begin{lemma}
\label{lem:FR}
Let $p,\delta$ be as in Lemma \ref{lem:F-1} and let
$\varphi: [0,\infty) \to \mathbb{R}$ be a function satisfying
$$ |\varphi^{(k)}(R)|\leq C_k \langle R \rangle^{-\gamma-k} $$
for some $\gamma > \frac32$ and all $R\geq 0$, $k\in \mathbb{N}_0$.
Then $\varphi \in H^2(\mathbb{R}^3)$ and there exists a small $\epsilon>0$ such that
\begin{align*}
\|\mathcal{F}(|\cdot|\varphi g)\|_{\mathbb{C}\times Y^{p,\frac18}}
\lesssim &\left [\|\varphi\|_{H^2(\mathbb{R}^3)}+\|\varphi\|_{L^1(0,\infty)}\right ]
\Big [\||\cdot|g_-\|_{L^\infty(0,\infty)}+\|g_-\|_{L^\infty(0,\infty)} \\
&+\|g_-\|_{\dot{W}^{\frac14,16-\epsilon}
(\mathbb{R}^3)}+\|g_+\|_{H^{\frac54}(\mathbb{R}^3)} \Big ]
\end{align*}
for all $g=g_-+g_+$ where $g_-$ and $g_+$ belong to the respective spaces on the right-hand side.
\end{lemma}
\begin{proof}
Note first that $\varphi \in L^2(\mathbb{R}^3)$.
Furthermore,
we have
$$\partial_k \partial_j \varphi(|x|)=\varphi''(|x|)\tfrac{x_j x_k}{|x|^2}
+\varphi'(|x|)\left [\tfrac{\delta_{jk}}{|x|}-\tfrac{x_jx_k}{|x|^3}\right ] $$
and thus, $|\partial_k \partial_j \varphi(|x|)|\lesssim |x|^{-1}\langle x\rangle^{-\gamma-1}$
which implies $\varphi \in H^2(\mathbb{R}^3)$.
Now recall that
$$ \mathcal{F}(|\cdot|\varphi g)=\left (\begin{array}{c}
\int_0^\infty \phi(R,\xi_d)R\,\varphi(R)g(R)dR \\
\int_0^\infty \phi(R,\cdot)R\,\varphi(R)g(R)dR
\end{array} \right )$$
and $\|\cdot\|_{Y^{p,\alpha}}\simeq \|\cdot\|_{L^p(0,\infty)}+\|\cdot\|_{L^{2,\alpha}_\rho}$.
We proceed in four steps, estimating each component separately.
\begin{enumerate}
\item
We note the estimate
$$ \|uv\|_{H^\frac14(\mathbb{R}^3)}\lesssim \|u\|_{H^{\frac78}(\mathbb{R}^3)}
\|v\|_{H^{\frac78}(\mathbb{R}^3)} $$
which is a consequence of the fractional Leibniz rule (\cite{T00},
p.~105, Proposition 1.1)
$$ \|uv\|_{H^\frac14(\mathbb{R}^3)}\lesssim \|u\|_{L^\frac{24}{5}(\mathbb{R}^3)}
\|v\|_{W^{\frac14,\frac{24}{7}}(\mathbb{R}^3)}+
\|v\|_{L^\frac{24}{5}(\mathbb{R}^3)}
\|u\|_{W^{\frac14,\frac{24}{7}}(\mathbb{R}^3)}$$
and the Sobolev embeddings $H^\frac78(\mathbb{R}^3)\hookrightarrow L^\frac{24}{5}(\mathbb{R}^3)$,
$H^\frac78(\mathbb{R}^3) \hookrightarrow W^{\frac14,\frac{24}{7}}(\mathbb{R}^3)$.
With these preparations at hand we immediately conclude from
Lemma \ref{lem:H2alpha} that
\begin{align*}\|\mathcal{F}(|\cdot|\varphi g_+)\|_{\mathbb{C}\times L^{2,\frac18}_\rho}&\simeq
\|\varphi g_+\|_{H^{\frac14}(\mathbb{R}^3)}\lesssim
\|\varphi\|_{H^\frac78(\mathbb{R}^3)}\|g_+\|_{H^{\frac78}(\mathbb{R}^3)} \\
&\lesssim \|\varphi\|_{H^2(\mathbb{R}^3)}\|g_+\|_{H^\frac54(\mathbb{R}^3)}.
\end{align*}
\item \label{item:Lp}
In order to estimate the $L^p$-part of $g_+$ note first that
$|\phi(R,\xi)|\lesssim \langle \xi \rangle^{-\frac12}$ for all $R\geq 0$ by Lemmas \ref{lem:Phi},
\ref{lem:Jost}, \ref{lem:Jostlgxi} and Corollary \ref{cor:phi}.
This yields
\begin{align*} |[\mathcal{F}(|\cdot|\varphi g_+)]_2(\xi)|&\lesssim \langle \xi \rangle^{-\frac12}
\|\varphi\|_{L^2(0,\infty)}\||\cdot|g_+\|_{L^2(0,\infty)} \\
&\simeq \langle \xi \rangle^{-\frac12}
\|\varphi\|_{L^2(0,\infty)}\|g_+\|_{L^2(\mathbb{R}^3)} \\
&\lesssim \langle \xi \rangle^{-\frac12}\|\varphi\|_{H^2(\mathbb{R}^3)}
\|g_+\|_{H^{\frac54}(\mathbb{R}^3)}
\end{align*}
by noting that $\|\varphi\|_{L^2(0,\infty)}\lesssim \|\varphi\|_{L^\infty(\mathbb{R}^3)}
+\|\varphi\|_{L^2(\mathbb{R}^3)}\lesssim \|\varphi\|_{H^2(\mathbb{R}^3)}$.
By raising the above inequality to the power $p>2$ and integrating in $\xi$ (the factor $\langle \xi \rangle^{-\frac{p}{2}}$ is integrable) we conclude
$$\|\mathcal{F}(|\cdot|\varphi g_+)\|_{\mathbb{C}\times L^p(0,\infty)}\lesssim
\|\varphi\|_{H^2(\mathbb{R}^3)}\|g_+\|_{H^{\frac54}(\mathbb{R}^3)}. $$
\item
In order to estimate the $L^p$-norm of the second component we note, as in (\ref{item:Lp}),
$$ |[\mathcal{F}(|\cdot|\varphi g_-)]_2(\xi)|\lesssim \langle \xi \rangle^{-\frac12}
\|\varphi\|_{L^1(0,\infty)}\||\cdot|g_-\|_{L^\infty(0,\infty)}
$$
and therefore,
$$ \|[\mathcal{F}(|\cdot|\varphi g_-)]_2\|_{L^p(0,\infty)}\lesssim
\|\varphi\|_{L^1(0,\infty)}\||\cdot|g_-\|_{L^\infty(0,\infty)} $$
provided that $p>2$.
\item For the $L^2$-part of $g_-$ we use Lemma \ref{lem:H2alpha} again and obtain
\begin{align*} \|\mathcal{F}(|\cdot|\varphi g_-)\|_{\mathbb{C} \times L^{2,\frac18}_\rho}
&\simeq \|\varphi g_-\|_{H^{\frac14}(\mathbb{R}^3)} \\
&\simeq \|\varphi g_-\|_{L^2(\mathbb{R}^3)}
+\|\varphi g_-\|_{\dot{H}^{\frac14}(\mathbb{R}^3)}.
\end{align*}
Note that
$$ \|\varphi g_-\|_{L^2(\mathbb{R}^3)}
\lesssim \|\varphi\|_{L^2(\mathbb{R}^3)}\|g_-\|_{L^\infty(0,\infty)}
\lesssim \|\varphi\|_{H^2(\mathbb{R}^3)}\|g_-\|_{L^\infty(0,\infty)}$$
and it remains to bound the $\dot{H}^{\frac14}$-norm.
For this we use the fractional Leibniz rule and obtain
\begin{align*} \|\varphi g_-\|_{\dot{H}^{\frac14}(\mathbb{R}^3)}
\lesssim &\|\varphi\|_{\dot{H}^\frac14(\mathbb{R}^3)}
\|g_-\|_{L^\infty(0,\infty)} \\
&+\|\varphi\|_{L^{\frac{16}{7}+}(\mathbb{R}^3)}\|g_-\|_{
\dot{W}^{\frac14,16-}(\mathbb{R}^3)}
\end{align*}
and this yields the claim since
$H^2(\mathbb{R}^3) \hookrightarrow L^{\frac{16}{7}+}(\mathbb{R}^3)$ by Sobolev embedding.
\end{enumerate}
\end{proof}
Now we are ready to establish the crucial estimate for the operator $\mathcal{N}_1$,
cf.~Eq.~\eqref{eq:defR}.
\begin{lemma}
Let $p,\delta$ be as in Lemma \ref{lem:F-1} and $\beta_d, \beta_c \geq 0$.
Then we have
$$ \|\mathcal{N}_1 (x_d,x)\|_{L^{\infty,\beta_d+2}_{\tau_0}\times L^{\infty,\beta_c+2}_{\tau_0}Y^{p,\frac18}}
\lesssim \|(x_d,x)\|_{L^{\infty,\beta_d}_{\tau_0} \times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta}. $$
\end{lemma}
\begin{proof}
Recall from Eq.~\eqref{eq:defR} that
$$ \mathcal{N}_1(x_d,x)(\tau,\xi)=\mathcal{F}\left (|\cdot|\varphi_1(\tau,\cdot) |\cdot|^{-1}\mathcal{F}^{-1}(x_d(\tau),
x(\tau,\cdot))\right )(\xi) $$
with
$$ \varphi_1(\tau,R)=5\tilde{\lambda}(\tau)^{-2}\tilde{\chi}(\tau,R)[
u_2(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^4
-u_0(\nu \tilde{\lambda}(\tau)^{-1}\tau, \tilde{\lambda}(\tau)^{-1}R)^4]. $$
Furthermore, we have $u_2=u_0+v_0+v_1$ with $v_0$ and $v_1$ from Lemmas \ref{lem:e1}
and \ref{lem:v1}, respectively.
Now note that
$$ u_2^4-u_0^4=4u_0^3 v+6u_0^2v^2+4u_0v^3+v^4 $$
where $v:=v_0+v_1$ and from Lemmas \ref{lem:e1} and \ref{lem:v1} as well as the definition of
$u_0$ we have the bounds
\begin{align*}
|v(\nu \tilde{\lambda}(\tau)^{-1}\tau,\tilde{\lambda}(\tau)^{-1}R)|&\lesssim
\tilde{\lambda}(\tau)^\frac12 \tau^{-2}\langle R \rangle \\
|u_0(\nu \tilde{\lambda}(\tau)^{-1}\tau,\tilde{\lambda}(\tau)^{-1}R)|&\lesssim
\tilde{\lambda}(\tau)^{\frac12}\langle R \rangle^{-1}
\end{align*}
for $R\leq \nu\tau-\frac12 \tilde{\lambda}(\tau)c$
which imply
$$ |\varphi_1(\tau,R)|\lesssim \tau^{-2}\langle R \rangle^{-2}. $$
We also have
$|\partial_R^k \varphi_1(\tau,R)|\leq C_k \tau^{-2}\langle R \rangle^{-2-k}$
for all $k \in \mathbb{N}_0$ and this implies
$$ \|\varphi_1(\tau,\cdot)\|_{H^2(\mathbb{R}^3)}+\|\varphi_1(\tau,\cdot)\|_{L^1(0,\infty)}
\lesssim \tau^{-2},$$ cf.~Lemma \ref{lem:FR}.
For given $(x_d,x)\in L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta$ we now set
$$ y(\tau,R):=R^{-1}\mathcal{F}^{-1}(x_d(\tau),x(\tau,\cdot))(R), $$
i.e., we have $\mathcal{N}_1(x_d,x)(\tau,\xi)=\mathcal{F}(|\cdot|\varphi_1(\tau,\cdot) y(\tau,\cdot))(\xi)$
and obviously, our goal is to apply Lemma \ref{lem:FR}.
According to Lemma \ref{lem:F-1} we have a decomposition $y=y_-+y_+$ such that
\begin{align*}
\|y_+(\tau,\cdot)\|_{H^\frac54(\mathbb{R}^3)}&\lesssim \|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}
\times X^{p,\frac18}_\delta} \\
\||\cdot|y_-(\tau,\cdot)\|_{L^\infty(0,\infty)}&\lesssim \|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}
\times X^{p,\frac18}_\delta} \\
\|y_-(\tau,\cdot)\|_{L^\infty(0,\infty)}&\lesssim \|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}
\times X^{p,\frac18}_\delta} \\
\|y_-(\tau,\cdot)\|_{\dot{W}^{\frac14,16-\epsilon}(\mathbb{R}^3)}
&\lesssim \|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}
\times X^{p,\frac18}_\delta}
\end{align*}
with the $\epsilon$ from Lemma \ref{lem:FR}.
Consequently, Lemma \ref{lem:FR} indeed applies and yields
$$ \|\mathcal{N}_1(x_d,x)(\tau,\cdot)\|_{\mathbb{C}\times Y^{p,\frac18}}\lesssim \tau^{-2}
\|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}\times X^{p,\frac18}_\delta} $$
which implies the claim.
\end{proof}
\subsection{The operator $\mathcal{N}_5$}
Next, we study the operator $\mathcal{N}_5$.
As before, we start with an auxiliary estimate that does not take into account the time dependence.
\begin{lemma}
\label{lem:FR5}
Let $p,\delta$ be as in Lemma \ref{lem:F-1}.
Then there exists a small $\epsilon>0$ such that
\begin{align*} \|\mathcal{F}\left (|\cdot|g^5\right )\|_{\mathbb{C}\times Y^{p,\frac18}}
\lesssim &\|g_-\|_{L^5(\mathbb{R}^3)}^5
+\|g_-\|_{L^\infty(\mathbb{R}^3)}^5 \\
&+\|g_-\|_{\dot{W}^{\frac14,16-\epsilon}(\mathbb{R}^3)}^5
+\|g_+\|_{H^\frac54(\mathbb{R}^3)}^5
\end{align*}
for all $g=g_-+g_+$ such that the right-hand side is finite.
\end{lemma}
\begin{proof}
We have $g^5=(g_-+g_+)^5=g_-^5+\dots+g_+^5$
and study the extreme cases $g_-^5$ and $g_+^5$ first. The intermediate terms are then estimated
by interpolation.
Let us note the estimate
$$ \left \|\prod_{j=1}^5 f_j \right \|_{H^\frac14(\mathbb{R}^3)}
\lesssim \sum_{j=1}^5 \|f_j\|_{W^{\frac14,6}(\mathbb{R}^3)}\prod_{\ell\not= j}
\|f_\ell\|_{L^{12}(\mathbb{R}^3)} $$
which is a consequence of the fractional Leibniz rule (\cite{T00}, p.~105, Proposition 1.1).
By the Sobolev embeddings
$H^\frac54(\mathbb{R}^3)\hookrightarrow L^{12}(\mathbb{R}^3)$ and
$H^\frac54(\mathbb{R}^3)\hookrightarrow W^{\frac14,6}(\mathbb{R}^3)$ we infer
$$ \left \|\prod_{j=1}^5 f_j \right \|_{H^\frac14(\mathbb{R}^3)}
\lesssim \prod_{j=1}^5 \|f_j\|_{H^{\frac54}(\mathbb{R}^3)} $$
which will be useful in the following.
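The Sobolev embeddings invoked here (and in Lemma \ref{lem:FR}) all follow from the
sufficient condition $W^{s,p}(\mathbb{R}^3)\hookrightarrow W^{k,q}(\mathbb{R}^3)$ for
$s\geq k$ and $s-\frac{3}{p}\geq k-\frac{3}{q}$ with $p\leq q<\infty$; the following minimal
Python sketch merely double-checks this exponent arithmetic.
\begin{verbatim}
# Minimal sketch: exponent arithmetic for the Sobolev embeddings
# H^s(R^3) = W^{s,2}(R^3) -> W^{k,q}(R^3) used in this section
# (sufficient condition: s >= k and s - 3/2 >= k - 3/q with q < infinity).
from fractions import Fraction as F

def embeds(s, k, q):                 # source space is H^s = W^{s,2}
    return s >= k and s - F(3, 2) >= k - F(3, 1) / q

checks = [
    (F(7, 8), F(0), F(24, 5)),       # H^{7/8}  -> L^{24/5}
    (F(7, 8), F(1, 4), F(24, 7)),    # H^{7/8}  -> W^{1/4, 24/7}
    (F(5, 4), F(0), F(12)),          # H^{5/4}  -> L^{12}
    (F(5, 4), F(1, 4), F(6)),        # H^{5/4}  -> W^{1/4, 6}
    (F(5, 4), F(0), F(10)),          # H^{5/4}  -> L^{10}
    (F(5, 4), F(0), F(5)),           # H^{5/4}  -> L^{5}
]
print(all(embeds(*c) for c in checks))   # True
\end{verbatim}
With these exponents verified, we turn to the individual terms.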
\begin{enumerate}
\item According to Lemma \ref{lem:H2alpha} we have
$$
\|\mathcal{F}(|\cdot|g_+^5)\|_{\mathbb{C}\times L^{2,\frac18}_\rho}\simeq
\|g_+^5\|_{H^\frac14(\mathbb{R}^3)}\lesssim
\|g_+\|_{H^\frac54(\mathbb{R}^3)}^5. $$
\item In order to estimate the $L^p$-part of $g_+$ recall that $|\phi(R,\xi)|\lesssim
\langle \xi\rangle^{-\frac12}$ for all $R\geq 0$ and thus,
\begin{align*}
|[\mathcal{F}(|\cdot|g_+^5)]_2(\xi)|&\lesssim \|\phi(\cdot, \xi)\|_{L^2(0,1)}
\||\cdot|g_+^5\|_{L^2(0,1)} \\
&\quad +\||\cdot|^{-1}\phi(\cdot,\xi)\|_{L^\infty(1,\infty)}
\||\cdot|^2 g_+^5\|_{L^1(1,\infty)} \\
&\lesssim \langle \xi \rangle^{-\frac12}\left (\|g_+\|_{L^{10}(\mathbb{R}^3)}^5
+\|g_+\|_{L^5(\mathbb{R}^3)}^5 \right ).
\end{align*}
Consequently, the Sobolev embeddings $H^\frac54(\mathbb{R}^3)\hookrightarrow L^{10}(\mathbb{R}^3)$
and
$H^\frac54(\mathbb{R}^3)\hookrightarrow L^5(\mathbb{R}^3)$ yield
$$ \|[\mathcal{F}(|\cdot|g_+^5)]_2\|_{L^p(0,\infty)}\lesssim
\|g_+\|_{H^\frac54(\mathbb{R}^3)}^5 $$
for $p>2$ as desired.
\item
For the $L^p$-part of $g_-$ note that
\begin{align*} |[\mathcal{F}(|\cdot|g_-^5)]_2(\xi)|&\lesssim \langle \xi\rangle^{-\frac12}
\left [\|g_-^5\|_{L^\infty(0,1)}+
\||\cdot|^2g_-^5\|_{L^1(1,\infty)} \right ] \\
&\lesssim \langle \xi \rangle^{-\frac12}\left [ \|g_-\|_{L^\infty(\mathbb{R}^3)}^5
+\|g_-\|_{L^{5}(\mathbb{R}^3)}^5 \right ]
\end{align*}
which yields the desired bound
$$ \|[\mathcal{F}(|\cdot|g_-^5)]_2\|_{L^p(0,\infty)}\lesssim
\|g_-\|_{L^\infty(\mathbb{R}^3)}^5
+\|g_-\|_{L^5(\mathbb{R}^3)}^5 $$
provided that $p>2$.
\item
According to Lemma \ref{lem:H2alpha} we have
$$ \|\mathcal{F}(|\cdot|g_-^5)\|_{\mathbb{C}\times L^{2,\frac18}_\rho}
\simeq \|g_-^5\|_{H^\frac14(\mathbb{R}^3)} $$
and since
$\|g_-^5\|_{L^2(\mathbb{R}^3)}=\|g_-\|_{L^{10}(\mathbb{R}^3)}^5 \lesssim
\|g_-\|_{L^5(\mathbb{R}^3)}^5+\|g_-\|_{L^\infty(\mathbb{R}^3)}^5$
it suffices to control the homogeneous Sobolev norm
$\|g_-^5\|_{\dot{H}^\frac14(\mathbb{R}^3)}$.
For this we use the fractional Leibniz rule to conclude
\begin{align*} \|g_-^5\|_{\dot{H}^\frac14(\mathbb{R}^3)}&\lesssim \|g_-\|_{L^{\frac{64}{7}+}(\mathbb{R}^3)}^4
\|g_-\|_{\dot{W}^{\frac14,16-}(\mathbb{R}^3)} \\
&\lesssim \left [ \|g_-\|_{L^5(\mathbb{R}^3)}^4+\|g_-\|_{L^\infty(\mathbb{R}^3)}^4 \right ]
\|g_-\|_{\dot{W}^{\frac14,16-}(\mathbb{R}^3)}.
\end{align*}
\end{enumerate}
We briefly comment on how to estimate the mixed terms.
First of all, it is trivial to bound the $L^p$ norms since
$$ |[\mathcal{F}(|\cdot|g_-^{5-k}g_+^k)]_2(\xi)|\lesssim \int_0^\infty |\phi(R,\xi)|R \left [|g_-(R)|^5
+|g_+(R)|^5 \right ]dR $$
for all $k \in \{1,2,3,4\}$ by the pointwise inequality $|g_-^{5-k}g_+^k|\lesssim |g_-|^5+|g_+|^5$
which brings us back to the two extreme cases considered above.
For the $L^2$-parts the only nontrivial contributions come from the $\dot{H}^{\frac14}(\mathbb{R}^3)$
homogeneous Sobolev norms.
In order to control these, one proceeds as before by applying
the fractional Leibniz rule followed by Sobolev embedding, i.e.,
\begin{align*}
\|g_-^{5-k} g_+^k\|_{\dot{H}^\frac14(\mathbb{R}^3)}&\lesssim
\|g_-\|_{L^\infty(\mathbb{R}^3)}^{5-k}\|g_+\|_{L^{3(k-1)}(\mathbb{R}^3)}^{k-1}
\|g_+\|_{\dot{W}^{\frac14,6}(\mathbb{R}^3)} \\
&\quad +\|g_+\|_{L^{k\frac{16}{7}+}(\mathbb{R}^3)}^k
\|g_-\|_{L^\infty(\mathbb{R}^3)}^{4-k}\|g_-\|_{\dot{W}^{\frac14,16-}(\mathbb{R}^3)} \\
&\lesssim \|g_-\|_{L^\infty(\mathbb{R}^3)}^{5-k}\|g_+\|_{H^\frac54(\mathbb{R}^3)}^k
+\|g_-\|_{\dot{W}^{\frac14,16-}(\mathbb{R}^3)}^{5-k}\|g_+\|_{H^\frac54(\mathbb{R}^3)}^k.
\end{align*}
\end{proof}
\begin{lemma}
\label{lem:R5}
Let $p,\delta$ be as in Lemma \ref{lem:F-1} and $\beta_d, \beta_c \geq 0$.
Then we have the estimate
$$ \|\mathcal{N}_5(x_d,x)\|_{L^{\infty,5\beta_d-\frac14}_{\tau_0}\times L^{\infty,5\beta_c-\frac14}_{\tau_0}Y^{p,\frac18}}
\lesssim \|(x_d,x)\|_{L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta}^5. $$
\end{lemma}
\begin{proof}
Recall that
$$ \mathcal{N}_5(x_d,x)(\tau,\xi)=\mathcal{F}\left (|\cdot|\varphi_5(\tau,\cdot)\left [|\cdot|^{-1}\mathcal{F}^{-1}
(x_d(\tau),x(\tau,\cdot))\right ]^5\right )
(\xi) $$
and by setting $y(\tau,R):=R^{-1}\mathcal{F}^{-1}(x_d(\tau),x(\tau,\cdot))(R)$ we obtain from
Lemma \ref{lem:F-1} the existence of a decomposition $y=y_-+y_+$ with the bound
$$ \|y_-(\tau,\cdot)\|_{L^5(\mathbb{R}^3)}+\|y_-(\tau,\cdot)\|_{L^\infty(\mathbb{R}^3)}
+\|y_-(\tau,\cdot)\|_{\dot{W}^{\frac14,16-\epsilon}(\mathbb{R}^3)}\lesssim
\|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}\times X^{p,\frac18}_\delta} $$
as well as
$$ \|y_+(\tau,\cdot)\|_{H^\frac54(\mathbb{R}^3)}\lesssim
\|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}\times X^{p,\frac18}_\delta} $$
for any (small) $\epsilon>0$.
Thus, since $|\partial_R^k \varphi_5(\tau,R)|\leq C_k \tilde{\lambda}(\tau)^{-2-k}\lesssim \tau^\frac14$
for bounded $k$,
Lemma \ref{lem:FR5} yields
$$ \|\mathcal{N}_5(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}\times Y^{p,\frac18}}
\lesssim \tau^\frac14\|(x_d(\tau),x(\tau,\cdot))\|_{\mathbb{C}\times X^{p,\frac18}_\delta}^5 $$
which implies the claim.
\end{proof}
\begin{remark}
The loss of $\tau^\frac14$ in Lemma \ref{lem:R5} can be improved to $\tau^\epsilon$ with
$\epsilon\to 0+$
as $\nu \to 1$, but the stated result suffices for our purposes and avoids the introduction
of an additional $\epsilon$.
\end{remark}
The remaining operators $\mathcal{N}_2$, $\mathcal{N}_3$, $\mathcal{N}_4$ are, in a certain sense,
interpolations between $\mathcal{N}_1$ and $\mathcal{N}_5$ and can be treated in exactly the same fashion.
Note that we do not gain additional decay in $\tau$ from the $\varphi_j$ factors because they involve
the (rescaled) soliton $u_0$, which does not decay.
In fact, from $\varphi_j$ we may even incur a loss of $\tau^\epsilon$ (with $\epsilon \to 0+$ as $\nu \to 1$), depending
on the sign of $1-\nu$.
Consequently, the gain comes exclusively from the nonlinearity, as is the case for
the operator $\mathcal{N}_5$.
We only state the corresponding result and leave the verification to the reader.
\begin{lemma}
\label{lem:R234}
Let $p,\delta$ be as in Lemma \ref{lem:F-1} and $\beta_d, \beta_c \geq 0$.
Then we have the estimate
$$ \|\mathcal{N}_j (x_d,x)\|_{L^{\infty,j\beta_d-\frac14}_{\tau_0}\times L^{\infty,j\beta_c-\frac14}_{\tau_0}Y^{p,\frac18}}
\lesssim \|(x_d,x)\|_{L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta}^j $$
for $j \in \{2,3,4\}$.
\end{lemma}
Furthermore, by basically repeating the above computations and using elementary identities such as
$$ a^n-b^n=(a-b)\sum_{j=0}^{n-1}a^jb^{n-1-j},$$
it is straightforward to see that we have the following estimate
for the operator $\mathcal{N}:=\sum_{j=1}^5 \mathcal{N}_j$.
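To indicate the pattern for the quintic part (this is only a sketch, and the abbreviation
$w_x:=|\cdot|^{-1}\mathcal{F}^{-1}(x_d(\tau),x(\tau,\cdot))$, with $w_y$ defined analogously, is introduced here purely for illustration), the identity with $n=5$ yields
\begin{align*}
\mathcal{N}_5(x_d,x)(\tau,\cdot)-\mathcal{N}_5(y_d,y)(\tau,\cdot)
=\mathcal{F}\left (|\cdot|\varphi_5(\tau,\cdot)(w_x-w_y)\sum_{j=0}^4 w_x^j w_y^{4-j}\right ),
\end{align*}
and each summand is estimated as in the proof of Lemma \ref{lem:R5}, the four undifferenced factors being absorbed into $\gamma(X,Y)$ while the factor $w_x-w_y$ produces the difference norm.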
\begin{lemma}
\label{lem:R}
Let $p,\delta$ be as in Lemma \ref{lem:F-1} and $\beta_d, \beta_c \geq \frac32$. Then there exists a continuous
function $\gamma: [0,\infty)\times [0,\infty) \to \mathbb{R}$ such that
\begin{align*}
\|\mathcal{N}(x_d,x)-\mathcal{N}(y_d,y)\|_{L^{\infty,\beta_d+\frac54}_{\tau_0}\times L^{\infty,\beta_c+\frac54}_{\tau_0}Y^{p,\frac18}}
\leq \gamma(X,Y)\|(x_d,x)-(y_d,y)\|_{L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta}
\end{align*}
where
\begin{align*}
X:=\|(x_d,x)\|_{L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta},\quad
Y:=\|(y_d,y)\|_{L^{\infty,\beta_d}_{\tau_0}\times L^{\infty,\beta_c}_{\tau_0}X^{p,\frac18}_\delta}.
\end{align*}
\end{lemma}
\section{Estimates for the terms involving $\mathcal{K}$ and $[\mathcal{A},\mathcal{K}]$}
\label{sec:K}
Finally, we consider the terms on the right-hand side of Eq.~\eqref{eq:sysFourier}
which come from the transference identity.
The main results in this respect are summarized in the following proposition.
\begin{proposition}[Estimates for the $\mathcal K$-operators]
\label{prop:mainK}
Let $\delta>0$ be small and assume $p \in (1,\infty)$ to be so large that $p'(1-\delta)<1$.
Then we have the estimates
\begin{align*}
\|\mathcal K (a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} &
\|[\mathcal A,\mathcal K] (a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|\mathcal K (a,f)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} &
\|[\mathcal A, \mathcal K] (a,f)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|\mathcal K (a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,g)\|_{\mathbb{C} \times Y^{p,\frac18}} &
\|[\mathcal A, \mathcal K] (a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}
\end{align*}
for all $a \in \mathbb{C}$, $f \in X^{p,\frac18}_\delta$, and $g\in Y^{p,\frac18}$.
\end{proposition}
Based on these bounds and the nonlinear estimates from Section \ref{sec:nonlinhom} we then prove
the existence of a solution to Eq.~\eqref{eq:sysFourier} by a fixed point argument.
\subsection{A convenient expression for $\mathcal K$}
First, we need to obtain an explicit expression for the operator $\mathcal{K}$.
Definition \eqref{eq:K} can be rewritten as
$$ (\mathcal{K}+1)\left ( \begin{array}{c}a \\ f\end{array} \right )=
\mathcal{F}\left (|\cdot|\left [ \mathcal{F}^{-1}\left ( \begin{array}{c}a \\ f\end{array} \right )\right]'\right )
-\mathcal{A}\left ( \begin{array}{c} a \\ f \end{array} \right )$$
or, equivalently,
$$ \mathcal{F}^{-1}(\mathcal{K}+1)\left ( \begin{array}{c}a \\ f\end{array} \right )
=|\cdot|\left [\mathcal{F}^{-1}\left ( \begin{array}{c}a \\ f\end{array} \right )\right]'
-\mathcal{F}^{-1}\mathcal{A}\left ( \begin{array}{c}a \\ f\end{array} \right ). $$
By definition of $\mathcal{A}$ and Eq.~\eqref{eq:defF-1} we have
\begin{align*}
\mathcal{F}^{-1}\mathcal{A}\left ( \begin{array}{c}a \\ f\end{array} \right )&=
\mathcal{F}^{-1}\left ( \begin{array}{c}0 \\ -2|\cdot|f'-\tfrac52f-\tfrac{|\cdot|\rho'}{\rho}f \end{array}\right ) \\
&=-2\int_0^\infty \phi(\cdot,\eta)\eta f'(\eta)\rho(\eta)d\eta- \int_0^\infty
\phi(\cdot,\eta)\left [\tfrac52 +\tfrac{\eta \rho'(\eta)}{\rho(\eta)}\right ]f(\eta)\rho(\eta)d\eta
\end{align*}
and, assuming that $f \in C_c^\infty(0,\infty)$, an integration by parts yields
\begin{align*}
\mathcal{F}^{-1}\mathcal{A}\left ( \begin{array}{c}a \\ f\end{array} \right )&=
2\int_0^\infty \eta \partial_\eta \phi(\cdot,\eta)f(\eta)\rho(\eta)d\eta
+\int_0^\infty \phi(\cdot,\eta)\left [-\tfrac12+\tfrac{\eta \rho'(\eta)}{\rho(\eta)} \right ]f(\eta)
\rho(\eta)d\eta \\
&=2\int_0^\infty \eta \partial_\eta \phi(\cdot,\eta)f(\eta)\rho(\eta)d\eta+\mathcal{F}^{-1}
\left ( \begin{array}{c}0 \\ (-\tfrac12+\tfrac{|\cdot|\rho'}{\rho})f\end{array} \right ).
\end{align*}
Consequently, we obtain
\begin{align*}
\mathcal{F}^{-1}\mathcal{K}\left ( \begin{array}{c}a \\ f\end{array} \right )(R)
=&a\frac{(R\partial_R-1) \phi(R,\xi_d)}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2}
+\int_0^\infty [R\partial_R \phi(R,\eta)-2\eta \partial_\eta\phi(R,\eta)]f(\eta)\rho(\eta)d\eta \\
&-\mathcal{F}^{-1}
\left ( \begin{array}{c}0 \\ (\tfrac12+\tfrac{|\cdot|\rho'}{\rho})f\end{array} \right )(R)
\end{align*}
and thus,
\begin{align*}
\mathcal{K}_{dd} a&=\frac{a}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2}\int_0^\infty\phi(R,\xi_d)
[R\partial_R-1] \phi(R,\xi_d)dR=-\tfrac32 a \\
\mathcal{K}_{cd}a(\xi)&=\frac{a}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2} \int_0^\infty
\phi(R,\xi)[R\partial_R-1] \phi(R,\xi_d)dR \\
\mathcal{K}_{dc}f&=\int_0^\infty \int_0^\infty \phi(R,\xi_d)[R\partial_R\phi(R,\eta)-
2\eta \partial_\eta \phi(R,\eta)]f(\eta)\rho(\eta)d\eta dR \\
\mathcal{K}_{cc}f(\xi)&=\int_0^\infty \int_0^\infty \phi(R,\xi)[R\partial_R\phi(R,\eta)-2\eta
\partial_\eta \phi(R,\eta)]f(\eta)\rho(\eta)d\eta dR \\
&\quad -\left (\tfrac12+\tfrac{\xi \rho'(\xi)}{\rho(\xi)}\right )f(\xi)
\end{align*}
where, as before,
$$ \mathcal{K}=\left (\begin{array}{cc}\mathcal{K}_{dd} & \mathcal{K}_{dc} \\
\mathcal{K}_{cd} & \mathcal{K}_{cc} \end{array} \right ). $$
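For the reader's convenience we note the short computation behind the value $\mathcal{K}_{dd}=-\tfrac32$: since $\phi(\cdot,\xi_d)$ decays exponentially and the integrand carries a factor $R$, the boundary terms in the integration by parts
$$ \int_0^\infty \phi(R,\xi_d)R\partial_R \phi(R,\xi_d)dR=\tfrac12\int_0^\infty R\,\partial_R\left [\phi(R,\xi_d)^2\right ]dR
=-\tfrac12 \|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2 $$
vanish, and subtracting $\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2$ (from the $-1$ in $R\partial_R-1$) gives the stated $-\tfrac32$.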
\subsection{$\mathcal{K}_{cc}$ as a Calder\'on-Zygmund operator}
First, we focus on $\mathcal{K}_{cc}$ which is the most complicated of the above operators.
The respective estimates for $\mathcal{K}_{cd}$ and $\mathcal{K}_{dc}$ will follow easily once we have
understood $\mathcal{K}_{cc}$, and $\mathcal{K}_{dd}$ is trivial anyway.
In order to proceed, we need a more manageable representation of $\mathcal{K}_{cc}$, more precisely
of the integral part of $\mathcal{K}_{cc}$ which we denote by $\tilde{\mathcal{K}}_0$ for the moment, i.e.,
$$ \tilde{\mathcal{K}}_0 f(\xi):=\int_0^\infty \int_0^\infty \phi(R,\xi)[R\partial_R\phi(R,\eta)-2\eta
\partial_\eta \phi(R,\eta)]f(\eta)\rho(\eta)d\eta dR. $$
Note first that, for $f\in C_c^\infty(0,\infty)$, the function
$$ R\mapsto \int_0^\infty [R\partial_R \phi(R,\eta)-2\eta \partial_\eta \phi(R,\eta)]f(\eta)
\rho(\eta)d\eta $$
is rapidly decreasing as $R \to \infty$. This can be immediately concluded from the
representation of $\phi$ given in Corollary \ref{cor:phi} and integration by parts.
It follows that the operator $\tilde{\mathcal{K}}_0$ is well-defined as a linear mapping from $C^\infty_c(0,\infty)$
to $C[0,\infty)$. Furthermore, by
dominated convergence, $\tilde{\mathcal{K}}_0$ is continuous when viewed as a map into the space of distributions.
Consequently, by the Schwartz kernel theorem there exists a (distributional) kernel
$\tilde{K}_0$ such that
\begin{equation}
\label{eq:schwartzkernel}
\tilde{\mathcal{K}}_0f(\xi)=\int_0^\infty \tilde{K}_0(\xi,\eta)f(\eta)d\eta.
\end{equation}
In fact, the operator $\mathcal{K}_{cc}$ has already been studied in \cite{KST09} and from there
we have the following result (note carefully that our $\mathcal{K}_{cc}$ is called $\mathcal{K}_0$ in \cite{KST09}).
\begin{theorem}
\label{thm:K0}
For $f \in C^\infty_c(0,\infty)$ the operator $\mathcal{K}_{cc}$ is given by
$$ \mathcal{K}_{cc}f(\xi)=
\int_0^\infty K_0(\xi,\eta)f(\eta)d\eta $$
where the kernel $K_0$ is of the form
$$ K_0(\xi,\eta)=\frac{\rho(\eta)}{\xi-\eta}F(\xi,\eta) $$
with a symmetric function $F \in C^2((0,\infty)\times (0,\infty))$.
Furthermore, for any $N\in \mathbb{N}$, $F$ satisfies the bounds
\[\begin{split}
| F(\xi,\eta)| &\leq C_N \left\{ \begin{array}{cc} \xi+\eta &
\xi+\eta \leq 1 \cr (\xi+\eta)^{-1} (1+|\xi^\frac12
-\eta^\frac12|)^{-N} & \xi+\eta \geq 1
\end{array} \right.\\
| \partial_{\xi} F(\xi,\eta)|+| \partial_{\eta} F(\xi,\eta)| &\leq C_N \left\{
\begin{array}{cc} 1 & \xi+\eta \leq 1 \cr (\xi+\eta)^{-\frac32}
(1+|\xi^\frac12 -\eta^\frac12|)^{-N} & \xi+\eta \geq 1
\end{array} \right.\\
\max_{j+k=2} | \partial^j_{\xi}\partial^k_{\eta} F(\xi,\eta)| &\leq C_N \left\{
\begin{array}{cc} (\xi+\eta)^{-\frac12} & \xi+\eta \leq 1 \cr
(\xi+\eta)^{-2} (1+|\xi^\frac12 -\eta^\frac12|)^{-N} &
\xi+\eta \geq 1
\end{array} \right. .
\end{split}
\]
\end{theorem}
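For later use we record the special case on the diagonal, which is what enters the functions $\psi_{0,j}$ in the proofs of Lemmas \ref{lem:Kjabsm} and \ref{lem:Kjablg} below: the first bound of Theorem \ref{thm:K0} gives
$$ |F(\xi,\xi)|\lesssim \min\{\xi,\xi^{-1}\} $$
for all $\xi>0$.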
\begin{proof}
This is Theorem 5.1 in \cite{KST09}.
One starts with an integration by parts and
thereby identifies the function $F$ which can be expressed as an integral over
$\phi$, $\rho$ and some explicitly known function resulting from the potential $V$.
The stated estimates are then obtained by a careful analysis of this expression based on the
asymptotic descriptions of $\phi$ and $\rho$ from Section \ref{sec:exact}.
We refer the reader to \cite{KST09} for the details.
\end{proof}
\subsection{Bounds for $\mathcal{K}_{cc}$}
In order to obtain estimates in the ${X^{p,\alpha}_\delta}$ and ${Y^{p,\alpha}}$ spaces, we require boundedness of $\mathcal K_{cc}$ in
\emph{weighted} $L^p$ spaces.
However, this problem reduces to boundedness on ordinary $L^p$ by simply
attaching the weights to the kernel.
The (weighted) $L^2$ boundedness of $\mathcal{K}_0$ is already established in \cite{KST09}, Proposition 5.2.
Unfortunately, the result there does not exactly apply to our situation (at least not for small
frequencies) since it only considers
weights of the form $\langle \cdot \rangle^{2\alpha}$, while we have
to deal with more general expressions, cf.~Definition \ref{def:spaces}.
Thus, we have to make sure that despite this slight modification the reasoning in \cite{KST09} still goes through.
Furthermore, we have to take care of the $L^p$ component in ${X^{p,\alpha}_\delta}$ which is not present
in \cite{KST09}.
Finally, we need to exploit a certain additional smoothing property of $\mathcal K_{cc}$
at small frequencies
which was irrelevant
for the construction in \cite{KST09}.
In order to prove boundedness in weighted spaces of the type occurring in the definition of
$X^{p,\alpha}_\delta$,
we define kernels $K^{(a,b)}_0$ for $a,b \in \mathbb{R}$ by
\begin{equation}
\label{eq:defKab}
K^{(a,b)}_0(\xi, \eta):=\xi^{-\frac12}\langle \xi \rangle^\frac12
\xi^{-a}\langle \xi \rangle^{-b}K_0(\xi,\eta)\eta^a \langle \eta\rangle^b
\end{equation}
and denote by $\mathcal{K}_0^{(a,b)}$ the corresponding operators.
Observe carefully the additional weight $\xi^{-\frac12}\langle \xi\rangle^\frac12$ on
the ``output'' variable which encodes the aforementioned smoothing effect.
Our aim is to prove $L^p$ boundedness of $\mathcal{K}_0^{(a,b)}$ for any $a,b$ and $1<p<\infty$, which then implies
the desired boundedness properties of $\mathcal{K}_{cc}$ in $X^{p,\alpha}_\delta$ and ${Y^{p,\alpha}}$.
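To spell out what this accomplishes: writing $(\mathcal{K}_0u)(\xi):=\int_0^\infty K_0(\xi,\eta)u(\eta)d\eta$ and substituting $f=|\cdot|^{-a}\langle \cdot \rangle^{-b}u$ (the auxiliary function $u$ is introduced only for this remark), the bound $\|\mathcal{K}_0^{(a,b)}f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}$ is equivalent to the weighted estimate
$$ \left \||\cdot|^{-\frac12}\langle \cdot \rangle^{\frac12}|\cdot|^{-a}\langle \cdot \rangle^{-b}\mathcal{K}_0 u\right \|_{L^p(0,\infty)}\lesssim
\left \||\cdot|^{-a}\langle \cdot \rangle^{-b}u\right \|_{L^p(0,\infty)}, $$
i.e., to a weighted $L^p$ bound for the kernel operator with the additional smoothing weight attached to the output.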
Due to the singular behavior of the spectral measure at zero it is advantageous to
separate the diagonal from the off-diagonal behavior.
This is most effectively done by introducing a dyadic covering of the diagonal $\Delta=\{(\xi,\eta)\in(0,\infty)^2: \xi=\eta\}$,
\[ \Delta \subset \bigcup_{j \in \mathbb{Z}} I_j \times I_j, \]
where $I_j:=[2^{j-1},2^{j+1}]$.
Furthermore, let $\chi: \mathbb{R} \to [0,1]$ be a smooth bump function satisfying
\[ \chi(\xi):=\left \{ \begin{array}{l}
1 \mbox{ if } \xi \in [\frac34, \frac74] \\
0 \mbox{ if } \xi \leq \frac12 \mbox{ or } \xi \geq 2 \end{array} \right .
\]
and set $\chi_j(\xi):=\chi(2^{-j}\xi)$.
Then we have $\mathrm{supp}(\chi_j)\subset I_j$ and the sets
$\{(\xi,\eta) \in \mathbb{R}^2: \chi_j(\xi)\chi_j(\eta)=1\}$ still cover the diagonal.
We smoothly restrict the kernel $K_0^{(a,b)}$ to
$I_j \times I_j$ by using $\chi_j$ and write
$$ \mathcal{K}_{0,j}^{(a,b)}f:=\chi_j\mathcal{K}_0^{(a,b)}(\chi_j f) $$
for the corresponding truncated operator.
\begin{lemma}
\label{lem:Kjabsm}
Fix $(a,b) \in \mathbb{R}^2$ and let $1<p<\infty$. Then $\mathcal{K}_{0,j}^{(a,b)}$ extends to a bounded operator
on $L^p(\mathbb{R})$ and
$$ \|\mathcal{K}_{0,j}^{(a,b)}f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}
$$
for all $f \in L^p(\mathbb{R})$ and $j \in \mathbb{Z}$, $j\leq 2$.
\end{lemma}
\begin{proof}
The point of the dyadic decomposition is of course that $\xi,\eta \in I_j$ implies
$\eta \leq 4\xi$. In other words, for any two elements $\xi,\eta \in I_j$
we have $\xi \simeq \eta \simeq 2^j$ \emph{with implicit constants independent of $j$}!
This allows us to control the singular behavior of $\rho(\eta)$ uniformly in $j$.
We write $F(\xi,\eta)=F(\xi,\xi)+(\xi-\eta)\tilde{F}(\xi,\eta)$
where
$$ \tilde{F}(\xi,\eta)=-\int_0^1 \partial_2 F(\xi,\xi+s(\eta-\xi))ds $$
and by Theorem \ref{thm:K0} we have the bound $|\tilde{F}(\xi,\eta)|\lesssim 1$ for all $\xi,\eta$.
Thus, the kernel decomposes as $K^{(a,b)}_{0,j}(\xi,\eta)=A_j(\xi,\eta)+B_j(\xi,\eta)$
where
\begin{align*}
A_j(\xi,\eta)&=\chi_j(\xi)\xi^{-\frac12}\langle \xi \rangle^\frac12 \xi^{-a}\langle \xi \rangle^{-b}
\frac{F(\xi,\xi)}{\xi-\eta}\chi_j(\eta)\rho(\eta)\eta^a \langle \eta \rangle^b
=:\frac{\psi_{0,j}(\xi)\psi_{1,j}(\eta)}{\xi-\eta} \\
B_j(\xi,\eta)&=\chi_j(\xi)\xi^{-\frac12}\langle \xi \rangle^\frac12 \xi^{-a}\langle \xi \rangle^{-b}
\tilde{F}(\xi,\eta)\chi_j(\eta)\rho(\eta)\eta^a \langle \eta \rangle^b
\end{align*}
and we call the respective operators $\mathcal{A}_j$ and $\mathcal{B}_j$.
In other words, we have $\mathcal A_j f=\pi \psi_{0,j}H(\psi_{1,j}f)$ where $H$ is the Hilbert transform.
By the $L^p$ boundedness of $H$ for $p \in (1,\infty)$ we immediately obtain
\[ \|\mathcal A_j f\|_{L^p(\mathbb{R})}\lesssim \|\psi_{0,j}\|_{L^\infty(\mathbb{R})}\|\psi_{1,j}f\|_{L^p(\mathbb{R})}\leq
\|\psi_{0,j}\|_{L^\infty(\mathbb{R})}\|\psi_{1,j}\|_{L^\infty(\mathbb{R})}\|f\|_{L^p(\mathbb{R})} \]
and from Theorem \ref{thm:K0} and Lemma \ref{lem:smxi} we infer the bounds
\[ |\psi_{0,j}(\xi)|\lesssim 2^{(-\frac12-a+1)j}, \quad |\psi_{1,j}(\eta)|\lesssim 2^{(-\frac12+a)j} \]
for all $\xi,\eta \in \mathbb{R}$ which yield $\|\mathcal A_j f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}$.
Furthermore, the kernel $B_j$ satisfies
\[ |B_j(\xi,\eta)|\lesssim 2^{-j}\chi_j(\xi)\chi_j(\eta) \]
and thus,
\[ \|\mathcal B_j f\|_{L^p(\mathbb{R})}\lesssim 2^{-j}\|\chi_j\|_{L^p(\mathbb{R})}\|\chi_j\|_{L^{p'}(\mathbb{R})}\|f\|_{L^p(\mathbb{R})}
\lesssim 2^{(-1+\frac{1}{p}+\frac{1}{p'})j}\|f\|_{L^p(\mathbb{R})}=\|f\|_{L^p(\mathbb{R})}.\]
\end{proof}
An analogous result holds for $j\geq 1$ as well.
\begin{lemma}
\label{lem:Kjablg}
Fix $(a,b) \in \mathbb{R}^2$ and let $1<p<\infty$. Then $\mathcal{K}_{0,j}^{(a,b)}$ extends to a bounded operator
on $L^p(\mathbb{R})$ and
$$ \|\mathcal{K}_{0,j}^{(a,b)}f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}
$$
for all $f \in L^p(\mathbb{R})$ and $j \in \mathbb{Z}$, $j\geq 1$.
\end{lemma}
\begin{proof}
We perform the same decomposition $\mathcal K_{0,j}^{(a,b)}=\mathcal A_j+\mathcal B_j$ as in the proof of Lemma
\ref{lem:Kjabsm} and this time we have
\[ |\psi_{0,j}(\xi)|\lesssim 2^{-(a+b+1)j},\quad |\psi_{1,j}(\eta)|\lesssim 2^{(\frac12+a+b)j} \]
for all $\xi,\eta \in \mathbb{R}$ by Theorem \ref{thm:K0} and Lemma \ref{lem:smxi}.
Consequently, as before, the $L^p$ boundedness of the Hilbert transform yields
$\|\mathcal A_j f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}$ for $p\in (1,\infty)$.
For the operator $\mathcal B_j$ we note that $|\tilde{F}(\xi,\eta)|\lesssim |\xi+\eta|^{-\frac32}$
by Theorem \ref{thm:K0} and thus,
$|B_j(\xi,\eta)|\lesssim 2^{-j}\chi_j(\xi)\chi_j(\eta)$ which yields
$\|\mathcal B_j f\|_{L^p(\mathbb{R})}\lesssim \|f\|_{L^p(\mathbb{R})}$.
\end{proof}
The bounds obtained in Lemmas \ref{lem:Kjabsm} and \ref{lem:Kjablg} can be summed.
\begin{corollary}
\label{cor:Kjabdiag}
Fix $(a,b) \in \mathbb{R}^2$ and $1<p<\infty$. Then the operator
$$ \mathcal{K}_{0,\Delta}^{(a,b)}:=\sum_{j\in\mathbb{Z}}\mathcal{K}_{0,j}^{(a,b)} $$
is bounded on $L^p(\mathbb{R})$.
\end{corollary}
\begin{proof}
Since every $\xi>0$ lies in at most three of the intervals $I_j$, it suffices to note that
\begin{align*}
\|\mathcal{K}_{0,\Delta}^{(a,b)}f\|_{L^p(\mathbb{R})}^p \lesssim
\sum_{j\in\mathbb{Z}}\|\mathcal{K}_{0,j}^{(a,b)}(1_{I_j}f)\|_{L^p(\mathbb{R})}^p
\lesssim \sum_{j\in \mathbb{Z}}\|1_{I_j}f\|_{L^p(\mathbb{R})}^p\lesssim \|f\|_{L^p(\mathbb{R})}^p
\end{align*}
by Lemmas \ref{lem:Kjabsm} and \ref{lem:Kjablg} where $1_{I_j}$ denotes the characteristic
function of the set $I_j$.
\end{proof}
Now we can conclude the desired boundedness properties of $\mathcal{K}_{cc}$.
\begin{proposition}
\label{prop:KccX}
Fix $\alpha\geq 0$ and $\delta> 0$ small. Furthermore,
let $1<p<\infty$ be so large that $p'(1-\delta)<1$.
Then we have the bounds
\begin{align*}
\|\mathcal{K}_{cc}f\|_{{X^{p,\alpha}_\delta}}&\lesssim \|f\|_{X^{p,\alpha}_\delta} \\
\|\mathcal{K}_{cc}f\|_{{Y^{p,\alpha}}}&\lesssim \|f\|_{X^{p,\alpha}_\delta} \\
\|\mathcal{K}_{cc}f\|_{{Y^{p,\alpha}}}&\lesssim \|f\|_{Y^{p,\alpha}}.
\end{align*}
\end{proposition}
\begin{proof}
We write $\mathcal{K}_\Delta f:=\sum_{j\in \mathbb{Z}}\chi_j \mathcal{K}_{cc}(\chi_j f)$ for the diagonal part of $\mathcal{K}_{cc}$.
By Corollary \ref{cor:Kjabdiag} it is evident that
$\|\mathcal K_\Delta f\|_{X^{p,\alpha}_\delta} \lesssim \|f\|_{X^{p,\alpha}_\delta}$ and $\|\mathcal K_\Delta f\|_{Y^{p,\alpha}} \lesssim \|f\|_{Y^{p,\alpha}}$.
In order to obtain the mixed estimate we exploit the smoothing property, i.e., we note that
\[ \|\mathcal K_\Delta f\|_{L^p(0,\infty)}\lesssim \||\cdot|^\frac12 \langle \cdot \rangle^{-\frac12}f
\|_{L^p(0,\infty)}\lesssim \||\cdot|^{\frac12-\delta}\langle \cdot \rangle^{-\frac12+\delta}f\|_{L^p(0,\infty)}
\lesssim \|f\|_{X^{p,\alpha}_\delta} \]
as well as
\[ \|\mathcal K_\Delta (f)\langle \cdot \rangle^\alpha \rho^\frac12 \|_{L^2(0,\infty)}
\lesssim \|f|\cdot|^\frac12 \langle \cdot \rangle^{\alpha-\frac12} \rho^\frac12 \|_{L^2(0,\infty)}
\lesssim \|f\|_{X^{p,\alpha}_\delta}. \]
It remains to study the off-diagonal contributions and to this end we set
\[ \varphi_A(\xi,\eta):=1_{A}(\xi,\eta)\left [1-\sum_{j\in\mathbb{Z}}\chi_j(\xi)
\chi_j(\eta) \right ] \]
for $A \subset [0,\infty)^2$.
We distinguish between large $\xi,\eta$ and small $\xi,\eta$ and consider the two truncated kernels
\[ K_-:=\varphi_{[0,1]^2}K_0,\quad K_+:=\varphi_{[0,\infty)^2\backslash [0,1]^2}K_0 \]
and denote by $\mathcal{K}_-$, $\mathcal{K}_+$ the respective operators.
Note that $(\xi,\eta)\in \mathrm{supp}(\varphi_{[0,1]^2})$ implies that $\eta \leq c \xi$ or
$\eta \geq \frac{1}{c}\xi$ for a suitable $c \in (0,1)$.
This implies $|\xi-\eta|\gtrsim \xi+\eta$
and from Theorem \ref{thm:K0} and Lemma \ref{lem:smxi} we obtain the estimate
$|K_-(\xi,\eta)|\lesssim \varphi_{[0,1]^2}(\xi,\eta)\eta^{-\frac12}$.
We infer
\begin{align*}
|\mathcal{K}_-f(\xi)|&\lesssim 1_{[0,1]}(\xi)\int_0^1 \eta^{-\frac12}|f(\eta)|d\eta
=1_{[0,1]}(\xi)\int_0^1 \eta^{-1+\delta}|\eta^{\frac12-\delta}f(\eta)|d\eta \\
&\lesssim 1_{[0,1]}(\xi)\||\cdot|^{\frac12-\delta}f\|_{L^p(0,1)}
\end{align*}
by H\"older's inequality and the condition $p'(1-\delta)<1$.
Note that this estimate implies $|\mathcal K_- f(\xi)|\lesssim 1_{[0,1]}(\xi)\|f\|_{X^{p,\alpha}_\delta}$ and also
$|\mathcal K_- f(\xi)|\lesssim 1_{[0,1]}(\xi)\|f\|_{Y^{p,\alpha}}$ for all $\xi\geq 0$.
Thus, we immediately obtain the bound
$\|\mathcal K_- f\|_{X^{p,\alpha}_\delta} \lesssim \|\mathcal K_- f\|_{L^\infty(0,1)}\lesssim \|f\|_{X^{p,\alpha}_\delta}$.
For the remaining two estimates it is crucial that the spectral measure $\rho$ be integrable
near $0$, i.e., we have
\[ \|\mathcal K_- f\|_{Y^{p,\alpha}} \lesssim \|\mathcal K_- f\|_{L^\infty(0,1)}\|\rho\|_{L^1(0,1)}\lesssim \|f\|_{X^{p,\alpha}_\delta} \]
and analogously, $\|\mathcal K_- f\|_{Y^{p,\alpha}} \lesssim \|f\|_{Y^{p,\alpha}}$.
In order to bound the operator $\mathcal{K}_+$ we note that, as before, we have
$|\xi-\eta|\gtrsim \xi+\eta$ on $\mathrm{supp}(\varphi_{[0,\infty)^2
\backslash [0,1]^2})$ and thus, by Theorem \ref{thm:K0} and
Lemma \ref{lem:lgxi} we infer $|K_+(\xi,\eta)|\leq C_N \langle \xi\rangle^{-N}\langle \eta \rangle^{-N}$
for any $N$.
This yields the three stated estimates for $\mathcal K_+$ as well.
\end{proof}
\subsection{Estimates for the operators $\mathcal{K}_{dc}$ and $\mathcal{K}_{cd}$}
It is now an easy exercise to conclude the respective boundedness for the remaining operators
$\mathcal{K}_{dc}$ and $\mathcal{K}_{cd}$.
\begin{lemma}
\label{lem:Kcd}
Let $\alpha\geq 0$, $\delta>0$ and $1<p<\infty$ be as in Proposition \ref{prop:KccX}.
Then we have the bounds
\[ \|\mathcal{K}_{cd}a\|_{X^{p,\frac18}_\delta}\lesssim |a|,\quad |\mathcal{K}_{dc}f|\lesssim \|f\|_{X^{p,\alpha}_\delta} \]
as well as
\[ \|\mathcal{K}_{cd}a\|_{Y^{p,\frac18}}\lesssim |a|,\quad |\mathcal{K}_{dc}g|\lesssim \|g\|_{Y^{p,\alpha}} \]
for all $a \in \mathbb{C}$ and $f \in {X^{p,\alpha}_\delta}$, $g \in {Y^{p,\alpha}}$.
\end{lemma}
\begin{proof}
Recall that
\begin{align*}\mathcal{K}_{cd}a(\xi)&=\frac{a}{\|\phi(\cdot,\xi_d)\|_{L^2(0,\infty)}^2} \int_0^\infty
\phi(R,\xi)[R\partial_R-1] \phi(R,\xi_d)dR \\
\mathcal{K}_{dc}f&=\int_0^\infty \int_0^\infty \phi(R,\xi_d)[R\partial_R\phi(R,\eta)-
2\eta \partial_\eta \phi(R,\eta)]f(\eta)\rho(\eta)d\eta dR.
\end{align*}
According to Lemma \ref{lem:Jostlgxi} and Corollary \ref{cor:phi} we obtain $|\mathcal{K}_{cd}a(\xi)|\lesssim
|a|\langle \xi \rangle^{-\frac32}$ by performing two integrations by parts.
This already shows $\|\mathcal{K}_{cd}a\|_{X^{p,\frac18}_\delta}\lesssim |a|$
and $\|\mathcal{K}_{cd}a\|_{Y^{p,\frac18}}\lesssim |a|$ as claimed.
Furthermore, the operator $\mathcal{K}_{dc}$ has a kernel similar to (in fact, better behaved than) that of $\mathcal{K}_{cc}$ and
we conclude $|\mathcal{K}_{dc}f|\lesssim \|f\|_{X^{p,\alpha}_\delta}$ as well as $|\mathcal K_{dc}g|\lesssim \|g\|_{Y^{p,\alpha}}$.
\end{proof}
We summarize our results.
\begin{corollary}
\label{cor:K}
Let $\delta>0$ be small and assume $p \in (1,\infty)$ to be so large that $p'(1-\delta)<1$.
Then the operator $\mathcal K$ satisfies the estimates
\begin{align*}
\|\mathcal K (a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|\mathcal K (a,f)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|\mathcal K (a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}
\end{align*}
for all $a \in \mathbb{C}$, $f \in X^{p,\frac18}_\delta$ and $g\in Y^{p,\frac18}$.
\end{corollary}
\begin{proof}
This is the content of Proposition \ref{prop:KccX} and Lemma \ref{lem:Kcd}.
\end{proof}
\subsection{Estimates for $[\mathcal{A},\mathcal{K}]$}
It remains to estimate the commutator $[\mathcal A,\mathcal K]$.
To this end recall that
\[ \mathcal A=\left (\begin{array}{cc} 0 & 0 \\ 0 & \mathcal A_c \end{array} \right ) \]
with $\mathcal A_cf(\xi)=-[2\xi\partial_\xi +\frac52 +\tfrac{\xi \rho'(\xi)}{\rho(\xi)}]f(\xi)$.
Thus, the commutator reads
\[ [\mathcal A,\mathcal K]=\left ( \begin{array}{cc}
0 & -\mathcal{K}_{dc}\mathcal A_c \\
\mathcal A_c\mathcal K_{cd} & [\mathcal A_c, \mathcal K_{cc}] \end{array} \right ). \]
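Indeed, since $\mathcal A=\mathrm{diag}(0,\mathcal A_c)$, the first row of $\mathcal A\mathcal K$ and the first column of $\mathcal K\mathcal A$ vanish, and the block computation
\[ \mathcal A\mathcal K-\mathcal K\mathcal A=\left ( \begin{array}{cc}
0 & 0 \\
\mathcal A_c\mathcal K_{cd} & \mathcal A_c \mathcal K_{cc} \end{array} \right )
-\left ( \begin{array}{cc}
0 & \mathcal K_{dc}\mathcal A_c \\
0 & \mathcal K_{cc}\mathcal A_c \end{array} \right ) \]
yields the stated form.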
Obviously, the most complicated contribution comes from $[\mathcal A_c, \mathcal K_{cc}]$.
Recall that $\mathcal K_{cc}$ is a continuous map from $C^\infty_c(0,\infty)$ to the space of distributions
$\mathcal D'(0,\infty)$.
Consequently, $[\mathcal A_c, \mathcal K_{cc}]: C^\infty_c(0,\infty)\to \mathcal{D}'(0,\infty)$ is well-defined
and continuous.
\begin{proposition}
\label{prop:KccAcc}
Let $\alpha,\delta,p$ be as in Proposition \ref{prop:KccX}.
Then $[\mathcal A_c,\mathcal K_{cc}]$ satisfies the bounds
\begin{align*}
\|[\mathcal A_c, \mathcal K_{cc}]f\|_{X^{p,\alpha}_\delta} &\lesssim \|f\|_{X^{p,\alpha}_\delta} \\
\|[\mathcal A_c, \mathcal K_{cc}]f\|_{Y^{p,\alpha}} &\lesssim \|f\|_{X^{p,\alpha}_\delta} \\
\|[\mathcal A_c, \mathcal K_{cc}]f\|_{Y^{p,\alpha}} &\lesssim \|f\|_{Y^{p,\alpha}}.
\end{align*}
\end{proposition}
\begin{proof}
In \cite{KST09}, Proposition 5.2 it is shown that $[\mathcal K_{cc},\mathcal A_c]$ has a similar
kernel as $\mathcal K_{cc}$. Thus, the verification of the stated bound consists of a repetition of
the above arguments that led to Proposition \ref{prop:KccX}.
\end{proof}
Finally, we bound the operators $\mathcal A_c\mathcal K_{cd}$ and $\mathcal K_{dc} \mathcal A_c$.
\begin{lemma}
\label{lem:AccKcd}
Let $\alpha,\delta,p$ be as in Proposition \ref{prop:KccX}.
Then we have the bounds
\[ \|\mathcal A_c \mathcal K_{cd}a\|_{X^{p,\frac18}_\delta}\lesssim |a|,\quad
|\mathcal K_{dc} \mathcal A_cf|\lesssim \|f\|_{X^{p,\alpha}_\delta} \]
as well as
\[ \|\mathcal A_c \mathcal K_{cd}a\|_{Y^{p,\frac18}}\lesssim |a|,\quad
|\mathcal K_{dc} \mathcal A_cg|\lesssim \|g\|_{Y^{p,\alpha}} \]
for all $a\in \mathbb{C}$ and $f\in {X^{p,\alpha}_\delta}$, $g\in {Y^{p,\alpha}}$.
\end{lemma}
\begin{proof}
We start with $\mathcal A_c \mathcal K_{cd}$.
If $0<\xi\ll 1$ we have $|\xi\partial_\xi \phi(R,\xi)|\lesssim \langle R \rangle$
for all $R>0$ by Lemmas \ref{lem:Phi}, \ref{lem:Jost} and Corollary \ref{cor:phi}.
This implies $|\mathcal A_c\mathcal K_{cd}a(\xi)|\lesssim |a|$ since $\phi(R,\xi_d)$ decays
exponentially as $R \to \infty$.
On the other hand, if $\xi\gtrsim 1$, we obtain by integration by parts the estimate
$|\mathcal A_c \mathcal K_{cd}a(\xi)|\lesssim |a|\langle \xi \rangle^{-\frac32}$ as in the proof of Lemma
\ref{lem:Kcd}. This shows $\|\mathcal A_c \mathcal K_{cd}a\|_{X^{p,\frac18}_\delta}\lesssim |a|$
and $\|\mathcal A_c \mathcal K_{cd}a\|_{Y^{p,\frac18}}\lesssim |a|$.
Furthermore, from \cite{KST09}, Theorem 5.1 we have the representation
\[ \mathcal K_{dc} \mathcal A_cf=\int_0^\infty \tilde{K}_d(\eta)f(\eta)d\eta \]
with $\tilde{K}_d$ bounded and rapidly decreasing.
This yields
\begin{align*} |\mathcal K_{dc}\mathcal A_cf|&\lesssim
\||\cdot|^{-\frac12+\delta}|\tilde{K}_d|^\frac12\|_{L^{p'}(0,\infty)}\||\cdot|^{\frac12-\delta}
|\tilde{K}_d|^\frac12 f\|_{L^p(0,\infty)}
\lesssim \||\cdot|^{\frac12-\delta}
|\tilde{K}_d|^\frac12 f\|_{L^p(0,\infty)}
\end{align*}
since $p'(-\frac12+\delta)>p'(-1+\delta)>-1$ by assumption and this yields
$ |\mathcal K_{dc}\mathcal A_cf|\lesssim \|f\|_{X^{p,\alpha}_\delta}$ as well as $|\mathcal K_{dc}\mathcal A_cg|\lesssim \|g\|_{Y^{p,\alpha}}$.
\end{proof}
By putting together Proposition \ref{prop:KccAcc} and Lemma \ref{lem:AccKcd} we arrive at
the desired result.
\begin{corollary}
\label{cor:AK}
Let $\delta>0$ be small and assume $p \in (1,\infty)$ to be so large that $p'(1-\delta)<1$.
Then the operator $[\mathcal A, \mathcal K]$ satisfies the estimates
\begin{align*}
\|[\mathcal A,\mathcal K] (a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|[\mathcal A, \mathcal K] (a,f)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,f)\|_{\mathbb{C} \times X^{p,\frac18}_\delta} \\
\|[\mathcal A, \mathcal K] (a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}&\lesssim \|(a,g)\|_{\mathbb{C} \times Y^{p,\frac18}}
\end{align*}
for all $a \in \mathbb{C}$, $f \in X^{p,\frac18}_\delta$ and $g\in Y^{p,\frac18}$.
\end{corollary}
\subsection{Construction of an exact solution in the forward lightcone}
We solve the main equation \eqref{eq:sysFourier} by a contraction argument.
To this end, we write $\mathcal{H}:=\mathrm{diag}(\mathcal{H}_d,\mathcal{H}_c)$ for the solution operator of the
transport equation \eqref{eq:xd}, \eqref{eq:x}.
Furthermore, we set
\begin{equation}
\label{eq:Contract}
\Phi(x_d,x):=\mathcal{H}\left [\mathcal{N}(x_d,x)+\mathcal{R}_\nu\mathcal B_\nu(x_d,x)+\mathcal{T}_\nu(x_d,x)+\hat{E}_2\right ]
\end{equation}
where
\begin{align*}
\mathcal{R}_\nu \mathcal B_\nu \left (\begin{array}{c}x_d \\ x \end{array} \right )(\tau,\xi)&:=-\beta_\nu(\tau)(2\mathcal K
+1)(\partial_\tau+\beta_\nu(\tau)\mathcal A)\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\cdot) \end{array} \right )(\xi) \\
\mathcal T_\nu \left (\begin{array}{c}x_d \\ x \end{array} \right )(\tau,\xi)&:=-\beta_\nu(\tau)^2
\left (\mathcal K^2+[\mathcal A,\mathcal K]+\mathcal K+\tfrac{\beta_\nu'(\tau)}{\beta_\nu(\tau)^2}\mathcal K\right )
\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\cdot) \end{array} \right )(\xi) \\
\hat{E}_2(\tau,\xi)&:=\left (\begin{array}{c}\hat{e}_2(\tau,\xi_d)\\ \hat{e}_2(\tau,\xi) \end{array} \right )
\end{align*}
and
\[ \mathcal B_\nu \left (\begin{array}{c}x_d \\ x \end{array} \right )(\tau,\xi):=
(\partial_\tau+\beta_\nu(\tau)\mathcal A)\left (\begin{array}{c}x_d(\tau) \\ x(\tau,\cdot) \end{array} \right )(\xi). \]
Thus, solutions of Eq.~\eqref{eq:sysFourier} correspond to fixed points of $\Phi$.
In order to simplify notation, we define the following Banach space in which we run our fixed
point argument.
\begin{definition}
\label{eq:defX}
Fix a $\delta$ with $2|\frac{1}{\nu}-1|<\delta<\frac18$ and a (large)
$p \in(2,\infty)$ such that $p'(1-\delta+2|\frac{1}{\nu}-1|)<1$. Then, for
$x_d: [\tau_0,\infty)\to\mathbb{R}$ and $x: [\tau_0,\infty)\times [0,\infty)\to \mathbb{R}$, where $\tau_0>0$,
we set
\[ \|(x_d,x)\|_{\mathcal X^{\tau_0,\nu}}:=\|(x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}
X^{p,\frac18}_\delta}+\|\mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}
Y^{p,\frac18}}. \]
The Banach space of the respective functions is denoted by $\mathcal X^{\tau_0,\nu}$.
\end{definition}
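Unpacking the notation (cf.~its use in the proof of Lemma \ref{lem:eps} below, where the time weight in $L^{\infty,\beta}_{\tau_0}$ encodes decay of order $\tau^{-\beta}$), the first summand of the norm controls
\[ \sup_{\tau\geq \tau_0}\tau^{3-2\delta}|x_d(\tau)|+\sup_{\tau\geq\tau_0}\tau^{2-4\delta}\|x(\tau,\cdot)\|_{X^{p,\frac18}_\delta}, \]
while the second summand additionally records decay of the transport derivative $\mathcal B_\nu(x_d,x)$ in the weaker space $Y^{p,\frac18}$.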
\begin{theorem}
\label{thm:Contract}
Let $\tau_0 \geq 1$ be sufficiently large and $\nu$ be sufficiently close to $1$.
Then $\Phi$ (as defined in Eq.~\eqref{eq:Contract}) maps the unit ball in $\mathcal X^{\tau_0,\nu}$ to itself
and is contractive, i.e.,
\[ \|\Phi(x_d,x)-\Phi(y_d,y)\|_{\mathcal X^{\tau_0,\nu}}\leq \tfrac12 \|(x_d,x)-(y_d,y)\|_{\mathcal X^{\tau_0,\nu}} \]
for all $(x_d,x), (y_d,y) \in \mathcal X^{\tau_0,\nu}$ with $\|(x_d,x)\|_{\mathcal X^{\tau_0,\nu}}\leq 1,
\|(y_d,y)\|_{\mathcal X^{\tau_0,\nu}}\leq 1$.
As a consequence, there exists a unique fixed point of $\Phi$ (and hence, a solution of
Eq.~\eqref{eq:sysFourier}) which belongs to $\mathcal{X}^{\tau_0,\nu}$ (in fact, to the unit ball
in $\mathcal X^{\tau_0,\nu}$).
\end{theorem}
\begin{proof}
We consider each term on the right-hand side of Eq.~\eqref{eq:Contract} separately.
\begin{enumerate}
\item According to Lemma \ref{lem:Fe2} we have $\hat{E}_2 \in L^{\infty,3-\delta}_{\tau_0}
\times L^{\infty,3-\delta}_{\tau_0}Y^{p,\frac18}$ and thus, we find
\[
\|\mathcal H \hat{E}_2\|_{L^{\infty,3-2\delta}_{\tau_0}\times
L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta}\lesssim
\tau_0^{-\delta}\|\hat{E}_2\|_{L^{\infty,3-\delta}_{\tau_0}
\times L^{\infty,3-\delta}_{\tau_0}Y^{p,\frac18}} \]
and also
\[ \|\mathcal B_\nu \mathcal H \hat{E}_2\|_{L^{\infty,3-2\delta}_{\tau_0}\times
L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}}\lesssim
\tau_0^{-\delta}\|\hat{E}_2\|_{L^{\infty,3-\delta}_{\tau_0}
\times L^{\infty,3-\delta}_{\tau_0}Y^{p,\frac18}} \]
by Proposition \ref{prop:H} and Lemma \ref{lem:Hd}.
Hence, since $\tau_0\geq 1$ is assumed large, we infer
$\|\mathcal H \hat E_2\|_{\mathcal X^{\tau_0,\nu}}\leq \frac18$.
\item By Corollaries \ref{cor:K} and \ref{cor:AK} we infer
\[ \|\mathcal T_\nu (x_d,x)\|_{L^{\infty,4-4\delta}_{\tau_0}\times L^{\infty,4-4\delta}_{\tau_0}Y^{p,\frac18}}
\lesssim \|(x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta} \]
and by Proposition \ref{prop:H} and Lemma \ref{lem:Hd} this implies
\[ \|\mathcal H \mathcal T_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta}
\lesssim \tau_0^{-1+2\delta}\|(x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta} \]
as well as
\[\|\mathcal B_\nu \mathcal H \mathcal T_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}}
\lesssim \tau_0^{-1+2\delta}\|(x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta}. \]
We obtain $\|\mathcal{H}\mathcal T_\nu (x_d,x)\|_{\mathcal{X}^{\tau_0,\nu}}\leq \frac18 \|(x_d,x)\|_{\mathcal{X}^{\tau_0,\nu}}$
provided that $\tau_0\geq 1$ is sufficiently large.
\item From Corollary \ref{cor:K} we obtain
\[ \|\mathcal R_\nu \mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,3-2\delta}_{\tau_0}Y^{p,\frac18}}
\lesssim |\tfrac{1}{\nu}-1| \|\mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}} \]
by recalling that $\beta_\nu(\tau)=-(\frac{1}{\nu}-1)\tau^{-1}$.
By Proposition \ref{prop:H} and Lemma \ref{lem:Hd} this yields
\[ \|\mathcal H \mathcal R_\nu \mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta}
\lesssim |\tfrac{1}{\nu}-1| \|\mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}}
\]
as well as
\[ \|\mathcal B_\nu \mathcal H \mathcal R_\nu \mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}}
\lesssim |\tfrac{1}{\nu}-1| \|\mathcal B_\nu (x_d,x)\|_{L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-2\delta}_{\tau_0}Y^{p,\frac18}}.
\]
Consequently, if $\nu$ is sufficiently close to $1$, we infer
$\|\mathcal H \mathcal R_\nu \mathcal B_\nu (x_d,x)\|_{\mathcal X^{\tau_0,\nu}}
\leq \frac18 \|(x_d,x)\|_{\mathcal X^{\tau_0,\nu}}$.
\item Finally, for the nonlinearity we infer from Lemma \ref{lem:R} that
\begin{align*}
\| \mathcal N(x_d,x)&-\mathcal N(y_d,y)\|_{L^{\infty,4}_{\tau_0}\times L^{\infty,3-2\delta}_{\tau_0}Y^{p,\frac18}} \\
&\lesssim \tau_0^{-\frac14+2\delta}\|(x_d,x)-(y_d,y)\|_{
L^{\infty,3-2\delta}_{\tau_0}\times L^{\infty,2-4\delta}_{\tau_0}X^{p,\frac18}_\delta}
\end{align*}
for all $(x_d,x)$, $(y_d,y)$ in the unit ball in $\mathcal X^{\tau_0,\nu}$.
As before, since $\tau_0$ is large, Proposition \ref{prop:H} and Lemma \ref{lem:Hd}
yield $\| \mathcal H \mathcal N(x_d,x)-\mathcal H\mathcal N(y_d,y)\|_{\mathcal X^{\tau_0,\nu}}
\leq \frac18 \|(x_d,x)-(y_d,y)\|_{\mathcal{X}^{\tau_0,\nu}}$ and by recalling that $\mathcal N(0)=0$ this also
implies
$\|\mathcal H \mathcal N(x_d,x)\|_{\mathcal X^{\tau_0,\nu}}\leq \frac18 \|(x_d,x)\|_{\mathcal X^{\tau_0,\nu}}$.
\end{enumerate}
The claim now follows by the contraction mapping principle.
\end{proof}
\subsection{Transformation to the physical space}
Recall that our original intention was to solve Eq.~\eqref{eq:eps}, given by
\begin{equation}
\label{eq:eps2}
\Box \varepsilon+5u_0^4 \varepsilon=5(u_0^4-u_2^4)\varepsilon-N(u_2,\varepsilon)-e_2,
\end{equation}
where $u_0(t,r)=\lambda(t)^\frac12 W(\lambda(t)r)$ and the functions $u_2$ and $e_2$ are the
approximate solution and the error
constructed in Section \ref{sec:approx}. However, $u_2$ and $e_2$ are only defined in the
forward lightcone
\[ K_{t_0,c}^\infty=\{(t,x) \in \mathbb{R} \times \mathbb{R}^3: t\geq t_0, 0\leq |x| \leq t-c \},\quad t_0>c>0 \]
and as a consequence, we had to introduce the smooth cut-off
$\chi_c(t,r):=\chi(\frac{t-r}{c})$ where $\chi(s)=0$ for $s\leq \frac12$ and $\chi(s)=1$ for
$s\geq 1$. Then we considered the truncated equation
\begin{equation}
\label{eq:eps3}
\Box \varepsilon+5u_0^4 \varepsilon=\chi_c \left [5(u_0^4-u_2^4)\varepsilon-N(u_2,\varepsilon)-e_2\right ]
\end{equation}
instead of Eq.~\eqref{eq:eps2}.
As a consequence of Theorem \ref{thm:Contract} we now have the following result which concludes
the construction of the solution inside the forward lightcone.
\begin{lemma}
\label{lem:eps}
Let $t_0>0$ be sufficiently large and $\nu$ be sufficiently close to $1$.
Then there exists a solution $\varepsilon$ of Eq.~\eqref{eq:eps3}
with
\[ \|(\varepsilon(t,\cdot),\varepsilon_t(t,\cdot))\|_{H^1(B_{2t})\times L^2(B_{2t})}\lesssim t^{-1} \]
for all $t\geq t_0$ where $B_{2t}:=\{x \in \mathbb{R}^3: |x|< 2t\}$.
\end{lemma}
\begin{proof}
Recall the transformations that led from Eq.~\eqref{eq:eps3} to the main system
Eq.~\eqref{eq:sysFourier} which
we have finally solved in Theorem \ref{thm:Contract}. Schematically, we had
\[ \varepsilon \xrightarrow{(\tau,R)} \tilde{v} \xrightarrow{\cdot R} v \xrightarrow{\mathcal{F}}
\left (\begin{array}{c}x_d \\ x \end{array} \right ) \]
and in order to prove the claim we simply have to undo these transformations.
The coordinates $(t,r)$ and $(\tau,R)$ are related by $\tau=\tfrac{1}{\nu}\lambda(t)t$ and
$R=\lambda(t)r$.
Thus, let $(x_d,x)\in \mathcal X^{\tau_0,\nu}$ be the solution of Eq.~\eqref{eq:sysFourier} constructed
in Theorem \ref{thm:Contract} (which exists since we assume $t_0$ and hence
$\tau_0=\tfrac{1}{\nu}\lambda(t_0)t_0$ to be
large) and set
\begin{align*}
\tilde{v}_-(\tau,R)&:=R^{-1}\mathcal{F}^{-1}\left (\begin{array}{c}x_d(\tau) \\(1-\chi) x(\tau,\cdot) \end{array}
\right)(R) \\
\tilde{v}_+(\tau,R)&:=R^{-1}\mathcal{F}^{-1}\left (\begin{array}{c}0 \\ \chi x(\tau,\cdot) \end{array}
\right)(R)
\end{align*}
where $\chi$ is again a smooth cut-off with $\chi(\xi)=0$ for, say, $\xi \leq \frac12$ and $\chi(\xi)=1$
for $\xi \geq 1$.
According to Lemma \ref{lem:F-1} we have $\tilde{v}_+(\tau,\cdot)\in H^\frac54(\mathbb{R}^3)$ (recall
that we have set $\alpha=\frac18$) and therefore, $\|\tilde{v}_+(\tau,\cdot)\|_{H^1(\mathbb{R}^3)}
\lesssim \|\tilde{v}_+(\tau,\cdot)\|_{H^\frac54(\mathbb{R}^3)}\lesssim \tau^{-2+4\delta}$
for some small $\delta>0$ (see Definition \ref{eq:defX}).
Furthermore, Lemma \ref{lem:F-1} yields $\|\tilde{v}_-(\tau,\cdot)\|_{\dot{H}^2(\mathbb{R}^3)}\lesssim
\tau^{-2+4\delta}$ and
also, $\||\cdot|\tilde{v}_-(\tau,\cdot)\|_{L^\infty(\mathbb{R}^3)}\lesssim \tau^{-2+4\delta}$.
Thus, we obtain
\[ \|\tilde{v}_-(\tau,\cdot)\|_{L^2(B_{2\nu\tau})}\lesssim \||\cdot|\tilde{v}_-(\tau,\cdot)\|_{L^\infty(\mathbb{R}^3)}
\||\cdot|^{-1}\|_{L^2(B_{2\nu\tau})}\lesssim \tau^{-\frac32+4\delta} \]
which implies $\|\tilde{v}_-(\tau,\cdot)\|_{H^2(B_{2\nu\tau})}\lesssim \tau^{-\frac32+4\delta}$.
By setting $\tilde{v}=\tilde{v}_-+\tilde{v}_+$ we infer
$\|\tilde{v}(\tau,\cdot)\|_{H^1(B_{2\nu\tau})}\lesssim \tau^{-\frac32+4\delta}$ and thus,
with $\varepsilon(t,r)=\tilde{v}(\tfrac{1}{\nu}\lambda(t)t,\lambda(t)r)$, we obtain
$\|\varepsilon(t,\cdot)\|_{H^1(B_{2t})}\lesssim t^{-1}$ since $\nu$ is assumed to be close to $1$.
For the time derivative recall that $\partial_t=\tilde{\lambda}(\tau)[\partial_\tau+\beta_\nu(\tau)R\partial_R]$
and thus,
\[ \partial_t \varepsilon(t,r)=\tilde{\lambda}(\tau)R^{-1}[\partial_\tau+\beta_\nu(\tau)
(R\partial_R-1)] v(\tau,R). \]
By the transference identity Eq.~\eqref{eq:K} we have
\[ R^{-1}[\partial_\tau+\beta_\nu(\tau)
(R\partial_R-1)] v(\tau,R)=R^{-1}\mathcal F^{-1}\left (
[\partial_\tau+\beta_\nu(\tau)(\mathcal A+\mathcal K)]\left (\begin{array}{c} x_d(\tau) \\
x(\tau,\cdot) \end{array} \right) \right ). \]
By Theorem \ref{thm:Contract}, Lemma \ref{lem:H2alpha}, Corollary \ref{cor:K} and the definition
of the space $Y^{p,\frac18}$ we therefore obtain the desired
$\|\partial_t \varepsilon(t,\cdot)\|_{L^2(B_{2t})}\lesssim t^{-1}$.
\end{proof}
\section{Extraction of initial data}
Let $\chi_c(t,r)$ be the smooth localizer to the truncated cone,
defined by $\chi_c(t,r)=\chi(\frac{t-r}{c})$, where $\chi$ is a fixed smooth cut-off with
$\chi(s)=0$ for $s\leq \frac12$ and $\chi(s)=1$ for $s\geq 1$.
Furthermore, we set $\theta(t,r):=1-\chi(\frac{r}{2t})$, i.e., $\theta(t,r)=1$ if $r\leq t$ and
$\theta(t,r)=0$ if $r\geq 2t$.
As a consequence of Lemma \ref{lem:eps} there exists a function $u_c$ of the form
\begin{equation}
\label{eq:solu}
u_c(t,r)=\lambda(t)^\frac12 W(\lambda(t)r)+\theta(t,r)[v_0(t,r)+v_1^g(t,r)+\varepsilon(t,r)]
+\chi_c(t,r)v_1^b(t,r),
\end{equation}
where $v_0$ and $v_1^g:=v_{11}^g+v_{12}^g$, $v_1^b:=v_{11}^b+v_{12}^b$ are from Lemmas \ref{lem:e1} and \ref{lem:v1}, and
$\Box u_c(t,r)+u_c(t,r)^5=0$ provided that $(t,r) \in K_{t_0,c}^\infty$ with $t_0>0$ sufficiently
large.
\subsection{Energy estimates}
As a first step we establish energy bounds for the solution $u_c$.
\begin{lemma}
\label{lem:boundE}
Let $t_0\geq 1$ be sufficiently large and suppose $\nu$ is sufficiently close to $1$.
Then we have the bounds
\begin{align*}
\|\chi_c(t,\cdot) \partial_t u_c(t,\cdot)\|_{L^2(\mathbb{R}^3)}&\lesssim
[\lambda(t)t]^{-\frac12}+c^{-\frac{\nu}{2}} \\
\|\chi_c(t,\cdot) \partial_t W_{\lambda(t)}\|_{L^2(\mathbb{R}^3)}&\lesssim [\lambda(t)t]^{-\frac12} \\
\|\nabla [u_c(t,\cdot)-W_{\lambda(t)}]\|_{L^2(\mathbb{R}^3)}
&\lesssim [\lambda(t)t]^{-\frac12} +c^{-\frac{\nu}{2}}
\end{align*}
for all $t\geq t_0> 2c\geq 1$
where, as before, $W_{\lambda(t)}(r)=\lambda(t)^\frac12 W(\lambda(t)r)$.
\end{lemma}
\begin{proof}
We consider each constituent of $u_c$ separately.
\begin{enumerate}
\item
For the time derivative of $W_{\lambda(t)}$ we have
\[ \partial_t [\lambda(t)^\frac12 W(\lambda(t)r)]
=\tfrac12 \lambda(t)^{-\frac12}\lambda'(t)W(\lambda(t)r)+\lambda(t)^\frac12 \lambda'(t)rW'(\lambda(t)r)\]
and, since
\[ \int_0^{t}|W(\lambda(t)r)|^2 r^2 dr=\lambda(t)^{-3}\int_0^{\lambda(t)t}|W(r)|^2 r^2 dr
\lesssim \lambda(t)^{-2}t, \]
we infer
\[ \|\chi_c(t,\cdot)\partial_t W_{\lambda(t)}\|_{L^2(\mathbb{R}^3)}
\lesssim \lambda(t)^{-\frac32}\lambda'(t)t^\frac12
\simeq t^{-\frac12 \nu}=[\lambda(t)t]^{-\frac12}. \]
\item According to Lemma \ref{lem:e1} we have the estimates
\begin{align*}
|v_0(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-2}r \\
|\partial_t v_0(t,r)|+|\partial_r v_0(t,r)|&\lesssim \lambda(t)^{-\frac12}[t^{-3}r+t^{-2}]\lesssim
\lambda(t)^{-\frac12}t^{-2}
\end{align*}
for all $t\geq t_0$ and
$0\leq r\leq 2t$. Furthermore, note that $|\partial_t \theta(t,r)|\simeq rt^{-2}|\chi'(\frac{r}{2t})|$
and $|\nabla \theta(t,r)|\simeq t^{-1}|\chi'(\frac{r}{2t})|$.
Thus, we obtain
\[ \|\chi_c(t,\cdot) \partial_t \theta(t,\cdot) v_0(t,\cdot)\|_{L^2(\mathbb{R}^3)}^2\lesssim
\lambda(t)^{-1}t^{-8}\int_{t}^{2t}r^6 dr\lesssim [\lambda(t)t]^{-1} \]
and analogously, $\|\nabla \theta(t,\cdot)v_0(t,\cdot)\|_{L^2(\mathbb{R}^3)}\lesssim [\lambda(t)t]^{-\frac12}$.
Similarly, we infer
\begin{align*}
\|\chi_c(t,\cdot)\theta(t,\cdot)\partial_t v_0(t,\cdot)\|_{L^2(\mathbb{R}^3)}^2
+\|\theta(t,\cdot)\nabla v_0(t,\cdot)\|_{L^2(\mathbb{R}^3)}^2&\lesssim
\lambda(t)^{-1}t^{-4}\int_0^{2t}r^2 dr \\
&\lesssim [\lambda(t)t]^{-1}.
\end{align*}
\item \label{item:3}
Note that $\theta(t,r)v_1^g(t,r)+\chi_c(t,r)v_1^b(t,r)=v_1(t,r)$ provided $r\leq \frac12 t$.
Thus, in the case $r\leq \frac12 t$ we put $v_1^g$ and $v_1^b$ together and use the bounds
\begin{align*}
|v_1(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-1} \\
|\partial_t v_1(t,r)|+|\partial_r v_1(t,r)|&
\lesssim \lambda(t)^{-\frac12}t^{-2}
\end{align*}
from Lemma \ref{lem:v1}.
If $\frac12 t\leq r\leq 2t$ we similarly have
\begin{align*}
|v_1^g(t,r)|&\lesssim \lambda(t)^{-\frac12}t^{-1} \\
|\partial_t v_1^g(t,r)|+|\partial_r v_1^g(t,r)|&
\lesssim \lambda(t)^{-\frac12}t^{-2}
\end{align*}
by Lemma \ref{lem:v1}
and thus, these terms can be treated in the exact same fashion as $v_0$.
\item For $\varepsilon$ it suffices to invoke the bound from Lemma \ref{lem:eps} which
immediately yields
\[ \|\chi_c(t,\cdot)\partial_t [\theta(t,\cdot)\varepsilon(t,\cdot)]\|_{L^2(\mathbb{R}^3)}+
\|\nabla [\theta(t,\cdot)\varepsilon(t,\cdot)]\|_{L^2(\mathbb{R}^3)}\lesssim
t^{-1}\lesssim [\lambda(t)t]^{-\frac12}. \]
\item Finally, we come to the most interesting contribution, the term $\chi_c v_1^b=\chi_c(v_{11}^b+v_{12}^b)$. We emphasize that the following estimates
are key to the whole construction.
We may restrict ourselves to $r\geq \frac12 t$ since the case $r\leq \frac12 t$ is already included
in point $\eqref{item:3}$ above.
We start with $\chi_c v_{11}^b$.
Lemma \ref{lem:v1} yields
\begin{align*}
|v_{11}^b(t,r)|&\lesssim
\lambda(t)^{-\frac12}t^{-1}(1-\tfrac{r}{t})^{\frac12(1-\nu)}\\
|\partial_t v_{11}^b(t,r)|+|\partial_r v_{11}^b(t,r)|
&\lesssim \lambda(t)^{-\frac12}t^{-2}(1-\tfrac{r}{t})^{\frac12(1-\nu)-1}
\end{align*}
for all $t\geq t_0$ and $\frac12 t\leq r< t$.
Furthermore, note that
\[ |\partial_t \chi_c(t,r)|+|\partial_r \chi_c(t,r)|\simeq \tfrac{1}{c}|\chi'(\tfrac{t-r}{c})|. \]
We infer
\begin{align*}
\|\partial_t \chi_c(t,\cdot)v_{11}^b(t,\cdot)\|_{L^2(\mathbb{R}^3\backslash B_{t/2})}^2&\lesssim
\tfrac{1}{c^2}\lambda(t)^{-1}t^{-2}\int_{t-c}^{t-\frac{c}{2}}(1-\tfrac{r}{t})^{1-\nu}r^2 dr \\
&=\tfrac{1}{c^2}\lambda(t)^{-1}t\int_{1-\frac{c}{t}}^{1-\frac{c}{2t}}(1-s)^{1-\nu}s^2 ds \\
&\lesssim \tfrac{1}{(2-\nu)c^2}\underbrace{\lambda(t)^{-1}t}_{t^{2-\nu}}\left (\tfrac{c}{t}\right)^{2-\nu}\lesssim c^{-\nu}
\end{align*}
and by the very same calculation we also obtain $\|\nabla \chi_c(t,\cdot)v_{11}^b(t,\cdot)\|_{L^2(\mathbb{R}^3)}\lesssim
c^{-\frac{\nu}{2}}$.
If the derivative hits $v_{11}^b$ we have
\begin{align*}
\|\chi_c(t,\cdot)\partial_t v_{11}^b(t,\cdot)\|_{L^2(\mathbb{R}^3 \backslash B_{t/2})}^2
&\lesssim
\lambda(t)^{-1}t^{-4}\int_{\frac12 t}^{t-\frac{c}{2}}(1-\tfrac{r}{t})^{-\nu-1}r^2 dr \\
&=[\lambda(t)t]^{-1}\int_{\frac12}^{1-\frac{c}{2t}}(1-s)^{-\nu-1}s^2 ds \\
&\lesssim \tfrac{1}{\nu}t^{-\nu}\left(\tfrac{c}{2t}\right)^{-\nu}\lesssim c^{-\nu}
\end{align*}
and analogously we get $\|\chi_c(t,\cdot)\nabla v_{11}^b(t,\cdot)\|_{L^2(\mathbb{R}^3)}\lesssim c^{-\frac{\nu}{2}}$
as well. The term $v_{12}^b$ is handled by the exact same computations upon replacing $\nu$ by $3\nu$.
\end{enumerate}
\end{proof}
Next, we estimate the residual energy outside the (truncated) lightcone $K_{t_0,c}^\infty$.
\begin{lemma}
\label{lem:boundEout}
Under the assumptions of Lemma \ref{lem:boundE} we have the bounds
\begin{align*}
\|\chi_c(t,\cdot)\partial_t W_{\lambda(t)}\|_{L^2(\mathbb{R}^3\backslash B_{t-c})}&\lesssim
[\lambda(t)t]^{-\frac12} \\
\|\nabla W_{\lambda(t)}\|_{L^2(\mathbb{R}^3\backslash B_{t-c})}
&\lesssim [\lambda(t)t]^{-\frac12}
\end{align*}
for all $t\geq t_0 > 2c\geq 2$ where $B_{t-c}=\{x\in \mathbb{R}^3: |x|<t-c\}$.
\end{lemma}
\begin{proof}
The first bound follows from Lemma \ref{lem:boundE}.
Thus, it suffices to note that
\begin{align*}
\|\nabla W_{\lambda(t)}\|_{L^2(\mathbb{R}^3\backslash B_{t-c})}^2&\simeq
\lambda(t)\int_{t-c}^\infty |\partial_r W(\lambda(t)r)|^2r^2 dr \\
&\lesssim \lambda(t)^3\int_{t-c}^\infty
\lambda(t)^{-4}r^{-2} dr
\lesssim \lambda(t)^{-1}(t-c)^{-1} \\
&\lesssim [\lambda(t)t]^{-1}.
\end{align*}
\end{proof}
To conclude the energy bounds, we provide estimates for the $L^6$ norms.
\begin{corollary}
\label{cor:L6}
Under the assumptions of Lemma \ref{lem:boundE} we have
\begin{align*}
\|u_c(t,\cdot)-W_{\lambda(t)}\|_{L^6(\mathbb{R}^3)}&\lesssim [\lambda(t)t]^{-\frac12}+c^{-\frac{\nu}{2}} \\
\|W_{\lambda(t)}\|_{L^6(\mathbb{R}^3\backslash B_{t-c})}&\lesssim [\lambda(t)t]^{-\frac12}
\end{align*}
for all $t\geq t_0>2c\geq 2$.
\end{corollary}
\begin{proof}
The first assertion is an immediate consequence of Lemma \ref{lem:boundE}
and the Sobolev embedding $\dot{H}^1(\mathbb{R}^3)\hookrightarrow L^6(\mathbb{R}^3)$.
For the second bound we calculate explicitly
\begin{align*}
\|W_{\lambda(t)}\|_{L^6(\mathbb{R}^3\backslash B_{t-c})}^6\lesssim \lambda(t)^3 \int_{t-c}^\infty
\lambda(t)^{-6}r^{-6}r^2dr\lesssim \lambda(t)^{-3}(t-c)^{-3}\lesssim [\lambda(t)t]^{-3}.
\end{align*}
\end{proof}
\subsection{Extension of the solution to the whole space}
Lemmas \ref{lem:boundE} and \ref{lem:boundEout} show that the bulk of the energy of $u_c$ is concentrated
on the soliton inside the truncated lightcone.
Now we pick a sequence of times $(T_n)$ with $T_n \geq t_0$ and $T_n \to \infty$ as $n\to \infty$.
Then we consider the sequence of Cauchy data $(f_c^n,g_c^n)$ given by
\begin{equation}
\label{eq:fg}
\begin{aligned}
f_c^n(r)&=u_c(T_n,r) \\
g_c^n(r)&=\chi_c(T_n,r)\partial_t u_c(t,r)|_{t=T_n}.
\end{aligned}
\end{equation}
Since $\chi_c(T_n,r)\equiv 1$ for $r\leq T_n-c$, we have
\footnote{Here and in the following we employ the convenient abbreviation $u[t]=(u(t,\cdot),\partial_t u(t,\cdot))$.}
$(f_c^n,g_c^n)=u_c[T_n]$ on $B_{T_n-c}$.
As a consequence of Lemma \ref{lem:boundE}, the sequence $(f_c^n,g_c^n)$ is bounded
in $\dot{H}^1\times L^2(\mathbb{R}^3)$, uniformly in $n \in \mathbb{N}$.
Now we solve the equation backwards in time with data $(f_c^n,g_c^n)$ at $t=T_n$.
For the following it is useful to introduce the notation
\[ K_{t_0,c}^{T_n}:=\{(t,x) \in \mathbb{R}\times \mathbb{R}^3: t_0\leq t\leq T_n, |x|\leq t-c\}. \]
Furthermore, for $U\subset \mathbb{R}^3$ open we set
\begin{align*}
\mathcal{E}_U(f,g)&:=\frac12 \int_U (|\nabla f(x)|^2+|g(x)|^2)dx-\frac16 \int_U |f(x)|^6 dx \\
&=\tfrac12 \|(f,g)\|_{\dot{H}^1\times L^2(U)}^2-\tfrac16 \|f\|_{L^6(U)}^6.
\end{align*}
Thus, $\mathcal{E}_{\mathbb{R}^3}$ is the energy functional associated with the focusing quintic wave equation.
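For orientation we record the standard value of the ground state energy: testing $\Delta W+W^5=0$ with $W$ and integrating by parts gives $\|\nabla W\|_{L^2(\mathbb{R}^3)}^2=\|W\|_{L^6(\mathbb{R}^3)}^6$, whence
\[ \mathcal E_{\mathbb{R}^3}(W,0)=\tfrac12\|\nabla W\|_{L^2(\mathbb{R}^3)}^2-\tfrac16\|W\|_{L^6(\mathbb{R}^3)}^6
=\tfrac13 \|\nabla W\|_{L^2(\mathbb{R}^3)}^2>0. \]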
\begin{lemma}
\label{lem:extend}
Let $t_0\geq 1$ and $c\geq 1$ be sufficiently large and assume $t_0\geq 2c$.
Then, for any $n\in\mathbb{N}$, there exists an energy class solution $u^{(T_n)}$ of
\[ \left \{ \begin{array}{l}
\Box u^{(T_n)}(t,x)+u^{(T_n)}(t,x)^5=0,\quad (t,x) \in [t_0,T_n] \times \mathbb{R}^3 \\
u^{(T_n)}[T_n]=(f_c^n,g_c^n) \end{array} \right . \]
which satisfies $u^{(T_n)}=u_c$ on $K_{t_0,c}^{T_n}$.
Furthermore,
\[ \|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t_0-c})}\to 0 \]
as $t_0,c \to \infty$, uniformly in $n$.
\end{lemma}
\begin{proof}
Recall that, given data in $\dot{H}^1\times L^2(\mathbb{R}^3)$,
the Cauchy problem for the quintic wave equation can be solved locally in time.
Furthermore, if the data are \emph{small} in $\dot{H}^1\times L^2(\mathbb{R}^3)$, the corresponding
solution exists globally in time, see \cite{Pec84} or \cite{Sog08}, p.~142, Theorem 3.1.
Given any $\delta>0$, we have $\|(f_c^n,g_c^n)\|_{\dot{H}^1(\mathbb{R}^3)\times L^2(\mathbb{R}^3)}\leq
\|W\|_{\dot{H}^1(\mathbb{R}^3)}+\delta$ for all $n\in \mathbb{N}$ if we assume $t_0$ and $c$ to be sufficiently
large (see Lemma \ref{lem:boundE}).
Thus, we infer the existence of $u^{(T_n)}$ on $(T_n^*,T_n]\times \mathbb{R}^3$
with some $T_n^*<T_n$ and we assume $T_n^*$ to be minimal with this property.
Furthermore, the map
\[ t \mapsto \|u^{(T_n)}[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}: (T_n^*,T_n]\to \mathbb{R} \]
is continuous (\cite{Sog08}, p.~142, Theorem 3.1).
If $T_n^*\leq t_0$ we are done.
Thus, assume that $T_n^*>t_0$.
By causality it is clear that $u^{(T_n)}=u_c$ on $({K}_{T_n^*,c}^{T_n})^\circ$, the interior
of the truncated lightcone.
Furthermore, by Lemma \ref{lem:boundE} and Corollary \ref{cor:L6}, the data satisfy
\begin{align*}
\|(f_c^n,g_c^n)-(W_{\lambda(T_n)},0)\|_{\dot{H}^1 \times L^2(\mathbb{R}^3)}&\lesssim \delta \\
\|f_c^n-W_{\lambda(T_n)}\|_{L^6(\mathbb{R}^3)}&\lesssim \delta
\end{align*}
and, by using that $\|\nabla W_{\lambda(t)}\|_{L^2(\mathbb{R}^3)}=\|\nabla W\|_{L^2(\mathbb{R}^3)}$ and analogously
for $\|W_{\lambda(t)}\|_{L^6(\mathbb{R}^3)}$,
this implies $|\mathcal E_{\mathbb{R}^3}(f_c^n,g_c^n)-\mathcal E_{\mathbb{R}^3} (W,0)|\lesssim \delta^2+\delta^6\lesssim \delta^2$ for the total energy.
In other words, the bulk of the total energy is concentrated on the soliton.
By conservation of energy we infer that this has to hold for all times, i.e.,
$|\mathcal E_{\mathbb{R}^3} (u^{(T_n)}[t])-\mathcal E_{\mathbb{R}^3} (W,0)|\lesssim \delta^2$
for all $t \in (T_n^*, T_n]$.
Since $u^{(T_n)}=u_c$ on $(K_{T^*_n,c}^{T_n})^\circ$, we infer from Lemma \ref{lem:boundE}
and Corollary \ref{cor:L6} the bounds
\begin{align*}
\|u^{(T_n)}[t]-(W_{\lambda(t)},0)\|_{\dot{H}^1\times L^2(B_{t-c})}&\lesssim \delta \\
\|u^{(T_n)}(t,\cdot)-W_{\lambda(t)}\|_{L^6(B_{t-c})}&\lesssim \delta
\end{align*}
which imply $|\mathcal E_{B_{t-c}}(u^{(T_n)}[t])-\mathcal E_{B_{t-c}}(W_{\lambda(t)},0)|\lesssim \delta^2$.
Moreover, by Lemma \ref{lem:boundEout} and Corollary \ref{cor:L6} we have
$|\mathcal E_{\mathbb{R}^3\backslash B_{t-c}}(W_{\lambda(t)},0)|\lesssim \delta^2$ and thus,
$|\mathcal E_{\mathbb{R}^3\backslash B_{t-c}}(u^{(T_n)}[t])|\lesssim \delta^2$ which shows that the
total energy of the solution $u^{(T_n)}$ outside the truncated lightcone stays small for
all times.
By a slight modification of the Sobolev inequality we infer
$\|u^{(T_n)}(t,\cdot)\|_{L^6(\mathbb{R}^3\backslash B_{t-c})}\lesssim \|\nabla u^{(T_n)}(t,\cdot)\|_{L^2(\mathbb{R}^3\backslash
B_{t-c})}$ with an implicit constant that is independent of $t$.
This estimate implies
\begin{align*} \delta^2&\gtrsim \mathcal E_{\mathbb{R}^3\backslash B_{t-c}}(u^{(T_n)}[t])
=\tfrac12 \|u^{(T_n)}[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t-c})}^2-\tfrac16
\|u^{(T_n)}(t,\cdot)\|_{L^6(\mathbb{R}^3\backslash B_{t-c})}^6 \\
&\geq \tfrac12 \|u^{(T_n)}[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t-c})}^2
\left [1-C\|u^{(T_n)}[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t-c})}^4 \right ]
\end{align*}
for all $t\in (T_n^*,T_n]$.
Initially, at $t=T_n$, we have
\[ \|u^{(T_n)}[T_n]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{T_n-c})}
=\|(f_c^n,g_c^n)\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{T_n-c})}\leq \delta \]
by Lemma \ref{lem:boundEout} and therefore, we must have
\[ \|u^{(T_n)}[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t-c})}^2\lesssim \delta^2 \]
for all $t \in (T_n^*,T_n]$ (provided $\delta>0$ is sufficiently small)
since the map $t \mapsto u^{(T_n)}[t]$ is continuous from $(T_n^*,T_n]$
to $\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t-c})$ by a classical $\frac{\varepsilon}{2}$ argument.
We conclude that not only the total energy but also the \emph{kinetic} energy of $u^{(T_n)}$
stays small outside the truncated cone.
Consequently, the small data global existence result
allows us to extend the solution beyond time $T_n^*$ which contradicts the minimality of $T_n^*$.
Thus, we must have $T_n^*\leq t_0$ and the Lemma is proved.
\end{proof}
\subsection{The Bahouri-G\'erard decomposition}
Our idea now is to consider the sequence of data $u^{(T_n)}[t_0]$ and attempt to extract
a limit as $n\to \infty$.
In effect, we shall not be able to do so, but we shall nonetheless be able to construct
new initial data $u_*[t_0]$ resulting in an energy class solution $u_*(t,x)$ defined on all of
$[t_0,\infty) \times \mathbb{R}^3$ with the property that
\[ u_*=u_c \mbox{ on }K_{t_0,c}^\infty. \]
In order to achieve this, we apply the celebrated Bahouri-G\'erard decomposition \cite{BG99}.
From now on we always assume $t_0$ and $c$ to be sufficiently large with $t_0\geq 2c$.
Note also that no space translations are necessary in the following lemma since all functions
are in fact radial.
Furthermore, it is convenient to introduce the notation
\[ u^{\lambda,t_0}(t,x):=\lambda^{-\frac12}u\left (\frac{t-t_0}{\lambda}, \frac{x}{\lambda}\right ). \]
\begin{lemma}[Linear profile decomposition]
\label{lem:BG}
Consider the sequence $(u^{(T_n)}[t_0])_{n\in\mathbb{N}}$ of Cauchy data for the
solutions constructed in Lemma \ref{lem:extend}.
Then, upon passing to a subsequence,
there exists a sequence $(V_i)_{i\in\mathbb{N}}$ of (fixed) radial free waves
\footnote{A ``free wave'' is a function $v:\mathbb{R}\times\mathbb{R}^3\to\mathbb{R}$ such that the map
$t \mapsto \|v[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}$ is continuous (in particular,
$\|v[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\lesssim 1$ for any $t\in\mathbb{R}$) and
$v(t,\cdot)=\cos(t|\nabla|)v(0,\cdot)+|\nabla|^{-1}\sin(t|\nabla|)\partial_1 v(0,\cdot)$, i.e.,
$\Box v=0$ in the weak sense.},
such that
\begin{equation}
\label{eq:BG}
u^{(T_n)}[t_0]=\sum_{i=1}^A V_i^{\lambda_n^i,t_n^i}[t_0]
+W^{nA}[t_0]
\end{equation}
for all $n, A\in \mathbb{N}$ where $(t_n^i)_{n\in\mathbb{N}}$ and $(\lambda_n^i)_{n\in\mathbb{N}}$
are suitable sequences of times and positive
scaling factors, respectively, that satisfy
\begin{equation}
\label{eq:orth}
\left |\log \left ( \frac{\lambda_n^i}{\lambda_n^j}\right )\right |+\frac{|t_n^i-t_n^j|}{\lambda_n^i}
\to \infty,\quad i\not= j
\end{equation}
as $n \to \infty$, and $W^{nA}$ is a free wave with the property
\[ \lim_{A\to \infty} \limsup_{n\to \infty}\|W^{nA}\|_{L^\infty(\mathbb{R}) L^6(\mathbb{R}^3)}=0. \]
Furthermore, for any $A\in\mathbb{N}$, we have asymptotic orthogonality in the sense that
\begin{align*}
\left ( \left . V_i^{\lambda_n^i,t_n^i}[t_0]\right | V_j^{\lambda_n^j,t_n^j}[t_0] \right )_{\dot{H}^1
\times L^2(\mathbb{R}^3)} &\to 0, \quad 1\leq i,j\leq A,\quad i\not= j \\
\left ( \left . V_i^{\lambda_n^i,t_n^i}[t_0]\right | W^{nA}[t_0] \right )_{\dot{H}^1
\times L^2(\mathbb{R}^3)} &\to 0,\quad 1\leq i\leq A \\
\end{align*}
as $n\to\infty$.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:boundE} we deduce
\[ \|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(B_{t_0-c})}
=\|u_c[t_0]\|_{\dot{H}^1 \times L^2(B_{t_0-c})}\lesssim 1 \]
and in the proof of Lemma \ref{lem:extend} above we had
\[ \|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t_0-c})}\lesssim \delta \]
for all $n\in\mathbb{N}$.
Thus, we infer
\[\sup_{n\in\mathbb{N}}\|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\lesssim 1\]
and
everything follows by the main theorem in \cite{BG99} and the remark on p.~159.
\end{proof}
\begin{remark}
\label{rem:locorth}
We will also use a kind of ``localized'' orthogonality which can be derived as follows. Suppose
$(v_n[t_0]|w_n[t_0])_{\dot H^1\times L^2(\mathbb{R}^3)}=o(1)$ as $n\to\infty$ and the energy of
$v_n[t_0]$ concentrates in $U\subset \mathbb{R}^3$ as $n\to \infty$, i.e.,
\[ \|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}=
\|v_n[t_0]\|_{\dot H^1\times L^2(U)}+o(1) \]
as $n\to\infty$, or, in other words,
\[ \|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3 \backslash U)} \to 0 \quad (n\to\infty).\]
Of course, we also assume that $\|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}, \|w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}\lesssim 1$
for all $n$.
Then
we have the localized orthogonality
\begin{align*}
(v_n[t_0]|w_n[t_0])_{\dot H^1\times L^2(U)}
&=(v_n[t_0]|w_n[t_0])_{\dot H^1\times L^2(\mathbb{R}^3)}-(v_n[t_0]|w_n[t_0])_{\dot H^1\times L^2(\mathbb{R}^3\backslash U)}=o(1)
\end{align*}
since
\[ \big |(v_n[t_0]|w_n[t_0])_{\dot H^1\times L^2(\mathbb{R}^3\backslash U)}\big |\leq \|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash U)}
\|w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash U)}\to 0 \]
as $n\to\infty$.
\end{remark}
A triple $(V_i,(\lambda_n^i),(t_n^i))$ as in Lemma \ref{lem:BG} is called a (concentration) profile.
As a first step we now show that certain profiles with scaling factors tending to zero
cannot exist.
Heuristically speaking, such profiles are excluded by the fact that they would concentrate at the origin
as $n\to \infty$ but near $x=0$, $u^{(T_n)}[t_0]$ equals $u_c[t_0]$ and is thus independent
of $n$.
In order to make the argument rigorous, one has to show that the concentration effect cannot be
``cancelled'' by the error term $W^{nA}$.
This is a consequence of the asymptotic orthogonality of the profiles.
Before coming to that, however, we introduce another notion.
We call a profile $(V_i,(\lambda_n^i),(t_n^i))$ \emph{bounded}, if
$\lambda_n^i\simeq 1$ and $|t_0-t_n^i|\lesssim 1$ for all $n\in\mathbb{N}$.
Otherwise, the profile is called \emph{unbounded}.
Note that by condition \eqref{eq:orth} there exists at most one (nonzero) bounded profile
in the decomposition Eq.~\eqref{eq:BG}.
\begin{lemma}
\label{lem:ZA}
Consider the decomposition given in Lemma \ref{lem:BG} and suppose there exists a profile
$(V_i, (\lambda_n^i), (t_n^i))$ with
$\lambda_n^i \to 0$ as $n\to \infty$ and $\frac{|t_0-t_n^i|}{\lambda_n^i}\lesssim 1$ for all $n\in\mathbb{N}$.
Then $V_i=0$.
\end{lemma}
\begin{proof}
Fix $A\in\mathbb{N}$ and
let $V_b$ be the unique bounded profile ($V_b$ might be zero in which
case we set $\lambda_n^b=1$ and $t_n^b=0$ for all $n\in \mathbb{N}$).
First, we claim that for any given $\varepsilon>0$ we can find a $\delta>0$ such that
\begin{equation}
\label{eq:epsdelta}
\left \|u^{(T_n)}[t_0]-V_b^{\lambda_n^b,t_n^b}[t_0]
\right \|_{\dot{H}^1\times L^2(B_{\delta})}<\varepsilon
\end{equation}
for all $n\in\mathbb{N}$.
Indeed, $u^{(T_n)}[t_0]|_{B_\delta}$ is independent of $n$ for small enough $\delta$ since
by construction we have
$u^{(T_n)}[t_0]=u_c[t_0]$ on $B_{t_0-c}$, see Lemma \ref{lem:extend}.
This shows that $\|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(B_\delta)}\to 0$ as $\delta \to 0$,
uniformly in $n$.
Furthermore,
by scaling and energy conservation we have
\[ \|V_i^{\lambda_n^i,t_n^i}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}=\|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\]
for any profile $V_i$ (bounded or unbounded).
Since $\lambda_n^b\simeq 1$ and $|t_n^b|\lesssim 1$ for all $n\in\mathbb{N}$, we obtain
by the continuity of $t\mapsto \|V_b[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}$ the bound
$\|V_b^{\lambda_n^b,t_n^b}[t_0]\|_{\dot{H}^1\times L^2(B_\delta)}
<\tfrac{\varepsilon}{2}$
for all $n\in \mathbb{N}$ provided $\delta>0$ is sufficiently small.
Consequently, the triangle inequality yields the claim \eqref{eq:epsdelta}.
Note that by Eq.~\eqref{eq:BG}, Eq.~\eqref{eq:epsdelta} is equivalent to
\[ \left \|\sum_{i=1,i\not=b}^A V_i^{\lambda_n^i,t_n^i}[t_0]+W^{nA}[t_0] \right \|_{\dot{H}^1\times
L^2(B_\delta)}<\varepsilon. \]
We write $i\in Z_A$ iff $\lambda_n^i\to 0$ as $n\to\infty$
and $\frac{|t_0-t_n^i|}{\lambda_n^i}\lesssim 1$ for all $n\in\mathbb{N}$.
Now observe that, for $i\in Z_A$,
\begin{align*}
\|V_i^{\lambda_n^i,t_n^i}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_\delta)}
&\simeq \left \|\partial_1 V_i\left (\frac{t_0-t_n^i}{\lambda_n^i},\cdot\right )\right
\|_{L^2(\mathbb{R}^3\backslash B_{\delta/\lambda_n^i})} \\
&\quad+\left \|\nabla V_i\left (\frac{t_0-t_n^i}{\lambda_n^i},\cdot\right )\right
\|_{L^2(\mathbb{R}^3\backslash B_{\delta/\lambda_n^i})} \to 0
\end{align*}
as $n\to\infty$ by the continuity of $t\mapsto \|V_i[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}$.
For brevity we write
\begin{align*}
v_n[t_0]:=\sum_{i \in Z_A} V_i^{\lambda_n^i,t_n^i}[t_0],\quad \quad w_n[t_0]:=\sum_{i \notin Z_A,i\not= b}
V_i^{\lambda_n^i,t_n^i}[t_0]+W^{nA}[t_0].
\end{align*}
By the pairwise orthogonality of the profiles and the triangle inequality we obtain
\begin{align*}
\varepsilon^2&>\left \|\sum_{i=1,i\not=b}^A V_i^{\lambda_n^i,t_n^i}[t_0]+W^{nA}[t_0]
\right \|_{\dot{H}^1\times
L^2(B_\delta)}^2=\|v_n[t_0]+w_n[t_0]\|_{\dot H^1\times L^2(B_\delta)}^2 \\
&=\|v_n[t_0]+w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}^2-\|v_n[t_0]+w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash B_\delta)}^2 \\
&\geq\|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}^2+\|w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}^2
-\|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash B_\delta)}^2-\|w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash B_\delta)}^2 \\
&\quad -2\|v_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash B_\delta)}
\|w_n[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3\backslash B_\delta)}+o(1) \\
&= \sum_{i\in Z_A}\|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}^2
+\left \|\sum_{i\notin Z_A,i\not=b} V_i^{\lambda_n^i,t_n^i}[t_0]+W^{nA}[t_0]
\right \|_{\dot{H}^1\times
L^2(B_\delta)}^2+o(1)
\end{align*}
as $n\to\infty$.
Consequently, $\|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\lesssim\varepsilon$
for all $i\in Z_A$ provided $n$ is sufficiently large and,
since $\varepsilon>0$ and $A\in\mathbb{N}$ were arbitrary, this yields the claim.
\end{proof}
Our next goal is to prove that the energy of \emph{all} unbounded profiles is small.
As a preparation for this we need the following elementary observation which is just an instance
of the strong Huygens' principle. As always, it suffices to consider the radial
case for our purposes.
\begin{lemma}[Strong Huygens' principle]
\label{lem:Huygens}
Let $v:\mathbb{R}\times \mathbb{R}^3\to\mathbb{R}$ be a (radial) free wave and set
\[ A(R_1,R_2):=\{x\in\mathbb{R}^3: R_1<|x|<R_2\}. \]
Then we have the estimate
\[ \|v[t]\|_{\dot{H}^1\times L^2(B_R)}\lesssim \|v[0]\|_{\dot{H}^1\times L^2(A(|t|-R,|t|+R))}
+\|v(0,\cdot)\|_{L^6(A(|t|-R,|t|+R))}\]
for all $t\in \mathbb{R}$ and all $R>0$ provided $|t|\geq R$.
\end{lemma}
\begin{proof}
Note first that by the Sobolev embedding $\dot{H}^1(\mathbb{R}^3)\hookrightarrow L^6(\mathbb{R}^3)$ we may assume
$v(t,\cdot)\in L^6(\mathbb{R}^3)$ for all $t\in\mathbb{R}$.
Furthermore, by the time reflection symmetry it suffices to consider the case $t\geq R$. Since
$v$ is radial, it is given explicitly by
d'Alembert's formula
\[ v(t,r)=\frac{1}{2r}\left [(t+r)f(t+r)-(t-r)f(t-r)+\int_{t-r}^{t+r}sg(s)ds \right ] \]
for $0\leq r\leq t$ where $(f,g)=v[0]$.
Based on this formula it is straightforward to prove the claimed estimate by using Hardy's and
H\"older's inequalities.
\end{proof}
We obtain a simple corollary which applies to certain concentration profiles in the
Bahouri-G\'erard decomposition.
\begin{corollary}
\label{cor:leave}
Suppose $v:\mathbb{R}\times \mathbb{R}^3\to\mathbb{R}$ is a (radial) free wave and let $(\lambda_n)_{n\in\mathbb{N}}$, $(t_n)_{n\in\mathbb{N}}$ be sequences of
(positive) scaling factors and times, respectively.
If
\begin{itemize}
\item $\lambda_n\to\infty$
\item or $\frac{|t_0-t_n|-R}{\lambda_n}\to\infty$ as $n\to \infty$
\end{itemize}
then
\[ \|v^{\lambda_n,t_n}[t_0]\|_{\dot{H}^1\times L^2(B_R)}\to 0 \]
as $n\to\infty$ for any (fixed) $R>0$.
\end{corollary}
\begin{proof}
Note that by scaling we have
\begin{equation}
\label{eq:vt0}
\|v^{\lambda_n,t_n}[t_0]\|_{\dot{H}^1\times L^2(B_R)}^2=\left \|\partial_1
v \left (\frac{t_0-t_n}{\lambda_n},\cdot\right )\right \|_{L^2(B_{R/\lambda_n})}^2
+\left \|\nabla
v \left (\frac{t_0-t_n}{\lambda_n},\cdot\right )\right \|_{L^2(B_{R/\lambda_n})}^2.
\end{equation}
First, we consider the case $\lambda_n\to \infty$.
If $\frac{|t_0-t_n|}{\lambda_n}\lesssim 1$ for all $n\in\mathbb{N}$ then
the continuity of $t\mapsto \|v[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}$ and Eq.~\eqref{eq:vt0}
show that $\|v^{\lambda_n,t_n}[t_0]\|_{\dot{H}^1\times L^2(B_R)}\to 0$ as $n\to \infty$.
On the other hand, if $\frac{|t_0-t_n|}{\lambda_n}\to\infty$, we must have $|t_0-t_n|\to\infty$ and
thus, $|t_0-t_n|/\lambda_n\geq R/\lambda_n$ for large $n$. Consequently,
Lemma \ref{lem:Huygens} yields
the claim.
The second case is a direct consequence of
Lemma \ref{lem:Huygens}.
\end{proof}
Now we can show the aforementioned smallness of the unbounded profiles.
\begin{lemma}
\label{lem:ubsmall}
Let $\varepsilon>0$.
For the energy of any unbounded profile $(V_i, (\lambda_n^i), (t_n^i))$
in the decomposition Eq.~\eqref{eq:BG} we have the bound
\[ \|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\leq
\|u^{(T_n)}[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash B_{t_0-c})}+\varepsilon \]
as $n\to \infty$.
\end{lemma}
\begin{remark}
By Lemma \ref{lem:extend} we see that the energy of the unbounded profiles can be assumed to be
arbitrarily small provided $t_0$ and $c$ are chosen large enough.
\end{remark}
\begin{proof}[Proof of Lemma \ref{lem:ubsmall}]
By Lemma \ref{lem:ZA} the claim holds trivially for those unbounded profiles $V_i$ where
$\lambda_n^i\to 0$ as $n\to\infty$ and $\frac{|t_0-t_n^i|}{\lambda_n^i}\lesssim 1$ for all $n\in\mathbb{N}$.
Furthermore, if the sequences $(\lambda_n^i)$ and $(t_n^i)$ satisfy the hypothesis of
Corollary \ref{cor:leave}, we infer
\[
\|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}=\|V_i[t_0]\|_{\dot{H}^1\times L^2(\mathbb{R}^3\backslash
B_{t_0-c})}+o(1) \]
as $n\to\infty$ and the claim follows by the orthogonality of the profiles stated in
Lemma \ref{lem:BG} and Remark \ref{rem:locorth}.
It remains to study those profiles where $\frac{|t_0-t_n^i|-(t_0-c)}{\lambda_n^i}\lesssim 1$ and $\lambda_n^i\lesssim 1$.
Since the profiles in question are unbounded, we must have $\lambda_n^i\to 0$.
Consequently, $|t_0-t_n^i|$ is bounded and after selecting a subsequence, we may assume that $t_n^i\to t^i$.
Let $\varepsilon>0$ be given. Then we can find an $R>0$ such that
\[ \|V_i[0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}\leq\|V_i[0]\|_{\dot H^1\times L^2(B_R)}+\varepsilon, \]
i.e., the energy of $V_i[0]$ is essentially contained in $B_R$.
Taking into account the fact that $V$ is a free wave, we infer by the strong Huygens' principle that the
energy of $V$ at time $\frac{t_0-t_n^i}{\lambda_n^i}$ (for large enough $n$) is essentially contained in
\[ \Omega_n:=\left \{x\in\mathbb{R}^3: \tfrac{|t_0-t_n^i|}{\lambda_n^i}-R\leq |x|\leq \tfrac{|t_0-t_n^i|}{\lambda_n^i}+R\right \}, \]
cf.~Lemma \ref{lem:Huygens}.
More precisely, we have
\[ \left \|V_i\left [\tfrac{t_0-t_n^i}{\lambda_n^i}\right ]\right \|_{\dot H^1\times L^2(\mathbb{R}^3)}
\leq\left \|V_i\left [\tfrac{t_0-t_n^i}{\lambda_n^i}\right ]\right \|_{\dot H^1\times L^2(\Omega_n)}+\varepsilon \]
for all sufficiently large $n$.
By rescaling this is equivalent to
\[ \|V_i^{\lambda_n^i,t_n^i}[t_0]\|_{\dot H^1\times L^2(\mathbb{R}^3)}\leq
\|V_i^{\lambda_n^i,t_n^i}[t_0]\|_{\dot H^1\times L^2(\tilde \Omega_n)}+\varepsilon \]
where $\tilde \Omega_n=\{x\in\mathbb{R}^3: |t_0-t_n^i|-\lambda_n^i R\leq |x|\leq |t_0-t_n^i|+\lambda_n^i R\}$.
Thus, as $n\to\infty$, $V_i^{\lambda_n^i,t_n^i}[t_0]$ concentrates at $r^i=|t_0-t^i|$.
If $r^i<t_0-c$, we apply the argument from Lemma \ref{lem:ZA} to conclude that $V_i=0$ (the logic being
that $V_i^{\lambda_n^i,t_n^i}[t_0]$ cannot concentrate inside of $B_{t_0-c}$ because there, $u^{(T_n)}[t_0]$
equals $u_c[t_0]$).
If $r^i\geq t_0-c$, it follows from Lemma \ref{lem:extend} and the orthogonality of Lemma \ref{lem:BG}, see also
Remark \ref{rem:locorth}, that the energy of the profile $V_i$ is small.
In any case, we arrive at the desired conclusion.
\end{proof}
An immediate consequence of Lemma \ref{lem:ubsmall} is the fact that the bulk of the energy
of $u^{(T_n)}[t_0]$ is concentrated on the bounded profile $V_b$ (in particular, $V_b$ is nonzero).
\subsection{The nonlinear profile decomposition}
After extracting a subsequence we have $\lambda_n^b\to \lambda^b$, $t_n^b\to t^b$ as $n\to\infty$
where $\lambda^b$ and $t^b$ are some finite numbers.
Now we define new initial data by
\[ (f,g):=\lim_{n\to\infty}V_b^{\lambda_n^b,t_n^b}[t_0]=V_b^{\lambda^b,t^b}[t_0] \]
with convergence in $\dot{H}^1\times L^2(\mathbb{R}^3)$.
By the local existence theory \cite{Sog08} we obtain a time $t_*>t_0$ and an energy class solution $u_*$ satisfying
\begin{equation}
\label{eq:u*}
\left \{ \begin{array}{l}\Box u_*(t,x)+u_*(t,x)^5=0,\quad (t,x)\in (t_0,t_*)\times \mathbb{R}^3 \\
u_*[t_0]=(f,g) \end{array} \right .
\end{equation}
where we assume $t_*$ to be maximal.
To each profile $(V_i, (\lambda_n^i), (t_n^i))$
in the decomposition of Lemma \ref{lem:BG} there is associated a \emph{nonlinear profile}
$(U_i, (\lambda_n^i), (t_n^i))$ which is characterized by $\Box U_i+U_i^5=0$ and either
\[
\|V_i[t]-U_i[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\to 0 \mbox{ as }t\to \pm \infty \]
if $\frac{t_0-t_n^i}{\lambda_n^i}\to \pm \infty$ in the limit $n\to\infty$ or
\[ U_i[t_0]=V_i[t_0] \]
in the case $\frac{|t_0-t_n^i|}{\lambda_n^i}\lesssim 1$ for all $n\in \mathbb{N}$.
The existence of the nonlinear profiles is a consequence of the small data
scattering theory
\cite{Pec84}, cf.~also \cite{KM08}.
Moreover, since the energy of all unbounded profiles $V_i$ is small, it follows that the $U_i$
exist globally (and scatter) provided $i\not= b$ and by a continuity argument as in the proof of Lemma \ref{lem:extend}
we may assume $\|U_i[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}$ to be small for all $t\in\mathbb{R}$.
In the case of $U_b$ we have at least existence for small times.
By the symmetries of the equation we see that, for all $n\in\mathbb{N}$,
$U_b^{\lambda_n^b,t_n^b}$ is a solution
with data $U_b^{\lambda_n^b,t_n^b}[t_0]=V_b^{\lambda_n^b,t_n^b}[t_0]$ and thus, by the local
well-posedness we infer
\begin{equation}
\label{eq:Ubu*}
\|U_b^{\lambda_n^b,t_n^b}[t]-u_*[t]\|_{\dot{H}^1\times L^2(\mathbb{R}^3)}\to 0\quad (n\to\infty)
\end{equation}
for any $t \in [t_0,t_*)$.
Now we want to compare the solution $u_*$ with $u^{(T_n)}$.
The following lemma yields a representation of the \emph{nonlinear} evolution of the decomposition
Eq.~\eqref{eq:BG}.
\begin{lemma}
\label{lem:BGnl}
Let $t_1 \in [t_0,t_*)$.
Then there exists an $n_0\in \mathbb{N}$ such that, for all $n\geq n_0$,
the nonlinear profiles $U_i^{\lambda_n^i, t_n^i}$
associated to the decomposition in Lemma \ref{lem:BG} exist on $[t_0,t_1]\times \mathbb{R}^3$ and
\[ u^{(T_n)}(t,x)=\sum_{i=1}^A U_i^{\lambda_n^i,t_n^i}(t,x)+W^{nA}(t,x)+R^{nA}(t,x) \]
for all $A\in \mathbb{N}$ and $t\in [t_0,t_1]$ where $W^{nA}$ is the free wave from Lemma \ref{lem:BG}.
Furthermore, the error $R^{nA}$ satisfies
\[ \lim_{A\to\infty}\limsup_{n\to\infty}\|R^{nA}[\cdot]\|_{L^\infty([t_0,t_1]) \dot{H}^1\times L^2(\mathbb{R}^3)}=0. \]
\end{lemma}
\begin{proof}
This is (the second part of) the main theorem in \cite{BG99}.
Note that in \cite{BG99} the result is actually proved for the defocusing critical wave equation.
However, once the existence of the nonlinear profiles $U_i$ is established, one checks that the argument
in Section IV of
\cite{BG99} is in fact insensitive to the sign of the nonlinearity.
As already mentioned, the $U_i$ for $i\not=b$ exist globally and scatter since the energy of the corresponding
$V_i$ is small (in fact, arbitrarily small if we choose $t_0$ and $c$ large enough).
In the case of $U_b$
it follows from Eq.~\eqref{eq:Ubu*} that we may assume the existence
of $U_b^{\lambda_n^b,t_n^b}$ on $[t_0,t_1]\times \mathbb{R}^3$ provided $n$ is sufficiently large.
\end{proof}
The final step in the construction consists of showing that $u_*=u_c$ on $(K_{t_0,c}^{t_*})^\circ$.
In particular, we thereby obtain that $u_*$ can be extended beyond time $t_*$ and thus extends
globally.
\begin{lemma}
The solution $u_*$ extends to all of $[t_0,\infty)\times \mathbb{R}^3$ and satisfies
\[ u_*=u_c \mbox{ on }K_{t_0,c}^\infty. \]
\end{lemma}
\begin{proof}
Let $t \in [t_0,t_*)$ and choose $n$ so large that Lemma \ref{lem:BGnl} applies.
For $N\in\mathbb{N}$ denote by $P_{<N}$ the Littlewood-Paley projector to
frequencies $\{\xi \in \mathbb{R}^3: |\xi|\leq N\}$.
Given $\varepsilon>0$ we choose $N$ so large that
\[ \|u_*(t,\cdot)-P_{<N}u_*(t,\cdot)\|_{L^6(B_{t-c})}<\varepsilon. \]
Consider the decomposition in Lemma \ref{lem:BGnl}.
For an unbounded profile $(U_i,(\lambda_n^i),(t_n^i))$ with $\lambda_n^i \to 0$ as $n\to\infty$ we
clearly have $\|P_{<N}U_i^{\lambda_n^i,t_n^i}(t,\cdot)\|_{L^6(B_{t-c})} \to 0$ as $n \to \infty$.
Furthermore, for a profile $(U_i, (\lambda_n^i),(t_n^i))$ with $\lambda_n^i \to \infty$ we obtain
$\|U_i^{\lambda_n^i,t_n^i}(t,\cdot)\|_{L^6(B_{t-c})} \to 0$ as $n\to\infty$.
Finally, if $\lambda_n^i \simeq 1$ and $|t_n^i| \to \infty$ we infer
$\|U_i^{\lambda_n^i,t_n^i}(t,\cdot)\|_{L^6(B_{t-c})} \to 0$ as $n\to\infty$
by the small data scattering theory and Huygens' principle (recall that the $\dot{H}^1\times L^2(\mathbb{R}^3)$
norm of all unbounded profiles is small).
By choosing $A$ sufficiently large we can also achieve
\[ \|W^{nA}(t,\cdot)\|_{L^6(\mathbb{R}^3)}+\|R^{nA}(t,\cdot)\|_{L^6(\mathbb{R}^3)}<\varepsilon \]
by Lemma \ref{lem:BGnl}.
These estimates and the decomposition from Lemma \ref{lem:BGnl} imply
\[ \|u_*(t,\cdot)-u^{(T_n)}(t,\cdot)\|_{L^6(B_{t-c})}\lesssim \varepsilon \]
provided $n$ is chosen large enough.
Since $\varepsilon>0$ was arbitrary and $u^{(T_n)}=u_c$ on $K_{t_0,c}^t$ by Lemma \ref{lem:extend},
we obtain $u_*=u_c$ on $K_{t_0,c}^t$.
Furthermore, the solution can be continued since outside of $K_{t_0,c}^t$ the kinetic energy
stays small for all times by a continuity argument as in the proof of Lemma \ref{lem:extend} and
inside of $K_{t_0,c}^t$ the solution $u_*$ equals $u_c$ which exists on $K_{t_0,c}^\infty$.
\end{proof}
Since 1934, the U.S. Securities and Exchange Commission
(SEC) has mandated that
public companies
disclose information in the form of public filings
to ensure that adequate information is available to
investors. One such filing is the 10-K,
the company's annual report. It
contains
financial statements and
information about business strategy,
risk factors and legal issues. For this reason, 10-Ks are
an important source of information in
the field of finance and accounting.
A common method
employed by
finance and accounting
researchers
is
to evaluate the ``tone'' of a text based
on the Harvard
Psychosociological Dictionary, specifically, on the
Harvard-IV-4 TagNeg (H4N) word
list.\footnote{\url{http://www.wjh.harvard.edu/~inquirer}}
However, as its name suggests, this dictionary is from a
domain that is different from finance, so many words (e.g.,
``liability'', ``tax'') that are labeled as negative in H4N
are in fact not negative in finance.
In a pioneering study,
\citet{donald} manually
reclassified the words in H4N for the financial domain.
They applied the resulting
dictionaries\footnote{\url{https://sraf.nd.edu/textual-analysis/resources}} to 10-Ks and
predicted financial variables such as
excess return and
volatility.
We will refer to the sentiment dictionaries created
by \citet{donald} as L\&M.
In this work, we
also create sentiment dictionaries for the finance domain,
but we adapt them from the domain-general H4N dictionary
\emph{automatically}.
We first learn word embeddings from a corpus of 10-Ks and then
reclassify the words -- using SVMs trained on H4N labels -- as negative vs.\ non-negative.
We refer to the resulting domain-adapted dictionary as
H4N$\dnrm{RE}$.
In our experiments, we demonstrate that the
automatically adapted financial
sentiment dictionary H4N$\dnrm{RE}$ performs better at
predicting
excess return and volatility than dictionaries of
\citet{donald} and \citet{theil}.
We make the following contributions.
\textbf{(i)} We demonstrate that
automatic domain adaptation
performs better at
predicting financial outcomes
than previous work based on manual domain adaptation.
\textbf{(ii)} We perform an analysis of the
differences between the classifications of
L\&M and those of
our sentiment dictionary H4N$\dnrm{RE}$ that sheds light on the
superior performance of H4N$\dnrm{RE}$.
For example, H4N$\dnrm{RE}$ is much smaller than L\&M, consisting
mostly of frequent words, suggesting H4N$\dnrm{RE}$ is more robust and
less prone to overfitting.
\textbf{(iii)}
In a further detailed analysis, we
investigate words
classified by L\&M as
\textit{negative}, \textit{litigious} and
\textit{uncertain} that our embedding classifier classifies otherwise; and
common (i.e., non-negative) words from H4N that
L\&M did not include in the categories
\textit{negative}, \textit{litigious} and
\textit{uncertain}, but that our embedding classifier
classifies as belonging to these classes. Our analysis
suggests that manual adaptation of dictionaries is
error-prone if annotators are not given access to
corpus contexts.
Our paper primarily addresses a finance application. In
empirical finance, a correct sentiment classification
decision is not sufficient -- the decision must also be
\emph{interpretable} and \emph{statistically sound}. That is why we use
ordinary least squares (OLS) -- an
established method in empirical finance -- and sentiment dictionaries. Models based
on sentiment dictionaries are transparent and interpretable:
by looking at the dictionary words occurring in a document
we can trace the classification decision back to the original
data and, e.g., understand the cause of a classification
error. OLS is a well-understood statistical method that
allows the analysis of significance, effect size and
dependence between predictor variables, inter alia.
While we focus on finance here, three important lessons of
our work also
apply to many other domains. (1) An increasing number of
applications require interpretable
analysis; e.g., the European Union
mandates that systems
used for sensitive
applications provide explanations of decisions.
Decisions based on a solid statistical foundation are more
likely to be trusted than those by black boxes.
(2)
Many NLP applications are domain-specific and require
domain-specific
resources including lexicons. Should such lexicons be built
manually from scratch or adapted from generic lexicons? We
provide evidence that automatic adaptation works better.
(3) Words often have specific meanings in a domain and this
increases the risk that a word is misjudged
if only the generic meaning is present to the
annotator. This seems to be the primary reason for the
problems of manual lexicons in our experiments. Thus, if
manual lexicon creation is the only option, then it is
important to present words in context, not in isolation, so
that the domain-specific sense can be recognized.
\section{Related Work}
In \textbf{empirical finance}, researchers have exploited
various text resources, e.g., news \citep{Kazemian16},
microblogs \citep{semeval17}, twitter \citep{zamani2017using} and
company disclosures \citep{Nopp15, Kogan09}. Deep learning has been used for
learning document representations \citep{Ding15, Akhtar17}.
However, the methodology of empirical finance requires
interpretable results. Thus, a common approach is
to define
features for statistical models like Ordinary Least Squares
\citep{Lee_onthe, Rekabsaz17}. Frequently, lexicons like
H4N TagNeg\footnote{\url{http://www.wjh.harvard.edu/~inquirer/}}
\citep{tetlock} are used. This word list includes a total of 85,221 words,
4188 of which are labeled negative. The remaining words are
labeled ``common'', i.e., non-negative.
\citet{donald} argue that many
words from H4N have a specialized meaning when appearing in an
annual report. For instance, domain-general negative words such as ``tax'', ``cost'',
``liability'' and ``depreciation''
-- which predominate
in 10-Ks -- do not typically have negative sentiment in 10-Ks. So
\citet{donald} constructed subjective financial dictionaries manually, by examining all
words that appear in at least 5\% of 10-Ks
and classifying them based on their
assessment of most likely usage. More recently, other finance-specific lexicons
were created \cite{Wang13}. Building on L\&M,
\citet{tsai} and \citet{theil}
show that the L\&M dictionaries can be further improved by
adding most similar neighbors to words
manually labeled by L\&M.
\textbf{Seed-based methods}
generalize a set of seeds based on corpus (e.g.,
distributional) evidence.
Models use syntactic patterns \citep{Hatzivassiloglou,
Widdows}, cooccurrence \citep{Turney, Igo}
or label propagation on lexical graphs derived from
cooccurrence \cite{Velikovich10,Huang}.
\textbf{Supervised methods}
start with a larger training set, not just a few seeds
\citep{Mohammad}.
Distributed word representations
\citep{Tang14,Amir,Vo,Rothe} are beneficial in this approach. For instance, \citet{Tang14} incorporate in word embeddings a
document-level sentiment signal.
\citet{Wang17} also integrate
document and word levels.
\citet{Hamilton16} learn domain-specific word
embeddings and derive word lists specific to domains,
including the finance domain.
\textbf{Dictionary-based approaches} \citep{Takamura,
BaccianellaES10, Vicente14} use hand-curated lexical
resources -- often WordNet \citep{fellbaum98wordnet} -- for
constructing lexicons. \citet{Hamilton16} argue that
dictionary-based approaches generate better results due to
the quality of hand-curated resources. We compare two ways
of using a hand-curated resource in this work -- a
general-domain resource that is automatically adapted to the
specific domain vs.\ a resource that is manually
created for the specific domain -- and show that automatic
domain adaptation performs better.
Apart from domain adaptation work on dictionaries, many
other approaches to \textbf{generic domain adaptation} have
been proposed. Most of this work adopts the classical domain
adaptation scenario: there is a large labeled training set
available in the source domain and an amount of
labeled target data that is insufficient
for training a high-performing model on its own
\citep{blitzer2006domain,chelba2006adaptation,daume2009frustratingly,pan2010cross,glorot2011domain,chen2012marginalized}. More
recently, the idea of domain-adversarial training was
introduced for the same scenario \citep{ganin2016domain}.
In contrast to this work, we do not transfer any
parameters or model structures from source to
target. Instead, we use labels from the source domain and
train new models from scratch based on these labels: first
embedding vectors, then a classifier that is trained on
source domain labels and finally a regression model that is
trained on the classification decisions of the classifier.
This approach is feasible in our problem setting because the
divergence between source and target sentiment labels is
relatively minor, so that training target embeddings with
source labels gives good results.
The motivation for this different setup is that
our work primarily addresses a finance application where
explainability is of high importance.
For this reason, we use a model based on sentiment
dictionaries that allows us to provide explanations of the model's
decisions and predictions.
\section{Methodology}
\seclabel{compfin}
\subsection{Empirical finance methodology}
In this paper, we adopt Ordinary Least Squares (OLS),
a common research method in empirical finance: a
dependent variable of interest (e.g., excess return,
volatility) is predicted based on a linear combination of a
set of explanatory variables.
The main focus of this paper
is to investigate text-based explanatory variables: we would
like to know to what extent a text variable such as
occurrence of negative words in a 10-K can predict a
financial variable like volatility.
Identifying the economic drivers of such a financial outcome
is
of central interest in the field of finance.
Some of these determinants may be correlated with sentiment.
To understand the role of sentiment in explaining financial
variables
we therefore need to isolate the \emph{complementary
information} of our text variables.
This is achieved by including in our
regressions -- as control variables -- a standard set of
financial explanatory variables such as firm size and
book-to-market ratio. These control variables are added as additional explanatory variables in the regression specification besides the textual sentiment variables.
This experimental setup allows us to assess the added benefit of text-based
variables in a realistic empirical finance scenario.
The approach is
motivated by previous studies in the finance literature (e.g.,
\citet{donald}), which show that
characteristics
of financial firms can
explain variation in excess returns and volatility. By including these
control variables in the regression we are able to determine whether
sentiment factors have incremental explanatory power beyond the
already established financial factors. Since the inclusion of these
control variables is not primarily driven by the assumption that firms
with different characteristics use different language, our approach
differs from other NLP studies, such as \citet{hovy2015demographic}, who accounts for non-textual characteristics by training group-specific embeddings.
Each text variable we use is based on a dictionary. Its
value for a 10-K is the proportion of tokens in the 10-K that
are members of the dictionary. For example, if the 10-K is
5000 tokens long and 50 of those tokens are contained in the
L\&M uncertainty dictionary, then the value of the L\&M uncertainty
text variable for this 10-K is 0.01.
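For illustration, the following minimal Python sketch computes this value (the function
and variable names are ours and not tied to any particular library):
\begin{verbatim}
# Value of a dictionary-based text variable for one 10-K:
# the proportion of its tokens that are dictionary members.
def text_variable(tokens, dictionary):
    hits = sum(1 for t in tokens if t in dictionary)
    return hits / len(tokens)

# e.g., 50 uncertainty tokens in a 5000-token 10-K:
text_variable(["risk"] * 50 + ["filler"] * 4950, {"risk"})  # -> 0.01
\end{verbatim}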
In the type of analysis of stock market data we conduct, there are two general forms of
dependence in the residuals of a regression, which arise from the panel structure of our data set where a single firm is repeatedly observed over time and
multiple firms are observed at the same point in time.
\emph{Firm effect:}
Time-series
dependence assumes that the residuals of \emph{a given firm} are
correlated \emph{across years}.
\emph{Time effect:}
Cross-sectional dependence assumes
that the residuals of \emph{a given year} are correlated \emph{across
different firms}.
These properties violate the i.i.d.\ assumption of
residuals in standard OLS.
We therefore model data
with both firm and time effects and run
a \emph{two-way robust cluster regression}, i.e.,
an OLS
regression with standard errors that are clustered on two
dimensions \cite{gelbach2009robust}, the dimensions of firm
and time.\footnote{\citet{donald} use
the method of
\citet{fama1973risk} instead.
This method assumes that the yearly
estimates of the coefficient are independent of each
other. However, this is not true when there is a firm effect.}
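As a sketch of how such standard errors can be computed, the following Python fragment
implements the two-way clustered covariance of \citet{gelbach2009robust} (covariance for
firm clusters plus covariance for time clusters minus covariance for their intersection)
on top of the one-way clustering facilities of \texttt{statsmodels}. The helper and
variable names are ours, and packaged implementations may apply slightly different
finite-sample corrections.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def two_way_cluster_ols(y, X, firm_ids, year_ids):
    """OLS with two-way (firm and time) cluster-robust standard
    errors; firm_ids and year_ids are integer cluster codes."""
    X = sm.add_constant(X)
    def clustered_cov(groups):
        fit = sm.OLS(y, X).fit(cov_type="cluster",
                               cov_kwds={"groups": groups})
        return fit.cov_params()
    # firm-year intersection clusters
    pairs = np.stack([firm_ids, year_ids], axis=1)
    inter = np.unique(pairs, axis=0, return_inverse=True)[1].ravel()
    V = (clustered_cov(firm_ids) + clustered_cov(year_ids)
         - clustered_cov(inter))
    beta = sm.OLS(y, X).fit().params
    return beta, np.sqrt(np.diag(V))
\end{verbatim}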
We apply this regression-based methodology to test the explanatory power of financial dictionaries with regard to two dependent variables:
excess return and volatility.
This approach allows us to compare the explanatory power of different sentiment dictionaries and in the process test the hypothesis that negative sentiment is associated with subsequently lower stock returns and higher volatility.
We now introduce the regression specifications for these tests.
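Schematically (in our notation), each specification has the form
\[ y_{i,t} \;=\; \alpha \;+\; \beta\,\mathit{Text}_{i,t} \;+\; \gamma^{\top}\mathit{Controls}_{i,t} \;+\; \varepsilon_{i,t}, \]
where $y_{i,t}$ is the excess return or volatility of firm $i$ in year $t$,
$\mathit{Text}_{i,t}$ is one (or several) of the dictionary-based text variables and
$\mathit{Controls}_{i,t}$ collects the financial control variables listed below.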
\subsubsection{Excess return}
\seclabel{excess-sec}
The dependent variable
excess return is defined as the firm's
buy-and-hold stock return minus the
value-weighted buy-and-hold market index return during the
4-day event window
starting on the \mbox{10-K} filing date,
computed from prices by
the Center for Research in
Security Prices (CRSP)\footnote{\url{http://www.crsp.com}}
(both expressed as a percentage).
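Written out (in our notation), with daily firm returns $r_t$ and value-weighted market
index returns $r^m_t$ over trading days $[0\;3]$ relative to the filing date,
\[ \mathit{ExcessRet} \;=\; \Bigl(\,\prod_{t=0}^{3}(1+r_t)\;-\;\prod_{t=0}^{3}(1+r^m_t)\Bigr)\times 100. \]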
In addition to the independent text variables (see
\secref{exp} for details), we include the following
financial control variables.
(i) Firm size: the log of the
book value of total assets.
(ii) Alpha of a Fama-French regression
\citep{FAMA1933}
calculated
from days \mbox{[-252 -6]};\footnote{[-252 -6] is the notation for
the 252 days prior to the filing date
with the last 5 days prior to the filing date excluded.}
this represents the ``abnormal'' return of the asset, i.e.,
the part of the return not due to common risk factors like
market and firm size.
(iii)
Book-to-market ratio: the log of
the book value of equity divided by the market value of
equity. (iv) Share turnover: the volume of shares
traded in days [-252 -6] divided by shares outstanding on
the filing date. (v)
Earnings surprise,
computed by IBES from Thomson Reuters;\footnote {\url{http://www.thomsonreuters.com}}
this variable captures
whether the reported financial performance was better or
worse than expected by
financial analysts.\footnote{Our setup
largely mirrors, but
is not identical to, the one used by \citet{donald}
because not all data they used are publicly
available and because we use a larger time window
(1994-2013) compared to theirs (1994-2008).}
\subsubsection{Volatility}
The dependent variable
volatility is defined as
the post-filing root-mean-square error (RMSE) of a Fama-French regression calculated
from days [6 252].
The RMSE captures the idiosyncratic component of the total volatility of the firm, since it
picks up the stock price variation that cannot be explained by fluctuations of the common risk factors of the Fama-French model.
The RMSE is therefore a measure of the financial uncertainty of the firm.
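For concreteness, the underlying daily time-series regression is the standard Fama-French
three-factor model,
\[ r_t - r^f_t \;=\; \alpha \;+\; \beta\,\mathit{MKT}_t \;+\; s\,\mathit{SMB}_t \;+\; h\,\mathit{HML}_t \;+\; \epsilon_t, \]
where $r^f_t$ is the risk-free rate and $\mathit{MKT}$, $\mathit{SMB}$ and $\mathit{HML}$
are the market, size and value factors; alpha is the estimated intercept and the RMSE is
the root-mean-square of the residuals $\epsilon_t$ over the respective estimation window.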
In addition to the independent text variables (see \secref{exp} for details), we include the following
financial control variables.
(i)
Pre-filing RMSE and (ii) pre-filing alpha of a Fama-French regression
calculated
from days [-252 -6];
these characterize the financial uncertainty and abnormal return
of the firm in the past
(see \secref{excess-sec} for alpha; the RMSE is defined in
the first sentence of this section).
(iii) Filing abnormal return; the value of the
buy-and-hold return in trading days [0 3] minus the
buy-and-hold return of the market index.
(iv) Firm size and (v) book-to-market ratio (the same
as in \secref{excess-sec}).
(vi) Calendar year dummies
and Fama-French 48-industry dummies to allow for time and
industry fixed effects.\footnote{We do not include in the
regression a Nasdaq dummy variable indicating whether the firm
is traded on Nasdaq. Since Nasdaq
mainly lists tech companies, the Nasdaq effect is
already captured by industry dummies.}
\begin{table}
\begin{center}
\small
\begin{tabular}{l|r}
dictionary & size\\\hline\hline
neg$\dnrm{lm}$ & 2355\\
unc$\dnrm{lm}$ & 297\\
lit$\dnrm{lm}$ & 903\\ \hline
neg$\dnrm{ADD}$ & 2340\\
unc$\dnrm{ADD}$ & 240 \\
lit$\dnrm{ADD}$ & 984 \\\hline
neg$\dnrm{RE}$ & 1205 \\
unc$\dnrm{RE}$ &96 \\
lit$\dnrm{RE}$ & 208 \\ \hline
H4N$\dnrm{ORG}$ &4188\\
H4N$\dnrm{RE}$ &338
\end{tabular}
\end{center}
\caption{\tablabel{overview-add}Number of words per dictionary}
\end{table}
\subsection{NLP methodology}
\seclabel{nlpmethod}
There are two
main questions we want to answer:
\mbox{\hspace{0.5cm}}
\textbf{Q1.} Is a manually domain-adapted or an automatically
domain-adapted
dictionary a more effective predictor of financial
outcomes?
\mbox{\hspace{0.5cm}} \textbf{Q2.} L\&M adapted H4N for the financial domain and
showed that this manually adapted dictionary is more
effective than H4N for prediction. Can we further
improve L\&M's manual adaptation by automatic domain
adaptation?
The general methodology we employ for domain adaptation is
based on word embeddings. We train CBOW word2vec
\citep{micolov-13} word embeddings on a corpus of 10-Ks for
all words of H4N that occur in the corpus -- see
\secref{exp} for details.
We consider two adaptations: ADD and RE. ADD is only used to
answer question Q2.
\textbf{ADD.} For adapting the L\&M dictionary, we train an
SVM on an L\&M dictionary in which words are labeled +1 if
they are marked for the category by L\&M and labeled -1
otherwise (where the category is negative, uncertain or
litigious). Each word is represented as its embedding. We
then run the SVM on all H4N words that are not contained in the
L\&M dictionary. We also ignore H4N words that we do not
have embeddings for because their frequency is below the
word2vec frequency threshold. Thus, we obtain an ADD dictionary that
is disjoint from the L\&M lexicon: it
contains only new words that are not part of the original
dictionary.
SVM scores
are converted into probabilities via logistic regression. We
define a confidence threshold $\theta$ -- we only want
to include words in the ADD dictionary that are
reliable indicators of the category of interest. A word
is added to the dictionary if its converted SVM score
is greater
than $\theta$.
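A minimal Python sketch of the ADD step (using scikit-learn; the choice of a linear SVM
and all variable names are our own illustrative assumptions) could look as follows:
\begin{verbatim}
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

def add_dictionary(E_lm, y_lm, E_new, words_new, theta=0.8):
    """E_lm: embeddings of L&M-labeled words, y_lm in {+1, -1};
    E_new, words_new: H4N words outside the L&M dictionary."""
    svm = LinearSVC().fit(E_lm, y_lm)
    # convert SVM scores into probabilities via logistic regression
    platt = LogisticRegression().fit(
        svm.decision_function(E_lm).reshape(-1, 1), y_lm)
    p = platt.predict_proba(
        svm.decision_function(E_new).reshape(-1, 1))[:, 1]
    return [w for w, p_w in zip(words_new, p) if p_w > theta]
\end{verbatim}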
\textbf{RE.} We train SVMs
as for ADD, but this time in a five-fold cross validation
setup. Again, SVM scores are converted into probabilities
via logistic regression.
A word $w$ becomes a member of the
adapted dictionary
if the converted score assigned by the SVM that was not
trained on the fold containing $w$ is greater than
$\theta$.
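Analogously, a sketch of the RE step (again with scikit-learn; for simplicity, the
logistic conversion is fit once on the pooled out-of-fold scores):
\begin{verbatim}
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

def re_dictionary(E, y, words, theta=0.8):
    """Out-of-fold SVM scores via 5-fold cross validation, then
    Platt scaling; keep w if its out-of-fold probability > theta."""
    scores = cross_val_predict(
        LinearSVC(), E, y, cv=5,
        method="decision_function").reshape(-1, 1)
    platt = LogisticRegression().fit(scores, y)
    p = platt.predict_proba(scores)[:, 1]
    return [w for w, p_w in zip(words, p) if p_w > theta]
\end{verbatim}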
To answer our first
question Q1: ``Is automatic or manual adaptation better?'',
we apply adaptation method RE to H4N
and compare the results to the L\&M dictionaries.
To answer our second
question Q2: ``Can manual adaptation be further improved by
automatic adaptation?'',
we apply adaptation methods RE and ADD to
the three dictionaries compiled by L\&M and compare results
for original and adapted L\&M dictionaries: (i) negative (abbreviated as ``neg''), (ii) uncertain (abbreviated as ``unc''), (iii) litigious (abbreviated as ``lit'').
Our goals here are to
improve
the
in-domain
L\&M dictionaries by relabeling them using adaptation method RE
and to find new additional words using adaptation method ADD.
\tabref{overview-add} gives dictionary sizes.
\section{Experiments and results}
\seclabel{exp}
We downloaded 206,790 10-Ks for years 1994 to 2013 from the
SEC's database
EDGAR.\footnote{\url{https://www.sec.gov/edgar.shtml}}
Table of contents, page numbers,
links and numeric tables are removed
in preprocessing
and
only the main body of the text is retained.
Documents are split into sections. Sections that are not
useful for textual analysis (e.g., boilerplate) are deleted.
To construct the final sample, we apply the
filters defined by L\&M \citep{donald}: we require a match with CRSP's permanent
identifier PERMNO, the stock to be common equity, a stock
pre-filing price of greater than \$3, a positive
book-to-market, as well as CRSP's market capitalization and
stock return data available at least 60 trading days before
and after the filing date. We only keep firms traded on
Nasdaq, NYSE or AMEX and whose filings contain at least 2000
words.
This procedure results in a corpus of
60,432 10-Ks.
We tokenize (using NLTK)
and lowercase this corpus and remove punctuation.
We use word2vec CBOW with hierarchical softmax to
learn word embeddings from the corpus.
We set the size of word vectors to
400 and run one training iteration; otherwise we use
word2vec's default
hyperparameters. SVMs are trained on word embeddings as
described in \secref{nlpmethod}.
We set the threshold $\theta$
to 0.8, so only words with converted SVM scores greater than
0.8 will be added to dictionaries.\footnote{We choose this
threshold because the proportion of negative, litigious
and uncertain words in 10-Ks for 0.8 is roughly the same as when using
L\&M dictionaries.}
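In code, the embedding step amounts to the following sketch (assuming gensim 4.x;
\texttt{load\_tokenized\_10ks} is a hypothetical placeholder for reading the
preprocessed corpus):
\begin{verbatim}
from gensim.models import Word2Vec

sentences = load_tokenized_10ks()  # hypothetical corpus loader
model = Word2Vec(sentences,
                 sg=0,              # CBOW
                 hs=1, negative=0,  # hierarchical softmax
                 vector_size=400,
                 epochs=1)          # one training iteration
# all remaining hyperparameters keep word2vec's defaults,
# including the min_count frequency threshold mentioned above
vec = model.wv["liability"]  # 400-dim input vector for the SVMs
\end{verbatim}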
As described in \secref{compfin}, we compare manually
adapted and automatically adapted dictionaries (Q1) and
investigate whether automatic adaptation of manually adapted
dictionaries further improves performance (Q2).
Our experimental setup is Ordinary Least Squares (OLS), more specifically,
a two-way robust cluster regression for the time and
firm effects.
The dependent financial variable is excess return or
volatility. We include several independent financial
variables in the regression as well as one or more text variables.
The value of the text variable for a category is
the proportion of tokens from the category that occur in a 10-K.
To assess the utility of a text variable for predicting a
financial outcome, we look at
significance and
the standardized regression coefficient (the product of
regression coefficient and standard deviation).
If a result is significant, then it is unlikely
that the result is
due to chance. The standardized coefficient measures the
effect size, normalized for different value ranges of
variables.
It can be interpreted as the expected change in the dependent variable
if the independent variable increases by one standard deviation.
The standardized coefficient allows
a fair comparison between a text variable that, on average,
has high values (many tokens per document) with one that, on
average, has low values (few tokens per document).
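In formulas (our notation): $\mathrm{std\;coeff}=\hat\beta\cdot\hat\sigma_x$, where
$\hat\beta$ is the estimated regression coefficient and $\hat\sigma_x$ the sample
standard deviation of the corresponding explanatory variable.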
\begin{table}
\begin{center}
\small
\begin{tabular}{l|llll}
var & coeff & std coeff & t & $R^2$\\ \hline
neg$\dnrm{lm}$ & -0.202**&-0.080 & \bf-2.56 & 1.02\\
lit$\dnrm{lm}$ &-0.0291&-0.026& -0.83 &1.00\\
unc$\dnrm{lm}$ &-0.215*&-0.064 &\bf -1.91 &1.01 \\
H4N\dnrm{RE} &-0.764***&\textit{-0.229}& \textbf{-3.04} &1.05\\ \hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\tablabel{harvard-excess}
Excess
return regression results for L\&M dictionaries and reclassified H4N
dictionary.
\textbf{For all tables in this paper,
significant $t$
values are bolded and best standard coefficients per category
are in italics.}}
\end{table}
\begin{table}
\begin{center}
\small
\begin{tabular}{l|llll}
var & coeff & std coeff & t & $R^2$\\ \hline\hline
H4N\dnrm{RE} & -0.88** &\textit{-0.264} & \textbf{-2.19}& 1.05\\
neg$\dnrm{lm}$ & \phantom{-}0.062& \phantom{-}0.024& \phantom{-}0.48&\\ \hline
H4N\dnrm{RE}&-0.757*** &-0.227& \textbf{-2.90}&1.05\\
lit$\dnrm{lm}$ &-0.351&\textit{-0.315}&-0.013 &\\ \hline
H4N\dnrm{RE} &-0.746*** &\textit{-0.223} &\textbf{-2.89} &1.05 \\
unc$\dnrm{lm}$ &-0.45&-0.135 &-0.45 &\\ \hline\hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\tablabel{multi-excess}Excess return regression results
for multiple text variables. This table shows
results for three regressions that combine H4N$\dnrm{RE}$
with each of the three L\&M dictionaries.}
\end{table}
\subsection{Excess Return}
\tabref{harvard-excess}
gives regression results
for excess return, comparing
H4N$\dnrm{RE}$ (our automatic adaptation of the general
Harvard dictionary) with the three manually adapted L\&M dictionaries.
As expected, the
coefficients are negatively signed: 10-Ks containing a high
percentage of pessimistic words are associated with negative excess returns.
L\&M designed the dictionary
neg$\dnrm{lm}$ specifically for measuring negative
information in a 10-K that may have a negative effect on
outcomes like excess return. So it is not surprising that
neg$\dnrm{lm}$ is the best performing dictionary of the
three L\&M dictionaries: it has the highest standard
coefficient (\mbox{-0.080}) and the highest significance (-2.56).
unc$\dnrm{lm}$ performs slightly worse, but is also
significant.
However, when comparing the three L\&M dictionaries with
H4N$\dnrm{RE}$, the automatically adapted Harvard
dictionary, we see that H4N$\dnrm{RE}$ performs clearly
better: it is highly significant and its standard
coefficient is larger by a factor of more than 2 compared to
neg$\dnrm{lm}$. This evidence suggests that the
automatically created H4N$\dnrm{RE}$ dictionary has a higher
explanatory power for excess returns than the manually
created L\&M dictionaries. This provides an initial answer
to question Q1: in this case, automatic adaptation beats
manual adaptation.
\def0.125cm{0.125cm}
\begin{table}
\begin{center}
\small
\begin{tabular}{l|l@{\hspace{0.125cm}}l@{\hspace{0.125cm}}r@{\hspace{0.125cm}}l}
var & coeff & std coeff & \multicolumn{1}{c}{$t$} & $R^2$\\ \hline\hline
neg$\dnrm{lm}$ & -0.202** &-0.080 & \bf-2.56 & 1.02\\
neg$\dnrm{spec}$ &\phantom{-}0.0102 &\phantom{-}0.0132&0.27 &1.00\\
neg$\dnrm{RE}$ & -0.37*** &\emph{-0.111}& \textbf{-2.96} & 1.03\\
neg$\dnrm{ADD}$ & -0.033 &-0.0231 & -1.03 & 1.00 \\
neg$\dnrm{RE+ADD}$ & -0.08** &-0.072 & \bf -2.19 & 1.03\\\hline
lit$\dnrm{lm}$ &-0.0291 &-0.026& -0.83 &1.00\\
lit$\dnrm{RE}$ &-0.056 &\emph{-0.028}& -0.55 &1.00\\
lit$\dnrm{ADD}$ &-0.0195 &-0.0156&-0.70 &1.00\\
lit$\dnrm{RE+ADD}$ &-0.0163&-0.0211& -0.69 &1.00 \\\hline
unc$\dnrm{lm}$ &-0.215* &-0.064& \bf -1.91 &1.01 \\
unc$\dnrm{RE}$ &-0.377*** &\emph{-0.075}& \bf -2.77 &1.02\\
unc$\dnrm{ADD}$ &\phantom{-}0.0217 &\phantom{-}0.0065& 0.21 &1.00\\
unc$\dnrm{RE+ADD}$ &-0.0315 &-0.0157 &-0.45 &1.00\\\hline\hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\label{add-ro} Excess return regression results for
L\&M, \textbf{RE} and
\textbf{ADD} dictionaries}
\end{table}
\tabref{multi-excess} shows \textit{manual plus automatic}
experiments with \emph{multiple} text variables in one regression, in
particular, the combination of H4N\dnrm{RE} with each of the L\&M
dictionaries. We see that the explanatory power of
L\&M variables is lost after we additionally include
H4N\dnrm{RE} in a regression: none of the three L\&M
variables remains significant. In contrast, H4N\dnrm{RE}
continues to be significant in all experiments, with large
standard coefficients. More manual plus automatic
experiments can be found in
the appendix.
These experiments further confirm that automatic is better than manual adaptation.
Table \ref{add-ro} shows results for automatically adapting
the L\&M dictionaries.\footnote{Experiments with multiple text variables in one
regression (manual plus automatic experiments) are presented
in the appendix.}
The subscript ``RE+ADD'' refers to a dictionary
that merges RE and ADD; e.g.,
neg$\dnrm{RE+ADD}$ is the union of
neg$\dnrm{RE}$ and
neg$\dnrm{ADD}$.
We see that for each category (neg, lit and unc), the automatically
adapted dictionary performs better than the original
manually adapted dictionary; e.g., the standard coefficient
of
neg$\dnrm{RE}$ is -0.111, clearly better than that of
neg$\dnrm{lm}$ (-0.080). Results are significant for
neg$\dnrm{RE}$ (-2.96) and
unc$\dnrm{RE}$ (-2.77).
We also evaluate neg$\dnrm{spec}$, the negative word list of \citet{Hamilton16}. neg$\dnrm{spec}$ does not perform well: it is not significant.
These results provide a partial answer
to question Q2: for excess return, automatic adaptation
of L\&M's manually adapted dictionaries further improves
their performance.
\begin{table}
\begin{center}
\small
\begin{tabular}{l|llll}
var & coeff & std coeff & \multicolumn{1}{c}{$t$} & $R^2$\\ \hline\hline
neg$\dnrm{lm}$ &\phantom{-}0.118*** &\phantom{-}0.0472 &\phantom{-}\textbf{3.30} & 60.1\\
lit$\dnrm{lm}$ &-0.0081 &-0.0073&-0.62 & 60.0 \\
unc$\dnrm{lm}$ &\phantom{-}0.119* & \phantom{-}0.0356 &\phantom{-}\textbf{2.25} & 60.0 \\
H4N\dnrm{RE} &\phantom{-}0.577*** &\textit{\phantom{-}0.173}& \phantom{-}\textbf{4.40} &60.3\\\hline\hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\tablabel{harvard-vol} Volatility regression results for L\&M dictionaries and reclassified H4N dictionary}
\end{table}
\begin{table}
\begin{center}
\small
\begin{tabular}{l|llll}
var & coeff & std coeff & t & $R^2$\\ \hline \hline
H4N\dnrm{RE} & \phantom{-}0.748*** & \textit{\phantom{-}0.224}& \phantom{-}\textbf{4.44} & 1.11\\
neg$\dnrm{lm}$ & -0.096*& -0.038& -2.55& \\ \hline
H4N\dnrm{RE} &\phantom{-}0.642*** &\textit{\phantom{-}0.192}&\phantom{-}\textbf{4.28}&1.11\\
lit$\dnrm{lm}$ &-0.041*&-0.037& -2.54 &\\ \hline
H4N\dnrm{RE} &\phantom{-}0.695*** &\phantom{-}0.208 &\phantom{-}\textbf{4.54} &1.11\\
unc$\dnrm{lm}$ &-0.931**&\textit{-0.279 }& -2.73 &\\ \hline \hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\tablabel{multiple-vol} Volatility regression results for multiple text variables}
\end{table}
\subsection{Volatility}
\tabref{harvard-vol} compares H4N$\dnrm{RE}$ and L\&M regression results
for volatility. Except for litigious, the coefficients are
positive, so
the greater the number of pessimistic words, the greater the volatility.
Results for neg$\dnrm{lm}$, unc$\dnrm{lm}$
and H4N$\dnrm{RE}$
are statistically
significant. The best L\&M dictionary is again
neg$\dnrm{lm}$ with standard coefficient 0.0472 and
$t=3.30$. However,
H4N\dnrm{RE} has the highest explanatory value for volatility.
Its standard coefficient (0.173)
is more than three times as large as that of neg$\dnrm{lm}$.
The higher effect size
demonstrates that
H4N\dnrm{RE}
better explains
volatility than the L\&M dictionaries. Again, this indicates
-- answering question Q1 -- that
automatic outperforms manual adaptation.
\tabref{multiple-vol} confirms this. We see that for manual plus automatic experiments each
combination of H4N\dnrm{RE} with one of the L\&M
dictionaries provides significant results for H4N\dnrm{RE}.
In contrast, the L\&M dictionaries become negatively signed,
which would mean that more negative or uncertain words decrease
volatility, suggesting that they are not indicative of the true
relationship between volatility and negative tone in 10-Ks
in this regression setup.
automatic experiments support this observation as well. See
the appendix for an illustration.
\begin{table}
\begin{center}
\small
\begin{tabular}{l|l@{\hspace{0.125cm}}l@{\hspace{0.125cm}}r@{\hspace{0.125cm}}l}
var & coeff & std coeff & \multicolumn{1}{c}{$t$} & $R^2$\\ \hline\hline
neg$\dnrm{lm}$ &\phantom{-}0.118*** &\phantom{-}0.0472 &\textbf{3.30} & 60.1\\
neg$\dnrm{spec}$ &-0.038 &-0.0494&-2.73 &60.1\\
neg$\dnrm{RE}$ &\phantom{-}0.219*** &\phantom{-}\emph{0.0657} &\textbf{3.57} & 60.1\\
neg$\dnrm{ADD}$ &\phantom{-}0.032*** &\phantom{-}0.0224& \textbf{4.06} & 60.0\\
neg$\dnrm{RE+ADD}$ &\phantom{-}0.038*** &\phantom{-}0.0342&\textbf{4.32}& 60.1\\ \hline
lit$\dnrm{lm}$ &-0.0081 &-0.0073 &-0.62 & 60.0 \\
lit$\dnrm{RE}$ & \phantom{-}0.0080 &\phantom{-}0.0040 & 0.20 & 60.0 \\
lit$\dnrm{ADD}$ & \phantom{-}0.028&\phantom{-}\textit{0.0224} & 1.07 & 60.0 \\
lit$\dnrm{RE+ADD}$ & \phantom{-}0.015 &\phantom{-}0.0195&0.81 & 60.0 \\\hline
unc$\dnrm{lm}$ & \phantom{-}0.119* &\phantom{-}\emph{0.0356} &\textbf{2.25} & 60.0 \\
unc$\dnrm{spec}$ & -0.043 &-0.0344 & -1.56 & 60.0 \\
unc$\dnrm{RE}$ &\phantom{-}0.167* &\phantom{-}0.0334&\textbf{2.30} & 60.0\\
unc$\dnrm{ADD}$ &-0.013 &-0.0039& -0.17& 60.0\\
unc$\dnrm{RE+ADD}$ &\phantom{-}0.035&\phantom{-}0.0175 &0.68 & 60.0\\ \hline \hline
\multicolumn{5}{c}{*p $\le$ 0.05, **p $\le$ 0.01, ***p $\le$ 0.001}
\end{tabular}
\end{center}
\caption{\label{volat} Volatility regression results for L\&M, \textbf{RE} and
\textbf{ADD} dictionaries}
\end{table}
Table \ref{volat} gives results for automatically adapting
the L\&M dictionaries.\footnote{Experiments with multiple text variables in one
regression (manual plus automatic experiments) are presented
in the appendix.} For neg,
the standard coefficient of neg$\dnrm{RE}$ is 0.0657, better
by about 40\% than
neg$\dnrm{lm}$'s standard coefficient of 0.0472.
neg$\dnrm{spec}$ does
not provide significant results and has a negative sign, i.e., it
would imply that an increase in negative words decreases volatility.
The lit dictionaries are not significant (neither L\&M nor adapted dictionaries).
For unc,
unc$\dnrm{RE}$ performs worse than
unc$\dnrm{lm}$, but only slightly (standard coefficients of 0.0334
vs.\ 0.0356).
The overall best result is
neg$\dnrm{RE}$ (standard coefficient 0.0657).
Even though L\&M designed the unc$\dnrm{lm}$ dictionary
specifically for volatility, our results indicate that neg
dictionaries perform better than unc dictionaries, both for
L\&M dictionaries (neg$\dnrm{lm}$) and their automatic adaptations (e.g., neg$\dnrm{RE}$).
Table \ref{volat} also evaluates
unc$\dnrm{spec}$, the uncertainty dictionary of
\citet{theil}. unc$\dnrm{spec}$ does not perform well: it is not
significant and the coefficient has the ``wrong'' sign.\footnote{\citet{theil} define volatility over the time window
[6, 28] whereas our definition uses
[6, 252], following \citet{donald}. Larger time windows allow
more reliable estimates and account for the fact that
information disclosures can influence volatility for
long periods \citep{zhao}.}
The main finding supported by Table \ref{volat} is
that
the best automatic
adaptation of an L\&M dictionary gives rise to more explanatory power
than the best L\&M dictionary, i.e.,
neg$\dnrm{RE}$ performs better than neg$\dnrm{lm}$.
This again confirms our answer to Q2: we can further improve manual adaptation by automatic domain adaptation.
\begin{table}
\begin{center}
\small
\begin{tabular}{l|l}
ADD\dnrm{neg} & missing, diminishment, disabling, overuse\\
ADD\dnrm{unc} & reevaluate, swings, expectation, estimate\\
ADD\dnrm{lit} & lender, assignors, trustee, insurers \\ \hline
RE\dnrm{neg} & confusion, unlawful, convicted, breach\\
RE\dnrm{unc} & variability, fluctuation, variations, variation\\
RE\dnrm{lit} & courts, crossclaim, conciliation, abeyance\\ \hline
H4N$\dnrm{RE}$
& compromise, issues, problems, impair, hurt
\end{tabular}
\end{center}
\caption{\label{new-words}Word classification examples from automatically adapted dictionaries}
\end{table}
\section{Analysis and Discussion}
\subsection{Qualitative Analysis}
\label{qu:a}
\seclabel{qualana}
Our dictionaries outperform L\&M's.
In this section, we perform a qualitative analysis to
determine the reasons for this difference in performance.
Table \ref{new-words} shows words
from automatically adapted dictionaries.
Recall that the
\textbf{ADD}
method adds words that L\&M classified as nonrelevant for a category.
So words like ``missing'' (neg), ``reevaluate'' (unc) and
``assignors'' (lit) were classified as relevant by our
automatic method, and indeed they seem to connote
negativity, uncertainty and
litigiousness, respectively, in financial
contexts.
In L\&M's classification scheme, a word can
be part of several different categories.
For instance, L\&M label ``unlawful'', ``convicted'' and
``breach''
both as
litigious and as negative.
When applying our RE method, these words were only
classified as negative, not as litigious.
Similarly, L\&M label ``confusion'' as negative and
uncertain, but automatic RE adaptation labels it only negative.
This indicates that there is strong distributional evidence in the
corpus for the category negativity, but weaker
distributional evidence for litigious and uncertain. For our
application, only ``negative'' litigious/uncertain words are
of interest -- ``acquittal'' (positive litigious) and ``suspense''
(positive uncertain) are examples of positive words that may
not help in predicting financial variables. This could
explain why the negative category fares better in our
adaptation than the other two.
An interesting case study for RE is ``abeyance''.
L\&M classify it as uncertain, automatic adaptation as litigious.
Even though ``abeyance'' has a domain-general uncertain sense
(``something that is waiting to be acted
upon''), it is mostly used in legal contexts in 10-Ks:
``held in abeyance'', ``appeal in
abeyance''.
The nearest neighbors of
``abeyance'' in embedding space are also litigious words:
``stayed'', ``hearings'', ``mediation''.
H4N$\dnrm{RE}$ contains 74 words that are ``common'' in
H4N. Examples include
``compromise'',
``serious'' and ``god''.
The nearest neighbors of
``compromise'' in the 10-K embedding space
are the negative terms ``misappropriate'',
``breaches'', ``jeopardize''.
In a general-domain embedding
space,\footnote{\url{https://code.google.com/archive/p/word2vec/}} the
nearest neighbors of ``compromise'' include ``negotiated settlement'',
``accord'' and ``modus vivendi''.
This example suggests that ``compromise'' is used in 10-Ks in
negative contexts and in the general domain in positive contexts.
This also illustrates the importance of domain-specific word
embeddings that capture domain-specific information.
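Such neighbor lists are straightforward to inspect. A sketch using gensim
(the file name \texttt{10k-vectors.bin} is hypothetical; any word2vec-format
vectors work):
\begin{lstlisting}[language=Python]
# Sketch: querying nearest neighbors in an embedding space.
from gensim.models import KeyedVectors

vecs = KeyedVectors.load_word2vec_format("10k-vectors.bin", binary=True)
for word, score in vecs.most_similar("compromise", topn=3):
    print(word, round(score, 2))
\end{lstlisting}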
Another interesting example is the word ``god''; it is
frequently used in 10-Ks in the phrase ``act of
God''.
Its nearest
neighbors in the 10-K embedding space are ``terrorism'' and
``war''.
This example clearly demonstrates that annotators are likely
to make mistakes when they annotate words for sentiment
without seeing their contexts. Most annotators would
annotate ``god'' as positive, but when presented with the
typical context in 10-Ks (``act of God''), they would be
able to correctly classify it.
We conclude that manual annotation of words without context,
based only on an annotator's prior beliefs about word
meanings, is error-prone.
Our automatic adaptation is based on
the word's contexts in the target domain and is therefore not
susceptible to this type of error.
\begin{table}
\begin{center}
{\small
\begin{tabular}{l||r@{\hspace{0.075cm}}r@{\hspace{0.075cm}}r|r@{\hspace{0.075cm}}r@{\hspace{0.075cm}}r|r@{\hspace{0.075cm}}r@{\hspace{0.075cm}}r|r@{\hspace{0.075cm}}r@{\hspace{0.075cm}}r}
&{\rotatebox{90}{neg$\dnrm{lm}$}}&{\rotatebox{90}{lit$\dnrm{lm}$}}&{\rotatebox{90}{unc$\dnrm{lm}$}}&{\rotatebox{90}{neg$\dnrm{ADD}$}}&{\rotatebox{90}{lit$\dnrm{ADD}$}}&{\rotatebox{90}{unc$\dnrm{ADD}$}}&{\rotatebox{90}{neg$\dnrm{RE}$}}&{\rotatebox{90}{lit$\dnrm{RE}$}}&{\rotatebox{90}{unc$\dnrm{RE}$}}&{\rotatebox{90}{H4N$\dnrm{neg}$}}&{\rotatebox{90}{H4N$\dnrm{cmn}$}}&{\rotatebox{90}{H4N$\dnrm{RE}$}}\\\hline\hline
neg$\dnrm{lm}$ &&7&2&0&0&0&49&2&0&48&52&12 \\
lit$\dnrm{lm}$ &17&&0&0&0&0&6&20&0&7&93&1 \\
unc$\dnrm{lm}$ &14&0&&0&0&0&18&2&30&16&84&2\\\hline
neg$\dnrm{ADD}$ &0&0&0&&0&0&0&0&0&18&82&2\\
lit$\dnrm{ADD}$ &0&0&0&0&&0&0&0&0&1&99&0\\
unc$\dnrm{ADD}$ &0&0&0&0&0&&0&0&0&3&97&0\\\hline
neg$\dnrm{RE}$ &95&5&4&0&0&0&&0&1&52&48&21\\
lit$\dnrm{RE}$ &18&86&2&0&0&0&0&&0&7&93&0\\
unc$\dnrm{RE}$ &11&2&92&0&0&0&10&0&&13&87&3\\\hline
H4N$\dnrm{neg}$&27&2&1&10&0&0&15&0&0&&0&6\\
H4N$\dnrm{cmn}$&2&1&0&2&1&0&1&0&0&0&&0\\
H4N$\dnrm{RE}$&79&2&2&17&0&0&74&0&1&78&22&
\end{tabular}}
\caption{\label{analysis-qu}
Quantitative analysis of dictionaries.
For a row dictionary $d_r$ and a column dictionary $d_c$, a
cell
gives $|d_r \cap d_c|/|d_r|$ as a percentage.
Diagonal entries (all equal to 100\%) omitted for space
reasons. cmn = common}
\end{center}
\end{table}
\subsection{Quantitative Analysis}
Table \ref{analysis-qu} presents a quantitative analysis of
the distribution of words over dictionaries. For a row
dictionary $d_r$ and a column dictionary $d_c$, a cell gives
$|d_r \cap d_c|/|d_r|$ as a percentage. (Diagonal entries
are all equal to 100\% and are omitted for space reasons.)
For example, 49\% of the words in neg\dnrm{lm} are also
members of neg\dnrm{RE} (row ``neg\dnrm{lm}'', column
``neg\dnrm{RE}''). This analysis allows us to obtain
insights into the relationship between different
dictionaries and into the relationship between the
categories negative, litigious and uncertain.
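The statistic itself is simple; a minimal sketch on toy sets (not the real
dictionaries):
\begin{lstlisting}[language=Python]
# Sketch of the overlap statistic |d_r intersect d_c| / |d_r| in percent.
def overlap(d_r, d_c):
    return 100.0 * len(d_r & d_c) / len(d_r)

neg_lm = {"loss", "breach", "confusion"}      # toy data only
neg_re = {"breach", "confusion", "unlawful"}
print(f"{overlap(neg_lm, neg_re):.0f}")       # 67
\end{lstlisting}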
Looking at rows
neg$\dnrm{lm}$,
lit$\dnrm{lm}$ and
unc$\dnrm{lm}$ first, we see how L\&M constructed their
dictionaries.
neg$\dnrm{lm}$ words come from
H4N$\dnrm{neg}$ and
H4N$\dnrm{cmn}$ in about equal proportions; i.e., many words
that are ``common'' in ordinary usage were classified as
negative by L\&M for financial text. Relatively few
lit$\dnrm{lm}$ and
unc$\dnrm{lm}$ words are taken from
H4N$\dnrm{neg}$, most are from
H4N$\dnrm{cmn}$. Only 12\% of neg$\dnrm{lm}$ words were
automatically classified as negative in domain adaptation
and assigned to H4N$\dnrm{RE}$. This is a surprisingly low
number. Given that H4N$\dnrm{RE}$ performs better than
neg$\dnrm{lm}$ in our experiments, this statistic casts
serious doubt on the ability of human annotators to
correctly classify words for the type of sentiment analysis
that is performed in empirical finance if the actual corpus
contexts of the words are not considered. We see two
types of failures in the human annotation. First, as
discussed in \secref{qualana}, words like ``god'' are
misclassified because the prevalent context in 10-Ks (``act
of God'') is not obvious to the annotator. Second,
the utility of a word is not only a function of its
sentiment, but also of the strength of this sentiment. Many
words in neg$\dnrm{lm}$ that were deemed neutral in
automatic adaptation are probably words that may be slightly
negative, but that do not contribute to explaining financial
variables like excess return. The strength of a word's
sentiment is difficult for human annotators to judge.
Looking at the row H4N$\dnrm{RE}$, we see that most of its
words are taken from neg$\dnrm{lm}$ (79\%) and a few from
lit$\dnrm{lm}$ and unc$\dnrm{lm}$ (2\% each). We can
interpret this statistic as indicating that L\&M had high
recall (they found most of the reliable indicators), but low
precision (see the previous paragraph: only 12\% of their
negative words survive in H4N$\dnrm{RE}$). The distribution
of H4N$\dnrm{RE}$ words over H4N$\dnrm{neg}$
and H4N$\dnrm{cmn}$ is 78:22. This confirms the need for
domain adaptation: many general-domain common words are
negative in the financial domain.
We finally look at how dictionaries for negative, litigious
and uncertain overlap, separately for the L\&M, ADD and RE dictionaries.
lit$\dnrm{lm}$ and
unc$\dnrm{lm}$ have considerable overlap with neg$\dnrm{lm}$
(17\% and 14\%), but they do not overlap with each other.
The three ADD dictionaries --
neg$\dnrm{ADD}$,
lit$\dnrm{ADD}$ and
unc$\dnrm{ADD}$ -- do not overlap at all.
As for RE, 10\% of the
words of unc$\dnrm{RE}$ are also in
neg$\dnrm{RE}$, otherwise there is almost no overlap between RE
dictionaries. Comparing the original L\&M dictionaries and
the automatically adapted ADD and RE dictionaries, we see
that the three categories -- negative, litigious and
uncertain -- are more clearly distinguished after
adaptation. L\&M dictionaries overlap more, ADD and RE
dictionaries overlap less.
\section{Conclusion}
In this paper, we automatically created
sentiment dictionaries for predicting
financial outcomes.
In our experiments, we demonstrated that the
automatically adapted
sentiment dictionary H4N$\dnrm{RE}$ outperforms the previous state of the
art
in predicting the financial outcomes
excess return and volatility.
In particular, automatic adaptation performs better than
manual adaptation.
Our quantitative and qualitative study provided insight into the
semantics of the dictionaries.
We found
that annotation based on
\emph{an expert's a priori belief}
about a word's meaning can be incorrect -- annotation should
be performed based on the word's \emph{contexts in the
target domain} instead.
In the future, we plan to investigate whether there are
changes over time that significantly
impact the linguistic characteristics of the data, in the
simplest case changes in the meaning of a word. Another
interesting topic for future research is
the comparison of domain adaptation based on our
domain-specific word embeddings vs.\ based on word embeddings trained on much larger corpora.
\section*{Acknowledgments}
We are grateful for the support of the European Research
Council for this work (ERC \#740516).
\section{Detailed Description of $\textsc{Facet}$}
\label{sec:app-language}
Programs in $\textsc{Facet}$ are parameterized by a language
$(\mathcal{L},\mathcal{M},\models)$, where the semantic function
$(\_\,\models\,\_) \,:\, \mathcal{L}\times\mathcal{M}\rightarrow D$ specifies the domain $D$ in
which expressions are interpreted. A program takes as input a
\emph{pointer} into the syntax tree for an expression $e\in\mathcal{L}$ as
well as a structure $M\in\mathcal{M}$. A program $P$ navigates up and down on
$e$ using a set of pointers to move from children to parent and parent
to children in order to evaluate the semantics of $e$ over the
structure $M$ and verify that $M\models e = d$ for some $d\in D$.
\subsubsection{Parameters}
\label{sec:parameters}
To write a $\textsc{Facet}$ program we specify two things: (1) the language
over which the program is to operate and (2) the program's auxiliary
\emph{states}, which correspond to semantic aspects, i.e. the
auxiliary information used in the definition of the language
semantics\footnote{
We sometimes use \emph{states} and
\emph{aspects} interchangeably. But, occasionally we use
\emph{aspects} to distinguish auxiliary semantic information, with
which we need not associate any operational meaning, from the
operational meaning associated with \emph{states} in the context
of a program.
}. Part (1) involves specifying (a) the syntax trees
for $\mathcal{L}$ in terms of a ranked alphabet $\Delta$ and (b) the signature
for structures $\mathcal{M}$, including a set of functions used to access the
data for a given $M\in\mathcal{M}$. Additionally, in part (a) we specify an
algebraic data type (ADT) that endows the symbols of $\Delta$ with
extra structure in order to allow programs to pattern match over the
alphabet. As an example, suppose we model universal quantification in
first-order logic by adding to $\Delta$ a unary symbol
\dblqt{$\forall x$} for each variable $x$ from some finite set of
variables. We could then treat \dblqt{$\forall$} as a unary
constructor to allow a program to match over all alphabet symbols that
represent a universally-quantified variable. We abuse notation and
write $\Delta$ for both the ranked alphabet and ADT when the context
is clear.
Part (2) is accomplished by specifying an ADT for the program
states. This ADT will typically be infinite, but any fixed structure
will use only a finite subset of it, provided the language is FAC. We
denote the state ADT by $\mathsf{Asp}$ and the subset pertaining to a given
structure $M$ by $\mathsf{Asp}(M)$. In some cases, $\mathsf{Asp}$ has little or no
structure. For instance, in modal logic it consists of nullary
constructors for the nodes of a Kripke structure. In other cases,
e.g., regular expressions (\Cref{sec:regular-expressions}), we use a
\emph{pair} constructor over positions in finite words with
$\mathsf{Asp} = \{(i,j)\in\mathbb{N}\times\mathbb{N} \,:\, i \le j\}$. We will clarify
these choices in each setting as needed.
The ADTs are each associated with a set of \emph{patterns}, which are
built using the ADT constructors together with variables from a set
$\mathit{\var}$. We use $x,z\in\mathit{\var}$ as pattern variables; these should not
be confused with variables used in expressions $\mathcal{L}$. The sets of
state and alphabet patterns are denoted by $\mathsf{Asp}(\mathit{\var})$ and
$\Delta\mathit{(\var)}$, respectively. Note that we do not assume $\mathsf{Asp}$ has a
finite signature. All we will need is that $\mathsf{Asp}(M)$ is finite for
every $M$ and that state and alphabet pattern matching is
computable. For the latter, we assume a computable function $\mathit{match}$
that computes a unifying substitution for two members of
$\mathsf{Asp}(\mathit{\var})$ or $\Delta\mathit{(\var)}$ whenever possible.
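To make this concrete, the following is a minimal Python sketch (purely
illustrative, not part of $\textsc{Facet}$) of the kind of one-sided
unification we assume for $\mathit{match}$; patterns are nested tuples whose
leaves may be variables:
\begin{lstlisting}[language=Python]
# Sketch of match: unify a pattern (tuples with Var leaves) against a
# ground value; return a substitution (dict) or None on failure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

def match(pattern, value, subst=None):
    subst = dict(subst or {})
    if isinstance(pattern, Var):
        if pattern in subst and subst[pattern] != value:
            return None                        # conflicting binding
        subst[pattern] = value
        return subst
    if isinstance(pattern, tuple) and isinstance(value, tuple) \
            and len(pattern) == len(value):
        for p, v in zip(pattern, value):
            subst = match(p, v, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == value else None

# match(("pair", Var("l"), Var("r")), ("pair", 1, 4))
#   == {Var("l"): 1, Var("r"): 4}
\end{lstlisting}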
\subsubsection{Semantics}
\label{sec:semantics}
The semantics for $\textsc{Facet}$ is given in \Cref{fig:big-step}. It defines
two relations
\begin{align*}
(M, n, \mathit{C}, \sigma) \Downarrow_p \qquad \text{and} \qquad (M,
n, \mathit{C}, e) \Downarrow_e.
\end{align*}
The predicate $\Downarrow_p$ holds for program configurations
$(M,n,\mathit{C},\sigma)$, with $\mathit{C}$ being the set of clauses for
the program, $M$ a structure, $n$ a pointer, and $\sigma\in\mathsf{Asp}(M)$ a
state. The predicate $\Downarrow_e$ holds for expression
configurations $(M, n, \mathit{C}, e)$, with $e$ being an expression
from the grammar in \Cref{fig:lang}. The assertion
$(M, n, \mathit{C}, \sigma) \Downarrow_p$ can be read as follows: over
the structure $M$ and syntax tree pointed to by $n$, the program
consisting of clauses $\mathit{C}$ terminates with success when started
in the state $\sigma$. Proofs for $\Downarrow_p$ involve finding the
matching clause for a state $\sigma$ and building a subproof for
$\Downarrow_e$ for the appropriate case of the \linl{match} statement
in the matching clause.
Next we discuss some details for the language.
\subsubsection{Well-Formed Programs}
\label{sec:well-formed-programs}
We consider only \emph{well-formed} programs, which have
\emph{disjoint} and \emph{exhaustive} clauses. That is, for every
$M\in\mathcal{M}$ and $\sigma\in\mathsf{Asp}(M)$, there is precisely one clause $c$
for which $\mathit{match}(\mathit{pat}_c, \sigma)$ succeeds, where $\mathit{pat}_c$ denotes
the state pattern for the clause $c$.
We emphasize that variables in $\textsc{Facet}$ programs are \emph{never} bound
to $\textsc{Facet}$ expressions. They are instead replaced either by components
of the syntax tree data type or the state data type. For example, a
variable $x$ might be bound to position $2$ in the word $w = abc$,
while a variable $y$ might be bound to the letter $b$. Variables are
\emph{bound} in $\textsc{Facet}$ programs by the state pattern at the beginning
of each clause, by the alphabet patterns in each case of a
\linl{match} statement, and in \linl{all} and \linl{any}
expressions. Well-formed programs do not have free variables.
\subsubsection{Computable Function Parameters}
\label{sec:comp-funct-param}
The functions $g\in S$ appearing in \linl{any} and \linl{all}
expressions compute finite sets, e.g., elements or sets of elements
from the domain of the structure. These elements are then bound to
variables in \linl{any} and \linl{all} expressions. For example, in a
totally-ordered structure like a word, we could use the function
$g(l,r) = [l, r]$, or a variant $g'(l,r) = [l+1,r]$ to compute sets of
consecutive positions. For convenience we allow functions $g$ to occur
in expressions that denote states. For example, if states are pairs of
word positions, we may write $(2,g(x))$, which can be evaluated to a
state once $x$ is bound. The condition in the premise of the
\textsc{Call} rule in \Cref{fig:big-step} uses a function called
$\mathit{norm}$ to reduce an expression like this to a state. The functions
$g\in S$ and $f\in B$ in fact represent a family of functions, one for
each $M\in\mathcal{M}$. We write $\mathit{eval}(M,g(v))$ and $\mathit{eval}(M,f(v))$ to denote
the result of computing $f$ and $g$ in a structure $M$ with arguments
$v$.
\subsubsection{Negation}
\label{sec:negation}
The reader may wonder why there is no negation in the expression
syntax for $\textsc{Facet}$ and why the conditional for \linl{if} expressions
does not allow an arbitrary expression. These apparent restrictions
are only for simplicity. If we wanted to include negation and more
general conditionals, we could augment the states of any $\textsc{Facet}$
program to track the parity of the number of negations seen at any
point in a program execution; but we prefer to keep this complexity
out of the semantics. The effect of negation can easily be
accomplished by writing dual programs that implement dual operations
in particular states, as described in \Cref{sec:modal-expr-separ}.
\begin{figure}
\begin{mathpar}
\inferrule*[left=Prog]
{
\tau = \mathit{match}(\mathit{pat}_{c_i}, \sigma)\\
\tau' = \mathit{match}_{c_i}(\tau(\alpha_k), n.\mathit{l}) \\
(M, n, \mathit{C}, \tau'(\tau(e_k))) \Downarrow_e
}
{
(M, n, \mathit{C} = \{\,c_1 \ldots c_m\,\}, \sigma) \Downarrow_p
}
\inferrule*[left=Call]
{
(M, n.c, \mathit{C}, \sigma') \Downarrow_p \\
\sigma' = \mathit{norm}(M, \sigma(v))\\
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!P($M$,\ $\sigma(v)$,\ $c$)!}) \Downarrow_e
}
\inferrule*[left=Bool]
{
\mathit{eval}(M, f(v)) = \top\\
}
{
(M, n, \mathit{C}, f(v)) \Downarrow_e
}\\
\inferrule*[left=And]
{
(M, n, \mathit{C}, e) \Downarrow_e \\
(M, n, \mathit{C}, e') \Downarrow_e \\
}
{
(M, n, \mathit{C}, e\ \hbox{\zlstinline!and!}\ e') \Downarrow_e
}
\inferrule*[left=True]
{
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!True!}) \Downarrow_e
}\\
\inferrule*[left=Then]
{
(M, n, \mathit{C}, e_1) \Downarrow_e \\
\mathit{eval}(M, f(v)) = \top \\
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!if\ $f(v)$ then\ $e_1$ else\ $e_2$!}) \Downarrow_e
}
\inferrule*[left=Or1]
{
(M, n, \mathit{C}, e) \Downarrow_e \\
}
{
(M, n, \mathit{C}, e\ \hbox{\zlstinline!or!}\ e') \Downarrow_e
}\\
\inferrule*[left=Else]
{
(M, n, \mathit{C}, e_2) \Downarrow_e \\
\mathit{eval}(M, f(v)) = \bot \\
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!if\ $f(v)$ then\ $e_1$ else\ $e_2$!}) \Downarrow_e
}
\inferrule*[left=Or2]
{
(M, n, \mathit{C}, e') \Downarrow_e \\
}
{
(M, n, \mathit{C}, e\ \hbox{\zlstinline!or!}\ e') \Downarrow_e
}\\
\inferrule*[left=All]
{
(M, n, \mathit{C}, \{x\mapsto v_1\}(e)) \Downarrow_e \ \ \cdots\ \
\ (M, n, \mathit{C}, \{x\mapsto v_l\}(e)) \Downarrow_e \quad
\mathit{eval}(M, g(v)) = \{\, v_1\, \ldots\, v_l\,\}
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!all (LAM$x$.\ $e$)\ $g(v)$!}) \Downarrow_e
}\\
\inferrule*[left=Any]
{
(M, n, \mathit{C}, \{x\mapsto v_i\}(e)) \Downarrow_e \qquad v_i \in \mathit{eval}(M, g(v))
}
{
(M, n, \mathit{C}, \hbox{\zlstinline!any (LAM$x$.\ $e$)\ $g(v)$!}) \Downarrow_e
}
\end{mathpar}
\caption[LoF]{Semantics for $\textsc{Facet}$. If the node $n.c$ does not exist, then the \LeftTirNameStyle{Call} rule does not apply.
}
\label{fig:big-step}
\end{figure}
\newpage
\subsection{Translating $\textsc{Facet}$ Programs to Two-way Tree Automata}
\label{sec:translate}
If for every structure $M$, the states $\mathsf{Asp}(M)$ are computable and
finite, then a $\textsc{Facet}$ program can be translated to a two-way
alternating tree automaton over syntax trees. $\textsc{Facet}$ draws attention
to a small subset of tree automata that serve as semantic evaluators,
and which have state spaces and alphabets that can be highly
structured. Most $\textsc{Facet}$ programs we have considered consist of at
most a few clauses, with each clause handling the semantics for a
large set of related states.
\emph{Two-way alternating tree automata} over states $Q$ and alphabet
$\Delta$ assign to each state and symbol a positive Boolean formula
like the following
\begin{align*}
\delta(q,a) \,\,= \,\, (q'',0) \vee (q',-1)\wedge (q,1) \wedge (q,2).
\end{align*}
This formula stipulates that in state $q$ reading symbol $a$, the
automaton can \emph{either} continue from the current position ($0$)
in state $q''$ or else it should succeed in several new positions:
from the parent ($-1$) in state $q'$ \emph{and} from the left ($1$)
and right ($2$) children, both in state $q$. More generally, the
transitions are of the following form:
\begin{align*}
\delta(q,a) \in \mathcal{B}^{\mathtt{+}}(Q\times \{-1,0,...,k\}) \qquad
\text{where} \,\, q\in Q, \,\, a\in \Delta,\,\, k = \arity{a},
\end{align*}
and they describe the viable next states and directions for the
machine. It can either move up ($-1$), down (numbers $> 0$), or stay
at the same node ($0$) on the input tree while changing
state. Satisfying Boolean assignments to the set
$Q\times\{-1,0,...,k\}$ describe strategies to build accepting
automaton runs along an input syntax tree, with the two components of
$Q$ and $\{-1,0,...,k\}$ corresponding to the \emph{next state} and
the \emph{direction to move}, respectively. We next explain how
programs in $\textsc{Facet}$ have a simple, straightforward translation to such
automata.
Given a well-formed program $P$, a structure $M$, the set $\mathsf{Asp}(M)$,
and a distinguished state $\sigma_i\in\mathsf{Asp}(M)$, we can build a two-way
alternating tree automaton $\mathcal{A}(P, M)$ that accepts precisely the
syntax trees (rooted at $n$) for which
$(M, n, \mathit{C}(P),\sigma_i) \Downarrow_p$ holds. The states of
$\mathcal{A}(P, M)$ are the members of $\mathsf{Asp}(M)$ and the initial state is
$\sigma_i$. For each clause $c\in\mathit{C}(P)$ of the form
\begin{center}
\linl{$P$($M$,\ $\sigma(z)$,\ $n$) = match\ $n.\mathit{l}$ with\ $\alpha_1$ ->\ $e_1$\ $\ldots\,\,$\ $\alpha_m$ ->\ $e_m$}
\end{center}
the translation creates a set of transitions. For each matching
$\sigma\in\mathsf{Asp}(M)$ there is a transition for each $a\in \Delta$. We
write $\tau = \mathit{match}_c(\alpha_i, a)$ to mean that $a$ matches the
alphabet pattern $\alpha_i$ in $c$ with unifying substitution $\tau$
and it does not match $\alpha_j$ for any $j < i$. We write $\tau(e)$
for the application of $\tau$ to $e$, which substitutes $\tau(x)$ for
all free occurrences of $x$ in $e$, for all $x$ in the domain of
$\tau$.
Suppose $\tau = \mathit{match}(\mathit{pat}_c,\sigma)$ for some clause $c$ with
cases \linl{$\alpha_i$ ->\ $e_i$}. For each symbol $a\in \Delta$, with
$\tau' = \mathit{match}_c(\tau(\alpha_i), a)$, we have the transition
\begin{align*}
\delta(\sigma, a) = \mathsf{aut}(\tau'(\tau(e_i))),
\end{align*}
with $\mathsf{aut}$ described below. Any $a\in\Delta$ for which no alphabet
pattern matches, and which is therefore not covered above, is assigned
$\delta(\sigma, a) = \bot$. Thus a \linl{match} statement need not
specify what to do for alphabet symbols that, say, must never be read
in specific states of the program.
Expressions $e$ are translated into positive Boolean formulae using
the function $\mathsf{aut}$ defined below.
\vspace{0.1in}
\begin{minipage}[t]{0.6\textwidth}
\centering
\begin{tabularx}{0.8\textwidth}{l l l l l l l}
$\mathsf{aut}(\text{\linl{True}})$ &$=$ & $\top$ & \quad &
$\mathsf{aut}(\text{\lstinline!$e_1$\ and\ $e_2$!})$ &$=$ & $\mathsf{aut}(e_1) \wedge\mathsf{aut}(e_2)$ \\
$\mathsf{aut}(\text{\linl{False}})$ &$=$ & $\bot$ & \quad &
$\mathsf{aut}(\text{\lstinline!$e_1$\ or\ $e_2$!})$ &$=$ & $\mathsf{aut}(e_1) \vee \mathsf{aut}(e_2)$ \\
$\mathsf{aut}(\text{\linl{$f$($v$)}})$ &$=$ & $\mathit{eval}(M, f(v))$ & \quad & & &\\
\end{tabularx}
\end{minipage}
\begin{minipage}[t]{0.6\textwidth}
\centering
\begin{tabularx}{0.8\textwidth}{l l}
$\mathsf{aut}(\text{\linl{if\ $f(v)$ then\ $e_1$ else\ $e_2$}})$ \quad &$=$\,\,\, $\begin{array}{l} \mathsf{aut}(e_1)
\quad \text{if}\,\, \mathit{eval}(M, f(v)) = \top \\
\mathsf{aut}(e_2)
\quad \text{else}
\end{array}$ \\
$\mathsf{aut}(\text{\lstinline!all (LAM $x$.\ $e$)\ $g$($v$)!})$ \quad
&$=$ \quad $\bigwedge_{v'\,\in\, \mathit{eval}(M,\,g(v))}\mathsf{aut}(\{x\mapsto v'\}(e))$\\
$\mathsf{aut}(\text{\lstinline!any (LAM $x$.\ $e$)\ $g$($v$)!})$ \quad
&$=$ \quad $\bigvee_{v'\,\in\, \mathit{eval}(M,\,g(v))}\mathsf{aut}(\{x\mapsto v'\}(e))$\\
$\mathsf{aut}(\text{\linl{$P$($M$,\ $\sigma(v)$,\ $n.\mathit{dir}$)}})$ \quad &$=$ \quad
$(\sigma,\mathit{dir})$
\quad
\text{where
}
$\sigma
= \mathit{norm}(M,\sigma(v))$
\end{tabularx}
\end{minipage}
\vspace{0.1in}
We can now state one other well-formedness condition for $\textsc{Facet}$
programs. Namely, for every $M$, $\sigma\in\mathsf{Asp}(M)$, $a\in\Delta$, and
clause $c$ such that $\tau = \mathit{match}(\mathit{pat}_c,\sigma)$, if
$\tau' = \mathit{match}_c(\tau(\alpha_i), a)$ for the case
\linl{$\alpha_i$ -> $\ e_i$} in $c$, then we must have:
\begin{align*}
\mathsf{aut}(\tau'(\tau(e_i))) \in \mathcal{B}^{\mathtt{+}}(\mathsf{Asp}\times
\{-1,0,\ldots,\arity{a}\}).
\end{align*}
In other words, the movement of a program along the syntax tree must
respect symbol arities.
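The translation itself is mechanical. The following Python sketch
(illustrative only; expressions are tagged tuples, \linl{call} arguments are
assumed already normalized to a state and a direction, and \texttt{eval\_fn}
stands for evaluation of $f$ and $g$ in the fixed structure $M$) mirrors the
definition of $\mathsf{aut}$:
\begin{lstlisting}[language=Python]
# Sketch of aut: map an expression to a positive Boolean formula over
# (state, direction) pairs, represented as nested tuples. An empty
# "and"/"or" reads as top/bottom, respectively.
def aut(e, eval_fn):
    tag, *args = e
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "bool":                  # f(v) is decided now, in M
        return eval_fn(*args)
    if tag == "and":
        return ("and", aut(args[0], eval_fn), aut(args[1], eval_fn))
    if tag == "or":
        return ("or", aut(args[0], eval_fn), aut(args[1], eval_fn))
    if tag == "if":                    # branch resolved at translation time
        cond, e1, e2 = args
        return aut(e1 if eval_fn(*cond) else e2, eval_fn)
    if tag == "all":                   # conjunction over eval(M, g(v))
        body, values = args
        return ("and",) + tuple(aut(body(v), eval_fn) for v in values)
    if tag == "any":                   # disjunction over eval(M, g(v))
        body, values = args
        return ("or",) + tuple(aut(body(v), eval_fn) for v in values)
    if tag == "call":                  # atom (sigma, dir) of the formula
        state, direction = args
        return (state, direction)
    raise ValueError(tag)
\end{lstlisting}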
\subsection{Example Construction}
\label{sec:example-construction}
Here we explicitly show the construction of an automaton for a
restricted regular expression language on the alphabet
$\{a,b\}$. Assume regular expressions as in \Cref{sec:regular-expressions}
but without union, intersection, and negation. Our evaluator
simplifies as follows
\begin{lstlisting}
Reg($w$, $(l,r)$, $n$) = match $n.\mathit{l}$ with
$*$ -> if ($l = r$) then True else
any (LAM$x$. Reg($w$, $(l,x)$, $n.\mathit{c}_1$) and Reg($w$, $(x,r)$, $n.\mathit{stay}$)) $[l+1,r]$
$\,\cdot$ -> any (LAM$x$. Reg($w$, $(l,x)$, $n.\mathit{c}_1$) and Reg($w$, $(x,r)$, $n.\mathit{c}_2$)) $[l,r]$
$x$ -> $r = l+1$ and $w(l) = x$
\end{lstlisting}
Consider the word $w=\mathit{abb}$. The resulting two-way
alternating automaton $\mathcal{A}(\hbox{\linl{Reg}}, w)$ is constructed as
follows. See~\cite{tata} for definitions and results for such
automata. The state set is
\begin{align*}
Q=\{&(1,1), (1,2), (1,3), (1,4), (2,2), (2,3), (2,4), (3,3),
(3,4), (4,4),\\
&\mathit{dual}(1,1), \mathit{dual}(1,2), \mathit{dual}(1,3), \mathit{dual}(1,4), \mathit{dual}(2,2),
\mathit{dual}(2,3),
\\ &\mathit{dual}(2,4), \mathit{dual}(3,3),
\mathit{dual}(3,4), \mathit{dual}(4,4) \}.
\end{align*}
The initial state is $(1,|\mathit{abb}|+1)=(1,4)$. Here are just some
of the many transitions:
\begin{itemize}
\item[] $\delta((1,2),a) = \top$
\item[] $\delta((2,3),b) = \top$
\item[] $\delta((3,4),b) = \top$
\item[] $\delta((1,4),\cdot) = ((1,1),1) \wedge
((1,4),2) \vee ((1,2),1) \wedge
((2,4),2) \vee ((1,3),1) \wedge ((3,4),2)$
\item[] $\qquad\qquad\qquad \vee ((1,4),1) \wedge
((4,4),2)$
\item[] $\delta((1,3),\cdot)=\cdots$
\item[] $\delta((1,2),\cdot)=\cdots$
\item[] $\delta((1,1),\cdot)= ((1,1),1)\wedge ((1,1),2)$
\item[] $\delta((2,4),\cdot)=\cdots$
\item[] $\delta((2,3),\cdot)=\cdots$
\item[] $\delta((2,2),\cdot)=\cdots$
\item[] $\delta((3,4),\cdot)=\cdots$
\item[] $\delta((3,3),\cdot)=\cdots$
\item[] $\delta((4,4),\cdot)=\cdots$
\item[] $\delta((1,1),*) = \top$
\item[] $\delta((2,2),*) = \top$
\item[] $\delta((3,3),*) = \top$
\item[] $\delta((4,4),*) = \top$
\item[]
$\delta((1,4),*) = ((1,2),1)\wedge((2,4),0) \vee
((1,3),1)\wedge((3,4),0) \vee ((1,4),1)\wedge((4,4),0)$
\item[] $\delta((2,4),*) = \cdots$
\item[] $\delta((3,4),*) = \cdots$
\item[] $\delta((1,3),*) = \cdots$
\item[] $\delta((2,3),*) = \cdots$
\item[] $\delta((1,2),*) = \cdots$
\end{itemize}
All other transition formulas not already suggested above are
$\bot$.
\newpage
\section{Dual Programs}
\label{sec:dual-programs}
The dual of a $\textsc{Facet}$ expression $e$ is computed as follows:
\vspace{0.1in}
\begin{tabularx}{\linewidth}{l}
$\mathit{dual}($\linl{True}$) =\ $ \linl{False} \\
$\mathit{dual}($\linl{False}$) =\ $\linl{True} \\
$\mathit{dual}(f(v)) = \neg f(v)$ \\
$\mathit{dual}($\linl{$P$($M$,\ $\sigma(z)$,\ $n.\mathit{dir}$)}$) =\ $ \linl{$P$($M$,\ $\mathit{flip}(\sigma(z))$,\ $n.\mathit{dir}$)}\\
$\mathit{dual}(e\ $ \linl{and} $\ e') = \mathit{dual}(e)\ $ \linl{or} $\ \mathit{dual}(e')$ \\
$\mathit{dual}(e\ $ \linl{or} $\ e') = \mathit{dual}(e)\ $ \linl{and} $\ \mathit{dual}(e')$ \\
$\mathit{dual}($\linl{all (LAM$z$.\ $e$)\ $g(v)$}$) =\ $\linl{any (LAM$z$.\ $\mathit{dual}(e)$)\ $g(v)$} \\
$\mathit{dual}($\linl{any (LAM$z$.\ $e$)\ $g(v)$}$) =\ $\linl{all (LAM$z$.\ $\mathit{dual}(e)$)\ $g(v)$} \\
$\mathit{dual}($\linl{if\ $f(v)$\ then\ $e_1$ else\ $e_2$}$)$ = \linl{if\ $f(v)$\ then\ $\mathit{dual}(e_1)$ else\ $\mathit{dual}(e_2)$}
\end{tabularx}
\vspace{0.1in}
\noindent where $\mathit{flip}(\mathit{dual}(\sigma(z))) = \sigma(z)$ and otherwise
$\mathit{flip}(\sigma(z)) = \mathit{dual}(\sigma(z))$.
\newpage
\section{Computation Tree Logic}
\label{sec:ctl}
We want a semantic evaluator for CTL syntax trees $\varphi$ over
pointed Kripke structures $G=(W,s,E,P)$ that checks whether
$G\models\varphi$. Like modal logic, the semantic aspects again
include nodes of $G$, but now also a counter to interpret path
quantifiers recursively.
We consider the following grammar for CTL formulas, from which the
other standard operators can be defined:
\begin{align*}
\varphi\Coloneqq a\in\Sigma \,\, | \,\, \varphi\vee\psi\,\, | \,\,
\neg\varphi\,\, | \,\, \ensuremath{\mathsf{EG}}\varphi\,\, | \,\, \mathsf{E}(\varphi\mathsf{U}\psi)\,\, | \,\, \ensuremath{\mathsf{EX}}\varphi
\end{align*}
The semantics for path quantifiers is given recursively based on the following:
\begin{align*}
G,w\models \ensuremath{\mathsf{EX}}\varphi \,\,\Leftrightarrow\,\, \exists w'.\, E(w,w')\text{ and }
G,w'\models\varphi
\end{align*}
with the other two path quantifiers interpreted according to the
equivalences
\begin{align*}
(1)\quad \ensuremath{\mathsf{EG}}\varphi \equiv \varphi \wedge \ensuremath{\mathsf{EX}}(\ensuremath{\mathsf{EG}}\varphi)
\quad\text{and}\quad (2)\quad
\mathsf{E}(\varphi\mathsf{U}\psi) \equiv \psi \vee \varphi \wedge \ensuremath{\mathsf{EX}} (\mathsf{E}(\varphi\mathsf{U}\psi)),
\end{align*}
with $(1)$ understood as a greatest fixpoint and $(2)$ as a least
fixpoint, as we discuss shortly. The $\hbox{\linl{CTL}}$ program is given in
\Cref{fig:ctl}. Below, and in the program, we write \dblqt{$Ew$} as a
shorthand for $\{w'\in W\,:\, E(w,w')\}$. Fix a Kripke structure
$G=(W,s,E,P)$.
The alphabet ADT is trivial. The state ADT consists of two parts:
\begin{align*}
\mathsf{Asp}(G) &\coloneqq \{w, \mathit{dual}(w) \,:\, w\in W\} \sqcup \mathsf{Count}(G) \\
\mathsf{Count}(G) &\coloneqq \{(w, i),\, (\mathit{dual}(w), i) \,:\, w\in W,\, 0\le i\le|W|\},
\end{align*}
the first being states of the form $w$ or $\mathit{dual}(w)$, which do not
involve a counter, and the second being states of the form $(w,i)$ or
$(\mathit{dual}(w),i)$, which use a counter to verify path quantifiers.
The counter value $i$ tracks the stages of a least or greatest
fixpoint computation. If the counter is being used to verify a formula
$\ensuremath{\mathsf{EG}}\varphi$ holds at some $w\in W$, then this is a greatest
fixpoint. On the other hand, if the counter is being used to verify a
formula $\mathsf{E}(\varphi\mathsf{U}\psi)$ does \emph{not} hold at some
$w\in W$, then it is a least fixpoint. To understand this, let us
consider the equivalences $(1)$ and $(2)$ as monotone functions over
$\mathcal{P}(W)$. We write
$\llbracket \varphi\rrbracket_G \coloneqq \{w\in W\,:\,
G,w\models\varphi\}$ for the set of states where a given formula
holds. Now observe that $(1)$ and $(2)$ correspond to the monotone
functions
\vspace{0.2in}
\begin{minipage}[t]{\textwidth}
\centering
\begin{tabularx}{0.7\linewidth}{r c c c l}
$\ensuremath{\mathsf{EG}}_\varphi(X)$ & $=$ & $\llbracket\varphi\rrbracket$ & $\cap$ & $\left\{w\in W\,:\, Ew
\cap X \neq \emptyset\right\}$ \\
$\ensuremath{\mathsf{EU}}_{\varphi,\psi}(X)$ & $=$ & $\llbracket\psi\rrbracket$ &
$\cup$ & $\left(\left\{w\in W\,:\, Ew
\cap X \neq \emptyset\right\} \cap \llbracket\varphi\rrbracket\right)$,
\end{tabularx}
\end{minipage}
\vspace{0.1in}
\noindent with $\llbracket\ensuremath{\mathsf{EG}}\varphi\rrbracket_G = \mathsf{gfp}(\ensuremath{\mathsf{EG}}_\varphi)$ and
$\llbracket\mathsf{E}(\varphi\mathsf{U}\psi)\rrbracket_G =
\mathsf{lfp}(\ensuremath{\mathsf{EU}}_{\varphi,\psi})$. The program $\hbox{\linl{CTL}}$ in \Cref{fig:ctl} has
the property that for all $G=(W,s,E,P)$, $w\in W$, $0\le i\le|W|$, and
$\varphi$:
\vspace{0.1in}
\begin{minipage}[t]{\linewidth}
\centering
\def\arraystretch{1.4}
\begin{tabularx}{0.8\linewidth}{l r l l}
& $(G,\mathit{root}(\ensuremath{\mathsf{EG}}\varphi), \mathit{C}(\hbox{\linl{CTL}}), (w,i))$ $\Downarrow_p$
&
$\Leftrightarrow$ & $w\in \ensuremath{\mathsf{EG}}_\varphi^{|W|-i}(W)$ \\
and &
$(G,\mathit{root}(\mathsf{E}(\varphi\mathsf{U}\psi)), \mathit{C}(\hbox{\linl{CTL}}), (\mathit{dual}(w), i))$
$\Downarrow_p$ & $\Leftrightarrow$ &
$w\notin \ensuremath{\mathsf{EU}}_{\varphi,\psi}^{|W|-i}(\emptyset)$.
\end{tabularx}
\end{minipage}
\vspace{0.1in}
\noindent The task of evaluating
$w\notin \llbracket\ensuremath{\mathsf{EG}}\varphi\rrbracket_G$ needs no counter because
there is always a finite computation witnessing non-membership for
$\ensuremath{\mathsf{EG}}$-formulas. The task of evaluating whether
$w\in \llbracket\mathsf{E}(\varphi\mathsf{U}\psi)\rrbracket_G$ needs no counter
for the same reason. There are always proofs of \emph{removal} from a
greatest fixpoint and proofs of \emph{inclusion} in a least
fixpoint. Note we do not mind infinite loops in the other cases
because we interpret programs as tree automata with reachability
acceptance.
\begin{theorem}
CTL separation for finite sets $\mathit{P}$ and $\mathit{N}$ of finite pointed
Kripke structures, and grammar $\mathcal{G}$, is decidable in time
$\mathcal{O}(2^{\mathit{poly}(mn^2)}\cdot|\mathcal{G}|)$, where $m=|\mathit{P}|+|\mathit{N}|$ and
$n=\max_{G\in \mathit{P}\cup\mathit{N}}|G|$.
\end{theorem}
\begin{proof}
We have $|\mathsf{Asp}(G)| = \mathcal{O}(|W|^2)$, and the rest follows by
\Cref{lemma:mainlemma} and \Cref{complexity-lemma}.
\end{proof}
\newpage
\begin{figure}[H]
\centering
\begin{lstlisting}
CTL($G$, $w$, $n$) =
match $n.\mathit{l}$ with
$\ensuremath{\mathsf{EG}}$ -> CTL($G$, $w$, $0$, $n.\mathit{stay}$)
$\ensuremath{\mathsf{EU}}$ -> CTL($G$, $w$, $n.\mathit{c}_2$) or
(CTL($G$, $w$, $n.\mathit{c}_1$) and (any (LAM$z.$ CTL($G$, $z$, $n.\mathit{stay}$)) $Ew$))
$\ensuremath{\mathsf{EX}}$ -> any (LAM$z.$ CTL($G$, $z$, $n.\mathit{c}_1$)) $Ew$
$\,\,\vee$ -> CTL($G$, $w$, $n.\mathit{c}_1$) or CTL($G$, $w$, $n.\mathit{c}_2$)
$\,\,\neg$ -> CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$)
$\,\,x$ -> $x\in P(w)$
CTL($G$, $\mathit{dual}(w)$, $n$) =
match $n.\mathit{l}$ with
$\ensuremath{\mathsf{EU}}$ -> CTL($G$, $\mathit{dual}(w)$, $0$, $n.\mathit{stay}$)
$\ensuremath{\mathsf{EG}}$ -> CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$) or (all (LAM$z.$ CTL($G$, $\mathit{dual}(z)$, $n.\mathit{stay}$)) $Ew$)
$\ensuremath{\mathsf{EX}}$ -> all (LAM$z.$ CTL($G$, $\mathit{dual}(z)$, $n.\mathit{c}_1$)) $Ew$
$\,\,\vee$ -> CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$) and CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_2$)
$\,\,\neg$ -> CTL($G$, $w$, $n.\mathit{c}_1$)
$\,\,x$ -> $x\notin P(w)$
CTL($G$, $w$, $i$, $n$) =
match $n.\mathit{l}$ with
$\ensuremath{\mathsf{EG}}$ -> if $i=|W|$ then True
else CTL($G$, $w$, $n.\mathit{c}_1$) and (any (LAM$z.$ CTL($G$, $z$, $i+1$, $n.\mathit{stay}$)) $Ew$)
CTL($G$, $\mathit{dual}(w)$, $i$, $n$) =
match $n.\mathit{l}$ with
$\ensuremath{\mathsf{EU}}$ -> if $i=|W|$ then True
else CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_2$) and
(CTL($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$) or
(all (LAM$z.$ CTL($G$, $\mathit{dual}(z)$, $i+1$, $n.\mathit{stay}$)) $Ew$))
\end{lstlisting}
\caption{$\hbox{\linl{CTL}}$ evaluates CTL formulas $\varphi$ against an input
pointed Kripke structure $G$ and checks $G\models\varphi$. }
\label{fig:ctl}
\end{figure}
\newpage
\section{Finite-Variable First-Order Logic}
\label{sec:fo}
Fix a finite relational signature with relation symbols $R_i$ and a
set of variables $V=\{x_1,\ldots,x_k\}$. We write a program $\hbox{\linl{FO}}$ that
evaluates an $\FO^{k}$ syntax tree $\varphi$ against a relational
structure $M$ and checks $M\models\varphi$. The aspects are the
(partial) assignments to variables $V$:
\begin{align*}
\mathsf{Asp}(M)\coloneqq \{\gamma, \mathit{dual}(\gamma) \,:\, \gamma \in [V\rightharpoonup M]\},
\end{align*}
and $|\mathsf{Asp}(M)|=\mathcal{O}(|M|^k)$. The alphabet ADT consists of unary
constructors for $\forall$ and $\exists$, as well as $l$-ary
constructors for each $l$-ary relation symbol $R$. The program $\hbox{\linl{FO}}$
together with its (omitted) dual allows us to derive the main result
of~\cite{popl22}. The other results can also be derived, e.g., $\FO^{k}$
with recursive definitions can be interpreted using a combination of
two-way navigation as in \Cref{sec:cfg} and counters as in
\Cref{sec:ctl}.
\begin{figure}[H]
\centering
\begin{lstlisting}
FO($M$, $\gamma$, $n$) =
match $n.\mathit{l}$ with
$\wedge$ -> FO($M$, $\gamma$, $n.\mathit{c}_1$) and FO($M$, $\gamma$, $n.\mathit{c}_2$)
$\vee$ -> FO($M$, $\gamma$, $n.\mathit{c}_1$) or FO($M$, $\gamma$, $n.\mathit{c}_2$)
$\neg$ -> FO($M$, $\mathit{dual}(\gamma)$, $n.\mathit{c}_1$)
$\forall x$ -> all (LAM$z$. FO($M$, $z$, $n.\mathit{c}_1$)) $\{\update{\gamma}{x}{a} \,:\, a\in M\}$
$\exists x$ -> any (LAM$z$. FO($M$, $z$, $n.\mathit{c}_1$)) $\{\update{\gamma}{x}{a} \,:\, a\in M\}$
$R(\overline{x})$ -> $\gamma(\overline{x})\in R^M$
\end{lstlisting}
\caption{$\hbox{\linl{FO}}$ evaluates first-order logic formulas $\varphi$ against
an input relational structure $M$ and checks $M\models\varphi$. }
\label{fig:fo-clause1}
\end{figure}
\section{Context-Free Grammars}
\label{sec:cfg}
In this section we consider another problem involving separation of
labeled finite words. The goal is to synthesize a context-free grammar
that generates all positively-labeled words and no negatively-labeled
words. We derive a decision procedure as before by writing a $\textsc{Facet}$
program.
\subsection{Separating Words with Context-Free Grammars}
\label{sec:separ-words-with}
\begin{problem}[Context-Free Grammar Separation]
Given finite sets $\mathit{P}$ and $\mathit{N}$ of finite words over an alphabet
$\Sigma$, as well as a (meta-)grammar $\mathcal{G}$, synthesize a
context-free grammar $G$ over nonterminals $\mathit{NT}$, terminals
$\Sigma$, and axiom $S\in \mathit{NT}$, such that $G\in L(\mathcal{G})$ and
$P\subseteq L(G)$ and $N \cap L(G) = \emptyset$, or declare no such
grammar exists.
\end{problem}
The semantics of context-free grammars (CFGs) is standard: a word is
generated by a grammar if we can build a parse tree for it using the
productions. We want to represent CFGs as syntax trees and then write
a program that reads such trees and evaluates whether a given word is
generated by the represented grammar. The syntax trees can organize
productions along, say, the right spine, with their right-hand sides
in left children as suggested below. Note that the ranked alphabet
$\Delta$ uses a binary symbol $\mathit{lhs}(A)$ and nullary symbol $\mathit{rhs}(A)$
for each $A\in\mathit{NT}$ to distinguish between occurrences of $A$ in the
right-hand side of a production from occurrences in the left-hand
side.
We also use a binary symbol $\mathit{top}(S)$ to distinguish the root of the
syntax tree, as well as a nullary symbol $\mathit{end}$ to signal the end of
productions along the right spine. Terminals $a\in\Sigma$ are
represented as nullary symbols $\mathit{term}(a)$. See below with a grammar on
the left and its syntax tree on the right.
\begin{minipage}[t]{0.45\linewidth}
\begin{align*}
\\
S \,\, &\longrightarrow\,\, a \, S \, b \quad|\quad c
\end{align*}
\end{minipage}%
\begin{minipage}[t]{0.45\linewidth}
\begin{align*}
\begin{tikzpicture}[thick]
\node[draw=none] (S1) at (0,0) {$\mathit{top}(S)$} ;
\node[draw=none] (S2) at (1.5,-0.25) {$\mathit{lhs}(S)$} ;
\node[draw=none] (Dot) at (-1.5,-0.25) {$\cdot$} ;
\node[draw=none] (A) at (-2.2,-1) {$\mathit{term}(a)$} ;
\node[draw=none] (S3) at (-0.8,-1) {$\cdot$} ;
\node[draw=none] (rhsS) at (-1.5, -1.75) {$\mathit{rhs}(S)$} ;
\node[draw=none] (termb) at (0, -1.75) {$\mathit{term}(b)$} ;
\node[draw=none] (epsilon) at (0.8,-1) {$\mathit{term}(c)$} ;
\node[draw=none] (end) at (2.2,-1) {$\mathit{end}$} ;
\draw [-] (S3) edge[black] (rhsS) ;
\draw [-] (S3) edge[black] (termb) ;
\draw [-] (S1) edge[black] (S2) ;
\draw [-] (S1) edge[black] (Dot) ;
\draw [-] (Dot) edge[black] (A) ;
\draw [-] (Dot) edge[black] (S3) ;
\draw [-] (S2) edge[black] (epsilon) ;
\draw [-] (S2) edge[black] (end) ;
\end{tikzpicture}
\end{align*}
\end{minipage}
\subsection{Decidable Learning for Context-Free Grammars}
\label{sec:progr-eval-cont}
We want a program that evaluates a CFG syntax tree $G$ to verify
whether $w\in L(G)$ for an input word $w$. What kind of state is
needed? Intuition suggests the $\textsc{Facet}$ evaluator will be similar to
the one for regular expressions, and that we should use pairs of
positions. The main difference is the more flexible recursion afforded
by nonterminals. Consider reading the syntax tree above starting at
the root labeled by $\mathit{top}(S)$, with the production \dblqt{$a\,S\, b$}
in the left subtree and the rest of the productions on the right. To
verify that $w$ is generated by $S$, the program should move to the
right-hand sides of the two $S$-productions and check whether $w$ is
generated by \emph{either} of these. Concatenation in the right-hand
sides of productions can be handled just like for regular expressions
by guessing a split for $w$ and then verifying the guess in the
subtrees. But upon reading, say, $\mathit{rhs}(S)$ in the production
\dblqt{$a\,S\,b$}, the program should navigate \emph{up} to find the
$S$ productions and reenter their right-hand sides in order to parse
the current subword and verify its membership in $L(S)$. This is
accomplished by entering a state $\mathit{reset}(S)$ that causes the program
to navigate to the root of the syntax tree and then move downward in a
state $\mathit{find}(S)$ to check membership in all productions for $S$. It
turns out, however, that this more flexible recursion afforded by the
nonterminals introduces a subtlety when verifying
\emph{non-membership} of a word in the grammar. We will return to this
point after we consider the part of the program that verifies
membership.
\subsubsection{Verifying Membership}
\label{sec:verify-memb}
We write a program $\hbox{\linl{CFG}}$ that evaluates an input CFG syntax tree $G$
over a word $w$ and verifies that $w\in L(G)$. The states consist of
ordered pairs of word positions as well as some extra information
related to moving up and down on the syntax tree, along with duals:
\begin{align*}
\mathsf{Asp}(w) &\coloneqq \{x, \mathit{dual}(x) \,:\, x\in X(w)\}\\
X(w) &\coloneqq \mathsf{subs}(w)\,\cup\,\{ (s,\mathit{find}(A)), (s,\mathit{reset}(A)) \,:\, s\in\mathsf{subs}(w),\, A\in\mathit{NT} \}\\
\mathsf{subs}(w) &\coloneqq \{ (l,r) \,:\, 1\le l\le r \le |w|+1 \}
\end{align*}
\noindent
Signature functions are the same as those for regular expressions. The
clauses for $\hbox{\linl{CFG}}$ (duals omitted) are shown in \Cref{fig:cfg}.
\begin{figure}
\centering
\begin{minipage}[c]{0.95\linewidth}
\begin{lstlisting}
CFG($w$, $(l,r)$, $n$) = match $n.\mathit{l}$ with
$\cdot$ -> any (LAM$x$. CFG($w$, $(l,x)$, $n.\mathit{c}_1$) and CFG($w$, $(x,r)$, $n.\mathit{c}_2$)) $[l,r]$
$\mathit{rhs}(z)$ -> CFG($w$, $(l,r)$, $\mathit{reset}(z)$, $n.\mathit{up}$)
$\mathit{term}(x)$ -> $r = l+1$ and $w(l) = x$
CFG($w$, $(l,r)$, $\mathit{reset}(z)$, $n$) = match $n.\mathit{l}$ with
$\mathit{top}(z)$ -> CFG($w$, $(l,r)$, $n.\mathit{c}_1$) or CFG($w$, $(l,r)$, $\mathit{find}(z)$, $n.\mathit{c}_2$)
$\mathit{top}(x)$ -> CFG($w$, $(l,r)$, $\mathit{find}(z)$, $n.\mathit{c}_2$)
$\_$ -> CFG($w$, $(l,r)$, $\mathit{reset}(z)$, $n.\mathit{up}$)
CFG($w$, $(l,r)$, $\mathit{find}(z)$, $n$) = match $n.\mathit{l}$ with
$\mathit{lhs}(z)$ -> CFG($w$, $(l,r)$, $n.\mathit{c}_1$) or CFG($w$, $(l,r)$, $\mathit{find}(z)$, $n.\mathit{c}_2$)
$\mathit{lhs}(x)$ -> CFG($w$, $(l,r)$, $\mathit{find}(z)$, $n.\mathit{c}_2$)
\end{lstlisting}
\end{minipage}
\caption{$\hbox{\linl{CFG}}$ evaluates an input CFG syntax tree $G$ pointed to by
$n$ against word $w$ and verifies that $w\in L(G)$.}
\label{fig:cfg}
\end{figure}
Consider the operation of $\hbox{\linl{CFG}}$ over a word $w$ and a grammar $G$
that has a production like $A \rightarrow AA$. Notice that the program
could read this production arbitrarily many times in the same state by
always choosing to split $w$ into $\epsilon$ and $w$ when reading the
right-hand side $AA$, in which case it would verify recursively that
$w\in L(A)$ and $\epsilon\in L(A)$. Nevertheless, if indeed
$w\in L(G)$, then there is a finite proof for
$(w, \mathit{root}(G), \mathit{C}(\hbox{\linl{CFG}}), ((1, |w|+1),\mathit{reset}(S))) \Downarrow_p$
that can be obtained by following any correct derivation of $w$ from
the grammar. The case for $w\notin L(G)$ is more subtle.
\subsubsection{Verifying Non-Membership}
\label{sec:non-membership}
Consider how $\hbox{\linl{CFG}}$ should check $w\notin L(A)$ for a production like
$A \rightarrow AA$. Now there is no derivation to follow, and the
program might loop forever by entering the right-hand side and reading
$AA$, which will cause it to read all $A$ productions, which will
cause it to read $AA$, and so on, with no guarantee that subwords
become smaller in each recursive call. This termination issue can be
dealt with in a few ways, e.g., by adding states to keep track of the
depth of recursion, but we present a simpler solution that avoids
this.
It turns out that the duals for the $\hbox{\linl{CFG}}$ clauses in \Cref{fig:cfg}
are sufficient for our purposes, provided all input grammars are in
\emph{Greibach normal form} (GNF). Productions in GNF grammars have
the form $A\rightarrow a(\mathit{NT})^*$, with $a\in \Sigma$ and
$A\in \mathit{NT}$. Intuitively, this restriction helps because it makes
proofs of non-membership finite: subwords must become smaller each
time the program recursively checks a given nonterminal. To see this,
consider verifying $\mathit{aba}\notin L(S)$ for the GNF grammar:
\begin{align*}
S \,\,\longrightarrow \,\, a\,S \,\,\,\,|\,\,\,\, b
\end{align*}
The word $\mathit{aba}$ is clearly not generated by the second
production. To show it is not generated by the first, we show there is
no way to split $\mathit{aba}$ into $w_1w_2$ so that $w_1$ is
generated by $a$ and $w_2$ is generated by $S$. If $w_1\neq a$ then
the subproof for that split can end. Otherwise $w_1 = a$, and thus
$|w_2| < |w|$, and hence the subproof for $w_2\notin L(S)$ will be
finite by induction on word length. For any GNF grammar $G$ and word
$w\notin L(G)$, there is a finite proof for
$(w, \mathit{root}(G), \mathit{C}(\hbox{\linl{CFG}}), \mathit{dual}((1, |w| + 1),\mathit{reset}(S)))
\Downarrow_p$, and the argument does not in fact rely on the precise
form of GNF productions; it works for more general productions of the
form $A\rightarrow \alpha$ with
$\alpha\in (\Sigma\sqcup\mathit{NT})^*\Sigma(\Sigma\sqcup\mathit{NT})^*$, i.e., those
that involve at least one terminal.
We call a grammar
\emph{productive} if each of its productions meets this
requirement. As long as input grammars are productive, the dual
clauses for those from \Cref{fig:cfg} correctly verify
non-membership. Note that all context-free languages can be
represented by productive CFGs, and thus we assume that the input
meta-grammar $\mathcal{G}$ encodes only productive
CFGs\footnote{
Alternatively, we can use another
automaton to verify that input trees encode productive
grammars. The product of this automaton with the meta-grammar
automaton $\mathcal{A}_\mathcal{G}$ can itself be viewed as a meta-grammar
which enforces productivity.}.
\begin{theorem}
CFG separation for finite sets of words $\mathit{P}$, $\mathit{N}$, and grammar
$\mathcal{G}$ enforcing productivity, is decidable in time
$\mathcal{O}(2^{\mathit{poly}(k)}\cdot |\mathcal{G}|)$, where
$n = \max_{w\in P\cup N}|w|$, $m = |P| + |N|$, and
$k=mn^2\cdot|\mathit{NT}|$.
\end{theorem}
\begin{proof}[Proof Sketch.]
Fix a word $w$. For every $(i,j)\in \mathsf{subs}(w)$ and for every $G$ we
have that $w(i,j)\in L(G)$ if and only if
$(w,\mathit{root}(G),\mathit{C}(\hbox{\linl{CFG}}),
((i,j),\mathit{reset}(S)))\Downarrow_p$. Similarly, we have
$w(i,j)\notin L(G)$ if and only if
$(w,\mathit{root}(G),\mathit{C}(\hbox{\linl{CFG}}),
\mathit{dual}((i,j),\mathit{reset}(S)))\Downarrow_p$. The proof is by induction on $w$
and $G$. We have $|\mathsf{Asp}(w)| = \mathcal{O}(|w|^2\cdot |\mathit{NT}|)$ and the theorem
follows by \Cref{lemma:mainlemma} and \Cref{complexity-lemma}.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
We introduced a powerful recipe for proving that a symbolic language
has decidable learning. It involves writing a program, i.e. a semantic
evaluator, that operates over mathematical structures and expression
syntax trees for a given symbolic language. \emph{Finite-aspect
checkable} languages have the property that the semantics of any
expression $e$ can be expressed in terms of a finite amount of
semantic information (aspects) that depends on the structure over
which evaluation occurs but not on the size of $e$.
This
addresses a central question in expression learning with
\emph{version space algebra} (VSA) techniques, especially those
realized as tree automata: for which symbolic languages are these
learning algorithms possible? One prevailing answer in the
literature is that language operators should have \emph{finite
inverses} (e.g., see~\cite{flashmeta,flashfillplus}). When
operators have finite inverses a top-down tree automaton can be
effectively constructed. Our work suggests a weaker requirement,
namely, that it be sufficient to \emph{evaluate} arbitrarily large
expressions or programs on specific examples by traversing syntax
trees up and down using memory that is bounded by a function solely
of the \emph{size of the example}. For instance, in
\Cref{sec:rationals} we considered learning queries over the
rational numbers with order. The ``inverse'' of $x < y$ is an
infinite set of ordered pairs of rational numbers. Nevertheless, to
evaluate a formula for a specific example, all that is needed is a
bounded number of bits to encode the current ordering of variables.
We have also presented a set of interesting FAC languages that have
nontrivial semantic definitions using finitely-many aspects, and new
decidable learning results for each. We believe that many more can be
readily found using our meta-theorem.
Tree automata underlie many practical algorithms for synthesis based
on compactly representing large spaces of programs and
expressions~\cite{miltner-angelic,fta-data-completion-scripts,Wang2017,WangWangDilligOOPSLA18,ecta,handa-rinard,flashfill}. The
main idea is to efficiently represent classes of expressions which are
equivalent with respect to some examples. This idea originates with
version space algebra~\cite{tom-mitchell-vsa,mitchell97}, which
essentially amounts to a restricted form of tree automata working over
trees of bounded depth~\cite{vsa-ta}. Bringing the full tree automata
toolkit to bear on learning and synthesis, e.g. two-way power and
alternation, recognizes tree automata as a kind of basic building
block for a version space algebra \emph{over tree automata}. The
learning constructions from our work use semantic evaluators (compact,
effective descriptions of tree automata) as a basic building block and
combine them in specific ways to address specific learning
problems. Given this uniform technique of using tree automata as a
programming language, it would be interesting to build compact
representations and incremental algorithms for their construction and
emptiness that yield generic learning algorithms which scale. Bounding
the depth of expressions may make some of the constructions from this
paper feasible.
Tree automata have also been used in many other contexts in computer
science. In fixed-parameter tractable algorithms (e.g. Courcelle's
theorem~\cite{parameterized-complexity-flum-grohe}) and finite model
theory, they have been used to obtain generic algorithms for
$\mathsf{MSO}$-definable properties that work over tree decompositions of
graphs, while in temporal logic verification~\cite{vardi-ltl} they
have been used as acceptors of correct behaviors of systems. Their use
for learning in symbolic languages is an emerging new application of
tree automata. It would be interesting to study the \emph{theory} of
FAC languages in terms of expressiveness, language-theoretic
properties, and alternative characterizations.
\section{Introduction}
\label{sec:introduction}
We undertake a foundational theoretical exploration of the exact
learning problem for \emph{symbolic languages} with rich
semantics. Learning symbolic concepts from data has myriad
applications, e.g., in
verification~\cite{madhu-qda,ice,data-driven-chc,learning-nn-controllers,neider2020}
and, in particular, invariant synthesis for distributed
protocols~\cite{koenig-taming-search-space, aiken-fo-sep,
parno-21,distAI}, learning properties of
programs~\cite{inferring-rep-invariants, preconditions,
learning-contracts}, explaining executions of distributed
protocols~\cite{neider-ltl-learning}, and synthesizing programs from
examples or specifications
\cite{mil-muggleton,evans-greffen-noisy,flashfill,sketch,sygusJournal,handa-rinard,fta-data-completion-scripts,Wang2017,flashmeta}.
In this paper, \emph{symbolic languages} are construed as sets of
expressions together with formal syntax and semantics. Languages
include logics, e.g., first-order and modal logics, programming
languages (functional or imperative), query languages like
\textsc{SQL}, or even languages whose expressions describe other kinds
of languages, e.g., regular expressions or context-free grammars. In
the \emph{exact learning problem} for a language $\mathcal{L}$, the goal is to
find an expression $e\in\mathcal{L}$ that is consistent with a given finite
set of (positively and negatively) labeled examples, which in this
setting are \emph{finite structures}. The expression $e$ should be
satisfied by all positive structures and not satisfied by any negative
ones. The semantic notion, i.e. \emph{satisfaction}, varies by
problem.
\mypara{Decidable Learning} The languages we study are complex enough
that polynomial time learning is seldom possible (even learning the
simplest Boolean formula that separates a labeled set of Boolean
assignments to variables is not possible in polynomial
time~\cite{kearns-vazirani}). Furthermore, assuming the semantics of
expressions over structures is computable (true in all languages we
consider), there is always a trivial algorithm that \emph{enumerates}
expressions, evaluates them over the given structures, and finds a
consistent expression if one exists. Given that enumeration can take
exponential time in the size of the smallest consistent expression to
terminate (if it terminates at all), and given that any learning algorithm
may require exponential time, a meaningful theoretical analysis of
learning for such languages is hard. We hence consider \emph{decidable
learning}, where hypothesis classes are infinite and learning
algorithms must terminate with a consistent expression if one exists
or report there is none. Note that the trivial enumerator is not a decision
procedure when there are infinitely-many expressions, since it may never
be able to report that an instance has no solution.
\mypara{Learning under Syntactic Restrictions} We require learning
algorithms to both \emph{accommodate syntactic restrictions} over the
language, i.e., restrictions to the hypothesis space, and to be able
to find \emph{small} expressions. These stipulations mitigate
overfitting. For instance, in the case of some logics, the set of
positively-labeled structures can be precisely captured using a single
formula that can be computed efficiently given the structures. Such a
solution is not interesting and is unlikely to generalize. In such
cases, learning also becomes trivially decidable: if there is any
consistent expression, then the highly specific one will be
consistent. Accommodating syntactic restrictions and requiring small
solutions circumvent these issues. Note also that syntactic
restrictions are a \emph{feature}--- one can always allow expressions
to be learned from the entire language.
\mypara{Learning in Finite-Variable Logics} Our work draws inspiration
from a recent result that showed classical logics, e.g., first-order
logic ($\mathsf{FO}$), have decidable learning when restricted to use
finitely-many variables~\cite{popl22}. The technique underlying this
result uses \emph{tree automata}. For each positively-labeled
(respectively negatively-labeled) structure, one builds a tree
automaton that reads expression syntax trees and accepts those that
are true (respectively false) in that structure. Such automata are
akin to \emph{version space algebras}~\cite{mitchell97}, and taking
their product (and a product with an automaton capturing the syntactic
restriction) results in an automaton that accepts \emph{all}
consistent expressions. Existence of solutions can be decided with
automata emptiness algorithms, which can be used to synthesize small
consistent expressions if they exist.
\mypara{Contributions} We show that the tree automata-theoretic
technique for learning extends much beyond finite-variable logics. We
prove decidable learning for a number of languages that are not
finite-variable logics, and we give a meta-theorem that streamlines
the task of proving decidable learning for new languages. It reduces
proofs to the problem of writing an interpreter for the semantics in a
particular \emph{programming language}, which we call $\textsc{Facet}$. Using
this meta-theorem, we exhibit a rich set of examples that have
decidable learning:
\begin{itemize}
\item \emph{Modal logic} over Kripke structures
(\Cref{sec:motiv-exampl-modal})
\item \emph{Computation tree logic} over Kripke structures
(\Cref{sec:ctl})
\item \emph{Regular expressions} over finite words (\Cref{sec:regular-expressions})
\item \emph{Linear temporal logic} over periodic words (\Cref{sec:ltl})
\item \emph{Context-free grammars} over finite words (\Cref{sec:cfg})
\item \emph{First-order queries} over tuples of rational numbers with
order (\Cref{sec:rationals})
\item \emph{String transformations} from input-output examples
(\Cref{sec:discussion}), similar to~\cite{flashfill}
\end{itemize}
In each of these settings, the learning problem is decidable under
syntactic restrictions expressed by a (tree) grammar, which is given
as an input along with the sample structures.
We emphasize our contributions are theoretical. The programming
language itself is a notational tool that identifies and abstracts a
pattern we observe many times in this work, that of \emph{programming
with two-way tree automata} to prove decidable learning and derive
decision procedures. The learning algorithms we obtain have high
complexity; implementing a compiler from the programming language to
efficient decision procedures for learning will involve heuristics
that depend on specific problems. We note that learning problems for
several of the languages we study are of practical interest, with
previous work exploring algorithms for regular
expressions~\cite{jagadish-regexp-learning-08,fernau-regexp-learning-09},
linear temporal logic~\cite{neider-ltl-learning}, context-free
grammars~\cite{sakakibara-cfg-learning,langley-cfg-learning,vanlehn87-cfg-learning},
and string transformations in Microsoft Excel's Flash
Fill~\cite{flashfill}.
\smallskip \mypara{Meta-theorem} Each of the results above follows from a \emph{meta-theorem} which
says, intuitively, that languages are decidably learnable as long as
expressions can be evaluated over any structure using a particular
kind of program. More precisely, we require a \emph{semantic
evaluator} $P$ that, given a structure $M$ and expression $e$,
evaluates $e$ over $M$ by navigating up and down on the syntax tree
for $e$ using recursion. Furthermore, $P$ must rely only on a finite
set of \emph{semantic aspects of the structure} for memory during its
navigation over $e$. This set depends on $M$ but not on $e$. As long
as we can write such an evaluator, the meta-theorem guarantees
decidable learning for the language!
The notion of \emph{semantic aspects} is quite natural in the settings
we consider. In logic, the semantics of formulas $\varphi$ over a
structure $M$ is often presented by recursion on $\varphi$, with some
additional information. For instance, the satisfaction relation
$M, \gamma \models \varphi$ for $\mathsf{FO}$ uses an interpretation of
variables $\gamma$. For $\mathsf{FO}$ with a fixed set of $k$ variables, the
number of such $\gamma$ is \emph{bounded}; it depends on the structure
$M$ but not on the formula $\varphi$, and hence meets the
finite-aspect requirement we identify in this paper. Consequently, the
result on decidable learning for finite-variable $\mathsf{FO}$ is an immediate
corollary of our meta-theorem. In fact, all such results for
finite-variable logics~\cite{popl22} are obtained as
corollaries\footnote{The tree automata underlying each result for
finite-variable logics can be easily translated to semantic
evaluators of the kind we require. See \Cref{sec:fo} for such a
semantic evaluator in the case of $\mathsf{FO}$.}.
Semantic aspects are sometimes obvious from standard semantic
definitions and sometimes less so. In \emph{modal logic}, the standard
semantics is defined recursively in terms of the semantics for
subformulas at different \emph{nodes} in a Kripke structure, and
indeed, the aspects in this case are simply the nodes. For
\emph{computation tree logic} (CTL), standard semantics in the
literature would use the nodes as for modal logic but would go beyond
recursion in the structure of expressions and use recursive
definitions to give meaning to formulas (least and greatest
fixpoints~\cite{mcmillan-thesis}). In this case, the aspects include
the nodes of the structure as well as a \emph{counter} that encodes a
recursion budget for \emph{until} and \emph{globally} formulas to be
satisfied, with the counter value bounded by the number of nodes. The
standard semantics for a \emph{regular expression} $e$ defines the
language $L(e)$ recursively in the structure of $e$. For a given word
$w$, however, we specialize the semantics of membership in $L(e)$ to
$w$, using aspects that correspond to subwords of $w$ (e.g., a pair of
indices $(i,j)$ marking the left and right endpoints of a subword,
with $1\le i \leq j \le |w|$). This membership semantics involves a
finite number of aspects which is quadratic in the size of $w$ but
independent of $e$. Semantics of \emph{linear temporal logic} (LTL)
formulas over periodic words $uv^\omega$ can also be defined
(non-standardly) using a set of aspects corresponding to each position
of $u$ and $v$, again finite. The semantics for membership of a word
$w$ in the yield of a \emph{context-free grammar} (restricted to a
finite set of nonterminals) can again be written with aspects
corresponding to subwords, as for regular expressions. But in this
case it also requires navigating the tree representing the grammar
\emph{up and down} many times in order to parse $w$, which requires
keeping some extra memory. Standard semantics for \emph{first-order
queries} over rational numbers with order would involve an
interpretation of variables as rational numbers, but this set is of
course infinite. It turns out that a finite set of aspects encoding
the \emph{ordering} of the variables is sufficient to define semantics
in this setting.
The meta-theorem is a powerful tool for establishing decidable
learning. We emphasize that its proof is technically quite simple---
programs that navigate trees using recursion can be translated to
\emph{two-way alternating tree automata}, which can be converted to
one-way tree automata to obtain decision procedures for learning. Our
technical contribution lies more in the formalization of the technique
in terms of a programming language for semantic evaluators, and
realizations in different settings with (nonstandard) semantic
definitions involving a finite set of aspects.
We use the meta-theorem for the well-known application of learning
string transformations from examples in the context of spreadsheet
programs. The seminal work of Gulwani~\cite{flashfill} established
this problem as one of the first important applications of program
synthesis from examples. We consider the language for string programs
used in that work and argue that even a significant extension of that
language admits decidable learning. As far as we know, decidable
learning for this well-studied problem was not known earlier.
\mypara{Organization} In \Cref{sec:motiv-exampl-modal} we explore
learning in modal logic to motivate the generic learning algorithm
based on tree automata and the notion of semantic aspects. We discuss
a semantic evaluator for modal formulas and abstract the main pattern
as a program. \Cref{sec:notation-background} gives some background on
tree automata. In \Cref{sec:language} we define the class of
finite-aspect checkable languages (languages that admit decidable
learning), formalize a programming language for writing semantic
evaluators, and give the meta-theorem connecting semantic evaluators
to tree
automata. \Cref{sec:regular-expressions,sec:ltl,sec:cfg,sec:rationals}
establish decidable learning for regular expressions, linear temporal
logic, context-free grammars, and first-order queries over rationals
with order. In \Cref{sec:discussion} we discuss decidable learning for
string transformations. We review related work in
\Cref{sec:related-work} and conclude in \Cref{sec:conclusion}.
\section{Decidable Learning via Programming Semantic Evaluators}
\label{sec:language}
In this section we define a rich class of languages for which
decidable learning is possible. We then develop a meta-theorem which
enables proofs of decidable learning by writing semantic evaluators in
a programming language, which we call $\textsc{Facet}$\footnote{$\textsc{Facet}$ stands
for \underline{f}inite \underline{a}spect \underline{c}heckers of
\underline{e}xpression \underline{t}rees.}. The decision procedures
involve an effective translation of $\textsc{Facet}$ programs into two-way
alternating tree automata that read syntax trees. After defining the
class (\Cref{sec:decid-learn-via}), we discuss the syntax and
semantics of $\textsc{Facet}$ (\Cref{sec:syntax-semantics}), followed by the
meta-theorem (\Cref{sec:main-lemma}), which says that all languages
whose semantics can be written in $\textsc{Facet}$ are decidably learnable. We
then apply this theorem to show decidable learning for modal logic
(\Cref{sec:modal-expr-separ}) and computation tree logic
(\Cref{sec:ctl}).
\subsection{A Class of Languages with Decidable Learning}
\label{sec:decid-learn-via}
There is a surprisingly rich set of languages that have decidable
learning via essentially one generic decision procedure, which we
describe here. For our purposes, a \emph{language} consists of a set
of expressions $\mathcal{L}$\footnote{We often abuse notation and do not
distinguish between expressions $e\in\mathcal{L}$ and the syntax trees for
$e$.}, a class of finitely-representable structures $\mathcal{M}$ over the
same signature, and a semantic function that interprets a structure
and an expression in some domain $D$, written with a turnstile as
$(\_\models\_) : \mathcal{M}\times\mathcal{L}\rightarrow D$. Sometimes we just use
$\mathcal{L}$ to refer to such a symbolic language.
The decision procedure relies on building a tree automaton that
accepts the set of all (syntax trees for) expressions $e\in\mathcal{L}$ that
are consistent with a given example. In a supervised learning
scenario, with $D=\mathbb{B}\coloneqq \{\mathsf{True},\mathsf{False}\}$ and examples modeled
as pairs $(M,b)\in \mathcal{M}\times \mathbb{B}$, one builds a tree automaton
$\mathcal{A}(M,b)$ such that
$L(\mathcal{A}(M,b))=\{e\in\mathcal{L} \,:\, M \models e = b\}$. For a finite set of
examples $E=(M_i,b_i)_i$, we take the product
$\mathcal{A}(\mathit{E}) = \bigwedge_i \mathcal{A}(M_i,b_i)$. Given an automaton
$\mathcal{A}(\mathcal{G})$ that accepts syntax trees conforming to a tree
grammar $\mathcal{G}$, we construct the product
$\mathcal{A}(E) \wedge \mathcal{A}(\mathcal{G})$ and run emptiness algorithms to
synthesize a syntax tree in the language or declare none exist.
The crucial requirement above is to be able to build $\mathcal{A}(M,b)$ for
any $M$, which is an automaton that acts as an evaluator for the
language over $M$. This is possible when the semantics of a language
is definable in terms of a \emph{finite} amount of auxiliary
information, which may depend (sometimes wildly) on the particular
structure $M$ but not on the expression size. We refer to such
auxiliary semantic information as \emph{semantic aspects}, or just
\emph{aspects}, and we call languages for which evaluators can be
implemented using tree automata \emph{finite-aspect
checkable}\footnote{\emph{Checkable} refers to the \emph{model
  checking} problem for a logic, i.e., checking whether
  $M\models\varphi$ for a structure $M$ and formula $\varphi$.}.
\begin{definition}[Finite-Aspect Checkable Language]
A language $(\mathcal{M},\mathcal{L},\models)$ is \emph{finite-aspect checkable}
(FAC) if for every $(M,d)\in\mathcal{M}\times D$ there is a tree automaton
$\mathcal{A}(M,d)$ over syntax trees for $\mathcal{L}$ such that
$L(\mathcal{A}(M,d)) = \{e\in \mathcal{L} \,:\, M\models e = d\}$, and the mapping
$(M,d) \mapsto \mathcal{A}(M,d)$ is computable.
\end{definition}
\noindent Note that \emph{all} FAC languages have decidable learning
by the generic algorithm described above.
FAC languages only require the automata to be computable given
$(M,d)\in\mathcal{M}\times D$, but all examples we have considered in fact
have small \emph{witnesses} for being FAC: the tree automata can be
described compactly by a \emph{program} that evaluates an input syntax
tree against an input structure. We next describe the programming
language $\textsc{Facet}$, which abstracts the common features of such
programs. Our meta-theorem relies on a simple procedure that takes a
program $P\in\textsc{Facet}$, a structure $M\in\mathcal{M}$, and a domain element
$d\in D$, and computes the tree automaton $\mathcal{A}(M,d)$.
\subsection{Syntax and Semantics of $\textsc{Facet}$}
\label{sec:syntax-semantics}
We present the syntax and semantics of $\textsc{Facet}$ by way of example. We
omit many details that are not important for understanding later
sections and results; details, including formal semantics of $\textsc{Facet}$,
can be found in \Cref{sec:app-language}.
Programs in $\textsc{Facet}$ are parameterized by a symbolic language
$(\mathcal{L},\mathcal{M},\models)$. A program $P$ takes as input a \emph{pointer}
into the syntax tree for an expression $e\in\mathcal{L}$ as well as a
structure $M\in\mathcal{M}$. The program navigates up and down on $e$ using a
set of pointers to move from children to parent and parent to children
in order to evaluate the semantics of $e$ over the structure $M$ and
verify that $M\models e = d$ for some $d\in D$. To write a $\textsc{Facet}$
program we first specify two things: (1) the symbolic language $\mathcal{L}$
over which the program is to operate and (2) the program's auxiliary
state, which corresponds to the semantic aspects of $\mathcal{L}$. Part (1)
involves specifying (1a) the syntax trees for $\mathcal{L}$ in terms of a
ranked alphabet $\Delta$ and (1b) the signature for structures $\mathcal{M}$,
which is a set of functions used to access the data for any given
$M\in\mathcal{M}$. The set of auxiliary states of part (2), which we denote by
$\mathsf{Asp}$, will typically be infinite. For a fixed structure $M\in\mathcal{M}$,
however, programs will only use a finite subset $\mathsf{Asp}(M)\subset\mathsf{Asp}$,
provided the symbolic language $\mathcal{L}$ is FAC, and so we will only need
to specify $\mathsf{Asp}(M)$ for an arbitrary fixed $M$.
For example, consider the program \linl{Modal} for modal logic in
\Cref{fig:modal-clause1}. Part (1): the symbolic language is modal
logic over finite pointed Kripke structures $G=(W,s,E,P)$ with
propositions $\Sigma$. We fix any straightforward representation of
formulas as syntax trees, and we use two Kripke structure-specific
functions for interpreting modal logic. The first computes the
neighborhood $\{y\in G\,:\, E(w,y)\}$ of a given node $w\in W$, and
the second computes whether a given proposition $x\in\Sigma$ is true
at a given node $w\in W$, i.e. whether $x\in P(w)$ holds. Part (2):
the states $\mathsf{Asp}(M)$ are the nodes of the Kripke structure, i.e. the
set $W$.
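As a concrete rendering of part (1b), the two signature functions for
pointed Kripke structures might be implemented as follows (a Python
sketch; the class name and the encoding of $E$ and $P$ are ours).
\begin{lstlisting}[language=Python]
# Sketch of the two signature functions used by Modal. Encoding is
# assumed: E is a set of ordered pairs, P maps each node to the set
# of propositions true there.
from dataclasses import dataclass

@dataclass
class Kripke:
    W: set        # nodes (worlds)
    s: object     # distinguished starting node
    E: set        # edges, as pairs (w, y)
    P: dict       # node -> set of true propositions

    def neighborhood(self, w):
        """State function: {y in G : E(w, y)}."""
        return {y for (x, y) in self.E if x == w}

    def holds(self, x, w):
        """Boolean function: whether proposition x is in P(w)."""
        return x in self.P[w]
\end{lstlisting}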
\subsubsection{Syntax}
\label{sec:syntax-1}
The formal syntax for $\textsc{Facet}$ programs is shown in \Cref{fig:lang}. A
program $P\in\textsc{Facet}$ consists of a set of \emph{clauses}, which we
denote by $\mathit{C}(P)$, or just $\mathit{C}$, each of which has the
form
\begin{center}
\linl{$P$($M$,\ $\sigma(z)$,\ $n$) = match\ $n.\mathit{l}$\ with\ $\ldots$}
\end{center}
The parameter $M$ is a mathematical structure (e.g. a Kripke
structure) and the parameter $n$ is a pointer into a syntax tree
(e.g. for a modal logic formula). The parameter $\sigma(z)$ is a
\emph{pattern} (e.g. a variable $w$ matching any node of a Kripke
structure). We treat both the ranked alphabet $\Delta$ and auxiliary
states $\mathsf{Asp}(M)$ as algebraic data types and allow $\textsc{Facet}$ programs to
pattern match over these using expressions from two sets of patterns,
alphabet patterns $\mathit{pat}(\Delta)$ and state patterns $\mathit{pat}(\mathsf{Asp})$. For
example, for \linl{Modal} in \Cref{fig:modal-clause1}, the (trivial)
state pattern $w\in\mathit{pat}(\mathsf{Asp})$ will match any node of the input Kripke
structure, and the (trivial) alphabet pattern $x\in\mathit{pat}(\Delta)$ will
match any of the modal logic propositions in $\Sigma$.
Each clause in $C$ has a single \linl{match} statement, consisting of
a list of \emph{cases}, each of the form \dblqt{\linl{$\alpha_i(z)$ ->
$\,\,e$}}, with an alphabet pattern on the left and an expression
on the right. Expressions $e$ represent Boolean functions, possibly
involving results of recursive calls \dblqt{\linl{P($M$, $\sigma(z)$,
$n.\mathit{dir}$)}} that start in new states $\sigma(z)$ at nearby nodes
$n.\mathit{dir}$ on the syntax tree. Compound expressions are built using
\linl{and}, \linl{or}, \linl{all}, \linl{any}, and \linl{if}. For
example, in \Cref{fig:modal-clause1}, the first two cases involve
recursive calls at the two children ($n.\mathit{c}_1$ and $n.\mathit{c}_2$) of
the current node, in the same state $w$, and return, respectively, the
conjunction and disjunction of the results.
The sets $S$ and $B$ in \Cref{fig:lang} categorize the signature
functions for structures $\mathcal{M}$ as one of two kinds.
\emph{\underline{S}tate} functions $g\in S$ are used to compute new
states for the program, e.g. computing the neighborhood of a given
node in a Kripke structure. These functions are used in \linl{any} and
\linl{all} expressions to bind parts of the structure $M$ to
variables. \emph{\underline{B}oolean} functions\footnote{We assume the
set of Boolean functions $B$ is closed under complement.} $f\in B$
are used in \linl{if} expressions as well as for base cases in
recursion, e.g. computing membership in the set of true propositions
for each node of a Kripke structure. Note that for all the symbolic
languages studied in this paper, the signature functions are evidently
computable.
\begin{figure}
\begin{tabularx}{0.95\textwidth}{l c l}
$\mathit{Prog}$ & \ensuremath{::=~}\xspace{} & $\left\{\, \mathit{Clause}\,\ldots\, \mathit{Clause} \,\right\}$ \\
$\mathit{Clause}$ & \ensuremath{::=~}\xspace{} & \linl{P($M$,\ $\sigma(z)$,\ $n$) = match\ $n.\mathit{l}$ with\ $\mathit{Cases}$} \\
$\mathit{Cases}$ & \ensuremath{::=~}\xspace{} & \linl{$\alpha_1(z)$ ->\ $e_1$\ $\ldots$\ $\alpha_n(z)$ ->\ $e_n$}
\end{tabularx}
\begin{tabularx}{0.95\textwidth}{@{}r@{\ }c@{\ }c@{\ }l@{\hspace*{2em}}c@{\ \ \ }l@{\ \ \ }@{\hspace*{2em}}c@{\ \ \ }l}
\\
$e$ & &\ensuremath{::=~}\xspace{} & \linl{True} & | & \linl{False} & | & {\lstinline!$f$($z$)!} \\
& & | & {\lstinline|$e_1$ and $\ e_2$|} & | & {\lstinline|$e_1$ or $\ e_2$|} & | & {\linl{P($M$,\ $\sigma(z)$,\ $n.\mathit{dir}$)}} \\
& & | & {\lstinline|all ($\lambda x$.\ $e$) $\
g$($z$)|} & | & {\lstinline|any ($\lambda x$.\ $e$) $\
g$($z$)|} & | & {\lstinline|if $\ f(z)$ then $\ e_1$ else $\ e_2$|}
\\
\end{tabularx}
\begin{tabularx}{0.97\textwidth}{c c c c c c}
\\
$\alpha(z) \in \mathit{pat}(\Delta)$
&
&
$\sigma(z) \in \mathit{pat}(\mathsf{Asp})$
&
&
$f \in B, \,\, g \in S\quad$
& $\mathit{dir} \in \{\mathit{\mathit{up},\mathit{stay},\mathit{c}_1,...,\mathit{c}_k}\}$ \\
\end{tabularx}
\caption{Syntax for $\textsc{Facet}$ programs. We use $x$ to denote a single
variable and $z$ to denote a vector of variables. }
\label{fig:lang}
\end{figure}
\subsubsection{Semantics}
\label{sec:semantics-1}
Given a structure $M$, state $\sigma\in\mathsf{Asp}(M)$, and pointer $n$, a
program operates by first determining the clause whose state pattern
$\sigma(z)$ matches $\sigma$. We consider only \emph{well-formed}
programs in which every state is matched by the state pattern of
precisely one clause (see \Cref{sec:well-formed-programs} for
details). After the clause is determined, the program matches the
symbol $n.\mathit{l}$ labeling the current node of the syntax tree against
the alphabet patterns. It then evaluates the expression on the right
side of the first matching case and returns a Boolean result, either
success or failure.
Consider the operation of the program \linl{Modal} from
\Cref{fig:modal-clause1} over the tree-shaped positive Kripke
structure from \Cref{modal-picture} and the formula
$\varphi =\ensuremath{\square}(\ensuremath{\lozenge}(a\vee v))$. Suppose $n$ is pointing at the
root of $\varphi$, with label $\ensuremath{\square}$, and the current state is
$s\in W$ (the top node of the Kripke structure). The state $s$ matches
the state pattern of the clause depicted in \Cref{fig:modal-clause1},
and the variable $w$ is bound to $s$. Next, the symbol $n.\mathit{l}=\ensuremath{\square}$
matches the fourth case of the \linl{match} statement, and the program
evaluates the \linl{all} expression. To do this, it uses a state
function to compute the neighborhood
$N(s) = \{y \in G \,:\, E(s,y)\}$. It then returns the conjunction of
results for recursive calls obtained by evaluating \linl{Modal($G$,
$z$, $n.\mathit{c}_1$)} with $z$ bound to the states of $N(s)$. This
involves two recursive calls with pointer $n.\mathit{c}_1$ corresponding
to the formula $\ensuremath{\lozenge}(a\vee v)$ in states corresponding to the two
neighbors of $s$. Each of these recursive calls involves evaluating
the \linl{any} expression for the $\ensuremath{\lozenge}$ case, which in turn
involves evaluating the expression for the $\vee$ case. Finally, when
the program encounters either of the propositions $a,v\in\Sigma$ in
the syntax tree, the last case will match, and the program uses a
function to compute $x\in P(w)$, i.e. whether the proposition bound to
$x$ is true at the current state.
Formal semantics for $\textsc{Facet}$ can be found in
\Cref{sec:app-language}.
The semantics defines a predicate $\Downarrow_p$ on program
configurations of the form $(M,n,\mathit{C}(P),\sigma)$. The assertion
$(M, n, \mathit{C}(P),\sigma) \Downarrow_p$ has the following meaning:
the program consisting of clauses $\mathit{C}(P)$ terminates with
success on the syntax tree pointed to by $n$, when started from state
$\sigma$ over the structure $M$.
\subsection{A Meta-Theorem for Decidable Learning}
\label{sec:main-lemma}
We now connect $\textsc{Facet}$ programs with automata to give our meta-theorem
for decidable learning. Its proof relies on the following lemma, which
states that semantic evaluators written in $\textsc{Facet}$ that use finite
auxiliary states for any structure can be translated to two-way
alternating tree automata.
\begin{lemma}
Let\, $(\mathcal{L},\mathcal{M},\models)$\, be a symbolic language and\, $\Delta$\,
an alphabet for \,$\mathcal{L}$. Let\, $P$\, be a well-formed\, $\textsc{Facet}$\,
program over\, $(\mathcal{L},\mathcal{M},\models)$\, with computable signature
functions and with\, $\mathsf{Asp}(M)$\, finite for every \,$M\in\mathcal{M}$. Then
for every\, $M\in\mathcal{M}$\, and\, $\sigma\in\mathsf{Asp}(M)$, we can compute a
two-way alternating tree automaton
\,$\mathcal{A}(P,M,\sigma) = (Q, \Delta, q_i, \delta, F)$\, such that for
every \,$\,e\in\mathcal{L}$, we have \,$e\in L(\mathcal{A}(P,M,\sigma))$\, if and
only if \,$(M, \mathit{root}(e), \mathit{C}(P),\sigma)
\Downarrow_p$.
\label{lemma:mainlemma-lemma}
\end{lemma}
\begin{proof}[Proof Sketch]
The automaton states are $Q = \mathsf{Asp}(M) \sqcup\{q_\top,q_\bot\}$, with
$q_\top$ and $q_\bot$ being absorbing states for accepting and
rejecting upon termination of the program. The initial state is
$q_i = \sigma$, and the transitions $\delta$ are obtained from a
straightforward translation of expressions into Boolean formulas,
detailed in \Cref{sec:translate}. The acceptance condition is
reachability with $F = \{q_\top\}$. The correspondence between the
language of the automaton and semantic proofs for $P$ is
straightforward and essentially follows by construction.
\end{proof}
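To illustrate the translation, the following Python sketch compiles a
$\textsc{Facet}$ expression into a positive Boolean transition formula
over $(\text{state},\text{direction})$ atoms. The AST encoding is an
assumption we make for the sketch: since $M$ is fixed, Boolean and
state functions are taken as already evaluated over $M$.
\begin{lstlisting}[language=Python]
# Sketch of the expression-to-transition-formula translation from the
# proof. "all"/"any" nodes carry the finite set of values of their
# state function together with a body builder, and "if" conditions
# are already Booleans.
Q_TOP, Q_BOT = "q_top", "q_bot"  # absorbing accept/reject states

def to_formula(e):
    kind = e[0]
    if kind == "lit":                   # True / False terminate the run
        return ("atom", (Q_TOP if e[1] else Q_BOT, "stay"))
    if kind == "call":                  # P(M, sigma, n.dir)
        _, sigma, direction = e
        return ("atom", (sigma, direction))
    if kind in ("and", "or"):
        return (kind, to_formula(e[1]), to_formula(e[2]))
    if kind == "all":                   # all (lam x. body) g(z)
        _, body_of, values = e
        return ("conj", [to_formula(body_of(v)) for v in values])
    if kind == "any":                   # any (lam x. body) g(z)
        _, body_of, values = e
        return ("disj", [to_formula(body_of(v)) for v in values])
    if kind == "if":                    # condition f(z), pre-evaluated
        _, cond, e1, e2 = e
        return to_formula(e1 if cond else e2)
    raise ValueError(f"unknown expression kind: {kind}")
\end{lstlisting}
For instance, the $\ensuremath{\square}$ case of $\hbox{\linl{Modal}}$
compiles to a conjunction of atoms, one per neighbor state, each
sending the automaton down to the first child.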
\begin{theorem}[Meta-theorem]
Let $(\mathcal{L},\mathcal{M},\models)$ be a language. If there is a semantic
evaluator, i.e., a well-formed program $P\in \textsc{Facet}$, such that for
all $M\in\mathcal{M}$, $e\in \mathcal{L}$, and $d\in D$ there is a
$\sigma_d\in\mathsf{Asp}(M)$ for which
$(M, \mathit{root}(e), \mathit{C}(P), \sigma_d)\Downarrow_p$ if and only if
$M\models e = d$, then $(\mathcal{L},\mathcal{M},\models)$ has decidable learning.
\label{lemma:mainlemma}
\end{theorem}
\begin{proof}
Let $P$ be a semantic evaluator for a language $(\mathcal{L},\mathcal{M},\models)$,
let $(M_i,d_i)_i$ be a finite set of examples from $\mathcal{M}\times D$,
and let $\mathcal{G}$ be a (tree) grammar for $\mathcal{L}$. Use
\Cref{lemma:mainlemma-lemma} to build the tree automata
$\mathcal{A}(P,M_i,\sigma_{d_i})$. Construct the product of these automata
and convert the result to a nondeterministic tree automaton
$\mathcal{A}$. Take the product of $\mathcal{A}$ with a nondeterministic tree
automaton for $\mathcal{G}$, and use an emptiness algorithm to
synthesize an expression or decide there is none.
\end{proof}
\paragraph{Remark on Complexity.} Notice that the states of the
resulting automaton are precisely what we have called semantic
aspects. The complexity of decision procedures for learning can in
fact be read from the number of aspects. Products of two-way tree
automata obtained from \Cref{lemma:mainlemma-lemma} are converted to
one-way nondeterministic tree automata with an exponential increase in
states using known algorithms~\cite{two-way-vardi,cachat-two-way},
leading to decision procedures with time complexity exponential in the
number of aspects as well as the number of examples. Provided that
signature functions are computable in time exponential in the size of
structures (satisfied by all languages in this paper), we can use the
following:
\begin{corollary}
Decision procedures for learning obtained via \Cref{lemma:mainlemma}
have time complexity exponential in the number of examples and the
number of aspects, and linear in the size of the grammar.
\label{complexity-lemma}
\end{corollary}
$\textsc{Facet}$ enables compact, familiar descriptions of two-way tree
automata, and thereby enables decision procedures for learning to be
developed using intuition from programming. If we can write a $\textsc{Facet}$
program to interpret a language $\mathcal{L}$ using a fixed amount of
auxiliary state for any given structure, then the language is FAC and
decision procedures for learning and synthesis follow from results in
automata theory.
\subsection{Decidable Learning for Modal Logic and Dual Clauses}
\label{sec:modal-expr-separ}
We finish this section with a decidable learning theorem for modal
logic by completing the $\hbox{\linl{Modal}}$ program from
\Cref{sec:motiv-exampl-modal} and explaining \emph{dual} states and
programs, which are syntactic sugar useful for handling negation and
negative examples.
The grammar for modal logic formulas over propositions $\Sigma$ from
earlier has a straightforward ranked alphabet $\Delta$, with members
of $\Sigma$ having arity $0$. The class $\mathcal{M}$ consists of finite
pointed Kripke structures $G=(W,s,E,P)$. The states for a given
$G=(W,s,E,P)$ are
\begin{align*}
\mathsf{Asp}(G)=\{w,\mathit{dual}(w) \,:\, w\in W\},
\end{align*}
where $\mathit{dual}$ is a constructor for states related to negation. There are
two signature functions: a state function for computing neighborhoods
$\{y\in G \,:\, E(w,y)\}$ and a Boolean function for computing
membership of propositions in $P(w)$, for a given $w\in W$. Along
with the clause in \Cref{fig:modal-clause1}, $\hbox{\linl{Modal}}$ includes the
\emph{dual} of that clause, shown in \Cref{fig:modal-clause2}, which
operates on states of the form $\mathit{dual}(x)$. For any clause $c$ there is a
simple translation to produce its dual clause $\mathit{dual}(c)$ as follows:
\vspace{0.1in}
\noindent \begin{minipage}[c]{\linewidth}
\begin{tabularx}{\linewidth}{r l X l}
$c\,\,\, =$ &$\text{\lstinline!$P$($M$,\ $\sigma(z)$,\ $n$)!}$ &
$\text{\lstinline!= match\ $n.\mathit{l}$\ with\ $\alpha_1$ ->\ $e_1$!}$ & $\ \ ...\ \ \text{\lstinline!$\alpha_n$ ->\ $e_n$!}$ \\
$\mathit{dual}(c)\, =$ &$\text{\lstinline!$P$($M$,\ $\mathit{dual}(\sigma(z))$,\ $n$)!}$ & $\text{\lstinline!= match\ $n.\mathit{l}$\ with\ $\alpha_1$ ->\ $\mathit{dual}(e_1)$!}$ & $\ \ ...\ \ \text{\lstinline!$\alpha_n$ ->\ $\mathit{dual}(e_n)$!}$
\end{tabularx}
\end{minipage}
\vspace{0.1in}
The expressions $\mathit{dual}(e_i)$ are obtained by recursively swapping
\linl{True} with \linl{False}, \linl{and} with \linl{or}, and
\linl{all} with \linl{any}, and the dual and non-dual clauses invoke
each other whenever a negation operator is encountered in the syntax
tree. Dual clauses are useful even if the language $\mathcal{L}$ has no
negation operator, given we may need to check that semantic
relationships \emph{do not} hold for negative structures. Note that
the concept of dual clause is independent of the symbolic language
$\mathcal{L}$, and it is entirely mechanical to obtain dual clauses in each
problem we consider. A precise description of $\mathit{dual}(e)$ can be found in
\Cref{sec:dual-programs}. From now on we omit the dual clauses from
our presentation.
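The dualization is mechanical enough to fit in a few lines, as in the
following Python sketch (the AST encoding is our assumption).
\begin{lstlisting}[language=Python]
# Sketch of dual(e). Boolean functions are replaced by their
# complements, which exist because B is closed under complement;
# recursive calls restart in the dual of the target state.
SWAP = {"and": "or", "or": "and", "all": "any", "any": "all"}

def dual(e):
    kind = e[0]
    if kind == "lit":                    # True <-> False
        return ("lit", not e[1])
    if kind == "boolfn":                 # f(z) -> complement of f(z)
        return ("boolfn-complement",) + e[1:]
    if kind == "call":                   # P(M, sigma, n.dir)
        _, sigma, direction = e
        return ("call", ("dual", sigma), direction)
    if kind in ("and", "or"):            # and <-> or
        return (SWAP[kind], dual(e[1]), dual(e[2]))
    if kind in ("all", "any"):           # all <-> any, dualize the body
        _, body_of, values = e
        return (SWAP[kind], lambda v: dual(body_of(v)), values)
    if kind == "if":                     # the condition is unchanged
        _, cond, e1, e2 = e
        return ("if", cond, dual(e1), dual(e2))
    raise ValueError(f"unknown expression kind: {kind}")
\end{lstlisting}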
\begin{figure}
\centering
\begin{minipage}[c]{0.9\linewidth}
\begin{lstlisting}
Modal($G$, $\mathit{dual}(w)$, $n$) = match $n.\mathit{l}$ with
$\wedge$ -> Modal($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$) or Modal($G$, $\mathit{dual}(w)$, $n.\mathit{c}_2$)
$\vee$ -> Modal($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$) and Modal($G$, $\mathit{dual}(w)$, $n.\mathit{c}_2$)
$\neg$ -> Modal($G$, $w$, $n.\mathit{c}_1$)
$\ensuremath{\square}$ -> any (LAM$z$. Modal($G$, $\mathit{dual}(z)$, $n.\mathit{c}_1$)) $\{y\in G \,:\, E(w,y)\}$
$\ensuremath{\lozenge}$ -> all (LAM$z$. Modal($G$, $\mathit{dual}(z)$, $n.\mathit{c}_1$)) $\{y\in G \,:\, E(w,y)\}$
$x$ -> $x\notin P(w)$
\end{lstlisting}
\end{minipage}
\caption{Dual clause for $\hbox{\linl{Modal}}$, which evaluates formula $\varphi$
pointed to by $n$ against $G$ and verifies $G\not\models\varphi$.}
\label{fig:modal-clause2}
\end{figure}
\begin{theorem}
Modal logic separation for sets of Kripke structures $\mathit{P}$ and
$\mathit{N}$ with grammar $\mathcal{G}$ is decidable in time
$\mathcal{O}(2^{\mathit{poly}(mn)}\cdot|\mathcal{G}|)$, where
$n = \max_{G\in \mathit{P}\cup \mathit{N}} |G|$ and $m = |\mathit{P}| + |\mathit{N}|$.
\end{theorem}
\begin{proof}[Proof Sketch.]
For all Kripke structures $G=(W,s,E,P)$, $w\in W$, and formulas
$\varphi$, we have that $G,w\models\varphi$ iff
$(G, \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{Modal}}), w) \Downarrow_p$ and
$G,w\not\models\varphi$ iff
$(G, \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{Modal}}), \mathit{dual}(w)) \Downarrow_p$. The
proof is by induction on $\varphi$. The rest follows by
\Cref{lemma:mainlemma} and \Cref{complexity-lemma}, with
$D=\{\mathsf{True},\mathsf{False}\}$, $\sigma_\mathsf{True} = s$ and $\sigma_\mathsf{False}=\mathit{dual}(s)$,
noting that $|\mathsf{Asp}(G)|=\mathcal{O}(|W|)$.
\end{proof}
\emph{Computation tree logic.} Do other modal logics have decidable
learning? For computation tree logic (CTL) the answer is affirmative,
and we can program a $\textsc{Facet}$ evaluator whose semantic aspects again
involve the nodes of the Kripke structure. CTL involves, however,
quantification over \emph{paths} in the Kripke structure, with
semantics given by recursive definitions like
\begin{align*}
\mathsf{E} (\varphi\mathsf{U}\varphi') \,\equiv\, \varphi'\vee (\varphi\wedge
\mathsf{E}(\mathsf{X} (\mathsf{E}(\varphi\mathsf{U}\varphi'))))
\end{align*}
for the existential path quantifier $\mathsf{E}$. This introduces a
subtlety related to negation, because the evaluator should avoid
infinite recursion caused by interpreting $\mathsf{E}$ as the right-hand side
of the equivalence above. We include in \Cref{sec:ctl} a program for
CTL that uses a \emph{bounded counter} to prevent such infinite
recursion.
\smallskip In the remainder of the paper we derive new decision
procedures for several learning problems by writing $\textsc{Facet}$
programs. We avoid details about the $\textsc{Facet}$ language and focus
instead on the logic of the programs as well as the semantic aspects
needed to accurately evaluate expressions.
\section{Linear Temporal Logic}
\label{sec:ltl}
In this section we consider synthesizing linear temporal logic (LTL)
formulas that separate infinite, periodic words. We again derive a
decision procedure for learning by writing a program.
\subsection{Separating Infinite Words with Linear Temporal Logic}
\label{sec:separ-infin-words}
We consider separating lassos over a finite alphabet $\Sigma$. A
\emph{lasso} is an infinite periodic word $w\in\Sigma^\omega$ that can
be represented finitely as the concatenation of a finite prefix
$u\in\Sigma^*$ with a finite, nonempty \emph{repeated} suffix
$v\in\Sigma^+$.
For example, the infinite word
$\mathit{babaabbaabbaabbaabb}\cdots$
can be represented with $u=\mathit{bab}$ and
$v=\mathit{aabb}$. Lassos are determined by the pair $(u,v)$, though
there may be multiple ways to pick $u$ and $v$. Here we could also
have picked $u=\mathit{baba}$ and $v=\mathit{abba}$.
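The two representations can be checked against each other by expanding
both to a common finite prefix, as in this small, hypothetical Python
helper (prefix equality here is only a spot check for this example,
not a general equality test for lassos).
\begin{lstlisting}[language=Python]
# Hypothetical helper: expand a lasso (u, v) to its length-n prefix.
def prefix(u, v, n):
    word = u
    while len(word) < n:
        word += v
    return word[:n]

assert prefix("bab", "aabb", 19) == prefix("baba", "abba", 19) \
                                 == "babaabbaabbaabbaabb"
\end{lstlisting}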
\begin{problem}[Linear Temporal Logic Separation]
Given finite sets $P$ and $N$ of lassos over an alphabet $\Sigma$,
and a grammar $\mathcal{G}$ for LTL over $\Sigma$, synthesize a formula
$\varphi\in L(\mathcal{G})$ such that $w\models\varphi$ for all
$w\in P$ and $w\not\models\varphi$ for all $w\in N$, or declare no
such formula exists.
\end{problem}
Our LTL formulas come from the following grammar.
\begin{align*}
\varphi \Coloneqq a\in\Sigma \,\, | \,\, \varphi \wedge
\varphi' \,\, | \,\, \varphi \vee \varphi' \,\, | \,\, \neg\varphi \,\, | \,\, \mathsf{X} \varphi
\,\, | \,\, \varphi \mathsf{U} \varphi'
\end{align*}
Below we present the semantics in a way that makes clear the aspects,
which are \emph{positions} in the lasso $(u,v)$, along with an
indication of whether the position corresponds to $u$ or $v$. An LTL
formula $\varphi$ is true in a lasso $(u,v)$, written
$(u,v)\models\varphi$, precisely when $(u,v),(1,\_)\models\varphi$,
with the latter defined below. The main idea is that the following
holds
\begin{align*}
(u,v),(i,\_)\models\varphi \,\Leftrightarrow\,
u^iv^\omega\models \varphi \quad\text{and}\quad
(u,v),(\_,j)\models\varphi \,\Leftrightarrow\,
v^{j}v^\omega\models \varphi,
\end{align*}
where $uv^\omega\models\varphi$ is the standard semantics for
LTL~\cite{temporal-logic-pnueli}, and $u^i$ (respectively $v^j$)
denotes the suffix of $u$ (respectively $v$) starting at position $i$
(respectively $j$). To denote the letter at position $i$ of a word $w$
we write $w(i)$. We use $i$ to range over $[1,|u|]$ and $j$ to range
over $[1,|v|]$. If $j' < j$, then the interval $[j,j']$ wraps around
the loop and means $[1,j']\cup [j,|v|]$. We use
$[a,b)$ to exclude $b$.
\vspace{0.1in}
\begin{minipage}[t]{\textwidth}
\hspace{-0.15in}
\begin{tabularx}{\textwidth}{l l l l l l l}
$(u,v),(i,\_)$ & $\models$ & $a \in \Sigma$ & if & $u(i) = a$ &&\\
$(u,v),(\_,j)$ & $\models$ & $a \in \Sigma$ & if & $v(j) = a$ &&\\
$(u,v),p$ & $\models$ & $\neg\varphi$ & if & $(u,v),p\not\models\varphi$ &&\\
$(u,v),p$ & $\models$ & $\varphi\wedge\varphi'$ & if &
$(u,v),p\models\varphi$ \ and \ $(u,v),p\models\varphi'$ && \\
$(u,v),p$ & $\models$ & $\varphi\vee\varphi'$ & if &
$(u,v),p\models\varphi$ \ or \ $(u,v),p\models\varphi'$ &&\\
$(u,v),(|u|,\_)$ & $\models$ & $\mathsf{X}\,\varphi$ & if &
$(u,v),(\_,1)\models\varphi$ & &\\
$(u,v),(i,\_)$ & $\models$ & $\mathsf{X}\,\varphi$ & if &
$(u,v),(i+1,\_)\models\varphi\quad i<|u|$ &\\
$(u,v),(\_,j)$ & $\models$ & $\mathsf{X}\,\varphi$ & if &
$(u,v),(\_,\,j\,\,\text{mod}\,\, |v| + 1)\models\varphi$ &&\\
$(u,v),(i,\_)$ & $\models$ & $\varphi\mathsf{U}\varphi'$ & if & $\exists
i'\ge i.\,\,\, (u,v),(i',\_)\models\varphi' \,\,\,\text{and}\,\,\, \forall
i''\in[i,i').\,\,\, (u,v),(i'',\_)\models\varphi$ \\
&&& or & $\exists j.\,\, \forall i'\ge i.\,\,\, (u,v),(i',\_)\models
\varphi\,\,\,\text{and}\,\,\,\forall j'<j.\,\,\, (u,v),(\_,j')\models\varphi$\\
&&&& $\,\,\qquad\quad\text{and}\,\,(u,v),(\_,j)\models\varphi'$ \\
$(u,v),(\_,j)$ & $\models$ & $\varphi\mathsf{U}\varphi'$ & if & $\exists
j'.\,\,\,\forall j''\in [j,j').\,\,\,
(u,v),(\_,j'')\models\varphi\,\,\,\text{and}\,\,\, (u,v),(\_,j')\models\varphi'$
\end{tabularx}
\end{minipage}
\vspace{0.1in}
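The semantics above transcribes directly into a recursive checker. The
Python sketch below uses our own encoding: formulas as nested tuples,
positions as \texttt{("u", i)} or \texttt{("v", j)} with $1$-based
indices, and $u$ assumed nonempty.
\begin{lstlisting}[language=Python]
# Sketch of a checker for the position-based LTL semantics.
def sat(u, v, pos, phi):
    tag, k = pos
    op = phi[0]
    if op == "prop":                       # u(i) = a  or  v(j) = a
        word = u if tag == "u" else v
        return word[k - 1] == phi[1]
    if op == "not":
        return not sat(u, v, pos, phi[1])
    if op == "and":
        return sat(u, v, pos, phi[1]) and sat(u, v, pos, phi[2])
    if op == "or":
        return sat(u, v, pos, phi[1]) or sat(u, v, pos, phi[2])
    if op == "X":                          # step, wrapping into/around v
        if tag == "u":
            nxt = ("u", k + 1) if k < len(u) else ("v", 1)
        else:
            nxt = ("v", k % len(v) + 1)
        return sat(u, v, nxt, phi[1])
    if op == "U":
        left, right = phi[1], phi[2]
        if tag == "u":
            if any(sat(u, v, ("u", i2), right)
                   and all(sat(u, v, ("u", i3), left)
                           for i3 in range(k, i2))
                   for i2 in range(k, len(u) + 1)):
                return True                # phi' reached within u
            return (all(sat(u, v, ("u", i2), left)
                        for i2 in range(k, len(u) + 1))
                    and any(sat(u, v, ("v", j), right)
                            and all(sat(u, v, ("v", j2), left)
                                    for j2 in range(1, j))
                            for j in range(1, len(v) + 1)))
        def between(j, j2):                # interval [j, j2), wrapping
            if j <= j2:
                return list(range(j, j2))
            return list(range(j, len(v) + 1)) + list(range(1, j2))
        return any(sat(u, v, ("v", j2), right)
                   and all(sat(u, v, ("v", j3), left)
                           for j3 in between(k, j2))
                   for j2 in range(1, len(v) + 1))
    raise ValueError(f"unknown operator: {op}")
\end{lstlisting}
Under this encoding, $(u,v)\models\varphi$ corresponds to
\texttt{sat(u, v, ("u", 1), phi)}.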
\subsection{Decidable Learning for LTL}
\label{sec:progr-eval-ltl}
The $\textsc{Facet}$ program $\hbox{\linl{LTL}}$ in \Cref{fig:ltl-clauses} reads LTL syntax
trees and evaluates them over $\Sigma$-lassos, presented as pairs of
finite words $(u,v)\in\Sigma^*\times\Sigma^*$. Again we omit $\mathit{dual}$
clauses. The ranked alphabet $\Delta$ for syntax trees is similar to
those of regular expressions and modal logic. Signature functions
include functions for word length, written $|w|$, and functions for
computing sets of consecutive positions, e.g., $[x,y]$ and
$[x,y)$. There is also a function $\mathit{wrap}(j)$ defined by
\begin{align*}
\mathit{wrap}(j) = \text{if}\,\, j > |v| \text{ then } 1 \text{
else } j
\end{align*}
which is used to reset the current lasso position to the beginning of
the suffix $v$ when it exceeds $|v|$. There are also functions for
comparison of positions, e.g. $i < i'$, and equality and disequality
for alphabet letters at a given position, e.g. $u(i) = x$. The states
use two constructors $(\_,\cdot)$ and $(\cdot,\_)$ to encode whether a
position is part of $u$ or $v$. For a given lasso $(u,v)$ the states
are:
\begin{align*}
\mathsf{Asp}((u,v)) &\coloneqq \left\{p, \mathit{dual}(p) \,:\, p\in\mathsf{pos}\right\}\\
\mathsf{pos} &\coloneqq \left\{(i,\_),(\_,j) \,:\, i\in [1,|u|], j\in[1,|v|]\right\}
\end{align*}
\begin{theorem}
Linear temporal logic separation for finite sets $\mathit{P}$ and $\mathit{N}$
of lasso structures, and grammar $\mathcal{G}$, is decidable in time
$\mathcal{O}(2^{\mathit{poly}(mn)}\cdot|\mathcal{G}|)$, with $m=|\mathit{P}|+|\mathit{N}|$ and
$n=\max_{(u,v)\in \mathit{P}\cup \mathit{N}}(|uv|)$.
\end{theorem}
\begin{proof}[Proof Sketch.]
For each lasso $(u,v)\in \mathit{P}$, positions
$i\in[1,|u|], j\in[1,|v|]$, and LTL formula $\varphi$, we have that
$(u,v),(i,\_)\models\varphi$ if and only if
$((u,v), \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{LTL}}), (i,\_))\Downarrow_p$ and
$(u,v),(\_,j)\models\varphi$ if and only if
$((u,v), \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{LTL}}),
(\_,j))\Downarrow_p$. Similarly, for each lasso $(u,v)\in \mathit{N}$ we
have that $(u,v),(i,\_)\not\models\varphi$ if and only if
$((u,v), \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{LTL}}), \mathit{dual}((i,\_)))\Downarrow_p$
and $(u,v),(\_,j)\not\models\varphi$ if and only if
$((u,v), \mathit{root}(\varphi), \mathit{C}(\hbox{\linl{LTL}}), \mathit{dual}((\_,j)))\Downarrow_p$. The proof is by induction on $\varphi$. We have
$|\mathsf{Asp}((u,v))| = \mathcal{O}(|uv|)$ and the rest follows by
\Cref{lemma:mainlemma} and \Cref{complexity-lemma}.
\end{proof}
\begin{figure}
\centering
\begin{lstlisting}
LTL($(u,v)$, $(i,\_)$, $n$) = match $n.\mathit{l}$ with
conj -> LTL($(u,v)$, $(i,\_)$, $n.\mathit{c}_1$) and LTL($(u,v)$, $(i,\_)$, $n.\mathit{c}_2$)
disj -> LTL($(u,v)$, $(i,\_)$, $n.\mathit{c}_1$) or LTL($(u,v)$, $(i,\_)$, $n.\mathit{c}_2$)
neg -> LTL($(u,v)$, $\mathit{dual}(i,\_)$, $n.\mathit{c}_1$)
$\mathsf{X}$ -> if $i < |u|$ then LTL($(u,v)$, $(i+1,\_)$, $n.\mathit{c}_1$) else LTL($(u,v)$, $(\_,1)$, $n.\mathit{c}_1$)
$\mathsf{U}$ -> any (LAM$i'.$ LTL($(u,v)$, $(i',\_)$, $n.\mathit{c}_2$) and
all (LAM$i''.$ LTL($(u,v)$, $(i'',\_)$, $n.\mathit{c}_1$)) $[i,i')$) $[i,|u|]$
or all (LAM$i'.$ LTL($(u,v)$, $(i',\_)$, $n.\mathit{c}_1$)) $[i,|u|]$ and
any (LAM$j.$ all (LAM$j'.$ LTL($(u,v)$, $(\_,j')$, $n.\mathit{c}_1$)) $[1,j)$
and LTL($(u,v)$, $(\_,j)$, $n.\mathit{c}_2$)) $[1,|v|]$
$x$ -> $u(i) = x$
\end{lstlisting}
\begin{lstlisting}
LTL($(u,v)$, $(\_,j)$, $n$) = match $n.\mathit{l}$ with
conj -> LTL($(u,v)$, $(\_,j)$, $n.\mathit{c}_1$) and LTL($(u,v)$, $(\_,j)$, $n.\mathit{c}_2$)
disj -> LTL($(u,v)$, $(\_,j)$, $n.\mathit{c}_1$) or LTL($(u,v)$, $(\_,j)$, $n.\mathit{c}_2$)
neg -> LTL($(u,v)$, $\mathit{dual}(\_,j)$, $n.\mathit{c}_1$)
$\mathsf{X}$ -> LTL($(u,v)$, $(\_,\mathit{wrap}(j+1))$, $n.\mathit{c}_1$)
$\mathsf{U}$ -> any (LAM$j'.$ LTL($(u,v)$, $(\_,j')$, $n.\mathit{c}_2$) and
all (LAM$j''.$ LTL($(u,v)$, $(\_,j'')$, $n.\mathit{c}_1$)) $[j,j')$) $[1,|v|]$
$x$ -> $v(j) = x$
\end{lstlisting}
\caption{$\hbox{\linl{LTL}}$ evaluates the LTL formula $\varphi$ pointed to by $n$ over
lasso $(u,v)$ and verifies that $(u,v)\models \varphi$. }
\label{fig:ltl-clauses}
\end{figure}
\section{Motivating Problem: Learning Modal Logic Formulas}
\label{sec:motiv-exampl-modal}
In this section, we illustrate how to derive learning algorithms from
semantic evaluators in the context of propositional modal logic. We
make the observation that a specific kind of semantic evaluator
corresponds to a constructive proof of decidable learning. Such
evaluators use an amount of memory \emph{bounded by the structure}
over which expressions are evaluated but independent of expression
size, beyond that afforded by the syntax tree itself. To prove
decidable learning for a new language, it suffices to program such an
evaluator. We summarize this main theme as follows:
\begin{theme}
Effective evaluation using state bounded by structures
$\,\,\,\xRightarrow{\hspace*{0.5cm}}\,\,$ decidable learning
\label{slogan1}
\end{theme}
We next introduce the learning problem for modal logic and explore
this theme by developing a suitable semantic evaluator for modal
formulas over Kripke structures.
\subsection{Separating Kripke Structures with Modal Logic Formulas}
\label{sec:separ-modal}
Consider the following problem, with an example illustrated in
\Cref{modal-picture}.
\begin{problem}[Modal Logic Separation]
Given finite sets $P$ and $N$ of finite pointed Kripke structures
over propositions $\Sigma$, and a grammar $\mathcal{G}$, synthesize a
modal logic formula $\varphi\in L(\mathcal{G})$ that is true for
structures in $P$ (the \emph{positives}) and false for those in $N$
(\emph{negatives}), or declare none exist.
\end{problem}
\noindent We review some basics of modal
logic~\cite{blackburn_rijke_venema_2001}. The following grammar
defines the set of modal logic formulas over a finite set of
propositions $\Sigma$.
\begin{align*}
\varphi\Coloneqq a\in\Sigma \,\, | \,\, \varphi\wedge\varphi'
\,\, | \,\, \varphi\vee\varphi' \,\, | \,\, \neg\varphi \,\, | \,\,
\ensuremath{\square}\varphi \,\, | \,\, \ensuremath{\lozenge}\varphi
\end{align*}
\input{modal-picture}
The standard semantics of modal logic is reproduced below. Formulas
are interpreted against (in our case finite) pointed Kripke structures
$G=(W,s,E,P)$, where $W$ is a set of nodes (or \emph{worlds}), $E$ is
a binary
\emph{neighbor}
relation on $W$, and
$P : W\rightarrow \mathcal{P}(\Sigma)$ is a function that labels each node by
the set of all atomic propositions that hold
there. A formula $\varphi$ is true in $G=(W,s,E,P)$, written
$G\models\varphi$, if it is true starting from $s$, written
$G,s\models\varphi$, with the latter notion defined as follows.
\vspace{0.1in}
\begin{minipage}[t]{0.6\textwidth}
\centering
\begin{tabularx}{0.6\textwidth}{l l l l l}
$G,w$ & $\models$ & $a\in \Sigma$ \qquad\qquad & if &$a\in P(w)$ \\
$G,w$ & $\models$ & $\neg\varphi$ \qquad\qquad & if &
$G,w\not\models\varphi$ \\
$G,w$ & $\models$ & $\varphi\wedge\varphi'$ \qquad\qquad & if &
$G,w\models \varphi$ \text{ and } $G,w\models \varphi'$ \\
$G,w$ & $\models$ & $\varphi\vee\varphi'$ \qquad\qquad & if &
$G,w\models \varphi$ \text{ or } $G,w\models \varphi'$ \\
$G,w$ & $\models$ & $\ensuremath{\square} \varphi$ \qquad\qquad & if &
$G,w'\models \varphi$ \text{ for all} $w'$ \text{such that } $E(w,w')$\\
$G,w$ & $\models$ & $\ensuremath{\lozenge} \varphi$ \qquad\qquad & if &
$G,w'\models \varphi$ \text{ for some} $w'$ \text{such that } $E(w,w')$
\end{tabularx}
\end{minipage}
\vspace{0.1in}
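The semantics above can be transcribed into a recursive checker; the
Python sketch below uses our own encoding of formulas as nested tuples
and of $G$ as a tuple $(W,s,E,P)$ with $E$ a set of pairs and $P$ a
dictionary from nodes to sets of propositions.
\begin{lstlisting}[language=Python]
# Sketch of a checker for the standard modal logic semantics.
def models(G, w, phi):
    W, s, E, P = G
    op = phi[0]
    if op == "prop":
        return phi[1] in P[w]
    if op == "not":
        return not models(G, w, phi[1])
    if op == "and":
        return models(G, w, phi[1]) and models(G, w, phi[2])
    if op == "or":
        return models(G, w, phi[1]) or models(G, w, phi[2])
    neighbors = {y for (x, y) in E if x == w}
    if op == "box":                 # true at all neighbors
        return all(models(G, y, phi[1]) for y in neighbors)
    if op == "diamond":             # true at some neighbor
        return any(models(G, y, phi[1]) for y in neighbors)
    raise ValueError(f"unknown operator: {op}")
\end{lstlisting}
Under this encoding, $G\models\varphi$ corresponds to
\texttt{models(G, s, phi)}.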
Observe that there are \emph{infinitely-many} inequivalent modal
formulas. Indeed, the sequence
\begin{align*}
\ensuremath{\lozenge} a,\, \ensuremath{\lozenge} (\ensuremath{\lozenge} a),\, \ensuremath{\lozenge} (\ensuremath{\lozenge} (\ensuremath{\lozenge} a)),\, ...
\end{align*}
defines an infinite set $(\varphi_i)_{i\in\mathbb{N}^{\mathbb{+}}}$ of
semantically inequivalent formulas. For $i \in \mathbb{N}^{\mathbb{+}}$, a
finite graph consisting of a single directed path of length $i-1$, with
$a$ labeling every node, makes the formula $\varphi_i$ false while
making all $\varphi_j$ true for $j < i$. Thus the search space of modal
formulas is infinite, and
so we cannot resort to enumeration for decidable learning.
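The claim can be spot-checked with the \texttt{models} sketch above;
the helper names below are ours.
\begin{lstlisting}[language=Python]
# A directed path with i nodes, every node labeled a, satisfies the
# j-fold diamond formula exactly when j < i.
def path_kripke(i):
    W = set(range(i))
    E = {(k, k + 1) for k in range(i - 1)}
    return (W, 0, E, {w: {"a"} for w in W})

def diamonds(j):
    phi = ("prop", "a")
    for _ in range(j):
        phi = ("diamond", phi)
    return phi

G = path_kripke(3)  # a path of length 2
assert models(G, 0, diamonds(1)) and models(G, 0, diamonds(2))
assert not models(G, 0, diamonds(3))
\end{lstlisting}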
We advocate an automata-theoretic technique for learning problems of
this kind, inspired by recent work on learning formulas in
finite-variable logics~\cite{popl22}, which is summarized as follows:
\begin{enumerate}
\item Encode language expressions as syntax trees over a finite
alphabet.
\item For each structure $p \in \mathit{P}$ (respectively, $n\in\mathit{N}$),
construct a tree automaton accepting the syntax trees for
expressions $e$ such that $p \models e$ (respectively,
$n\not\models e$).
\item Construct a tree automaton accepting the intersection of
languages for previous automata, which accepts all expressions
consistent with the examples.
\item Run an emptiness checking algorithm for the final automaton. If
empty, report \emph{unrealizable}. Otherwise, synthesize a (small)
expression in the language of the automaton.
\end{enumerate}
\noindent The procedure above adapts easily to learning with grammar
restrictions. Given a regular tree grammar $\mathcal{G}$, we can
construct a tree automaton accepting precisely the expressions allowed
by $\mathcal{G}$ and take its intersection with the automaton from (3) before
checking emptiness.
The crucial observation we make is that in order to apply this generic
procedure to learning problems for new languages, we need only
implement an \emph{evaluator\,} for the semantics of the language. For
any \emph{fixed structure $M$}, the evaluator checks whether
$M\models e$ for an input expression $e$, where $\dblqt{\models}$ is a
problem-specific semantic relationship. Using the evaluator, we can
compute the tree automaton for each given positive and negative
structure and proceed with the algorithm above.
We can view these semantic evaluators as \emph{programs} whose state
depends on the mathematical structure over which evaluation occurs but
depends only to a very small degree on the size of the expression
itself. The key to finding these programs is to consider the question
of how to interpret arbitrary input expressions from the language
(presented as syntax trees) against an arbitrary, but \emph{fixed},
structure. We invite the reader in the remainder of the section to
na{\"i}vely explore how to write a program that evaluates an input
modal logic formula $\varphi$ against a fixed Kripke structure by
traversing the syntax tree of $\varphi$.
\subsection{Evaluating Modal Formulas on Fixed Kripke Structures}
\label{sec:eval-modal-form}
We want a procedure for evaluating any formula $\varphi$ of modal
logic against a fixed Kripke structure $G=(W,s,E,P)$, where
\emph{evaluate} means \emph{verify} that $G\models \varphi$. The
evaluator hence is designed for any particular $G$ and takes the syntax tree of $\varphi$ as input.
Imagine we want to evaluate the formula
$\varphi=\ensuremath{\square}(\ensuremath{\lozenge}(a\vee v))$ from \Cref{modal-picture}
over the rightmost positive structure $G$ (tree-shaped). In
particular, we want to check whether $G,s\models \varphi$ holds by
traversing the syntax tree for $\varphi$ (displayed on the right
below) from the top down. Suppose $n$ is a pointer into the syntax
tree of $\varphi$, with $n$ initially pointing to the root. We first
read the symbol $\singqt{\ensuremath{\square}}$, and we recognize that
$G,s\models \varphi$ holds exactly when the subformula
$\ensuremath{\lozenge}(a\vee v)$ holds at each of the two children of $s$ in
$G$.
\begin{wrapfigure}{r}{0.13\textwidth}
\vspace{0.1in}
\centering
\tikzstyle{every node}=[draw=none, inner sep=1.5pt, minimum width=0pt]
\begin{tikzpicture}[scale=1,draw=none,thick,minimum width=0pt,inner sep=1.5pt]
\node (box) at (0,0) {$\ensuremath{\square}$} ;
\node (diam) at (0,-0.6) {$\ensuremath{\lozenge}$} ;
\node (or) at (0,-1.2) {$\vee$} ;
\node (red) at (-0.5, -1.5) {$a$} ;
\node (blue) at (0.5, -1.5) {$v$} ;
\draw [-] (box) edge[black] (diam) ;
\draw [-] (diam) edge[black] (or) ;
\draw [-] (or) edge[black] (red) ;
\draw [-] (or) edge[black] (blue) ;
\end{tikzpicture}
\newline
\end{wrapfigure}
Let $w_1$ and $w_2$ stand for
the children of $s$, and let $\mathit{c}_i(n)$ stand for the
$i^{\mathit{th}}$ child of the syntax tree pointed to by $n$. We now
should recursively check whether $G,w_1\models \ensuremath{\lozenge}(a\vee v)$
and $G,w_2\models \ensuremath{\lozenge}(a\vee v)$ hold. To do this, we move
\emph{down} in the syntax tree by setting
$n\coloneqq \mathit{c}_1(n) = \ensuremath{\lozenge}(a\vee v)$. We then need to
check that $G,w_1'\models a\vee v$ holds, where $w_1'$ is either the
left or right child of $w_1$ in $G$ (and likewise for $w_2$). Suppose
we \emph{nondeterministically guess} that
$G,w_1'\models a\vee v$ holds with $w_1'$ being the left child
of $w_1$. We move \emph{down} once more by setting
$n\coloneqq \mathit{c}_1(n) = a\vee v$, and we \emph{verify} the
guess by checking $G,w_1'\models a\vee v$. This plays out in a
similar, straightforward way. Our traversal eventually terminates and
returns true because $G,w_1'\models v$ holds, since
$v\in P(w_1')$.
Note that the traversal described above works for arbitrary $\varphi$
of unbounded size; indeed, the next steps are determined by the
current symbol of the syntax tree pointed to by $n$ and by some state
that depends entirely on $G$, namely the set of nodes $W$, which is
finite for a fixed, finite structure $G$. Observe also that the
traversal required some computable functions specific to Kripke
structures. For example, we needed to compute
$P : W\rightarrow\mathcal{P}(\Sigma)$, membership for elements of
$\mathcal{P}(\Sigma)$, and the set of neighbors of a given node in $G$ under
$E$.
\subsection{A Program for Evaluating Modal Formulas}
\label{sec:progr-eval-modal}
We conclude the modal logic example by writing a \emph{program} which
captures our traversal of $\varphi$ and the computation of whether
$G\models \varphi$. The program takes as inputs the structure $G$,
some auxiliary state $w$, and a pointer $n$ that initially points to
the root of the syntax tree for a modal formula.
The program $\hbox{\linl{Modal}}$ is shown in \Cref{fig:modal-clause1}.
We discuss formal semantics for such programs in \Cref{sec:language};
intuitively, the program implements the traversal sketched earlier by
matching against the symbol $n.\mathit{l}$ that labels the current node of
the syntax tree. Depending on the symbol, it can then either terminate
by computing a Boolean function as its final answer (e.g.
\dblqt{$x\in P(w)$}) or it can combine the results of recursive calls
at nearby nodes on the syntax tree. It uses \linl{all} and \linl{any}
to represent finite conjunctions and disjunctions, and it uses a few
problem-specific computable functions, which we categorize as either
\emph{Boolean} functions or \emph{state} functions. The only Boolean
function in this case is for atomic propositions, i.e.
\dblqt{$z\in P(w)$}, and the only state function is for computing the
neighborhood of a given node in $G$, i.e.
\dblqt{$\{y\in G \,:\, E(w,y)\}$}. Negation in
\Cref{fig:modal-clause1} is handled by evaluating the negated
subformula in a dual state $\mathit{dual}(w)$, in which each part of the program
is interpreted as its dual, e.g., \linl{and} becomes \linl{or},
\emph{etc}. We return to this after formalizing the semantics for
these programs.
\begin{figure}
\centering
\begin{minipage}[c]{0.85\linewidth}
\begin{lstlisting}
Modal($G$, $w$, $n$) = match $n.\mathit{l}$ with
$\wedge$ -> Modal($G$, $w$, $n.\mathit{c}_1$) and Modal($G$, $w$, $n.\mathit{c}_2$)
$\vee$ -> Modal($G$, $w$, $n.\mathit{c}_1$) or Modal($G$, $w$, $n.\mathit{c}_2$)
$\neg$ -> Modal($G$, $\mathit{dual}(w)$, $n.\mathit{c}_1$)
$\ensuremath{\square}$ -> all (LAM$z$. Modal($G$, $z$, $n.\mathit{c}_1$)) $\{y\in G \,:\, E(w,y)\}$
$\ensuremath{\lozenge}$ -> any (LAM$z$. Modal($G$, $z$, $n.\mathit{c}_1$)) $\{y\in G \,:\, E(w,y)\}$
$x$ -> $x\in P(w)$
\end{lstlisting}
\end{minipage}
\caption{$\hbox{\linl{Modal}}$ evaluates modal formula $\varphi$ pointed to by
$n$ against Kripke structure $G$ and checks $G\models\varphi$.}
\label{fig:modal-clause1}
\end{figure}
Recall the theme from the beginning of this section:
\begin{theme}
Effective evaluation using state bounded by structures
$\,\,\,\xRightarrow{\hspace*{0.5cm}}\,\,$ decidable learning.
\end{theme}
\noindent The semantic evaluator for modal formulas uses auxiliary
states that depend on the number of nodes in the Kripke structure, and
\emph{not} on the size of the syntax tree. Strictly speaking, it
accesses the syntax tree using a pointer, and hence involves some
minimal amount of memory that depends on expression size, but this is
the \emph{only} such dependence.
As we have just observed, effective evaluation of this sort is
possible for modal logic on finite Kripke structures, and programs
witnessing this fact like the one in \Cref{fig:modal-clause1} imply
decision procedures for learning. In the remainder of the paper, we
define a class of languages that admit decidable learning and
formulate the programming language that reduces decidable learning
proofs to programming a semantic evaluator. We use the language to
obtain results for several other learning problems.
\section{Preliminaries}
\label{sec:notation-background}
Here we review some background on syntax trees, tree grammars, and
tree automata.
\subsection{Syntax Trees and Tree Grammars}
\label{sec:terms}
For each symbolic language in this paper we use a \emph{ranked
alphabet} to form expression syntax trees. A ranked alphabet
$\Delta$ is a set of symbols $s$ equipped with a function
$\arity{s}\in\mathbb{N}$. For example, the ranked alphabet for modal
formulas over $\Sigma$ has $\arity{\ensuremath{\lozenge}}=1$, $\arity{\wedge}=2$,
and $\arity{a}=0$ for each $a\in\Sigma$. We write $T_\Delta$ for the
set of $\Delta$-terms, or ($\Delta$-)\emph{syntax trees}, which is the
smallest set containing symbols of arity $0$ from $\Delta$ and closed
under forming new terms with symbols of greater arities. We write
$T_\Delta(X)$ for the set of $\Delta$-terms constructed with a fresh
set of nullary symbols $X$.
We use regular tree grammars to express syntax restrictions for
learning problems. A \emph{regular tree grammar} is a tuple
$\mathcal{G} = (\mathit{NT}, \Delta, S, P)$ consisting of a finite set of
nonterminals $\mathit{NT}$, ranked alphabet $\Delta$, starting nonterminal
$S\in \mathit{NT}$, and productions $P$. Each production has the form
``$A \rightarrow t$'', with $A\in \mathit{NT}$ and $t\in
T_\Delta(\mathit{NT})$.
In a standard way we associate with the
productions $P$ a reflexive and transitive rewrite relation
$\rightarrow_P^*$ on terms $T_\Delta(\mathit{NT})$, and the language
$L(\mathcal{G})$ is the set
$\left\{t\in T_\Delta(\emptyset)\,\mid\, S\rightarrow_P^*
t\right\}$. See~\cite{tata} for details.
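For instance, the following grammar of ours (an illustration, not one drawn
from the later learning problems) has nonterminals $\mathit{NT}=\{S,D\}$,
start symbol $S$, and generates exactly the negation-free modal formulas in
which no $\square$ occurs beneath a $\lozenge$:
\begin{align*}
S \;&\rightarrow\; \wedge(S,S) \,\mid\, \vee(S,S) \,\mid\, \square(S) \,\mid\, \lozenge(D) \,\mid\, a \qquad (a \in \Sigma)\\
D \;&\rightarrow\; \wedge(D,D) \,\mid\, \vee(D,D) \,\mid\, \lozenge(D) \,\mid\, a
\end{align*}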
\subsection{Tree Automata}
\label{sec:tree-automata}
Tree automata are finite state machines that accept trees. We use tree
automata over \emph{finite} trees, or terms. Such automata are tuples
$\mathcal{A} = (Q,\Delta,q_i,\delta,F)$ consisting of a finite set of states
$Q$, ranked alphabet $\Delta$, initial state $q_i\in Q$, transition
function $\delta$, and acceptance condition $F$. An automaton accepts
a tree $t\in T_\Delta$ if it has an \emph{accepting run} over $t$.
The notions of \emph{run} and \emph{accepting run} can vary.
In this work we use a convenient, though no more expressive, variant
of tree automata called an \emph{alternating two-way tree
automaton}. Such automata walk up and down on their input tree and
branch using alternation to send copies of the automaton in updated
states to nearby nodes of the tree. We will only use
\emph{reachability} acceptance conditions in this paper, where
$F\subseteq Q$, and a tree is accepted if along every trajectory of
the automaton during its walk over the tree, it reaches a state in
$F$. We omit the formal definition of \emph{runs} for these automata,
which is entirely standard, though complicated, and unnecessary for
understanding our results.
For a symbol $s\in \Delta$ with $\arity{s}=k$ and state $q\in Q$, the
available transitions for a two-way alternating tree automaton are
described by a Boolean formula
\begin{align*}
\delta(q,s) \in \mathcal{B}^{\mathtt{+}}(Q\times\{-1,0,\ldots,k\}),
\end{align*}
where $\mathcal{B}^{\mathtt{+}}(X)$ means the set of positive Boolean
formulas over variables from a set $X$.
Each variable $(q,m)$ represents a new state $q$ and direction $m$ to
take at a particular node in the tree, with $m=-1$ being a move
\emph{up} to the parent of the current node, $m=0$ meaning to
\emph{stay} at the current node, and the other numbers being moves
down into one of $k$ children. A subset of $Q\times\{-1,0,\ldots, k\}$
corresponds to a Boolean assignment, and the automaton can proceed
according to any assignment that satisfies the current transition
formula. For example, if the automaton reads symbol $h$ in state $q$,
the transition
\begin{align*}
\delta(q,h)= (q_1,1)\wedge(q_2,1)\vee
(q_1,2)\wedge(q_2,0)\wedge(q_1,-1)
\end{align*}
would allow either of the following: (1) continuing in states $q_1$
and $q_2$, each starting from the leftmost child, or (2) continuing
from $q_1$ in the second child from left, from $q_2$ at the current
node, and from $q_1$ in the parent.
Two-way alternating tree automata can be converted to one-way
nondeterministic tree automata with an exponential increase in
states~\cite{two-way-vardi,cachat-two-way}, and so they inherit
closure properties and standard decision procedures. In particular,
the emptiness problem can be solved in exponential time and a small
tree in the language can be synthesized in the same amount of time
when nonempty. See~\cite{tata} for details.
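For intuition about the one-way case, membership for a deterministic
bottom-up tree automaton is a single fold over the term; the following
Python sketch (ours, with a toy automaton whose states are truth values)
shows the mechanics, though the runs of two-way alternating automata are
more involved.
\begin{lstlisting}
def run(term, delta):
    # Fold a deterministic bottom-up automaton over a term
    # (symbol, child_1, ..., child_k); delta maps (symbol, child states)
    # to the state at this node.
    sym, *children = term
    return delta[(sym, tuple(run(c, delta) for c in children))]

def accepts(term, delta, F):
    return run(term, delta) in F

# Toy automaton on Boolean terms over 'and'/'or'/'0'/'1'; the states are
# the truth values themselves, and F = {True} accepts the true terms.
delta = {('0', ()): False, ('1', ()): True}
for a in (False, True):
    for b in (False, True):
        delta[('and', (a, b))] = a and b
        delta[('or', (a, b))] = a or b

assert accepts(('or', ('0',), ('and', ('1',), ('1',))), delta, {True})
\end{lstlisting}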
\section{First-Order Logic over Rational Numbers with Order}
\label{sec:rationals}
In this section we consider learning first-order logic queries over an
infinite domain, namely, the structure $(\mathbb{Q}, <)$ consisting of the
rational numbers $\mathbb{Q}$ with the usual linear order $<$. The learning
problem requires labeled $k$-tuples of rational numbers to be
separated by a \emph{query} in $\FO^{k}$, i.e., a formula in first-order
logic with $k$ variables. We derive a decision procedure by writing a
$\textsc{Facet}$ interpreter for $\FO^{k}$ over $(\mathbb{Q},<)$.
\subsection{Learning Queries over Rational Numbers with Order}
\label{sec:learn-quer-over}
We consider the following problem.
\begin{problem}[Learning $\FO^{k}$ Queries over $(\mathbb{Q},<)$]
Given finite sets $\mathit{P}$ and $\mathit{N}$ of $k$-tuples of $\mathbb{Q}$, and
grammar $\mathcal{G}$ for $\FO^{k}$ over $(\mathbb{Q},<)$, synthesize
$\varphi\in L(\mathcal{G})$ such that
$P \subseteq \{t\in \mathbb{Q}^k \,\,|\,\, (\mathbb{Q},<) \models \varphi(t)\}$
and
$N \subseteq \{t\in \mathbb{Q}^k \,\,|\,\, (\mathbb{Q},<) \not\models
\varphi(t)\}$, or declare no such formula exists.
\end{problem}
\noindent A ranked alphabet $\Delta$ for $\FO^{k}$ has, for any variables
$x,y$, the unary symbols $\dblqt{\forall x}$, $\dblqt{\exists x}$ and
nullary symbols $\dblqt{x < y}$, $\dblqt{x=y}$, in addition to the
symbols for Boolean operators.
Note that for this problem
the \emph{language} takes the class of structures $\mathcal{M}$ to be the
set $\mathbb{Q}^k$, and so a single ``structure'' is a tuple of rationals
$t\in\mathbb{Q}^k$. For $t\in \mathbb{Q}^k$, the semantics is given by
$t\models \varphi \Leftrightarrow (\mathbb{Q},<),
t\models\varphi$.\footnote{We are abusing notation and
treating $t$, a $k$-tuple of rationals, as an assignment to the
$k$ ordered free variables of
$\varphi$.}
\subsection{Decidable Learning for First-Order Logic Queries over
Rationals}
\label{sec:progr-eval-first}
Given $t\in \mathbb{Q}^k$ and $\varphi\in\FO^{k}$, what semantic information do
we need to verify $(\mathbb{Q},<)\models\varphi(t)$? Consider evaluating a
query $\varphi(x,y,z)$ on
$t=(\nicefrac{1}{2},3, \nicefrac{4}{3})\in P$. Our program might begin
with an assignment $\gamma$ mapping $(x,y,z)$ to
$(\nicefrac{1}{2},3, \nicefrac{4}{3})$. With the right functions, it
can easily verify atomic formulas by simply checking
$\gamma(x) < \gamma(y)$ or $\gamma(x) = \gamma(y)$. When the program
reads, say, \dblqt{$\exists x$}, it must carry forward some finite
amount of information, which thus excludes tracking the precise values
for the variables, of which there are infinitely many.
The main idea is that evaluating atomic formulas does not require the
precise values of the variables: the \emph{order between variables} is
all that is needed to evaluate $\FO^{k}$ formulas over $(\mathbb{Q},
<)$.
In our example, we have
$t=(\nicefrac{1}{2},3, \nicefrac{4}{3})$ corresponding to the
assignment
$\{x\mapsto \nicefrac{1}{2}, y\mapsto 3, z\mapsto
\nicefrac{4}{3}\}$, and so the program begins in a state encoding
that $x < z < y$. Suppose it reads the formula
$\exists x\forall y\, (x < y)$. First it reads \dblqt{$\exists x$}
and branches (disjunctively) on all of the finitely-many
\emph{distinguishable} choices for where to place $x$ relative to
the other variables while leaving the others in the same relative
positions. We preserve $z < y$, but $x$ can appear in any of several
positions: $x < z < y$, $x=z < y$, $z < x < y$, $z < x=y$,
$z < y < x$. From each of these states the program reads
\dblqt{$\forall y$} and branches (conjunctively) on every choice for
where to place $y$. It eventually rejects the formula because $y$
can always be placed strictly below $x$ in the second branching
step.

\Cref{fig:rat} shows a program $\hbox{\linl{Rat}}$ that evaluates $\FO^{k}$ formulas
over tuples of rational numbers. The states of $\hbox{\linl{Rat}}$ record an
ordering between variables $V=\{y_1,\ldots,y_k\}$, including whether
two variables are equal, and thus they correspond to the \emph{total
preorders} on $V$, denoted $\mathit{pre}(V)$. We use \dblqt{$\gtrsim$} as
a pattern variable to denote a preorder. For a given tuple $t$ we
have
\begin{align*}
\mathsf{Asp}(t) \coloneqq \left\{\, \gtrsim,\, \mathit{dual}(\gtrsim) \,:\,\,\,
\gtrsim\,\,\in\mathit{pre}(V)\,\right\}.
\end{align*}
Signature functions include Boolean functions for checking ordering
and equality in a given preorder $\gtrsim$, such as
$\mathit{lt}(x,z,\gtrsim)$ and $\mathit{eq}(x,z,\gtrsim)$, defined by:
\vspace{0.1in}
\begin{minipage}[t]{\textwidth}
\centering
\begin{tabularx}{0.85\textwidth}{l l l l l l l}
&$\mathit{geq}(x,z,\gtrsim)$ &$\coloneqq$ &$x \gtrsim z$ \qquad\qquad
&$\mathit{eq}(x,z,\gtrsim)$ &$\coloneqq$ &$x \gtrsim z \,\,\text{and}\,\, z
\gtrsim x$ \\
&$\mathit{lt}(x,z,\gtrsim)$ &$\coloneqq$ &
$\neg\mathit{geq}(x,z,\gtrsim)$\qquad\qquad
&$\mathit{neq}(x,z,\gtrsim)$ &$\coloneqq$ &$\neg\mathit{eq}(x,z,\gtrsim)$
\end{tabularx}
\end{minipage}\hfill
\vspace{0.05in}
\noindent State functions include $\mathit{place}(x,\gtrsim)$, which computes
the set of all total preorders that place $x\in V$ in a new position
but agree with $\gtrsim$ on variables in $V\setminus \{x\}$, defined
as:
\begin{align*}
\mathit{place}(x,\gtrsim) \coloneqq \,\, \left\{\, \gtrsim'\,\in\,\mathit{pre}(V)\,\,:\,\,
y \gtrsim' z \Leftrightarrow y \gtrsim z,\, \forall y,z\in
V\setminus\{x\}
\,\right\}.
\end{align*}
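Concretely, a total preorder on $V$ can be encoded as a map from variables
to the ranks of their equivalence classes; the following Python sketch (our
own illustration, with this rank-map encoding as an assumption of the
sketch) enumerates exactly the placements from the example above.
\begin{lstlisting}
def normalize(pre):
    # Re-rank so the values are contiguous integers starting at 0.
    ranks = sorted(set(pre.values()))
    return {v: ranks.index(r) for v, r in pre.items()}

def place(x, pre):
    # All total preorders that agree with pre on the variables other than x.
    rest = normalize({v: r for v, r in pre.items() if v != x})
    r = len(set(rest.values()))
    doubled = {v: 2 * rk + 1 for v, rk in rest.items()}  # classes at odd slots
    # Even slots put x strictly below/between/above; odd slots tie it.
    return [normalize({**doubled, x: slot}) for slot in range(2 * r + 1)]

def lt(a, b, pre):  # a strictly below b
    return pre[a] < pre[b]

def eq(a, b, pre):
    return pre[a] == pre[b]

# The running example x < z < y yields the five placements listed above:
# x<z<y, x=z<y, z<x<y, z<x=y, z<y<x.
for p in place('x', {'x': 0, 'z': 1, 'y': 2}):
    print(p)
\end{lstlisting}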
\begin{theorem}
Learning queries over rationals with order, with sets of $k$-tuples
$\mathit{P}$ and $\mathit{N}$ and grammar $\mathcal{G}$ for $\FO^{k}$, is decidable in
time $\mathcal{O}(2^{\mathit{poly}(mk^k)}\cdot|\mathcal{G}|)$, where $m=|\mathit{P}|+|\mathit{N}|$.
\end{theorem}
\begin{proof}[Proof Sketch]
Follows reasoning from previous sections and uses
\Cref{lemma:mainlemma} and \Cref{complexity-lemma}. For $t\in\mathbb{Q}^k$
and a set $V$ of $k$ variables, we have
$|\mathsf{Asp}(t)| = 2|\mathit{pre}(V)| = \mathcal{O}(k^k)$.
\end{proof}
\begin{figure}
\centering
\begin{minipage}[c]{0.7\linewidth}
\begin{lstlisting}
Rat($t$, $\gtrsim$, $n$) = match $n.\mathit{l}$ with
$\forall x$ -> all (LAM$p$. Rat($t$, $p$, $n.\mathit{c}_1$)) $\mathit{place}(x,\gtrsim)$
$\exists x$ -> any (LAM$p$. Rat($t$, $p$, $n.\mathit{c}_1$)) $\mathit{place}(x,\gtrsim)$
conj -> Rat($t$, $\gtrsim$, $n.\mathit{c}_1$) and Rat($t$, $\gtrsim$, $n.\mathit{c}_2$)
disj -> Rat($t$, $\gtrsim$, $n.\mathit{c}_1$) or Rat($t$, $\gtrsim$, $n.\mathit{c}_2$)
neg -> Rat($t$, $\mathit{dual}(\gtrsim)$, $n.\mathit{c}_1$)
$x < z$ -> if $\mathit{lt}(x, z, \gtrsim)$ then True else False
$x = z$ -> if $\mathit{eq}(x, z, \gtrsim)$ then True else False
\end{lstlisting}
\vspace{-0.1in}
\end{minipage}
\caption{$\hbox{\linl{Rat}}$ evaluates formula $\varphi$ pointed to by $n$ against
a tuple $t$ of rational numbers and verifies
$(\mathbb{Q},<)\models\varphi(t)$.}
\label{fig:rat}
\end{figure}
\paragraph{Remark}
Structures like $(\mathbb{Q}, <)$ have a special kind of automorphism group,
called an \emph{oligomorphic group}
(see~\cite{hodges93}). Oligomorphic automorphism groups have
finitely-many orbits in their action on $n$-tuples from the domain of
the structure, for every $n$. For structures with such automorphism
groups, if $n$ is fixed, then a $\textsc{Facet}$ program can evaluate formulas
by tracking these finitely-many orbits. In the case of $\FO^{k}$ over
$(\mathbb{Q}, <)$, the program $\hbox{\linl{Rat}}$ effectively checks atomic formulas in
a given orbit represented by a total preorder on variables, and when
evaluating quantifiers it is able to compute all ``nearby''
orbits. There are other examples of such structures, and many
constraint satisfaction problems over infinite domains use them as
templates, e.g., temporal constraint satisfaction, phylogenetic
reconstruction over tree structures, and network satisfaction problems
(see~\cite{bodirsky-csp}). It would be interesting to explore learning
in domains like these.
\section{Learning Regular Expressions}
\label{sec:regular-expressions}
In this section we develop a decision procedure for learning regular
expressions from finite words. In contrast to propositional modal
logic, the semantics of regular expressions involves recursion in the
structure of expressions as well as recursion over the structures
themselves.
\subsection{Separating Words with Regular Expressions}
\label{sec:separ-words-with}
Consider the following problem.
\begin{problem}[Regular Expression Separation]
Given finite sets $\mathit{P}$ and $\mathit{N}$ of finite words over an alphabet
$\Sigma$, and a grammar $\mathcal{G}$, synthesize a regular expression
over $\Sigma$ that matches all words in $\mathit{P}$, does not match any
word in $N$, and conforms to $\mathcal{G}$, or declare none exist.
\end{problem}
We consider (extended) regular expressions from the following grammar.
\begin{align*}
e \Coloneqq a\in\Sigma \,\, | \,\, e\cdot e'
\,\, | \,\, e + e' \,\, | \,\, e
\cap e' \,\, | \,\, e^* \,\, | \,\, \neg e
\end{align*}
Recall that we want to program an evaluator for regular expressions
over fixed $\Sigma$-words. The notion of evaluation here is
\emph{membership} of a word $w$ in the language of a regular
expression $e$, i.e. $w\models e \Leftrightarrow w\in L(e)$. This
semantics has a straightforward recursive definition, and it can be
presented in terms of an auxiliary relation that makes the finite
semantic aspects plain to see:
\begin{align*}
w\in L(e) \quad \Leftrightarrow \quad w,(1,|w|+1)\models e.
\end{align*}
This definition uses the aspect of \emph{subwords} of $w$, with
$(l,r)$ indicating the subword $w(l,r)$ that includes the letter at
position $l$ and extends to the position before $r$. Note that the
empty word $\epsilon$ can therefore be represented by $(i,i)$ for any
$i$. The semantics of subword membership is given below.
\vspace{0.1in}
\begin{minipage}[t]{\textwidth}
\centering
\begin{tabularx}{\textwidth}{l l l l l l l}
$w, (l,r)$ & $\models$ & $a \in \Sigma$ & if & $w(l) = a$ & and & $r
= l+1$ \\
$w, (l,r)$ & $\models$ & $e \cdot e'$ & if & $w, (l,k)\models e$ &
and &
$w, (k,r)\models e'$ \ for some $k\in [l,r]$\\
$w, (l,r)$ & $\models$ & $e + e'$ & if & $w, (l,r)\models e$ & or
& $w, (l,r)\models e'$ \\
$w, (l,r)$ & $\models$ & $e\cap e'$ & if & $w, (l,r)\models e$
& and & $w, (l,r)\models e'$ \\
$w, (l,r)$ & $\models$ & $e^*$ & if & $l=r$ & or & $\exists
k\in[l+1,r].\,\, w, (l,k)\models
e$ \ and \ $w, (k,r)\models e^*$ \\
$w, (l,r)$ & $\models$ & $\neg e$ & if & $w, (l,r)\not\models
e$ && \\
\end{tabularx}
\end{minipage}
\vspace{0.1in}
\noindent Observe that if we \emph{fix} the word $w$ then the number
of pairs $(l,r)$ used in the definition above is finite. Also observe
that the definition in the case for Kleene star is well-founded
because either the expression size decreases or the subword length
decreases.
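This recursive definition translates directly into a memoized checker. The
following Python sketch (ours; the tuple encoding of expressions is an
illustrative choice) mirrors the clauses above and terminates for the same
well-foundedness reason.
\begin{lstlisting}
from functools import lru_cache

def matches(w, e):
    # Check w in L(e) via the subword relation w,(l,r) |= e.
    # e is a nested tuple, e.g. ('cat', ('sym', 'a'), ('star', ('sym', 'b'))).

    @lru_cache(maxsize=None)
    def sat(l, r, e):
        op = e[0]
        if op == 'sym':
            return r == l + 1 and w[l - 1] == e[1]  # positions are 1-based
        if op == 'cat':
            return any(sat(l, k, e[1]) and sat(k, r, e[2])
                       for k in range(l, r + 1))
        if op == 'alt':
            return sat(l, r, e[1]) or sat(l, r, e[2])
        if op == 'and':
            return sat(l, r, e[1]) and sat(l, r, e[2])
        if op == 'neg':
            return not sat(l, r, e[1])
        if op == 'star':  # either empty, or peel off one nonempty factor
            return l == r or any(sat(l, k, e[1]) and sat(k, r, e)
                                 for k in range(l + 1, r + 1))
        raise ValueError(op)

    return sat(1, len(w) + 1, e)

# a(b*) matches 'abbb' but not 'abba'.
e = ('cat', ('sym', 'a'), ('star', ('sym', 'b')))
assert matches('abbb', e) and not matches('abba', e)
\end{lstlisting}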
\paragraph{Remark} Regular expression separation has two
overfitting-style solutions if we ignore the syntax restriction from
the input grammar $\mathcal{G}$. Simply use $+_{w\in\mathit{P}}w$ for the
tightest regular expression that matches all of $\mathit{P}$, or
alternatively, $\cap_{w\in\mathit{N}}\neg w$ for the loosest one that avoids
matching any of $\mathit{N}$. If there is any separating regular expression
at all, then either of these must
work.
\subsection{Decidable Learning for Regular Expressions}
\label{sec:progr-eval-regul}
We write a $\textsc{Facet}$ program called $\hbox{\linl{Reg}}$ which reads a regular
expression syntax tree and verifies whether a given word is a member
of the language for the regular expression. The class $\mathcal{M}$ consists
of structures encoding $\Sigma$-words, $\mathcal{L}$ consists of regular
expressions over $\Sigma$, and semantics is membership in the language
of regular expressions. States are pairs of ordered indices,
representing subwords, along with duals to handle negation.
\begin{align*}
\mathsf{Asp}(w) \coloneqq \left\{(l,r),\mathit{dual}(l,r) \,:\, l\le r \in [1, |w|+1]\right\}.
\end{align*}
\noindent For instance, if $w = \mathit{abbb}$, then the subword
$w'=\mathit{ab}$ is represented as the pair of positions $(1,3)$. The
alphabet $\Delta$ for syntax trees is straightforward and uses symbols
of arity $0$ for members of $\Sigma$. We use state functions for
looking up the letter at a given position $i$, written $w(i)$, the
successor function on positions $x$, written $x+1$, and functions
$[x,y]$ and $[x+1,y]$ for computing the indices between two positions
$x\le y$, with $[x+1,y] = \emptyset$ if $x=y$. Boolean functions
include equality and disequality on positions and letters of $\Sigma$.
The program $\hbox{\linl{Reg}}$ (with dual omitted) is given in
\Cref{fig:reg-clause1}. States matching $(l,r)$ are used by the
program to check whether $w(l,r)\in L(e)$, and states matching
$\mathit{dual}(l,r)$ are used to check whether $w(l,r)\notin L(e)$. Using
$\hbox{\linl{Reg}}$ we get the following.
\begin{figure}
\centering
\begin{lstlisting}
Reg($w$, $(l,r)$, $n$) = match $n.\mathit{l}$ with
$*$ -> if ($l = r$) then True else
any (LAM$x$. Reg($w$, $(l,x)$, $n.\mathit{c}_1$) and Reg($w$, $(x,r)$, $n.\mathit{stay}$)) $[l+1,r]$
$\,\cdot$ -> any (LAM$x$. Reg($w$, $(l,x)$, $n.\mathit{c}_1$) and Reg($w$, $(x,r)$, $n.\mathit{c}_2$)) $[l,r]$
$+$ -> Reg($w$, $(l,r)$, $n.\mathit{c}_1$) or Reg($w$, $(l,r)$, $n.\mathit{c}_2$)
$\neg$ -> Reg($w$, $\mathit{dual}(l,r)$, $n.\mathit{c}_1$)
$\cap$ -> Reg($w$, $(l,r)$, $n.\mathit{c}_1$) and Reg($w$, $(l,r)$, $n.\mathit{c}_2$)
$x$ -> $r = l+1$ and $w(l) = x$
\end{lstlisting}
\caption{$\hbox{\linl{Reg}}$ evaluates the regular expression $e$ pointed to by $n$
against an input word $w$ and verifies $w\in L(e)$.}
\label{fig:reg-clause1}
\end{figure}
\begin{theorem}
Regular expression separation for sets of words $\mathit{P}$ and $\mathit{N}$
and grammar $\mathcal{G}$ is decidable in time
$\mathcal{O}(2^{\mathit{poly}(mn^2)}\cdot |\mathcal{G}|)$, where
$n = \max_{w\in \mathit{P}\cup \mathit{N}} |w|$ and $m = |\mathit{P}| + |\mathit{N}|$.
\end{theorem}
\begin{proof}[Proof Sketch]
For all words $w$, positions $1\le i\le j\le |w|+1$, and regular
expressions $e$, we have that $w(i,j)\in L(e)$ if and only if
$(w, \mathit{root}(e), \mathit{C}(\hbox{\linl{Reg}}), (i,j)) \Downarrow_p$ and
$w(i,j)\notin L(e)$ if and only if
$(w, \mathit{root}(e), \mathit{C}(\hbox{\linl{Reg}}), \mathit{dual}(i,j)) \Downarrow_p$. The proof
is by induction on $|w|$ and inner induction on $e$. We have
$|\mathsf{Asp}(w)| = \mathcal{O}(|w|^2)$, and the theorem follows by
\Cref{lemma:mainlemma} and \Cref{complexity-lemma}.
\end{proof}
\section{Related Work}
\label{sec:related-work}
\emph{Expression Learning and Program Synthesis.} Our approach is
inspired by recent results for learning in finite-variable
logics~\cite{popl22}.
Proofs in that work involve direct
automata constructions, and the results can be obtained with our
meta-theorem by writing suitable evaluators. Our work generalizes
the tree automata approach to general symbolic languages by
separating decidable learning theorems into two parts: (1)
identifying the underlying semantic aspects of the language in
question and (2) programming with this new datatype in order to
evaluate arbitrary expressions. The finite-variable restriction for
the logics considered in~\cite{popl22} leads to finitely-many
aspects, namely a finite set of assignments to some $k$ variables. But, as
our work shows, this restriction is not necessary for decidable
learning; several languages we consider do not use variables and it
is unclear what a corresponding variable restriction would
mean. Usual translations of regular expressions to monadic
second-order logic formulae, for instance, do not stay within a
finite-variable fragment. Nevertheless, the recursive semantics of
regular expressions involves subwords, and there are only
finitely-many subwords of a given word, which makes regular
expressions finite-aspect checkable.
Practical algorithms for some of the learning problems we address have
been explored previously, e.g. learning
LTL~\cite{neider-ltl-learning}, regular
expressions~\cite{jagadish-regexp-learning-08,fernau-regexp-learning-09},
and context-free
grammars~\cite{sakakibara-cfg-learning,langley-cfg-learning,vanlehn87-cfg-learning},
but decidable learning results with syntactic restrictions have not
been established.
Other recent work studies the parameterized complexity of learning
queries in $\mathsf{FO}$~\cite{van-bergerem-22}, algorithms for learning in
$\mathsf{FO}$ with counting~\cite{learning-in-fo-with-counting}, and learning
in description logics~\cite{lutz-ijcai2019}. Applications for $\mathsf{FO}$
learning have emerged, e.g., synthesizing
invariants~\cite{aiken-fo-sep,parno-21,distAI,koenig-taming-search-space,ice,madhu-qda}
and learning program properties~\cite{inferring-rep-invariants,
preconditions, learning-contracts}.
Expression learning is connected to program synthesis, and in
particular, programming by example~\cite{flashmeta}, where practical
algorithms have been used to automate tedious programming tasks, e.g.
synthesizing string programs~\cite{flashfill,flashfillplus},
bit-manipulating programs from templates~\cite{sketch}, or functional
programs from examples and type
information~\cite{type-directed-synth-osera,synth-refinement-types-nadia16}. Synthesis
with grammar restrictions follows work in the
\textsc{SyGus}~\cite{sygusJournal} and, more recently, the
\textsc{SemGuS}~\cite{semgus} frameworks.
\emph{Automata for Synthesis.} Connections between automata and
synthesis go back to Church's problem~\cite{church63-journal} on
synthesizing finite state machines that manipulate infinite streams of
bits to meet a given logical specification. This was solved first by
B{\"u}chi and Landweber~\cite{BuchiLandweber69} for specifications in
monadic second-order logic, and later also by
Rabin~\cite{Rabin72}. The idea was to translate the specification into
an automaton, and to view synthesis of a transducer as the problem of
synthesizing a finite-state winning strategy in a game played on the
transition graph of the automaton. The result was a potentially large
transition system, not a compact program. The use of tree automata
that work over syntax trees was advanced in~\cite{madhuCSL11} and has
been used for practical algorithms in several program synthesis
contexts~\cite{fta-data-completion-scripts, Wang2017,
WangWangDilligOOPSLA18,ecta,miltner-angelic,handa-rinard}.
\emph{Decidability in Synthesis.} Many foundational decidability
results in logic and synthesis of finite-state systems rely on
reductions to automata
emptiness~\cite{BuchiLandweber69,Rabin72,automata-logics-games,kpvPneuli,PR89,KMTV00,PR90}.
Recent decidability results for synthesis of uninterpreted programs
involved a reduction to emptiness of two-way tree
automata~\cite{uninterpretedsynth}. Decision procedures for
\textsc{SyGuS} problems in linear and conditional linear integer
arithmetic~\cite{farzan22,reps20-unrealizability} used grammar flow
analysis~\cite{gfa-1991} and an abstraction based on semi-linear
sets.
\emph{Automata for Learning vs. Graph Algorithms.} There is a large
body of work, e.g. see~\cite{Habel1992,courcelle}, on checking
properties of graphs expressed in monadic second-order logic. These
results involve translating logical properties into automata that read
\emph{decompositions of graphs} and accept if the represented graph
has the property. Our work is very differently motivated: we are
interested in properties of syntax trees defined over arbitrary
\emph{fixed} structures (e.g. unrestricted graphs like cliques or
grids), and the properties are motivated by semantics of complex
symbolic languages. Our automata constructions for \emph{learning}
are, conceptually, dual to these constructions from logical
specifications.
\emph{Definability in Monadic Second-Order Logic.} Foundational
results from logic and automata theory connect definability in monadic
second-order logic and recognizability by finite machines, spanning
various classes of structures including finite words and
trees~\cite{weak-so-and-finite-aut-buchi, elgot-61,
trakhtenbrot-61,doner-1970,thatcher-wright}, infinite words and
trees~\cite{Buchi1990, Rabin69}, and graphs with bounded tree
width~\cite{courcelle90}. It follows by definition that for any FAC
language, the semantics over any fixed structure can be captured by a
sentence in monadic second-order logic over syntax trees.
\section{Decidable Learning for String Programs}
\label{sec:discussion}
In this section, we consider Gulwani's language for string
programming~\cite{flashfill}, which is designed to express programs
that transform a sequence $i$ of input strings into an output string
$o$ in the context of spreadsheet programs. This language, which we
refer to as $\textsc{String}$, turns out to be FAC, with the caveat that loops
must use variables from a finite set. We next give an overview of the
syntax and semantics of $\textsc{String}$; details can be found in the original
paper~\cite{flashfill}. Then we discuss how to implement a $\textsc{Facet}$
evaluator that reads $\textsc{String}$ syntax trees and checks whether they map
an input sequence $i$ to an output $o$. The language, being one used
in practice, is considerably more complex than our other examples, and
so we only sketch the main ideas.
\subsection{$\textsc{String}$ Overview}
\label{sec:str-overview}
Programs in $\textsc{String}$ map finitely-many input strings $v_j$ to an output
$o$. A \emph{program} $P\in\textsc{String}$ consists of a switch statement
$\textsf{Switch}((\varphi_1, e_1),...\,,(\varphi_n, e_n))$ that chooses between
expressions $e_i$. The $\varphi_i$ are DNF formulas over atoms
$\textsf{Match}(v_j,r,k)$, which hold if at least $k$ matches for a regular
expression $r$ can be found in the input $v_j$. The $e_i$ in the
switch statement, called \emph{trace expressions}, have the form
$\mathsf{Concatenate}(f_1,...\,,f_n)$. They concatenate expressions $f_i$ that come
in three flavors: (1) $\mathsf{SubStr}(v_j,p_1,p_2)$ selects the substring in
$v_j$ between positions $p_1$ and $p_2$, (2) $\mathsf{ConstStr}(s)$ denotes a
string literal $s$, and (3) $\mathsf{Loop}(\lambda x.\, e)$ iteratively
appends the result of evaluating $e$ until that result is $\bot$,
which is a special value for failure. During iteration $i$, the
variable $x$ is bound to $i$ in $e$.
The positions $p_i$ in $\mathsf{SubStr}(v_j,p_1,p_2)$ are either constant
integers $\mathsf{CPos}(k)$ or they have the form $\mathsf{Pos}(r_1,r_2,c)$, where
the $r_i$ are regular expressions and $c$ is a linear integer
expression built from constants and loop variables, e.g. $2x + 3$. The
expression $\mathsf{Pos}(r_1,r_2,c)$ is evaluated with respect to $v_j$, and
returns a position $t$ such that just to the left of $t$ in $v_j$
there is a match for $r_1$ and starting at $t$ there is a match for
$r_2$. Furthermore, it returns the $c^{\mathit{th}}$ such position, or
$\bot$ if not enough such positions exist. Regular expressions are
restricted in $\textsc{String}$ to only use Kleene star and disjunction in a
particular way, which we ignore. It is no trouble to write a $\textsc{Facet}$
evaluator for a generalization of $\textsc{String}$ that allows unrestricted
(extended) regular expressions like those from
\Cref{sec:regular-expressions}.
Consider a program that extracts capital letters of an input string
(\cite{flashfill}, example $5$).
\begin{center}
\begin{tabular}{| l || c |}
\hline
Input $v_1$ & Output $o$ \\ \hline\hline
\textit{Principles Of Programming Languages} & \textit{POPL} \\ \hline
\end{tabular}
\begin{align*}
\text{Program:} &\quad\mathsf{Loop}(\lambda x.\,
\mathsf{Concatenate}(\mathsf{SubStr2}(v_1,\, \mathsf{UpperTok},\, x))) \\
\text{where}&\quad\mathsf{SubStr2}(v_j,\, r,\, c) \,\equiv\, \mathsf{SubStr}(v_j,\, \mathsf{Pos}(\epsilon, r, c),\, \mathsf{Pos}(r, \epsilon, c))
\end{align*}
\end{center}
This program uses $\mathsf{SubStr2}(v_j, r, c)$ to compute the
$c^{\mathit{th}}$ match of the regular expression $r$ in $v_j$. This
is used to extract the $x^{\mathit{th}}$ upper case letter in
iteration $x$ of the loop, which is then appended to previously
extracted letters. The loop exits when the body evaluates to $\bot$,
which happens when there are no more matches for $\mathsf{UpperTok}$.
\subsection{Decidable Learning for $\textsc{String}$}
\label{sec:decid-learn-str}
We describe how a $\textsc{Facet}$ program should evaluate the constructs in
$\textsc{String}$. Fix a set of input strings $i=v_1,\ldots, v_n$ and an output
string $o$. The evaluator reads $P\in\textsc{String}$ and checks that it maps $i$
to $o$. We mention specific choices for representing syntax trees as
needed.
The $\textsf{Switch}$ statement can be modeled with a ternary symbol
$\textsf{Switch}(\varphi,e,\mathsf{rest})$, where $\mathsf{rest}$ represents the
rest of the cases with nested operators of the same kind. Upon reading
$\textsf{Switch}$, the program branches to verify \emph{either} the conditional
$\varphi_1$ holds and $e_1$ produces $o$ \emph{or} $\varphi_1$ does
not hold and the rest of the $\textsf{Switch}$ produces $o$.
The DNF formulae $\varphi_i$ can be easily evaluated with the Boolean
operators in $\textsc{Facet}$. An atom $\textsf{Match}(v_i,r,k)$ can be represented with
a binary symbol $\textsf{Match}_{v_i}(r,k)$, one for each $v_i$, with right
child a \emph{unary} representation of integer $k$, i.e. $s^k(0)$. To
check $\textsf{Match}_{v_i}(r,s^k(0))$, the program evaluates the right child to
determine the value of $k$. Crucially, it can reject if $k$ exceeds
$|\mathsf{subs}(v_i)|$, which upper bounds the maximum number of matches for
any regular expression over $v_i$. Having determined $k$, the program
can branch over all $\binom{N}{k}$ combinations of subwords that could
witness the requisite $k$ matches, with $N=|\mathsf{subs}(v_i)|$. For each
subword, the program executes $\hbox{\linl{Reg}}$ (\Cref{fig:reg-clause1}) as a
subroutine to check whether it matches the regular expression in the
left child.
It remains to interpret $e$ and verify it produces an output $o$. We
represent $\mathsf{Concatenate}(f_1,...\,,f_n)$ in a nested way like $\textsf{Switch}$, and
binary $\mathsf{Concatenate}(f,f')$ is evaluated as for regular expressions by
branching on all ways to split $o$ (or one of its subwords) into
consecutive subwords $w$ and $w'$, with $f$ and $f'$ then verified to
produce $w$ and $w'$.
The expressions $f$ are verified to yield a given word as
follows. Literals $\mathsf{ConstStr}(s)$ are represented with nested
concatenation and thus follow the same idea as
$\mathsf{Concatenate}(f,f')$. Substrings $\mathsf{SubStr}(v_j,p_1,p_2)$ are modeled with
binary symbols $\mathsf{SubStr}_{v_j}(p_1,p_2)$, one for each $v_j$. Having
determined the values of positions $p_i$, the program can simply use a
function for equality of subwords. Constant positions $\mathsf{CPos}(k)$ are
determined as before except the program rejects if $k$ exceeds
$|v_j|$. To evaluate $\mathsf{Pos}(r_1,r_2,c)$, represented as a ternary
operator, the program guesses a position $t$ in $v_j$ and verifies
existence of matches for $r_1$ and $r_2$ to the left and right of
$t$. It further verifies there are $c-1$, \emph{but not} $c$ such
positions to the left of $t$. This is accomplished by branching on the
possible $\binom{t-1}{c-1}$ combinations of positions and checking for
the requisite matches, and then checking the opposite for each of the
$\binom{t-1}{c}$ combinations. Finally, integer expressions
$c= k_1x + k_2$
can be evaluated by hardcoding rules for bounded arithmetic, because
the maximum value that loop variables can take is bounded by $|o|$,
which we discuss next.
Provided the number of loop variables is finite, the program can
evaluate loops using a map $\gamma$ from variables $\{x_i\}$ to
integers. The integers are bounded because the loop body $e$ must
produce a string of non-zero length (otherwise the loop terminates),
and loop expressions are only ever verified to produce words of length
no more than $|o|$. Since each iteration must productively decompose a
word of length bounded by $|o|$, we can use $|o|$ as a bound on the
range of $\gamma$. Thus the $\gamma$ have finite domain and range and
require finitely-many states. Now, suppose the program encounters a
loop $\mathsf{Loop}(\lambda x.\, e)$ with current variable map $\gamma$, and
suppose it must verify the loop produces a word $w$. It first sets
$\gamma(x)=1$. Then it guesses a decomposition of $w$ into $w_1w_2$
such that $e$ evaluated with $\gamma$ produces $w_1$ and
$\mathsf{Loop}(\lambda x.\, e)$ evaluated with
$\update{\gamma}{x}{\gamma(x)+1}$ produces $w_2$.
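As a plain reference semantics for the position operators (not the
branching evaluation sketched above), $\mathsf{Pos}(r_1,r_2,c)$ can be read as
follows in Python; this is our sketch, and it leans on the \texttt{re}
module, so the regular expressions are assumed to be written in that
syntax.
\begin{lstlisting}
import re

def pos(v, r1, r2, c):
    # Reference reading of Pos(r1, r2, c) on input string v: the c-th
    # (1-based count) 0-based position t such that a match of r1 ends at t
    # and a match of r2 starts at t; None if fewer than c such positions.
    def ok(t):
        left = re.fullmatch(f'(?s).*(?:{r1})', v[:t]) if r1 else True
        right = re.match(f'(?:{r2})', v[t:]) if r2 else True
        return left and right
    hits = [t for t in range(len(v) + 1) if ok(t)]
    return hits[c - 1] if len(hits) >= c else None

# With UpperTok read as the class [A-Z], the 2nd capital letter of the
# running example starts at position 11 (the 'O' of 'Of').
assert pos('Principles Of Programming', '', '[A-Z]', 2) == 11
\end{lstlisting}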
We conclude by \Cref{lemma:mainlemma} that learning $\textsc{String}$ programs
from examples is decidable, even for the generalization that allows
unrestricted regular expressions (which~\cite{flashfill} disallows).
|
2,877,628,091,169 | arxiv | \section{Introduction}
Dark matter (DM) contributes a significant fraction to the energy density of the Universe, with an abundance about five times as large as that of baryonic matter. However, our knowledge about this prevalent form of mass in the Universe is quite limited. The dark sector could very well consist of several distinct species of particles, each with their own interactions and dynamics.
Observations of the Bullet Cluster (1E 0657-558) and galactic DM halos suggest that most of DM is collisionless~\cite{Markevitch:2003at, Randall:2007ph, Peter:2012jh}. Nevertheless, there are hints that at least a subdominant component of DM might be self-interacting. Apart from the missing satellite and cusp vs.~core problems, some observations of galaxy clusters support this claim. A recent weak lensing study of the Abell 3827 cluster \cite{Massey:2015oza} discovered an offset between the distribution of DM and the visible stars in the central galaxies of the cluster, which can be interpreted as a signature of DM self-interactions. Similarly, the distribution of DM in the Abell 520 cluster has been reconstructed from weak gravitational lensing \cite{Mahdavi:2007yp, Jee:2014hja}, and it has been suggested that a mass peak coinciding with the visible hot gas is required.
It is difficult to conceive how these observations could be explained with collisionless DM, and in the case of Abell 520, assuming that all of dark matter is self-interacting does not result in a correct DM distribution either~\cite{Kahlhoefer:2013dca}. Instead, an explanation might require that a subdominant component of DM behaves similarly to the visible ionized gas---developing shock fronts while its kinetic energy is converted into heat. In a cluster collision this interacting component of DM will then be subject to dissipation and consequently slow down, analogously to the X-ray emitting hydrogen gas, thus explaining the excess of dark mass on top of the visible gas in Abell 520.
The possible observation of DM plasma is compatible with recently proposed models of a dark disk within our own Galaxy \cite{Fan:2013yva}. In these models, the galactic dark plasma collapses into a thin disk through radiative cooling. In order for this collapse to occur within the lifetime of the galaxy, a light ``dark electron'' is required, which is, however, not necessary for generating the mass distribution observed in the Abell 3827 and 520 clusters.
As a minimal model capturing the essential features of plasma dynamics, we study a specific form of DM, a dark pair plasma consisting of vector-like fermions and anti-fermions that interact via dark photons---gauge bosons of an unbroken dark $U(1)$ gauge symmetry. This model can explain the existence of starless DM halos in galaxy cluster mergers, as well as solve some well-known structure formation problems, while it is only weakly constrained by Cosmic Microwave Background (CMB) measurements and Big Bang Nucleosynthesis (BBN). Effects of energy dissipation in the dark sector have been considered previously \cite{Foot:2014uba}, specifically in the context of mirror dark matter \cite{Silagadze:2008fa,Foot:2013nea,Foot:2014mia}. Some of the earlier studies of $U(1)$ charged DM \cite{Ackerman:mha,Feng:2009mn,McDermott:2010pa,Holdom:1985ag,Baek:2013qwa,Baek:2013dwa} have considered Debye shielding in the dark plasma. The possible significance of the Weibel instability in galaxy collisions was mentioned in \cite{Ackerman:mha}. Recent advances in the physics of pair plasmas \cite{Bret:2009, Bret:2013qva, Bret:2014ufa} allow us to draw a more definitive picture in this work.
Up to now, studies of DM self-interactions in cluster collisions have mainly focused on the effects of individual scattering events. The aim of the present work is to point out that two-body scattering is negligible compared to the collective plasma effects, and that plasma physics should be the starting point for understanding the phenomenology of charged dark matter. We argue that models where all of DM is charged are ruled out by observations of cluster collisions unless the charge is extremely small. By keeping this in mind we propose a scenario where only a subcomponent of DM is charged.
A dark pair plasma is an appealing form of DM both theoretically as well as experimentally. We show that it can be a thermal relic, avoiding the complicated production mechanisms required for asymmetric DM such as atomic DM \cite{Kaplan:2009de,Kaplan:2011yj,Cline:2012is,CyrRacine:2012fz,Fischler:2014jda}. It does not generate dark acoustic oscillations (DAO) at observable scales, is consistent with CMB measurements, and its dark halos do not collapse to disks, due to inefficient dark bremsstrahlung. In addition, dark $U(1)$ models may solve one of the major unexplained puzzles of the standard model (SM)---the hierarchy of Yukawa couplings that spread over at least 6 orders of magnitude \cite{Gabrielli:2013jka}.
\section{Dark Plasma Dynamics}
\subsection{A Minimal Model}
Plasma is a state of matter in which collective effects, mediated by long range interactions, dominate over hard, short-range collisions of particles. The minimal model of a dark plasma is a pair plasma consisting of thermally produced vector-like dark fermions and anti-fermions. The Lagrangian of the dark sector is dark QED:
\begin{align}
\mathcal{L}
= \mathcal{L}_{SM} - \frac{1}{4} F_{D\mu\nu}F^{\mu\nu}_{D} + \bar{\chi}\left(i \slashed{D} - m_{D}\right)\chi,
\end{align}
where $D_\mu = \partial_\mu - i e_{D} A^{D}_{\mu}$ is the covariant derivative, $F_D^{\mu\nu}$ is the field tensor of the dark photon $A^{D}_{\mu}$, $\chi$ is the interacting DM component with mass $m_{D}$, and $e_{D}$ is the dark $U(1)$ charge.
This Lagrangian can be extended by including a kinetic mixing term $F_{D\mu \nu} F^{\mu \nu}$ that results in electrically charged DM. Such a term is severely constrained by recombination and halo dynamics \cite{McDermott:2010pa}. Therefore, based on experimental results, we assume it to be negligible for the rest of this work.\footnote{To avoid generating the problematic kinetic mixing term radiatively \cite{Holdom:1985ag}, we have to assume that particles charged under both the hidden $U(1)$ and the SM hypercharge do not exist, or that they only exist in complete non-degenerate multiplets of a unifying gauge group. In this case, setting the tree-level kinetic mixing term to zero does not require any finetuning.}
Nevertheless, we require an interaction between the two sectors at some high scale to bring them into thermal equilibrium. It is in principle enough to just assume that the visible and hidden sectors are connected by their coupling to the inflaton, and therefore share a common reheating temperature. Here we will omit the details of this interaction, since they are irrelevant to the purpose of this paper. Instead we refer to \cite{Fan:2013yva} where various possibilities for generating the coupling between the visible and hidden sectors without inducing the kinetic mixing term are reviewed.
We then assume that the dark sector is populated in the early Universe through freeze-out of some feeble interaction with the SM above the electroweak scale. Therefore the temperatures of the two sectors coincide at this time, but will evolve differently as the relativistic degrees of freedom drop out of equilibrium in the two sectors. The ratio of the dark sector temperature $T_{D}$ to the photon temperature $T_{\gamma}$ is fixed by entropy conservation
\begin{align}\label{eq:TD/T}
\zeta
\equiv \frac{T_{D}}{T_{\gamma}}
= \left(\frac{g_{*s,\gamma}(T_{\gamma})/g_{*s,\gamma}(T_{*})}{g_{*s,D}(T_{D})/g_{*s,D}(T_{*})}\right)^{1/3},
\end{align}
where $g_{*s,D}$ and $g_{*s,\gamma}$ are the numbers of relativistic degrees of freedom in the two sectors, and $T_{*}$ is the temperature at which the dark and visible sectors were presumably in thermal equilibrium.
The relic density of the fermionic interacting DM is fixed through freeze-out of the annihilation into dark photons. The thermally averaged cross section for the process $\chi \bar{\chi} \rightarrow \gamma_D \gamma_D$ in the limit $T_{D}\ll m_D$ is
\begin{align}
\langle \sigma v \rangle
= \frac{\alpha_D^2\pi}{m_D^2}
+ \mathcal{O}\left(\frac{T_D}{m_D}\right),
\label{eq:cross section}
\end{align}
where $\alpha_D = e_{D}^{2}/4\pi$ is the fine structure constant of the dark $U(1)$. The Sommerfeld enhancement of this cross section has a negligible impact on the abundance of the dark fermions \cite{Feng:2009mn} and will be ignored. By solving the Boltzmann equation (using the procedure described e.g. in \cite{Gondolo:1990dk}) in terms of the dark sector temperature and expressing the final result in terms of the temperature of the visible sector we can estimate the relic abundance of the interacting DM component as
\begin{align}
\Omega_{\chi} h^2
\approx0.3 \frac{\sqrt{g_{*}}}{g_{*s,\gamma}}
\left(\frac{m_D}{100 {\rm GeV}}\right)^{2}
\left(\frac{\alpha_{EM}}{\alpha_{D}}\right)^{2}
\left(\frac{x_f \zeta_f}{25}\right),
\end{align}
where $\alpha_{EM} = 1/137$, $x = m_D/T_{D}$ and $g_{*}$ is the effective number of relativistic degrees of freedom in both sectors, evaluated at the time of freeze out. The freeze-out temperature can be approximated by
\begin{align}
x_f \approx 26 + \ln\left(\left(\frac{100}{g_{*}}\right)^{1/2}\left(\frac{100 {\rm GeV}}{m_D}\right)\left(\frac{\alpha_{D}}{\alpha_{EM}}\right)^{2}\right)
\end{align}
and the ratio of the hidden and visible sector temperatures at freeze out assumes values in the range \mbox{$\zeta_f \in (0.5,1.5)$}, depending on the fermion mass.
We set this relic abundance to $\Omega_{\chi} = \xi\,\Omega_{\mathrm{CDM}}$, where $\xi$ is the fraction of the interacting species and \mbox{$h^2 \Omega_{\mathrm{CDM}} = 0.1198\pm0.0015$} \cite{Planck:2015xua} is the overall DM abundance. Thus, for $\xi=30$\% interacting DM, the expected relic abundance is obtained approximately if
\begin{align}
\alpha_D \approx 10^{-4} \frac{m_D}{{\rm GeV}}.
\end{align}
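This relation can be checked numerically from the freeze-out formulas above. In the following sketch of ours, $g_{*}$, $g_{*s,\gamma}$ and $\zeta_f$ are frozen at illustrative constant values, so the printed fractions agree with $\xi = 0.3$ only up to an order-one factor, as expected for an estimate of this kind.
\begin{lstlisting}
import math

alpha_em, g_star, g_star_s = 1 / 137, 100.0, 100.0
omega_cdm_h2 = 0.1198

def omega_chi_h2(m_D, alpha_D, zeta_f=1.0):
    x_f = 26 + math.log(math.sqrt(100 / g_star) * (100 / m_D)
                        * (alpha_D / alpha_em) ** 2)
    return (0.3 * math.sqrt(g_star) / g_star_s * (m_D / 100) ** 2
            * (alpha_em / alpha_D) ** 2 * x_f * zeta_f / 25)

for m_D in (0.01, 1.0, 100.0):  # GeV
    ratio = omega_chi_h2(m_D, 1e-4 * m_D) / omega_cdm_h2
    print(m_D, ratio)  # stays within an O(1) factor of xi = 0.3
\end{lstlisting}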
\subsection{Collisionless Shocks}
Collisionless shocks are a prevalent phenomenon in astrophysical plasmas \cite{Bret:2015qia}, and have been observed e.g.~in the Earth's bow shock, in the expansion of supernova remnants into the interstellar medium, and in the behavior of the ionized gas in galaxy collisions. In all those situations, the mean free path of particles in the plasmas is orders of magnitude larger than the physical size of the shock fronts.
The physics of collisionless shocks, which arise from collective plasma instabilities, is an active research topic. Laboratory experiments and computer simulations have investigated the formation of instabilities in relativistic and non-relativistic plasmas consisting of electrons and protons, and of electron/positron pairs.
The formation of collisionless shocks can be roughly divided into two phases \cite{Bret:2013qva}. The first phase consists of the buildup of the instabilities, followed by their saturation. The buildup phase can be studied by linear approximations and is relatively well understood. Depending on the type of plasma, different instability modes can dominate~\cite{Bret:2009}.
In the counter-streaming situation that is relevant in cluster mergers, initial fluctuations in the counter-streaming electric currents in the plasma give rise to magnetic fields. These fields enhance the electric currents, which in turn enhance the magnetic fields, leading to an exponential growth.
Finally, the field strength will be large enough to stop the stream of incoming particles, and a shock front develops.
The second phase, where the electromagnetic fields and currents reach saturation, is highly nonlinear and can be studied only by means of numerical simulations. Most of the dissipation of kinetic energy takes place in this phase. This dissipation can be roughly understood as an effect caused by scattering of the upstream particles off the strong electric and/or magnetic fields generated during the first phase.
The effect of the collisionless shocks in a cluster merger can be understood as follows. The DM halos of the colliding subclusters are initially in a stable equilibrium state. Once they begin to overlap, an unstable counter-streaming situation arises, and a shock front quickly develops as described above. This shock front then propagates through the subcluster halo, heating and slowing down the interacting component of DM, similarly to what is seen in the X-ray emitting visible gas. Consequently, a generic prediction of the dark plasma model is the existence of bow-shaped dark matter shock fronts, which should be visible in the weak lensing reconstructions of cluster merger events, given sufficient angular resolution.
Our suggestion is that these effects might have already been discovered in the Abell 520 and 3827 clusters. In Abell 520 an excess of DM on top of the visible X-ray emitting plasma between the subclusters is observed. This can be interpreted as the interacting component of dark matter that was slowed down due to the shocks, similarly to the visible plasma.
In Abell 3827 the separation between the stars and the center of mass of the dark matter in the central galaxies can be interpreted along the lines of \cite{Kahlhoefer:2015vua}: The interacting component of DM in the galaxies is counter-streaming against the DM in the main cluster halo. The resulting shocks create an effective drag force slowing down the interacting DM component of the galaxies, resulting in a separation between this component and the rest of the mass. Therefore, the center of mass of the total dark matter distribution is separated from the stars, even though the main component of DM remains on top of the stars in this scenario. Given a high enough resolution, this effect should be observable as separated starless dark matter clumps, similar to what is observed on a larger scale in Abell 520.
\subsection{Shock Formation Time Scale}
To show that these effects are indeed relevant in a typical cluster merger, we shall now examine the fundamental characteristics of the dark plasma---its Debye length, the plasma parameter and the plasma frequency. For the colliding intracluster DM halos, we set the size to $R=200$ kpc and the mass to $M=4 \cdot 10^{13} M_\odot$, corresponding to the dimensions of the colliding Abell 520 subclusters as determined from weak lensing data \cite{Jee:2014hja}. Assuming a uniform distribution and an interacting fraction $\xi = 0.3$, the average density of the interacting DM is then $1.36 \cdot 10^{-2}$ GeV/cm$^3$.
This information, together with the interaction strength $\alpha_D$, is sufficient to calculate the time scale for the plasma instabilities. The mean free path of DM particles depends on the temperature of the self-interacting DM component, which we can estimate from the virial theorem for each of the colliding sub-clusters
\begin{equation}
T_{\mathrm{vir}} = \frac{G_N M m_D}{n_{\mathrm{dof}} R} = \frac{M m_D}{3 m_{P}^2 R} = 3.2 \cdot 10^{-6} \, m_D.
\label{eq:dm_temperature}
\end{equation}
Here $G_N=1/m_{P}^2$ is Newton's constant, and $n_{\mathrm{dof}} =3 $ is the number of degrees of freedom of a single particle.
Returning to the dark plasma, the Debye length in our minimal model is
\begin{equation}
\lambda_D
= \sqrt{\frac{T_{\rm vir}}{4 \pi \alpha_D n}}
\approx \, 30.7 \mbox{ km } \sqrt{\frac{m_{D}}{\mbox{GeV}}} .
\end{equation}
If two particles are separated by more than this distance, the Coulomb interaction between them is effectively shielded by free charge carriers. Clearly this distance is diminutive in comparison with astrophysical scales.
The plasma parameter $\Lambda$ is defined as the number of charge carriers within a sphere of radius $\lambda_D$,
\begin{equation}
\Lambda
= \frac{4 \pi}{3} \; \lambda_D^3 n
\approx 1.7 \cdot 10^{18} \, \sqrt{\frac{m_{D}}{\mbox{GeV}}} .
\end{equation}
Since $\Lambda \gg 1$, the plasma is weakly coupled and collective effects caused by long-range forces dominate plasma dynamics. The characteristic time scale for collective plasma effects is the inverse of the plasma frequency
\begin{equation}
1/\omega_{p}
= \left( \frac{m_D}{4 \pi \alpha_D n} \right)^{1/2}
\approx 57.2 \; \mbox{ms} \; \sqrt{\frac{m_D}{\mbox{GeV}}},
\end{equation}
where $n$ is the number density and $m_D$ the mass of the DM particles.
The mean free path of charged particles in plasma is of the order of \cite{Kulsrud:2004}
\begin{equation}
\lambda_{\mbox{\scriptsize mfp}}
= \lambda_D \; \frac{\Lambda}{\log \Lambda}
\approx 39.4 \mbox{ kpc} \, \left( \frac{m_{D}}{\mbox{GeV}} \right) .
\end{equation}
The time it takes to form a collisionless shockwave can be estimated by considering the instability growth rate. In a symmetric non-relativistic collision of two cold counter-streaming pair plasmas, the dominant instability mode is the two-stream mode, for which the growth rate is of the order of the plasma frequency \cite{Bret:2009}. A realistic description of the collision certainly requires a more complicated set-up than a simple cold counter-streaming plasma, as the latter does not account for possible inhomogeneities or for the temperature and velocity dispersion inside the cluster. These considerations should not, however, significantly affect the order-of-magnitude estimate given here. A conservative order-of-magnitude estimate of the time scale of shockwave formation is then
\begin{align}\label{eq:c_shock_t}
\tau_{s}
\approx 10^{3} \omega_{p}^{-1}
\approx 57.2 \; \mbox{s} \; \sqrt{\frac{m_D}{\mbox{GeV}}} .
\end{align}
The distance for which plasma instabilities become relevant can be estimated by multiplying our estimate for the shock formation time in eq.~(\ref{eq:c_shock_t}) by the typical speed of a cluster collision,
\begin{equation}
\lambda_s \approx \tau_s v_{\mathrm{col}} \sim 10^5 \ {\rm km},
\end{equation}
which is certainly much smaller than the mean free path. Thus, the plasma instabilities affect the dynamics of the interacting DM component at time and distance scales much shorter than those of two-body scattering processes. This leads us to conclude that DM charged under an unbroken $U(1)$ interaction should be treated primarily as a plasma that develops collisionless shocks at relatively small scales. Consequently, at larger scales it behaves effectively as a collisional fluid, even if the two-body scattering rate seems to suggest that collisions are insignificant.
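The quoted numbers follow from the formulas above by unit conversion alone; the following sketch of ours evaluates them for $m_D = 1$ GeV in natural units.
\begin{lstlisting}
import math

# Evaluate the plasma parameters quoted above for m_D = 1 GeV (our sketch).
hbar_s = 6.582e-25        # GeV * s
hbarc_cm = 1.973e-14      # GeV * cm
kpc_cm = 3.086e21

m_D = 1.0                              # GeV
alpha_D = 1e-4 * m_D                   # relic-abundance relation
n = 1.36e-2 / m_D * hbarc_cm**3        # quoted GeV/cm^3 -> density in GeV^3
T_vir = 3.2e-6 * m_D                   # virial temperature, GeV

lam_D = math.sqrt(T_vir / (4 * math.pi * alpha_D * n))       # GeV^-1
Lam = 4 * math.pi / 3 * lam_D**3 * n
inv_omega_p = math.sqrt(m_D / (4 * math.pi * alpha_D * n))   # GeV^-1
lam_mfp = lam_D * Lam / math.log(Lam)

print('Debye length   ', lam_D * hbarc_cm / 1e5, 'km')       # ~30.7
print('plasma param   ', Lam)                                # ~1.7e18
print('1/omega_p      ', inv_omega_p * hbar_s * 1e3, 'ms')   # ~57
print('mean free path ', lam_mfp * hbarc_cm / kpc_cm, 'kpc') # ~39
print('shock time     ', 1e3 * inv_omega_p * hbar_s, 's')    # ~57
\end{lstlisting}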
We are not aware of studies of non-relativistic pair plasmas. However, a study of relativistic pair plasma collisions \cite{Bret:2014ufa} suggests a time scale that is even an order of magnitude smaller than estimated in eq.~(\ref{eq:c_shock_t}). Our argument here is based on the instability growth rates in the theoretically well understood linear regime. A numerical study, which will provide a more conclusive treatment, is under preparation.
\subsection{Atomic Dark Plasma}
Our minimal model assumes that the interacting subcomponent of DM exists in the form of a collisionless pair plasma. It is also conceivable that the dark plasma could consist of light ``dark electrons'' and more massive ``dark protons'', thus imitating the visible sector. Such scenarios of atomic DM require a non-thermal, asymmetric production history. Here we do not discuss any such models in detail, but note that any mass non-degeneracy of the particles making up the plasma does not significantly affect the estimate of the instability growth rate \cite{Bret:2009}. Therefore the collisionless shock behavior will also dominate the dynamics of atomic dark plasmas.
An interesting possibility for explaining both the subdominant interacting DM species as well as the collisionless main component could be a model of partially ionized atomic dark matter, where the dark plasma consists of dark ions and dark electrons, while the main component of DM is composed of the neutral atoms.
\section{Observational Constraints}
\subsection{Bullet Cluster}
This work was motivated by observations of the Abell 3827 and 520 clusters, but many similar systems exist \cite{Harvey:2015hha}. Perhaps the most unambiguous of those is the Bullet Cluster, from which constraints have been derived on the interactions of DM. It has been shown that no more than 30\% of the total DM mass can be lost from the subcluster as it passes through the main halo \cite{Markevitch:2003at}. Thus, throughout this work we will assume that the fraction of interacting dark matter species obeys \mbox{$\xi \leq 0.3$}. This rules out models where all of dark matter is charged under an unbroken $U(1)$, as long as the coupling constant is not negligibly small.
\subsection{BBN}
Changing the properties of DM changes the dynamics of the early Universe, and therefore any deviations from the usual $\Lambda$CDM model will be strongly constrained. New relativistic degrees of freedom change the expansion history of the Universe during BBN, leading to a constraint which is usually expressed as a limit on the effective number of light neutrino species \cite{Planck:2015xua},
\begin{align}\label{eq:bound_Neff}
N_{\mathrm{eff}} = 3.04 + 2 \zeta_{BBN}^{4} = 3.15 \pm 0.23\,.
\end{align}
Assuming one species of fermions, we see that the ratio \eqref{eq:TD/T} of the temperatures of the dark and visible sectors at BBN is $\zeta_{BBN} = 0.52$, implying that $N_{\mathrm{eff}} = 3.18$. Therefore Eq.~\eqref{eq:bound_Neff} is satisfied within $1\sigma$.
By considering an extended scenario with $N_D$ effective relativistic dark fermions at $T_{*}$, the bound in eq.~\eqref{eq:bound_Neff} implies
\begin{align}\label{eq:bound2_Neff}
N_D = 0.68 \pm 1.67\,.
\end{align}
Note that if all fermion masses exceed $T_{*}$, then $N_D = 0$ and eq.~\eqref{eq:bound_Neff} is always satisfied.
\subsection{CMB}
Dark matter is usually considered to be pressureless, so that all primordial DM density fluctuations start to grow immediately after they enter the horizon. If a subcomponent of DM interacts with massless dark photons, the growth of structure is suppressed until the DM and dark photons kinetically decouple. This effect could be observed in the Cosmic Microwave Background as a suppression of fluctuations at large multipole moments.
The kinetic decoupling of the dark fermions and the dark photons occurs when the Compton scattering rate in the dark plasma drops below the Hubble rate \cite{Bringmann:2006mu}. The Compton scattering rate for the dark plasma is
\begin{equation}
\Gamma_C = \frac{64\pi^3\alpha_D^2T_D^4}{135 m_D^3},
\end{equation}
and the Hubble rate in the radiation dominated epoch is
\begin{equation}
H=\sqrt{\frac{4\pi^3}{45}g_*}\frac{T_\gamma^2}{m_P},
\end{equation}
where $g_*$ is the effective number of relativistic degrees of freedom in the visible sector, with $g_*=3.36$ at temperatures well below the electron mass, and we have neglected the subdominant contribution of the colder dark sector. Setting $\Gamma_C=H$ we obtain the temperature of kinetic decoupling
\begin{equation}
T_{\rm kin} = \left(\frac{4 g_*}{45\pi^3}\right)^\frac14\frac{\sqrt{135}}{8\zeta_{\rm kin}^2}\frac{m_D^\frac32}{\sqrt{m_P}\alpha_D},
\end{equation}
where $\zeta_{\rm kin}$ is the ratio of the dark and visible photon temperatures at the time of kinetic decoupling. The determination of the exact effect of the DM/dark radiation coupling on the CMB is beyond the scope of this letter. Here we will simply require that decoupling happens at \mbox{$T_{\rm kin} > 640$ eV}, so that the DM/dark radiation coupling only affects multipoles $l>2500$ and is thus unconstrained by the Planck data. This will lead to a more conservative limit than what would be allowed by a more detailed analysis, but will be used here as a robust constraint.
It should be noted that values near the lower end of this limit, or slightly below it, can help to alleviate the \emph{missing satellites} problem \cite{Bringmann:2006mu,Chu:2014lja}. The cut-off on the size of the gravitationally bound DM structures due to kinetic coupling with the dark radiation is given by \mbox{$\sim 10^{-4}(T_{\rm kin}/10\ {\rm MeV})^{-3}M_\odot$} \cite{Loeb:2005pm}, so that for \mbox{$T_{\rm kin}\approx 0.5$ keV} the cut-off is at $\sim 10^9M_\odot$, as required to ease the missing satellites problem. However, in our case only a subdominant part of DM is coupled to the dark radiation and thus the cut-off will only affect the interacting fraction of DM, so that structures smaller than the cut-off will still exist, only in fewer numbers.
Figure \ref{fig:alpha_vs_mD} depicts the kinetic decoupling constraint on the $(m_D,\alpha_D)$-plane, as well as the parameter space region compatible with the relic abundance. Restricting to the region that produces the desired relic abundance, the kinetic decoupling limit gives a lower limit on the mass of the DM particle.
For $\xi = 0.3$ this limit is roughly \mbox{$m_D \gtrsim 5$ MeV}. We show also the upper limit from requiring that the Landau pole of the dark $U(1)$ lies above the Planck scale, which for $\xi = 0.3$ corresponds to roughly \mbox{$m_D \lesssim 2$ TeV}.
It should be stressed that the constraints on dark matter self-interactions, including the kinetic decoupling constraint depicted in the figure, are in reality functions of $\xi$, naturally vanishing as $\xi$ goes to zero. For simplicity we only show the conservative limit, requiring that no effects are generated at observable scales in the CMB. On the other hand, due to the constraints from the Bullet Cluster discussed above, values of $\xi$ much larger than 0.3 are excluded for the complete range of $\alpha_D$ shown in the figure.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{alpha_vs_mD.pdf}
\caption{The constraints on the parameter space of the dark pair plasma model. The black contours show the relic abundance of the dark plasma, as a fraction of the total DM abundance. The orange shaded region is disfavored by the kinetic decoupling constraint, and the lower limit of that region is favored for alleviating the missing satellites problem. The blue shaded region is excluded, in absence of a UV completion, if we require that the Landau pole of the dark $U(1)$ coupling is above the Planck scale.}
\label{fig:alpha_vs_mD}
\end{center}
\end{figure}
In the case of atomic DM, there is an additional constraint from the acoustic oscillation peak that results from the recombination of dark atoms \cite{Cyr-Racine:2013fsa}. This turns out to be very constraining on the parameter space of the atomic DM, but it still leaves some plausible parameter space for, \emph{e.g.}, the model proposed in \cite{Fan:2013yva}.
\subsection{Dark Halo Stability}
The dark pair plasma will heat up and become virialized both in galactic DM halos and in galaxy cluster collisions. Similar to visible matter, it can dissipate heat through dark bremsstrahlung and through Compton scattering off dark photons. The characteristic time scale for dark bremsstrahlung cooling for the plasma parameters given above is \cite{Fan:2013yva}
\begin{equation}
t_{\mathrm{brems}} \approx \frac{3}{16} \; \frac{m_D^{3/2} T_{\rm vir}^{1/2}}{n_D \alpha_D^3} \approx 6.7 \cdot 10^{19} \; \mbox{yr},
\end{equation}
where the dependence on the DM mass cancels for our model: cooling becomes more efficient for smaller DM mass, but this is offset by the smaller coupling constant required to produce the correct relic density. The characteristic time scale for dark Compton scattering is even larger and can be neglected for all reasonable values of the DM mass and coupling.
A thermally produced pair plasma can therefore not efficiently dissipate heat, so that dark pair plasma halos do not collapse to disks. This is in contrast to atomic DM, consisting of asymmetric light and heavy dark fermions. Because of the non-thermal production mechanism, the dark coupling strength is here independent of the DM density. Atomic DM with light ``dark electrons'' can therefore cool down sufficiently fast to collapse to a dark disk within the lifetime of the Universe. However, the counter-streaming instabilities in the dark plasma might give rise to nontrivial substructure within the galactic DM halo. Conclusive treatment of this issue would require numerical simulations, and is beyond the scope of this letter.
\section{Discussion and Conclusions}
In this letter we considered the possibility that a subdominant component of DM has long-range interactions and exists as a dark pair plasma. We found that a thermally produced dark pair plasma is necessarily collisionless, but will self-interact through collective plasma effects. The possibility of forming collisionless shocks will then modify the dynamics of galaxy cluster collisions, leading to effects such as an offset of DM halos as in Abell 3827, or to an isolated DM clump as in Abell 520.
To our knowledge, no numerical simulations or experiments have so far directly investigated non-relativistic collisionless pair plasmas. Our argument is however quite general, based on instability growth rates in the linear regime. For an atomic DM plasma which imitates the SM, the analogous behavior of visible astrophysical plasmas can be directly used as a proof that collisionless shocks should also exist in the dark sector. Nevertheless, numerical studies of non-relativistic dark pair plasmas would be desirable for a more thorough treatment.
Ultimately, collisionless shocks are an efficient form of DM self-interaction, explaining the features observed in the Abell 3827 and 520 clusters. If a galaxy moves through an intra-cluster medium of interacting DM, a drag force between the halo DM and the background cluster dark plasma is generated through plasma instabilities, potentially distorting halo shapes or even stripping galaxies of the interacting DM component. In cluster mergers, bow-shaped dark matter shock fronts are expected to be observable in high resolution weak lensing studies. Thus a dark pair plasma offers spectacular signatures for its discovery, and we encourage the astrophysics community to look for these effects.
\section*{Acknowledgments} We thank Boris Z.~Deshev for bringing the features of Abell 520 to our attention, as well as Antoine Bret and Luca Marzola for useful discussions. This work was supported by grants MJD435, MTT60, IUT23-6, CERN+, and by the EU through the ERDF CoE program.
\section{Introduction}
\noindent{\bf{\em Introduction:}}
While stationary black holes in 4 spacetime dimensions (4D) are
stable to perturbations, higher dimensional analogues are not. Indeed, as first illustrated by
Gregory and Laflamme in the early 90s~\cite{Gregory:1993vy}, black strings and
p-branes are linearly unstable to long wavelength perturbations in 5 and
higher dimensions. Since then, a number of interesting black objects
in higher dimensional gravity have been discovered, many of them exhibiting
similar instabilities (see e.g.~\cite{Emparan:2008eg}).
An open question for all unstable black objects is what the end-state of
the perturbed system is. For black strings, Gregory and Laflamme (GL)~\cite{Gregory:1993vy} conjectured
that the instability would cause the horizon to pinch off at periodic intervals,
giving rise to a sequence of black holes. One reason for this conjecture comes from entropic
considerations: for a given mass per unit length and periodic spacing above a critical
wavelength $\lambda_c$, a sequence of hyperspherical black holes has higher entropy
than the corresponding black string. Classically, event horizons cannot bifurcate
without the appearance of a naked singularity~\cite{Hawking:1973uf}. Thus, reaching the
conjectured end-state would constitute a violation of cosmic censorship, without
``unnatural'' initial conditions or fine-tuning, and be an example
of a classical system evolving to a regime where quantum gravity is required.
This conjecture was essentially taken for granted until several years later when
it was proved that the generators of the horizon can
not pinch off in finite affine time~\cite{Horowitz:2001cz}. From this,
it was conjectured that a new, non-uniform black string end-state would be reached~\cite{Horowitz:2001cz}.
Subsequently, stationary, non-uniform black string
solutions were found~\cite{Gubser:2001ac,Wiseman:2002zc}; however, they had less entropy than the uniform string
and so could not be the putative new end-state,
at least for dimensions lower than 13~\cite{Sorkin:2004qq}.
A full numerical investigation studied
the system beyond the linear regime~\cite{Choptuik:2003qd}, though not
far enough to elucidate the end-state before the code ``crashed''.
At that point the horizon resembled
spherical black holes connected by black strings, though no
definitive trends could be extracted, still allowing for
both conjectured possibilities:
(a) a pinch-off in infinite affine time,
(b) evolving to a new, non-uniform state.
If (a), a question arises whether pinch-off happens in
infinite {\em asymptotic} time; if so, any bifurcation
would never be seen by outside observers, and
cosmic censorship would hold. While this might be a natural conclusion,
it was pointed out in~\cite{Garfinkle:2004em,Marolf:2005vn} that
due to the exponentially diverging rate between affine time and
a well-behaved asymptotic time, pinch-off could
occur in finite asymptotic time.
A further body of (anecdotal) evidence supporting the GL conjecture
comes from the striking resemblance of the equations
governing black hole horizons to those describing fluid flows, the latter of which
do exhibit instabilities that often result in break-up of the fluid. The fluid/horizon
connection harkens back to the
membrane paradigm~\cite{Thorne:1986iy}, and also in
more recently developed correspondences~\cite{Bhattacharyya:2008jc,Emparan:2009at}.
In~\cite{Cardoso:2006ks} it was shown that the dispersion relation of Rayleigh-Plateau unstable
modes in hyper-cylindrical fluid flow with tension agreed well with those of the GL modes
of a black string.
Similar behavior was found for instabilities of a self-gravitating cylinder of
fluid in Newtonian gravity~\cite{Cardoso:2006sj}.
In~\cite{Camps:2010br}, using a perturbative expansion of the Einstein field equations~\cite{Emparan:2009at}
to relate the dynamics of the horizon to that of a viscous fluid,
the GL dispersion relation was {\em derived} to good approximation,
thus going one step further than showing analogous behavior between
fluids and horizons.
What is particularly intriguing about fluid analogies, and what they might
imply about the black string case,
is that break-up of an unstable flow is preceded by formation of spheres
separated by thin necks.
For high viscosity liquids, a single neck forms before
break-up. For lower viscosity fluids, smaller ``satellite'' spheres can form
in the necks, with more generations forming the lower the viscosity (see
~\cite{Eggers:1997zz} for a review). In the membrane paradigm, black holes have
lower shear viscosity to entropy ratio than any known fluid~\cite{Kovtun:2004de}.
Here we revisit the evolution of 5D black strings using a new code.
This allows us to follow the evolution well beyond the earlier study~\cite{Choptuik:2003qd}.
We find that the dynamics of the horizon
unfolds as predicted by the low viscosity fluid analogues: the string initially evolves
to a configuration resembling a hyperspherical black hole connected by thin
string segments; the string segments are themselves unstable, and the
pattern repeats in a self-similar manner to ever smaller scales. Due to finite
computational resources, we cannot follow the dynamics indefinitely. If
the self-similar cascade continues as suggested by the simulations,
arbitrarily small length scales, and in consequence arbitrarily large curvatures,
will be revealed outside the horizon in finite asymptotic time.
\noindent{\bf{\em Numerical approach:}}
We solve the vacuum Einstein field equations
in a 5-dimensional (5D)
asymptotically flat spacetime with an $SO(3)$ symmetry. Since
perturbations of 5D black strings violating this
symmetry are stable and decay~\cite{Gregory:1993vy}, we do not expect
that imposing this symmetry qualitatively affects the results
presented here.
We use the generalized harmonic formulation of the field
equations~\cite{Pretorius:2004jg}, and adopt a {\em Cartesian} coordinate
system related to spherical polar coordinates via
$\bar{x}^i=(\bar{t},\bar{x},\bar{y},\bar{z},\bar{w})=(t,r\cos\phi\sin\theta,r\sin\phi\sin\theta,r\cos\theta,z)$.
The black string horizon has topology $S^2\times R$; $(\theta,\phi$)
are coordinates on the 2-sphere, and $z$ ($\bar{w}$) is the coordinate
in the string direction, which we make periodic with length $L$.
We impose a {\em Cartesian Harmonic} gauge condition,
i.e. $\nabla_\alpha \nabla^\alpha \bar{x}^i =0$,
as empirically this seems to result in more stable numerical
evolution compared to spherical harmonic coordinates.
The $SO(3)$ symmetry is enforced using the variant of the
``cartoon'' method~\cite{Alcubierre:1999ab} described in~\cite{Pretorius:2004jg},
where we only evolve a $\bar{y}=\bar{z}=0$ slice of the spacetime.
We further add {\em constraint damping}~\cite{Gundlach:2005eh},
which introduces two parameters $\kappa$ and $\rho$; we use $(\kappa,\rho=1,-0.5)$,
where a non-zero $\rho$ is essential to damp an unstable zero-wavelength
mode arising in the $z$ direction.
We discretize the equations using
4th order finite difference approximations, and integrate in
time using 4th order Runge-Kutta.
To resolve the small length scales that develop during evolution
we use Berger and Oliger adaptive mesh refinement.
Truncation error estimates are used to dynamically generate the mesh hierarchy, and
we use a spatial and temporal refinement ratio of 2.
At the outer boundary we impose Dirichlet conditions, with the metric
set to that of the initial data. These conditions are not strictly
physically correct at finite radius, though the outer boundary is placed
sufficiently far that it is causally disconnected
from the horizon for the time of the simulation.
We use black hole excision on the inner surface; namely, we
find the apparent horizon (AH) using a flow method, and dynamically adjust
this boundary (the {\em excision surface}) to be some distance within the AH. Due to the causal nature of spacetime
inside the AH, no boundary conditions are placed on the excision surface.
We adopt initial data describing a perturbed black string of mass per unit length $M$
and length $L=20M\approx 1.4 L_c$ ($L_c$ is the critical length above which all perturbations
are unstable). This data was used in~\cite{Choptuik:2003qd} and
we refer the reader to that work for further details.
We evaluate the following curvature scalars on the AH:
\begin{equation}\label{inv_def}
K=I R_{AH}^4/12, \ \ \ S=27 \left( 12 J^2 I^{-3}-1 \right ) + 1 \, ,
\end{equation}
where $I=R_{abcd}R^{abcd}$, $J=R_{abcd} R^{cdef} R_{ef}{}^{ab}$ and $R_{AH}$ is the
areal radius of the AH at the corresponding point (note that
$I$ and $J$ are usually defined in terms of the Weyl tensor,
which here equals the Riemann tensor as we are in vacuum).
$K$ and $S$ have been scaled
to evaluate to $\{6,1\}$ for the hyperspherical black hole and black
string respectively.
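As a cross-check of these normalizations one can insert the standard closed-form Kretschmann scalars of the exact solutions, $I = 72\,r_0^4/r^8$ for the 5D (Tangherlini) hyperspherical black hole with horizon at $r=r_0$, and $I = 48\,M^2/r^6$ for the uniform black string (4D Schwarzschild $\times\,R$, horizon at $r=2M$). A minimal sketch of this check, in Python (for orientation only; it is not part of the evolution code):
\begin{verbatim}
# Check K = I * R_AH^4 / 12 on the horizon of the exact solutions.
r0 = 1.0                      # 5D Tangherlini horizon radius
I_bh = 72.0 * r0**4 / r0**8   # Kretschmann scalar at r = r0
print(I_bh * r0**4 / 12.0)    # -> 6.0 (hyperspherical black hole)

M = 1.0                       # 4D Schwarzschild mass of the string
rh = 2.0 * M                  # horizon (areal) radius
I_bs = 48.0 * M**2 / rh**6    # Kretschmann scalar at r = 2M
print(I_bs * rh**4 / 12.0)    # -> 1.0 (uniform black string)
\end{verbatim}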
\noindent{\bf{\em Results:}}
The results described here are from simulations where the computational domain
is $(r,z)\in ([0,320M]\times[0,20M])$. The coarsest grid covering the entire
domain has a resolution of $(N_r,N_z)=(1025,9)$ points. For convergence
studies we ran simulations with 3 values of the maximum estimated
truncation error $\tau$: [``low'',``medium'',``high''] resolution have
$\tau=[\tau_0,\tau_0/8,\tau_0/64]$ respectively.
This leads to an initial hierarchy where the horizon of the black string
is covered by 4, 5 and 6 additional refined levels for the
low to high resolutions, respectively. Each
simulation was stopped when the estimated computational resources
required for continued evolution became prohibitively high (which naturally occurred
later in physical time for the lower resolutions); by then the
hierarchies were as deep as 17 levels.
Fig.~\ref{fig:AH_radius} shows the integrated AH area $A$ within $z\in[0,L]$
versus time. At the end of the lowest resolution run the total area
is $A=(1.369\pm0.005) A_0$\footnote{The error in the area was estimated
from convergence at the latest time data was available from all simulations}, where $A_0$ is the initial area; interestingly,
this almost reaches the value of $1.374 A_0$ that an exact 5D black hole
of the same total mass would have.
\begin{figure}
\begin{center}
\includegraphics[width=3.in,clip=true]{i7_AH_area.ps}
\caption{(Normalized) apparent horizon area vs. time.
}
\label{fig:AH_radius}
\end{center}
\end{figure}
Fig.~\ref{fig:AH_embed} shows snapshots of embedding diagrams
of the AH, and Fig.~\ref{fig:AH_invariants} shows the curvature
invariants (\ref{inv_def}) evaluated on the AH at the last time step,
both from the medium resolution run.
\begin{figure}
\begin{center}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t1_to_10_b.ps}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t13_to_23_b.ps}
\includegraphics[width=3.in,clip=true]{i6_L0_AH_embed_t24_to_36_b.ps}
\caption{Embedding diagram of the apparent horizon at several
instances in the evolution of the perturbed black string,
from the medium resolution run.
$R$ is areal radius, and the embedding coordinate
$Z$ is defined so that the proper length of the horizon in
the space-time $z$ direction (for a fixed $t,\theta,\phi$) is exactly equal to the
Euclidean length of $R(Z)$ in the above figure.
For visual aid copies of the diagrams reflected about $R=0$ have
also been drawn in.
The light (dark) lines denote the first (last) time from the time-segment
depicted in the corresponding panel.
The computational domain is periodic in $z$ with period $\delta z = 20M$; at the
initial (final) time of the simulation $\delta Z=20M$ ($\delta Z=27.2M$). }
\label{fig:AH_embed}
\end{center}
\end{figure}
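Concretely, the defining property of $Z$ implies $dZ/dz = \sqrt{(ds/dz)^2 - (dR/dz)^2}$, where $ds$ is the proper length element along the horizon in the $z$ direction at fixed $t,\theta,\phi$. A minimal sketch of this construction, assuming the horizon data are available as sampled arrays (the array names and the sample data are illustrative):
\begin{verbatim}
import numpy as np

def embed(z, R, dsdz):
    # dZ/dz from dZ^2 + dR^2 = ds^2, integrated by the trapezoid rule
    dRdz = np.gradient(R, z)
    dZdz = np.sqrt(np.maximum(dsdz**2 - dRdz**2, 0.0))
    steps = 0.5 * (dZdz[1:] + dZdz[:-1]) * np.diff(z)
    return np.concatenate(([0.0], np.cumsum(steps)))

# sanity check: an unperturbed string (R const, ds/dz = 1) gives Z = z
z = np.linspace(0.0, 20.0, 201)
Z = embed(z, np.full_like(z, 2.0), np.ones_like(z))
\end{verbatim}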
The shape of the AH, and that the invariants are
tending to the limits associated with pure black strings or black
holes at corresponding locations on the AH, suggests it is
reasonable to describe the local geometry as being similar
to a sequence of black holes connected by black strings.
This also strongly suggests that satellite formation will continue
self-similarly, as each string segment resembles
a uniform black string that is sufficiently long to be unstable.
Even if at some point in the cascade shorter segments were to form,
this would not be a stable configuration as generically
the satellites will have some non-zero $z$-velocity,
causing adjacent satellites to merge and effectively lengthening the
connecting string segments.
With this interpretation, we summarize key features of the AH dynamics
in Table~\ref{tab_properties}.
\begin{table}
{\small
\begin{tabular}[t]{| c || c | c | c | c | c |}
\hline
Gen. & $t_i/M$ & $R_{s,i}/M$ & $L_{s,i}/R_{s,i}$ & $n_s$ & $R_{h,f}/M$\\
\hline
1 & $118.1\pm0.5$ & $2.00$ & $10.0$ & $ 1 $ & $4.09\pm0.5\%$\\
\hline
2 & $203.1\pm0.5$ & $0.148\pm1\%$ & $105\pm1\%$ & $ 1 $ & $0.63\pm2\%$\\
\hline
3 & $223\pm2$ & $0.05\pm20\%$ & $\approx 10^2$ & $ >1$ & $0.1 - 0.2$ \\
\hline
4 & $\approx 227$ & $\approx 0.02 $ & $\approx 10^2$ & $ >1(?)$ & ? \\
\hline
\end{tabular}
}
\caption{Properties of the evolving black string apparent horizon,
{\em interpreted} as proceeding through several
self-similar generations, where each local string segment temporarily
reaches a near-steady state before the onset of the next GL instability.
$t_i$ is the time when the instability has grown to where
the nascent spherical
region reaches an areal radius $1.5$ times
the surrounding string-segment radius $R_{s,i}$, which has
an estimated proper length $L_{s,i}$ (the critical $L/R$ is $\approx 7.2$~\cite{Gregory:1993vy}).
$n_s$ is the number of satellites that form per segment, that
each attain a radius $R_{h,f}$ measured at the end of the simulation.
Errors, where appropriate, come from convergence tests.
After the second generation
the number and distribution of satellites that form
depend sensitively on grid parameters, and perhaps the only
``convergent'' result we have then
is that at roughly $t=223$ a third generation {\em does} develop.
We surmise the reason for this is that the long
parent string segments could have multiple unstable modes
with similar growth rates, and which mode is excited first
is significantly affected by truncation error.
We have only had the resources to run the lowest resolution simulation for sufficiently long
to see the onset of the 4th generation, hence the lack of error estimates
and presence of question marks in the corresponding row.
}
\label{tab_properties}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=3.0in,clip=true]{i6_L0_inv_area.ps}
\caption{Curvature invariants evaluated on the apparent horizon at the
last time of the simulation depicted in Fig.~\ref{fig:AH_embed}.
The invariant $K$ evaluates to $1$ for an exact black string, and $6$ for an exact
spherical black hole; similarly for $S$ (\ref{inv_def}).
}
\label{fig:AH_invariants}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.2in,clip=true]{i6_L0_ln_AH_r_z_const_ln_t.ps}
\caption{
Logarithm of the areal radius vs. logarithm of time
for select points on the apparent horizon from
the simulation depicted in Fig.~\ref{fig:AH_embed}.
We have shifted the time axis {\em assuming} self-similar behavior;
the putative naked singularity forms at asymptotic time $t/M\approx 231$.
The coordinates
at $z=15,5$ and $4.06$ correspond to the maxima of the areal
radii of the first and second generation satellites, and
one of the third generation satellites at the time the simulation
stopped.
The value $z=6.5$ is a representative slice in the middle of a
piece of the horizon that remains string-like throughout the evolution.
}
\label{fig:AH_radius_vs_lnt}
\end{center}
\end{figure}
We estimate when this self-similar cascade will end.
The time when the first satellite appears is controlled
by the perturbation imparted by the initial data; here that
is $T_0/M\approx 118$.
Subsequent timescales should approximately represent the generic
development of the instability. The time for the
first instability {\em after} that sourced by the initial data
is $T_1/M\approx 80$. Beyond that, with the caveats that we have
a small number of points and poor control over errors at late
times, each subsequent instability unfolds
on a time-scale $X\approx1/4$ times that of the preceding one.
This is to be expected if, as for the exact black string,
the time scale is proportional to the string radius. The time $t_0$
of the end-state is then $t_0 \approx T_0 + \sum_{i=0}^\infty T_1 X^i = T_0 + T_1/(1-X)$.
For the data here, $t_0 /M\approx 231$; then the local
string segments reach zero radius, and
the curvature visible to exterior observers diverges.
Fig.~\ref{fig:AH_radius_vs_lnt} shows a few points on the AH, scaled
assuming this behavior. In the Rayleigh-Plateau analogue,
the shrinking neck of a fluid stream has a self-similar scaling solution
that satisfies $r\propto (t_0-t)$, or, $d\ln r/d(-\ln(t_0-t))=-1$,
where $r$ is the stream radius (see ~\cite{eggers_pinch}, and~\cite{miyamoto_extend}
for extensions to higher dimensions);
to within $10-20\%$ this is the average slope we see (e.g. Fig.~\ref{fig:AH_radius_vs_lnt})
at string segments of the AH at late times.
\noindent{\bf{\em Conclusions:}}
We have
studied the dynamics of a perturbed,
unstable 5D black string. The
horizon behaves
similarly to
the surface of a stream of low viscosity
fluid subject to the Rayleigh-Plateau instability. Multiple
generations of spherical satellites, connected by ever thinner string
segments, form. Curvature invariants on the horizon suggest
this is a self-similar process, where at each stage the local string/spherical
segments resemble the corresponding exact solutions. Furthermore, the time scale
for the formation of the next generation is proportional
to the local string radius, implying the cascade will terminate in finite
asymptotic time. Since local curvature scalars grow with inverse
powers of the string radius, this end-state will thus be a naked, curvature singularity.
If quantum gravity resolves these singularities, a series of spherical
black holes will emerge. However, small
momentum perturbations in the extra dimension would induce the merger of
these black holes; thus, for a compact
extra dimension the end state of the GL instability will be a single
black hole with spherical topology.
The kind of singularity reached here via a self-similar process is
akin to that formed in critical gravitational collapse~\cite{Choptuik:1992jv}; however,
here no fine-tuning is required. Thus,
5 (and presumably higher) dimensional Einstein gravity allows solutions
that generically violate cosmic censorship.
Angular momentum will likely not alter this conclusion, since
as argued in~\cite{Emparan:2009at}, and shown
in~\cite{Dias:2010eu}, rotation does not suppress
the unstable modes, and moreover induces super-radiant
and gyrating instabilities ~\cite{Marolf:2004fya}.
\noindent{\bf{\em Acknowledgments:}}
We thank
V. Cardoso, M. Choptuik, R. Emparan, D. Garfinkle, K. Lake,
S. Gubser, G. Horowitz, D. Marolf, R. Myers,
W. Unruh and R. Wald for stimulating discussions.
This work was supported by NSERC (LL), CIFAR (LL),
the Alfred P. Sloan Foundation (FP), and NSF grant PHY-0745779 (FP).
Simulations were run on the {\bf Woodhen} cluster at Princeton University and
at LONI. Research at Perimeter Institute is
supported through Industry Canada and by the Province of Ontario
through the Ministry of Research \& Innovation.
\bibliographystyle{apsrev}
\section*{Appendix 1}
\label{sec:appendix-2}
This section presents the set of hyperparameters used in the XGBoost algorithm to run the experiments; a minimal usage sketch follows the list. The configuration attributes are:
\begin{itemize}
\item booster = ``gbtree'';
\item objective = ``reg:linear'';
\item eta = $0.05$;
\item max\_depth = $2$;
\item min\_child\_weight = $100$.
\end{itemize}
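A minimal sketch (not the code used in the experiments) of how these settings map onto the XGBoost Python training API is given below; the synthetic data and the number of boosting rounds are illustrative placeholders.
\begin{verbatim}
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))    # placeholder link-feature matrix
y = rng.integers(0, 2, size=1000)  # placeholder future-link labels

params = {
    "booster": "gbtree",
    "objective": "reg:linear",  # deprecated alias of reg:squarederror
    "eta": 0.05,
    "max_depth": 2,
    "min_child_weight": 100,
}
dtrain = xgb.DMatrix(X, label=y)
model = xgb.train(params, dtrain, num_boost_round=200)
scores = model.predict(dtrain)  # continuous scores to rank links
\end{verbatim}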
\section*{Appendix 2}
\label{sec:appendix-1}
This section provides experimental results using the size of rolling window $L = 126$ and $L=504$ trading days to construct the financial networks. Considering $L=126$, Figs.~\ref{fig:auc-all-methods-dag-126},~\ref{fig:auc-all-methods-dtn-126} and~\ref{fig:auc-all-methods-dmst-126} show the AUC measure of the proposed machine learning method compared against baseline algorithms for DAG, DTN and DMST network filtering methods. Considering $L=504$, Figs.~\ref{fig:auc-all-methods-dag-504},~\ref{fig:auc-all-methods-dtn-504} and~\ref{fig:auc-all-methods-dmst-504} present results for DAG, DTN and DMST network filtering methods, respectively. For each time step ahead $h$, we calculated the average AUC of each method and its respective standard error over the test period, ranging from $5$ May $2007$ to $5$ September $2020$.
Figs.~\ref{fig:auc-all-comparison-126} and~\ref{fig:auc-all-comparison-504} present the AUC performance and the AUC$^\ast$ improvement of the proposed method using $L= 126$ and $L=504$ for $h$ trading weeks ahead ($1 \leq h \leq 20)$. Results are provided for DAG, DTN and DMST network filtering methods. The AUC$^\ast$ improvement is calculated over the time invariant (TI) benchmark method.
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/DAX30-sw-126-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/EUROSTOXX50-sw-126-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/FTSE100-sw-126-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/HANGSENG50-sw-126-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/126/NASDAQ100-sw-126-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/NIFTY50-sw-126-AUC.pdf}}
\caption{\textbf{DAG - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dag-126}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/DAX30-sw-126-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/EUROSTOXX50-sw-126-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/FTSE100-sw-126-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/HANGSENG50-sw-126-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/126/NASDAQ100-sw-126-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/NIFTY50-sw-126-AUC.pdf}}
\caption{\textbf{DTN - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dtn-126}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/DAX30-sw-126-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/EUROSTOXX50-sw-126-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/FTSE100-sw-126-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/HANGSENG50-sw-126-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/126/NASDAQ100-sw-126-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/NIFTY50-sw-126-AUC.pdf}}
\caption{\textbf{DMST - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dmst-126}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/126/only_network-sw-126-AUC-COMPARISON.pdf}}
\subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/126/only_network-sw-126-AUC-COMPARISON.pdf}}
\subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/126/only_network-sw-126-AUC-COMPARISON.pdf}}
\subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}}
\caption{\textbf{Machine Learning AUC and AUC$^\ast$ for DAG, DTN and DMST network filter methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20)$. Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error. Results for $L = 126$.}
\label{fig:auc-all-comparison-126}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/DAX30-sw-504-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/EUROSTOXX50-sw-504-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/FTSE100-sw-504-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/HANGSENG50-sw-504-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/504/NASDAQ100-sw-504-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/NIFTY50-sw-504-AUC.pdf}}
\caption{\textbf{DAG - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dag-504}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/DAX30-sw-504-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/EUROSTOXX50-sw-504-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/FTSE100-sw-504-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/HANGSENG50-sw-504-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/504/NASDAQ100-sw-504-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/NIFTY50-sw-504-AUC.pdf}}
\caption{\textbf{DTN - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dtn-504}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/DAX30-sw-504-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/EUROSTOXX50-sw-504-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/FTSE100-sw-504-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/HANGSENG50-sw-504-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/504/NASDAQ100-sw-504-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/NIFTY50-sw-504-AUC.pdf}}
\caption{\textbf{DMST - Predictive performance comparison of all methods.} Figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method overcomes the baseline methods in all market indices.}
\label{fig:auc-all-methods-dmst-504}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/504/only_network-sw-504-AUC-COMPARISON.pdf}}
\subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/504/only_network-sw-504-AUC-COMPARISON.pdf}}
\subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/504/only_network-sw-504-AUC-COMPARISON.pdf}}
\subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}}
\caption{\textbf{Machine Learning AUC and AUC$^\ast$ for DAG, DTN and DMST network filter methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20)$. Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error. Results for $L = 504$.}
\label{fig:auc-all-comparison-504}
\end{figure*}
\section*{Appendix 3}
\begin{algorithm}
\caption{Machine Learning Based Approach}\label{alg:cap}
\begin{algorithmic}
\Require networks $G_{t-k}(V,E)$ for $1 \leq k \leq 30$; forecast horizon $h$
\For{$k \gets 1, \dots, 30$}
\State extractNodeLevelFeatures ( $G_{t-k}(V,E)$ )
\State extractLinkLevelFeatures ( $G_{t-k}(V,E)$ )
\EndFor
\State train the XGBoost model on the extracted features, using the edges of the known future networks as labels
\State \Return predicted edge scores for $G_{t+h}(V,E)$
\end{algorithmic}
\end{algorithm}
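For concreteness, a minimal Python sketch of the feature-extraction loop in the algorithm above follows, using \texttt{networkx}. The specific features shown (degrees, common neighbours, Jaccard coefficient) are illustrative stand-ins; the actual node- and link-level feature set is defined in the main text.
\begin{verbatim}
import itertools
import networkx as nx

def extract_features(past_graphs):
    # past_graphs[k-1] holds G_{t-k}; all graphs share one node set
    nodes = sorted(past_graphs[0].nodes())
    rows = {pair: [] for pair in itertools.combinations(nodes, 2)}
    for G in past_graphs:
        jac = {(u, v): p for u, v, p in
               nx.jaccard_coefficient(G, rows.keys())}
        for (u, v), feats in rows.items():
            feats += [G.degree(u), G.degree(v),
                      len(list(nx.common_neighbors(G, u, v))),
                      jac[(u, v)]]
    return rows   # maps each candidate link (u, v) to its features
\end{verbatim}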
\section{Conclusion}
\label{sec:conclusion}
In this article, we investigated stock market structure forecasting of multiple financial markets using financial networks modeled using stock returns of major market indices constituents. The stock market structure was modeled as networks, where nodes represent assets and edges represent the relationship among them. Three correlation based filtering methods were used to create stock networks: Dynamic Asset Graphs (DAG), Dynamic Threshold Networks (DTN) and Dynamic Minimal Spanning Tree (DMST). We formulated market structure forecasting as a network link prediction problem, where we aim to accurately predict the edges that will be present in future networks. We proposed and experimentally assessed a machine learning model based on node- and link-based financial network features to forecast future market structure.
We used data from company constituents of six different stock market indices from the U.S., the U.K., India, Europe, Germany and Hong Kong markets, ranging from $1$ March $2005$ to $18$ December $2019$. To assess the predictive performance of the model, we compared it to seven link prediction benchmark algorithms. Experimental results showed the proposed model was able to forecast the market structure with a performance superior to all benchmark methods, for all market indices and regardless of the network filtering method. We also measured the improvement against the Time Invariant (TI) algorithm, which assumes that the network does not change over time. Experimental results showed a greater improvement over the TI in networks created using the DTN filtering method, reaching almost $40\%$ improvement for NASDAQ100. Our experimental results also suggested that topological network information is useful in forecasting stock market structure compared to pair-wise correlation measures, particularly for long-horizon predictions.
As for limitations, we should emphasize that we only used assets that stayed in the market index throughout the whole period, which limits the insertion and removal of nodes in the networks. In addition, for networks with a large number of nodes, the execution time increased significantly, both for generating derived features and for training ML models.
Our results can be useful in the study of stock market dynamics, in market structure estimation, and to improve portfolio selection and risk management on a forward-looking basis. As future work, we plan to use the predicted stock market structure as input to portfolio and risk management tools to evaluate its usefulness in risk management scenarios. Future work also includes market structure forecasting using order book data for high frequency trading analysis and the study of different asset classes beyond equities.
\subsection{Market Data}
In this study, we used data from six different stock market indices spread across the American, European and Asian markets. The stock indices were chosen to measure the performance of the proposed approach in different scenarios, given the diversity of the stock markets. Moreover, it is important to mention that they represent the stock market of the region or country where they are listed. We considered the following indices and associated countries/regions:
\begin{itemize}
\item \textbf{DAX30} (Germany): This is a stock market index that consists of the $30$ largest and most liquid German companies trading on the Frankfurt Stock Exchange.
\item \textbf{EUROSTOXX50} (Eurozone): This is a list of the $50$ companies that are leaders in their respective sectors from eleven Eurozone countries, including Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain.
\item \textbf{FTSE100} (United Kingdom): This is an index listed on the London Stock Exchange. The Financial Times Stock Exchange Index (FTSE) is Britain's main asset indicator, managed by an independent organization and calculated based on the $100$ largest companies in the United Kingdom.
\item \textbf{HANGSENG50} (Hong Kong): This is an index listed in the Stock Exchange of Hong Kong. This stock market index has the $50$ constituent companies with the highest market capitalization. It is the main indicator of the market performance in Hong Kong.
\item \textbf{NASDAQ100} (United States): This is an index composed of the $100$ non-financial largest companies listed in NASDAQ.
\item \textbf{NIFTY50} (India): This is a stock market index listed in the National Stock Exchange of India based on the $50$ largest Indian companies.
\end{itemize}
Each financial index has a daily price time series for each one of its constituent stocks. Price time series are constructed using daily closing prices collected from \textit{Thomson Reuters}. The list of company constituents of each stock market index is not static and may change over time. In this article, we only consider companies that were part of the underlying indices across the entire period analyzed, as is common in other studies when node prediction is out of scope~\cite{SOUZA2019122343, 10.1007/978-3-030-22744-9_27}.
We consider prices ranging from $1$ March $2005$ to $18$ December $2019$.
\section{Results and Discussion}
\label{sec:results-and-discussion}
In this section, we present the experimental results for financial market structure forecasting. Initially, we present a set of descriptive analyses on evolution of financial networks and a brief discussion about the impact of the different network filtering methods in the financial market structure. Afterwards, we present a set of predictive analyses related to the machine learning approach and the benchmark methods. Finally, we present a discussion about the interpretability of the machine learning models.
\subsection{Descriptive Analysis}
We present a set of descriptive analyses of temporal financial networks created across different market indices. Our first descriptive analysis describes financial network persistence, considering $L = 252$ trading days to create each graph (results regarding $L \in \lbrace 126, 504 \rbrace$ trading days can be found in Supplementary Material, Section S.$3$). This analysis allows us to measure how the financial networks change their structure over time. We estimate the network persistence by calculating pair-wise network similarity between $G(t)$ and $G(t')$ using the Jaccard Distance, defined as follows:
\begin{equation}
sim (G(t), G(t')) = \frac{ \left| G(t) \cap G(t')\right|}{\left| G(t) \cup G(t')\right|},
\end{equation}
\noindent where $t$ and $t'$ range from $12$ May $2006$ to $18$ December $2019$.
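A minimal sketch of this computation over the edge sets of two graphs (Python with \texttt{networkx}; the toy graphs are illustrative):
\begin{verbatim}
import networkx as nx

def jaccard_similarity(G1, G2):
    e1 = {frozenset(e) for e in G1.edges()}
    e2 = {frozenset(e) for e in G2.edges()}
    union = e1 | e2
    return len(e1 & e2) / len(union) if union else 1.0

# toy example: 2 shared edges out of 4 distinct edges -> 0.5
G_a = nx.Graph([(1, 2), (2, 3), (3, 4)])
G_b = nx.Graph([(1, 2), (2, 3), (1, 4)])
print(jaccard_similarity(G_a, G_b))
\end{verbatim}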
\begin{figure*}[t!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/DAX30_252.jpg}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/EUROSTOXX50_252.jpg}}
\subfigure[FTSE100]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/FTSE100_252.jpg}}
\subfigure[HANGSENG50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/HANGSENG50_252.jpg}}
\subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/NASDAQ100_252.jpg}}
\subfigure[NIFTY50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/NIFTY50_252.jpg}}
\caption{\textbf{DAG - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard Distance across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index figure, the first network on $12$ May $2006$ is represented in the top-left and the last network on $18$ December $2019$ in the bottom-right corner of each individual figure.}
\label{fig:cross-similarity-dag}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/DAX30_252.jpg}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/EUROSTOXX50_252.jpg}}
\subfigure[FTSE100]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/FTSE100_252.jpg}}
\subfigure[HANGSENG50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/HANGSENG50_252.jpg}}
\subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/NASDAQ100_252.jpg}}
\subfigure[NIFTY50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/NIFTY50_252.jpg}}
\caption{\textbf{DTN - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard Distance across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index figure, the first network on $12$ May $2006$ is represented in the top-left and the last network on $18$ December $2019$ in the bottom-right of each individual figure.}
\label{fig:cross-similarity-dtn}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/DAX30_252.jpg}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/EUROSTOXX50_252.jpg}}
\subfigure[FTSE100]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/FTSE100_252.jpg}}
\subfigure[HANGSENG50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/HANGSENG50_252.jpg}}
\subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/NASDAQ100_252.jpg}}
\subfigure[NIFTY50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/NIFTY50_252.jpg}}
\caption{\textbf{DMST - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard Distance across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index figure, the first network on $12$ May $2006$ is represented in the top-left corner and the last network on $18$ December $2019$ in the bottom-right of each individual figure.}
\label{fig:cross-similarity-dmst}
\end{figure*}
Figures~\ref{fig:cross-similarity-dag},~\ref{fig:cross-similarity-dtn} and~\ref{fig:cross-similarity-dmst} present the cross-similarity analysis for the DAG, DTN and DMST networks of each stock market index, respectively. In the individual figure of each stock market index, the first network ($12$ May $2006$) is represented in the top-left and the last network ($18$ December $2019$) in the bottom-right. In general, we can observe that the structure consistently changes over time, which emphasizes the importance of tools to forecast market structure.
DAG results in Figure~\ref{fig:cross-similarity-dag} show that the network structure changes considerably over time in all stock market indices. Figure~\ref{fig:cross-similarity-dtn} presents results for the DTN network filtering method. We can observe that the similarity among networks tends to be noisier than with the DAG method. In some periods the similarity among the networks is maximal, while at other times it reaches zero, as can be seen for NASDAQ100 and NIFTY50. The DTN network filtering method can produce disconnected or even empty graphs, which may cause these similarity oscillations. DMST results are shown in Figure~\ref{fig:cross-similarity-dmst}. This figure shows that there is low similarity for long-range comparisons among trees created by the DMST filtering method for all market indices, suggesting low stability, as reported by other authors~\cite{carlsson2010characterization,marti2015proposal}.
Given the cross-similarity matrices of each market, we calculate the distance among all matrices to measure the market similarity in terms of network evolution. This analysis allows us to identify which markets have similar behavior considering the persistence of networks. To do this, we use the cosine similarity, calculated using the following formula:
\begin{equation}
cosine\_sim (a,b) = \frac{\sum_{i}^{}{a_i b_i} }{\sqrt{\sum_{i}^{}{a_i^2}} \, \sqrt{\sum_{i}^{}{b_i^2} } } ,
\end{equation}
\noindent where $a$ and $b$ are two non-zero numeric vectors, here the vectorized upper triangles of two distinct cross-similarity matrices. For non-negative vectors this metric ranges from $0$ to $1$ and measures the cosine of the angle between the two vectors. Table~\ref{tab:cosine-distance} presents the pairwise cosine similarity for DAG, DTN and DMST. DAX30 and EUROSTOXX50 have the highest cosine similarity for DAG and DTN. For DMST, the highest value is between FTSE100 and EUROSTOXX50. This analysis demonstrates that, for all three network filtering methods, the network persistence patterns of the European markets are more alike than those of markets from other regions of the world.
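A minimal sketch of this computation, vectorizing the strict upper triangles (Python with \texttt{numpy}; the toy matrices are illustrative):
\begin{verbatim}
import numpy as np

def cosine_sim(A, B):
    iu = np.triu_indices_from(A, k=1)       # strict upper triangle
    a, b = A[iu], B[iu]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2   # toy symmetric matrices
B = rng.random((5, 5)); B = (B + B.T) / 2
print(cosine_sim(A, B))
\end{verbatim}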
\begin{table}[hb!]
\centering
\small
\caption{\textbf{Cosine similarity from cross-similarity results.} We calculate the cosine similarity between cross-similarity matrices, using the upper triangle of each matrix as the input vector. European markets have the highest similarity.}
\begin{tabular}{|c|l|ccccc|}
\cmidrule{3-7} \multicolumn{1}{r}{} & & \multicolumn{1}{l}{\textit{\textbf{EUROSTOXX50}}} & \multicolumn{1}{l}{\textit{\textbf{FTSE100}}} & \multicolumn{1}{l}{\textit{\textbf{HANGSENG50}}} & \multicolumn{1}{l}{\textit{\textbf{NASDAQ100}}} & \multicolumn{1}{l|}{\textit{\textbf{NIFTY50}}} \\
\midrule
\multirow{5}[2]{*}{\textbf{DAG}} & \textit{\textbf{DAX30}} & \textbf{0.9532} & 0.9435 & 0.9472 & 0.9341 & 0.9257 \\
& \textit{\textbf{EUROSTOXX50}} & & 0.9228 & 0.9403 & 0.9420 & 0.9070 \\
& \textit{\textbf{FTSE100}} & & & 0.9150 & 0.9358 & 0.8978 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.9297 & 0.9302 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.9137 \\
\midrule
\multirow{5}[2]{*}{\textbf{DTN}} & \textit{\textbf{DAX30}} & \textbf{0.9338} & 0.8367 & 0.7573 & 0.6209 & 0.5795 \\
& \textit{\textbf{EUROSTOXX50}} & & 0.8755 & 0.7873 & 0.6143 & 0.6000 \\
& \textit{\textbf{FTSE100}} & & & 0.8331 & 0.5479 & 0.5503 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.5892 & 0.5531 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.4269 \\
\midrule
\multirow{5}[2]{*}{\textbf{DMST}} & \textit{\textbf{DAX30}} & 0.9486 & 0.9354 & 0.8967 & 0.9011 & 0.9200 \\
& \textit{\textbf{EUROSTOXX50}} & & \textbf{0.9500} & 0.9058 & 0.9294 & 0.9312 \\
& \textit{\textbf{FTSE100}} & & & 0.9253 & 0.9400 & 0.9338 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.9169 & 0.9080 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.9160 \\
\bottomrule
\end{tabular}%
\label{tab:cosine-distance}%
\end{table}%
The second descriptive analysis is the similarity between the current financial network $G(t)$ and the future network $G(t + h)$, where $h$ is the time lag, $\forall$ $h \in \lbrace 1, 5, 10, 15, 20 \rbrace$ trading weeks. This analysis provides an accurate point of view concerning how the current network changes in the near future; if networks did not change, there would be nothing to forecast. We quantify the changes in the network structure using the Jaccard Distance between $G(t)$ and $G(t+h)$, considering $L = 252$ trading days to create each graph. Figure~\ref{fig:similarity-lag} presents the distribution of network similarity for the three network filtering methods DAG, DTN and DMST of each stock market index. Experimental results suggest a high similarity distribution among networks at $h = 1$ step ahead for all network filtering methods. However, the similarity distribution decreases with $h$, mainly in the DMST method. For $h = 20$, DMST presents a mean similarity lower than $25\%$ in all markets. In general, financial networks tend to retain a certain margin of similarity for low $h$, but as $h$ increases they become more and more dissimilar, hence justifying the importance of forecasting future market structures, particularly in long-horizon forecasting scenarios. Analyzing the DTN method, NIFTY50 and HANGSENG50 present a different behavior for larger $h$, where the distribution of the similarity behaves differently from other markets, oscillating between the maximum value and almost zero, as shown for $h=5$, $h=10$ and $h=15$. This amplitude can be explained by the analysis presented in Figure~\ref{fig:cross-similarity-dtn}, which shows that for some periods the similarity among networks is high, but for other periods it is very low. The smallest similarity values are found for the DMST method at $h = 20$.
\begin{figure*}[ht!]
\centering
\subfigure[Dynamic Asset Graph]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dag}}
\subfigure[Dynamic Threshold Networks]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/T/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dtn}}
\subfigure[Dynamic Minimal Spanning Tree]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/MST/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dmst}}
\caption{\textbf{Network Similarity vs. Time Lag.} Figure shows the distribution of network persistence considering $h = \lbrace 1, 5, 10, 15, 20 \rbrace$ trading weeks ahead for the three network filtering methods: DAG, DTN and DMST. Network similarity is quantified using the Jaccard Distance between graphs $G(t)$ and $G(t+h)$.}
\label{fig:similarity-lag}
\end{figure*}
\begin{figure*}[ht!]
\centering
\subfigure[DAG ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/126.eps} \label{subfig:degree-cdf-126-dag}}
\subfigure[DAG ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/252.eps} \label{subfig:degree-cdf-252-dag}}
\subfigure[DAG ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/504.eps} \label{subfig:degree-cdf-504-dag}}
\subfigure[DTN ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/126.eps} \label{subfig:degree-cdf-126-dtn}}
\subfigure[DTN ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/252.eps} \label{subfig:degree-cdf-252-dtn}}
\subfigure[DTN ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/504.eps} \label{subfig:degree-cdf-504-dtn}}
\subfigure[DMST ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/126.eps} \label{subfig:degree-cdf-126-dmst}}
\subfigure[DMST ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/252.eps} \label{subfig:degree-cdf-252-dmst}}
\subfigure[DMST ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/504.eps} \label{subfig:degree-cdf-504-dmst}}
\caption{\textbf{CDF of node degree across networks using the DAG, DTN and DMST network filtering methods.} We calculate the cumulative distribution function of node degree across all stock networks using rolling window sizes $L = 126$, $252$ and $504$ trading days. The period of the experiments ranges from $3$ March $2007$ to $18$ December $2019$. Market indices with the smallest number of constituents present a similar behaviour under the DAG network filtering method. The DTN method presents the highest probability of nodes without edges, mainly for NIFTY50, NASDAQ100 and HANGSENG50. EUROSTOXX50 presents a distinct shape compared with the other market indices in DTN, with the smallest number of nodes without connections. Results also suggest that the degree distributions of the market indices are similar for $L = 126$, $252$ and $504$ trading days under all network filtering methods.}
\label{fig:degree-cdf}
\end{figure*}
The third descriptive analysis is presented in Figure~\ref{fig:degree-cdf}. We present the Cumulative Distribution Function (CDF) of the node degree across the networks of each index using the DAG, DTN and DMST network filtering methods. This analysis provides information on the node degree according to three main aspects: \textit{(i)} the impact of the time series size $L$; \textit{(ii)} the network filtering method; and \textit{(iii)} the size of the market index, measured by its number of constituents. We calculated the node degree distribution across all financial networks ranging from $3$ March $2007$ to $18$ December $2019$, and report results using $L \in \lbrace 126, 252, 504 \rbrace$ trading days as rolling window sizes. We observe in Figure~\ref{fig:degree-cdf} that market indices with the smallest number of constituents present a similar behaviour in terms of node degree under the DAG network filtering method. Moreover, DAG networks are prone to a higher occurrence of nodes with no connections. The DTN method also presents a high probability of nodes without edges, mainly for NIFTY50, NASDAQ100 and HANGSENG50. EUROSTOXX50 presents a distinct shape compared with the other market indices in DTN, with the smallest number of nodes without a connection: more than $75\%$ of its nodes have a degree greater than $1$. On the other hand, for all market indices, at least $50\%$ of the nodes have $4$ or more connections in DAG. Considering the number of stocks in each market index, we can also conclude that no node connects to all other vertices under any network filtering method, since the largest observed degree in each market index is smaller than the number of constituents. Results also suggest that the degree distributions of the market indices are similar for $L = 126$, $252$ and $504$ trading days under all network filtering methods, indicating that the size of $L$ does not affect the degree distribution of the stock networks of each market index.
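The empirical CDF reported in Figure~\ref{fig:degree-cdf} can be computed along the following lines; this is a minimal sketch assuming the snapshots are \texttt{networkx} graphs, with a function name of our choosing.
\begin{verbatim}
import numpy as np

def degree_cdf(snapshots):
    """Empirical CDF of node degree pooled over a list of graph snapshots.

    Returns the sorted unique degrees d and P(degree <= d) for each d.
    """
    degrees = np.sort(np.array([d for g in snapshots for _, d in g.degree()]))
    values = np.unique(degrees)
    cdf = np.searchsorted(degrees, values, side="right") / degrees.size
    return values, cdf
\end{verbatim}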
\subsection{Predictive Analysis}
In this section, we present a set of experimental results related to market structure forecasting using machine learning. First, we investigate the predictive performance of the proposed method in different scenarios, comparing it against the benchmark methods. Then, we present a qualitative analysis concerning the model interpretability and its implications.
\subsubsection{Performance Results}
We used a machine learning approach to forecast the financial network $G(t + h)$, where $h$ is the number of weeks ahead, $h = 1, 2, \dots, 20$ trading weeks. We discuss and report results using a rolling window of size $L = 252$ trading days to construct the financial networks. Results for $L \in \lbrace 126, 504 \rbrace$ trading days can be found in the Supplementary Material, Section S.$4$. Figures~\ref{fig:auc-all-methods-dag},~\ref{fig:auc-all-methods-dtn} and~\ref{fig:auc-all-methods-dmst} show the AUC measure of the proposed machine learning method compared to the baseline algorithms for the DAG, DTN and DMST network filtering methods. For each time step ahead $h$, we calculated the average AUC of each method and its respective standard error over the test period, ranging from $5$ May $2007$ to $18$ December $2019$.
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DAG - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared to the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dag}
\end{figure*}
Denoted as ``ML'', the machine learning method outperforms the baseline methods in all market indices and under all network filtering methods. In general, predictive performance decreases as the time lag $h$ increases. Despite its simplicity, TI is quite effective and performs well across market indices and network filtering methods, similarly to the RW algorithm. Figure~\ref{fig:auc-all-methods-dag} presents results for the DAG network filtering method, suggesting that market indices with a small number of constituents have a higher AUC than markets with a large number of constituents. Results also suggest that the RW algorithm produces an edge ranking quite similar to TI. The JC method presents the worst predictive performance in all market indices, except for FTSE100, where PA presents lower AUC values for the DAG network filtering method.
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DTN - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dtn}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DMST - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dmst}
\end{figure*}
Figure~\ref{fig:auc-all-methods-dtn} presents results for the DTN network filtering method. ML results are superior in all markets, suggesting that the proposed method can accurately identify highly correlated links, which is precisely the purpose of the DTN method. We can observe that the baseline algorithms have their worst results for the HANGSENG50, NASDAQ100 and NIFTY50 indices. As presented in Figure~\ref{fig:degree-cdf}, these market indices have a substantial number of nodes without connections. The TI algorithm outperforms the other baseline algorithms in DAX30, EUROSTOXX50 and NASDAQ100. Figure~\ref{fig:auc-all-methods-dmst} presents results for the DMST network filtering method. Except for the TI and RW algorithms, the baseline methods achieve their worst results among the three filtering methods. ML outperforms the benchmark methods in all markets.
\begin{figure*}[h!]
\centering
\subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\caption{\textbf{Machine learning AUC and AUC$^\ast$ for DAG, DTN and DMST network filtering methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20)$. Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error. Results for $L = 252$.}
\label{fig:auc-all-comparison}
\end{figure*}
Figure~\ref{fig:auc-all-comparison} presents the AUC performance of the proposed method for $h$ trading weeks ahead ($1 \leq h \leq 20)$ using the DAG, DTN and DMST network filtering methods. The AUC measure decreases as the time lag $h$ increases. We also compared our results against the benchmark time-invariant method TI, where the network $G(t)$ is used as the forecast of $G(t+h)$. We chose TI as the comparison benchmark due to its superior performance over all other benchmark methods in the previous analysis, and because it is derived from pair-wise correlation information, as described in Table~\ref{tab:linkfeatures}. The AUC$^\ast$ improvement is calculated as follows:
\begin{equation}
AUC^\ast = \frac{AUC_m - 0.5}{AUC_b - 0.5} - 1,
\end{equation}
\noindent where $AUC_m$ is the machine learning AUC and $AUC_b$ is the benchmark's AUC. Figures~\ref{fig:auc-all-comparison}(b),~\ref{fig:auc-all-comparison}(d) and~\ref{fig:auc-all-comparison}(f) present AUC$^\ast$ improvement results and their standard errors for DAG, DTN and DMST network filtering methods.
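As an illustrative reading of this metric (the numbers are ours, not taken from the experiments): if the machine learning model attains $AUC_m = 0.75$ and the benchmark attains $AUC_b = 0.70$, then $AUC^\ast = (0.75 - 0.5)/(0.70 - 0.5) - 1 = 0.25$, i.e., the model extracts $25\%$ more discriminative power above the $0.5$ level of a random classifier than the benchmark does.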
The proposed method presents similar AUC results for all network filtering methods. Results using DAG, shown in Figure~\ref{fig:auc-all-comparison}(a), suggest that networks with fewer constituents have better AUC results. Figure~\ref{fig:auc-all-comparison}(b) shows that the highest AUC$^\ast$ improvement is achieved for NASDAQ100, reaching almost $30\%$ for $h = 20$ weeks ahead. For the DTN method, shown in Figure~\ref{fig:auc-all-comparison}(c), the best results are obtained for FTSE100 and NIFTY50, while EUROSTOXX50 is the most distinct case. The largest AUC$^\ast$ improvement for DTN, shown in Figure~\ref{fig:auc-all-comparison}(d), is obtained for NASDAQ100 and NIFTY50, reaching almost $40\%$. Results shown in Figure~\ref{fig:auc-all-comparison}(e) refer to the DMST network filtering method and display a similar decay of AUC for all markets, with DAX30 achieving the best result. Interestingly, the AUC$^\ast$ improvement shown in Figure~\ref{fig:auc-all-comparison}(f) presents similar curves for the NIFTY50 and HANGSENG50 markets: their AUC$^\ast$ improvement increases until approximately $h = 9$, reaching almost $12\%$ for NIFTY50, and decreases as $h$ increases beyond this maximum. NASDAQ100 presents the best AUC$^\ast$ improvement, reaching almost $19\%$ for $h = 15$ trading weeks ahead.
\subsubsection{Model Interpretability}
In finance, particularly in portfolio management, investment risk is estimated using the correlation among portfolio assets. Given the importance of this information in financial analyses, we also explore it as an input feature for market structure forecasting. However, we want to measure how much the topology of the network helps forecast the future network itself; in other words, we are interested in evaluating the importance of non pair-wise correlation features for market structure forecasting. As described in Section~\ref{sec:network-based-features}, we separated the feature set into two subsets: pair-wise correlation features and non pair-wise correlation features. After constructing the boosted trees in the XGBoost model, we can estimate the importance of each individual attribute. The importance of an attribute is related to the number of times it is used to create relevant split decisions, i.e., split points that improve the performance metric~\cite{hastie2009elements}. For each market index, we calculate the average and standard error of the aggregate importance of the pair-wise correlation and non pair-wise correlation features. Figure~\ref{fig:importance} presents the importance of the non pair-wise correlation features for the network filtering methods DAG, DTN and DMST and rolling window sizes $L \in \lbrace 126, 252, 504 \rbrace$ trading days. Note that the importances of the two feature subsets add up to $1$.
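The aggregation of importances into the two subsets can be sketched as follows; this assumes a fitted XGBoost model exposing the scikit-learn attribute \texttt{feature\_importances\_} and a known list of pair-wise correlation feature names (the function name and arguments are ours).
\begin{verbatim}
import numpy as np

def aggregate_importance(model, feature_names, pairwise_names):
    """Split XGBoost feature importances into two groups.

    `pairwise_names` lists the pair-wise correlation features; all other
    features are treated as non pair-wise (network topology) features.
    Importances are normalised so the two groups add up to 1.
    """
    imp = np.asarray(model.feature_importances_, dtype=float)
    imp = imp / imp.sum()
    pairwise = sum(w for name, w in zip(feature_names, imp)
                   if name in set(pairwise_names))
    return pairwise, 1.0 - pairwise
\end{verbatim}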
\begin{figure*}[t!]
\centering
\subfigure[DAG ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DAG ($L = 252$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DAG ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 252$)]{\includegraphics[trim=0.1cm 6.1cm 0.1cm 0cm, clip=true, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 252$)]{\includegraphics[trim=0.1cm 0.9cm 0.1cm 0cm, clip=true, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\caption{\textbf{Importance of non pair-wise correlation features for DAG, DTN and DMST.} The figure shows the aggregate importance of non pair-wise correlation features using rolling window sizes $L \in \lbrace 126, 252, 504 \rbrace$ trading days and the DAG, DTN and DMST network filtering methods. Results show that the importance of these features increases with the time step $h$. The importance of non pair-wise correlation features for $L = 126$ trading days is higher than for $L = 252$ and $L = 504$ under all network filtering methods. The growth of the importance of this subset is consistent across all markets. Interestingly, the importance of non pair-wise correlation features changes according to the network filtering method.}
\label{fig:importance}
\end{figure*}
Results presented in Figure~\ref{fig:importance} show that non pair-wise correlation features help forecast the future market structure under the different network filtering methods. We observe that the importance of non pair-wise correlation features increases with $h$ and changes according to the network filtering method. Their importance is most visible for smaller $L$, such as $L = 126$, shown in Figures~\ref{fig:importance}(a),~\ref{fig:importance}(d) and~\ref{fig:importance}(g), where for $h = 20$ it reaches almost $80\%$ for NIFTY50 using the DAG method, $60\%$ for EUROSTOXX50 using DTN and almost $90\%$ for all markets using DMST. For the DMST method, shown in Figures~\ref{fig:importance}(g),~\ref{fig:importance}(h) and~\ref{fig:importance}(i), the importance of non pair-wise correlation features has a similar shape for the $L = 126$, $252$ and $504$ rolling window sizes. DAG results are shown in Figures~\ref{fig:importance}(a),~\ref{fig:importance}(b) and~\ref{fig:importance}(c). For short $h$ values, non pair-wise correlation attributes do not add much information compared to pair-wise correlation features. However, their importance rapidly increases with the time step $h$, suggesting that these attributes can be more useful than pair-wise correlation attributes in long-horizon forecasting exercises, particularly for short rolling window sizes. For $L = 252$ and $L = 504$, non pair-wise correlation features have less importance in forecasting networks modeled using the DAG and DTN network filtering methods. For DMST, the importance of non pair-wise features rapidly increases even for short $h$ values, a behavior different from DAG and DTN. A possible explanation is the low persistence of trees, as shown in Figure~\ref{fig:cross-similarity-dmst}; network features are thus able to add more information to the ML model compared to pair-wise correlation features.
\subsection{Dynamic Financial Networks}
\label{sec:dynamic-financial-networks}
There are many methods in the literature to model financial market structure; some of the most commonly used are correlation-based networks and network filtering methods~\cite{marti2021review}. Network filtering methods allow prompt and temporal analysis of the market structure by exploring snapshots of market data to model financial networks that represent the topology and the structure of the market. Using a rolling window approach, we can take snapshots in each time window of arbitrary length, allowing a temporal analysis of the market evolution~\cite{musmeci2014clustering}; the resulting networks are also called dynamic or temporal networks. Examples of the most common methods include the Minimal Spanning Tree approach~\cite{Mantegna1999}, the Planar Maximally Filtered Graph~\cite{Tumminello26072005}, the Directed Bubble Hierarchical Tree~\cite{song2012hierarchical}, asset graphs~\cite{onnela2003} and other approaches based on threshold networks~\cite{onnela2004clustering}.
In this study, we investigate three different network filtering methods to estimate financial market structure: \textit{(i)} Dynamic Asset Graph; \textit{(ii)} Dynamic Threshold Networks and \textit{(iii)} Dynamic Minimal Spanning Tree. We explore these three methods due to their importance for financial analysis, considering that there is a vast literature~\cite{onnela2003dynamic,onnela2003dynamics,meng2014systemic,mantegna1999introduction,onnela2003,Yang2008,onnela2004clustering} that uses these methods to study different characteristics of the structure of financial networks.
These methods estimate an asset distance matrix through co-movement metrics of daily returns. Let $P(t)$ be the closing price of an asset at day $t$; we consider the asset's daily log-return $R(t) = \log{P(t)} - \log{P(t-1)}$. First, we calculate a distance matrix that measures the co-movement of daily log-returns~\cite{Mantegna1999}, defined as
\begin{equation}
\label{eq:distance}
D_{i,j}(t) = \sqrt{2(1 - \rho_t(i,j))}, \,
\end{equation}
\noindent where $\rho_t(i,j)$ is the Pearson's correlation coefficient between the time series of log-returns of assets $i$ and $j$ at time $t$, $\forall i,j \in V$, where $V$ is the set of assets. The distance matrix is constructed by dividing the returns time-series $R(t)$ into rolling windows of size $L$ trading days with $\delta T$ trading days between two consecutive windows (time-step). The choice of window width $L$ and window time-step $\delta T$ is arbitrary, and it is a trade-off between having an analysis that is either too dynamic or too smooth~\cite{tumminello2007correlation}. The smaller the window width and the larger the window steps, the more dynamic the data are. We report results for $L \in \lbrace 126, 252, 504\rbrace$ and $\delta T = 5$ trading days. A dynamic financial network is defined as a temporal network
\begin{equation}
W = \langle V, E_1, \ldots, E_T : E_t \subseteq V \times V, \, \forall t \in \{1, \ldots, T\} \rangle,
\end{equation}
\noindent where vertices $i \in {V}$ correspond to the assets of interest. For every pair $\langle i, j \rangle$ at time window $t$, with $i,j \in {V}$ and $i \neq j$, there is a corresponding edge $(i,j)_t \in {E_t}$, and every edge has a weight $w_{i, j}(t) = D_{i, j}(t)$. Given the distance matrix $D_{i,j}(t)$ defined above, we can apply a network filtering method in order to create dynamic networks. The three methods evaluated in this work are described in the next sections.
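A minimal sketch of the rolling-window construction of the distance matrices $D(t)$ described above follows, assuming daily log-returns are stored in a \texttt{pandas} DataFrame with one column per asset (function and variable names are ours).
\begin{verbatim}
import numpy as np
import pandas as pd

def rolling_distance_matrices(returns: pd.DataFrame, L: int = 252,
                              step: int = 5):
    """Yield D(t) = sqrt(2 (1 - rho_t)) over rolling windows.

    `returns` holds daily log-returns (rows = days, columns = assets);
    one matrix is produced every `step` trading days.
    """
    for end in range(L, len(returns) + 1, step):
        rho = returns.iloc[end - L:end].corr()   # Pearson correlation
        # clip guards against tiny negative values from round-off
        d = np.sqrt(np.clip(2.0 * (1.0 - rho.values), 0.0, None))
        yield end, d
\end{verbatim}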
\subsubsection{Dynamic Asset Graph (DAG) }
A Dynamic Asset Graph~\cite{onnela2003} is a type of filtered financial network modeled by first ranking edges in ascending order of their weights $w_1(t), w_2(t), \dots , w_{N(N-1)/2}(t)$. The resulting graph is obtained by selecting the edges with the strongest connections (smallest distances). The number of edges is, of course, arbitrary. Here, we select the edges with weights in the top quartile, i.e., $w_1(t), w_2(t), \dots , w_{\floor{N(N-1)/8}}(t)$, as proposed in Souza et al.~\cite{SOUZA2019122343}. The main idea of this method is to identify the smallest distances in the stock market.
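An illustrative implementation of this selection rule is given below, assuming a distance matrix produced as in the previous sketch (names are ours).
\begin{verbatim}
import math
import networkx as nx
import numpy as np

def dynamic_asset_graph(D: np.ndarray) -> nx.Graph:
    """Keep the floor(N (N - 1) / 8) shortest-distance edges of D."""
    n = D.shape[0]
    edges = sorted((D[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    k = math.floor(n * (n - 1) / 8)     # top quartile of N(N-1)/2 pairs
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_weighted_edges_from((i, j, w) for w, i, j in edges[:k])
    return g
\end{verbatim}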
\subsubsection{Dynamic Threshold Networks (DTN) }
Considering the correlation matrix $\rho_t$ underlying the distance matrix $D(t)$ defined in Equation (\ref{eq:distance}), we create a filtered adjacency matrix $A$ to construct the financial network using the following rule~\cite{Yang2008,onnela2004clustering}:
\begin{equation}
A_{i,j}(t) = \left\{
\begin{array}{lr}
1, & \left| \rho_t(i,j) \right| \ge r_c\\
0, & \left| \rho_t(i,j) \right| < r_c
\end{array}
\right.
\end{equation}
\noindent where assets $i,j \in V$ and $\forall (i,j)_t \in E_t$. The critical value $r_c$ converts the correlation matrix into an undirected network, whereby $A_{ij}(t) = 1$ and $A_{ij}(t) = 0$ represent the existence and absence of an edge between $i$ and $j$ at time window $t$, respectively. We fixed the value of $r_c$ at $0.65$ because for $r_c \leq 0.65$ the network characteristics are submerged in large fluctuations~\cite{Yang2008}. It is important to observe that the DTN method can produce disconnected graphs and that the number of edges is dynamic. In general, the main goal of this method is to identify pairs of assets whose correlation is above the threshold $r_c$. This is different from DAG, where pairs with a correlation value lower than $r_c$ can be added to the network.
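A sketch of the thresholding rule, written over the correlation matrix as above (names are ours, not the authors' code):
\begin{verbatim}
import networkx as nx
import numpy as np

def dynamic_threshold_network(rho: np.ndarray,
                              r_c: float = 0.65) -> nx.Graph:
    """Keep an edge (i, j) whenever |rho_ij| >= r_c."""
    n = rho.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))          # isolated nodes are allowed
    g.add_edges_from((i, j)
                     for i in range(n) for j in range(i + 1, n)
                     if abs(rho[i, j]) >= r_c)
    return g
\end{verbatim}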
\subsubsection{Dynamic Minimal Spanning Tree (DMST) }
We create a Dynamic Minimal Spanning Tree~\cite{Mantegna1999} based on the smallest asset distances in the previously defined matrix $D(t)$. We use Kruskal's algorithm to identify the Minimal Spanning Tree (MST) of the fully connected graph defined by $D(t)$. The number of edges is fixed at $N - 1$, where $N$ is the number of assets. This method finds the smallest total distance needed to interconnect the market, producing the minimal market structure that connects all assets.
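Using \texttt{networkx}, the tree can be obtained directly (an illustrative sketch; names are ours):
\begin{verbatim}
import networkx as nx
import numpy as np

def dynamic_minimal_spanning_tree(D: np.ndarray) -> nx.Graph:
    """MST (N - 1 edges) of the fully connected distance graph D."""
    n = D.shape[0]
    g = nx.Graph()
    g.add_weighted_edges_from((i, j, D[i, j])
                              for i in range(n) for j in range(i + 1, n))
    return nx.minimum_spanning_tree(g, algorithm="kruskal")
\end{verbatim}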
\section{Introduction}
Multi-asset financial analyses, particularly optimal portfolio selection and portfolio risk management, traditionally rely
on the usage of a covariance matrix representative of market structure, which is commonly assumed to be time invariant.
Under this assumption, however, non-stationarity~\cite{1742-5468-2012-07-P07025,Morales20136470} and long range memory~\cite{Cont2005} can lead to misleading conclusions and spoil the ability to explain future market structure dynamics.
Empirical analyses of financial networks have been successfully used to study market structure dynamics, particularly to explain market interconnectedness from high-dimensional data~\cite{Mantegna1999,Tumminello10421,IORI2018637,marti2021review}. Under this approach, market structure is modeled as a network whose nodes represent different financial assets and edges represent one or many types of relevant relationships among those assets. There is a vast literature applying financial networks to descriptive analysis of market and portfolio dynamics, including market stability~\cite{morales2012dynamical}, information extraction~\cite{song2008analysis}, asset allocation~\cite{pozzi2013spread,Mineo2018} and dependency structure~\cite{Mantegna1999,Tumminello201040,musmeci2014clustering,song2012hierarchical,musmeci2017multiplex}. However, there is little research on the application of financial networks to market structure forecasting. Recent research on market structure inference makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance matrix~\cite{PhysRevE.94.062306}, achieving computationally efficient results. In a later study~\cite{SOUZA2019122343}, the authors forecast market structure based on a model that uses a principle of link formation by triadic closure in stock market networks. Spelta~\cite{spelta2017financial} proposed a method to predict abrupt market changes, inferring the future dynamics of stock prices by predicting future distances between them, using a tensor decomposition technique. Musmeci et al.~\cite{Musmeci2016} proposed a new tool to predict future market volatility using correlation-based stock networks, meta-correlation and logistic regression. Park et al.~\cite{park2020link} analyzed the evolution of the Granger causality network of global currencies and proposed a link prediction method incorporating the squared eta of the causality directions of two nodes as the weight of future edges. To build the causality network, they used the effective exchange rates of $61$ countries and showed that the predictive capacity of their model outperforms other static methods for predicting links. Other related work~\cite{castilho2019weighted} proposed a model for predicting links in weighted financial networks, whose predictions are used to define input variables for the portfolio management problem, increasing the financial return of the investment.
In this article, financial market structure forecasting is formulated as a link prediction problem in which we estimate the probability of adding or removing links in future networks. To tackle this problem, we developed a machine learning-based model that uses node- and link-specific financial network features to forecast stock-to-stock links based on past market structure. Applying machine learning algorithms to the decision-making process on stock markets is not a recent endeavor~\cite{trippi1992neural}. An increasing number of applications use machine learning-based models to predict the behavior of price time series~\cite{long2019deep}, forecast volatility~\cite{liu2019novel}, perform sentiment analysis for investment~\cite{pagolu2016sentiment} and generate automatic trading rules~\cite{potvin2004generating}. This paper provides a set of empirical experiments designed to address the following research questions:
\begin{enumerate}
\item To what extent can dynamic financial networks help forecast stock market correlation structure?
\item How do financial network topology features perform relative to traditionally used pair-wise correlation data to forecast stock market structure?
\item How does the predictability of market structure vary across multiple financial markets for the proposed models?
\end{enumerate}
Findings can be particularly useful to improve portfolio selection and risk management, which commonly rely on a backward-looking correlation matrix to estimate portfolio risk. To the best of our knowledge, this is the first study that combines financial network features and machine learning to forecast stock market structure.
The remainder of this paper is organized as follows: Section~\ref{sec:material-and-methods} describes the materials and methods used in the experiments; Section~\ref{sec:results-and-discussion} presents the results and discussion, including a descriptive analysis of the temporal stock networks and a predictive analysis of market structure forecasting; and Section~\ref{sec:conclusion} draws the conclusions.
\section{Stock Market Structure and Network Prediction}
In~\cite{Mantegna1999}, the authors introduce one of the most commonly used methods to perform structural and topological analysis of financial markets, where assets are represented as nodes and edges represent the relationship between assets based on correlation measures between the time series of their log-returns. It is used in several studies, including~\cite{gopikrishnan2000scaling,onnela2003dynamics,lee2012overall,bonanno2003topology,bonanno2004networks,eom2009topological}. In~\cite{onnela2004clustering}, the authors discuss how to select the relevant correlations from the correlation matrix among stocks and compare the results with random graphs. In addition to Pearson's correlation~\cite{feller1968introduction}, the most popular approach to measure the correlation between two assets, other approaches have been investigated to extract the market structure. In~\cite{yang2014cointegration}, the authors investigate the construction of financial networks using the co-integration coefficient between the main indices of world financial markets~\cite{granger1981some,johansen1990maximum}. In~\cite{tabak2010topological}, topological properties of financial networks in the Brazilian stock market are assessed using a Minimum Spanning Tree algorithm, employing the correlation matrix of the asset price variations of several sectors. These studies often describe the stock market structure using graphs~\cite{bonanno2004networks,lee2012overall}. In~\cite{spelta2017financial}, the author proposed a method to predict abrupt market changes by inferring the forthcoming dynamics of stock prices through the prediction of future distances between them, using a tensor decomposition technique. Other areas besides the stock market use time series transformations into complex networks for different analyses, such as time series clustering~\cite{ferreira2016time} and time-series link prediction~\cite{Yang2008}.
Another subject related to this work is stock market structure prediction, which we address as a link prediction task. In this task, previously known network information is used to find connections that may appear in the future or simply to find hidden connections. This predictive task is investigated in many real problems, mainly involving social networks~\cite{martinez2017survey,grover2016node2vec}. Some authors classify link prediction approaches according to the method, such as similarity-based methods, probabilistic and statistical methods, and algorithmic methods, which include methods based on Machine Learning (ML)~\cite{marti2017review}. Other authors differentiate the type of prediction according to the specific nature of the problem, such as dynamic link prediction or hidden link prediction, as well as the application of link prediction, such as recommendation in social networks or network completion~\cite{wang2015link}. ML-based methods treat the link prediction problem as a binary classification task whose classes are linked and not linked. In~\cite{al2006link}, the authors investigate the use of ML to identify possible links in future networks. In~\cite{da2012time}, the authors investigate the prediction of links using time series forecasting over similarity metrics.
In a real stock market scenario, relationships between assets can not only appear over time but can also disappear; assets thus have dynamic interactions. A sequence of dynamic interactions over time introduces another dimension to the challenge of mining and predicting link structure, named temporal link prediction~\cite{dunlavy2011temporal}. Temporal networks are a specific type of dynamic network in which time can be organized as a third-order tensor, or multi-dimensional array~\cite{wang2015link}.
A common deficiency of these methods is that they keep the links between nodes even when the corresponding relationship no longer exists.
To address this deficiency, we propose a model to predict future market correlation structure using link- and node-based financial network features, differing from previous studies mainly in its ability to identify the removal of links over time.
\section*{Acknowledgements}
D.C. and A.C.P.L.F.C. would like to thank CAPES, Intel and CNPq (grant 202006/2018-2) for their support.
\section*{Author contributions statement}
D.C. and T.T.P.S. developed the proposed model. D.C. and T.T.P.S. conceived and designed the experiments. D.C. and T.T.P.S. prepared figures and tables, implemented and carried out the experiments. All authors analyzed the results and wrote the manuscript. All authors reviewed the article.
\section*{Additional information}
\noindent \textbf{Competing Interests:} the authors declare no competing financial interests.
\\
\noindent \textbf{Supplementary Information:} provided with this document as Supplementary Material.
\section{Materials and Methods}
\label{sec:material-and-methods}
\input{financial_networks}
\input{prediction}
\subsection{Machine Learning Based Approach}
\label{sec:machine-learning-based-approach}
In this section, we describe the proposed machine learning based approach to forecast the stock market structure of a given market index. We address market structure forecasting as a network link prediction problem: given snapshots of financial networks up to time $t$, we want to accurately predict the edges that will be present in the network at a given future time $t'$. We choose times $t_0 < t < t'$ and provide an algorithm that accesses $W[t_0, t] = \langle V, E_{t_0}, \ldots, E_t \rangle$ to estimate the likelihood of each edge being present in $W[t']$, where $t' = t + h$ and $h \in \lbrace 1, 2, \dots, 20 \rbrace$ trading weeks.
Similarity-based methods and classifier-based methods are two of the most common approaches for link prediction~\cite{martinez2016survey}. In similarity-based methods~\cite{liben2007link}, the algorithm assigns a connection weight $score(x, y)$ to pairs of nodes $\langle x, y \rangle$ based on the input graph $G$, and then produces a list ranked in decreasing order of $score(x, y)$. These algorithms can be viewed as computing a measure of proximity or ``similarity'' between nodes $x$ and $y$. Common Neighbors, Jaccard Coefficient, Preferential Attachment, Adamic-Adar, and Resource Allocation are among the most popular local (node-based) indices, while Katz, Leicht-Holme-Newman, Average Commute Time, Random Walk, and Local Path represent global (path-based) indices. While the local indices are simple to compute, the global indices may provide more accurate predictions.
In classifier-based methods, link prediction is defined as a binary classification problem: a feature vector is extracted for each pair of nodes and a $1/0$ label is assigned based on the existence/non-existence of that link in the network. Any similarity-based method can provide the required feature vector for a supervised learning method~\cite{al2006link}, after which any conventional supervised learning algorithm can be applied to train a supervised link predictor. In this article, we apply a classifier-based method to forecast the financial market structure. Our approach uses financial network features as input to a machine learning model in order to create a link prediction method, as presented in Figure~\ref{fig:machine-learning}.
\begin{figure*}[ht!]
\centering
\includegraphics[trim=4.5cm 29.5cm 11.9cm 0.9cm, clip=true, width=0.7\textwidth]{fig/methodology/machine-learning-3-3.pdf}
\caption{\textbf{Building the machine learning dataset.} We calculate features for each node ranging from $1$ to $N$, where $N$ is the number of assets. We apply a pairwise concatenation of node and link features as input variables for the link prediction, while the edges of the network at time $t+h$ are used as the target variable, where $h$ is the number of trading weeks ahead.}
\label{fig:machine-learning}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[trim=0.5cm 21.5cm 9.7cm 13cm, clip=true, width=0.9\textwidth]{fig/methodology/machine-learning-3-3.pdf}
\caption{\textbf{Train and test sets used to induce the machine learning model.} Machine learning models were trained and tested using a rolling window approach. Considering $L$ as the size of the log-return time series and $t$ as the current time, we create the train set using data from $t-k$ to $t-1$ and the test set using data from $t$. The target of the supervised learning is the network $G(t+h)$, where $h$ is the number of trading weeks ahead. After training and testing the machine learning model, the time-step $\delta T$ is used to move the rolling window forward, in order to restart the process and re-train the machine learning model. The train set includes data from $1$ March $2005$ to $30$ May $2007$ and the test set has data from $30$ May $2007$ to $18$ December $2019$.}
\label{fig:machine-learning-train-test}
\end{figure*}
Figure~\ref{fig:machine-learning} presents the process used to create the machine learning database. Assuming $i$ and $j$ are two arbitrary nodes ranging from $1$ to $N$ and $t$ is the current time, an instance of the dataset used by the machine learning algorithm has the following predictive attributes: (a) node-level features of $i$; (b) node-level features of $j$; (c) link-level features of $(i,j)$. As previously described, the target of the supervised machine learning model is the existence of links in the network $G(t + h)$, where $h = 1, 2, \dots, 20$ trading weeks. Figure~\ref{fig:machine-learning} illustrates how we build the instances for the machine learning model, exemplified by the snapshot at time $t$.
We split the dataset between train and test sets taking into account the temporal sequence of the data. The train set includes data produced in the period from $1$ March $2005$ to $30$ May $2007$ and the test set has data from $30$ May $2007$ to $18$ December $2019$. Figure~\ref{fig:machine-learning-train-test} presents an illustration explaining how we created the train and test sets. Machine learning models were trained and tested using a rolling window approach. Considering $L$ as the size of the log-return time series, $t$ as current time and $t - k < t < t + h$, we create the train set using network features from $G(t - k)$, where $k = 1, 2, \ldots, 30 $. The test set contains data from the current network $G(t)$, in which $G(t + h)$ is the target, where $h = 1, 2, \dots, 20$ trading weeks. After training the machine learning model and testing it, we move the rolling window forward taking into account the time-step $\delta T = 5$ trading days ($1$ trading week) between two consecutive executions (see Supplementary Material, Section S.$1$ for further details).
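The construction of the instances can be sketched as follows; this assumes node and link features computed on $G(t)$ are available as dictionaries, and the target network $G(t+h)$ as a \texttt{networkx} graph (function and argument names are ours).
\begin{verbatim}
import numpy as np

def build_instances(node_feats, link_feats, g_target):
    """One instance per node pair: [features of i | features of j |
    link features of (i, j)], labelled 1 iff (i, j) is in G(t + h).

    `node_feats[i]` and `link_feats[(i, j)]` (with i < j) are feature
    vectors computed on G(t); `g_target` is the network h weeks ahead.
    """
    X, y = [], []
    nodes = sorted(node_feats)
    for a, i in enumerate(nodes):
        for j in nodes[a + 1:]:
            X.append(np.concatenate([node_feats[i], node_feats[j],
                                     link_feats[(i, j)]]))
            y.append(1 if g_target.has_edge(i, j) else 0)
    return np.array(X), np.array(y)
\end{verbatim}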
To assess the information rate that a machine learning model can extract from the feature set, we applied the XGBoost~\cite{chen2016xgboost} algorithm. In this experiment, the algorithm induces a predictive model for stock market structure forecasting. XGBoost is a fast, highly effective, interpretable and widely used machine learning model. Further information regarding the experimental setup is described in the Supplementary Material, Section S.$2$.
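Training and scoring then reduce to a standard supervised pipeline; a minimal sketch using the scikit-learn interface of XGBoost is shown below. The hyper-parameter values are placeholders, not the values used in the experiments; \texttt{X\_train}, \texttt{y\_train} and \texttt{X\_test} are instances built as in the previous sketch.
\begin{verbatim}
from xgboost import XGBClassifier

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
# Probability that each candidate pair is an edge of G(t + h):
edge_scores = model.predict_proba(X_test)[:, 1]
\end{verbatim}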
\subsubsection{Network Features}
\label{sec:network-based-features}
As previously mentioned, we propose an approach to market structure forecasting based on supervised machine learning. To provide information to train this supervised method, we extract a set of network features at the node and link levels, which are used as input to the machine learning model. We summarize the network features as follows:
\begin{itemize}
\item \textbf{Node-Level Features} assess the position of a node within the overall structure of a given graph $G(V,E)$~\cite{oliveira2012overview}. Table~\ref{tab:nodefeatures} presents a set of node-level features related to node/stock $i \in V$ used as input to the machine learning model.
\item \textbf{Link-Level Features} examine both the contents and patterns of relationships in a given graph $G(V,E)$ and measure the implications of these relationships~\cite{oliveira2012overview}. Table~\ref{tab:linkfeatures} presents the set of link-level features related to link $(i,j) \in E$ used as input to the machine learning model.
\end{itemize}
Researchers in finance, particularly in portfolio management, commonly use asset correlation in important use cases such as risk management. Given the importance of this information in financial analyses, we also explore it as an input feature for market structure forecasting. However, we are interested in analyzing how topological information helps to forecast the market structure itself. For this reason, we separated the feature set into two distinct subsets, labeled according to their source of information: \textit{(i)} pair-wise correlation features, which are attributes based on asset correlation and not derived from any other network information, and \textit{(ii)} non pair-wise correlation features, which are attributes derived from the network topology. While pair-wise correlation features are traditionally used in financial analysis, the importance of non pair-wise correlation features for forecasting market structure is a research question investigated in this work; separating the subsets allows us to compare their information gain in market structure forecasting. In Table~\ref{tab:nodefeatures}, all features are non pair-wise correlation attributes. In Table~\ref{tab:linkfeatures}, the pair-wise correlation features are marked with ($^\ast$).
\begin{table}[ht!]
\centering
\caption{\textbf{Node-Level Features:} Features were calculated for node $i$, $\forall \text{ } i \in V$, for a given graph $G(V,E)$. Consider $N_i$ as the set of adjacent vertices (neighborhood) of node $i$. This set contains only non pair-wise correlation features.}
\begin{tabular}{C{4.0cm} C{12.0cm}}
\hline
\multicolumn{1}{c}{\textbf{Name}} & \textbf{Definition} \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Degree}} & $$deg(i) = \vert i \vert$$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Weighted Node Degree}} & $$deg_w(i) = \sum_{j \in N_i }{w_{ < i, j > }},$$ where $w_{ < i, j > }$ is the weight of the edge $e(i,j)$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Average Neighbor Degree}} & $$avg (i) = \frac{ \sum_{j \in N_i }{ \vert j \vert } }{ \vert i \vert}$$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Propensity of $i$ to Increase its Degree}} & $$\gamma (i) = \frac{\vert i \vert}{deg_w(i)} $$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Betweenness}} & $$b(v) = \sum_{i,j \in V \setminus v}{ \frac{ \sigma_{ij}(v)}{\sigma_{ij}} },$$ where $\sigma_{ij}(v)$ is the number of shortest paths between $i$ and $j$ passing through node $v$ and $\sigma_{ij}$ the total number of shortest paths from $i$ to $j$\\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Closeness}} & $$nc(i) = \frac{n - 1}{ \sum_{j \in V \setminus i}{d(i,j)}}, $$ where $d(i,j)$ represents the distance between $i$ and $j$ and $n$ is the number of nodes in the graph\\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Eigenvector}} & $$ne(i) = x_i = \frac{1}{ \lambda } \sum_{j=1}^{n}{d_{ij}x_j}, $$ where $d_{ij}$ represents an entry of the adjacency matrix $D$ ($0$ or $1$), $\lambda$ denotes the largest eigenvalue, and $x_i$ and $x_j$ denote the centralities of nodes $i$ and $j$, respectively \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Clustering Coefficient}} & $$cc(i) = \frac{2 \left| e_{jk} \right| }{\vert i \vert * ( \vert i \vert - 1 )} : j, k \in N_i, e_{jk} \in E$$ \\
\hline
\end{tabular}%
\label{tab:nodefeatures}%
\end{table}%
\begin{table}[ht!]
\centering
\caption{\textbf{Link-Level Features:} Features were calculated between nodes $i$ and $j$, $\forall \text{ } (i, j) \in E$, for a given graph $G(V,E)$. Pair-wise correlation features are marked with $(^\ast)$; the remaining are non pair-wise correlation features.
Consider $N_i$ and $N_j$ as the sets of adjacent vertices of nodes $i$ and $j$, respectively.}
\begin{tabular}{C{4.0cm} C{12.0cm}}
\hline
\multicolumn{1}{c}{\textbf{Name}} & \textbf{Definition} \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Link Existence in $G(t)$} ($^\ast$)} & \begin{equation*}
E(i,j) = \begin{cases}
1, & \quad \text{link exists}, \\[0ex]
0, & \quad \text{link does not exist}.
\end{cases}
\end{equation*} \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Correlation Value} ($^\ast$)} & $$C (i,j) = \rho_{ij},$$ where $\rho_{i,j}$ is the Pearson’s correlation coefficient between time series of log-returns of assets $i$ and $j$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Common neighbors}} & $$CN (i,j) = \vert N_i \cap N_j \vert$$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Jaccard Coefficient}} & $$JC (i,j) = \frac{\vert N_i \cap N_j \vert}{\vert N_i \cup N_j \vert}$$ \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Adamic-Adar Coefficient}} & $$AA (i,j) = \sum_{k \in N_i \cap N_j }{ \frac{1}{ \log{\vert N_k \vert} } }, $$ where $N_k$ is the set of adjacent vertices of node $k$\\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Sorenson-Dice Coefficient}} & $$SDC (i,j) = \frac{2 * \vert N_i \cap N_j \vert}{\vert i \vert + \vert j \vert}$$ \\[0ex]
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Edge Betweenness}} & $$B (i,j) = \sum_{s,t \in V}{ \frac{ \sigma_{st}(e_{ij})}{\sigma_{st}} },$$ where $\sigma_{st}(e_{ij})$ is the number of shortest paths between $s$ and $t$ crossing the edge $e_{ij} = (i,j)$ and $\sigma_{st}$ is the total number of shortest paths from $s$ to $t$
\\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Same Community}~\cite{blondel2008fast}} &
\begin{equation*}
SC(i,j) = \begin{cases}
1, & \quad \text{if $i$ and $j$ $\in$ same community}, \\
0, & \quad \text{if $i$ and $j$ $\notin$ same community}.
\end{cases}
\end{equation*} \\
\hline
\multicolumn{1}{C{4.0cm}}{\textit{Preferential Attachment}} & $$PA(i,j) = \vert i \vert * \vert j \vert,$$ where $\vert i \vert$ and $\vert j \vert$ represent the node degree of vertex $i$ and $j$\\
\hline
\end{tabular}%
\label{tab:linkfeatures}%
\end{table}%
\subsubsection{Model Evaluation}
We calculate the \textit{Area Under the ROC Curve} (AUC) to evaluate the predictive performance of the link prediction methods. This metric is widely applied to binary classification and unbalanced problems and ranges from $0.5$ to $1$, where $0.5$ represents a random naive algorithm and $1$ represents the best possible result. The AUC gives a summary metric of an algorithm's overall performance across prediction set sizes, while a detailed look into the shape of the ROC curve reveals the predictive performance of the algorithm at each prediction set size~\cite{huang2009time}.
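Given the predicted edge scores and the true labels of $G(t+h)$, the AUC computation is a one-liner (a sketch; variable names follow the previous code fragments):
\begin{verbatim}
from sklearn.metrics import roc_auc_score

# y_test: 1/0 link labels of G(t + h); edge_scores: predicted probabilities
auc = roc_auc_score(y_test, edge_scores)
\end{verbatim}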
To verify the performance of the proposed method, we compared it against the following similarity-based methods, commonly used in the literature for link prediction and separated into three categories~\cite{mutlu2019review}:
\begin{enumerate}
\item \textbf{Local Similarity Methods}
\begin{itemize}
\item Common Neighbors~\cite{liben2007link} (CN): This is a simple and effective link prediction method based on the common neighbors shared by two nodes. Pairs of nodes with a high number of common neighbors tend to establish a link;
\item Preferential Attachment~\cite{barabasi1999emergence} (PA): This method assumes that new links form preferentially between nodes with higher degrees rather than between nodes with lower degrees;
\item Jaccard Coefficient~\cite{mutlu2019review} (JC): This method is based on Jaccard's similarity coefficient, taking into account the number of common neighbors shared by two nodes, normalized by the total number of neighbors of both nodes;
\item Adamic-Adar~\cite{adamic2003friends} (AA): This method is also based on the common neighbors shared by two nodes. Instead of using the raw number of common neighbors as CN does, it is defined as the sum of the inverse logarithmic degree of each shared neighbor.
\end{itemize}
\item \textbf{Quasi-Local Similarity Method}
\begin{itemize}
\item Local Path Index~\cite{zhou2009predicting} (LP): Similar to CN, this method also exploits paths of length $2$ and $3$ between the two nodes, instead of using only information about the neighbors they share.
\end{itemize}
\item \textbf{Global Similarity Method}
\begin{itemize}
\item Random Walk with Restart~\cite{brin1998anatomy} (RW): Based on a random walk, it is a special case of a Markov chain that starts from a given node and repeatedly moves to a randomly selected neighbor. The restart version computes the probability that a random walker starting from node $x$ visits node $y$ and comes back to the initial node $x$~\cite{mutlu2019review}.
\end{itemize}
\end{enumerate}
In addition to these methods, we included a naive Time-Invariant (TI) baseline in our experiments. This algorithm uses the link occurrence in graph $G(t)$ as the prediction of link occurrence in graph $G(t+h)$, assuming that the market structure is time invariant. This assumption is traditionally used in risk management algorithms, which commonly rely on a backward-looking covariance matrix to estimate portfolio risk~\cite{markowitz1952,SOUZA2019122343}.
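For reference, the local baselines and the TI benchmark can be computed with \texttt{networkx} as sketched below (an illustrative fragment; the function name is ours, and \texttt{pairs} should be a list of candidate node pairs).
\begin{verbatim}
import networkx as nx

def baseline_scores(g: nx.Graph, pairs):
    """Baseline link scores for candidate node pairs of G(t).

    Returns common neighbours (CN), Jaccard (JC), Adamic-Adar (AA),
    preferential attachment (PA) and the time-invariant benchmark (TI).
    """
    jc = {(u, v): s for u, v, s in nx.jaccard_coefficient(g, pairs)}
    aa = {(u, v): s for u, v, s in nx.adamic_adar_index(g, pairs)}
    pa = {(u, v): s for u, v, s in nx.preferential_attachment(g, pairs)}
    cn = {(u, v): len(list(nx.common_neighbors(g, u, v)))
          for u, v in pairs}
    ti = {(u, v): 1.0 if g.has_edge(u, v) else 0.0 for u, v in pairs}
    return cn, jc, aa, pa, ti
\end{verbatim}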
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\fontfamily{phv}\fontsize{16}{19}\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\fontfamily{phv}\fontsize{14}{17}\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\fontfamily{phv}\fontsize{14}{17}\selectfont}}
\makeatother
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{enumerate}
\usepackage{xcolor}
\usepackage{natbib}
\usepackage{url}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}%
{#1}\small\normalsize} \spacingset{1}
\if00
{
\title{\bf Lagrangian Relaxation for Mixed-Integer Linear Programming: Importance, Challenges, Recent Advancements, and Opportunities}
\author{Mikhail A. Bragin\\
Department of Electrical and Computer Engineering, \\University of Connecticut, Storrs, USA}
\date{}
\maketitle
} \fi
\if10
{
\title{\bf \emph{IISE Transactions} \LaTeX \ Template}
\author{Author information is purposely removed for double-blind review}
\bigskip
\bigskip
\bigskip
\begin{center}
{\LARGE\bf \emph{IISE Transactions} \LaTeX \ Template}
\end{center}
\medskip
} \fi
\bigskip
\begin{abstract}
Operations in areas of importance to society are frequently modeled as Mixed-Integer Linear Programming (MILP) problems. While MILP problems suffer from combinatorial complexity, Lagrangian Relaxation has been a beacon of hope to resolve the associated difficulties through decomposition. Due to the non-smooth nature of Lagrangian dual functions, however, the coordination aspect of the method has posed serious challenges. This paper presents several significant historical milestones (beginning with Polyak's pioneering work in 1967) toward improving Lagrangian Relaxation coordination through improved optimization of non-smooth functionals. It then presents the most recent developments in Lagrangian Relaxation for the fast resolution of MILP problems, and briefly discusses the opportunities that Lagrangian Relaxation can provide at this point in time.
\end{abstract}
\noindent%
{\it Keywords:} Combinatorial Optimization; Decomposition and Coordination; Lagrangian Relaxation; Discrete Optimization; Duality; Mixed-Integer Linear Programming
\spacingset{1.5}
\section{Introduction} \label{sec1}
The aim of this paper is to review Lagrangian-Relaxation-based methods for \textit{separable} Mixed-Integer Linear Programming (MILP) problems, which are formally defined as follows:
\begin{flalign}
& \min_{(x,y) := \left\{x_i,y_i\right\}_{i=1}^I} \Bigg\{\sum_{i=1}^I \left((c_i^x)^T x_i + (c_i^y)^T y_i\right) \Bigg\}, \label{eq1}
\end{flalign}
whereby $I$ subsystems are coupled through the following constraints
\begin{flalign}
& s.t. \;\; \sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b = 0, \;\; \left\{x_i,y_i\right\} \in \mathcal{F}_i, i = 1, \dots, I. \label{eq2}
\end{flalign}
\noindent The \textit{primal problem} \eqref{eq1}-\eqref{eq2} is assumed to be feasible and the feasible region $\mathcal{F} \equiv \prod_{i=1}^I \mathcal{F}_i$ with $ \mathcal{F}_i \subset \mathbb{Z}^{n_i^x} \times \mathbb{R}^{n_i^y}$ is assumed to be bounded and finite.
Because of integer variables, Lagrangian Relaxation leads to non-smooth optimization in the dual space. Accordingly, key non-smooth optimization methods will also be reviewed.
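Although the dual problem itself is developed in Section \ref{sec2}, it is useful to fix notation here; relaxing the coupling constraints \eqref{eq2} with multipliers $\lambda$ yields the standard Lagrangian dual function (written here as a sketch for orientation):
\begin{flalign}
& q(\lambda) = \min_{\left\{x_i,y_i\right\} \in \prod_{i=1}^I \mathcal{F}_i} \Bigg\{\sum_{i=1}^I \left((c_i^x)^T x_i + (c_i^y)^T y_i\right) + \lambda^T \Bigg(\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b\Bigg)\Bigg\}. \nonumber
\end{flalign}
\noindent The minimization separates into $I$ independent subproblems, one per subsystem. Being a pointwise minimum of finitely many functions that are linear in $\lambda$, $q(\lambda)$ is concave and polyhedral, and hence non-differentiable at the ``kinks'' where the minimizing solution changes.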
\subsection{Importance and Difficulties of MILP Problems}
MILP has multiple applications in problems of importance to society:
ambulance relocation \citep{lee2022lagrangian}, balanced item placement \citep{gasse2022machine}, cost-sharing for ride-sharing \citep{hu2021cost}, drop box location \citep{schmidt2022locating}, efficient failure detection in large-scale distributed systems \citep{er2022data}, home healthcare routing \citep{dastgoshade2020lagrangian}, home service routing and appointment scheduling \citep{tsang2022stochastic}, inventory control under demand and lead time uncertainty \citep{thorsen2017robust}, job-shop scheduling \citep{liu2021novel}, facility location \citep{BASCIFTCI2021548}, flow-shop scheduling \citep{hong2019admission, balogh2022milp, oztop2022metaheuristics}, freight transportation \citep{archetti2021optimization}, location and inventory prepositioning of disaster relief supplies \citep{shehadeh2022stochastic}, machine scheduling with sequence-dependent setup times \citep{yalaoui2021identical}, maritime inventory routing \citep{gasse2022machine}, multi-agent path finding with conflict-based search \citep{huang2021learning}, multi-depot electric bus scheduling \citep{gkiotsalitis2022exact}, multi-echelon/multi-facility green reverse logistics network design \citep{reddy2022multi}, optimal physician staffing \citep{prabhu2021physician}, optimal search path with visibility \citep{morin2023ant}, oral cholera vaccine distribution \citep{smalley2015optimized}, outpatient colonoscopy scheduling \citep{shehadeh2020distributionally}, pharmaceutical distribution \citep{zhu2018design}, plant factory crop scheduling \citep{huang2020plant}, post-disaster blood supply \citep{hamdan2020robust,kamyabniya2021robust}, real assembly line balancing with human-robot collaboration \citep{nourmohammadi2022balancing}, reducing vulnerability to human trafficking \citep{kaya2022improving}, restoration planning and crew routing \citep{MORSHEDLOU2021107626}, ridepooling \citep{gaul2022event}, scheduling of unconventional oil field development \citep{soni2021mixed}, security-constrained optimal power flow \citep{velloso2021exact}, semiconductor manufacturing \citep{chang2017stochastic}, surgery scheduling \citep{kayvanfar2021new}, unit commitment \citep{kim2018temporal, chen2019distributed, li2019multi, chen2020high, li2020robust, van2021decomposition}, vehicle sharing and task allocation \citep{arias2022vehicle}, workload apportionment \citep{gasse2022machine}, and many others. Because of the integer variables $x$, MILP problems are NP-hard, and instances of practical size are generally difficult to solve to optimality due to combinatorial complexity: the computational effort increases \textit{super-linearly} (i.e., exponentially) as the problem size increases. For a number of practical problems, even obtaining a feasible solution may require significant computational effort. Additionally, many important problems demand short solving times (ranging from 20 minutes down to a few seconds) as well as high-quality solutions.
By the same token, with decreasing problem size, NP-hardness leads to the \textit{super-linear} reduction of complexity. The \textit{dual} decomposition and coordination Lagrangian Relaxation method is promising to exploit this reduction of complexity; the method essentially ``reverses'' combinatorial complexity upon decomposition, thereby drastically reducing the effort required to solve subproblems (each subproblem $i$ corresponds to a subsystem $i$). Lagrangian Relaxation is also deeply rooted in economic theory, whereby the solutions obtained rest upon the economic principle of ``supply and demand.'' When the ``demand'' exceeds the ``supply,'' Lagrangian multipliers (which can be viewed as ``shadow prices'') increase (and vice versa), thereby discouraging subsystems from making less ``economically viable'' decisions. Notwithstanding the advantage of the decomposition aspect, the ``price-based'' coordination of the method (to appropriately coordinate the subproblems), however, has been the subject of intensive research for many decades because of the fundamental difficulties of the underlying non-smooth optimization of the associated dual functions as explained ahead.
The purpose of this paper is to present a brief overview of the key milestones in the development of the Lagrangian Relaxation method for MILP problems as well as in the optimization of convex non-smooth functions. The rest of the paper is structured as follows:
\begin{enumerate}
\item At the beginning of Section \ref{sec2}, the Lagrangian dual problem
is presented and the difficulties of Lagrangian Relaxation on a pathway to solving MILP problems are explained. In subsequent subsections, the difficulties are resolved one by one;
\item In subsection \ref{sec21}, early research on non-smooth optimization \citep{polyak1967general, polyak1969minimization} is presented to lay the foundation for further developments; specifically, Polyak's formula \citep{polyak1969minimization}, which depends on the \textit{optimal dual value} $q(\lambda^*)$ to ensure a geometric/linear convergence rate, is presented; \item In subsection \ref{sec22}, the \textit{subgradient-level} method \citep{goffin1999convergence} is presented to ensure convergence without the need to know $q(\lambda^*)$;
\item In subsection \ref{sec23}, the fundamental difficulties associated with subgradient methods (high computational effort and zigzagging of multipliers) are explained;
\item In subsections \ref{sec24} and \ref{sec25}, two separate research thrusts (\textit{surrogate} \citep{kaskavelis1998efficient, zhao1999surrogate} and \textit{incremental} \citep{nedic2001incremental}) to reduce computational effort as well as to alleviate zigzagging of multipliers are reviewed; the former thrust still requires $q(\lambda^*)$ for convergence; the latter thrust avoids the need to know $q(\lambda^*)$ following the ``subgradient-level'' ideas presented in subsection \ref{sec22}; \item In subsection \ref{sec26}, Surrogate Lagrangian Relaxation (SLR) \citep{bragin2015convergence}, which proved convergence without $q(\lambda^*)$ by exploiting ``contraction mapping'' while inheriting the convergence properties of the \textit{surrogate} method of subsection \ref{sec24}, is reviewed;
\item In subsection \ref{sec27}, further methodological advancements for SLR are presented; to accelerate the reduction of constraint violations while enabling the use of MILP solvers, ``absolute-value'' penalties have been introduced \citep{bragin2018scalable}; to efficiently coordinate distributed entities while avoiding the synchronization overhead, computationally distributed version of SLR has been developed to efficiently coordinate distributed subsystems in an asynchronous way \citep{bragin2020distributed};
\item In subsection \ref{sec28}, Surrogate ``Level-Based'' Lagrangian Relaxation \citep{bragin2022surrogate} is reviewed; this first-of-the-kind method exploits the linear-rate convergence potential intrinsic to Polyak's formula presented in subsection \ref{sec21}, but without the knowledge of $q(\lambda^*)$ and without the heuristic adjustments of its estimates presented in subsection \ref{sec22}. Rather, an estimate of $q(\lambda^*)$ has been innovatively determined purely through a simple constraint satisfaction problem; the surrogate concept \citep{zhao1999surrogate, bragin2015convergence} ensures low computational requirements as well as alleviated zigzagging; an accelerated reduction of constraint violations is achieved through ``absolute-value'' penalties \citep{bragin2018scalable}, enabling the use of MILP solvers;
\item In Section \ref{Conclusion}, a brief conclusion is provided with future directions delineated.
\end{enumerate}
\section{Lagrangian Duality for Discrete Programs and Non-Smooth Optimization} \label{sec2}
The Lagrangian \textit{dual problem} that corresponds to the original MILP problem \eqref{eq1}-\eqref{eq2} is the following \textit{non-smooth} optimization problem:
\begin{flalign}
& \max_{\lambda} \{q(\lambda): \lambda \in \Omega \subset \mathbb{R}^m\}, \label{eq3}
\end{flalign}
where the convex \textit{dual function} is defined as follows:
\begin{flalign}
& q(\lambda) = \min_{(x,y)} \big\{L(x,y,\lambda), \left\{x_i,y_i\right\} \in \mathcal{F}_i, i = 1,\dots,I \big\}. \label{eq4}
\end{flalign}
Here $L(x,y,\lambda) \equiv \sum_{i=1}^I \big((c_i^x)^T x_i + (c_i^y)^T y_i\big) + \lambda^T \cdot \big(\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b\big)$ is the \textit{Lagrangian function} obtained by relaxing coupling constraints \eqref{eq2} by using Lagrangian multipliers $\lambda$. The minimization of $L(x,y,\lambda)$ within \eqref{eq4} with respect to $\{x,y\}$ is referred to as the \textit{relaxed problem}, which is separable into subproblems due to the additivity of $L(x,y,\lambda)$. This feature will be exploited starting from subsection \ref{sec24}.
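To make the separability explicit: since the term $-\lambda^T \cdot b$ does not depend on $\{x,y\}$, the minimization within \eqref{eq4} splits across subsystems as
\begin{flalign}
& q(\lambda) = -\lambda^T \cdot b + \sum_{i=1}^I \min_{\{x_i,y_i\} \in \mathcal{F}_i} \Big\{ (c_i^x)^T x_i + (c_i^y)^T y_i + \lambda^T \cdot \big(A_i^x x_i + A_i^y y_i\big) \Big\}. \nonumber
\end{flalign}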
Even though the original \textit{primal problem} \eqref{eq1}-\eqref{eq2} is non-convex, $q(\lambda)$ is always continuous and concave with the feasible set $\Omega$ being always convex. Due to integer variables $x$ in the primal space, $q(\lambda)$ is non-smooth with facets (each representing a particular solution to \eqref{eq4}) intersecting at ridges where derivatives of $q(\lambda)$ exhibit discontinuities; in particular, $q(\lambda)$ is non-differentiable at $\lambda^{*}$. As a result, subgradients (referred to in early literature as ``generalized gradients'' or ``subdifferentials'') are generally non-ascending: when $\lambda$ are updated along subgradients, dual values may decrease. Moreover, subgradients may be almost perpendicular to the directions toward optimal multipliers $\lambda^*$, thereby leading to zigzagging of $\lambda$ across ridges of the dual function (see Figure \ref{fig_ex1} for an illustration).
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\linewidth]{DualFunction.pdf}
\caption{An example of a dual function that illustrates the difficulties associated with subgradient methods. Solid lines denote the level curves, dash-dotted lines denote the ridges of the dual function whereby the gradients are not defined (possible subgradient directions at points A and B are shown by solid arrows), and dashed lines denote the subgradient direction from point B toward optimal multipliers. This Figure is taken from \citep{bragin2022surrogate} with permission.}
\label{fig_ex1}
\end{figure}
While Lagrangian multipliers $\lambda$ are generally fixed parameters within \eqref{eq4}, $\lambda$ are ``dual'' decision variables with respect to the dual problem \eqref{eq3}. Traditionally, \eqref{eq3} is maximized by iteratively updating $\lambda$ by making a series of steps $s^k$ along subgradients $g(x^k,y^k)$ as:
\begin{flalign}
& \lambda^{k+1} = \lambda^k + s^k \cdot g(x^k,y^k), \label{eq5}
\end{flalign}
where $\{x^k,y^k\}$
is a concise way to denote an optimal solution $\{x^*(\lambda^k),y^*(\lambda^k)\}$ to the relaxed problem \eqref{eq4} with multipliers equal to $\lambda^k.$ Within Lagrangian Relaxation, subgradients are defined as levels of constraint violations $g(x^k,y^k) \equiv \left(\sum_{i=1}^I A_i^x x_i^k + \sum_{i=1}^I A_i^y y_i^k - b\right)$; for compactness, $g(x^k,y^k)$ will be denoted simply as $g^k$ as appropriate. If inequality constraints $\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i \leq b$ are present, they are generally converted into equality constraints by introducing non-negative real-valued slack variables $z$ such that $\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i + z = b.$ Multipliers are then updated per \eqref{eq5} with a subsequent projection onto the nonnegative orthant, the set delineated by the constraints $\lambda \geq 0.$
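To make the update \eqref{eq5} concrete, a minimal sketch in Python follows; the oracle \texttt{solve\_relaxed}, which minimizes the Lagrangian for fixed multipliers and returns the constraint-violation vector $g^k$, is a hypothetical placeholder that in practice encapsulates the (possibly expensive) solution of the relaxed problem \eqref{eq4}:
\begin{verbatim}
import numpy as np

def subgradient_method(solve_relaxed, lam0, stepsize, n_iter=100,
                       project=False):
    """Dual ascent per (5). `solve_relaxed(lam)` is a hypothetical
    oracle returning g^k = sum_i(A_i^x x_i^k + A_i^y y_i^k) - b."""
    lam = np.asarray(lam0, dtype=float)
    for k in range(n_iter):
        g = solve_relaxed(lam)        # one subgradient of q at lam
        lam = lam + stepsize(k) * g   # multiplier update (5)
        if project:                   # relaxed inequalities: lam >= 0
            lam = np.maximum(lam, 0.0)
    return lam
\end{verbatim}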
Because the dual problem results from the relaxation of coupling constraints, dual values are generally less than primal values: $q(\lambda^k) < f(x^k,y^k),$\footnote{In this particular case, $f(x^k,y^k) \equiv \sum_{i=1}^I \left((c_i^{x})^T x_i^k + (c_i^{y})^T y_i^k \right)$ such that $\{x^k,y^k\}$ satisfy constraints \eqref{eq2}.} i.e., there is a \textit{duality gap}, the relative difference between $q(\lambda^k)$ and $f(x^k,y^k)$.\footnote{Dual values can be used to quantify the quality of the solution $\{x^k,y^k\}$.} Because of the discrete nature of the primal problem \eqref{eq1}-\eqref{eq2}, even at optimality, the duality gap is non-zero, that is, $q(\lambda^*) < f(x^*,y^*).$
Consequently, maximization of the dual function does not lead to an optimal primal solution $(x^*,y^*)$ or even a feasible solution. To obtain solutions feasible with respect to the original problem \eqref{eq1}-\eqref{eq2}, solutions to the relaxed problem $\{x^k_i,y^k_i\}$ are typically perturbed heuristically.\footnote{To solve MILP problems, Lagrangian Relaxation is often regarded as a heuristic. However, in dual space, the Lagrangian relaxation method is exact; the method is also capable of helping to improve solutions through the multipliers update, unlike many other heuristic methods.} Generally, the closer the multipliers are to their optimal values $\lambda^*$, the smaller the levels of constraint violations (owing to the concavity of the dual function), and, therefore, the easier the search for feasible solutions.
In summary, the roadblocks on the path of Lagrangian Relaxation toward efficiently solving MILP problems are the following:
\begin{enumerate}
\item Non-differentiability of the dual function;
\begin{enumerate}
\item Subgradient directions are non-ascending;
\item Necessary and sufficient conditions for extrema are inapplicable;
\end{enumerate}
\item High computational effort is required to compute subgradient directions if the number of subsystems is large;
\item Zigzagging of multipliers across ridges of the dual function, leading to many iterations required for convergence; this difficulty follows from the non-differentiability of the dual function, but it deserves a separate resolution;
\item Solutions $\{x^k_i,y^k_i\}$ to the relaxed problem, when put together, do not satisfy constraints \eqref{eq2}.
Moreover,
\item Even at $\lambda^*$, levels of constraint violations may be large and the heuristic effort to ``repair'' the relaxed problem solution $\{x^k_i,y^k_i\}$ may still be significant.
\end{enumerate}
In order to resolve difficulty 1(b), stepsizes need to approach zero (yet, this condition alone is not sufficient), as will be discussed in the subsection that follows. This requirement puts a restriction on the methods that will be reviewed.
\textbf{Scope.} By examining the above-mentioned difficulties (which will also be referred to as D1(a), D1(b), D2, D3, D4, and D5), several stages in the development of Lagrangian Relaxation and its applications to optimizing non-smooth dual functions and solving MILP problems will be chronologically reviewed, along with specific features of the methods that address the above difficulties.
In view of the above difficulties such as D1(b) and D2, several research directions, though having their own merit, will be excluded,
for example:
\begin{enumerate}
\item \textbf{The Method of Multipliers.} The Alternate Direction Method of Multipliers (ADMM), which is derived from Augmented Lagrangian Relaxation (ALR) (the ``Method of Multipliers''), introduces quadratic penalties to penalize violations of relaxed constraints, improving the convergence of Lagrangian Relaxation. The two methods (ADMM and ALR), however, only converge when solving continuous primal problems. Without stepsizes approaching zero, neither method converges [in the dual space] when solving discrete primal problems, and thus neither resolves D1(b). Nevertheless, the penalization idea underlying ALR led to the development of other LR-based methods with improved convergence as described in subsection \ref{sec27}.
\item \textbf{The Bundle Method.} The Bundle Method's idea is to obtain the so-called $\varepsilon-$ascent direction to update multipliers \citep{zhao2002new}. Considering that the non-differentiability of dual functions may generally result in non-ascending subgradient directions, the Bundle method resolves D1(a). Since the relaxed problems need to be solved several times \citep{zhao2002new}, the effort required to obtain multiplier-updating directions exceeds that required in subgradient methods, thus the method does not resolve D2.
\end{enumerate}
\subsection{
Minimization of ``Unsmooth Functionals''}
\label{sec21}
Optimization of non-smooth convex functions, a direction that stems from the seminal work of \cite{polyak1967general}, is a broader subject than the optimization of $q(\lambda)$ within Lagrangian Relaxation. To present the underlying principles that support Lagrangian Relaxation to efficiently solve MILP problems, the work of \cite{polyak1967general} is discussed next.
\noindent \textbf{Subgradient Method with ``Non-Summable'' Stepsize.} While subgradients are generally non-descending (non-ascending) for minimization (maximization) problems \cite[p.33]{polyak1967general}, convergence to the optimal solution of a non-smooth optimization problem (e.g., to $\lambda^*$ maximizing $q(\lambda)$) was proven under stepsizes (frequently dubbed \textit{non-summable}; e.g., $s^k = \frac{1}{k}$) satisfying the following conditions:
\begin{flalign}
& s^k > 0, \quad \lim_{k \rightarrow \infty} s^k = 0, \quad \sum_{k=1}^{\infty}{s^k} = \infty. \label{eq6}
\end{flalign}
\noindent \textbf{Subgradient Method with Polyak's Stepsize.}
As Polyak noted in his later work \cite[p.15]{polyak1969minimization}, non-summable stepsizes lead to very slow convergence. Intending to achieve a \textit{geometric} (also referred to as \textit{linear}\footnote{Superlinear convergence is also possible; however, (1) a reformulation of the dual problem \citep{charisopoulos2022superlinearly} is required, and (2) within the Lagrangian Relaxation framework, a dual function is generally unavailable, as argued in subsection \ref{sec23}.}) rate of convergence so that $\|\lambda^k - \lambda^*\|$ is monotonically decreasing, Polyak developed the stepsizing formula, which, for the problem under consideration, is presented in the following way:
\begin{flalign}
& 0 < s^k < \gamma \cdot \frac{q(\lambda^{*}) - q(\lambda^k)}{\big\|g(x^k,y^k)\big\|^2}, \gamma < 2. \label{eq7}
\end{flalign}
In the simplest form, a rendition of Polyak's result can be presented as follows. First, expand $\|\lambda^*-\lambda^{k+1}\|^2$ by using the update \eqref{eq5}:
\begin{flalign}
& \|\lambda^*-\lambda^{k+1}\|^2 = \|\lambda^*-\lambda^{k}\|^2 - 2 \cdot s^k \cdot (g^k)^T \cdot (\lambda^*-\lambda^{k}) + (s^k)^2 \cdot \|g^k\|^2. \label{eq7a}
\end{flalign}
Owing to the concavity of the dual function,
\begin{flalign}
& q(\lambda^{*})-q(\lambda^k) \leq (g^k)^T \cdot (\lambda^* - \lambda^k). \label{eq7b}
\end{flalign}
Therefore, \eqref{eq7a} becomes:
\begin{flalign}
& \|\lambda^*-\lambda^{k+1}\|^2 \leq \|\lambda^*-\lambda^{k}\|^2 - 2 \cdot s^k \cdot (q(\lambda^{*})-q(\lambda^k)) + (s^k)^2 \cdot \|g^k\|^2. \label{eq7c}
\end{flalign}
From \eqref{eq7}, it follows that
\begin{flalign}
& s^k \cdot \|g^k\|^2 < 2 \cdot (q(\lambda^{*}) - q(\lambda^k)). \label{eq7d}
\end{flalign}
Therefore, \eqref{eq7c} simplifies to
\begin{flalign}
& \|\lambda^*-\lambda^{k+1}\|^2 < \|\lambda^*-\lambda^{k}\|^2. \label{eq7e}
\end{flalign}
Within \eqref{eq7}-\eqref{eq7e} and thereafter in the paper, the standard Euclidean norm will be used (unless specified otherwise).
Polyak's stepsizing \eqref{eq7} can be regarded as a creative workaround of D1(a) in the sense that a more computationally difficult problem of obtaining ascending directions at every iteration (as in the Bundle method) to ensure convergence is replaced with a provably easier problem of reducing $\|\lambda^*-\lambda^{k}\|$ at every step to guarantee convergence.
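For illustration, both stepsizing rules can be sketched as follows (a minimal sketch; Polyak's rule assumes that $q(\lambda^*)$ is available, which, as discussed below, is rarely the case in practice):
\begin{verbatim}
import numpy as np

def nonsummable_step(k):
    """A stepsize satisfying (6): positive, vanishing, non-summable."""
    return 1.0 / (k + 1)

def polyak_step(q_star, q_k, g, gamma=1.5):
    """Polyak's rule (7) with 0 < gamma < 2; q_star is the optimal
    dual value q(lambda*), assumed known for this sketch."""
    return gamma * (q_star - q_k) / np.dot(g, g)
\end{verbatim}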
Through the decades following 1969, two distinct research directions, along the lines of the ``non-summable'' \eqref{eq6} and ``Polyak'' \eqref{eq7} stepsizes, continued, with various research groups adhering to one or the other. Both directions
guarantee convergence to $\lambda^*$ that maximizes the dual function $q(\lambda)$ (thereby resolving D1(b)), although, up to this stage in the discussion, convergence by using Polyak's stepsizing is purely theoretical, since optimal dual value $q(\lambda^*)$ required within \eqref{eq7} is unknown. The Subgradient-Level Method was developed to achieve convergence in practice.
\subsection{
The Subgradient-Level Method} \label{sec22} The Subgradient-Level Method \citep{goffin1999convergence} overcame the difficulties associated with the unavailability of the optimal [dual] value, which is needed to compute Polyak's step-size \eqref{eq7}, by adaptively adjusting a level estimate based on the detection of sufficient ascent (descent, in the minimization setting of the original paper) and of oscillations of the [dual] solutions.
In terms of the problem \eqref{eq3}, the procedure of the method is explained as follows: the ``level'' estimate $q^{k}_{lev} = q^{k_j}_{rec} + \delta_j$ is used in place of the optimal dual value $q(\lambda^{*})$, where $q^{k}_{rec}$ is the best (highest) dual value (``record objective value'') obtained up to an iteration $k,$ and $\delta_j$ is an adjustable parameter with $j$ denoting the $j^{th}$ update of $q^{k}_{lev}.$ The main premise behind this is that when $\delta_j$ is ``too large,'' the multipliers will exhibit oscillations while traveling a significant (predefined) distance $R$ without improving the ``record'' value. In this case, the parameter $\delta_j$ is updated as $\delta_{j+1} = \beta \cdot \delta_j$ with $\beta = \frac{1}{2}.$ On the other hand, if $\delta_j$ is such that the dual value is sufficiently increased, $q(\lambda^k) \geq q^{k}_{rec} + \tau \cdot \delta_j$ with $\tau = \frac{1}{2},$ then the parameter $\delta_j$ is kept unchanged and the distance traveled by the multipliers is reset to 0 to avoid a premature reduction of $\delta_j$ by $\beta$ in future iterations.
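The adaptive adjustment of $\delta_j$ can be sketched as follows (a simplified rendition of the bookkeeping described above; the function and variable names are illustrative):
\begin{verbatim}
def level_update(q_k, q_rec, delta, path_len, lam_move, R,
                 beta=0.5, tau=0.5):
    """One bookkeeping step of the Subgradient-Level Method (sketch).
    lam_move is the distance ||lam^{k+1} - lam^k|| just traveled."""
    if q_k >= q_rec + tau * delta:    # sufficient ascent: keep delta
        path_len = 0.0                # reset the traveled distance
    elif path_len + lam_move > R:     # oscillation without improvement
        delta *= beta                 # delta was "too large"
        path_len = 0.0
    else:
        path_len += lam_move
    q_rec = max(q_rec, q_k)           # update the record value
    q_lev = q_rec + delta             # used in place of q(lambda*)
    return q_rec, delta, path_len, q_lev
\end{verbatim}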
Further discussion of the implementation of Polyak's stepsizing to resolve D1 is deferred to later subsections, after the resolutions of D2 and D3 are examined.
\subsection{Fundamental Difficulties of Subgradient Methods} \label{sec23}
\noindent \textbf{High Computational Effort (D2).} In the methods reviewed thus far, non-smooth functions
are assumed to be given.
However, a dual function $q(\lambda)$ cannot be obtained
computationally efficiently.
In fact, even for a given value of multipliers $\lambda^k$, minimization within \eqref{eq4} to obtain a dual value $q(\lambda^k)$ and the corresponding subgradient is time-consuming. Even then, only one possible value of the subgradient can generally be obtained; a complete description of the subgradient is generally non-attainable \citep{goffin1977convergence}.
\noindent \textbf{Zigzagging of Multipliers (D3).} As hypothesized by \cite{goffin1977convergence}, the slow convergence of subgradient methods is due to ill-conditioning. The condition number $\mu$ is formally defined as \citep{goffin1977convergence}: $\mu = \inf\{\mu(\lambda): \lambda \in \mathbb{R}^m \setminus P\},$ where $P = \{\lambda \in \mathbb{R}^m: q(\lambda) = q(\lambda^{*})\}$ (the set of optimal solutions) and $\mu(\lambda) = \min_u {\frac{u^T \cdot (\lambda^{*}-\lambda)}{\|u\| \cdot \|\lambda^{*}-\lambda\|}}$ (the cosine of the angle that a subgradient forms with the direction toward the optimal multipliers).\footnote{The condition number for the dual function illustrated in Figure \ref{fig_ex1} is 0 since the subgradient emanating from point B forms a right angle with the direction toward optimal multipliers.} It was then confirmed experimentally when solving, for example, scheduling \cite[Fig. 3(b), p. 104]{czerwinski1994scheduling} as well as power systems problems \cite[Fig. 4, p. 774]{guan1995nonlinear} that the ill-conditioning leads to the zigzagging of multipliers across the ridges of the dual function.
To address these two difficulties, the notions of ``surrogate,'' ``interleaved,'' and ``incremental'' subgradients, which do not require relaxed problems to be fully optimized and thereby speed up convergence, emerged in the late 1990s and early 2000s, as reviewed next.
\subsection{The Interleaved Subgradient and the Surrogate Subgradient Methods} \label{sec24} Within the Interleaved Subgradient method proposed by \cite{kaskavelis1998efficient}, multipliers are updated after solving one subproblem at a time
\begin{flalign}
& \min_{(x_i,y_i)} \!\left\{(c_i^x)^T x_i \!+\! (c_i^y)^T y_i\! +\! \lambda^T \cdot \big(A_i^x x_i\! +\! A_i^y y_i\big), \left\{x_i,y_i\right\}\! \in\! \mathcal{F}_i\right\},
\label{eq8}
\end{flalign}
rather than solving all the subproblems as in subgradient methods. This significantly reduces computational effort, especially for problems with a large number of subsystems.
The more general Surrogate Subgradient method with proven convergence was then developed by \cite{zhao1999surrogate}, whereby the exact optimality of the relaxed problem (or even of subproblems) is not required. As long as the following ``surrogate optimality condition''
\begin{flalign}
& L(\tilde{x}^k,\tilde{y}^k,\lambda^k) < L(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k) \label{eq14}
\end{flalign}
is satisfied, the multipliers are updated as
\begin{flalign}
& \lambda^{k+1} = \lambda^k + s^k \cdot g(\tilde{x}^k,\tilde{y}^k), \label{eq15}
\end{flalign}
by using the following formula
\begin{flalign}
& 0 < s^k < \gamma \cdot \frac{q(\lambda^{*}) - L(\tilde{x}^k,\tilde{y}^k,\lambda^k)}{\left\|g(\tilde{x}^k,\tilde{y}^k)\right\|^2}, \;\; \gamma < 1. \label{eq16}
\end{flalign}
The convergence to $\lambda^{*}$ is guaranteed \citep{zhao1999surrogate}.
Unlike in Polyak's formula, the parameter $\gamma$ is less than 1 to guarantee that $q(\lambda^{*}) > L(\tilde{x}^k,\tilde{y}^k,\lambda^k)$ so that the step-sizing formula \eqref{eq16} is well-defined in the first place, as proven in \cite[Proposition 3.1, p. 703]{zhao1999surrogate}. Here, the ``tilde'' indicates that the corresponding solutions do not necessarily need to be subproblem-optimal. Solutions $\{\tilde{x}^k,\tilde{y}^k\}$ form a set $\mathcal{S}(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k) \equiv \{(x,y): L(x,y,\lambda^k) < L(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k)\}$. Once a member $\{\tilde{x}^k,\tilde{y}^k\} \in \mathcal{S}(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k)$ is found, i.e., the surrogate optimality condition \eqref{eq14} is satisfied, the optimization of the relaxed problem stops and the multipliers are updated per \eqref{eq15}. The case $\mathcal{S}(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k) = \emptyset$ indicates that for the given $\lambda^k$ no solution better than $\{\tilde{x}^{k-1},\tilde{y}^{k-1}\}$ can be found, i.e., $(\tilde{x}^{k-1},\tilde{y}^{k-1}) = (x^*(\lambda^k),y^*(\lambda^k))$ is subproblem-optimal for the given value $\lambda^k$, and the multipliers are updated by using a subgradient direction.
The convergence proof is quite similar to that presented in \eqref{eq7a}-\eqref{eq7e}. The only caveat is that $L(\tilde{x}^k,\tilde{y}^k,\lambda^k)$ is not a function; unlike the dual function $q(\lambda^k)$, $L(\tilde{x}^k,\tilde{y}^k,\lambda^k)$ can take multiple values for a given $\lambda^k$. Therefore, the analogue of \eqref{eq7b} cannot follow from concavity of $L(\tilde{x}^k,\tilde{y}^k,\lambda^k)$. Instead, it follows from the fact that the surrogate dual value is obtained without fully minimizing the Lagrangian, while $q(\lambda^*)$ is the minimal Lagrangian value at $\lambda^*$, hence
\begin{flalign}
& q(\lambda^*) \leq L(\tilde{x}^k,\tilde{y}^k,\lambda^*).
\end{flalign}
Adding and subtracting $g(\tilde{x}^k,\tilde{y}^k)^T \cdot \lambda^k$ from the previous inequality leads to
\begin{flalign}
& q(\lambda^*) - L(\tilde{x}^k,\tilde{y}^k,\lambda^k) \leq g(\tilde{x}^k,\tilde{y}^k)^T \cdot (\lambda^* - \lambda^k).
\end{flalign}
The procedure described in \eqref{eq7c}-\eqref{eq7e} follows analogously.
In addition to the reduction of computational effort, a concomitant reduction of multiplier zigzagging has also been observed. Indeed, with the exception of the aforementioned situation whereby $\mathcal{S}(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda^k) = \emptyset$, a solution to one subproblem \eqref{eq8} is sufficient to satisfy \eqref{eq14}. In this case, only one term within each summation of the surrogate subgradient $\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b$ is updated, thereby preventing surrogate subgradients from changing drastically and, as a result, preventing the multipliers from zigzagging.
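A sketch of the surrogate multiplier-updating loop follows; \texttt{improve\_one\_subproblem} is a hypothetical oracle that re-optimizes a single subproblem \eqref{eq8} until the surrogate optimality condition \eqref{eq14} is met (solving one subproblem per iteration recovers the interleaved scheme), and $q(\lambda^*)$ is still assumed to be available within \eqref{eq16}:
\begin{verbatim}
import numpy as np

def surrogate_subgradient_method(improve_one_subproblem, lam, sol,
                                 q_star, n_iter=100, gamma=0.9):
    """Sketch of the Surrogate Subgradient method. The hypothetical
    oracle returns (new_sol, L_val, g) with L_val = L(x~,y~,lam) and
    g the surrogate subgradient, once condition (14) is satisfied."""
    for k in range(n_iter):
        sol, L_val, g = improve_one_subproblem(sol, lam)
        s = gamma * (q_star - L_val) / np.dot(g, g)  # stepsize (16)
        lam = lam + s * g                            # update (15)
    return lam, sol
\end{verbatim}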
\subsection{
Incremental Subgradient Methods} \label{sec25}
In the incremental subgradient method, a subproblem $i$ is solved before multipliers are updated (similar to the interleaved method). However, as opposed to updating all multipliers at once, the incremental subgradient method updates multipliers incrementally. After the $i^{th}$ subgradient component is calculated, multipliers are updated as
\begin{flalign}
& \psi^{k}_i = \psi^k_{i-1} + s^k \cdot \left(A_i^x x_i^k + A_i^y y_i^k - \beta_i \right). \label{eq17}
\end{flalign}
Here $\beta_i$ are vectors such that $\sum_{i=1}^I \beta_i = b$, for example, $\beta_i = \frac{b}{I},$ and $\psi^k_0 = \lambda^k.$ Only after all $I$ subproblems are solved are the multipliers ``fully'' updated as
\begin{flalign}
& \lambda^{k+1} = \psi^{k}_I. \label{eq18}
\end{flalign}
Convergence results of the subgradient-level method \citep{goffin1999convergence} have been extended to the incremental subgradient method \citep{nedic2001incremental}. Variations of the method were proposed with $\beta$ and $\tau$ belonging to the interval $[0,1]$ rather than being equal to $\frac{1}{2}.$ Moreover, to improve convergence, rather than using a constant $R,$ a sequence of $R_l$ was proposed such that $\sum_{l=1}^{\infty} R_l = \infty.$
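A minimal sketch of one outer iteration of \eqref{eq17}-\eqref{eq18} follows; the per-subsystem violation oracle is a hypothetical placeholder:
\begin{verbatim}
import numpy as np

def incremental_pass(lam, step, subsystem_violation, I):
    """One outer iteration: updates (17) followed by (18).
    `subsystem_violation(i, psi)` returns A_i^x x_i + A_i^y y_i - beta_i
    for subproblem i solved at multipliers psi (hypothetical oracle)."""
    psi = lam.copy()                 # psi^k_0 = lam^k
    for i in range(I):
        psi = psi + step * subsystem_violation(i, psi)  # update (17)
    return psi                       # lam^{k+1} = psi^k_I, per (18)
\end{verbatim}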
\subsection{
The Surrogate Lagrangian Relaxation Method} \label{sec26}
Based on contraction mapping, Surrogate Lagrangian Relaxation (SLR) \citep{bragin2015convergence} overcomes the difficulty associated with the lack of knowledge about the optimal dual value. At consecutive iterations, the distance between multipliers must decrease, i.e.,
\begin{flalign}
& \left\| \lambda^{k+1} - \lambda^k \right\| \leq \alpha_k \cdot \left\| \lambda^{k} - \lambda^{k-1} \right\|, \quad 0 \leq \alpha_k \leq 1. \label{eq19}
\end{flalign}
Based on \eqref{eq15} and \eqref{eq19}, the step-sizing formula has been derived:
\begin{flalign}
& s^k = \alpha_k \cdot \frac{s^{k-1}\left\|g(\tilde{x}^{k-1},\tilde{y}^{k-1})\right\|}{\left\|g(\tilde{x}^{k},\tilde{y}^{k})\right\|}. \label{eq22}
\end{flalign}
Moreover, a specific formula to set $\alpha_k$ has been developed to guarantee convergence:
\begin{flalign}
& \alpha_k = 1-\frac{1}{M \cdot k^{1-\frac{1}{k^r}}}, \; M \geq 1, \; 0
\leq r \leq 1. \label{eq23}
\end{flalign}
Since $\alpha_k \rightarrow 1,$ stepsizes within SLR are ``non-summable.'' Linear convergence can only be guaranteed outside of a neighborhood of $\lambda^*$ \cite[Proposition 2.5, p. 187]{bragin2015convergence}.
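The SLR stepsize update \eqref{eq22}-\eqref{eq23} can be sketched as follows (the values of $M$ and $r$ are illustrative):
\begin{verbatim}
def slr_stepsize(k, s_prev, g_prev_norm, g_norm, M=20, r=0.5):
    """SLR stepsize per (22)-(23) for iteration k >= 1; g_prev_norm
    and g_norm are the surrogate-subgradient norms at iterations
    k-1 and k, respectively."""
    alpha = 1.0 - 1.0 / (M * k ** (1.0 - 1.0 / k ** r))  # factor (23)
    return alpha * s_prev * g_prev_norm / g_norm         # stepsize (22)
\end{verbatim}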
When multipliers are close to their optimal values\footnote{A measure to quantify the quality of multipliers (i.e., how close the multipliers are to their optimal values) will be discussed in subsection \ref{sec28}.} and subproblems are ``sufficiently coordinated,'' solutions to the relaxed problem are close to feasible solutions. As a result, only a few subproblems cause infeasibility. This leads to the resolution of Difficulty D4 (``solutions to the relaxed problem, when put together, may not constitute a feasible solution to the original problem''). An ``automatic'' procedure to identify and ``repair'' the few subproblem solutions that cause the infeasibility of the original problem has been developed by \cite{bragin2018scalable}.
\subsection{
Further Methodological Advancements} \label{sec27}
\textbf{Surrogate Absolute-Value Lagrangian Relaxation \citep{bragin2018scalable}.} The Surrogate Absolute-Value Lagrangian Relaxation (SAVLR) method is designed to guarantee convergence and to speed up the reduction of constraint violations while avoiding nonlinearity and nonconvexity that would have occurred if traditional quadratic terms had been used. In the SAVLR method, the following dual problem is considered:
\begin{flalign}
& \max_{\lambda} \{q_{\rho}(\lambda): \lambda \in \Omega \subset \mathbb{R}^m\}, \label{eq24}
\end{flalign}
where
\begin{flalign}
& q_{\rho}(\lambda) = \min_{(x,y)} \Bigg\{\sum_{i=1}^I \left((c_i^x)^T x_i + (c_i^y)^T y_i\right) + \nonumber \lambda^T \cdot \left(\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b\right) + \\ & \rho \cdot \Bigg\|\sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b\Bigg\|_1, \left\{x_i,y_i\right\} \in \mathcal{F}_i, i = 1,\dots,I \Bigg\}. \label{eq25}
\end{flalign}
\noindent The above minimization involves exactly-linearizable piece-wise linear penalties, which penalize constraint violations, thereby ultimately reducing the number of subproblems that cause infeasibility as mentioned in subsection \ref{sec26} and consequently reducing the effort required by heuristics to find primal solutions. This resolves Difficulty D5.
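For concreteness, the $\ell_1$ penalty in \eqref{eq25} admits the standard exact linearization: with $v \equiv \sum_{i=1}^I A_i^x x_i + \sum_{i=1}^I A_i^y y_i - b$ and auxiliary decision variables $z$, the term $\rho \cdot \|v\|_1$ can be replaced by
\begin{flalign}
& \rho \cdot \mathbf{1}^T z \quad \text{subject to} \quad -z \leq v \leq z, \nonumber
\end{flalign}
so that the relaxed problem remains a MILP, enabling the use of off-the-shelf MILP solvers.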
\noindent \textbf{Distributed and Asynchronous Surrogate Lagrangian Relaxation (DA-SLR) \citep{bragin2020distributed}.} With the emergence of technologies supporting distributed computational capabilities of multiple distributed entities as well as communication enabled by the Internet of Things, computational tasks can be accomplished much more efficiently by using distributed computing resources than by using a single computer. With the assumption of a single coordinator, the DA-SLR methodology has been developed to efficiently coordinate distributed subsystems in an asynchronous manner while avoiding the overhead of synchronization. Compared to the sequential Surrogate Lagrangian Relaxation \citep{bragin2015convergence}, numerical testing shows a faster convergence (12 times speed-up to achieve a gap of 0.03\% for one instance of the generalized assignment problem).
A short summary is in order here.
While in theory Polyak's formula offers a geometric rate of convergence, the convergence rate of the Subgradient-Level Method is not discussed either in the original paper by \cite{goffin1999convergence} or in subsequent applications of the subgradient ``level-based'' ideas (e.g., by \cite{nedic2001incremental}). Likely because of the requirement that the multipliers travel, either explicitly or implicitly, an infinite distance, the geometric/linear rate of convergence cannot be achieved. While SLR-based methods avoid estimating the optimal dual value, the requirement \eqref{eq23} imposed to avoid premature termination results in the [stepsize] non-summability.
Ideally, the goal is to avoid multiplier zigzagging, to reduce the computational effort required to obtain multiplier-updating directions, and to achieve linear convergence. The first step in this direction is explained in the following subsection.
\subsection{
Surrogate Level-Based Lagrangian Relaxation} \label{sec28}
To exploit the linear convergence potential inherent to Polyak's stepsizing formula, the Surrogate ``Level-Based'' Lagrangian Relaxation (SLBLR) method has been recently developed \citep{bragin2022surrogate}. It was proven that once the following feasibility problem
\begin{flalign}
&\|\lambda-\lambda^{k_j+1}\| \leq \|\lambda-\lambda^{k_j}\|, \nonumber \\
&\|\lambda-\lambda^{k_j+2}\| \leq \|\lambda-\lambda^{k_j+1}\|, \label{eq26} \\
&\quad\vdots \nonumber \\
&\|\lambda-\lambda^{k_j+n_j}\| \leq \|\lambda-\lambda^{k_j+n_j-1}\| \nonumber
\end{flalign}
admits no feasible solution with respect to $\lambda$ (the decision variables in the problem above) for some $k_j$ and $n_j$, then the following ``level value'' overestimates the optimal dual value:
\begin{flalign}
& \overline{q}_{j} = \max_{\kappa \in [k_j,k_j+n_j]} \overline{q}_{\kappa,j} > q(\lambda^{*}), \label{eq30}
\end{flalign}
where
\begin{flalign}
& \overline{q}_{\kappa,j} = \frac{1}{\gamma} \cdot s^\kappa \cdot \|g(\tilde{x}^\kappa,\tilde{y}^\kappa)\|^2 + L(\tilde{x}^\kappa,\tilde{y}^\kappa,\lambda^\kappa). \label{eq30aa}
\end{flalign}
In subsequent iterations, Polyak's stepsizing formula is used
\begin{flalign}
& s^k = \zeta \cdot \gamma \cdot \frac{\overline{q}_{j} - L(\tilde{x}^k,\tilde{y}^k,\lambda^k)}{\|g(\tilde{x}^k,\tilde{y}^k)\|^2}, \zeta < 1, k = k_{j+1},\dots,k_{j+1}+n_{j+1}-1. \label{eq32}
\end{flalign}
In essence, the above formula is used until the feasibility problem \eqref{eq26} admits no solution again, at which point the level value is reset from $\overline{q}_{j}$ to $\overline{q}_{j+1}$ and the multiplier-updating process continues. It is worth noting that the optimization problem \eqref{eq3} is a maximization problem, and the quality of its solutions (Lagrangian multipliers) can be quantified through the upper bound provided by $\{\overline{q}_{j}\}$.
The assumption here is that the above feasibility problem \eqref{eq26} becomes infeasible ``periodically'' and ``infinitely often'' thereby triggering recalculations of ``level'' values $\overline{q}_{j}$, which will decrease and approach $q(\lambda^*)$ from above. The assumption is realistic since the only sure-fire way to ensure that \eqref{eq26} is always feasible is to know the optimal dual value.
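Operationally, checking whether \eqref{eq26} admits a solution is inexpensive: each condition $\|\lambda - \lambda^{\kappa+1}\| \leq \|\lambda - \lambda^{\kappa}\|$ becomes linear in $\lambda$ after squaring and expanding the norms. A sketch of the resulting linear-programming feasibility test follows (the detection logic mirrors \eqref{eq26}; the solver call uses standard \texttt{scipy}):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def eq26_is_infeasible(lam_hist):
    """Test whether system (26), built from the multiplier iterates
    lam_hist = [lam^{k_j}, ..., lam^{k_j+n_j}], admits no solution.
    ||lam - a|| <= ||lam - b|| expands to
    2*(b - a)^T lam <= ||b||^2 - ||a||^2."""
    A_ub, b_ub = [], []
    for b_vec, a_vec in zip(lam_hist[:-1], lam_hist[1:]):
        A_ub.append(2.0 * (b_vec - a_vec))
        b_ub.append(np.dot(b_vec, b_vec) - np.dot(a_vec, a_vec))
    n = len(lam_hist[0])
    res = linprog(c=np.zeros(n), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), bounds=[(None, None)] * n)
    return res.status == 2  # scipy status 2: problem is infeasible
\end{verbatim}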
Given the above and with the addition of ``absolute-value'' penalties to facilitate the feasible solution search, the SLBLR method addresses all the difficulties D1-D5. The method inherits features from Polyak's formula \eqref{eq7}, reduction of computational effort as well as the alleviation of zigzagging from surrogate methods \citep{zhao1999surrogate, bragin2015convergence} and the acceleration of reduction of the constraint violation from \citep{bragin2018scalable}. The decisive advantage of SLBLR is provided by the practically operationalizable and efficient decision-based procedure described above to determine ``level'' values without the need for estimation or heuristic adjustment of estimates of the optimal dual value. Results reported in \citep{bragin2022surrogate} indicate that the SLBLR method solves a wider range of generalized assignment problems (GAPs) to optimality as compared to other methods. With other things being equal, the ``level-based'' stepsizing of the SLBLR method \citep{bragin2022surrogate} is more advantageous as compared to the ``non-summable'' stepsizing of the SAVLR method \citep{bragin2018scalable}. Additionally, SLBLR successfully solves other problems such as stochastic job-shop scheduling and pharmaceutical scheduling outperforming commercial solvers by at least two orders of magnitude.
The SLBLR method \citep{bragin2022surrogate} is not restricted to MILP problems since linearity is not required for the above-mentioned determination of level values. The method is modular and has the potential to support plug-and-play capabilities. For example, while applications to pharmaceutical scheduling have been tested by using fixed data \citep{bragin2022surrogate}, the method is also suitable to handle urgent requests to manufacture new pharmaceutical products since such products can be introduced into the relaxed problem ``on the fly'' and the corresponding subproblems can keep being coordinated through Lagrangian multipliers without the major intervention of the scheduler and without disrupting the overall production schedule.
\section{Conclusions and Future Directions} \label{Conclusion}
This paper intends to summarize the key difficulties encountered on the path to efficiently solving MILP problems as well as to provide a brief summary of important milestones of a more than half-a-century-long research journey to address these difficulties by using Lagrangian Relaxation.
Moreover, the most recent SLBLR method is (1) \textit{general}: it has the potential to handle general MIP problems since linearity is not required to exploit the geometric rate of convergence; (2) \textit{flexible and modular}: it has the potential to support plug-and-play capabilities with real-time responses to unpredictable and/or disruptive events such as natural hazards, operational faults, and cyber-physical events, including generator outages in power systems, receiving an urgent order in manufacturing, or encountering a traffic jam in transportation; with communication and distributed computing capabilities, these events can be handled by a continuous update of Lagrangian multipliers, improving system resilience, and the method is thus also suitable for fast re-optimization; (3) \textit{compatible} with other optimization methods such as quantum computing as well as machine learning, which can potentially be used to further improve subproblem solving, thereby drastically reducing the CPU time, supported by the fast coordination aspect of the method.
\section*{Acknowledgements}
This work was supported in part by the U.S. National Science Foundation under Grant ECCS-1810108.
\bibliographystyle{chicago}
\spacingset{1}
|
2,877,628,091,173 | arxiv | \section{Introduction}
\paragraph{Games over hybrid systems.}
Hybrid systems are finite-state machines equipped with a continuous
dynamics. In the last thirty years, formal verification of such
systems has become a very active field of research in computer
science, with numerous success stories. In this context, hybrid
automata, an extension of timed automata~\cite{AD90,AD94}, have been
intensively studied~\cite{henzinger95,henzinger96}, and decidable
subclasses of hybrid systems have been drawn like initialized
rectangular hybrid automata~\cite{henzinger96}. More recently, games
over hybrid systems have appeared as a new interesting and active
field of research since, among others, they correspond to a
formulation of control problems, the counterpart of model checking for
open systems, \textit{i.e.}, systems embedded in a possibly reactive
environment. In this context, many results have already been
obtained, like the (un)decidability of control problems for hybrid
automata~\cite{HHM99}, or (semi-)algorithms for solving such
problems~\cite{AHM01}. Given a system $S$ (with controllable and
uncontrollable actions) and a property $\varphi$, controlling the
system means building another system $C$ (which can only enforce
controllable actions), called the controller, such that $S \parallel
C$ (the system $S$ guided by the controller $C$) satisfies the
property $\varphi$. In our context, the property is a reachability
property and our aim is to build a controller enforcing a given
location of the system, whatever the environment does (which plays
with the uncontrollable actions).
\paragraph{O-minimal hybrid systems.}
O-minimal hybrid systems have been first proposed in~\cite{LPS00} as
an interesting class of systems (see~\cite{vandendries98} for an
overview of properties of o-minimal structures). They have very rich
continuous dynamics, but limited discrete steps (at each discrete
step, all variables have to be reset, independently from their initial
values). This makes it possible to decouple the continuous and discrete
components of the hybrid system (see \cite{LPS00}). Thus, properties
of a global o-minimal system can be deduced directly from properties
of the continuous parts of the system. Since the introductory
paper~\cite{LPS00}, several works have considered o-minimal hybrid
systems~\cite{davoren99,BMRT04,BM05,KV04,KV06}, mostly focusing on
abstractions of such systems, on reachability properties, and on
bisimulation properties.
\paragraph{Word encoding.}
In~\cite{BMRT04}, an encoding of trajectories with words has been
proposed in order to prove the existence of finite bisimulations for
o-minimal hybrid systems (see also~\cite{BM05}). Let us mention that
this technique has been used in \cite{KV04,KV06} in order to provide
an exponential bound on the size of the finite bisimulation in the
case of pfaffian hybrid systems. Let us also notice that similar
techniques already appeared in the literature, see for instance the
notion of signature in~\cite{ASY01}. Different word encoding
techniques have been studied in a wider context
in~\cite{brihaye05}. Recently in~\cite{KRS07}, the authors propose a
new algorithm for counter-example guided abstraction and refinement on
hybrid systems, based on a word encoding approach. In this paper
we use the so-called \emph{suffix encoding}, which was shown to be in
general too fine to provide the coarsest time-abstract bisimulation.
However, based on this encoding, a semi-algorithm has been proposed
in~\cite{brihaye05,brihaye06} for computing a time-abstract
bisimulation, and it terminates in the case of o-minimal hybrid
systems.
\paragraph{Contributions of this paper.}
In this paper, we focus on games over hybrid systems. We describe two
rather natural frameworks for such games, one assuming a perfect
observation of the dynamics of the system, and another one assuming a
partial observation of the dynamics. For the first framework, we use
the above-mentioned suffix word encoding of trajectories for giving
sufficient computability conditions for the winning states of a
game. Time-abstract bisimulation is an equivalence relation which is
correct with respect to reachability properties on hybrid
systems~\cite{AHLP00} and with respect to control reachability
properties on timed automata~\cite{AMPS98}. Here, we show that the
time-abstract bisimulation is not correct anymore for solving control
problems on a general class of hybrid systems: we exhibit a system in
which two states are time-abstract bisimilar, but one of the states is
winning and the other is not. Using the suffix encoding of
trajectories of~\cite{brihaye05}, we prove that, in the perfect
observation framework, two states having the same suffixes are
equivalently winning or losing (this is a stronger condition than the
one for the time-abstract bisimulation). We then focus on o-minimal
hybrid games and prove that, under the assumption that the theory of
the underlying o-minimal structure is decidable, the control problem
can be solved and that winning states and winning strategies can be
computed. Regarding the partial observation framework, we provide a
new encoding technique, the so-called superword encoding, which turns
out to be sound for the control under partial observation of the
dynamics, and which allows to prove decidability and computability
results similar to those in the perfect observation framework.
\paragraph{Related work.}
The most relevant related works are those dealing with hybrid
games~\cite{HHM99,AHM01}. However, the framework of these papers is
pretty different from ours:
\begin{enumerate}
\item
In their framework, time is considered as a
discrete action, and once action ``let time elapse'' has been chosen,
it is not possible to bound the time elapsing, which is quite
restrictive. For instance, the timed game of Figure~\ref{fig:bete} is
winning from $(\ell_0,x=0)$ in our framework (the strategy is to wait
some amount of time $t \in [2,5]$ and to take the controllable action
$c$), whereas it is not winning in their framework (once $x$ is above
$5$, it is no longer possible to take the transition and reach the
winning location $\ell_1$, and there is no way to impose a delay
within $[2,5]$). This yields significant differences in the
properties: in their framework, game bisimulation is one of the tools
for solving the games, and as stated by~\cite[Prop.~1]{HHM99}, the
classical bisimulation tool is then sufficient to solve games. On the
contrary, in our framework, the notion of bisimulation relevant to our
model (time-abstract bisimulation) is not correct for solving games,
as will be explored in this paper.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) node [draw,circle,inner sep=1.5pt] (A) {$\ell_0$};
\draw (3,0) node [draw,circle,inner sep=1.5pt,fill=black!40!white] (B) {$\ell_1$};
\draw [latex'-] (A) -- +(-.8,0);
\draw [-latex'] (A) -- (B) node [midway,above] {$2 \le x \le 5,\ c$};
\end{tikzpicture}
\caption{A simple game}
\label{fig:bete}
\end{figure}
\item Our games are control games, they are thus asymmetric, which is
not the case of the games in the above-mentioned works; in our
framework, the environment is more powerful than the controller in
that it can outstrip the controller and do an action right before
the controller decides to do a controllable action.
\end{enumerate}
Let us also mention the paper~\cite{WT97} on control of linear hybrid
automata. In~\cite{WT97} the author proposes a semidecision procedure
for synthesizing controllers for such automata; no general
decidability result is given in that paper.
\paragraph{Plan of the paper.}
In Section~\ref{untimed}, we recall results about finite games and
bisimulation. In Section~\ref{timed}, we define the games over
dynamical systems (for both perfect information and partial
observation), and we show that time-abstract bisimulation is not
correct for solving them. The word encoding techniques are presented
in Section~\ref{section2} and used in Section~\ref{mgame} to present a
general framework for solving games over dynamical systems. We apply
and extend these results in Section~\ref{omingames} for computing
winning states and winning strategies in o-minimal games. In the
paper, we often only develop technical details of the partial
observation framework, which actually extends the perfect observation
framework.
\bigskip Part of the results presented in this paper have been
published in~\cite{BBC06} (the decidability of the control
reachability problem and the synthesis of strategies for o-minimal
hybrid systems). In this paper, we give full proofs of those results,
and extend them to a natural partial observation framework.
\section{Classical Finite Games}\label{untimed}
In this section, we recall some basic definitions and results
concerning bisimulations on a transition system (see
\cite{aczel88,milner89,caucal95,henzinger95} for general references)
and classical (untimed) games.
\subsection{Classical Games}
We present here the definitions of the problem of control on a finite
graph (also called finite game) and the notion of strategy
(see~\cite{lncs2500} for an overview on games). These definitions are
classical and will be extended to real-time systems in the next
section.
\begin{defi}
A \emph{finite automaton} is a tuple $\A=(Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta)$
where $Q$ is a finite set of locations, $\text{\sffamily\upshape{Goal}} \subseteq Q$ is a
subset of winning locations, $\Sigma$ is a finite set of actions,
and $\delta$ consists of a finite number of transitions $(q,a,q')
\in Q \times \Sigma \times Q$.
\end{defi}
\begin{defi} \label{trans syst def} A \emph{transition system}
$T=(Q,\Sigma,{\to})$ consists of a set of states $Q$ (which may be
uncountable), $\Sigma$ an alphabet of events, and ${\to} \subseteq Q
\times \Sigma \times Q$ a transition relation.
\end{defi}
A transition $(q_1,a,q_2) \in {\to}$ is also denoted by $q_1
\xrightarrow{a} q_2$. A transition system is said finite if $Q$ is
finite. Note that a finite automaton canonically defines a transition
system $T_{\A}$.
\medskip A {\em run} of $\A$ is a finite or infinite sequence $q_0
\xrightarrow{a_1} q_1 \xrightarrow{a_2} \ldots$ of the transition
system $T_{\A}$. Such a run is said {\em winning} if $q_i \in \text{\sffamily\upshape{Goal}}$
for some $i$. If $\rho$ is a finite run $q_0 \xrightarrow{a_1} q_1
\xrightarrow{a_2} \ldots \xrightarrow{a_n} q_n$ we define
$\last(\rho)=q_n$. We note $\Runsf{\A}$ the set of finite runs in
$\A$.
\begin{defi}
A {\em finite game} is a finite automaton $(Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta)$
where $\Sigma$ is partitioned into two subsets $\Sigma_c$ and
$\Sigma_u$ corresponding to controllable and uncontrollable actions.
\end{defi}
We will consider \emph{control games}. Informally there are two
players in such a game: the \emph{controller} and the
\emph{environment}. The actions of $\Sigma_c$ belong to the controller
and the actions of $\Sigma_u$ belong to the environment. At each step,
the controller proposes a controllable action which corresponds to the
action he wants to perform; then either this action or an
uncontrollable action is done and the automaton goes into one of the
next states\footnote{There may be several next states as the game is
not supposed to be deterministic, and we assume that the environment
chooses the next state in case there are several.}.
In the sequel, we will only consider reachability games: the
controller wants to reach the $\text{\sffamily\upshape{Goal}}$ states and the environment wants
to prevent him from doing so.
\begin{defi}
A {\em strategy} is a partial function $\lambda$ from $\Runsf{\A}$
to $\Sigma_c$ such that for all runs $\rho \in \Runsf{\A}$, if
$\lambda(\rho)$ is defined, then it is enabled in $\last(\rho)$.
\end{defi}
Let $\rho = q_0 \xrightarrow{a_1} q_1 \xrightarrow{a_2} \ldots$ be a
run, and set for every $i$, $\rho_i$ the prefix of length $i$ of
$\rho$. The run $\rho$ is said \emph{compatible with a strategy
$\lambda$} when for all $i$, $a_{i+1}=\lambda(\rho_i)$ or
$a_{i+1}\in \Sigma_u$. A run $\rho$ is said \emph{maximal w.r.t. a
strategy $\lambda$} if it is infinite or if $\lambda(\rho)$ is not
defined.
A strategy $\lambda$ is \emph{winning from a state q} if all maximal
runs starting in $q$ compatible with $\lambda$ are winning.
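For finite games, the set of states from which a winning strategy exists can be computed by the classical attractor fixpoint. A minimal sketch in Python follows, under one natural reading of the semantics above: a state is winning if it belongs to $\text{\sffamily\upshape{Goal}}$, or if some enabled controllable action leads only to winning states while every enabled uncontrollable transition also stays within the winning set (the data structures are illustrative):
\begin{verbatim}
def winning_states(Q, goal, trans, sigma_c, sigma_u):
    """Attractor fixpoint for a finite reachability game (sketch).
    trans maps (q, a) to the set of successor states; the environment
    may substitute any enabled uncontrollable action and resolves
    nondeterminism among successors."""
    W = set(goal)
    changed = True
    while changed:
        changed = False
        for q in Q:
            if q in W:
                continue
            u_safe = all(trans.get((q, u), set()) <= W
                         for u in sigma_u)
            c_good = any(trans.get((q, c)) and trans[(q, c)] <= W
                         for c in sigma_c)
            if u_safe and c_good:
                W.add(q)
                changed = True
    return W
\end{verbatim}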
\subsection{Bisimulation}
We recall now the definition of bisimulation for transition systems:
\begin{defi}[\cite{milner89,caucal95}]
Given a transition system $T=(Q,\Sigma,{\to})$, a {\em bisimulation
for $T$} is an equivalence relation $\mathord\sim \subseteq Q
\times Q$ such that $\forall q_1,q'_1,q_2 \in Q$, $\forall a \in
\Sigma$,
\[
\left( q_1 \sim q'_1\ \text{and}\ q_1 \xrightarrow{a} q_2 \right) \Rightarrow \left( \exists q'_2,\ q_2 \sim q'_2\ \text{and}\ q'_1 \xrightarrow{a} q'_2 \right)
\]
Moreover, if $\P$ is a partition of $Q$ and if $\sim$ respects $\P$
(\textit{i.e.}, $q \in P$ and $q \sim q'$ with $P \in \P$ implies $q'
\in P$), we say that $\sim$ is \emph{compatible} with $\P$.
\end{defi}
\subsection{Game and Bisimulation in the Untimed Case}
In the untimed framework, bisimulation is a commonly used technique to
abstract games: bisimilar states can be identified in the control
problem. This is stated in the next folklore theorem, for which we
provide a proof.
\begin{thm}
Let $\A=(Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta)$ be a finite game, $q,q' \in Q$ and
$\mathord\sim$ a bisimulation compatible with $\text{\sffamily\upshape{Goal}}$. Then, there
is a winning strategy from $q$ iff there is a winning strategy from
$q'$.
\end{thm}
\proof
Assume that $\sim$ is a bisimulation relation compatible with
$\text{\sffamily\upshape{Goal}}$ and such that $q \mathrel\sim q'$. Assume furthermore that
$\lambda$ is a winning strategy from $q$. We will define a strategy
$\lambda'$ that will be winning from $q'$. To do that we will map
finite runs starting in $q'$ to finite runs starting in $q$, so that
$\lambda'$ will mimick $\lambda$ through this mapping. We note $f$
this mapping, and start by setting $f(q') = q$. We then proceed
inductively as follows. If $\lambda(f(\varrho'))$ is defined, we
set $\lambda'(\varrho') = \lambda(f(\varrho'))$ and for every run
$\varrho' \xrightarrow{\lambda'(\varrho'))} \widetilde{q}'$ (which
is compatible with $\lambda'$) there is a run $f(\varrho')
\xrightarrow{\lambda(\varrho)} \widetilde{q}$ which is compatible
with $\lambda$ and such that $\widetilde{q} \mathrel\sim
\widetilde{q}'$. We then define $f(\varrho'
\xrightarrow{\lambda'(\varrho')} \widetilde{q}') = f(\varrho')
\xrightarrow{\lambda(\varrho)} \widetilde{q}$. The strategy
$\lambda'$ is winning from $q'$ since $\mathord\sim$ is compatible
with $\text{\sffamily\upshape{Goal}}$. \qed
This theorem remains true for infinite-state discrete
games~\cite{HHM99,AHM01} and can be used to solve them: if an
infinite-state game has a bisimulation of finite index, the control
problem can be reduced to a control problem over a finite graph.
Real-time control problems cannot be seen as classical infinite-state
games because of the special nature of the time-elapsing action,
which does not belong to one of the players. It seems nevertheless
natural to try to adapt the bisimulation approach to solve real-time
control problems.
\section{Games over Dynamical Systems}\label{timed}
\subsection{Dynamical Systems}\label{dynamics}
Let $\M$ be a structure. When we say that some relation, subset or
function is \emph{definable}, we mean it is first-order definable in
the structure $\M$. A general reference for first-order logic is
\cite{hodges97}. We denote by $\text{{\sffamily\upshape{Th}}}(\M)$ the theory of $\M$. In this
paper we only consider structures $\M$ that are expansions of ordered
groups; we also assume that the structure $\M$ contains two constant
symbols, {\it i.e.}, $\M = \langle M,+,0,1,<,\ldots \rangle$ where
$+$ is the group operation and w.l.o.g. we assume that $0<1$.
\begin{defi}\label{defi ds}
A \emph{dynamical system} is a pair $(\M,\gamma)$ where:
\begin{itemize}
\item $\M = \langle M, +,0,1,<,\ldots \rangle$ is an expansion of an
ordered group,
\item $\gamma: \mkun \times V \to \mkdeux$ is a function definable
in $\M$
(where $V_1 \subseteq M^{k_1}$, $V \subseteq M$ and $V_2 \subseteq
M^{k_2}$).\footnote{We use these notations in the rest of the
paper.}
\end{itemize}
The function $\gamma$ is called the \textit{dynamics} of the system.
\end{defi}
Classically, when $M$ is the field of the reals, we see $V$ as the
time, $V_1$ as the input space, $V_1 \times V$ as the space-time and
$V_2$ as the (output) space. We keep this terminology in the more
general context of a structure $\M$.
\medskip The definition of \emph{dynamical system} encompasses a lot
of different behaviors. Let us first give a simple example, several
others will be presented later.
\begin{exa} \label{ex:AT} We can recover the continuous dynamics
of \emph{timed automata} (see \cite{AD94}). In this case, we have
that $\M = \langle\IR,<,+,0,1 \rangle$ and the dynamics $\gamma:
\IR^n \times [0,+\infty[ \to \IR^n$ is defined by
$\gamma(x_1,\ldots,x_n,t) = (x_1+t,\ldots,x_n+t)$.
\end{exa}
\begin{defi}\label{trajectory}
If we fix a point $x \in \mkun$, the set $\Gamma_x=\{\gamma(x,t)
\mid t \in M^+\} \subseteq \mkdeux$ is called the {\em trajectory}
determined by $x$.
\end{defi}
We define a transition system associated with the dynamical system.
This definition is an adaptation to our context of the classical
\emph{continuous transition system} in the case of hybrid systems (see
\cite{LPS00} for example).
\begin{defi}\label{tsds}
Given $\ds$ a dynamical system, we define a \emph{transition system
$T_\gamma=(Q,\Sigma,\to_\gamma)$ associated with the dynamical
system} by:
\begin{itemize}
\item the set $Q$ of states is $\mkdeux$;
\item the set $\Sigma$ of events is $M^+=\{ \tau \in M \mid \tau \ge
0\}$;
\item the transition relation $y_1 \xrightarrow{t}_\gamma y_2$ is
defined by:
\begin{align*}
&\exists x \in \mkun,\ \exists t_1, t_2 \in M^+\ \text{such that
} t_1 \le t_2,\\
&\qquad \gamma(x,t_1)=y_1,\ \gamma(x,t_2)=y_2\ \text{and}\ t =
t_2-t_1
\end{align*}
\end{itemize}
\end{defi}
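For instance, for the timed-automaton dynamics of Example~\ref{ex:AT}, $\gamma(x,t_2) - \gamma(x,t_1) = (t_2-t_1) \cdot (1,\ldots,1)$, so that Definition~\ref{tsds} yields
\[
y_1 \xrightarrow{t}_\gamma y_2 \quad \text{iff} \quad y_2 = y_1 + t \cdot (1,\ldots,1), \quad t \in M^+.
\]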
\subsection{\texorpdfstring{$\M$}{M}-Games Under Perfect
Observation}\label{subsec-perfect}
In this subsection, we define $\M$-automata, which are automata with
guards, resets and continuous dynamics definable in the
$\M$-structure. We then introduce our model of dynamical game which
is an $\M$-automaton with two sets of actions, one for each player; we
finally express in terms of winning strategy the main problem we will
be interested in, the control problem in a class $\C$ of $\M$-automata
under perfect observation. The partial observation framework will be
discussed in Subsection~\ref{subsec-partial}.
\begin{defi}[$\M$-automaton]
An \emph{$\M$-automaton} $\A$ is a tuple $(\M, Q,\text{\sffamily\upshape{Goal}},
\Sigma,\delta,\gamma)$ where $\M= \langle M,+,0,1,<,\ldots \rangle$
is an expansion of an ordered group, $Q$ is a finite set of
locations, $\text{\sffamily\upshape{Goal}} \subseteq Q$ is a subset of winning locations,
$\Sigma$ is a finite set of actions, $\delta$ consists of a finite
number of transitions $(q,g,a,R,q') \in Q \times 2^{V_2} \times
\Sigma \times (V_2 \to 2^{V_2}) \times Q$ where $g$ and $R$ are
definable in $\M$, and $\gamma$ maps every location $q\in Q$ to a
dynamics $\gamma_q: V_1 \times V \to V_2$.
\end{defi}
We use a general definition for resets: a reset $R$ is a
general function from $V_2$ to $2^{V_2}$, which may correspond
to a non-deterministic update. If the current state is $(q,y)$ the
system will jump to some $(q',y')$ with $y' \in R(y)$.
\medskip An $\M$-automaton $\A=(\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta,\gamma)$
defines a {\em mixed transition system} $T_{\A}=(S,\Gamma,\to)$ where:
\begin{itemize}
\item the set $S$ of states is $Q \times \mkdeux$;
\item the set $\Gamma$ of labels is $M^+ \cup \Sigma$, (where $M^+=\{
\tau \in M\ \mid\ \tau \ge 0\}$);
\item the transition relation $(q,y) \xrightarrow{e} (q',y')$ is
defined when:
\begin{itemize}
\item $e \in \Sigma$, and there exists $(q,g,e,R,q') \in \delta$
with $y \in g$ and $y' \in R(y)$, or
\item $e \in M^+$, $q=q'$, and $y \xrightarrow{e}_{\gamma_q} y'$
where $\gamma_q$ is the dynamic in location $q$.
\end{itemize}
\end{itemize}
In the sequel, we will focus on behaviors of $\M$-automata which
alternate between continuous transitions and discrete transitions.
We will also need more precise notions of transitions. When $(q,y)
\xrightarrow{\tau} (q,y')$ with $\tau \in M^+$, this is due to some
choice of $(x,t) \in V_1 \times V$ such that $\gamma_q(x,t)=y$. We say
that $(q,y) \xrightarrow{\tau}_{x,t} (q,y')$ if $\gamma_q(x,t)=y$ and
$\gamma_q(x,t+\tau)=y'$. To ease the reading of the paper, we will
sometimes write $(q,x,t,y) \xrightarrow{\tau} (q,x,t+\tau,y')$ for
$(q,y) \xrightarrow{\tau}_{x,t} (q,y')$. We say that an action
$(\tau,a) \in M^+ \times \Sigma$ is enabled in a state $(q,x,t,y)$ if
there exists $(q',x',t',y')$ and $(q'',x'',t'',y'')$ such that
$(q,x,t,y) \xrightarrow{\tau} (q',x',t',y') \xrightarrow{a}
(q'',x'',t'',y'')$. We then write $(q,x,t,y) \xrightarrow{\tau,a}
(q'',x'',t'',y'')$.
A {\em run} of $\A$ is a finite or infinite sequence
$(q_0,x_0,t_0,y_0) \xrightarrow{\tau_1,a_1} (q_1,x_1,t_1,y_1) \ldots$
Such a run is said to be {\em winning} if $q_i \in \text{\sffamily\upshape{Goal}}$ for some $i$.
We denote by $\Runsf{\A}$ the set of finite runs in $\A$. If $\rho$ is a
finite run $(q_0,x_0,t_0,y_0) \xrightarrow{\tau_1,a_1} \ldots
\xrightarrow{\tau_n,a_n} (q_n,x_n,t_n,y_n)$ we define $\last(\rho) =
(q_n,x_n,t_n,y_n)$.
\begin{defi}[$\M$-game]
An {\em $\M$-game} is an $\M$-automaton $(\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,$
$\delta,\gamma)$ where $\Sigma$ is partitioned into two subsets
$\Sigma_c$ and $\Sigma_u$ corresponding to controllable and
uncontrollable actions.
\end{defi}
\begin{defi}[Strategy]
A {\em strategy}\footnote{In the context of control problems, a
strategy is also called a {\em controller}.} is a partial function
$\lambda$ from $\Runsf{\A}$ to $M^+ \times \Sigma_c$ such that for
all runs $\rho$ in $\Runsf{\A}$, if $\lambda(\rho)$ is defined, then
it is enabled in $\last(\rho)$.
\end{defi}
The strategy tells what is to be done at the current moment: at each
instant it gives the delay we will wait and the controllable action
that will be taken after this delay. Note that the environment may
still have to choose between several edges, each labeled by the action
given by the strategy (because the game is not assumed to be
deterministic).
A strategy $\lambda$ is said to be \emph{memoryless} if for all finite runs
$\rho$ and $\rho'$, $\last(\rho)=\last(\rho')$ implies
$\lambda(\rho)=\lambda(\rho')$. Let $\rho = (q_0,x_0,t_0,y_0)
\xrightarrow{\tau_1,a_1} \ldots$ be a run, and denote for every $i$ by
$\rho_i$ the prefix of length $i$ of $\rho$. The run $\rho$ is said to be
\emph{consistent with a strategy $\lambda$} when for all $i$, if
$\lambda(\rho_i) = (\tau,a)$ then either $\tau_{i+1}=\tau$ and
$a_{i+1}=a$, or $\tau_{i+1} \le \tau$ and $a_{i+1} \in \Sigma_u$. A
run $\rho$ is said to be \emph{maximal w.r.t.\ a strategy $\lambda$} if it is
infinite or if $\lambda(\rho)$ is not defined.
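The consistency requirement can be read operationally: at every step,
either the controller's proposal was followed exactly, or the
environment preempted it with an uncontrollable action fired no later
than the proposed delay. The following Python sketch checks this on a
finite run (all names and the run representation are illustrative
assumptions, runs being simplified to the data the check needs):
\begin{verbatim}
# Sketch: checking consistency of a finite run with a strategy.
# `steps` is a list of pairs ((tau_i, a_i), prefix_i), where prefix_i
# is the run prefix before step i; `strategy` maps a prefix to
# (tau, a) or None; `sigma_u` is the set of uncontrollable actions.

def consistent(steps, strategy, sigma_u):
    for (tau_i, a_i), prefix in steps:
        choice = strategy(prefix)
        if choice is None:
            continue                       # nothing prescribed here
        tau, a = choice
        followed = (tau_i == tau and a_i == a)
        preempted = (tau_i <= tau and a_i in sigma_u)
        if not (followed or preempted):
            return False
    return True
\end{verbatim}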
A strategy $\lambda$ is \emph{winning from a state $(q,y)$} if for all
$(x,t)$ such that $\gamma(x,t)=y$, all maximal runs starting in
$(q,x,t,y)$ compatible with $\lambda$ are winning. The \emph{set of
winning states} is the set of states from which there is a winning
strategy.
\medskip We can now define the control problems we will study.
\begin{prob}[Control problem under perfect observation in a class
$\C$ of $\M$-automata]
\label{prob:controlperfect}
Given an $\M$-game $\A \in \C$, and a definable initial state
$(q,y)$, determine whether there exists a winning strategy
in $\A$ from $(q,y)$.
\end{prob}
\begin{prob}[Controller synthesis under perfect observation in a
class $\C$ of $\M$-automata]
\label{prob:synthperfect}
Given an $\M$-game $\A \in \C$, and a definable initial state
$(q,y)$, determine whether there exists a winning strategy, and
compute such a strategy if possible.\footnote{In this definition,
`compute a strategy' means `give a formula for the strategy'. In
particular, a strategy which is computable is definable in the
theory.}
\end{prob}
\begin{exa}\label{ex:spiraletot}
Let us consider the $\M$-game $\A =
(\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta,\gamma)$ (depicted in
Fig.~\ref{fig:ex-spiraltot}) where ${\mathcal M} = \langle
\IR,+,\cdot,0,1,<,\sin, \cos \rangle$, $Q=\{q_1,q_2,q_3\}$,
$\text{\sffamily\upshape{Goal}}=\{q_2\}$, $\Sigma=\Sigma_c \cup \Sigma_u$ where
$\Sigma_c=\{c\}$ (resp. $\Sigma_u=\{u\}$) is the set of controllable
(resp. uncontrollable) actions. The dynamics in $q_1$,
$\gamma_{q_1}:\IR^2 \times [0,2 \pi] \times
\IR \to \IR^2$ is defined as follows:
$$\gamma_{q_1}(x_1,x_2,\theta,t)=
\begin{cases}
(t.\cos(\theta), t.\sin(\theta)) & \text{ if } (x_1,x_2) = (0,0),\\
(x_1+t.x_1 , x_2 +t.x_2) & \text{ if } (x_1,x_2) \ne (0,0).
\end{cases}$$ We associate with this dynamical system the partition
$\P=\{A,B,C\}$ where $A=\{(0,0)\}$, $B=\{\big(\theta
\cos(\theta),\theta \sin(\theta)\big) \mid 0 < \theta \le 2 \pi\}$
and $C=\IR^2 \setminus (A \cup B)$. Let us call piece $B$ \emph{the
spiral} (see Figure~\ref{fig:spirale}). The guard $g_B$
corresponds to $B$-states ({\it i.e.}, points on the spiral) and the
guard $g_C$ corresponds to $C$-states (points not on the spiral and
different from the origin).
\begin{figure}[ht]
\null\hfill\hfill\subfigure[The $\M$-game $\A$\label{fig:mgame}]{
\begin{tikzpicture}
\draw (0,0) node [circle,inner sep=1.5pt,draw] (q1) {$q_1$};
\draw [latex'-] (q1) -- +(-.8,0);
\draw (2,.8) node [circle,inner sep=1.5pt,draw,fill=black!40!white] (q2) {$q_2$};
\draw (2,-.8) node [circle,inner sep=1.5pt,draw] (q3) {$q_3$};
\draw [-latex'] (q1) -- (q2) node [midway,sloped,above] {$g_C,c$};
\draw [-latex'] (q1) -- (q3) node [midway,sloped,below] {$g_B,u$};
\path (1,-2) --(1,-2);
\end{tikzpicture}
}\hfill\hfill
\subfigure[Dynamics in $q_1$\label{fig:spirale}]{
\begin{tikzpicture}[scale=.3]
\draw[domain=0:6.283,smooth,variable=\t,line width=1.5pt] plot ({\t*cos(\t r)},{\t*sin(\t r)});
\foreach \theta in {0,0.524,1.047,1.571,2.094,2.618,3.141,3.665,4.189,4.712,5.236,5.760}
{
\draw[domain=0:4,variable=\t,loosely dotted,->,line width=1pt] plot ({\t*cos(\theta r)},{\t*sin(\theta r)});
\draw[domain=4:8,variable=\t,loosely dotted,->,line width=1pt] plot ({\t*cos(\theta r)},{\t*sin(\theta r)});
}
\draw (0,0) [fill=white,line width=1.5pt] circle (7pt) node [below right=1.5pt,fill=white,circle,inner sep=0pt] {$A$};
\draw (6.283,0) [fill=black,line width=1.5pt] circle (7pt);
\draw [<-,densely dotted] (6.2,-1) .. controls +(-60:30pt) and +(180:30pt) .. (8,-2) node [pos=1,right] {$B$};
\end{tikzpicture}
}
\hfill\hfill\null
\caption{The $\M$-game $\A$ of Example~\ref{ex:spiraletot} and its spiral dynamics}
\label{fig:ex-spiraltot}
\end{figure}
In this example, the point $(q_1,(0,0))$ is a winning state. Indeed
a winning strategy is given by $\lambda(q_1,0,0,\theta,t) =
(\frac{\theta}{2},c)$ where $c$ consists in taking the transition
leading to state $q_2$ (which is winning).
\end{exa}
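As a quick numerical sanity check of this strategy (illustrative code
only, with hypothetical helper names): after waiting $\theta/2$ along
the trajectory with parameter $\theta$, the current point lies at
distance $\theta/2$ from the origin on the ray of angle $\theta$,
whereas the spiral point on that ray lies at distance $\theta$; the
reached point therefore satisfies $g_C$ and the transition to $q_2$ is
enabled.
\begin{verbatim}
import math

# Sketch: the point reached from (0,0) after delay theta/2 on the
# trajectory with parameter theta is off the spiral, hence in C.
# (Assumes the dynamics gamma_{q1} of this example.)

def position(theta, tau):
    return (tau * math.cos(theta), tau * math.sin(theta))

def on_spiral(p, eps=1e-9):
    # B = {(r cos r, r sin r) | 0 < r <= 2 pi}
    r = math.hypot(p[0], p[1])
    if not 0 < r <= 2 * math.pi:
        return False
    return (abs(p[0] - r * math.cos(r)) < eps
            and abs(p[1] - r * math.sin(r)) < eps)

for theta in (0.5, 1.0, 3.0, 6.0):
    assert not on_spiral(position(theta, theta / 2))  # g_C holds
\end{verbatim}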
\subsection{\texorpdfstring{$\M$}{M}-Games Under Partial Observation}\label{subsec-partial}
In Subsection~\ref{subsec-perfect}, we assumed that from a given
point, the environment chooses the continuous trajectory followed by
the game, and the controller reacts accordingly. In this section, we
consider partial observation of the dynamics: the trajectory is not
known by the controller, and its strategy may depend only on the
current point. In particular, this framework naturally models drift of
clocks, where the slopes of the clocks lie within an
interval~\cite{puri98,ALM05}. Note that our partial observation
assumption concerns the dynamics of the system, not the actions which
are performed. This has to be contrasted with the notion of partial
observation studied in the framework of finite systems in~\cite{AVW03}
or in the context of timed systems in~\cite{BDMP03} where the partial
observation assumption concerns the actions which are performed, and not the
dynamics (indeed, in these models, there is no real choice for the
dynamics; it is completely determined by the point in the
state-space). In order to formalize our partial observation framework,
we need to adapt notions such as strategies to this new setting. First,
we define what we call \emph{observation} of a given run.
\begin{defi}[Observation of a run]
Let $\rho=(q_0,x_0,t_0,y_0)\xrightarrow{\!\tau_1,a_1\!} \ldots
\xrightarrow{\!\tau_n,a_n\!} (q_n,x_n,t_n,y_n)$ be a finite run. The
\emph{observation} of $\rho$, denoted $\text{obs}(\rho)$ is the sequence
$(q_0,y_0) \xrightarrow{\tau_1,a_1} \ldots \xrightarrow{\tau_n,a_n}
(q_n,y_n)$.
\end{defi}
\begin{defi}[Strategy under partial observation]
A strategy $\lambda$ is said to be \emph{under partial observation} if for
all finite runs $\rho, \rho'$, $\text{obs}(\rho)=\text{obs}(\rho')$ implies
$\lambda(\rho)=\lambda(\rho')$.
\end{defi}
All other notions, like memoryless strategies, consistency, winning
strategies, winning states, \textit{etc.}, extend naturally to this
new context. In this setting, we will consider the following two
problems.
\begin{prob}[Control problem under partial observation in a class
$\C$ of $\M$-automata]
\label{prob:contpartial}
Given an $\M$-game $\A \in \C$, and a definable initial state
$(q,y)$, determine whether there exists a winning strategy under
partial observation in $\A$ from $(q,y)$.
\end{prob}
\begin{prob}[Controller synthesis under partial observation in a
class $\C$ of $\M$-automata]
\label{prob:synthpartial}
Given an $\M$-game $\A \in \C$, and a definable initial state
$(q,y)$, determine whether there exists a winning strategy under
partial observation in $\A$ from $(q,y)$, and compute such a
strategy if possible.
\end{prob}
\begin{exa}
We consider again the spiral example (Example~\ref{ex:spiraletot}).
We showed that under perfect observation this $\M$-game has a
winning strategy in $(q_1,(0,0))$ given by
$\lambda(q_1,0,0,\theta,t) = (\frac{\theta}{2},c)$. Note that this
strategy depends on the precise trajectory (parameter $\theta$).
Moreover, one can show that there is no winning strategy under
partial observation for this game: such a strategy may only depend
on the current point, and in this precise example, whatever action
$(\tau,a)$ the controller proposes in $(q_1,(0,0))$, there is a
trajectory which reaches a \emph{bad} state (\textit{i.e.}, points
on the spiral) before $\tau$.
\end{exa}
The previous example shows that some games can be winning under
perfect observation whereas they are not winning under partial
observation. Nevertheless, considering a new dynamics which will
roughly inform the controller of the current trajectory, we can see
the perfect observation control problem as a special case of the
partial observation framework. This is stated by the following
proposition:
\begin{prop} \label{prop:casparticulier} Given an $\M$-game
$\A_1$ and a state $(q,y)$ of $\A_1$,
we can effectively construct an $\M$-game $\A_2$ and a state
$(q',y')$ of $\A_2$ such that there exists a winning strategy under
perfect observation in $\A_1$ from $(q,y)$ iff there exists a
winning strategy under partial observation in $\A_2$ from
$(q',y')$.
\end{prop}
\proof
Let $\A_1=(\M, Q,\text{\sffamily\upshape{Goal}}, \Sigma,\delta,\gamma)$ where $\gamma:V_1
\times V \to V_2$. We define $V'_2=\{(x,t,y) \in V_1 \times V
\times V_2 \mid \gamma(x,t)=y\}$ and for $q \in Q$, $\gamma'_q:V_1
\times V \to V'_2$ such that
$\gamma'_q(x,t)=(x,t,\gamma_q(x,t))$. The dynamics $\gamma'$ behaves
exactly like $\gamma$ but ``gives'' to the controller the current
trajectory as this information is stored in the state space $V'_2$.
We then use $\A_2=(\M, Q,\text{\sffamily\upshape{Goal}}, \Sigma,\delta',\gamma')$, where
$\delta'$ is the transition relation $\delta$ adapted to the new
states $V'_2$: if $(q_1,g,a,R,q_2)\in \delta$ then
$(q_1,g',a,R',q_2) \in \delta'$ where $g'=\{(x,t,\gamma(x,t)) \mid
\gamma(x,t) \in g \}$ and for all $(x,t) \in V_1 \times V$,
$R'(\gamma(x,t))= \{(x',t',\gamma(x',t')) \mid \gamma(x',t') \in
R(\gamma(x,t))\}$.
W.l.o.g. we can suppose that there exists a unique $(x_0,t_0) \in
V_1 \times V$ such that $\gamma(x_0,t_0)=y$ (if necessary, we add a
location with constant continuous dynamics pointing to the actual
location of $y$). Then there exists a
winning strategy under perfect observation in $\A_1$ from $(q,y)$
iff there exists a winning strategy under partial observation in
$\A_2$ from $(q,(x_0,t_0,y))$. \qed
From the above proposition, we get that any definability,
decidability, \textit{etc.}, result obtained in the partial
observation framework also holds in the perfect observation framework.
\subsection{\texorpdfstring{$\M$}{M}-Games and Bisimulation} \label{subsec:contre_ex}
Time-abstract bisimulation \cite{henzinger95,davoren99,AHLP00} is a
sufficient behavioral relation to check reachability properties of
hybrid systems, and in particular of
$\M$-automata~\cite{brihaye05}. Moreover, it has been shown that it is
also a sufficient behavioral relation in order to solve control
problems in the framework of timed automata~\cite{AMPS98}. However,
when considering wider classes of hybrid systems, we will see that
this tool is no longer sufficient for solving control problems in
the perfect observation framework.
\begin{defi}
Given a mixed transition system $T=(S,\Gamma,\to)$, a {\em
time-abstract bisimulation for $T$} is an equivalence relation
$\mathord{\sim} \subseteq S \times S $ such that $\forall q_1,q'_1,
q_2 \in S$, the following two conditions are satisfied:
$$\begin{array}{l} \forall a \in \Sigma,\ \left( q_1 \sim q'_1\
\text{and}\ q_1 \xrightarrow{a} q_2 \right) \Rightarrow \\
\qquad\qquad\qquad \left(
\exists q'_2 \in S\ \text{s.t.}\ q_2 \sim q'_2\ \text{and}\ q'_1
\xrightarrow{a} q'_2 \right)\\[0.3cm]
\forall \tau \in M^+,\ \left( q_1 \sim q'_1\ \text{and}\ q_1
\xrightarrow{\tau} q_2 \right) \Rightarrow\\
\qquad \left( \exists \tau' \in M^+,\ \exists
q'_2 \in S\ \text{s.t.}\ q_2 \sim q'_2\ \text{and}\ q'_1
\xrightarrow{\tau'} q'_2 \right)
\end{array}$$
\end{defi}
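To make these two conditions concrete, here is a minimal Python sketch
that checks them on an explicitly enumerated finite abstraction, with
delays collapsed to a single abstract timed step; this is only an
illustration under strong finiteness assumptions, since in general $S$
is infinite and the check must be carried out symbolically.
\begin{verbatim}
# Sketch: checking the time-abstract bisimulation conditions on a
# finite abstraction. `disc` is a set of (s, a, s') discrete moves,
# `timed` a set of (s, s') timed moves with the delay abstracted away,
# and `block` maps each state to its equivalence class.

def is_time_abstract_bisim(states, disc, timed, block):
    def succs_a(s, a):
        return [s2 for (s1, act, s2) in disc if s1 == s and act == a]
    def succs_t(s):
        return [s2 for (s1, s2) in timed if s1 == s]
    actions = {a for (_, a, _) in disc}
    for s1 in states:
        for s2 in states:
            if block[s1] != block[s2]:
                continue
            for a in actions:              # discrete condition
                for t1 in succs_a(s1, a):
                    if not any(block[t1] == block[t2]
                               for t2 in succs_a(s2, a)):
                        return False
            for t1 in succs_t(s1):         # timed condition
                if not any(block[t1] == block[t2]
                           for t2 in succs_t(s2)):
                    return False
    return True
\end{verbatim}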
\begin{exa}\label{bisimnotgood}
In this example, we assume a perfect observation framework. Let us
consider the $\M$-game $\A = (\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta,\gamma)$
where $\M = \langle \IR,<,+,0,1,\equiv_2 \rangle$ ($\equiv_2$
denotes the ``modulo $2$'' relation), $Q=\{q_1,q_2,q_3\}$,
$\text{\sffamily\upshape{Goal}}=\{q_2\}$, $\Sigma=\Sigma_c \cup \Sigma_u$ where
$\Sigma_c=\{c\}$ (resp. $\Sigma_u=\{u\}$) is the set of controllable
(resp. uncontrollable) actions. The dynamics in $q_1$,
$\gamma_{q_1}: \IR^+ \times \{0,1\} \times \IR^+ \to \IR^+ \times
\{0,1\}$ is defined as $\gamma_{q_1}(x_1,x_2,t) = (x_1+t,x_2)$.
\begin{figure}[ht]
\null\hfill\hfill\subfigure[The $\M$-game $\A$\label{fig:mgame2}]{
\begin{tikzpicture}
\draw (0,0) node [circle,inner sep=1.5pt,draw] (q1) {$q_1$};
\draw [latex'-] (q1) -- +(-.8,0);
\draw (2,.8) node [circle,inner sep=1.5pt,draw,fill=black!40!white] (q2) {$q_2$};
\draw (2,-.8) node [circle,inner sep=1.5pt,draw] (q3) {$q_3$};
\draw [-latex'] (q1) -- (q2) node [midway,sloped,above] {$g_C,c$};
\draw [-latex'] (q1) -- (q3) node [midway,sloped,below] {$g_B,u$};
\path (1,-2) --(1,-2);
\end{tikzpicture}
}\hfill\hfill
\subfigure[Dynamics in $q_1$\label{fig:cexbisim}]{
\begin{tikzpicture}[scale=.8]
\begin{scope}
\draw (0,0) -- (5.5,0) node [pos=0,left=.5cm] {$x_2=0$};
\draw [dotted] (5.5,0) -- (6.5,0);
\foreach \x in {0,1,2,3,4,5}
{
\draw (\x,-.1) -- (\x,.1);
\ifnum\x=1 \path (\x-1,0) -- (\x,0) node [midway,above] {$A$}; \fi
\ifnum\x=2 \path (\x-1,0) -- (\x,0) node [midway,above] {$C$}; \fi
\ifnum\x=4 \path (\x-1,0) -- (\x,0) node [midway,above] {$C$}; \fi
\ifnum\x=3 \path (\x-1,0) -- (\x,0) node [midway,above] {$B$}; \fi
\ifnum\x=5 \path (\x-1,0) -- (\x,0) node [midway,above] {$B$}; \fi
}
\end{scope}
\begin{scope}[yshift=1cm]
\draw (0,0) -- (5.5,0) node [pos=0,left=.5cm] {$x_2=1$};
\draw [dotted] (5.5,0) -- (6.5,0);
\foreach \x in {0,1,2,3,4,5}
{
\draw (\x,-.1) -- (\x,.1);
\ifnum\x=1 \path (\x-1,0) -- (\x,0) node [midway,above] {$A$}; \fi
\ifnum\x=2 \path (\x-1,0) -- (\x,0) node [midway,above] {$B$}; \fi
\ifnum\x=4 \path (\x-1,0) -- (\x,0) node [midway,above] {$B$}; \fi
\ifnum\x=3 \path (\x-1,0) -- (\x,0) node [midway,above] {$C$}; \fi
\ifnum\x=5 \path (\x-1,0) -- (\x,0) node [midway,above] {$C$}; \fi
}
\end{scope}
\path (0,-2) -- (1,-1);
\end{tikzpicture}
}
\hfill\hfill\null
\caption{Time-abstract bisimulation does not preserve winning states}
\end{figure}
We consider the partition depicted in Figure~\ref{fig:cexbisim}. The
guard $g_C$ is satisfied on $C$-states and the guard $g_B$ is
satisfied on $B$-states. Note that this partition is compatible with
$\text{\sffamily\upshape{Goal}}$ and with the discrete transitions.
In this game, the controller can win when it enters a $C$-state by
performing action $c$ and it loses when entering a $B$-state because
it cannot prevent the environment from performing $u$ and going to
the losing state $q_3$.
It follows that the state $s_1=(q_1,(0,1))$ is losing, whereas the
state $s_2=(q_1,(0,0))$ is winning. However, the equivalence
relation induced by the partition $\{A,B,C\}$ is a time-abstract
bisimulation: the two states $s_1$ and $s_2$ are thus time-abstract
bisimilar, but not equivalent for the game. It follows that
time-abstract bisimulation is not correct for solving control
problems, in the sense that a time-abstract bisimulation cannot
always distinguish between winning and losing states.
\end{exa}
\begin{prop}
\label{prop:incorrect}
Let $\M$ be a structure and $\A$ an $\M$-game. A partition
respecting $\text{\sffamily\upshape{Goal}}$ and inducing a time-abstract bisimulation on $Q
\times V_2$ does not necessarily respect the set of winning states
of $\A$.
\end{prop}
\section{The Suffix and the Superword Abstractions}\label{section2}
In this section we explain how to symbolically encode trajectories of
dynamical systems with ``words''. We will present two different
encodings (or abstractions) depending on the observation framework
(perfect or partial) we assume.
\subsection{Perfect Observation and the Suffix Abstraction}
\label{subsec-suffix}
In this subsection, we review the word encoding technique introduced
in~\cite{BMRT04} in order to study o-minimal hybrid systems. We focus
on the \emph{suffix partition} introduced in \cite{brihaye05}. This
encoding will be suitable for studying the control reachability
problem in the perfect observation framework
(see~Subsection~\ref{subsec-aboutperfect}). We first explain how to
build words associated with trajectories. Given a dynamical system
$\ds$ and a finite partition $\P$ of $\mkdeux$, given $x \in \mkun$ we
associate a word with the trajectory $\Gamma_x=\{\gamma(x,t) \mid t
\in V\}$ in the following way. We consider the sets $\{t \in V \mid
\gamma(x,t) \in P\}$ for $P \in \P$. This gives a partition of the
time domain $V$. In order to define a word on $\P$ associated with the
trajectory determined by $x$, we need to define the set of intervals
${\mathcal F}_x = \bigl\{I \mid I\ \text{is a time interval or a point and
is maximal for the property ``} \exists P \in \P,\ \forall t \in
I,\ \gamma(x,t) \in P\text{''}\bigr\}$. For each $x$, the set ${\mathcal
F}_x$ is totally ordered by the order induced from $M$. This allows
us to define \emph{the word on $\P$ associated with the trajectory
$\Gamma_x$} denoted $\omega_x$.
\begin{defi}\label{def:wordgx}
Given $x \in \mkun$, \emph{the word associated with $\Gamma_x$} is
given by the function $\omega_x : {\mathcal F}_x \to \P$ defined by
$\omega_x(I)= P$, where $I \in {\mathcal F}_x$ is such that $\forall t
\in I$, $\gamma(x,t) \in P$.
\end{defi}
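When the time domain can only be explored numerically, $\omega_x$ may
be approximated by sampling the trajectory and collapsing consecutive
repetitions of the same piece. The Python sketch below illustrates this
on the timed automata dynamics of Example~\ref{ex-ta} below; it is only
an approximation of the exact symbolic construction, since sampling can
miss a piece visited during a very short interval or at a single time
instant.
\begin{verbatim}
# Sketch: approximate the word omega_x by sampling gamma(x, .) and
# collapsing consecutive equal pieces (numerical illustration only).

def word_of_trajectory(gamma, x, piece_of, times):
    word = []
    for t in times:
        p = piece_of(gamma(x, t))
        if not word or word[-1] != p:
            word.append(p)
    return word

# Dynamics gamma((x1,x2), t) = (x1+t, x2+t), B = [1,2]^2, A = rest.
gamma = lambda x, t: (x[0] + t, x[1] + t)
piece = lambda y: 'B' if 1 <= y[0] <= 2 and 1 <= y[1] <= 2 else 'A'
times = [i / 1000 for i in range(5001)]               # sample [0, 5]
print(word_of_trajectory(gamma, (0.0, 0.5), piece, times))  # A, B, A
\end{verbatim}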
The set of words associated with $\ds$ over $\P$ gives in some sense a
complete \emph{static} description of the dynamical system $\ds$
through the partition $\P$. In order to recover the \emph{dynamics},
we need further information.
Given a point $x$ of the input space $\mkun$, we have associated with
$x$ a trajectory $\Gamma_x$ and a word $\omega_x$. If we consider
$(x,t)$ a point of the space-time
$\mkun \times V$, it corresponds to a point $\gamma(x,t)$ lying on
$\Gamma_x$. To recover in some sense the position of $\gamma(x,t)$ on
$\Gamma_x$ from $\omega_x$, we associate with $(x,t)$ a suffix of the
word $\omega_x$ denoted $\omega_{(x,t)}$. The construction of
$\omega_{(x,t)}$ is similar to that of $\omega_x$; we only
need to consider the sets of intervals ${\mathcal F}_{(x,t)}= \big\{I \cap
\{t' \in V \mid t' \ge t\} \mid I \in {\mathcal F}_x \big\}.$
Let us notice that given a point $(x,t)$ of the space-time $\mkun
\times V$, there is a unique suffix $\omega_{(x,t)}$ of $\omega_x$
associated with $(x,t)$. Given a point $y \in \mkdeux$, there may be
several $(x,t)$ such that $\gamma(x,t)=y$, and so several suffixes are
associated with $y$. In other words, given $y \in \mkdeux$, the
\emph{future} of $y$ is non-deterministic, and a single suffix
$\omega_{(x,t)}$ is thus not sufficient to recover the dynamics of the
transition system through the partition $\P$. To encode the dynamical
behavior of a point $y$ of the output space $\mkdeux$ through the
partition $\P$, we introduce the notion of suffix abstraction (called
suffix dynamical type in~\cite{brihaye05,brihaye06}) of a point
$y$ w.r.t. $\P$.
\begin{defi}\label{suf}
Given a dynamical system $\ds$, a finite partition $\P$ of
$\mkdeux$, a point $y \in \mkdeux$, the \emph{suffix abstraction} of
$y$ w.r.t. $\P$ is denoted $\text{{\sffamily\upshape{Suf}}}_\P(y)$ and defined by $\text{{\sffamily\upshape{Suf}}}_\P(y) =
\{ \omega_{(x,t)} \mid \gamma(x,t) = y\}$.
\end{defi}
This allows us to define an equivalence relation on $\mkdeux$. Given
$y_1$, $y_2 \in \mkdeux$, we say that they are
\emph{suffix-equivalent} if and only if $\text{{\sffamily\upshape{Suf}}}_{\P}(y_1) =
\text{{\sffamily\upshape{Suf}}}_{\P}(y_2)$. We denote by $\text{{\sffamily\upshape{Suf}}}\left(\P\right)$ the partition
induced by this equivalence, which we call the \emph{suffix partition}
w.r.t. $\P$. We say that a partition $\P$ is \emph{suffix-stable} if
$\text{{\sffamily\upshape{Suf}}}(\P)=\P$ (it implies that if $y_1$ and $y_2$ belong to the same
piece of $\P$ then $\text{{\sffamily\upshape{Suf}}}_{\P}(y_1) = \text{{\sffamily\upshape{Suf}}}_{\P}(y_2)$).
To understand the suffix abstraction technique, we provide several
examples.
\begin{exa}
We start with Example~\ref{ex:spiraletot}. The suffix abstraction in
$(0,0)$ is composed of a unique suffix $ACBC$ because any trajectory
leaving $(0,0)$ crosses the spiral exactly once. By
looking at Fig.~\ref{fig:ex-spiraltot} one can convince oneself that
the suffixes associated with the other points of the plane are given
by suffixes of $ACBC$; for instance, the points lying on the spiral
(the piece $B$) have suffix $BC$.
\end{exa}
\begin{exa}\label{ex-ta}
We now consider a two-dimensional timed automaton dynamics (see
Example~\ref{ex:AT}). In this case we have that
$\gamma(x_1,x_2,t)=(x_1+t,x_2+t)$. We associate with this dynamics
the partition $\P=\{A,B\}$ where $B=[1,2]^2$ and $A=\IR^2 \setminus
B$. In this example the suffix partition is made of three pieces,
which are depicted in Figure~\ref{figautempo}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.3]
\draw [latex'-latex'] (0,2.5) -- (0,0) node [pos=0,left] {$x_2$} node [pos=1,below,left] {$0$} -- (2.5,0) node [pos=1,below] {$x_1$};
\draw [fill=black!20!white] (1,1) -- (2,1) -- (2,2) -- (1,2) --cycle;
\draw [dashed] (1,0) -- (2,1);
\draw [dashed] (0,1) -- (1,2);
\draw (1.5,1.5) node {$BA$};
\draw (2.5,2.5) node {$A$};
\draw (.5,.5) node {$ABA$};
\end{tikzpicture}
\end{center}
\caption{Suffixes for the timed automata dynamics}
\label{figautempo}
\end{figure}
\end{exa}
The suffix abstraction allows us to encode more sophisticated continuous
dynamics than the previous suffix encoding of a single trajectory. In the
next example we recover in some sense the continuous dynamics of
\emph{rectangular automata}~\cite{HKPV98}, which requires the use of the
suffix abstraction (some of the points do not have a unique suffix).
\begin{exa}\label{ex-rect}
We consider the dynamical system $({\mathcal M},\gamma)$ where ${\mathcal M}
= \langle \IR,+,\cdot,0,1,<\rangle$ and $\gamma:\IR^2 \times [1,2]
\times \IR^+ \to \IR^2$
is defined by $\gamma(x_1,x_2,p,t)= (x_1 + t, x_2 + p \cdot t)$. We
associate with this dynamical system the partition $\P=\{A,B,C\}$
where $B = [2,5] \times [3,4]$, $C = [3,5]\times[1,2]$ and $A =
\IR^2 \setminus (B \cup C)$ (see Figure~\ref{fig rect}). Let us
focus on the suffix abstractions of the two points $y_1 = (1,2.5)$
and $y_2=(2,0.5)$. We have that $\text{Suf}_{\mathcal P}(y_1) =
\{A,ABA\}$ and $\text{Suf}_{\mathcal P}(y_2) = \{ABA,ACABA\}$. Though
several points have several possible suffixes, the partition induced
by the suffix abstraction is finite and illustrated in
Figure~\ref{fig rect1}.
\begin{figure}[ht]
\null\hfill
\subfigure[The dynamics\label{fig rect}]{
\begin{tikzpicture}[scale=.8]
\draw [latex'-latex'] (0,5.5) -- (0,0) -- (5.5,0);
\draw [fill=black!60!white,line width=0pt] (3,1) -- (5,1) -- (5,2) -- (3,2) --cycle;
\draw (4.5,1.5) node {\textcolor{white}{$C$}};
\draw [fill=black,line width=0pt] (2,3) -- (5,3) -- (5,4) -- (2,4) --cycle;
\draw (3,3.5) node {\textcolor{white}{$B$}};
\draw (2.5,5.5) -- (1,2.5) -- (4,5.5);
\path [opacity=.5,fill=black!20!white,line width=0pt] (2.5,5.5) -- (1,2.5) -- (4,5.5) --cycle;
\draw (2.5,5.5) -- (1,2.5) -- (4,5.5);
\draw (1,2.5) [fill=black] circle (1.5pt) node [below left] {$y_1$};
\path [opacity=.5,fill=black!20!white,line width=0pt] (5.5,5.5) -- (4.5,5.5) -- (2,.5) -- (5.5,4) --cycle;
\draw (4.5,5.5) -- (2,.5) -- (5.5,4);
\draw (2,.5) [fill=black] circle (1.5pt) node [below left] {$y_2$};
\draw (1,1) node {$A$};
\end{tikzpicture}
}
\hfill\hfill
\subfigure[The suffix partition \label{fig rect1}]{
\begin{tikzpicture}[scale=.8]
\path [fill=black!10!white,line width=0pt] (0,0) -- (2,4) -- (0,2) --cycle;
\path [fill=black!20!white,line width=0pt] (2,0) -- (3,2) -- (1,0) --cycle;
\draw [latex'-latex'] (0,5.5) -- (0,0) -- (5.5,0);
\draw (3,1) -- (5,1) -- (5,2) -- (3,2) --cycle;
\draw (2,3) -- (5,3) -- (5,4) -- (2,4) --cycle;
\draw (0,2) -- (2,4);
\draw (0,0) -- (2,4);
\draw (1,0) -- (3,2);
\draw (2,0) -- (3,2);
\draw (2,0) -- (5,3);
\draw (3.5,0) -- (5,3);
\draw (4,0) -- (5,1);
\draw (4.5,0) -- (5,1);
\draw (1,2.5) [fill=black] circle (1.5pt) node [below left] {$y_1$};
\draw (2,.5) [fill=black] circle (1.5pt) node [below left] {$y_2$};
\draw [densely dotted,->] (.5,1.6) .. controls +(-120:20pt) and +(0:20pt) .. (-1,1) node [pos=1,left] {${\scriptstyle \{A,ABA\}}$};
\draw [densely dotted,->] (2,.8) .. controls +(90:20pt) and +(-120:20pt) .. (2.7,2.2) node [pos=1,above] {${\scriptstyle \{ABA,ACABA\}}$};
\end{tikzpicture}
} \hfill\null
\caption{A rectangular dynamics}
\end{figure}
\end{exa}
\subsection{Partial Observation and the Superword Abstraction}
The suffix-partition proposed in Subsection~\ref{subsec-suffix} is not
suitable for the partial observation framework. We will intuitively
convince the reader of this fact. Let $({\mathcal M},\gamma)$ be a
dynamical system, $y$ be a point of $V_2$ and $\P$ be a partition of
$V_2$. Since several trajectories cross the point $y$, there exist
several $y'$ such that $y \xrightarrow{\tau} y'$, for some $\tau \in
M^+$. In the partial observation framework, the controller does not
know which trajectory will be chosen by the environment and has to
choose a pair $(\tau,c)$ independently. In particular, starting from
$y$, one can potentially be in several different pieces of $\P$ after
$\tau$ time units. The notion of suffix abstraction is not sufficient
to capture these behaviors; that is why we now associate a
word $\omega_y$ on $2^{\mathcal P}$ with a given $y \in V_2$. We will see
in~Subsection~\ref{subsec-partial2} that this new encoding is suitable
in order to study the control reachability problem in the partial
observation framework. In order to define the word on $2^{\mathcal P}$
associated with $y \in V_2$, we need to introduce further definitions.
\begin{defi}
Let $y$ be a point of $V_2$ and $\tau$ be a time in $M^+$.
\[
{\mathcal F}_y(\tau) = \big\{ P \in {\mathcal P} \mid \exists x \in M^{k_1}
\ \exists t \in M \ \gamma(x,t)=y \text{ and } \gamma(x,t+\tau) \in
P \big\} .
\]
The set ${\mathcal F}_y(\tau)$ represents the set of pieces that we have
potentially reached after $\tau$ time units when starting from $y$.
\end{defi}
\begin{defi}
Let $y$ be a point of $V_2$.
\begin{align*}
{\mathcal F}_y = & \big\{ I\ \mid\ I \text{ is a time interval and
is maximal for the property }\\
& \qquad \qquad \qquad \exists S \in 2^{\mathcal P} \ \forall \tau \in
I \ \ {\mathcal F}_y(\tau) = S \big\}
\end{align*}
\end{defi}
For each $y \in V_2$, the set ${\mathcal F}_y$ exactly consists of the
connected components of the sets $\{ \tau \in M^+ \mid {\mathcal
F}_y(\tau) = S \}$, for $S \in 2^{\mathcal P}$. We can now define the
superword $\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ associated with a given $y \in V_2$.
\begin{defi}
Let $({\mathcal M},\gamma)$ be a dynamical system, $y$ be a point of
$V_2$, and $\mathcal P$ be a partition of $V_2$. \emph{The superword
associated with $y$} is given by the function $\text{{\sffamily\upshape{Sup}}}_{\P}(y):{\mathcal
F}_y \to 2^{\mathcal P}$ defined by:
\[
\text{{\sffamily\upshape{Sup}}}_{\P}(y)(I) = S \qquad \text{ where } I \in {\mathcal F}_y \text{ is
such that } \forall \tau \in I \ \ {\mathcal F}_y(\tau) = S.
\]
\end{defi}
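When the trajectories through $y$ can be enumerated explicitly,
$\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ can be approximated in the same sampled fashion as
before: at each delay $\tau$ we collect the set of pieces reachable
over all trajectories, then collapse consecutive repetitions. The
sketch below replays case~(a) of Figure~\ref{sufsup} under this
(strong) enumeration assumption; the exact maximal intervals of
${\mathcal F}_y$ must of course be computed symbolically.
\begin{verbatim}
# Sketch: approximate Sup_P(y) by sampling delays; each trajectory
# through y is assumed given as a function mapping the delay tau to
# the piece currently occupied (an assumption made for illustration).

def superword(trajectories, delays):
    word, last = [], None
    for tau in delays:
        letter = frozenset(t(tau) for t in trajectories)
        if letter != last:
            word.append(set(letter))
            last = letter
    return word

# Figure (a): from y1, the upper trajectory reads A B C B (switching
# at delays 1 and 2) and the lower one reads A C B C.
upper = lambda tau: 'A' if tau == 0 else ('C' if 1 < tau <= 2 else 'B')
lower = lambda tau: 'A' if tau == 0 else ('B' if 1 < tau <= 2 else 'C')
delays = [i / 100 for i in range(301)]
print(superword([upper, lower], delays))   # [{'A'}, {'B', 'C'}]
\end{verbatim}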
Let us notice that given a dynamical system $(\M,\gamma)$, a
partition $\P$ of $V_2$, and a point $y$ of $V_2$, there exists a unique
superword $\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ associated with $y$. If $(\M,\gamma)$ is a
dynamical system and $\P$ a finite partition of $V_2$, we write
$\text{{\sffamily\upshape{Sup}}}(\P)$ for the partition induced by superwords. We say that a
partition $\P$ is \emph{superword-stable} if $\text{{\sffamily\upshape{Sup}}}(\P)=\P$. Let us
illustrate this new notion on examples.
\begin{exa}\label{ex:supword}
Let us consider the three dynamical systems depicted in
Figure~\ref{sufsup}. In each of the three cases, the dynamical system
consists of two trajectories exiting the point $y_i$. What differs
in the three systems is the way the partition $\P=\{A,B,C\}$ is
crossed. We are interested in the superword associated with
$y_i$. For the first two dynamical systems we have that
$\text{{\sffamily\upshape{Sup}}}_{\P}(y_1) = \text{{\sffamily\upshape{Sup}}}_{\P}(y_2) = \{A\}\{B,C\}$, and for the last
one we have that $\text{{\sffamily\upshape{Sup}}}_{\P}(y_3)=\{A\}\{B,C\} \{B\} \{B,C\} \{C\}
\{B,C\}$.
\begin{figure}[h]
\null\hfill\subfigure[$\{A\}\{B,C\}$]{
\begin{tikzpicture}
\draw (0,0) [fill=black] circle (1.5pt) node [below left] {$y_1$} node [above left] {$A$};
\draw (3,-.3) -- (.3,-.3) -- (0,0) -- (.3,.3) -- (3,.3);
\path (.3,.3) -- (.7,.3) node [midway,above] {$B$};
\path (1,.3) -- (2,.3) node [midway,above] {$C$};
\path (2,.3) -- (3,.3) node [midway,above] {$B$};
\draw (1,.25) -- (1,.35);
\draw (2,.25) -- (2,.35);
\path (.3,-.3) -- (.7,-.3) node [midway,below] {$C$};
\path (1,-.3) -- (2,-.3) node [midway,below] {$B$};
\path (2,-.3) -- (3,-.3) node [midway,below] {$C$};
\draw (1,-.25) -- (1,-.35);
\draw (2,-.25) -- (2,-.35);
\end{tikzpicture}
}\hfill
\subfigure[$\{A\}\{B,C\}$]{
\begin{tikzpicture}
\draw (0,0) [fill=black] circle (1.5pt) node [below left] {$y_2$} node [above left] {$A$};
\draw (3,-.3) -- (.3,-.3) -- (0,0) -- (.3,.3) -- (3,.3);
\path (.3,.3) -- (2.7,.3) node [midway,above] {$B$};
\path (.3,-.3) -- (2.7,-.3) node [midway,below] {$C$};
\end{tikzpicture}
}\hfill
\subfigure[\mbox{$\{A\}\{B,C\} \{B\} \{B,C\} \{C\} \{B,C\}$}]{
\begin{tikzpicture}
\draw (0,0) [fill=black] circle (1.5pt) node [below left] {$y_3$} node [above left] {$A$};
\draw (3,-.3) -- (.3,-.3) -- (0,0) -- (.3,.3) -- (3,.3);
\path (.3,.3) -- (.7,.3) node [midway,above] {$B$};
\path (1,.3) -- (2,.3) node [midway,above] {$C$};
\path (2,.3) -- (3,.3) node [midway,above] {$B$};
\draw (1,.25) -- (1,.35);
\draw (2,.25) -- (2,.35);
\path (.3,-.3) -- (.4,-.3) node [midway,below] {$C$};
\path (.7,-.3) -- (1.7,-.3) node [midway,below] {$B$};
\path (1.7,-.3) -- (2.7,-.3) node [midway,below] {$C$};
\draw (.7,-.25) -- (.7,-.35);
\draw (1.7,-.25) -- (1.7,-.35);
\end{tikzpicture} \hspace*{2cm}
}
\caption{Suffix and superword are not comparable}
\label{sufsup}
\end{figure}
Let us notice that the notions of \emph{suffix abstraction} and
\emph{superword abstraction} are incomparable. To illustrate this
fact, let us consider again the three dynamical systems of
Figure~\ref{sufsup}. We have that
$\text{{\sffamily\upshape{Sup}}}_{\P}(y_1)=\text{{\sffamily\upshape{Sup}}}_{\P}(y_2)\ne\text{{\sffamily\upshape{Sup}}}_{\P}(y_3)$. Let us now
consider the suffix abstractions of these points:
\[
\text{{\sffamily\upshape{Suf}}}_{\P}(y_1) = \{ ABCB,ACBC\} \ ; \ \text{{\sffamily\upshape{Suf}}}_{\P}(y_2) = \{ AB,AC\} \ ; \
\text{{\sffamily\upshape{Suf}}}_{\P}(y_3) = \{ ABCB,ACBC\}.
\]
This shows that the superword abstraction can distinguish between
$y_1$ and $y_3$, but cannot distinguish between $y_1$ and $y_2$,
although the suffix abstraction can distinguish between $y_1$ and
$y_2$, but cannot distinguish between $y_1$ and $y_3$.
\end{exa}
\section{Solving an \texorpdfstring{$\M$}{M}-Game}\label{mgame}
In this section we first present a general procedure to compute the
set of winning states for an $\M$-game under partial observation. We
then show that if a partition is superword-stable,
the procedure can be performed symbolically on pieces of the
partition. The procedure described is not always effective and we will
later point out specific $\M$-structures for which each step of the
procedure is computable. By Proposition~\ref{prop:casparticulier}, we
know that the perfect observation control problem can be seen as a
special case of the partial observation framework; however at the end
of this section, we explain how the suffix partition can be used in
order to directly solve the perfect observation control problem.
\subsection{Controllable Predecessors under Partial Observation}
\label{subsec:contpred}
As for classical reachability games~\cite{lncs2500}, one way of
computing winning states is to compute the {\em attractor} of goal
states by iterating a {\em controllable predecessor} operator. Let
$\A=(\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta,\gamma)$ be an $\M$-game. For $W
\subseteq Q \times V_2$, we first define the notion of
discrete predecessors. For every $a \in \Sigma = \Sigma_c
\cup \Sigma_u$, we have
$$\Pred_a(W) = \left\{(q,y) \in Q \times V_2 \ \middle|\
\begin{array}{l} a\ \text{is enabled in}\ (q,y), \\
\text{and}\ \forall (q',y') \in Q \times V_2,\ \\
\left((q,y) \xrightarrow{a} (q',y') \Rightarrow (q',y') \in
W\right) \end{array}\right\}.$$ The
intuition of this operator is the following: a state is in
$\Pred_a(W)$ if action $a$ can be done from $(q,y)$, and whichever
transition is taken leads to a state in $W$ (action $a$ ensures $W$ in
one step). We also define $\cPred(W) = \displaystyle \bigcup_{c \in
\Sigma_c} \Pred_c(W)$ and $\uPred(W) = \displaystyle \bigcup_{u \in
\Sigma_u} \Pred_u(W)$.
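On a finite abstraction where states and labeled transitions are
enumerated explicitly, these operators take the following form; the
Python sketch is only meant to make the definitions concrete (in our
setting the sets are definable and the computation is symbolic).
\begin{verbatim}
# Sketch: Pred_a, cPred and uPred over an explicit finite abstraction.
# `trans` is a set of triples (state, action, state).

def pred(a, W, states, trans):
    result = set()
    for s in states:
        succs = [s2 for (s1, act, s2) in trans if s1 == s and act == a]
        if succs and all(s2 in W for s2 in succs):
            result.add(s)          # a is enabled and guarantees W
    return result

def c_pred(W, states, trans, controllable):
    return set().union(*[pred(a, W, states, trans)
                         for a in controllable])

def u_pred(W, states, trans, uncontrollable):
    return set().union(*[pred(u, W, states, trans)
                         for u in uncontrollable])
\end{verbatim}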
As for timed and hybrid games~\cite{AMPS98,HHM99}, we also define a
\emph{safe time predecessor} of a set $W$ w.r.t. a set $W'$, that is
specific to the partial observation framework:
a state $(q,y)$ is in $\timePred_{\textsf{partial}}(W,W')$ if a delay
$\tau$ can be chosen such that for all trajectories starting from
$(q,y)$, one can let $\tau$ time units pass avoiding $W'$ and then
reach $(q',y') \in W$. Formally the operator
$\timePred_{\textsf{partial}}$ is defined as follows:
$$\timePred_{\textsf{partial}}(W,W') = \left\{ (q,y) \in Q \times V_2
\ \middle|\
\begin{array}{l} \exists \tau \in M^+,\ \forall (x,t) \in V_1 \times
V\ \text{s.t.}\\ \gamma_q(x,t)=y,\ \text{and}\ (q,y)
\xrightarrow{\tau}_{x,t} (q',y')\\ \text{implies}\ \left((q',y')\in W\
\text{and}\ \Post_{[t,t+\tau]}^{q,x} \subseteq
\overline{W'}\right)\end{array}\right\}$$ where
$\Post_{[t,t+\tau]}^{q,x} = \{\gamma_q(x,t')\ \mid\ t \le t' \le
t+\tau \}$.
\medskip The \emph{controllable predecessor} operator under partial
observation $\pi_{\textsf{partial}}$ is then defined as:
\[
\pi_{\textsf{partial}}(W)= W \cup \bigcup\limits_{a \in \Sigma_c}
\timePred_{\textsf{partial}}(\Pred_a(W), \uPred(\overline{W})).
\]
\begin{rem}
Note that the operator $\pi_{\textsf{partial}}$ is definable in any
expansion of an ordered group. Hence, if $W$ is definable, so is
$\pi_{\textsf{partial}}(W)$.
\end{rem}
\begin{exa}
We first illustrate the computation of the operator
$\pi_{\textsf{partial}}$ on Example~\ref{ex:spiraletot} (see
page~\pageref{ex:spiraletot}). In this case,
$\pi_{\textsf{partial}}$ does not induce a winning strategy from
$(q_1,(0,0))$ under partial observation. Setting $W = \text{\sffamily\upshape{Goal}} \times
V_2 = \{q_2\} \times V_2$, we have that $\pi_{\textsf{partial}}(W)$
does not contain the point $(q_1,(0,0))$ because there is no uniform
choice for a positive delay $\tau$ before taking action $c$ so that
the spiral (area $B$) can be avoided. Notice however that
$\pi_{\textsf{partial}}(W)$ is not empty: besides $W$, it includes all
points $(q_1,z)$ with $z \in C$, {\it i.e.}, $z$ neither on the spiral
nor equal to $(0,0)$; from such a point the guard $g_C$ is satisfied,
so the transition to $q_2$ can be taken immediately (points of $B$ are
excluded, since there the environment can immediately fire $u$).
\end{exa}
\begin{rem}
\label{rk:partial}
Note also that due to the partial observation assumption, in the
definition of $\pi_{\textsf{partial}}$, the action $a$ for
controlling the system has to be chosen before choosing the delay
$\tau$. Indeed, the controller does not know which precise
trajectory will be chosen by the environment; in particular, action
$a$ should be available after time $\tau$ independently of the
choice of trajectory made by the environment. This is illustrated in
the next example.
\end{rem}
\begin{exa}\label{ex:ab}
Let us consider the $\M$-game $\A$ depicted on
Figure~\ref{fig:mgamebis} where $\text{\sffamily\upshape{Goal}}=\{q_2,q_3\}$ and where $c_1,
c_2 \in \Sigma_c$ are distinct controllable actions. The dynamics in
$q_1$ is depicted on Figure~\ref{fig:dynqun}: roughly speaking, it
consists of two trajectories exiting the point $y$. There is a winning
strategy under perfect observation from $y$; indeed, depending on the
trajectory we are following, we will either play $(\tau,c_1)$ or
$(\tau,c_2)$, for some well-chosen $\tau \in \IR^+$. However, there is
no winning strategy under partial observation from $y$: although we
can find $\tau \in \IR^+$ such that a controllable action will be
(safely) available (from $y$) after $\tau$ time units, we are unable
to tell which controllable action should be taken.
In fact if $W=\text{\sffamily\upshape{Goal}} \times V_2$ we have that
$\pi_{\textsf{partial}}(W)= \{(q_1,z) \mid z \in V_2\backslash \{y\}\}$.
Indeed if $(q_1,z) \neq (q_1,y)$, the controller can deduce the trajectory
from the current state and choose its action accordingly.
\begin{figure}[ht]
\null\hfill\subfigure[The $\M$-game $\A$\label{fig:mgamebis}]{
\begin{tikzpicture}
\draw (0,0) node [circle,inner sep=1.5pt,draw] (q1) {$q_1$};
\draw [latex'-] (q1) -- +(-.8,0);
\draw (2,.8) node [circle,inner sep=1.5pt,draw,fill=black!40!white] (q2) {$q_2$};
\draw (2,-.8) node [circle,inner sep=1.5pt,draw,fill=black!40!white] (q3) {$q_3$};
\draw [-latex'] (q1) -- (q2) node [midway,sloped,above] {$g_B,c_1$};
\draw [-latex'] (q1) -- (q3) node [midway,sloped,below] {$g_C,c_2$};
\end{tikzpicture}
}\hfill\hfill
\subfigure[Dynamics in $q_1$\label{fig:dynqun}]{
\begin{tikzpicture}
\draw (0,0) [fill=black] circle (1.5pt) node [below left] {$y$} node [above left] {$A$};
\draw (3,-.3) -- (.3,-.3) -- (0,0) -- (.3,.3) -- (3,.3);
\path (1.5,.3) -- (3,.3) node [midway,above] {$B$};
\draw (1.5,.25) -- (1.5,.35);
\path (1.5,-.3) -- (3,-.3) node [midway,below] {$C$};
\draw (1.5,-.25) -- (1.5,-.35);
\path (0,-.8) -- (1,-.8);
\end{tikzpicture}}\hfill\null
\caption{The $\M$-game $\A$ of Example~\ref{ex:ab} and the dynamics in $q_1$}
\end{figure}
\end{exa}
The next proposition states the soundness of this operator for
computing winning states in games under the partial observation
hypothesis.
\begin{prop}\label{proppi2}
Let $\A=(\M,Q,\text{\sffamily\upshape{Goal}},\Sigma,\delta,\gamma)$ be an $\M$-game. If there
exists $n\in \IN$ s.t. $\pi_{\textsf{\upshape
partial}}^n(\text{\sffamily\upshape{Goal}})=\pi_{\textsf{\upshape partial}}^{n+1}(\text{\sffamily\upshape{Goal}})$
then $\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})=\pi_{\textsf{\upshape
partial}}^n(\text{\sffamily\upshape{Goal}})$ is the set of winning states of~$\A$ under
partial observation.
\end{prop}
\proof
We first prove that if $(q,y) \in \pi_{\textsf{\upshape
partial}}^*(\text{\sffamily\upshape{Goal}})$ then there exists a winning strategy under
partial observation from $(q,y)$. To this aim, we define a
memoryless winning strategy from any $(q,y) \in
\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$. By a slight abuse of notation, we
define the strategy $\lambda$ on states $(q,y)$ instead of
executions.
We define a strategy $\lambda$ on all sets $\bigcup_{0 \le i \le k}
\pi_{\textsf{\upshape partial}}^i(\text{\sffamily\upshape{Goal}})$ by induction on $k$, and
prove that it is a winning strategy. If $k=0$, we let $\lambda$
be defined nowhere; it is thus winning from all states in
$\text{\sffamily\upshape{Goal}}$.
Suppose now that $\lambda$ is already defined on $W=\bigcup_{0 \le i
\le k} \pi_{\textsf{\upshape partial}}^i(\text{\sffamily\upshape{Goal}})$ and is winning on
these states. We now define $\lambda$ on $\pi_{\textsf{\upshape
partial}}(W)$. Let $(q,y) \in Q \times V_2$: if $(q,y) \in W$,
$\lambda$ is already defined; if $(q,y) \in \pi_{\textsf{\upshape
partial}}(W) \setminus W$, then we know that there exists $a\in
\Sigma_c$ with $(q,y) \in
\timePred_{\textsf{partial}}\left(\Pred_a(W), \uPred(\overline{W})
\right)$. There exists $\tau \in M^+$ with $(\tau,a)$
enabled\footnote{We say that $(\tau,a) \in M^+ \times \Sigma$ is
enabled in $(q,y)$ if there exists $(x,t) \in V_1 \times V$ such
that $\gamma(x,t)=y$ and $(\tau,a)$ is enabled in $(q,x,t,y)$.} in
$(q,y)$ such that for every $(x,t)$ if $\gamma_q(x,t)=y$, then
$(q,y) \xrightarrow{\tau,a}_{x,t} (q',y')$, $(q',y') \in W$ and
$\Post_{[t,t+\tau]}^{q,x} \subseteq
\overline{\uPred{(\overline{W}})}$. We set $\lambda(q,y)=(\tau,a)$
and show that this is a winning choice.
We show by induction on $k$ that $\lambda$ is winning for each
state of $W=\bigcup_{0 \le i \le k} \pi_{\textsf{\upshape
partial}}^i(\text{\sffamily\upshape{Goal}})$. This is immediate for $k=0$. Suppose now
that the result is true for $k$ and let $(q,y) \in
\pi_{\textsf{\upshape partial}}(W)$. Let $\rho = (q,x,t,y)
\xrightarrow{\tau_1,a_1} (q_1,x_1,t_1,y_1) \xrightarrow{\tau_2,a_2}
\ldots$ be an execution compatible with $\lambda$. We have that
either $\tau_1 = \tau$ and $a_1=a$, in which case $(q_1,y_1)\in W$,
or $\tau_1 \le \tau$ and $a_1 \in \Sigma_u$, in which case $(q,y)
\xrightarrow{\tau_1}_{x,t} (q',y') \xrightarrow{a_1} (q_1, y_1)$
with $(q',y') \notin \uPred{(\overline{W})}$ so $(q_1,y_1) \in
W$. In both cases, $(q_1,y_1) \in W$ so by induction hypothesis,
$\rho$ is winning.
\medskip We now show that if there exists a strategy under partial
observation $\lambda$ winning from $(q,y)$ then $(q,y) \in
\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$. Set
$W=\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$ and suppose, by
contradiction, that $(q,y) \notin W$; we will construct a non-winning
execution compatible with $\lambda$. By hypothesis
$\pi_{\textsf{\upshape partial}}(W)=W$, so $(q,y) \notin
\pi_{\textsf{\upshape partial}}(W)$; it follows that for all $a \in
\Sigma_c$ and all $\tau \in M^+$ there exists $(x,t) \in V_1 \times
V$ such that $\gamma_q(x,t)=y$, and $(q,y) \xrightarrow{\tau}_{x,t} (q',y')$
implies $(q',y')\notin \Pred_a(W)$ or $\Post_{[t,t+\tau]}^{q,x} \cap
\uPred(\overline{W}) \ne \emptyset$. Let $(\tau,a) = \lambda(q,y)$
(as $\lambda$ is a strategy under partial observation, it does not
depend on $x$ and $t$) and let $(x,t) \in V_1 \times V$ be as in
the previous statement.
There exists $(q_1,x_1,t_1,y_1)$ with $(q_1,y_1) \notin W$ such that
either $(q,x,t,y) \xrightarrow{\!\tau,a\!} (q_1,x_1,t_1,y_1)$ or there
exists $\tau' \le \tau$ and $u \in \Sigma_u$ with $(q,x,t,y)
\xrightarrow{\tau',u} (q_1,x_1,t_1,y_1)$. In both cases, the
constructed execution is compatible with $\lambda$. As $(q_1,y_1)
\notin W$ we can repeat the same argument and construct inductively
an execution $\rho =(q,x,t,y) \xrightarrow{\tau_1,a_1}
(q_1,x_1,t_1,y_1) \xrightarrow{\tau_2,a_2} \ldots$ compatible with
$\lambda$ and such that for every $i$, $(q_i,x_i,t_i,y_i) \notin W$.
By definition of $W$, for every $i$, $q_i \notin \text{\sffamily\upshape{Goal}}$, which
contradicts the assumption that $\lambda$ is a winning strategy.
\qed
Proposition~\ref{proppi2} characterizes the set of winning states as
$\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$ when the iteration
stabilizes, but this does not imply that we can compute this set, as
some $\M$-structures have an undecidable theory. The following
corollary states that if some conditions on the structure and on
$\pi_{\textsf{\upshape partial}}$ are satisfied, then this procedure
provides an algorithmic solution to the control problem.
\begin{cor} \label{cor:etatsgagnants} Let $\M$ be a structure
such that $\text{{\sffamily\upshape{Th}}}(\M)$ is decidable.\footnote{We recall that a theory
$\text{{\sffamily\upshape{Th}}}(\M)$ is decidable iff there is an algorithm which can
determine whether or not any sentence ({\it i.e.}, a formula with
no free variable.) is a member of the theory ({\it i.e.}, is
true). We suggest to readers interested in general decidability
issues on o-minimal hybrid systems to refer to Section~5
of~\cite{BM05}.} Let $\C$ be a class of $\M$-games such that for
every $\A$ in $\C$, there exists a finite partition $\P$ of $Q
\times \mkdeux$ definable in $\M$, respecting
$\text{\sffamily\upshape{Goal}}$\footnote{\emph{I.e.}, $\text{\sffamily\upshape{Goal}}$ is a union of pieces of
$\P$.}, and stable under $\pi_{\textsf{\upshape
partial}}$.\footnote{Meaning that if $P$ is a piece of $\P$ then
$\pi_{\textsf{\upshape partial}}(P)$ is a union of pieces of
$\P$.} Then the control problem under partial observation in the
class $\C$ is decidable. Moreover if $\A \in \C$, the set of
winning states under partial observation of $\A$ is computable.
\end{cor}
\proof
Let $\M$ be a structure and $\C$ a class of automata satisfying the
hypotheses and take $\A \in \C$. As $\P$ is stable under
$\pi_{\textsf{\upshape partial}}$, $\pi_{\textsf{\upshape
partial}}^*(\text{\sffamily\upshape{Goal}})$ is a finite union of pieces of $\P$. Hence
there exists $n \in \IN$ such that $\pi_{\textsf{\upshape
partial}}^*(\text{\sffamily\upshape{Goal}}) = \pi_{\textsf{\upshape
partial}}^n(\text{\sffamily\upshape{Goal}})$. Thus Proposition~\ref{proppi2} shows that
the set of winning states is $\pi_{\textsf{\upshape
partial}}^*(\text{\sffamily\upshape{Goal}})$.
As $\pi_{\textsf{\upshape partial}}$ and $\text{\sffamily\upshape{Goal}}$ are definable, we
have that $\pi_{\textsf{\upshape partial}}^i(\text{\sffamily\upshape{Goal}})$ is definable,
and as $\text{{\sffamily\upshape{Th}}}(\M)$ is decidable we can test whether $\pi_{\textsf{\upshape
partial}}^i(\text{\sffamily\upshape{Goal}})=\pi_{\textsf{\upshape
partial}}^{i+1}(\text{\sffamily\upshape{Goal}})$; we can thus effectively find a
representation of $\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$.
As $\text{{\sffamily\upshape{Th}}}(\M)$ is decidable, if a state $(q,y)$ is definable we can
test whether $(q,y) \in \pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$. It
follows that the control problem under partial observation in the
class $\C$ is decidable.
\qed
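The computational content of this proof is a plain fixpoint iteration.
Under the hypotheses of the corollary (a finite partition stable under
$\pi_{\textsf{\upshape partial}}$, with definable pieces and a decidable
theory to test stabilization), it can be sketched as follows, where
\texttt{pi\_partial} stands for an assumed implementation of the
operator, mapping unions of pieces to unions of pieces:
\begin{verbatim}
# Sketch: compute pi_partial^*(Goal) by iterating until stabilization.
# `goal_pieces` is the set of pieces covering Goal; `pi_partial` maps
# a frozenset of pieces to a frozenset of pieces (stability).

def winning_states(goal_pieces, pi_partial):
    W = frozenset(goal_pieces)
    while True:
        W_next = frozenset(pi_partial(W))
        if W_next == W:        # fixpoint: the set of winning states
            return W
        W = W_next
\end{verbatim}
Since the partition is finite and $\pi_{\textsf{\upshape partial}}(W)
\supseteq W$, the loop terminates after at most as many iterations as
there are pieces.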
\subsection{Superwords and the \texorpdfstring{$\pi_{\textsf{\upshape partial}}$}{pi_Partial} Operator}\label{subsec-partial2}
We now present a sufficient condition for a partition to be stable
under the operator $\pi_{\textsf{\upshape partial}}$: we require that
the partition is stable under $\Pred_a$ (for all $a \in \Sigma$) to
handle the discrete part of the automaton and we show that the
stability under superwords is fine enough to be correct for solving
control problems under partial observation.
\begin{prop} \label{prop:supwordstable} Let $\A$ be an
$\M$-game and $\P$ be a partition of $Q \times V_2$. If $\P$
respects $\text{\sffamily\upshape{Goal}}$, is stable under $\Pred_a$ (for all $a \in \Sigma$)
and superword-stable, then $\P$ is stable under the operator
$\pi_{\textsf{\upshape partial}}$.
\end{prop}
\proof
We fix a location $q$ of the automaton and we take $y_1,y_2 \in V_2$
such that there exists $A \in \P$ with $y_1,y_2 \in A$. We now show
that if $y_1 \in \pi_{\textsf{\upshape partial}}(X)$, for some $X
\in \P$ then $y_2 \in \pi_{\textsf{\upshape partial}}(X)$. In case
$y_1 \in X$, then $X = A$ and thus $y_2 \in X$.
We assume $y_1 \in \pi_{\textsf{partial}}(X) \setminus X$.
There exists $a \in \Sigma_c$ and $\tau_1 \in M^+$ such that for all
$(x,t) \in V_1 \times V$ with $\gamma_q(x,t)=y_1$ and for all $y'_1$
such that $y_1 \xrightarrow{\tau_1}_{x,t} y'_1$, we have that $y'_1
\in \Pred_a(X)$, and $\Post_{[t,t+\tau_1]}^{q,x} \subseteq
\overline{\uPred{(\overline{X}})}$. Let us now express the previous
condition in terms of superwords. Assume that
$$\text{{\sffamily\upshape{Sup}}}_{\P}(y_1) = S_1S_2\cdots S_k, \quad
\text{ where } S_i \in 2^{\P}.$$ The previous condition means that
$\text{{\sffamily\upshape{Sup}}}_{\P}(y_1)$ contains a prefix $S_1 \cdots S_l$ such that:
\begin{itemize}
\item for all $P_i \in S_l$, we have that $P_i \subseteq \Pred_a(X)$
(this condition makes sense since $\mathcal P$ is stable under
$\Pred_a$; indeed, \textit{a priori} we only have that there
exists $y'_1 \in P_i$ such that $y'_1 \in \Pred_a(X)$, the
stability of $\mathcal P$ under $\Pred_a$ implies that $P_i \subseteq
\Pred_a(X)$),
\item for all $j \le l$, for all $P_i \in S_j$, we have that
$\uPred(\overline{X}) \cap P_i = \emptyset$ (again this condition
makes sense since $\mathcal P$ is stable under $\Pred_a$).
\end{itemize}
Since $\P= \text{{\sffamily\upshape{Sup}}}\left(\P\right)$ and both $y_1$ and $y_2$ belong to
the same piece of $\P$, we have that $\text{{\sffamily\upshape{Sup}}}_\P(y_1)=\text{{\sffamily\upshape{Sup}}}_\P(y_2)=
S_1S_2\cdots S_k$. In particular, we can find $\tau_2 \in M^+$ such
that if $y_2 \xrightarrow{\tau_2} y'_2$, we have that $y'_2$
corresponds to the letter $S_l$. Thus we have that $y'_2 \in
\Pred_a(X)$ and $\Post_{[t,t+\tau_2]}^{q,x} \subseteq
\overline{\uPred{(\overline{X}})}$, \textit{i.e.} $y_2 \in
\pi_{\textsf{\upshape partial}}(X)$. \qed
As an immediate corollary of this proposition and of
Corollary~\ref{cor:etatsgagnants}, we get the following general
decidability result.
\begin{cor} \label{corta} Let $\M$ be a structure such that
$\text{{\sffamily\upshape{Th}}}(\M)$ is decidable. Let $\C$ be a class of $\M$-games such that
for every $\A$ in $\C$, there exists a finite partition $\P$ of $Q
\times \mkdeux$ definable in $\M$, respecting $\text{\sffamily\upshape{Goal}}$,
superword-stable, and
stable under $\Pred_a$ for every action $a \in \Sigma$. Then the
control problem under partial observation
(Problem~\ref{prob:contpartial}) in the class $\C$ is decidable, and
if $\A \in \C$, the set of winning states under partial observation
of $\A$ is computable.
\end{cor}
\subsection{A Note on the Perfect Observation
Framework}\label{subsec-aboutperfect} We briefly discuss the perfect
observation framework. We have already seen that it is a special case
of the partial observation framework (see
Proposition~\ref{prop:casparticulier}). Hence, we can reuse the
previous results and get decidability and computability results.
However, we can also define an appropriate controllable predecessor
operator $\pi_{\textsf{perfect}}$ that will be correct in the perfect
observation framework. The new operator $\pi_{\textsf{perfect}}$ is
just a twist of the previous operator, which we define as:
\[
\pi_{\textsf{perfect}}(W) = W \cup
\timePred_{\textsf{perfect}}\left(\cPred(W),\uPred(\overline{W})\right)
\]
where $\timePred_{\textsf{perfect}}$ existentially quantifies on pairs
$(x,t)$ such that $y = \gamma_q(x,t)$ (instead of universally
quantifying on those pairs, as in $\timePred_{\textsf{partial}}$).
\begin{rem}
In the perfect observation framework, the controller is aware of the
precise trajectory that will be followed, hence his choice of action
can be made after his choice of delay, contrary to the partial
observation case (remember Remark~\ref{rk:partial}). That is why
the union over actions is put within the scope of the safe time
predecessor in $\pi_{\textsf{perfect}}$.
\end{rem}
Applying the same reasoning as in the previous sections, we can prove
that $\pi_{\textsf{perfect}}^*(\text{\sffamily\upshape{Goal}})$ corresponds to the set of
winning states of $\A$, and that a partition, which is both stable
under $\Pred_a$ (for every $a \in \Sigma$) and suffix-stable, is
actually correct for solving control problems in the perfect
observation framework. We can thus state the following theorem.
\begin{thm} \label{thmta} Let $\M$ be a structure such that
$\text{{\sffamily\upshape{Th}}}(\M)$ is decidable. Let $\C$ be a class of $\M$-games such that
for every $\A$ in $\C$, there exists a finite partition $\P$ of $Q
\times \mkdeux$ definable in $\M$, respecting $\text{\sffamily\upshape{Goal}}$,
suffix-stable, and stable under $\Pred_a$ for every action $a \in
\Sigma$. Then the control problem under perfect observation
(Problem~\ref{prob:controlperfect}) in the class $\C$ is decidable,
and if $\A \in \C$, the set of winning states under perfect
observation of $\A$ is computable.
\end{thm}
Note that being suffix-stable is a stronger condition than being a
time-abstract bisimulation~\cite{brihaye05}, and we see here that this
is one of the right tools to solve control problems. For instance in
Example~\ref{bisimnotgood} the partition $\mathcal P$ is a time-abstract
bisimulation but is not suffix-stable. Indeed $s_1,s_2 \in A$ but
$\text{Suf}_{\mathcal P}(s_1) \ne \text{Suf}_{\mathcal P}(s_2)$.
\begin{rem}\label{remark:TA}
Using the results of this section, we recover the results of
\cite{AMPS98} about control of timed automata. Note that for the
timed automata dynamics (remember Example~\ref{ex:AT}) partial and
perfect observation make no difference (the dynamics is
deterministic). Indeed we consider the classical finite partition
of timed automata that induces the region graph
(see~\cite{AD94}). Let us call ${\mathcal P}_R$ this partition, and
notice that ${\mathcal P}_R$ is definable in $\langle
\IR,<,+,0,1\rangle$. ${\mathcal P}_R$ is stable under the action of
$\Pred_a$ for every action $a \in \Sigma$. By Example~\ref{ex:AT}
the continuous dynamics of timed automata is definable in $\langle
\IR,<,+,0,1\rangle$. Hence it makes sense to encode continuous
trajectories of timed automata as words. Then one can easily verify
that $\text{{\sffamily\upshape{Suf}}}(\P_R)=\P_R$. By Theorem~\ref{thmta} we get the
decidability and computability of winning states under perfect
information in timed games \cite{AMPS98} as a side result.
\end{rem}
\begin{cor}
The control problem under perfect information in the class of timed
automata is decidable. Moreover the set of winning states under
perfect observation is computable.
\end{cor}
\section{O-Minimal Games}\label{omingames}
In this section, we focus on the particular case of o-minimal games
({\it i.e.}, $\M$-games where $\M$ is an o-minimal structure and in
which extra assumptions are made on the resets). We first briefly
recall definitions and results related to o-minimality~\cite{PS86}.
We show that the existence of finite partitions which are stable
w.r.t. the controllable predecessor operator can be guaranteed for
o-minimal games. More precisely, we first show that, in this
framework, a partition stable under the controllable predecessor
operator can easily be obtained \textit{via} the superword abstraction
(this is due to the assumptions on the resets). Then, we use
properties of o-minimality to prove the finiteness of the previously
obtained partition. Finally we focus on o-minimal structures with a
decidable theory in order to obtain full decidability and
computability results. As in the previous section, we mostly focus on
the partial observation framework, but also mention results in the
perfect observation framework.
\subsection{O-Minimality}
We recall here the definition of o-minimality and the ``\emph{Uniform
Finiteness Theorem}'' that will be applied later in this
section. The reader interested in o-minimality should refer
to~\cite{vandendries98} for further results and an extensive
bibliography on this subject.
\begin{defi} \label{o-minimal} An ordered
structure ${\mathcal M} = \langle M, \mathord< , \ldots \rangle$ is
\emph{o-minimal} if every definable subset of $M$ is a finite union
of points and open intervals (possibly unbounded).
\end{defi}
In other words the definable subsets of $M$ are the simplest possible:
the ones which are definable in $\langle M, < \rangle$. This
assumption implies that definable subsets of $M^n$ (in the sense of
${\mathcal M}$) admit very nice structure theorems (such as the \emph{cell
decomposition}~\cite{KPS86} or Theorem~\ref{uf} below). The
following are examples of o-minimal structures: the ordered group of
rationals $\langle\IQ,<,+,0,1\rangle$, the ordered field of reals
$\langle\IR,<,+,\cdot,0,1\rangle$, the field of reals with the exponential
function, the field of reals expanded by restricted Pfaffian functions
and the exponential function, and many more interesting structures
(see~\cite{vandendries98,Wi96}). An example of a non-o-minimal structure
is given by $\langle\IR,<,\sin,0\rangle$, since the definable set $\{x
\mid \sin(x)=0\}$ is not a finite union of points and open intervals.
However, let us mention that the
structure\footnote{$\sin_{|_{[0,2\pi]}}$ and $\cos_{|_{[0,2\pi]}}$
denote the sine and cosine functions restricted to the
interval $[0,2\pi]$.} $\langle \IR,+,\cdot,0,1,<,\sin_{|_{[0,2\pi]}},
\cos_{|_{[0,2\pi]}} \rangle$ is o-minimal (see~\cite{vandendries96}).
\begin{thm}[Uniform Finiteness~\cite{KPS86}]\label{uf}%
Let ${\mathcal M} = \langle M, < , \ldots \rangle$ be an o-minimal
structure. Let $S \subseteq M^m \times M^n$ be definable (in $\mathcal
M$), we denote by $S_a$ the fiber $\{y \in M^n \mid (a,y) \in S
\}$. Then there is a number $N_S \in \IN$ such that for each $a \in
M^m$ the set $S_a \subseteq M^n$ has at most $N_S$ definably
connected components.
\end{thm}
\subsection{Generalities on O-Minimal Games}
\begin{defi}
Given $\A$ an $\M$-game, we say that $\A$ is an \emph{o-minimal
game} if the structure $\M$ is o-minimal and if all transitions
$(q,g,a,R,q')$ of ${\mathcal A}$ belong to\footnote{This is a particular
case of reset for $\M$-games, where only constant reset
functions are considered.} $Q \times 2^{V_2} \times \Sigma \times
2^{V_2} \times Q$.
\end{defi}
\label{PA}
Let us notice that the previous definition implies that given $\A$ an
o-minimal game, the guards, the resets and the dynamics are definable
in the underlying o-minimal structure. We denote by $\P_\A$ the
coarsest partition of the state space $S =Q \times V_2$ which respects
$\text{\sffamily\upshape{Goal}}$, and all guards and resets in $\A$. Note that $\P_\A$ is a
finite definable partition of $S$.
Due to the strong reset condition we have that $\P_\A$ is stable under
the action of $\Pred_a$ for every action $a$. This holds by the same
argument that allows one to decouple the continuous and discrete
components of a hybrid system in~\cite{LPS00}. Let us also notice
that, in the framework of o-minimal games, any refinement of $\P_\A$
is stable under the action of $\Pred_a$ for every $a \in \Sigma$.
\begin{exa}
The continuous dynamics of timed automata (see Example~\ref{ex-ta})
is definable in the o-minimal structure $\langle
\IR,+,0,1,<\rangle$. The continuous dynamics of rectangular
automata (see Example~\ref{ex-rect}) is definable in the o-minimal
structure $\langle \IR,+,\cdot,0,1,<\rangle$. Hence games on timed
(resp. rectangular) automata with strong resets are particular cases
of o-minimal games. The $\mathcal M$-game of Example~\ref{ex:spiraletot}
is in fact an o-minimal game; indeed one can see that it can be
defined in the structure $\langle
\IR,+,\cdot,0,1,<,\sin_{|_{[0,2\pi]}}, \cos_{|_{[0,2\pi]}} \rangle$
which is o-minimal (see~\cite{vandendries96}).
\end{exa}
\subsection{Solving O-Minimal Games}\label{subsec-solvingomin}
In this subsection, we will see how we can (easily) build a partition
which is stable under the action of the controllable predecessor
operator. The key ingredients to build this partition will be $(i)$
the strong reset condition and $(ii)$ the superword abstraction. The
finiteness of the obtained partition will be discussed in
Subsection~\ref{subsec-defina}.
\begin{prop}\label{prop-subwordpart}
Let $\A$ be an o-minimal game, and $\P_\A$ the partition
corresponding to its guards and resets. The superword (resp. suffix)
partition $\text{{\sffamily\upshape{Sup}}}(\P_\A)$ (resp. $\text{{\sffamily\upshape{Suf}}}(\P_\A)$) is stable under the
action of $\pi_{\textsf{\upshape partial}}$
(resp. $\pi_{\textsf{\upshape perfect}}$).
\end{prop}
\proof
This proposition is not a corollary of
Proposition~\ref{prop:supwordstable}, as $\text{{\sffamily\upshape{Sup}}}(\P_\A)$ \emph{is
not} superword-stable. However, the proof of
Proposition~\ref{prop:supwordstable} only relied on the fact that in
a superword-stable partition, two points in a piece of the partition
have the same superword abstraction,
which is precisely what we have in the current case. Hence the
previous proof can be mimicked, and we do not spell out all the details. It
is worth noting also that we do not use all properties of o-minimal
games, but only the strong reset property, which ensures that the
partition is stable under $\Pred_a$ for every action $a \in
\Sigma$. \qed
\subsection{Definability and Finiteness Issues}\label{subsec-defina}
In the previous subsection, we have proved that, given $\A$ an
o-minimal game, the partition $\text{{\sffamily\upshape{Sup}}}(\P_\A)$ (resp. $\text{{\sffamily\upshape{Suf}}}(\P_\A)$) is
stable under the action of the controllable predecessor operator under
the partial (resp. perfect) observation framework. We will now show
that this partition is finite. For this we will exploit the finiteness
property of o-minimality; in order to do so, we first need to prove
that our encodings are definable.
\subsubsection{Definability.}
Let $(\M,\gamma)$ be a dynamical system and $\P$ be a finite partition
of $V_2$. We would now like to show that, in the case of an
\emph{o-minimal dynamical system}, the superword encoding previously
discussed can be done in a \emph{definable} way. The approach closely
follows the one used in~\cite[Section~12.2]{brihaye06} for the suffix
abstraction (called the \emph{suffix dynamical type} there). \medskip
Let $(\M,\gamma)$ be an o-minimal dynamical system and $\P$ be a
finite definable partition of $V_2$. First let us notice that, since
$\P$ is finite and definable, given $S \in 2^{\P}$ one can easily
write a first-order formula $\varphi_S(y,\tau)$ which is true if and
only if ${\mathcal F}_y(\tau) = S$ (where ${\mathcal F}_y$ is defined
similarly to ${\mathcal F}_x$~--~see page \pageref{def:wordgx}). Let us
give this formula, assuming that $S = \{A_1,\ldots,A_n\}$:
\begin{align*}
\varphi_S(y,\tau) \ \equiv \ & \exists x_1 \ \exists t_1 \ \cdots \
\exists x_n \ \exists t_n \ \bigwedge_{i=1,\ldots,n} \big(
\gamma(x_i,t_i)=y ~\wedge~
\gamma(x_i,t_i+\tau) \in A_i\big)\\
& \quad \wedge \ \forall x \ \forall t \
\Big(\gamma(x,t)=y \Rightarrow \gamma(x,t+\tau) \in A_1
\cup \cdots \cup A_n\Big).
\end{align*}
Thus, for each $y \in V_2$, the set ${\mathcal F}_y$ exactly consists of
the connected components of the sets $\{ \tau \in M^+ \mid
\varphi_S(y,\tau) \}$, for $S \in 2^{\P}$; \textit{i.e.} ${\mathcal F}_y$
is a set of intervals. In order to show that ${\mathcal F}_y$ is
first-order definable we need to encode each interval $I \subseteq M$
as a point in some cartesian power of $M$. An interval $I \subseteq M$
is entirely characterized by \emph{(i)} its end-points and \emph{(ii)}
the fact of being left (resp. right) open or closed. For \emph{(i)} we
formally need a pair to represent a single end-point in order to
recover $-\infty$ and $+\infty$ (as in the projective line case). For
\emph{(ii)} we can use a binary encoding, say $0$ means open
and $1$ closed. Thus any interval $I \subseteq M$ will be encoded by
an element $(a_1,a_2,a_3,b_1,b_2,b_3) \in M^6$. For instance, the
interval $I = \{ x \in \IR \mid x \ge 5\} $ is encoded by
$(5,1,1,1,0,0)$. Thanks to this ``trick'', one can find a first-order
formula $\varphi_y$ defining ${\mathcal F}_y$. The writing of the formula
$\varphi_y$ is not difficult but rather tedious: different cases have
to be considered (depending on whether the interval $I$, encoded by an
element of $M^6$, is left (resp. right) bounded and left (resp. right)
open or closed). Further details of the construction of the formula
can be found in~\cite[Section~12.2]{brihaye06}.
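For illustration, the following Python sketch implements one encoding
consistent with the example above; representing a finite end-point $m$
by the couple $(m,1)$ and an infinite end-point by $(1,0)$ is our own
reading of the projective-line trick, not a convention fixed by the
text.
\begin{verbatim}
# A sketch of the interval encoding, assuming a finite end-point m is
# represented by the couple (m, 1) and an infinite end-point by (1, 0);
# the third component of each half is 1 for closed and 0 for open.
def encode(left, left_closed, right, right_closed):
    # left/right are numbers, or None standing for -/+ infinity
    a = (left, 1) if left is not None else (1, 0)
    b = (right, 1) if right is not None else (1, 0)
    return (a[0], a[1], 1 if left_closed else 0,
            b[0], b[1], 1 if right_closed else 0)

# The paper's example: I = {x | x >= 5} is encoded by (5,1,1,1,0,0).
assert encode(5, True, None, False) == (5, 1, 1, 1, 0, 0)
\end{verbatim}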
\subsubsection{Finiteness.}
We will now prove that when considering o-minimal dynamical systems,
only finitely many finite superwords are needed to encode all possible
trajectories.
\begin{prop}
Let $({\mathcal M},\gamma)$ be an o-minimal dynamical system and $\P$ be
a finite definable partition of $V_2$. There exist finitely many
finite superwords associated with $(\M,\gamma)$ w.r.t. $\P$.
\end{prop}
\proof
Given $S \in 2^{\P}$ let us first consider the set
$${\mathcal F}_y(S) = \big\{ \tau \in M^+ \mid {\mathcal F}_y(\tau)
= S\big\} = \big\{ \tau \in M^+ \mid \varphi_S(y,\tau) \big\}.$$ By
the above discussion, the set ${\mathcal F}_y(S)$ is a definable subset
of $M$. Hence by o-minimality it is a finite union of points and
open intervals; in particular, it has only finitely many connected
components. By definition of ${\mathcal F}_y$ we have the following
equality.
\[
|{\mathcal F}_y| = \sum_{S \in 2^{\P}} \Big(\text{number of connected
components of } {\mathcal F}_y(S)\Big).
\]
Since $\P$ is finite we can conclude that ${\mathcal F}_y$ is finite.
\medskip Using the uniform finiteness theorem (Theorem~\ref{uf}) we
obtain that there exists $N \in \IN $ such that for all $y \in V_2$
we have that $\bigl|{\mathcal F}_y \bigr| \le N$.
In terms of word encoding, this means that there are only finitely
many superwords associated with the points of the (output) space
$V_2$. More precisely, the superwords $\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ have lengths
uniformly bounded by $N$. Since the superwords $\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ are
words on the \emph{finite} alphabet $2^{\P}$, this completes the
proof. \qed
The previous proposition directly implies the finiteness of the
partition $\text{{\sffamily\upshape{Sup}}}(\P)$. Moreover we have that this partition is
definable, as stated in the following proposition.
\begin{prop}\label{tpfinidefinable}
Let $({\M},\gamma)$ be an o-minimal dynamical system and $\P$ be a
finite definable partition of the output space $V_2$. The partition
$\text{{\sffamily\upshape{Sup}}}(\P)$ is finite and definable.
\end{prop}
\proof
Since there are only finitely many superwords, it suffices to show
that given $y \in V_2$ and $SW$ a superword on $\P$ (i.e. a word on
$2^{\P}$), we can define (by a first-order formula) that
$SW=\text{{\sffamily\upshape{Sup}}}_{\P}(y)$. Suppose that $SW= S_{1} \cdots {S}_{k} \cdots
S_{n}$, where $S_{k} \in 2^{\P}$. We have that $SW=\text{{\sffamily\upshape{Sup}}}_{\P}(y)$ if
and only if the following formula holds.
\begin{align*}
\exists \tau_1 \in M^+, \ \exists \tau_2 \in M^+, \ \cdots \
\exists \tau_n \in M^+, \ \exists I_1 \in {\mathcal F}_{y}, \ \exists I_2 \in
{\mathcal F}_{y}, \ \cdots \ \exists I_n \in {\mathcal F}_{y}\\
(\tau_1 < \tau_2 < \cdots < \tau_n) \ \wedge \
\bigwedge_{k=1}^n {\mathcal F}_y(\tau_k)=S_k \ \wedge \
{\mathcal F}_y = \{I_1,I_2,\ldots,I_n\}.
\end{align*}
Notice that the above formula is first-order since ${\mathcal F}_y$ is
first-order definable and testing whether ${\mathcal F}_y(\tau_k)=S_k$
is also first-order definable. \qed
\subsection{Synthesis of Winning Strategies}
We now prove that given $\A$ an o-minimal game definable in $\mathcal M$,
we can construct a \emph{definable} strategy (in the same structure
$\mathcal M$) for the winning states under partial observation. The
effectiveness of this construction will be discussed later.
\begin{thm}\label{synthesis}
Given $\A$ an o-minimal game, there exists a definable memoryless
winning strategy under partial (resp. perfect) observation for each
$(q,y) \in \pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$
(resp. $\pi_{\textsf{\upshape perfect}}^*(\text{\sffamily\upshape{Goal}})$).
\end{thm}
\proof
By Propositions~\ref{prop-subwordpart} and~\ref{tpfinidefinable}, the
partition $\text{{\sffamily\upshape{Sup}}}(\P_\A)$
is finite, definable and stable under $\pi_{\textsf{\upshape
partial}}$. In particular, there thus exists $n \in \IN$ such
that $\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}}) =
\pi_{\textsf{\upshape partial}}^n(\text{\sffamily\upshape{Goal}})$. Hence, by
Proposition~\ref{proppi2}, $\pi_{\textsf{\upshape
partial}}^n(\text{\sffamily\upshape{Goal}})$ is the set of winning states.
Given $(q,y) \in \pi_{\textsf{\upshape partial}}^n(\text{\sffamily\upshape{Goal}})$, we know
that there exists a winning strategy from $(q,y)$. We now have to
exhibit a definable winning strategy from $(q,y)$. Following the
proof of Proposition~\ref{proppi2}, we build the definable strategy
by induction on the number of iterations of $\pi_{\textsf{\upshape
partial}}$. Let us suppose we have already built a strategy on
each piece of $W = \displaystyle \bigcup_{0 \le i \le
k}\pi_{\textsf{\upshape partial}}^i(\text{\sffamily\upshape{Goal}})$; let us now consider
$\pi_{\textsf{\upshape partial}}(W) \setminus W$.
By Proposition~\ref{prop-subwordpart}, we know that
$\pi_{\textsf{\upshape partial}}(W) \setminus W$ is a finite union
of pieces of $\text{{\sffamily\upshape{Sup}}}(\P_\A)$. Let $P$ be one of these pieces. We know
that $P$ corresponds to a finite superword on $\P_\A$. Thus given
$(q,y) \in P$ we have that
$$\text{{\sffamily\upshape{Sup}}}_{\P_{\A}}(y) = S_1S_2\cdots S_k, \quad
\text{ where } S_i \in 2^{\P_{\A}}.$$
Since $(q,y) \in \pi_{\textsf{partial}}(W) \setminus W$, the
superword $\text{{\sffamily\upshape{Sup}}}_{\P_{\A}}(y)$ contains a prefix $S_1 \cdots S_l$
such that there is $a \in \Sigma_c$ with:
\begin{itemize}
\item for all $P_i \in S_l$, $P_i \subseteq \Pred_a(W)$,
\item for all $j \le l$, for all $P_i \in S_j$,
$\uPred(\overline{W}) \cap P_i = \emptyset$.
\end{itemize}
Since for all $P_i \in S_l$ we have that $P_i \subseteq
\Pred_a(W)$, the controllable action $a \in \Sigma_c$ is such that,
for any $(q,y)$ in a piece of $S_l$, a transition labelled by $a$ is enabled
and all such transitions lead to $W$. The strategy for $(q,y)$ will
be to perform action $a$ after some delay. We now explain how to
choose this delay.
Let $(q,y) \in P$. Let us consider the subset $\text{{\sffamily\upshape{Time}}}(y)$
of $M^+$ defined as follows:
$$\text{{\sffamily\upshape{Time}}}(y)=\{\tau \in M^+ \mid
\exists y' \in S_l\text{ such that } (q,y) \xrightarrow{\tau}
(q,y')\}.$$ This set is definable since $S_l$ is definable.
By o-minimality, we have that $\text{{\sffamily\upshape{Time}}}(y)$ is a finite union of points
and open intervals. Let us denote by $I$ the leftmost point or
interval. Let us notice that $I$ is definable. If $I$ has a minimum
$m$, we define $\lambda(q,y)=(m,a)$. Otherwise two cases may
occur. If $I$ is bounded, then it is of the form $(m,m')$ or $(m,m']$;
in this case we define\footnote{Let us recall that every o-minimal
ordered group is torsion-free and divisible (see~\cite{PS86}); this
implies there exists a unique $z$ satisfying $z+z = m+m'$, which
we denote $\frac{1}{2}(m+m')$.}
$\lambda(q,y)=(\frac{1}{2}(m+m'),a)$. Finally, if $I$ has no minimum
and is unbounded, it is of the form $(m,\infty)$, and in this case we
define $\lambda(q,y)=(m+1,a)$. We summarize\footnote{Let us notice
that the way we extract a single point from $\text{{\sffamily\upshape{Time}}}(y)$ is nothing
more than the \emph{curve selection} for o-minimal expansions of
ordered abelian groups, see~\cite[chap.6]{vandendries98}.} the
definition of $\lambda$ on $S_l$ as follows:
\[
\lambda(q,y) =
\begin{cases}
\big(\min(I),a\big) & \text{ if } \ \varphi_1(y)\\
\big(\frac{1}{2}\big(\inf(I)+\sup(I)\big),a\big) &
\text{ if } \ \varphi_2(y)\\
\big(\inf(I) +1,a\big) & \text{otherwise}
\end{cases}
\]
where $\varphi_1(y)$ is a formula which is true if and only if $I$
(or $\text{{\sffamily\upshape{Time}}}(y)$) has a minimum and $\varphi_2(y)$ is a formula which
is true if and only if $I$ has no minimum and is bounded. Thus
clearly $\lambda$ is definable.
Since there are finitely many pieces $P \in \text{{\sffamily\upshape{Sup}}}({\P_{\A}})$, we can
conclude that $\lambda$ is definable on the whole set of winning states. \qed
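The case analysis defining $\lambda$ can be mirrored by the following
hedged Python sketch, assuming the leftmost point or interval $I$ of
$\text{{\sffamily\upshape{Time}}}(y)$ has already been computed and is
described by its end-points together with flags indicating whether a
minimum exists and whether $I$ is bounded (all hypothetical inputs).
\begin{verbatim}
# A sketch of the delay selection (curve selection), under the stated
# assumptions on how the leftmost interval I of Time(y) is represented;
# m_prime may be None when I is unbounded.
def choose_delay(m, m_prime, has_min, bounded):
    if has_min:                    # I has a minimum
        return m
    if bounded:                    # I = (m, m') or (m, m']
        return (m + m_prime) / 2   # midpoint; definable since the
                                   # ordered group is divisible
    return m + 1                   # I = (m, +infinity)
\end{verbatim}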
\begin{rem}
Note that the memoryless strategy given by Theorem~\ref{synthesis}
is computable if $\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$ is.
\end{rem}
\begin{rem}
Let us notice that in the case of timed automata dynamics (described
in Example~\ref{ex:AT}), our definable strategies correspond to the
realizable strategies computed in~\cite{BCFL04}.
\end{rem}
\subsection{Decidability Result}\label{subdecid}
Theorem~\ref{synthesis} is an existential result: it states that, given
an o-minimal game, there exists a definable memoryless winning strategy for
each $y \in \pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$, and by
Proposition~\ref{tpfinidefinable} we know that $\text{{\sffamily\upshape{Sup}}}(\P_{\A})$ is finite.
In general, however, Theorem~\ref{synthesis} does not allow us to conclude that
the control problem over an $\M$-structure is decidable: this
depends on the decidability of $\text{{\sffamily\upshape{Th}}}(\M)$. We can state the following
theorem:
\begin{thm}\label{decidability}
Let $\M$ be an o-minimal structure such that $\text{{\sffamily\upshape{Th}}}(\M)$ is decidable
and let $\C$ be a class of o-minimal $\M$-games. Then the control problem under
partial (resp. perfect) observation in class $\C$ is
decidable. Moreover if $\A \in \C$, the set of winning states
$\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$
(resp. $\pi_{\textsf{\upshape perfect}}^*(\text{\sffamily\upshape{Goal}})$) under partial
(resp. perfect) observation is computable and a memoryless winning
strategy can be effectively computed for each $(q,y) \in
\pi_{\textsf{\upshape partial}}^*(\text{\sffamily\upshape{Goal}})$
(resp. $\pi_{\textsf{\upshape perfect}}^*(\text{\sffamily\upshape{Goal}})$).
\end{thm}
\proof
By Proposition~\ref{tpfinidefinable}, for each $\A \in \C$, $\text{{\sffamily\upshape{Sup}}}
(\P_\A)$ is a definable finite partition respecting
$\text{\sffamily\upshape{Goal}}$. Moreover by Proposition~\ref{prop-subwordpart}, $\text{{\sffamily\upshape{Sup}}}
(\P_\A)$ is stable under $\pi_{\textsf{\upshape
partial}}$. The hypotheses of Corollary~\ref{cor:etatsgagnants} are
thus satisfied and we get that the control problem in class $\C$ is
decidable and that the winning states of a game $\A \in \C$ are
computable. Moreover Theorem~\ref{synthesis} ensures that a
memoryless strategy can be effectively defined from such winning
states. \qed
\begin{rem}
$\langle \IR,<,+,0,1\rangle$ and $\langle \IR,<,+,\cdot,0,1\rangle$
are examples of o-minimal structures with a decidable theory, and so
o-minimal games based on these structures can be solved by
Theorem~\ref{decidability}.
\end{rem}
\begin{rem}
In this paper we did not distinguish Zeno behaviours. In particular,
in our framework, if the environment has a strategy that prevents the
game from reaching the $\text{\sffamily\upshape{Goal}}$ locations by blocking time, we say that the
controller loses the game. In the framework of timed automata, an
\textit{ad-hoc} solution to this \emph{problem of Zenoness} has been
proposed in~\cite{AFH+03}. However, due to the strong reset conditions
of o-minimal hybrid systems, the method of~\cite{AFH+03} cannot be
easily applied to our framework, but this problem is somehow
orthogonal to ours.
\end{rem}
\section{Conclusion}
In this paper we have studied games based on dynamical systems with
general dynamics, both under perfect and partial observation of
the dynamics. Under the first hypothesis, we have shown that
time-abstract bisimulation is not fine enough to solve these games,
which is a major difference with the case of timed automata. By means
of an encoding of trajectories by words, we have obtained a good
abstraction for control problems (with reachability winning
conditions, though it also applies to basic safety winning
conditions). We have finally provided decidability and computability
results for o-minimal games under both the perfect and the partial
observation hypotheses. Our technique applies to timed automata, and we recover
decidability of timed games~\cite{AMPS98}, as well as the construction
of winning strategies~\cite{BCFL04} as side results.
\section*{Acknowledgment}
The first two authors have been partly supported by the ESF project
GASICS. The first author has been partly supported by the project
DOTS (ANR-06-SETI-003) and by the EU project QUASIMODO. The second
author has been partly supported by a grant from the National Bank of
Belgium and by a FRFC grant: 2.4530.02.
\section{Local Navigation and Multi-robot Simulation}\label{sec:simulation_model}
In this section, we describe a simulation environment that we developed for testing the performance of the proposed framework under uncertainty. An indoor environment is used to demonstrate the possible use of the system in a museum setting. In addition, to complete the framework, we also discuss the local navigation planner and the strategy used. The simulation pipeline is implemented in ROS \cite{quigley2009ros} and contains a multi-robot tour planner ROS node, multiple local planner nodes (one per robot), and a simulation wrapper node.
\subsection{Local Navigation Planner}
A local navigation planner generates the actions according to the current robot position and the routes generated by the multi-robot tour planner. As listed in Table \ref{tab:action_space}, there are four discrete actions that a robot can take. Since localization and mapping are not the focus of this research, robot positions are assumed to be known in the simulation.
\subsection{Habitat-AI Simulation Environment}
A real-time simulation environment is used to simulate the dynamics and obstacle avoidance behavior of the robots involved in this work.
The simulation is implemented in the Habitat-AI environment, a photo-realistic, physics-based simulator, to ensure that the framework and test results are transferable to real-world deployment \cite{savva2019habitat, chang2017matterport3d, kadian2020sim2real}.
We choose a scene from the Matterport3D dataset, which is a collection of 90 scans, to design a sample tour of an indoor environment. Though the simulated environment is a house, it contains most of the features that would exist in a museum.
The top view of the chosen scene is depicted in Fig. \ref{fig:house}, and three first-person views of the robots are shown in Fig. \ref{fig:camera_view}.
\begin{figure}
\centering
\includegraphics[width = \linewidth, trim=0 80 0 90, clip]{figure/sim_scene.png}
\caption{Top view of the chosen scene within the Matterport3D dataset to give a house tour within the Habitat-AI simulation environment. The white bounding boxes describe the navigable area in the scene, rendered as meshes.}
\label{fig:house}
\end{figure}
\begin{figure}
\includegraphics[width=0.32\linewidth, trim=0 0 0 0, clip]{figure/simulation/sim_camera_view1.png}
\includegraphics[width=0.32\linewidth, trim=0 0 0 0, clip]{figure/simulation/sim_camera_view2.png}
\includegraphics[width=0.32\linewidth, trim=0 0 0 0, clip]{figure/simulation/sim_camera_view3.png}
\caption{Example views of the robots conducting tasks.}
\label{fig:camera_view}
\end{figure}
\begin{table}[b]
\caption{The discrete action space and its continuous action counterpart for the robot.}
\label{tab:action_space}
\centering
\footnotesize
\begin{tabular}{l|l}\toprule
Index & Action (continuous counterpart) \\ \midrule
0 & STOP \\ \midrule
1 & MOVE\_FORWARD (0.25 m) \\ \midrule
2 & TURN\_LEFT ($10 ^{\circ}$) \\ \midrule
3 & TURN\_RIGHT ($10 ^{\circ}$) \\ \bottomrule
\end{tabular}
\end{table}
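For concreteness, the following Python sketch shows how the discrete
actions of Table~\ref{tab:action_space} could be applied to a planar
robot pose; the step length and turn angle are taken from the table,
while the pose representation \texttt{(x, y, heading)} is our own
assumption.
\begin{verbatim}
import math

STEP = 0.25              # m, from Table "action space"
TURN = math.radians(10)  # 10 degrees, from the table

# A sketch only; pose = (x, y, heading) is an assumed representation.
def apply_action(pose, action):
    x, y, h = pose
    if action == 1:      # MOVE_FORWARD
        return (x + STEP * math.cos(h), y + STEP * math.sin(h), h)
    if action == 2:      # TURN_LEFT
        return (x, y, h + TURN)
    if action == 3:      # TURN_RIGHT
        return (x, y, h - TURN)
    return pose          # STOP
\end{verbatim}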
\subsection{Uncertainties in Navigation}
While the planned tour from the high-level planner specifies the path that a robot should execute, the actual paths of the robots have some inherent uncertainties. The tour planner considers uncertainties in the edge traveling time (between two POIs) and the node visiting time (at each POI).
This section describes the factors in the simulation that introduce uncertainty into the traveling and visiting times.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/simulation/uncertainty_90_crop_legend.png}
\caption{The planned path from the tour planner and the actual path in the presence of action uncertainty (10\% random actions). Due to the overlap, the actual path is only visible when the robot deviates from the ideal behaviors.}
\label{fig:action}
\end{figure}
\subsubsection{Uncertainties in the Traveling Time}
\textbf{Disturbances and Imperfect Actions:}
Due to the lack of an accurate map and the dynamic objects in the world, the robot might need to adjust the planned tour routes according to the actual environment changes. A robot could also take wrong actions because of imperfect perception, action selection, and actuation. In the simulation, these factors are modeled as incorrect robot actions, which inject uncertainty into the edge time. The actions here are a set of movements (listed in Table \ref{tab:action_space}) that the robots can take. For instance, if the correct action for the robot is to move forward and it actually turns left, an incorrect action happens. We add a rate of correct robot actions to the simulator to control the level of incorrectness. As an example, when this rate is set to 90\%, a robot takes the correct action only 90\% of the time and random actions for the remaining 10\%. The effect of 10\% incorrect actions is depicted in Fig. \ref{fig:action}.
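A minimal sketch of this action-noise model follows; whether a
``random'' action may coincide with the correct one is not specified
above, so the uniform draw over the whole action space is an assumption
of the sketch.
\begin{verbatim}
import random

def noisy_action(correct_action, correct_rate=0.9, n_actions=4):
    # With probability correct_rate, execute the planner's action;
    # otherwise draw uniformly from the action space (possibly
    # repeating the correct action -- an assumption of this sketch).
    if random.random() < correct_rate:
        return correct_action
    return random.randrange(n_actions)
\end{verbatim}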
\textbf{Robot Dynamics:}
Another set of uncertainties in the edge time comes from the robot dynamics. The multi-robot tour planner estimates the traveling time according to the traveling distance and average robot velocities. However, the actual dynamics of the robot system depend on the sequence of executed actions and result in varying velocities and in-place rotations. These dynamics lead to variations in the actual traveling time, which are also considered a traveling time uncertainty. Note that this is an uncertainty that we are not able to control in the simulator.
\subsubsection{Uncertainties in the Visiting Time}
The interaction time is uncertain for a robot tour guide interacting with people at a POI.
This is considered a visiting time uncertainty in the multi-robot tour planner, and robust plans are generated to ensure the satisfaction of the user-specified time constraints under uncertainty.
In the simulator, the visiting time at each POI is sampled from a Gaussian distribution centered at a manually defined nominal time.
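A minimal sketch of this sampling step is given below; the relative
standard deviation and the clipping of negative draws at zero are our
own illustrative assumptions.
\begin{verbatim}
import random

def sample_visit_time(nominal, rel_std=0.4):
    # Gaussian centered at the nominal time; negative draws are
    # clipped to zero (an assumption, not stated in the text).
    return max(0.0, random.gauss(nominal, rel_std * nominal))
\end{verbatim}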
\section{Conclusions and Future Work}\label{sec:conclusion}
This paper presents a multi-robot framework that deals with robotic tour guiding in a partially known environment with uncertain traveling and visiting times.
A two-level structure is described in the paper: a centralized multi-robot planner (the focus of the paper) generates the tours of the robots and assigns the humans to robots to maximize the number of satisfied human requests (places of interest); a distributed local navigation planner decides the actual trajectories for each robot according to the global tour routing plan.
The problem considered in the multi-robot planner is generalized and modeled as a simultaneous matching and routing problem (SMRP), which simultaneously optimizes the human-robot matching and the robot routing. The uncertainties in the traveling and visiting times are considered, and robust plans are generated by penalizing expected tour times that exceed a predefined threshold. The mixed-integer bilinear program that represents the above planning problem is solved through both an exact branch-and-cut algorithm and an innovative large neighborhood search method.
The algorithms are evaluated through comprehensive computational investigations.
Then, the robustness of the whole framework is evaluated in a photo-realistic simulation that captures multiple practical uncertainties.
Results demonstrate that, through the large neighborhood search, the planner is scalable in the number of robots, humans, and POIs and can output plans with few dropped requests. The largest case tested involves 50 robots, 250 humans, and 50 POIs. Second, results show that the planner trades off dropped requests smoothly against the tour time limit, which provides valuable information for tour design. Finally, the parametric study and simulation evaluation demonstrate that the planner can generate robust tours that satisfy the time constraints in practice even with crude uncertainty estimates.
Future work will consider dynamic replanning mechanisms to address real-time disturbances or failures, as well as a more consolidated local planner that can deal with local disturbances; these are options to further improve the robustness of the tour guiding. Heterogeneity among the robots will also be considered. Examples include different sensors and actuators that allow different robots to perform distinct gestures and movements during the guidance. Real-world experiments will be a further step to demonstrate the practicality of our proposed system.
\subsection{Multi-robot Simulation in Habitat-AI} \label{sec:simulation_experiments}
In this section, we run the whole framework, including the multi-robot planner that solves an SMRP, the local navigation planner, and the simulator.
As described in Sec. \ref{sec:simulation_model}, in the simulator, incorrect actions, non-constant velocities, rotations, as well as the varying visiting time at each POI introduce uncertainties to the traveling and visiting time. Therefore, we first show how different levels of simulated uncertainties affect the tour time of the human-robot teams.
The SMRP planner assumes that estimated distributions of the time costs are available.
In reality, we can gather samples to estimate the time uncertainty, but the estimation can be inaccurate.
We conducted a parametric study on the levels of environmental and estimated uncertainties to see how the planned tours and actual tour times change accordingly. In the simulation, we constrain the estimated uncertainty to be Gaussian, but our SMRP model with time uncertainty does not assume a specific type of random distribution.
\subsubsection{Varying Environmental Uncertainty}
We fix the plan generated by an SMRP solver with uncertainty (the assumed standard deviation is 40\% of the nominal value), and for the uncertainties in the simulation, we change the rate of correct actions from 100\% to 70\%. Then, we show the results of the actual tour time. Since the result is stochastic, we repeat the test five times for each setup. Note that there are 3 robots, 10 humans, and 22 POIs in total for visiting in this simulation. The time limit of the tour is set to 100 seconds.
According to Fig. \ref{fig:sim_rate}, with the touring routes fixed (Fig. \ref{fig:sigma4}), the tour time generally increases as the rate of correct actions decreases. Apart from that, for the situations with \(\geq\) 80\% accuracy, the generated plans are always completed within the time limit.
We note that the time uncertainty is assumed to be Gaussian, but the actual time uncertainty introduced by incorrect actions may not necessarily be Gaussian. In addition, the standard deviation of the actual time distribution might not be 40\%. Nevertheless, a fixed and possibly inaccurate uncertainty estimate (40\%) can ensure that the time constraints of the tour are met under different correct-action rates (80\% to 100\%).
This shows that the generated plan has a relatively wide margin with respect to the uncertainties during the tasks. Moreover, the robust planning framework does not require a perfectly accurate uncertainty estimation to work.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figure/simulation/sim_rate.pdf}
\caption{The actual tour time of the three robots when the environmental uncertainty changes.
The dots show the results of multiple random trials, while the line shows the mean.
The expected tour times according to the planner are 83, 82, and 75 seconds, respectively.}
\label{fig:sim_rate}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure/simulation/sigma4.png}
\caption{The planned tour of the three robots assuming the standard deviation is 40\%. Between two POIs, straight lines instead of the actual trajectories are shown for clarity.}
\label{fig:sigma4}
\end{figure}
\subsubsection{Varying Estimated Uncertainty}
In this subsection, we fix the rate of correct actions in the simulation and vary the estimated uncertainty given to the planner. We input Gaussian distributions to the planner and set the standard deviation of the traveling time between two POIs to a fraction (0\% to 100\%) of the nominal value (proportional to the traveling distance). Again, under each setup, the tests are repeated five times. The actual tour times are shown in Fig. \ref{fig:sim_sigma}. According to the figure, there is no clear trend when the standard deviation of the estimated distribution increases. For this specific case, all the plans that consider the uncertainties stay within the time limit. Again, this is a sign that a perfect estimation of the uncertainty parameters is not a hard requirement for the multi-robot SMRP planner model with uncertainty.
Finally, the planned tours with 0\% and 40\% standard deviations are shown in Fig. \ref{fig:sigma0} and Fig. \ref{fig:sigma4}, respectively. It can be seen that the tours in Fig. \ref{fig:sigma0} are indeed longer and, therefore, more aggressive. Here, a more aggressive tour tries to address more human requests but is more likely to surpass the tour time limit.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figure/simulation/sim_sigma.pdf}
\caption{The actual tour time of the three robots when the estimated uncertainty changes. The dots show the results of multiple random trials, while the line shows the mean.}
\label{fig:sim_sigma}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure/simulation/sigma0.png}
\caption{The planned tour of the three robots assuming no uncertainty (standard deviation is 0\%).}
\label{fig:sigma0}
\end{figure}
\section{Human-robot Matching and Tour Routing}\label{sec:global_planner}
Consider the following practical situation: several human visitors arrive, and each selects several places of interest (POIs). The guiding system wants to use a number of robots to guide these humans around the environment while satisfying as many of the requested POIs as possible. It is assumed that there are fewer robots than humans, and therefore, the people need to be split into teams. Since there is a time limit for the tours, not all human requests can be satisfied. There is also uncertainty in the traveling and visiting times during the tour. A multi-robot planning system needs to find an optimal way to form the human-robot teams and the corresponding tours (defined as routes and schedules for the teams) to minimize the dropped requests under the aforementioned constraints and time uncertainties.
In this section, we first formally define the simultaneous matching and routing problem for the above practical multi-robot tour guidance. Then, we encode the problem as a mixed-integer program and provide a heuristic algorithm to solve the problem.
\subsection{Problem Description and Graphical Model}
Suppose there is a set of guidance robots \(V = \{1, \cdots, n_V\}\), a set of humans \(L = \{1, \cdots, n_L\}\), and a set of POIs to visit \(M = \{1, \cdots, n_M\}\). Each human can choose a subset of POIs that they want to visit. In this problem, a planner needs to determine which robot each human should follow and what route (a sequence of POIs) each robot should take, such that the number of satisfied human requests (POIs) is maximized within certain touring time limits.
Other user-defined practical constraints include time window constraints (certain POIs are available only during fixed time windows), sequence dependencies (the visit of certain POIs has prerequisites that other POIs be visited first), and human dependencies (some people may prefer to be assigned to the same robots, such as families and friends).
Suppose the start and terminal locations of the robots are nodes (places) \(n_M + 1\) and \(n_M + 2\). Namely, we have two additional nodes \(s = n_M + 1\) and \(u = n_M + 2\) in the graph. Then, let \(N = \{1, \cdots, n_M, s, u\}\) be the whole set of places. Note that \(M\) is a subset of \(N\), and \(M\) does not include the start and terminal. We first define a directed graph \(G = (N, E)\), with \(N\) and \(E\) the sets of nodes and edges, respectively (see Figure \ref{fig:graphical_model}). For each node pair \(\{i,j\}\) with \(i, j \in N\), we add two directed edges \((i,j)\) and \((j,i)\) between them. Therefore, the edge set is \(E = \{(i,j) \mid i, j \in N\}\). Note that in our model, there is no assumption that the POIs are on a flat plane. If the tour happens in a multi-level building, no further modeling is needed as long as the edge costs are correctly adjusted according to the levels.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\linewidth]{figure/graphical_model.pdf}
\caption{A graphical model example with two POIs. \(s\) and \(u\) are the start and terminal, respectively.}
\label{fig:graphical_model}
\end{figure}
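As a minimal illustration, the graph construction can be sketched as
follows; omitting self-loops is our own assumption, since they play no
role in a tour.
\begin{verbatim}
def build_graph(n_m):
    # Nodes: POIs 1..n_m plus start s = n_m+1 and terminal u = n_m+2.
    s, u = n_m + 1, n_m + 2
    nodes = list(range(1, n_m + 1)) + [s, u]
    # Two directed edges for every node pair (self-loops omitted).
    edges = [(i, j) for i in nodes for j in nodes if i != j]
    return nodes, edges
\end{verbatim}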
\subsection{A Mixed-integer Bilinear Program}\label{sec:deterministic_math}
Here we discuss a mixed-integer bilinear program that mathematically models the SMRP discussed above. The notations used are listed in Table \ref{tab:variable_definition}. Note that the decision variables are \(x_{kij}\), \(y_{ki}\), \(t_{ki}\), and \(z_{lk}\); the other symbols in Table \ref{tab:variable_definition} are predefined hyper-parameters or parameters describing the environment and human-related information. A graphical illustration of the key notations is shown in Figure \ref{fig:framework_model}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figure/framework_model.pdf}
\caption{An illustration of the key notations through an example matching and routing.}
\label{fig:framework_model}
\end{figure}
\begin{table}[h]
\caption{Definition of the notations.}
\label{tab:variable_definition}
\small
\begin{tabular}{p{0.06\linewidth}|p{0.8\linewidth}}
\toprule
& Meaning
\\
\midrule
\(x_{kij}\) & = 1, if robot \(k \in V\) travels edge \(i,j \in N\), otherwise 0. \\
\(y_{ki}\) & = 1, if robot \(k \in V\) visits node \(i \in M\), otherwise 0. \\
\(t_{ki}\) & The time robot \(k \in V\) visits node \(i \in N\). \\
\(z_{lk}\) & = 1, if human \(l \in L\) is assigned to follow robot \(k \in V\), otherwise 0. \\
\(a_{li}\) & = 1, if human \(l \in L\) wants to visit node \(i \in M\), otherwise 0. \\
\(V\) & The set of robots. \\
\(L\) & The set of humans. \\
\(M\) & The set of POIs. \\
\(N\) & The set of POIs plus the start and terminal. \\
\(M_T\) & The set of POIs with additional time window constraints. \\
\(S\) & The set of sequence dependencies. \\
\(L_\mathrm{pair}\) & The set of human pairs who want to be in the same team. \\
\(T_{L}\) & A large constant time. \\
\(T_{k i}\) & The time for robot \(k\) and its team to visit POI \(i\). \\
\(T_{k i j}\) & The time for robot \(k\) and its team to travel edge \((i,j)\). \\
\(T_k^\mathrm{lim}\) & A preset time limit for the tour guided by robot \(k\). \\
\(T_i^{\min}\) & The lower bound for a preset time window for visiting POI \(i\). \\
\(T_i^{\max}\) & The upper bound for a preset time window for visiting POI \(i\). \\
\(Z_k^{\max}\) & The maximum number of humans guided by robot \(k\). \\
\(A^{\max}\) & The maximum number of POIs that can be dropped for one human. \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{Objective Function:}
In the objective function \eqref{eqn:bilinear_objective}, we want to minimize a weighted combination of dropped human requests and time usage, with weights \(C_a\) and \(C_t\), respectively. The second term is the sum of the times at which each human-robot team reaches the terminal node. The first term is the total number of dropped POIs over all humans. As an example, if human \(l\) follows robot \(k\), \(z_{l k}\) equals 1. If this human \(l\) wants to see POI \(i\), \(a_{l i}\) equals 1. In addition, if robot \(k\) does not pick POI \(i\) in its route, \(y_{k i}\) equals 0.
Finally, if \(a_{l i} = 1\), \(z_{l k} = 1\), and \(y_{k i} = 0\) happen simultaneously, the first term of the objective function is incremented by 1, penalizing the drop of a human request. The weights are chosen such that \(C_a \gg C_t\) to ensure that the main focus is on satisfying human requests.
\begin{align}
\min \
C_a \cdot \sum_{l \in L} \sum_{k \in V} z_{l k}\left(\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right)\right) + C_t \sum_{k \in V} t_{k u}. \label{eqn:bilinear_objective}
\end{align}
\noindent\textbf{Variable Bounds:}
The feasible regions and bounds of four sets of decision variables are defined as in \eqref{eqn:var_bounds}.
\begin{align}
x_{k i j}, \ y_{k i}, \ z_{l k} \in \{0,1\}, \quad t_{k i} \geq 0, \nonumber \\
\forall k \in V, \ \forall i, j \in N, \ \forall l \in L. \label{eqn:var_bounds}
\end{align}
\noindent\textbf{Network Flow Constraints:}
Equation \eqref{eqn:flow_constraints1} is a network flow constraint that ensures the number of incoming robots equals the number of outgoing robots at each POI. Constraint \eqref{eqn:flow_constraints2} ensures that there is only one path in the network flow of robot \(k\). Constraints \eqref{eqn:var_relation_constraint1}-\eqref{eqn:var_relation_constraint2} show the relationship between the variables \(x\) and \(y\): the number of robots at a node equals the number of incoming robots through the corresponding edges, and the number of robots at any node should be no larger than the number departing from the start node.
\begin{align}
\sum_{i \in N} x_{k i m}=\sum_{j \in N} x_{k m j}, \quad & \forall m \in M, \quad \forall k \in V. \label{eqn:flow_constraints1} \\
\sum_{i \in N} x_{k s i} \leq 1, \quad \quad \quad & \forall k \in V. \label{eqn:flow_constraints2} \\
y_{k j} \leq \sum_{i \in N} x_{k s i}, \quad \quad & \forall j \in M, \quad \forall k \in V. \label{eqn:var_relation_constraint1} \\
y_{k j} = \sum_{i \in N} x_{k i j}, \quad \quad & \forall j \in M, \quad \forall k \in V. \label{eqn:var_relation_constraint2}
\end{align}
\noindent\textbf{Time Constraints:}
Inequalities \eqref{eqn:time_constraint1}-\eqref{eqn:time_constraint2} are cumulative time constraints: if robot \(k\) travels edge \((i,j)\), then \(x_{k i j}=1\), and the difference between the arrival time at node \(j\) and node \(i\) equals the total time for visiting node \(i\) and traveling edge \((i,j)\) (i.e., \(T_{k i} + T_{k i j}\)). Constraint \eqref{eqn:time_limit_constraint} sets a time limit \(T_k^\mathrm{lim}\) for the whole tour. Constraint \eqref{eqn:time_window_constraint} sets a specific time window \([T_i^{\min}, \ T_i^{\max}]\) for visiting a node. This is important for some of the POIs that have a specific opening time (e.g., a film or a show). We use the set \(M_T \subset M\) to denote these specific POIs.
\begin{align}
t_{k i} - t_{k j} + T_{k i j} + T_{k i} \leq T_{L} (1 - x_{k i j}), \ \ & \forall i, j \in N, \forall k \in V. \label{eqn:time_constraint1} \\
t_{k i} - t_{k j} + T_{k i j} + T_{k i} \geq - T_{L} (1 - x_{k i j}), \ \ & \forall i, j \in N, \forall k \in V. \label{eqn:time_constraint2} \\
t_{k u} \leq T_k^\mathrm{lim}, \quad \quad \quad \quad &\forall k \in V. \label{eqn:time_limit_constraint} \\
T_i^{\min} \leq t_{k i} \leq T_i^{\max}, \quad \quad &\forall k \in V, \forall i \in M_{T}. \hspace{-0.1cm} \label{eqn:time_window_constraint}
\end{align}
\noindent\textbf{Sequence Dependency Constraints:}
There can be sequence dependencies between the POIs. For example, if a person does not visit POI \(i\), they would not be able to gain the knowledge needed to understand and appreciate the work at POI \(j\). Let \(S\) denote the set of the sequence dependencies. As an example, \(S\) can be denoted as \(\{(1,2), (2,3), (6,8)\}\), which means that POI 2 depends on 1, POI 3 depends on 2 (and hence, transitively, on 1), and POI 8 has POI 6 as a prerequisite. Mathematically, the sequence dependency is encoded as in \eqref{eqn:artistic_dependency}. The first inequality ensures that node \(i\) has to be visited if a route visits node \(j\). The second inequality, on the visiting times, ensures that node \(i\) is visited before \(j\).
\begin{align}
y_{k i} \geq y_{k j}, \ \ \ t_{k i} \leq t_{k j} + T_L (1 - y_{k j}), \ \
\forall k \in V, \ \forall (i, j) \in S. \hspace{-0.1cm} \label{eqn:artistic_dependency}
\end{align}
\noindent\textbf{Team Size Limit Constraints:}
All humans must be matched to a robot guide, introducing the constraint \eqref{eqn:visitor_assignment_constraint}.
We also impose a limit on the maximum number of humans that follow a robot to avoid imbalanced teams, as in \eqref{eqn:team_size_constraint}. This limit is denoted as \(Z_k^{\max}\). Due to this constraint, a human might not be matched to their optimal robot guide, for the sake of balancing the assignment and minimizing the overall dropped POIs. However, we add constraint \eqref{eqn:max_drop_constraint} to limit the maximum number of dropped requests per human, \(A^{\max}\), to avoid extreme sacrifices of specific humans' needs.
\begin{align}
\sum_{k \in V} z_{l k} = 1, \quad \quad \quad \quad & \forall l \in L. \label{eqn:visitor_assignment_constraint} \\
\sum_{l \in L} z_{l k} \leq Z_k^{\max}, \quad \quad \quad \quad & \forall k \in V. \label{eqn:team_size_constraint} \\
\sum_{k \in V} z_{l k} (\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right)) \leq A^{\max}, \quad & \forall l \in L. \label{eqn:max_drop_constraint}
\end{align}
\noindent\textbf{Human Dependency Constraints:}
There can be human dependencies where some people want to be guided by the same robot guide (e.g., families and friends). Such a dependency is encoded in \eqref{eqn:visitor_dependency}, which forces two people to be in the same team. \(L_\mathrm{pair}\) is the set of human pairs that have such dependencies. An example can be \(L_\mathrm{pair} = \{(1,2), (2,3), (5,6)\}\).
\begin{align}
z_{l_1 k} = z_{l_2 k}, \quad \forall k \in V, \ \ \forall (l_1, l_2) \in L_\mathrm{pair}. \label{eqn:visitor_dependency}
\end{align}
\subsection{Uncertain Travel and Visiting Time} \label{sec:stochastic_math}
In the above formulation, we do not consider time uncertainty. When there is uncertainty in the traveling time \(T_{kij}\) and visiting time \(T_{ki}\), we still need to make sure that the tour ends on time, i.e., constraint \eqref{eqn:time_limit_constraint} still holds. In such situations, \(T_{kij}\), \(T_{ki}\), and \(t_{k u}\) can be modeled as random variables.
We penalize the scenarios where the finishing time \(t_{k u}\) is close to the time limit \(T_k^\mathrm{lim}\) by adding an expected penalty, \(\phi(t_{k u})\),
\begin{align}
\phi(t_{k u}) =& \mathrm{E}\left([t_{k u} - \alpha_k]^+\right)=\lim_{n_\xi \rightarrow \infty} \frac{1}{n_\xi}\sum_{\xi=1}^{n_\xi}\left[t_{ku}^\xi - \alpha_k\right]^+, \nonumber \\
\alpha_k =& T_k^\mathrm{lim} - \Delta T_k^\mathrm{lim}, \nonumber \\
[x]^+ =& \begin{cases} x, \quad x > 0 \\
0, \quad x \leq 0
\end{cases}. \nonumber
\end{align}
\(\alpha_k\) is the time threshold beyond which the penalty is applied, and \(\Delta T_k^\mathrm{lim}\) is a time margin which is usually chosen as a small number (e.g., 5 minutes). \(t_{ku}^\xi\) is a sample of the distribution of \(t_{k u}\), with \(\xi = 1, \cdots, n_\xi\). Such a penalty model has shown its effectiveness in previous work on VRPs with stochastic time costs \cite{sundar2017path}.
In practice, we only calculate an approximation of \(\phi(t_{k u})\) with a bounded number of samples as follows
\begin{align}
\hat{\phi}(t_{ku}) = \frac{1}{n_\xi}\sum_{\xi=1}^{n_\xi}\left[t_{ku}^\xi - \alpha_k\right]^+. \nonumber
\end{align}
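For concreteness, this sample-average approximation amounts to the
following small Python sketch (the list of sampled tour times is
assumed to be given):
\begin{verbatim}
def penalty(tour_time_samples, alpha_k):
    # Average excess of the sampled tour times over the threshold
    # alpha_k = T_k^lim - Delta T_k^lim; zero when all samples are
    # below the threshold.
    n = len(tour_time_samples)
    return sum(max(t - alpha_k, 0.0) for t in tour_time_samples) / n
\end{verbatim}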
Based on the derivation above, when there is time uncertainty, we modify the objective in \eqref{eqn:bilinear_objective} and minimize the new objective function \eqref{eqn:linearized_uncertain_objective1}, subject to the original constraints.
\begin{align}
\min \ &
C_a \cdot \sum_{l \in L} \sum_{k \in V} z_{l k} (\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right) ) + C_t \sum_{k \in V} \hat{\phi} \left( t_{k u} \right) \nonumber \\
= \
& C_a \cdot \sum_{l \in L} \sum_{k \in V} z_{l k} (\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right) ) \label{eqn:linearized_uncertain_objective1} \\
& + C_t \sum_{k \in V} \frac{1}{n_\xi} \sum_{\xi=1}^{n_\xi} \left[ t_{k u}^\xi - \alpha_k \right]^+. \nonumber
\end{align}
In addition, the total tour time \(t_{k u}\) equals the sum of all traveling and visiting times, as in
\begin{align}
t_{k u} = \sum_{i \in N} \sum_{j \in N} T_{kij} \cdot x_{kij} + \sum_{i \in N} T_{ki} \cdot y_{ki}, \quad \forall k \in V. \label{eqn:sum_tour_time}
\end{align}
Suppose we know and can represent the random distributions of \(T_{kij}\) and \(T_{ki}\) as a series of samples \(T_{kij}^\xi\) and \(T_{ki}^\xi\) (\(\xi = 1, \cdots, n_\xi\)). Then, substituting \eqref{eqn:sum_tour_time} into \eqref{eqn:linearized_uncertain_objective1}, and linearizing the function \([x]^+\) using inequalities \eqref{eqn:w_constraint1}-\eqref{eqn:w_constraint2}, the original objective function \eqref{eqn:bilinear_objective} is replaced with \eqref{eqn:linearized_uncertain_objective2}. Therefore, in the situation with time uncertainty, the optimization problem becomes the following
\begin{align}
\min \ &
C_a \sum_{l \in L} \sum_{k \in V} z_{l k} (\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right) )
+ C_t \sum_{k \in V} \frac{1}{n_\xi} \sum_{\xi=1}^{n_\xi} w_k^\xi \label{eqn:linearized_uncertain_objective2} \\
\text{sub } & \text{to } \eqref{eqn:var_bounds}-\eqref{eqn:visitor_dependency} \text{ and } \nonumber \\
w_k^\xi \geq & \sum_{i \in N} \sum_{j \in N} T_{kij}^\xi \cdot x_{kij} + \sum_{i \in N} T_{ki}^\xi \cdot y_{ki} - \alpha_k, \ \ \ \forall k \in V, \ \forall \xi, \label{eqn:w_constraint1} \\
w_k^\xi \geq & 0, \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ \forall k \in V, \ \forall \xi. \label{eqn:w_constraint2}
\end{align}
Note that \(w_{k}^\xi\) is an auxiliary variable.
\subsection{A Branch-and-Cut Approach for an Exact Solution}
The optimization problem (with or without uncertainty) described in the above section is a mixed-integer quadratic program (MIQP) with a bilinear objective function and linear constraints. Such an MIQP can be solved through a branch-and-cut algorithm, in which multiple continuous quadratic optimization problems are solved iteratively. A branch-and-cut algorithm seeks an exact solution of an MIQP: it returns a solution together with an optimality gap that indicates how close the solution is to the global optimum. Given enough time, such an exact algorithm will ultimately find the global optimum of a problem. Several available commercial solvers implement branch-and-cut algorithms. We encode the presented math formulation using the GUROBI solver.
\subsection{A Large Neighborhood Search Approach}\label{sec:large_neiborhood_search}
Exact algorithms are guaranteed to find the global optimum given enough time. However, for an NP-hard problem like the SMRP in this paper, finding the exact optimum requires computation that grows exponentially with the problem size. Therefore, exact solution algorithms are usually applied to problems of smaller sizes. Here, we provide a heuristic algorithm that finds solutions with low objective values for larger problem cases.
According to the formulation in Sec. \ref{sec:deterministic_math}, the only nonlinear part of the optimization problem is the bilinear term in the objective function \eqref{eqn:bilinear_objective}. If we fix either of the variables \(z_{l k}\) or \(y_{k i}\) and optimize the sub-problem with the remaining variables, the optimization becomes linear. Therefore, to solve the optimization, we can fix part of the variables, limit the search space to a subset of the whole feasible domain, find the solution of a simpler linear sub-problem, and then change the search space and repeat the process to improve the solution.
The idea of solving a large problem by iteratively solving still-large but simpler sub-problems is called a large neighborhood search (LNS). The difference between LNS and a small neighborhood search (SNS), such as gradient descent and trust-region algorithms, is that in each iteration the search space is much larger, which results in larger step sizes and fewer iterations. The LNS algorithm has been widely adopted and has shown its efficiency in many scheduling and routing-related applications \cite{chen2014optimizing, he2018improved, alinaghian2018multi, deng2021room}.
We now describe how we apply an LNS algorithm to solve the SMRP problem with and without time uncertainty in Sec. \ref{sec:deterministic_math} and \ref{sec:stochastic_math}.
The pseudo-code of the large neighborhood search algorithm for the SMRP is given in Algorithm \ref{alg:large_neighborhood_search}. The input of the algorithm is the unsolved optimization problem with all predefined parameters. Starting from a set of randomly initialized variable values, the algorithm optimizes a matching sub-problem and \(n_V\) (the number of robots) routing sub-problems in each step to improve the solution quality iteratively. Note that both sub-problems are linear and have fewer variables, and are therefore much easier to solve than the original SMRP. Since the solving processes of the two sub-problems interleave in each step, the quality of the matching and the routing is optimized simultaneously. The following paragraphs describe the matching sub-problem and the routing sub-problem in detail.
\begin{algorithm}[t]
\small
\textbf{Input:} the unsolved optimization problem in Sec. \ref{sec:deterministic_math}-\ref{sec:stochastic_math}
Randomly initialize \(x_{kij}\), \(y_{ki}\), \(z_{lk}\), \(t_{ki}\)
\For{\textnormal{iteration} \(= 1, \cdots, n_{\max}\)}{
Fix \(x_{kij}\), \(y_{ki}\), \(t_{ki}\) \(\quad (\forall k \in V, \ \ \forall i,j \in N)\)
Solve the matching sub-problem in \eqref{eqn:matching_subproblem}
Update \(z_{lk}\) \(\quad (\forall k \in V, \ \ \forall l \in L)\)
\For{\(k \in V\)}{
Fix \(z_{lk}\) \(\quad (\forall l \in L)\)
Solve the routing sub-problem in \eqref{eqn:routing_subproblem1} or \eqref{eqn:routing_subproblem2}
Update \(x_{kij}\), \(y_{ki}\), \(t_{ki}\) \(\quad (\forall i,j \in N)\)
}
\If{\textnormal{the objective value does not change}}
{
\Break
}
}
\Return the solution \(x_{kij}\), \(y_{ki}\), \(z_{lk}\), \(t_{ki}\)
\caption{Large Neighborhood Search}
\label{alg:large_neighborhood_search}
\end{algorithm}
\textbf{Matching Sub-problem:}
Fix the variables \(x_{kij}, y_{ki}, t_{ki}\) and solve the SMRP optimization within the neighborhood of \(z_{lk}\). Both the problems with and without uncertainty reduce to the following optimization problem (variables are highlighted in blue).
\begin{align}
\min & \ \
\sum_{l \in L} \sum_{k \in V} \eqcolor{z_{l k}} (\sum_{i \in M} a_{l i} \cdot\left(1-y_{k i}\right) ) \label{eqn:matching_subproblem} \\
\text{sub to} & \ \ \eqcolor{z_{lk}} \in \{0, 1\} \label{eqn:z_constraint} \\
& \text{and} \ \ \eqref{eqn:team_size_constraint}-\eqref{eqn:visitor_dependency}. \nonumber
\end{align}
This is a smaller integer linear program (ILP) with fewer variables than the original SMRP and can be solved much faster. If there is no maximum-dropped-requests constraint \eqref{eqn:max_drop_constraint} or human dependency constraint \eqref{eqn:visitor_dependency}, then this is a standard bipartite matching problem whose integer solutions can be obtained by solving the relaxed linear program (removing constraint \eqref{eqn:z_constraint}) in polynomial time \cite{heller1956extension}. We encode this math formulation using GUROBI.
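As an illustrative sketch (not our exact implementation), the core of this sub-problem, without constraints \eqref{eqn:max_drop_constraint} and \eqref{eqn:visitor_dependency}, could be written in gurobipy as follows; \texttt{humans}, \texttt{robots}, \texttt{pois}, \texttt{a}, \texttt{y\_fixed}, and \texttt{team\_cap} are assumed problem data.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("matching_subproblem")
z = m.addVars(humans, robots, vtype=GRB.BINARY, name="z")

# Cost of assigning human l to robot k: the requests of l that robot k's
# currently fixed route (y_fixed) does not satisfy.
cost = {(l, k): sum(a[l][i] * (1 - y_fixed[k][i]) for i in pois)
        for l in humans for k in robots}

m.setObjective(gp.quicksum(cost[l, k] * z[l, k]
                           for l in humans for k in robots), GRB.MINIMIZE)
m.addConstrs((z.sum(l, "*") == 1 for l in humans), name="one_team_per_human")
m.addConstrs((z.sum("*", k) <= team_cap for k in robots), name="team_size")
m.optimize()
\end{verbatim}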
\textbf{Routing Sub-problem:}
Fix variable \(z_{lk}\) and solve the SMRP optimization within the neighborhood of \(x_{kij}, y_{ki}, t_{ki}\). Mathematically, the problem then reduces to a mixed-integer linear program (MILP). In terms of practical application, it becomes a variation of the vehicle routing problem.
Moreover, since the human-robot matching is fixed (fix variables \(z_{lk}\)), a robot only needs to consider the requests of the people in its own team. Therefore, the optimization is decoupled into multiple single-vehicle routing problems (i.e., traveling salesman problems). Mathematically, this means the optimization of \(x_{kij}, y_{ki}, t_{ki}\) decouples between different robots \(k \in V\).
Within each iteration of the large neighborhood search, we therefore solve \(n_{V}\) single-vehicle routing sub-problems.
The decoupled routing problem for a single robot \(k\) is shown below. For the version without uncertainty, we can leverage established routing solvers, which efficiently return low-cost solutions using multiple heuristics. Here we choose Google OR-Tools to encode this model.
\begin{align}
\min & \ \
C_a \sum_{l \in L} z_{l k}\left(\sum_{i \in M} a_{l i} \cdot\left(1- \eqcolor{y_{k i}}\right)\right) + C_t \ \eqcolor{t_{k u}} \label{eqn:routing_subproblem1} \\
\text{sub to} & \ \ \eqref{eqn:var_bounds}-\eqref{eqn:artistic_dependency}. \nonumber
\end{align}
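A minimal OR-Tools sketch of this deterministic single-robot sub-problem could look as follows; \texttt{num\_pois}, \texttt{depot}, \texttt{travel\_time}, \texttt{visit\_time}, \texttt{drop\_penalty}, and \texttt{time\_limit} are assumed integer inputs derived from robot \(k\)'s fixed team, and the dependency constraints are omitted for brevity.
\begin{verbatim}
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

manager = pywrapcp.RoutingIndexManager(num_pois + 1, 1, depot)  # one vehicle
routing = pywrapcp.RoutingModel(manager)

def transit(i_idx, j_idx):
    i, j = manager.IndexToNode(i_idx), manager.IndexToNode(j_idx)
    return travel_time[i][j] + visit_time[j]  # arc cost = travel + visit

cb = routing.RegisterTransitCallback(transit)
routing.SetArcCostEvaluatorOfAllVehicles(cb)

# Cumulative time dimension enforcing the tour time limit.
routing.AddDimension(cb, 0, time_limit, True, "Time")

# A POI may be dropped at a penalty proportional to its unsatisfied requests.
for node in range(1, num_pois + 1):
    routing.AddDisjunction([manager.NodeToIndex(node)], drop_penalty[node])

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
solution = routing.SolveWithParameters(params)
\end{verbatim}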
The sub-problem formulation with uncertain traveling and visiting times is shown below. Again, we encode the math formulation here using GUROBI.
\begin{align}
\min \ &
C_a \sum_{l \in L} z_{l k} (\sum_{i \in M} a_{l i} \cdot\left(1- \eqcolor{y_{k i}}\right) )
+ \frac{C_t}{n_\xi} \sum_{\xi=1}^{n_\xi} \eqcolor{w_k^\xi} \label{eqn:routing_subproblem2} \\
\text{sub} \ & \text{to} \ \ \eqref{eqn:var_bounds}-\eqref{eqn:artistic_dependency}, \ \text{and } \eqref{eqn:w_constraint1}-\eqref{eqn:w_constraint2}. \nonumber
\end{align}
Note that in \eqref{eqn:routing_subproblem1} and \eqref{eqn:routing_subproblem2}, the decision variables are \(y_{k i}\), \(t_{k u}\), and \(w_k^\xi\), while \(z_{l k}\) is fixed.
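Schematically, and reusing the gurobipy style of the matching sketch above, the scenario terms of \eqref{eqn:routing_subproblem2} enter the model along the following lines; \texttt{dropped\_expr}, \texttt{n\_xi}, \texttt{C\_a}, and \texttt{C\_t} are illustrative names.
\begin{verbatim}
# Sample-average objective over n_xi scenarios for a fixed robot k (sketch).
# dropped_expr is a linear expression in the y variables; each w[xi] is the
# tour-time proxy of scenario xi, linked to the sampled traveling/visiting
# times through constraints (w_constraint1)-(w_constraint2).
w = m.addVars(range(n_xi), lb=0.0, name="w")
m.setObjective(C_a * dropped_expr
               + (C_t / n_xi) * gp.quicksum(w[xi] for xi in range(n_xi)),
               GRB.MINIMIZE)
\end{verbatim}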
\section{Related Work}\label{sec:related_work}
In this work, the multi-robot tour guiding problem contains not only tour planning but also human-robot team generation. Therefore, the optimization combines two classic problem types from operations research: bipartite matching and vehicle routing. The matching problem refers to splitting the humans into teams such that the matching between humans and robots minimizes the number of dropped human requests. It is a bipartite matching problem as there are two separate sets of elements (robots and humans) in the problem. The definition of bipartite matching in the tour planning is illustrated in Fig. \ref{fig:matching_definition}. The routing problem focuses on optimally arranging the robot tours (routes), such that the dropped requests, as well as the traveling distance, are minimized (see Fig. \ref{fig:routing_definition}).
\begin{figure}[t!]
\centering
\includegraphics[width=0.37\linewidth]{figure/matching_definition.pdf}
\caption{A bipartite matching example. A bipartite graph is a graph where there are two sets of nodes and no edge connections within each set.
A matching is a selected set of edges that satisfy some desired properties.
In this paper, a matching should ensure that every human node is matched to one robot node (the red edges form a matching).
The bipartite matching problem in the tour planning tries to find the best matching between humans and robots, such that the overall satisfied human requests are maximized.}
\label{fig:matching_definition}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{figure/routing_definition.pdf}
\caption{A vehicle routing problem (VRP) example with 3 vehicles. The VRP tries to find the minimum cost vehicle paths that satisfy some user-defined constraints.}
\label{fig:routing_definition}
\end{figure}
Optimal bipartite matching can be found through linear programming or maximum flow optimization \cite{ford2015flows}. Bipartite matching has been applied to multiple real-world problems including image feature matching \cite{cheng1996maximum}, object detection \cite{carion2020end}, budget allocation \cite{aggarwal2011online}, and task assignment \cite{dutta2019one}.
The vehicle routing problem (VRP) considers the minimum-distance traversal of all the places of interest using multiple vehicles (robots). It generalizes the traveling salesman problem and is NP-hard to solve optimally. The stochastic vehicle routing problem (SVRP) is a variation of VRP where some of the parameters in the optimization are random distributions. Previous works in the field of SVRP have investigated optimization under uncertain request numbers at each POI \cite{mendoza2013multi,secomandi2009reoptimization,marinakis2013particle, fu2021robust},
uncertain time costs for traveling edges or service at a POI \cite{sundar2017path,chen2014optimizing,li2010vehicle,gomez2016modeling},
and uncertain energy costs \cite{venkatachalam2019two,venkatachalam2018two, fu2020heterogeneous}. This work considers the uncertainty in time costs.
In this work, the optimal robot guide for a human depends on the robots' routes, while the route of a robot depends on the humans in its team. Therefore, the matching and routing problems are coupled to form a larger problem that must be optimized simultaneously. An SMRP problem contains multiple vehicle routing problems (formally, a VRP can be reduced to an SMRP); and since a VRP is NP-Hard \cite{toth2002vehicle}, an SMRP is NP-hard. A ride-sharing system can be regarded as an SMRP where the riders should be matched with drivers while the routes are determined simultaneously. However, many systems in previous work decouple the problems by generating the vehicle routes first and then matching humans with the closest routes \cite{schreieck2016matching, aydin2020matching}. Another way to formulate the ride-sharing problem is as a VRP with pickup and delivery \cite{sitek2019capacitated}.
Optimally solving the simultaneous matching and routing optimization under uncertainty can be challenging as this is a discrete non-convex problem that is NP-hard.
Unlike previous work, we directly address this simultaneous optimization problem, model it as a mixed-integer optimization, and provide methods that efficiently find sub-optimal low-cost solutions in Sec. \ref{sec:global_planner}.
\section{Introduction}\label{sec:introduction}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figure/framework_diagram.pdf}
\caption{A framework for applying our multi-robot planner within a simulation. The centralized multi-robot tour planner is discussed in Sec. \ref{sec:global_planner}. The local navigation and simulation are described in Sec. \ref{sec:simulation_model}.}
\label{fig:framework_diagram}
\end{figure*}
Robot tour guides have been applied in different environments \cite{satake2009approach, shiomi2009field, doering2019neural, kirby2010affective, burgard1998interactive, burgard1999museum} because they save human labor, do not require time-consuming training, and may ease discomfort during social interactions \cite{de2019social}. Previous works have focused on the routing of places of interest (a traveling-salesman-style problem) \cite{preferences},
localization and navigation within the environment \cite{burgard1998interactive, burgard1999museum},
and social interactions with humans through gesture, face, and language processing \cite{bennewitz2005towards}.
While they cover diverse topics, these works mainly focus on single-robot guiding for a small predetermined team.
In practice, it is unlikely that a single robot will be used to guide all of the human visitors; hence there is a need to use multiple robots to handle a larger number of humans. In such a situation, instead of considering each robot separately (as in previous works), a crucial question is how to coordinate the robots as a multi-robot system, such that the overall efficiency/performance of the system is maximized.
A natural first step is to consider loose coordination:
once the humans are split into teams and assigned to each robot,
the single-robot tour guide systems in previous works can be applied to the low-level navigation and interaction tasks.
However, the problems of choosing the best robot guides for a human and finding the optimal tour plan for a robot depend on each other.
The two problems are tightly coupled, leading to challenges to the optimization of both the tour plan and the human-robot matching.
Note that in this formulation, we consider homogeneous robots; the local navigation and social interaction behaviors of the robots are therefore identical, do not affect the matching and routing decisions, and are not considered during the multi-robot coordination at this time.
Future work may consider navigation and complex social interactions within the matching and touring problem.
This work focuses on a multi-robot planning system that simultaneously optimizes the human-robot team and a high-level robot tour plan specified as a route of places of interest (POIs).
The system provides an optimal matching between humans and robots and the corresponding tour plans.
The optimization planner proposed in this paper can be paired with a local navigation/interaction planner to form a complete framework that can handle practical multi-robot guidance applications. Additionally, the touring system can be applied to different environments, such as museum docent tours, park tours, and city tours.
As the multi-robot planning is performed pre-mission, the actual touring times may vary due to dynamic movements of the teams, changes in the environment, and uncertain visiting times at the POIs. Such variations are treated as uncertainty in the traveling and visiting times in the optimization to generate time-robust touring plans.
A computational investigation is conducted with the proposed multi-robot planning system. Then, the system performance in an uncertain environment is evaluated through a photo-realistic simulation.
A system diagram of the simulation framework is shown in Fig. \ref{fig:framework_diagram}. A centralized multi-robot tour planner first generates the teams and tour plans for all robots. Then, individual robots use a local navigation planner to follow the planned routes and lead the humans within the simulation environment.
This work has the following contributions.
\begin{enumerate}[label={\arabic*)}]
\item The algorithmic modeling of a \textbf{simultaneous} human-robot \textbf{matching} and tour \textbf{routing problem (SMRP)} for multi-robot tour guidance under execution time uncertainty.
\item A comprehensive computational evaluation of the scalability and solution quality of the proposed algorithms.
\item The simulation verification of the proposed multi-robot system through a concrete tour guiding case study in a photo-realistic indoor environment.
\end{enumerate}
The remainder of this paper is organized as follows:
Sec. \ref{sec:related_work} briefly introduces work related to socially assistive robots and the SMRP problem.
Sec. \ref{sec:global_planner} describes the math formulation and algorithms of the SMRP planner for multi-robot tour guides.
Sec. \ref{sec:simulation_model} introduces the local navigation planner and the simulation environment.
Note that sections \ref{sec:global_planner}-\ref{sec:simulation_model} also describe how uncertainties are considered in the planner and simulated in the simulation.
Sec. \ref{sec:experiment} provides computational and simulation results, followed by discussion.
Sec. \ref{sec:conclusion} concludes the paper.
\section{Experiments and Results}\label{sec:experiment}
In this section, we first use randomly generated cases to evaluate the scalability and optimality of the proposed planning algorithms. These computational experiments are run on a laptop with an Apple M1 chip, and the corresponding results are shown in Sec. \ref{sec:computational_experiments}. We then evaluate the multi-robot tour planner in a photo-realistic simulation environment where the humans' visiting times at the POIs are uncertain and the robots can perform imperfect actions (which result in longer execution times). The simulation results are described in Sec. \ref{sec:simulation_experiments}.
\subsection{Computational Evaluation of the Tour Planning} \label{sec:computational_experiments}
We propose two math models: a deterministic formulation and a stochastic formulation that considers uncertain traveling and visiting times. We also propose two solution algorithms for these math models: an exact solution method that tries to find the global optimum and an LNS-based method that returns a heuristic solution. We evaluate the three combinations listed in Table \ref{tab:algorithm_tested}.
\begin{table}[tbp]
\centering
\small
\caption{Algorithm types tested.}
\begin{tabular}{c|l}
\toprule
\multicolumn{1}{c|}{Abbr} & \multicolumn{1}{c}{Meaning} \\
\midrule
D-LNS{} & Deterministic formulation with LNS algorithm \\
D-ES{} & Deterministic formulation with exact solution method \\
S-ES{} & Stochastic formulation with exact solution method \\
\bottomrule
\end{tabular}
\label{tab:algorithm_tested}
\end{table}
\subsubsection{Scalability Evaluation}
The size of the problem mainly depends on the number of robots \(n_V\), the number of humans \(n_L\), and the total number of POIs in the area \(n_M\). Here we vary these three hyper-parameters and test the scalability of the algorithms on randomly generated test cases. Specifically, we choose \(n_V \in \{4, 10, 20, 50\}\), \(n_L \in \{10, 50, 100, 250\}\), and \(n_M \in \{10, 20, 30, 50\}\). The other hyper-parameters are fixed across all cases as they are not related to the scalability test. In particular, for the scalability evaluation, the penalty for dropping a request, \(C_a\), is set to 1000, while the penalty on time, \(C_t\), is set to 1, to ensure that the dominant goal is to satisfy human requests. For each test case, we set a limit of 120 seconds for the solvers. For optimality, we use the dropped requests ratio, defined as (dropped requests / total requests), to indicate the solution quality.
\begin{table}[t]
\centering
\small
\caption{The dropped requests ratio of D-LNS{}. The \fbox{results in boxes} are the cases where the result of D-LNS{} is larger (worse) than the result from the exact method D-ES{}; results without boxes are the opposite.}
\begin{tabular}{c|cccc}
\toprule
& \(n_V=\)4 & \(n_V=\)10 & \(n_V=\)20 & \(n_V=\)50 \\
& \(n_L=\)10 & \(n_L=\)50 & \(n_L=\)100 & \(n_L=\)250 \\
\midrule
\(n_M=\)10 & \fbox{{0.06}} & \fbox{{0.20}} & \fbox{{0.26}} & 0.27 \\
\(n_M=\)20 & 0.00 & 0.26 & 0.31 & 0.45 \\
\(n_M=\)30 & 0.00 & 0.20 & 0.31 & 0.44 \\
\(n_M=\)50 & 0.03 & 0.12 & 0.28 & 0.36 \\
\bottomrule
\end{tabular}
\label{tab:normalized_dropped_demand}
\end{table}
Results show that the largest problem for which D-ES{} and S-ES{} can find a non-trivial solution within 120 seconds consists of 10 robots, 50 humans, and 50 POIs (a trivial solution is defined as dropping all requests and conducting no tour). In contrast, D-LNS{} returns a non-trivial solution for all the cases. For the case with 50 robots, 250 humans, and 50 POIs, D-LNS{} completes the optimization within 26.5 seconds and 7 iterations. The corresponding dropped requests ratios of D-LNS{} are listed in Table \ref{tab:normalized_dropped_demand}. According to the table, for the same deterministic SMRP, the LNS-based algorithm D-LNS{} outputs a better solution within 120 seconds except for the smallest cases.
\subsubsection{Time Limit and Dropped Requests}
With an unlimited time budget, the generated matches and tours could satisfy all human requests. Therefore, the multi-robot planner tries to find the most efficient tours that drop the fewest requests within a limited tour length. We use the smallest test case (\(n_V=4, n_L=10, n_M=10\)) as an example to show the trade-off between the tour time limit and the dropped requests ratio. Fig. \ref{fig:drop_demand_time_curve} shows a smooth and close-to-linear trade-off. Therefore, in a practical situation, we can select an optimal trade-off point accordingly.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figure/drop_demand_time_curve.pdf}
\caption{Trade-off between the tour time limit and dropped requests ratio.}
\label{fig:drop_demand_time_curve}
\end{figure}
\subsubsection{Time Uncertainty and Dropped Requests}\label{sec:computational_time_uncertainty}
We use the smallest test case (\(n_V=4, n_L=10, n_M=10\)) as an example to show the effect of uncertainty in an SMRP. In this parametric study, we assume the traveling times \(T_{kij}\) and visiting times \(T_{ki}\) are Gaussian distributed, with a ground-truth standard deviation of 30\% of the expected value; that is, the robots' imperfect actions cause the actual tour time to spread over a range characterized by this 30\% deviation. To apply the method S-ES{} under such uncertainty, the planner also needs an estimated distribution for these time parameters. When the estimated standard deviation (the level of uncertainty) changes, the planner changes its trade-off between satisfying human requests and respecting the tour time limit. Below, we change the standard deviation of the time distribution input to the planner and show the corresponding trade-off. The metrics used are the dropped requests ratio and the average probability of surpassing the time limit during the tour.
According to Fig. \ref{fig:prob_demand_curve}, and taking into account the randomness within the optimization, when the estimated standard deviation is larger, more requests are dropped in order to decrease the probability of surpassing the time limit: when a larger uncertainty is estimated, the planner becomes more cautious about time. However, the marginal decrease in the probability becomes small when the estimated standard deviation exceeds a certain threshold. Besides, accurately estimating the uncertainty level, 30\%, provides a good trade-off (both the dropped requests ratio and the probability of surpassing the time limit are small), and a slightly off estimate (e.g., estimating the standard deviation as 20\% or 40\%) still provides a similarly good trade-off. This shows that the optimization is not sensitive to inaccurate uncertainty estimation.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{figure/prob_demand_curve.pdf}
\caption{The influence of the estimated time uncertainty. The estimated distribution is the time distribution input to the planner. It does not necessarily equal the ground truth distribution. Here the ground truth distribution has a standard deviation of 0.3.}
\label{fig:prob_demand_curve}
\end{figure}
\subsubsection{An Example Case}
Here we use the case in Sec. \ref{sec:computational_time_uncertainty} with 30\% estimated standard deviation to show a plan generated by solving the SMRP using the S-ES{} method.
The 10 people select their POIs from 10 locations (a small case is used for visualization clarity), and there are 4 robots serving as tour guides. According to the optimization, human-robot teams 0-3 contain 2, 3, 2, and 3 humans, respectively (Table \ref{tab:example_team}). The distribution of the POIs and the tours are shown in Fig. \ref{fig:example_route}. Note that POI 9 is dropped by the planner in this example. The plan drops 12 of the 45 requested POIs and satisfies the rest.
\definecolor{colorrobot0}{rgb}{0,0.447000000000000,0.741000000000000}
\definecolor{colorrobot1}{rgb}{0.850000000000000,0.325000000000000,0.0980000000000000}
\definecolor{colorrobot2}{rgb}{0.929000000000000,0.694000000000000,0.125000000000000}
\definecolor{colorrobot3}{rgb}{0.494000000000000,0.184000000000000,0.556000000000000}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figure/example_route.pdf}
\caption{The distribution of the POIs and the routes of the robots. D refers to both the start and terminal. Robot 0: {\color{colorrobot0}blue}, robot 1: {\color{colorrobot1}brick red}, robot 2: {\color{colorrobot2}yellow}, robot 3: {\color{colorrobot3}purple}.}
\label{fig:example_route}
\end{figure}
\begin{table}[htbp]
\centering
\small
\caption{The planned tour and teams guided by the robots}
\begin{tabular}{c|lcl}
\toprule
Robot & \multicolumn{1}{c}{Tour} & Time (s) & \multicolumn{1}{c}{Humans} \\
\midrule
0 & D, 3, 1, 2, 7, D & 465 & 2, 8 \\
1 & D, 5, 8, 4, 0, D & 517 & 1, 4, 9 \\
2 & D, 6, 4, 0, 1, D & 522 & 0, 7 \\
3 & D, 3, 0, 4, 6, 8, D & 426 & 3, 5, 6 \\
\bottomrule
\end{tabular}
\label{tab:example_team}
\end{table}
\section{Introduction}
The COVID-19 pandemic started in December 2019 in China and subsequently spread rapidly in the rest of the world. After an initial ``hesitant" approach, most countries essentially adopted rules of social distancing that also originated in China. Several countries delayed the imposition of measures and, as a result, saw large numbers of infected persons and deaths. Other countries acted very swiftly and managed to control the infected numbers and especially the mode of spreading. There was an initial discussion related to ``herd immunity" that was in part attempted by some countries, but soon the basic global approach was that introduced by China, i.e. social distancing. However, the degree and swiftness of social distancing were different in each country; in Italy and Spain, for instance, there was an initial delay, while Greece acted very quickly and with strong measures. Although ultimately the effectiveness of any measure is reflected in the number of deaths, in this work we use the more error-prone infection data for a number of reasons. The infection data are representative of the dynamics of the disease at the country level, even though they clearly depend on the number of tests performed. In the initial phase, test availability was limited, and thus tests were administered on an as-needed basis and, as a result, targeted more closely individuals with symptoms. Additionally, since the COVID-19 pandemic primarily affects people of older ages, the death data are strongly age-biased and thus do not reflect the true dynamics of the spreading that leads to these deaths.
In an earlier publication, the first version of which appeared in the arXiv on March 31, 2020, i.e. right in the middle of the pandemic, we used a Gaussian hypothesis for the spreading of the disease and the number of infected persons and predicted the spreading and the horizon of the first wave \cite{BT}. We showed that this specific functional dependence originated from the imposed measures and, in particular, from an approximately linear reduction in the infection rate $\alpha (t)$ as a result of the imposed measures. This hypothesis proved to have a two-fold usefulness: On one hand, it gave a good prediction for the horizon of the epidemic in countries like Greece, Italy, and Spain while the measures were in effect. On the other hand, for countries such as the US and UK, where measures either were not enforced in full strength or were not applied fast enough, the prediction of the model based on the Gaussian hypothesis was rather poor. Although this was expected, it nevertheless gives a very good way to assess now, i.e. after the fact, how efficient the measures were in these and other countries. This may be done by evaluating from the real data an effective number that quantifies the harshness of the imposed measures, the adequacy of their timing, etc. This number, denoted by $\sigma$, is the slope of the assumed linearly decaying infection rate coefficient. A large $\sigma$ means that the effective measures were drastic and applied on time while, in the other extreme, $\sigma \approx 0 $ signifies the practical absence of measures.
In order to perform the present study, we use the following approach: We start with an SIR model \cite{KM} and use analytics in order to derive a differential equation for the infection rate $\alpha (t)$; this equation contains the information on the individual infection percentage in the population. We then take the data for the country's infected population and estimate the infection rate $\alpha (t)$. This step is done by using Machine Learning (ML) techniques and, in particular, physics-informed neural networks (PINN). We pre-train the latter on simulated SIR data and subsequently train it on each country's reported infection data. Instead of assessing a general $\alpha (t)$ curve, we explicitly assume a linear functional dependence; its slope $\sigma$ is the result of the ML procedure we apply. Once the infection rate is known, we validate the resulting SIR model against the country's data and then vary $\sigma$ to see the changes in the epidemic. This procedure gives a clear picture both of the effective measures in each country and of their efficiency.
The assumption of a linear decay in $\alpha (t)$ with slope $\sigma$ is tantamount to an effective linearization of the actual infection rate. Clearly, other, more complex forms may be assumed. We find that this simple form can efficiently capture the nature of the phenomenon and give a simple quantitative estimate of the imposed measures. The values of the slope $\sigma$ are obtained directly through ML and, thus, in a sense, are derived directly from the infection data. Thus, we may link each infection curve with an effective decay slope $\sigma$ that denotes the overall control that the measures exert on the infection phenomenon. Since the approach is fundamentally data-driven, the knowledge of a particular slope gives a handle on the possible measures exercised. Furthermore, once the PINN we develop works well, we may use it to make predictions. Specifically, we use data from the second phase of the spreading, which we assume starts after the initial decline in the infection, train the network with this data, and make short-term predictions for the current period.
The structure of the paper is thus the following. In the next section II, we map the SIR system of two first-order ODEs into a single second-order one that, in the case of constant $\alpha (t)$, may be solved exactly. Subsequently, in section III, we derive a first-order equation for $\alpha (t)$ and write its general solution for arbitrary time dependence of the infected population. We verify explicitly that when the population dynamics is Gaussian, the dominant decay in $\alpha (t)$ is linear, as discussed in a more restricted form in \cite{BT}. Subsequently, in section IV, we apply our ML arsenal to derive $\alpha (t)$ from each country's data and determine the spectrum of $\sigma$'s for eight countries. We feed this information back into the SIR model to investigate the actual as well as other hypothetical scenarios for the evolution of the epidemic in each country. This step gives a clear picture of the effectiveness of measures in each country. In section V, we use the data from the second phase of the pandemic for the eight countries, train the PINN with this data except for the last week of available data, and subsequently make predictions for the evolution during that week and compare them with the available data. Finally, in section VI, we summarize our findings and conclusions. In Appendix A, we present the solution of the SIR model obtained through the exponential ansatz as well as approximate solutions in extreme limits to demonstrate the infection's behavior. In Appendix B, we present a flowchart that details the ML approach used in this work.
\section{Exact mathematics of SIR}
The simple Susceptible-Infected-Removed (SIR) infection model is very powerful in
determining qualitative but also quantitative aspects of the COVID-19 pandemic\cite{KM}. The basic equations are
\begin{eqnarray}
\label{Eq-1a}
\frac{dS}{dt} = - \alpha S I\\
\label{Eq-1b}
\frac{dI}{dt} = \alpha S I - \mu I
\end{eqnarray}
where $S \equiv S(t) $, $I \equiv I (t)$ are the percentages of susceptible and infected individuals, respectively, and the infection and removal rates $\alpha \equiv \alpha (t)$, $\mu = \mu (t)$
are, in general, functions of time. We introduce the variable $q(t)$ through the ansatz:
\begin{equation}
\label{Eq-2}
I(t) = e^{q(t) - \int_0^t \mu (t') dt'}
\end{equation}
Upon substitution into the set of Eqs. (\ref{Eq-1a},\ref{Eq-1b}) we obtain
\begin{eqnarray}
\label{Eq-3a}
\dot{S} = - \alpha e^{q-\nu}S \\
\label{Eq-3b}
\dot{q} = \alpha S \\
\label{Eq-3c}
\nu \equiv \nu (t) = \int_0^t \mu (t') dt'
\end{eqnarray}
Using Eqs. (\ref{Eq-3a},\ref{Eq-3b}) we obtain a closed equation for $q$, i.e.
\begin{equation}
\label{Eq-4}
\ddot{q}=-\left( \alpha e^{q-\nu} - \frac{\dot{\alpha}}{\alpha} \right) \dot{q}
\end{equation}
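Indeed, differentiating Eq. (\ref{Eq-3b}) and using Eq. (\ref{Eq-3a}) together with $\alpha S = \dot{q}$ gives
\[
\ddot{q} = \dot{\alpha} S + \alpha \dot{S} = \frac{\dot{\alpha}}{\alpha}\, \dot{q} - \alpha e^{q-\nu}\, \dot{q} ,
\]
which is Eq. (\ref{Eq-4}).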
Equation (\ref{Eq-4}) is a single second-order equation that fully captures the dynamics of the SIR infection model. While it is highly nonlinear, it is nevertheless quite useful in determining the infection dynamics since it is general and contains the arbitrary time dependence of both the infection and removal rates. It will be used subsequently in the application of ML techniques to the COVID-19 infection data. In the case of constant infection and removal rates it can be solved exactly; this solution is given in Appendix A.
\section{Time-dependent infection rate equation}
We start with the general Eq. (\ref{Eq-4}) with time-dependent infection rate $\alpha (t)$ and, for the sake of simplicity, we assume that the recovery rate $\mu (t) \equiv \mu$ is a constant; in this case the expression of Eq. (\ref{Eq-4}) simplifies through $\nu =\mu t$ and thus $I(t) = exp \left[ q(t) - \mu t\right]$. While Eq. (\ref{Eq-4}) is rather involved, in several cases we might be interested in the inverse problem where, although we know the infection data, we are not able to assess directly the applied measures $\alpha (t)$ that generate it. We know, for instance, that a monotonic linear drop in the infection rate, as introduced for instance by gradual social distancing measures, results in an approximately Gaussian evolution \cite{BT}. We may thus write Eq. (\ref{Eq-4}) as:
\begin{eqnarray}
\label{Eq-20a}
\frac{d \alpha}{dt} + f(t) \alpha = g(t) \alpha^2 \\
\label{Eq-20b}
f(t) = -\frac{\ddot{q}}{\dot{q}}\\
\label{Eq-20c}
g(t) = e^{q - \mu t}.
\end{eqnarray}
Eq. (\ref{Eq-20a}) is a Bernoulli equation \cite{PZ} that can be turned into a linear first-order equation by the transformation $z(t) = 1/ \alpha (t)$; we obtain
\begin{equation}
\label{Eq-21}
\frac{d z}{dt} - f(t) z + g(t) = 0.
\end{equation}
The general solution of Eq. (\ref{Eq-20a}), obtained through the solution of Eq. (\ref{Eq-21}), is
\begin{eqnarray}
\label{Eq-22a}
\frac{1}{\alpha (t)} = C''e^{-F(t) } - e^{-F(t) } \int e^{F(t') } g(t') dt' \\
\label{Eq-22b}
F = - \int f(t') dt',
\end{eqnarray}
where $C''$ is an arbitrary constant.
Let us now consider the case where the infected population behaves similarly to a Gaussian
function \cite{BT}, keeping, however, also a linear time-term in the exponent that introduces some
time asymmetry, i.e. take
\begin{equation}
\label{Eq-23}
q(t) = \beta t^2 + \gamma t.
\end{equation}
Simple algebra leads to
\begin{eqnarray}
\label{Eq-24a}
q-\mu t = \beta t^2 + (\gamma - \mu ) t\\
\label{Eq-24b}
f(t) = - \frac{\ddot{q}}{\dot{q}} =- \frac{2 \beta}{2 \beta t + \gamma }\\
\label{Eq-24c}
g(t) = e^{q - \mu t} = e^{\beta t^2 + (\gamma - \mu ) t}\\
\label{Eq-24d}
F = - \int f(t') dt' = \int \frac{2 \beta}{2 \beta t' + \gamma } dt' = ln (2 \beta t + \gamma )
\end{eqnarray}
and thus the solution of Eq. (\ref{Eq-22a}) becomes
\begin{eqnarray}
\label{Eq-25a}
\frac{1}{\alpha (t)} = \frac{ \alpha (0) \gamma }{ 2 \beta t + \gamma } +\frac{K(t)}{ 2 \beta t + \gamma }\\
\label{Eq-25b}
K(t) = -
\int_0^t
(2 \beta t' + \gamma)e^{\beta {t'}^2 + (\gamma - \mu ) t'}dt' = \left[ 2 \beta -(\gamma - \mu )
(\gamma + 2 \beta t) \right] e^{\beta t^2 +(\gamma - \mu ) t}
\end{eqnarray}
Finally,
\begin{equation}
\label{Eq-26}
\alpha (t) = (2 \beta t + \gamma ) \left [ \alpha (0) \gamma + \frac{2 \beta}{2 \beta t + \gamma }
- (\gamma - \mu ) e^{\beta t^2 + (\gamma - \mu )t }\right]
\end{equation}
We note that the dominant term is the linear decay, since at longer times and for $\beta < 0$ the Gaussian term in Eq. (\ref{Eq-26}) essentially disappears, while the exponential term also decays when $\mu > \gamma$. In general, of course, the functional dependence of $\alpha (t)$ is more complex, and in cases of strong asymmetry introduced by $\gamma$ we have a distinctly nonlinear decay. We thus observe how significant the precise functional form of the time-dependent infection rate is for the general evolution of the SIR modeling of the infection phenomenon.
\section{Feature extraction through pre-trained, physics informed neural networks}
The exact mathematical analysis of the SIR model is important for the analysis of the data through ML techniques. The COVID-19 ``first wave" started at different times in various countries and had a completely different evolution. In countries where very restrictive measures similar to those of China were imposed, effective control of the spreading was swiftly accomplished. Other countries that either delayed or imposed partial or essentially no measures saw larger numbers of infected individuals and a slower decay in the infected numbers. Here we make no judgment as to whether measures were ``good" or ``bad"; we simply want to be able to extract the presence of the measures from the dynamics of the infected population. Specifically, we would like to see what is the imprint of social distancing on the distribution of the infected population across the eight model countries we follow. To accomplish this, we use a strategy that utilizes methods from Artificial Intelligence and in particular Machine Learning (ML). The basic assumption in our approach is that the SIR model can capture the essentials of the epidemic in each country. A direct consequence of this assumption is that we can use simulated data from the SIR model to pre-train the specific neural networks we use for each country. The specifics of the application of ML to this problem are detailed in Appendix B.
The application of ML techniques to data often suffers from the fact that the data are treated as ``pure", with no connection to a specific phenomenon. A remedy that introduces specificity is the use of physics-informed techniques, where the ML processes, typically those involving Artificial Neural Networks (ANN), are restricted by constraints imposed by physical laws in mathematical form \cite{RPK}. In the specific problem, the SIR equations play this role and put strict bounds on the ANN used for simulating the phenomenon. Once we have a physics-informed network that is trained on the infection data of a specific country, we may use it to extract the presence and persistence of the social distancing measures typified through the function $\alpha (t)$. Since the ANN finds a general decay, and also given the discussion in the previous section, we posit a linear dependence of the form $\alpha (t) = \sigma_0 + \sigma t $, where the intercept $\sigma_0$ and the slope $\sigma$ are determined through the ANN. The slope $\sigma$, in particular, is a parameter with an important physical significance, since it estimates, within the linearized model, the degree of efficiency of social distancing. In other words, a large value of $\sigma$ describes in an average way a country that followed strict social distancing measures through the first wave while, on the contrary, small values of the slope denote much looser adherence to measures. We note that these measures are not necessarily the externally imposed ones but also include the self-imposed ones.
Specifically, in this work, we approximate each country's daily reported cases using a deep neural network containing one input node, five hidden layers of 100 nodes with a ``sigmoid" activation function, and one output node. Initially, we use simulated SIR data and an arbitrary linear function for $\alpha(t)$ with a constant value for $\mu$, to train the model. The model is trained, using a custom training loop, by minimizing the mean squared error loss on the data, $MSE_{D}$:
\begin{equation}
MSE_{D} = \frac{1}{N_D} \sum_{i=1}^{N_D} |x_i - \tilde{x}_i|^2,
\end{equation}
where ${ \{ x_i, \tilde{x}_i \} }_{i=1}^{N_D} $ denotes the set of the reported cases, $x_i = ln(I_i)$, and the corresponding predicted cases, $\tilde{x}_i = model(t_i)$, and the mean squared error loss defined by Eq. (\ref{Eq-4}) with $\alpha = \alpha(t) = \sigma_0 + \sigma t$ and $\nu = \mu t$ (i.e. $\mu$ = constant), $MSE_{SIR}$:
\begin{equation}
MSE_{SIR} = \frac{1}{N_{SIR}} \sum_{j=1}^{N_{SIR}} |f(t_j, \tilde{x}_j, \dot{\tilde{x}}_j, \ddot{\tilde{x}}_j, \sigma_0, \sigma, \mu)|^2,
\end{equation}
where,
\begin{equation}
f(t_j, \tilde{x}_j, \dot{\tilde{x}}_j, \ddot{\tilde{x}}_j, \sigma_0, \sigma, \mu) = \ddot{\tilde{x}}_j + \left( \alpha_j e^{\tilde{x}_j} - \frac{\dot{\alpha}_j}{\alpha_j} \right) (\dot{\tilde{x}}_j + \mu) = 0.
\end{equation}
Then, for each of the countries under consideration, we load the real data and smooth it using a seven-time-step moving average. We then scale the data using Min-Max normalization. We load the pre-trained model and allow all its weights to be tuned by again minimizing both the $MSE_{D}$ and $MSE_{SIR}$ loss functions on the country's data, obtaining at the same time the optimal values of $\alpha(t)$ and $\mu$ for the given country. The pre-trained model is used to accelerate each country's training process. The training process for each country stops using early stopping with a horizon of 200 epochs.
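A condensed TensorFlow sketch of one step of this custom training loop is given below; it is an illustration of the procedure rather than the full implementation (data scaling, pre-training, and early stopping are omitted, and the initial parameter values are arbitrary).
\begin{verbatim}
import tensorflow as tf

# PINN mapping t -> x(t) = ln I(t), with the architecture described above.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(100, activation="sigmoid") for _ in range(5)]
    + [tf.keras.layers.Dense(1)])

sigma0 = tf.Variable(0.5)    # intercept of alpha(t)  (illustrative values)
sigma = tf.Variable(-0.01)   # slope of alpha(t)
mu = tf.Variable(0.1)        # constant removal rate
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(t_data, x_data, t_col):
    """One Adam step on MSE_D + MSE_SIR; t_col are collocation points."""
    with tf.GradientTape() as tape:
        mse_d = tf.reduce_mean(tf.square(x_data - model(t_data)))
        # First and second time derivatives of x via nested tapes.
        with tf.GradientTape() as g2:
            g2.watch(t_col)
            with tf.GradientTape() as g1:
                g1.watch(t_col)
                x = model(t_col)
            x_t = g1.gradient(x, t_col)
        x_tt = g2.gradient(x_t, t_col)
        # Residual of Eq. (4) with alpha(t) = sigma0 + sigma*t (so d(alpha)/dt = sigma).
        alpha = sigma0 + sigma * t_col
        res = x_tt + (alpha * tf.exp(x) - sigma / alpha) * (x_t + mu)
        loss = mse_d + tf.reduce_mean(tf.square(res))
    variables = model.trainable_variables + [sigma0, sigma, mu]
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
\end{verbatim}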
Having extracted the optimal $\alpha(t)$ as well as $\mu$ for each country, we use them to solve the SIR model, Eqs. (\ref{Eq-1a}, \ref{Eq-1b}). The solution is then fitted to the country's real data using the initial conditions ($I_0$, $S_0$) as fitting parameters. The total number of predicted cases during the ``first wave" period of each country, including the relative error with respect to the corresponding total number of reported cases and the total numbers of cases obtained by varying $\alpha(t)$ by $\pm$10\%, is presented in Table (\ref{Tab1}). A plot of the results for each country is shown in Fig. (\ref{Fig1}). In the map of Fig. (\ref{Fig2}) we portray the results of Table (\ref{Tab1}) in a more graphic way.
The machine learning algorithms were implemented in Python using TensorFlow/Keras \cite{TFK} and the ADAM \cite{ADAM} optimizer. The data used in this study were published online at OurWorldInData.org\cite{SRC}.
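Schematically, the SIR integration step of the previous paragraph amounts to the following sketch; here \texttt{S0}, \texttt{I0} are the fitted initial conditions and \texttt{t\_max} is the length of the first-wave window (illustrative names).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, u, sigma0, sigma, mu):
    S, I = u
    alpha = sigma0 + sigma * t          # extracted linear infection rate
    return [-alpha * S * I, alpha * S * I - mu * I]

sol = solve_ivp(sir_rhs, (0.0, t_max), [S0, I0],
                args=(sigma0, sigma, mu), dense_output=True)
I_pred = sol.sol(np.linspace(0.0, t_max, 200))[1]   # predicted I(t)
\end{verbatim}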
\begin{figure} [h]
\includegraphics[width=8.5cm]{USA.png} \includegraphics[width=8.5cm]{Spain.png}
\includegraphics[width=8.5cm]{Germany.png}\includegraphics[width=8.5cm]{Italy.png}
\includegraphics[width=8.5cm]{Netherlands.png} \includegraphics[width=8.5cm]{UK.png}
\includegraphics[width=8.5cm]{France.png} \includegraphics[width=8.5cm]{Greece.png}
\caption{Country level predicted infections, $I(t)$, for the extracted $\alpha(t)$ of each country. Magenta dots represent the reported cases of each country. Red, dashed and dotted line, blue solid and green dashed line represent the infections with +10\%, no change, -10\% to the infection rate $\alpha(t)$, respectively. Black dashed line represents the extracted infection rate of each country.}
\label{Fig1}
\end{figure}
\begin{table}[h!]
\begin{minipage}[t]{0.65\linewidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|c|}
\hline
Country & \multicolumn{2}{|c|}{Total cases} & Error & $\alpha(t)$ + 10\% & $\alpha(t)$ - 10\% & $R^2$\\
& Reported & Predicted & (\%) & (\% Difference) & (\% Difference) & \\
\hline
USA & 1961185 & 1945830 & -0.78 & 1793214 (-7.84) & 2063029 (6.02) & 0.944\\
\hline
Italy & 240961 & 275667 & 14.40 & 259349 (-5.92) & 284978 (3.38) & 0.863\\
\hline
Spain & 245938 & 280859 & 14.20 & 249833 (-11.05) & 298190 (6.17) & 0.791\\
\hline
UK & 286141 & 312211 & 9.11 & 217528 (-30.33) & 382591 (22.54) & 0.872 \\
\hline
Germany & 186839 & 215563 & 9.45 & 182532 (-15.32) & 233021 (8.10) & 0.776\\
\hline
The Netherlands & 50412 & 55040 & 9.18 & 52524 (-4.57) & 56583 (2.80) & 0.837 \\
\hline
France & 149668 & 163580 & 9.30 & 117539 (-28.15) & 187154 (14.41) & 0.702 \\
\hline
Greece & 2967 & 3014 & 1.60 & 2794 (-7.32) & 3155 (4.68) & 0.540 \\
\hline
\end{tabular}
\end{minipage}\hfill
\begin{minipage}[c]{0.35\linewidth}
\centering
\includegraphics[width = 5cm]{Slopes.png}
\end{minipage}
\label{Tab1}
\caption{Left: Total number of reported cases during the ``first wave" for each country and the corresponding predicted cases and percentage error obtained from our model, including the predictions with $\pm$ 10\% variation of a(t). Right: A bar plot of the slope $\sigma$ of each country signifying the degree of adherence to measures.}
\end{table}
\begin{figure} [h]
\includegraphics[width=8.5cm]{map.png}
\caption{Presentation of the values of $\sigma$ derived for the first phase of the infection for each country on a map for better visualization. The red color of the US denotes the opposite sign, i.e., in terms of the present interpretation, non-adherence to measures on average. Among the European countries, the value for Greece is maximal.}
\label{Fig2}
\end{figure}
\section{Short term predictions}
The arsenal of physics-informed machine learning was used in the previous section in order to extract dynamical parameters, such as the time-dependent infection and removal rates, from the documented infection data. The procedure based on SIR pre-training proved to be quite efficient and gave a hierarchy of $\alpha (t)$ for the different countries for the initial period of the infection. It is both tempting and challenging to apply this procedure to the present phase of the COVID-19 pandemic and attempt to make future predictions. In the process of future point evaluations, our procedure needs to satisfy two constraints: one is the overall mean-square minimization that reduces the overall error; the second is the one imposed by physics, i.e. it must follow the SIR model. In order to accomplish the latter, the procedure needs to know the functional form of $\alpha (t)$ as well as the value of $\mu$ at the future points. We provide this information through extrapolation of the values from previous times. Once these values are known, the SIR dynamics is guaranteed and provides the second, physics-derived constraint.
In Fig. (\ref{Fig3}) we show the evolution of the current phase of the pandemic as well as the prediction obtained for a horizon of one week. In preparing these results, we used the available COVID-19 data starting precisely where the first phase ended, up to one week before the end dates, for PINN training in all eight countries. Subsequently, we used the network for prediction and compared the results with the existing data. We note that the network's short-term predictive power is, on average, quite good in most countries.
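In code, this extrapolation step is simple; the following sketch (with illustrative variable names, and ignoring the Min-Max scaling for clarity) extends the linear $\alpha (t)$ one week past the training window and queries the trained network there.
\begin{verbatim}
import numpy as np

t_future = np.arange(t_train[-1] + 1.0, t_train[-1] + 8.0)   # one-week horizon
alpha_future = sigma0_fit + sigma_fit * t_future             # extrapolated alpha(t)
x_future = model(t_future.reshape(-1, 1).astype("float32"))  # predicted ln I(t)
I_future = np.exp(np.asarray(x_future).ravel())              # back to case counts
\end{verbatim}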
\begin{figure} [h]
\includegraphics[width=8.5cm]{USA2.jpeg} \includegraphics[width=8.5cm]{Spain2.jpeg}
\includegraphics[width=8.5cm]{Germany2.jpeg} \includegraphics[width=8.5cm]{Italy2.jpeg}
\includegraphics[width=8.5cm]{Netherlands2.jpeg} \includegraphics[width=8.5cm]{UK2.jpeg}
\includegraphics[width=8.5cm]{France2.jpeg} \includegraphics[width=8.5cm]{Greece2.jpeg}
\caption{PINN short term predictions and comparison with existing data. We use data of the second phase of COVID-19 spreading except for the last week, we train the network and predict the evolution during the last week. In most cases the short term prediction is reasonably good. }
\label{Fig3}
\end{figure}
\section{Conclusions}
The spreading of COVID-19 has generated a wave of illness and death worldwide, accompanied by a severe disruption in financial, educational, and commercial activities, global travel, etc. \cite{RM,KAN,JS}. During the first phase of the spreading, there were different approaches to the measures to be taken to slow it down. Different countries reacted in different ways and, as a result, the epidemic dynamics proceeded differently. The infection curves were different and depended strongly both on the imposed social distancing measures and on the adoption of responsible practices by individuals. One important aspect of the pandemic is to find ways to assess the degree to which the social distancing measures were followed. It is not trivial to extract this information from the data since the infection dynamics are directly related to the imposed social distancing measures. Furthermore, this cannot be done in a completely model-free way, and thus assumptions about both the model and the way measures are imposed are important.
In the present work, we followed an early attempt \cite{BT} and used the publicly available infection data to assess the effectiveness of and adherence to social distancing in different countries; in doing this analysis, we assumed that the mathematical model underlying the infection dynamics is the simplest SIR model. Before using the arsenal of ML, we tackled the model analytically and produced two basic results: the first one is a general analytical solution of the model obtained through a specific exponential ansatz; the second, also dependent on this ansatz, is a differential equation for the function $\alpha (t)$ that describes the time-dependent nature of the infection rate. The latter depends strictly on the imposed social distancing measures as well as on the practices of individuals. We pointed out, as also in reference \cite{BT}, that a linear drop in the infection rate leads to an approximately Gaussian functional dependence of the infected population. The specifics of this functional form depend both on the form and values of $\alpha (t)$ and on the removal rate $\mu$.
In order to extract the time-dependent infection rate from the data, we used physics-informed neural networks, i.e. a machine learning method that uses input from the actual model assumed, viz. SIR. This input, together with the real infection data from each country we considered, led to a determination of the assumed linear-in-time infection rate. The data-derived slope $\sigma$ signifies the adherence of each country to social distancing. In Greece, for instance, the slope is large in absolute value, designating strong application of the imposed measures by the individuals. In the other extreme, we find the USA with a practically zero slope, demonstrating that the measures taken had low efficiency.
The other six countries we analyzed fall at intermediate locations between these two extremes. Applying to the SIR model of each country an alternative infection rate that differs by a few percent ($\pm 10\%$) from the one obtained through ML gives an estimate of how sensitive the infection is to the applied measures. We find that this variation, while it strongly affects the early fast rise of the SIR solution, results in quite a different infection decay and horizon in countries like the UK.
Once we know how the PINN behaves with the data for the initial period of the infection, we may use it for the second phase. We consider that the latter starts at the end of the initial period and reaches the present day. Thus, we use country infection data during this period, except for the last week, to train the network, subsequently make predictions for the last week, and compare them with the real data. We find that while the short-term predictive power of the PINN is good, it shows large deviations in countries where the data have a rather stochastic character.
The basic conclusion of this work is that the use of physics-informed ML may enable the extraction of COVID-19 infection information in different countries, show how different measures and practices are directly reflected in the data, and ultimately make predictions. The use of physics in machine learning gives specificity to the data but, on the other hand, is restricted and sometimes limited by the inserted physics knowledge. The present approach assumes a well-mixed, essentially uniform country, an assumption that is introduced through the use of the SIR model. However, countries have regions, and each region may behave differently for geographical, environmental, cultural, as well as population reasons. If regional data are available, one can go one step further and introduce a spatial, in addition to the temporal, distribution of the infection, and from this obtain more accurate results and predictions. We believe the methodology used in this work may be extended to this more realistic case and provide a more direct approach to local dynamics and the effectiveness of imposed measures at a local level.
\section*{Appendix A: Exact solution of the SIR model for time-independent infection rates}
\subsection{Differential Equations}
For the simpler case of constant $\alpha$ and $\mu$, Eq. (\ref{Eq-4}) becomes
\begin{equation}
\label{Eq-5}
\ddot{q}=- \alpha e^{q-\mu t} \dot{q}
\end{equation}
Introducing the transformation $x=q-\mu t$ we turn Eq. (\ref{Eq-5}) into the following form:
\begin{equation}
\label{Eq-6}
\ddot{x} + \alpha e^{x}\dot{x}+ \alpha \mu e^{x} =0
\end{equation}
The new initial conditions are $x(0) = ln I(0)$ and $\dot{x}(0) = \alpha S(0) - \mu$.
Eq. (\ref{Eq-6}) is a Li\'enard equation that can be turned into an Abel equation through the transformation \cite{Polyanin}
\begin{equation}
\label{Eq-7}
y(x) = \dot{x}
\end{equation}
We obtain the following Abel equation of the second kind:
\begin{eqnarray}
\label{Eq-8a}
y y_x = f_1 (x) y + f_0 (x)\\
\label{Eq-8b}
f_1 (x) = - \alpha e^x ~,~f_0 (x) =- \alpha \mu e^x
\end{eqnarray}
We introduce further the variable $\xi$ as follows
\begin{equation}
\label{Eq-9}
\xi = \int f_1 (x) dx = - \alpha e^x
\end{equation}
Since $y_x = y_{\xi} f_1 (x)$, Eq. (\ref{Eq-8a}) becomes
\begin{equation}
y y_{\xi} = y + \mu
\label{Eq-10}
\end{equation}
The Eq. (\ref{Eq-10}) has the implicit solution
\begin{equation}
\xi = y - \mu ln|y+ \mu | +C
\label{Eq-11}
\end{equation}
where $C$ is an arbitrary constant, or
\begin{equation}
-\alpha e^x = y - \mu ln|y+\mu | +C
\label{Eq-12}
\end{equation}
Once the solution $y = y (x) $ is substituted to Eq. (\ref{Eq-10}) in the form
\begin{equation}
\label{Eq-13}
t= \int^x \frac{dx}{y(x)}
\end{equation}
we have the implicit solution $t= t(x)$ of Eq. (\ref{Eq-6}). Upon inversion of this solution we may obtain $q(t)$ and thus have a solution for the original SIR equations.
\subsection{Initial Conditions}
Eq. (\ref{Eq-5}) is a second-order equation while the SIR system of Eqs. (\ref{Eq-1a},\ref{Eq-1b}) constitutes a system of two first-order equations with initial conditions $S(0)$ and $I(0)$. It is easy to see that $q(0) = ln I(0)$ while $\dot{q}(0) = \alpha S(0)$. Thus for Eq. (\ref{Eq-6}) we have the following initial conditions: $x(0) = q(0) = ln I(0)$ and $\dot{x}(0) = \dot{q}(0) -\mu = \alpha S(0) - \mu$. Since both the susceptible and infected variables are percentages of the total population, the range of the variable $q=q(t)$ is $(-\infty, 0]$ while $x(t)$ similarly takes values in the same range.
Let us designate for simplicity the values at $t=0$ as $x(0) = k$ and $\dot{x}(0) = m$; clearly $k = ln I(0) < 0$ and $m = \alpha S(0) - \mu$. The latter can be either positive, negative, or zero, depending on the initial state of the infection and the corresponding reproduction number $R_0$. The defining transformation of Eq. (\ref{Eq-7}) at $t=0$ becomes $y(k) = m$ and thus the constant $C$ in Eq. (\ref{Eq-12}) is
\begin{equation}
C= -\alpha e^k - m + \mu ln|m+\mu | \equiv - \alpha I(0) - \alpha S(0) +\mu + \mu ln | \alpha S(0) |
\label{Eq-14}
\end{equation}
In other words, the solution Eq. (\ref{Eq-12}) of the differential equation of Eq. (\ref{Eq-10}) should be solved for $x \ge k$ since at $t=0$ we have $x(0) = k$. Thus, the original SIR problem has solution given by
\begin{equation}
- \alpha e^x = y - \mu ln|y+\mu | - \alpha I(0) - \alpha S(0) +\mu + \mu ln | \alpha S(0) |
\label{Eq-15}
\end{equation}
or, in the equivalent form
\begin{equation}
ln\left[\frac{e^y }{|y+\mu |^\mu }\right] = - \alpha e^x +\alpha I(0) + \alpha S(0) -\mu - \mu ln | \alpha S(0) |
\label{Eq-15a}
\end{equation}
for $x \ge lnI(0)$ and with the final implicit formula given through the integral of Eq. (\ref{Eq-13}) modified as follows:
\begin{equation}
\label{Eq-16}
t= \int_k^x \frac{dx}{y(x)} \equiv \int_{ln I(0)}^{ln I(t)} \frac{dx}{y(x)}
\end{equation}
\subsection{Approximate solutions}
We can write Eq. (\ref{Eq-6}) in the following form
\begin{equation}
\label{Eq-17}
x'' + \lambda e^{x} x' + e^{x} =0
\end{equation}
where $\lambda = \sqrt{\frac{\alpha}{\mu}}$ and the prime denotes the derivative with respect to the new time variable
$\tau = \sqrt{\alpha \mu } t$.
A particular solution of Eq. (\ref{Eq-17}) is given by $x(\tau ) = - \frac{\tau}{\lambda}$, or $x(t) = - \mu t$. This is, of course, a trivial solution since it corresponds to the solution $q(t) = 0$ of Eq. (\ref{Eq-5}) and thus to a constant infection population.
To find approximate solutions we may look into the limits $\lambda >>1$ and $\lambda << 1$. In the former case, we may ignore the last term of Eq. (\ref{Eq-17}) and solve the equation
\begin{equation}
\label{Eq-18}
x'' + \lambda e^{x} x' =0
\end{equation}
Its solution in the original variables is
\begin{eqnarray}
\label{Eq-18a}
t = \int \frac{dx}{C - \alpha e^x} = \frac{x}{C} - \frac{ln (C - \alpha e^x ) }{C}\\
\label{Eq-18b}
C= \alpha S(0) - \mu + \alpha I(0)
\end{eqnarray}
In the second case we ignore in turn the first derivative term of Eq. (\ref{Eq-17}), i.e.
\begin{equation}
\label{Eq-19}
x'' + e^{x} =0
\end{equation}
and obtain the solution in the original variables
\begin{eqnarray}
\label{Eq-19a}
t = \int \frac{dx}{\sqrt{2 (C' - \alpha \mu e^x )} }= - \sqrt{\frac{2}{C'}}\, arctanh\left[\frac{\sqrt{C'- \alpha \mu e^x}}{\sqrt{C'}}\right]\\
\label{Eq-19b}
C' = \frac{\left[\alpha S(0) - \mu \right]^2}{2} + \alpha \mu I(0)
\end{eqnarray}
\section*{Appendix B: Machine learning application procedure}
In this Appendix we describe the methods used in this work through a flowchart. We note that after the extraction of the infection rate from the data, we use the SIR model for further investigation of the infection.\\
\\
\usetikzlibrary{shapes.geometric, arrows, calc}
\tikzstyle{decision} = [diamond, draw, text width=4.5em, text badly centered, node distance=3cm]
\tikzstyle{block} = [rectangle, draw, text width=10em, text badly centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{terminator} = [ draw, ellipse, node distance=3cm, minimum height=2em]
\begin{tikzpicture}[node distance=2cm, auto]
\node [block] (cm) {Create the Model. Data Driven \& PINN.};
\node [block, right of = cm, xshift=3cm] (tsd) {Train the Model on Simulated SIR Data.};
\node [block, below of = tsd] (lcrd) {Load Country's \\ Real Data \& the Pre-Trained Model.};
\node [block, below of = lcrd] (smooth) {Smooth Country's Data using a seven time-steps moving average.};
\node [block, below of = smooth] (train) {Train the Model \\ using Early Stopping.};
\node [block, below of = train] (extract) {Extract Country's $x(t)$, $\alpha(t)$ and $\mu$.};
\node [block, right of = extract, xshift = 3cm] (sim) {Solve the SIR model \\ using the extracted $\alpha(t)$ and $\mu$.};
\node [block, above of = sim, yshift = 6.5mm] (fit) {Fit to Country's Real Data using the initial conditions as fitting parameters.};
\node [decision, above of = fit, yshift = 3.5mm] (check) {Other Countries into consideration?};
\node [block, text width=5em, right of = check, xshift = 2cm] (stop) {Stop};
\path [line] (cm) -- (tsd);
\path [line] (tsd) -- (lcrd);
\path [line] (lcrd) -- (smooth);
\path [line] (smooth) -- (train);
\path [line] (train) -- (extract);
\path [line] (extract) -- (sim);
\path [line] (sim) -- (fit);
\path [line] (fit) -- (check);
\path [line] (check) -- node[anchor=south] {Yes} (lcrd.east);
\path [line] (check.east) -- node {No} (stop.west);
\end{tikzpicture}
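For concreteness, the smoothing step of the flowchart (a seven time-step moving average) can be written in a few lines of Python; the synthetic curve below is only a stand-in for a country's reported cases:
\begin{verbatim}
import numpy as np

def smooth7(x):
    # Seven time-step centred moving average (edges are zero-padded)
    return np.convolve(x, np.ones(7)/7.0, mode="same")

rng = np.random.default_rng(0)
cases = 1000*np.exp(-0.5*((np.arange(120) - 60)/20.0)**2)  # toy curve
noisy = cases + rng.normal(0.0, 30.0, cases.size)          # noise
smoothed = smooth7(noisy)
\end{verbatim}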
\section{Introduction}
There are many common physical processes and other similarities in the study of the Heliosphere and of the astrospheres of massive stars, but also a few differences.
Observations have shown the structure of the Solar Wind \citep{BalBadBon19, KasBalBel19} and Heliosphere \citep{McCAllBoc09, DiaKriMit17, ZanNakWeb19} in incredible detail, and global 3D computer models \citep{PogZanOgi06} have shown how these data can be interpreted in the context of the magnetohydrodynamics of partially ionized plasmas.
Much more limited observations are possible for other Sun-like stars, but we can measure their mass-loss rates and some properties of their stellar-wind bubbles \citep{WooMueZan05}.
Massive stars are much rarer than Sun-like stars and the nearest are $\sim10^2$\,pc distant \citep{GvaLanMac12}, but their extreme luminosity helps with observing their astrospheres.
Winds and extreme ultraviolet (EUV; $h\nu>13.6$\,eV) radiation from massive stars are many orders of magnitude stronger than for Solar-type stars \citep[e.g.][]{Lan12}, and their wind bubbles \citep{CasMcCWea75} and photoionized H~\textsc{ii} regions \citep{MacGvaMoh15} are of order parsec scale, $\sim10^3\times$ larger than the Heliosphere.
The mass of displaced interstellar material in the bow shock is correspondingly much larger, and this means that it is often easier to observe the astrosphere around a massive star than that around a Sun-like star.
Understanding the astrospheres of massive stars is more complicated than the Heliosphere because of the different timescales involved.
The size of the Heliosphere is $D\approx 100$\,A.U., and the Sun is moving with velocity $v_\star\sim10$\,km\,s$^{-1}$, and so a dynamical timescale $\tau_\mathrm{d}\equiv D/v_\star\sim10^2$\,years can be estimated.
This is many orders of magnitude shorter than the evolutionary timescale of the Sun ($\sim10^{10}$\,years), and shorter than the variation timescale for the local interstellar medium (ISM) properties encountered by the Heliosphere (i.e.~density fluctuations are weak on 100\,A.U. scales).
For massive stars, typical wind-bubble sizes are $\sim 1$\,pc (1\,pc$=3.086\times10^{18}$\,cm) and space velocities are again typically $v_\star\sim10$\,km\,s$^{-1}$, leading to $\tau_\mathrm{d}\sim10^5$\,years.
This is not too different from the stellar nuclear timescale over which a massive star evolves significantly ($\sim10^6$\,years, or shorter for helium-burning and later phases).
Furthermore, the ISM often has significant density fluctuations over the parsec length-scales traversed by a runaway massive star over $10^5$ years, which means that the external ram pressure may change faster than the astrosphere can relax to its equilibrium size and shape.
Recent 2D simulations of the Bubble Nebula \citep{GreMacHaw19} modelled a star moving through a uniform ISM, and some synthetic observations, such as predicted H$\alpha$ intensity maps of the nebula, show a discrepancy with observations because the massive star BD+60$^{\circ}$\,2522 is apparently embedded in a medium of increasing density along its direction of motion.
Pioneering 2D hydrodynamical simulations \citep{GarLanMac96, ComKap98, BloKoe98} have shown the complexity of astrospheres from evolving and runaway stars in different environments.
These wind-blown bubbles have been studied for runaway massive stars of different masses for their entire evolution \citep{MeyMacLan14}, and also the expansion of a supernova explosion into the pre-shaped circumstellar environment \citep{MeyLanMac15}.
Magnetic fields have been included in astrosphere simulations for massive hot stars \citep{vanMelMar15, MeyMigKui17}, but until recently only for 2D calculations with field geometry limited by the rotational symmetry.
A 3D implementation in spherical coordinates has recently been presented and applied to astrospheres of massive stars \citep{SchBaaFic20}.
The need for 3D simulations is twofold:
\begin{enumerate}
\item it allows us to use a general ISM magnetic field orientation that is not parallel to the direction of motion and/or rotation axis of the star being modelled; and
\item the forward shock is often radiative and highly compressed, subject to thin-shell instabilities \citep{DgaBurNor96}, which can lead to an artificial accumulation of dense gas along the symmetry axis in 2D calculations, compromising the validity of the solution \citep[e.g.][]{GreMacHaw19}.
\end{enumerate}
In this contribution we describe an upgrade of the \textsc{pion} magnetohydrodynamics (MHD) software \citep{MacLim11, Mac12} and some initial results of 3D MHD simulations of astrospheres.
\begin{figure}
\includegraphics[width=0.6\textwidth]{PION_blocks.png}\hspace{2pc}%
\begin{minipage}[b]{0.38\textwidth}\caption{\label{fig:grid} An example of the nested-grid configuration for modelling a stellar-wind bow shock.
The overlaid grid shows blocks of $32^3$ grid cells.
The logarithm of the gas density is plotted for a 3D hydrodynamic simulation, run with 3 levels of refinement and $256^3$ grid cells at each level.}
\end{minipage}
\end{figure}
\section{Methods}
\textsc{pion} is a finite-volume fluid-dynamics programme for solving the Euler \citep{MacLim10} and ideal-MHD \citep{MacLim11} equations on a rectilinear mesh, including a raytracer coupled to a chemical-kinetics solver to track the microphysical heating/cooling and the ionization of hydrogen by photoionization and collisional processes \citep{Mac12}.
We have recently added static mesh-refinement capabilities, to make 3D simulations of wind bubbles tractable with modest computing resources.
In brief, the improvements consist of the following:
\begin{enumerate}
\item
We implemented coarse-to-fine interpolation (prolongation) to populate the boundaries of refined grids with data from their parent grid, and fine-to-coarse averaging (restriction) to update regions of coarse grids with more accurately calculated data from an underlying finer-level grid \citep{TotRoe02} (a minimal sketch of these two operations is given after this list).
\item
We also implemented the boundary flux correction that ensures conservation of conserved quantities across different grid levels \citep{BerCol89}.
\item
We updated the raytracing algorithms for ionizing radiation from point sources so that they work on a multiply nested grid.
\item
We improved the Riemann solvers for HD and MHD, adding the robust (but diffusive) HLL solver for cases where the more accurate Roe and HLLD solvers do not maintain positive pressure or density \citep{MigZanTze12}.
\end{enumerate}
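To make items (i) and (ii) above concrete, a minimal 1D sketch of the two grid-transfer operations for a factor-2 refinement is given below (an illustration only, not the \textsc{pion} implementation itself; the minmod limiter is one common choice):
\begin{verbatim}
import numpy as np

def restrict(fine):
    # Fine-to-coarse averaging: each coarse cell is the mean of its
    # two children on the factor-2 refined grid (1D, cell-centred).
    return 0.5*(fine[0::2] + fine[1::2])

def prolong(coarse):
    # Coarse-to-fine interpolation with a minmod-limited linear
    # reconstruction, so the operation is conservative and monotone.
    dl = np.diff(coarse, prepend=coarse[0])
    dr = np.diff(coarse, append=coarse[-1])
    slope = np.where(dl*dr > 0,
                     np.sign(dl)*np.minimum(np.abs(dl), np.abs(dr)), 0.0)
    fine = np.empty(2*coarse.size)
    fine[0::2] = coarse - 0.25*slope
    fine[1::2] = coarse + 0.25*slope
    return fine

coarse = np.linspace(0.0, 1.0, 16)**2
assert np.allclose(restrict(prolong(coarse)), coarse)  # conservation
\end{verbatim}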
The method is similar to some previous 2D algorithms \citep{YorKai95, FreHenYor06}, using nested grids that differ in spatial and temporal resolution by a factor of 2 at each level.
For each dimension, the focus of the nested grids can be at the centre or the negative or positive extremity of the coarsest grid.
More details will be presented in a forthcoming paper, to accompany a public release of the software.
An example of a slice through the midplane of a 3D hydrodynamic simulation of a stellar-wind bow shock is shown in Fig.~\ref{fig:grid}, where the grid shows blocks of $32^3$ grid cells on three levels of refinement, focused on the apex of the bow shock.
\section{Results}
A 3D MHD simulation of a stellar-wind bow shock was set up with parameters given in Table~\ref{tab:3dmhd}.
The ISM values are typical of the diffuse gas in the Galactic plane to within a factor of 2 \citep{MeyMacLan14}, and the pressure is appropriate for photoionized gas at a temperature of $T\approx10^4$\,K.
The stellar values are typical for a massive star, here moving with velocity $v_\star=30\,\mathrm{km\,s}^{-1}$ through the ISM.
For these values, the standoff distance of the bow shock is
\begin{equation}
R_\mathrm{SO} \equiv \sqrt{\frac{\dot{M}v_\infty}{4\pi\rho_0 (v_\star^2+a^2)}} \approx0.60\,\mathrm{pc} \;,
\end{equation}
where symbols are as defined in Table~\ref{tab:3dmhd}, and $a\approx13.3\,$km\,s$^{-1}$ is the adiabatic sound speed in the photoionized ISM.
This is where we expect to find the wind termination-shock in the upstream direction.
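The quoted value follows directly from the parameters in Table~\ref{tab:3dmhd}; a short Python check in cgs units gives:
\begin{verbatim}
import numpy as np

MSUN, YR, PC = 1.989e33, 3.156e7, 3.086e18     # cgs conversions
Mdot  = 1.0e-7*MSUN/YR                         # mass-loss rate [g/s]
vinf  = 1500.0e5                               # terminal velocity [cm/s]
rho0  = 2.0e-24                                # ISM density [g/cm^3]
vstar = 30.0e5                                 # space velocity [cm/s]
a     = 13.3e5                                 # sound speed [cm/s]

R_SO = np.sqrt(Mdot*vinf/(4.0*np.pi*rho0*(vstar**2 + a**2)))
print(R_SO/PC)                                 # ~0.60 pc
\end{verbatim}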
\begin{table}
\caption{\label{tab:3dmhd}Parameters of the interstellar medium (ISM) and stellar wind for 3D MHD simulation of a bow shock.}
\begin{center}
\begin{tabular}{ll}
\br
Parameter&Value\\
\mr
ISM density, $\rho_0$ & $2.0\times10^{-24}\,\mathrm{g\,cm}^{-3}$ \\
ISM pressure, $p_g$ & $2.9\times10^{-12}\,\mathrm{dyne\,cm}^{-2}$ \\
ISM velocity, $\mathbf{v}$ & $[-30,0,0] \,\mathrm{km\,s}^{-1}$ \\
ISM B-field, $\mathbf{B}_0$ & $[4,1,1]\times10^{-6}$\,G \\
\mr
Wind mass-loss rate, $\dot{M}$ & $10^{-7}\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}$ \\
Wind terminal velocity, $v_\infty$ & 1500\,km\,s$^{-1}$ \\
Surface rotation (equator), $v_\mathrm{rot}$ & 100\,km\,s$^{-1}$ \\
Surface split-monopole field strength, $|\mathbf{B}|$ & 10\,G \\
Surface temperature, $T_\mathrm{eff}$ & 35\,000\,K \\
\br
\end{tabular}
\end{center}
\end{table}
A simulation was initialised with a coarse grid of $128^3$ grid cells and volume $\approx(8\,\mathrm{pc})^3$ (each cell has diameter $\Delta x=0.0622$\,pc).
The simulation extents in the $x$-direction are $x\in[-6.30,1.67]$\,pc, and $\{y,z\}\in[-3.98,3.98]$\,pc.
The nested grids are centred on $[1.67,0,0]$\,pc, and the star is placed at the origin.
Two levels of refinement are added to the coarse grid, giving a finest level cell-diameter $\Delta x=0.0156$\,pc.
The wind inner boundary is initiated at a radius of 0.311\,pc, corresponding to 20 grid cells on the most refined level.
The same gas radiative heating and cooling prescription as in ref.~\citep{GreMacHaw19} was used, appropriate for photoionized gas with chemical abundances close to that of the Sun.
A second-order-accurate integration scheme (in time and space) was used with the HLL Riemann solver to evolve the simulation.
This relatively low-resolution calculation takes approximately $3\times10^3$ core-hours to run to completion.
A higher resolution simulation with $256^3$ grid cells per level takes $\sim5\times10^4$ core-hours, and $512^3$ would take $\approx10^6$ core-hours.
The weak scaling of \textsc{pion} for these simulations is very good, and so it is feasible to run high-resolution calculations on many cores in a few days to a few weeks.
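These costs are consistent with the expected $\sim N^4$ scaling ($N^3$ cells times $\sim N$ time steps per fixed evolution time), as a one-line estimate shows:
\begin{verbatim}
base_N, base_cost = 128, 3.0e3            # core-hours, from above
for N in (256, 512):
    print(N, base_cost*(N/base_N)**4)     # ~4.8e4 and ~7.7e5 core-hours
\end{verbatim}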
Results at $t\approx0.32$\,Myr ($\tau_\mathrm{d}=16$ if we use $R_\mathrm{SO}$ as the size scale) are plotted in Fig.~\ref{fig:b3d_DB} for (a) $\rho$ and (b) $|\mathbf{B}|$ in the $x$-$z$ plane.
The wind termination-shock and contact discontinuity are easily discernible in both panels, as is the bow shock produced by the supersonic motion of the star through the ISM.
Already in Fig.~\ref{fig:b3d_DB} we can see morphological differences with respect to a hydrodynamic solution: the compression of the bow shock is larger in the upper half-plane than the lower, because of the orientation of the ISM magnetic field.
Without a magnetic field the solution should be axisymmetric, modulo instabilities that can form \citep{ComKap98}.
This would result in a brighter bow shock when observed in optical spectral lines such as H$\alpha$ and [O\,\textsc{iii}] (5007\AA), for which the emissivity is $\propto \rho^2$ in photoionized gas.
The wake behind the bow shock is also distorted by the ISM magnetic field and is no longer axisymmetric about the $x$-axis.
Both of these effects could introduce a systematic uncertainty in interpreting observations if one uses the symmetry axis of a bow shock to infer the direction of motion of the star.
The distortions get stronger as the interstellar magnetic field strength increases.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Ostar3D_B010_c00118052.png}
\caption{\label{fig:b3d_DB} (a) Gas density, $\log_{10}\left\{\rho/\mathrm{(g\,cm}^{-3})\right\}$, and (b) magnetic field magnitude, $\log_{10}(|\mathbf{B}|/\mathrm{G})$, in the $x$-$z$ plane through $y=0$ are plotted on a logarithmic scale as indicated, for a 3D MHD simulation of a bow shock produced by a massive star. The star is at the origin and moving in the $+\hat{x}$ direction.
The magnetic axis of the star is $\hat{z}$, the stellar surface field is $B=10$\,G, and the upstream ISM field is $\mathbf{B}_0=[4,1,1]\times10^{-6}$\,G.}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\linewidth]{Ostar3D_B100_c00121744.png}
\caption{\label{fig:b3d_B100} As Fig.~\ref{fig:b3d_DB} but for a simulation with a $10\times$ stronger stellar surface field of $B=100$\,G.}
\end{figure}
Fig.~\ref{fig:b3d_DB} (b) shows the magnetic structure of the astrosphere.
The Parker spiral from the rotating star ensures that $B\propto r^{-1}$ near the equatorial plane and $B\propto r^{-2}$ near the poles, with a current sheet at the equator across which the field lines switch from directed inwards to directed outwards.
The current sheet is preserved across the termination shock and is swept back (in the upper half-plane) along the contact discontinuity.
The very weakly magnetised regions emanating from the poles are also swept downstream in the shocked wind into the wake.
These features are consistent with MHD modelling of the Heliosphere \citep{PogZanOgi06} except that the length scales are $10^3\times$ larger.
The current sheet in the equatorial plane does not affect the structure of the wind bubble as long as the wind is only weakly magnetised, but if (keeping other parameters constant) the stellar surface field is increased to 100\,G (allowed by observational upper limits for most O stars \citep{FosCasSch15}) then some effects can be noticed in the shocked wind, shown in Fig.~\ref{fig:b3d_B100}.
Here, panel (a) shows an increased density in the equatorial plane in the freely-expanding wind (not present for a stellar field of 10\,G), driven by the pressure gradient that arises from reconnection at the current sheet and the associated very low magnetic pressure.
The contact discontinuity also moves closer to the star just below the equator in the upstream direction.
Both of these effects become more pronounced if the stellar surface magnetic field is increased further.
This is possibly related to the numerical issue in Heliosphere simulations, where a V-shaped structure forms on the contact discontinuity in ideal-MHD simulations \citep{WasTan01}.
This feature was shown to be strongly dependent on the numerical resolution at the current sheet, and disappeared with the inclusion of neutrals as a separate fluid \citep{PogZanOgi06,WasZanHu15}.
Fig.~\ref{fig:b3d_B100} (b) also shows that the magnetic field in the shocked wind bubble is about the same strength as the ISM field, and there is no sharp change in the field strength across the contact discontinuity in the upstream direction.
Assuming that the wind termination-shocks of massive stars are reasonably efficient at accelerating electrons, this could have observable consequences.
2D hydrodynamic simulations with an analytically estimated magnetic field strength have been used to predict synchrotron radiation from bow shocks \citep{DelPoh18}.
Their calculations show that the relativistic electrons are well-confined in the wind bubble and so the synchrotron radiation is dominated by emission from within the wind bubble if the wind has comparable magnetic field to the shocked ISM.
This is in strong contrast with Bremsstrahlung which is dominated (at radio frequencies) by the photoionized and shocked ISM, external to the wind bubble, because the thermal electron density is orders of magnitude larger in the shocked ISM than in the wind bubble.
Sensitive radio observations could disentangle these two components and constrain the stellar surface magnetic field and the particle acceleration efficiency \citep{BenRomMar10, DelBosMul18}.
Application of the method of \citep{DelPoh18} to 3D MHD simulations, where the magnetic field is calculated self-consistently with the fluid dynamics, is an interesting avenue for making more detailed predictions of non-thermal radiation.
\section{Outlook}
We have presented some initial results from 3D MHD modelling of stellar-wind bubbles around O stars moving supersonically through the ISM.
Algorithm updates have also been briefly described which enable high-resolution 3D MHD simulations of the astrospheres of massive stars at reasonable computational cost.
A paper describing the algorithms and tests is currently in preparation, and we are beginning to apply the methods to the astrospheres of some well-known runaway stars, e.g., BD+60$^{\circ}$\,2522 and the Bubble Nebula that surrounds it \citep{GreMacHaw19}, and BD+43$^{\circ}$\,3654 and its parsec-scale bow shock \citep{BenRomMar10}.
For a stellar magnetic field strength of 10\,G and wind properties given in Table~\ref{tab:3dmhd}, the stellar magnetic field does not significantly affect the structure of the astrosphere, whereas the interstellar field does change the morphology of the bow shock to some extent.
For this case the Alfv\'enic Mach number of the wind is $\mathcal{M}_\mathrm{A}\approx70$, and so the magnetic pressure is significantly less than the ram pressure in the unshocked wind and less than the thermal pressure in the shocked wind.
The interstellar $\mathcal{M}_\mathrm{A}\approx3.5$ and so the magnetic effects on the shocked ISM are correspondingly more evident.
With a 100\,G stellar field, the Alfv\'enic Mach number of the wind is only $\mathcal{M}_\mathrm{A}\approx7$, and so the magnetic pressure in the shocked wind is significant but still smaller than the thermal pressure.
The Axford-Cranfill effect means that the importance of magnetic pressure increases outwards in the shocked wind; for a review see \cite{Zan99} and for a recent application to wind bubbles of massive stars see \cite{ZirPtu18}.
This introduces small changes to the shape of the wind bubble, especially near the magnetic equator at the contact discontinuity.
An even stronger stellar field would be expected to produce a wind bubble that is significantly affected by magnetic effects.
In that case the boundary condition that we impose is probably not appropriate anyway and a more complicated wind injection method would be required.
Nevertheless, our results show that for $\mathcal{M}_\mathrm{A}\geq10$ the effects of the equatorial current sheet on the flow dynamics are modest, and decreasing as $\mathcal{M}_\mathrm{A}$ increases.
The ideal MHD algorithms that are presented here are therefore adequate to describe astrospheres from the majority of massive stars.
3D MHD simulations have significantly more predictive power than hydrodynamic calculations because (with some assumptions) the non-thermal radiation from the nebula can be calculated more realistically.
This holds the promise of constraining the efficiency of particle acceleration in stellar-wind shocks, a topic of considerable interest \citep{BenRomMar10, DelBosMul18, HESS2018_Bowshocks} for current and upcoming observing facilities.
\ack
JM acknowledges funding from a Royal Society-SFI University Research Fellowship (14/RS-URF/3219).
SG is funded by a Hamilton Scholarship from the Dublin Institute for Advanced Studies.
MM acknowledges funding from a Royal Society Research Fellows Enhancement Award (RGF\textbackslash EA\textbackslash 180214).
We acknowledge the SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support (project dsast022c).
\bibliographystyle{iopart-num}
\section{Field equations}\label{sec:appendixA}
In the following sections we will provide the basic ingredients to compute, {\it ab initio}, the equations
describing metric perturbations in the dRGT theory which generalises the original formulation of Ref.~\cite{deRham:2010kj} to
arbitrary metrics \cite{Hassan:2012ka}. The ghost-free action that describes two interacting spin-2 fields in vacuum is:
\begin{eqnarray}
S=\int d^4x\sqrt{-g}\Big[M^2_gR_g+\sqrt{\frac{f}{g}}M^2_{f}R_{f}-2M_v^4V(g,f)\Big]\nonumber\\\label{dgrtaction}
\end{eqnarray}
where $R_g$ and $R_f$ are the Ricci scalars of the two metrics
$g_{\alpha\beta}$ and $f_{\alpha\beta}$, whose determinants are given by $g$ and $f$ \cite{Hassan:2013bc}.
The matter couplings in eq.~\eqref{dgrtaction} are given by $M_g^{-2}=16\pi G$, $M_f^{-2}=16\pi {\cal G}$, and $M_v$.
The latter can be expressed as a function of the first two. In the limit $M_f\rightarrow0$, eq.~\eqref{dgrtaction}
reduces to a massive gravity theory in which matter coupling and the kinetic term of $f_{\mu\nu}$ vanish \cite{Baccetti:2012bk}.
The interaction potential between the two metrics can be recast in terms of 5 coupling coefficients, and is free of the Boulware-Deser ghost on
generic backgrounds \cite{deRham:2010kj,deRham:2010ik,Hassan:2013bc}. The potential $V$ is given by:
\begin{equation}
V(\gamma,\beta_n)=\sum_{n=0}^{4}\beta_n e_n(\gamma)\ ,
\end{equation}
where $\beta_n$ are coupling constants, and $e_n(\gamma)$ are symmetric polynomials of the matrix
$\gamma^\mu{_\nu}=\sqrt{g^{-1}f}^\mu{_\nu}$, which can be written in 4 dimensions as:
\begin{eqnarray}
e_0&=&1\,,\qquad e_1=[\gamma]\,,\qquad e_2=\frac{1}{2}([\gamma]^2-[\gamma^2])\,,\nonumber\\
e_3&=&\frac{1}{6}([\gamma]^3-3[\gamma][\gamma^2]+2[\gamma^3])\,,\quad e_4 = \det(\gamma) \ ,\nonumber
\end{eqnarray}
where $[\cdot]$ denotes the trace of the matrix.
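These trace formulas can be verified numerically; the sketch below (with a random matrix standing in for $\gamma^\mu{_\nu}$) checks them against the characteristic polynomial, whose coefficients are the $e_n$ up to signs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))      # stand-in for gamma^mu_nu
tr = np.trace
e1 = tr(A)
e2 = 0.5*(tr(A)**2 - tr(A @ A))
e3 = (tr(A)**3 - 3*tr(A)*tr(A @ A) + 2*tr(A @ A @ A))/6.0
e4 = np.linalg.det(A)

# det(s*I - A) = s^4 - e1 s^3 + e2 s^2 - e3 s + e4
c = np.poly(A)
assert np.allclose([e1, e2, e3, e4], [-c[1], c[2], -c[3], c[4]])
\end{verbatim}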
Varying the action \eqref{dgrtaction} with respect to $g_{\mu\nu}$ and $f_{\mu\nu}$ we obtain two sets of equations:
\begin{eqnarray}
&R_{\mu\nu}(g)&-\frac{1}{2}g_{\mu\nu}R(g)+\frac{M^4_v}{M^2_g}V^g_{\mu\nu}=\frac{T_{\mu\nu}^g}{M_g^2}\ ,\label{field1}\\
&R_{\mu\nu}(f)&-\frac{1}{2}f_{\mu\nu}R(f)+\frac{M^4_v}{M^2_f}V^f_{\mu\nu}=\frac{T_{\mu\nu}^f}{M_f^2}\label{field2}\,,
\end{eqnarray}
where the interaction terms $V^{f,g}_{\mu\nu}$ depend on the matrix $\gamma$, and the stress-energy tensors
are defined as $(T^{g}_{\mu\nu},T^{f}_{\mu\nu})=-(\frac{1}{\sqrt{g}}\frac{\delta S_m}{\delta g^{\mu\nu}},\frac{1}{\sqrt{f}}\frac{\delta S_m}{\delta f^{\mu\nu}})$.
Choosing the two background metrics to be proportional \cite{Hassan:2012wr,Brito:2013wya}, i.e.
$\bar{f}_{\mu\nu}=c^2\bar{g}_{\mu\nu}$\footnote{Background quantities are denoted by an overbar.}
eqns.~\eqref{field1}-\eqref{field2} lead to two copies of the Einstein's equations for $\bar{f}$ and $\bar{g}$:
\begin{eqnarray}
&\bar{R}_{\mu\nu}&-\frac{1}{2}\bar{g}_{\mu\nu}\bar{R}+\bar{f}_{\mu\nu}\Lambda_f=\frac{T^f_{\mu\nu}}{M_f^2}\ ,\label{back1}\\
&\bar{R}_{\mu\nu}&-\frac{1}{2}\bar{g}_{\mu\nu}\bar{R}+\bar{g}_{\mu\nu}\Lambda_g=\frac{T^g_{\mu\nu}}{M_g^2}\ .\label{back2}
\end{eqnarray}
Note that consistency of the background relations requires $\Lambda_g=\Lambda_f$,
and that the stress-energy tensors are proportional through the matter couplings,
$T_{\mu\nu}^{f}=(M_f^2/M_g^2)\, T_{\mu\nu}^{g}$.
In this work we focus on BH solutions, with $T_{\mu\nu}$ being a first order perturbation of the background metric, and
explicitly given in Sec.~\ref{sec:appendixD}.
We also assume that the cosmological constants vanish, $\Lambda_f=\Lambda_g=0$, as their effect can be considered
negligible on the scales relevant for the local physics around BHs. Note, however, that the asymptotic cosmological behavior may determine which branch of BH solutions is chosen by Nature in theories of massive gravity, where several local solutions generically exist\footnote{We thank an anonymous referee for highlighting this issue.}.
Here, we focus on asymptotically flat solutions.
Equations \eqref{back1}-\eqref{back2}, supplemented by the usual ansatz for a spherically symmetric spacetime with coordinates
$x^\mu=(t,r,\theta,\varphi)$:
\begin{equation}
ds^2=-f(r)dt^2+\frac{dr^2}{g(r)}+r^2d\theta^2+r^2 \sin^2\theta d\varphi^2\ ,
\end{equation}
yield the Schwarzschild solution for a BH of mass $M$, with $f(r)=g(r)=1-2M/r$.
We can now turn our attention to the metric perturbations. At the linear order we have:
\begin{eqnarray}
g_{\mu\nu}&=&\bar{g}_{\mu\nu}+M_g^{-1}\delta g_{\mu\nu}\ ,\\
f_{\mu\nu}&=&\bar{f}_{\mu\nu}+cM_f^{-1}\delta f_{\mu\nu}\ .
\end{eqnarray}
Although the functions $\delta g_{\mu\nu}$ and $\delta f_{\mu\nu}$ are generically independent, it is possible to find a
suitable combination:
\begin{eqnarray}
h^{(0)}_{\mu\nu}&=&\frac{M_g \delta g_{\mu\nu}+cM_f\delta f_{\mu\nu}}{\sqrt{c^2M^2_f+M_g^2}}\ ,\\
h_{\mu\nu}&=&\frac{M_g \delta f_{\mu\nu}-cM_f\delta g_{\mu\nu}}{\sqrt{c^2M_f^2+M_g^2}}\,,
\end{eqnarray}
for which the perturbations decouple, yielding two sets of field's equations \cite{Hassan:2013bc}. The first one,
\begin{equation}
\delta G^{(0)}_{\mu\nu}=\kappa^{(0)} T_{\mu\nu}\ ,
\end{equation}
refers to a massless spin-2 field, with $\delta G^{(0)}_{\mu\nu}$ the perturbed Einstein tensor.
The second set of solutions describes a massive theory, whose equations of motion
are given by
\begin{eqnarray}
\delta G_{\mu\nu}&=&\kappa\left[ T_{\mu\nu}-\frac{\bar{g}_{\mu\nu}}{3} T\right]-\frac{\mu^2}{2}h_{\mu\nu}\label{fieldeq1}\,,\\
\nabla^\mu h_{\mu \nu}&=&\alpha\nabla_\nu T \label{fieldeq2} \,,\\
h&=&\alpha T\,,\label{fieldeq3}
\end{eqnarray}
where $\delta G_{\mu\nu}$ is the first order perturbation of the Einstein tensor for a metric of the
form $g_{\mu\nu} = \overline{g}_{\mu\nu} + h_{\mu\nu}$, $T_{\mu\nu}$ is the stress-energy tensor, to be
considered as a first order perturbation of the background solution, $h=g^{\mu\nu}h_{\mu\nu}$, $T = g^{\mu\nu}T_{\mu\nu}$
and $\kappa$, $\alpha$ and $\mu$ are constants, the latter depending on the parameters of the theory through
\begin{equation}
\mu^2 = M_v^4(c\beta_1+2c^2\beta_2+c^3\beta_3)\left(\frac{1}{c^2M_f^2}+\frac{1}{M_g^2}\right).
\end{equation}
With a suitable choice of such constants, $\kappa=8\pi$ and
$\alpha=-\frac{2\kappa}{3\mu^2}$, we obtain the equations of motion of a spin-2 field with a linear
mass term of mass $\mu$.
\section{Spherical decomposition of the metric tensor}\label{sec:appendixB}
In Sec.~\ref{sec:appendixC} we provide the master equations for all the components of the metric perturbation
$h_{\mu\nu}$ around the Schwarzschild solution. Due to the spherical symmetry of the background, the field $h_{\mu\nu}$
can be expanded in terms of a complete set of tensor spherical harmonics. The latter can be classified into {\it axial}
and {\it polar} components, according to their properties under parity transformation~\cite{Regge:1957cx,Zerilli:1970bo}.
Axial modes change sign as $(-1)^{\ell+1}$ under the inversion
$(\theta\rightarrow\pi-\theta,\phi\rightarrow\phi+\pi)$, while polar perturbation are multiplied by a factor
$(-1)^\ell$.
The background spherical symmetry also yields a significant simplification of the formalism, as it
renders the field equations independent of $m$, and prevents perturbations with different multipole
number, or different parity, from coupling to each other.
In order to decompose the metric $h_{\mu\nu}$, and the test body stress-energy tensor $T_{\mu\nu}$
in tensor spherical harmonics, we follow the approach introduced by Regge, Wheeler and Zerilli
\cite{Regge:1957cx,Zerilli:1970bo}. A generic tensor
field can be expanded in terms of two classes of functions, the axial and polar harmonics.
In this paper we have adopted the following
ansatz for the metric perturbation ${\bf h}={\bf h}^\textnormal{ax}+{\bf h}^\textnormal{pol}$, where\footnote{As noted in
\cite{Brito:2013wya}, we can not simplify eq.~\eqref{expansion} by imposing further gauge conditions, like those
adopted by Regge and Wheeler \cite{Regge:1957cx}, since they are too restrictive for the massive gravity
theory considered in this work.}
\begin{widetext}
\begin{equation}
{\bf h}^\textnormal{ax}=\sum_{\ell m }\frac{\sqrt{2\ell(\ell+1)}}{r}\left[ih_{1}^{\ell m }(r,t){\bf c}_{\ell m }
-h_{0}^{\ell m }(r,t){\bf c}^{(0)}_{\ell m }+\frac{\sqrt{(\ell-1)(\ell+2)}}{2r}h_{2}^{\ell m }(r,t){\bf d}_{\ell m }\right]\ ,
\end{equation}
and
\begin{align}
{\bf h}^\textnormal{pol}=\sum_{\ell m }&\left[f(r)H^{\ell m }_0(t,r){\bf a}^{(0)}_{\ell m }-i \sqrt{2}H^{\ell m }_1(t,r){\bf a}^{(1)}_{\ell m }
+\frac{H_2^{\ell m }(t,r)}{f(r)}{\bf a}_{\ell m }-\frac{i}{r}\sqrt{2\ell(\ell+1)}\eta^{\ell m }_0(t,r){\bf b}^{(0)}_{\ell m }\right.\nonumber\\
&\left.+\frac{1}{r}\sqrt{2\ell(\ell+1)}\eta^{\ell m }_1(t,r){\bf b}_{\ell m }
+\sqrt{\frac{\ell}{2}(\ell+1)(\ell-1)(\ell+2)}G^{\ell m }(t,r){\bf f}_{\ell m }+\sqrt{2}K^{\ell m }(t,r){\bf g}_{\ell m }\right.\nonumber\\
&\left.-\frac{\ell(\ell+1)}{\sqrt{2}}G^{\ell m }(t,r){\bf g}_{\ell m }\right]\ .\label{expansion}
\end{align}
\end{widetext}
Similarly, we have decomposed the stress-energy tensor of the test particle as follows:
\begin{eqnarray}
{\bf T}&=&\sum_{\ell =0}^{\infty}\sum_{m =-\ell}^{\ell}{\cal A}^{(0)}_{\ell m }{\bf a}^{(0)}_{\ell m }+{\cal A}^{(1)}_{\ell m }{\bf a}^{(1)}_{\ell m}+{\cal A}_{\ell m }{\bf a}_{\ell m }\nonumber\\
&+&{\cal B}^{(0)}_{\ell m }{\bf b}^{(0)}_{\ell m }+{\cal B}_{\ell m }{\bf b}_{\ell m }
+{\cal Q}^{(0)}_{\ell m }{\bf c}^{(0)}_{\ell m }+{\cal Q}_{\ell m }{\bf c}_{\ell m }\nonumber\\
&+&{\cal D}_{\ell m }{\bf d}_{\ell m }+{\cal G}_{\ell m }{\bf g}_{\ell m }+{\cal F}_{\ell m }{\bf f}_{\ell m }\ .\label{harmonicexp}
\end{eqnarray}
The form of the axial $({\bf c}^{(0)}_{\ell m },{\bf c}_{\ell m },{\bf d}_{\ell m })$ and polar $({\bf a}^{(0)}_{\ell m },{\bf a}^{(1)}_{\ell m },{\bf a}_{\ell m },{\bf b}^{(0)}_{\ell m },
{\bf b}_{\ell m },{\bf g}_{\ell m },{\bf f}_{\ell m })$ basis components are explicitly
given in \cite{Sago:2003ib}, while the expansion coefficients $({\cal A}^{(0)}_{\ell m },{\cal A}^{(1)}_{\ell m },\ldots,{\cal F}_{\ell m })$ can be
computed exploiting the orthonormality properties of the tensor harmonics:
\begin{equation}
(A,B)=\int\int \eta^{\mu\rho}\eta^{\nu\sigma}A^{\star}_{\mu\nu}B_{\rho\sigma}d\Omega\ ,
\end{equation}
where the superscript $\star$ denotes complex conjugation. Then, for example,
${\cal Q}^{(0)}_{\ell m }=({\bf c}^{(0)}_{\ell m },{\bf T})$. We refer the reader to Table 1 of \cite{Sago:2003ib}
for the general expression of such coefficients.
Finally, we work in Fourier space, and decompose any function $Z(t,r)$, as
\begin{equation}
Z(t,r)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\,d\omega\,e^{-i\omega t} Z(\omega, r)\,.
\end{equation}
\section{Master equations for axial and polar perturbations}\label{sec:appendixC}
Having defined the decomposition of the metric and of the stress energy tensor, we can now
derive the differential equations for the axial and polar perturbations $({\bf h}^\textnormal{ax},{\bf h}^\textnormal{pol})$.
The explicit form of such equations is provided for different multipole $\ell$ in the next sections.
\subsection{Axial sector} \label{sec:axialequations}
\subsubsection{$\ell\ge$ 2}
A system of coupled differential equations for the three metric perturbations $(h_0,h_1,h_2)$ can
be derived by replacing the ansatz \eqref{expansion} within the linearized field equations \eqref{fieldeq1}-\eqref{fieldeq3}.
Introducing the new variables\footnote{For the sake of clarity, hereafter we will omit the $(\ell,m )$ indices
on the metric and source perturbations.} $Q(r)=h_1(r)(1-2M/r)$ and $Z(r)=h_2(r)/r$, the $(r\theta)$ and $(\theta\theta)$
components of eq.~\eqref{fieldeq1} lead to the following Schr\"odinger-like equations for the axial
sector,
\begin{equation}
\frac{d^2{\bf \Psi}}{dr_\star^2}+\left[\omega^2-V_\textnormal{ax}\right]\bf \Psi={\bf {\cal S}}_\textnormal{ax}\,,\label{axialeq}
\end{equation}
where ${\bf \Psi}=(Q,Z)^\textnormal{T}$, with $r_\star= r+2M\ln(r/2M-1)$ the tortoise coordinate. The potential
matrix $V_\textnormal{ax}$ is given by,
\begin{equation}
V_\textnormal{ax}=
f\begin{bmatrix}
\mu^2+2\frac{\lambda+3}{r^2}-\frac{16M}{r^3} & 2 i\lambda\frac{3M-r}{r^3}\\
\frac{4i}{r^2} &\mu^2+\frac{2\lambda}{r^2}+\frac{2M}{r^3}
\end{bmatrix}\ ,
\end{equation}
$f(r)=1-2M/r$, and $\lambda=\ell(\ell+1)$.
The source term reads
\begin{equation}
{\cal S}_\textnormal{ax}=
\begin{bmatrix}
{\cal S}_{Q}\\
{\cal S}_{Z}
\end{bmatrix}
=\frac{8\pi f(r) r}{\sqrt{\lambda+1}}\begin{bmatrix}
if(r){\cal Q}\\
-\frac{\sqrt{2}}{\sqrt{\lambda}}{\cal D}
\end{bmatrix}\ .
\end{equation}
The coefficients ${\cal Q}$ and ${\cal D}$ represent the contribution of the infalling test particle,
and depend on the specific orbital configuration considered. Their explicit form for radial plunge and
circular motion is shown in appendix \ref{sec:appendixD}.
Note also that the $\theta$ component of eq.~\eqref{fieldeq2} provides a first order differential equation for $h_1$,
namely:
\begin{equation}
\frac{dh_1}{dr}=\frac{2(M-r)}{r^2f(r)}h_1-\frac{i\lambda}{r^2f(r)}h_2-\frac{i\omega}{f(r)^2}h_0\ ,\label{axialcons}
\end{equation}
which can be solved for $h_0$, once the solutions for $h_1$ and $h_2$ have been obtained from
eqns.~\eqref{axialeq}.
\subsubsection{$\ell=1$}
The metric perturbations for $\ell=0$ vanish identically in the axial sector, as $\partial_\theta Y_{00}=
\partial_\varphi Y_{00}=0$, and thus the monopole term does not provide any contribution.
For the dipole mode, $\ell=1$, we are left with only two functions: $h_0$ and $h_1$. A master equation for the
latter can be derived from the $(r\theta)$ component of eqns.~\eqref{fieldeq1}:
\begin{equation}
\frac{d^2Q}{dr^2_\star}+\left[\omega^2-V^{\ell=1}_\textnormal{Ax}\right]Q=8\pi i rf(r)^2{\cal Q}\ ,
\end{equation}
where
\begin{equation}
V^{\ell=1}_\textnormal{Ax}=f(r)\left(\frac{6}{r^2}-\frac{16M}{r^3}+\mu^2\right)\ ,
\end{equation}
and $Q(r)=h_1(r)f(r)$. The function $h_0$ is obtained again from eq.~\eqref{axialcons}, provided that
$h_2=0$.
\subsection{Polar sector}\label{sec:polarequations}
\subsubsection{$\ell\ge$ 2}
The procedure to obtain the fundamental relations for the polar sector follows the same approach
as for the axial modes. The linearized field equations for the even-parity metric lead in general
to seven ODEs for the set of variables $(H_0,H_1,H_2,\eta_0,\eta_1,K,G)$ (cf. Appendix \ref{sec:appendixB}).
However, after some algebraic manipulations, it is possible to isolate a system of coupled equations for
$(G,K,\eta_1)$ only, and their first derivatives $(P,W,R)=d/dr_\star(G,K,\eta_1)$. Such a system can be
recast in a compact matrix form:
\begin{equation}
\left[\frac{d}{dr_\star}+V^{\ell=2}_\textnormal{pol}(\omega,r)\right]{\bf \Xi}={\bf {\cal S}}^{\ell=2}_\textnormal{pol} \,, \label{poleq}
\end{equation}
where ${\bf \Xi}=(G,K,\eta_1,P,W,R)^\textnormal{T}$. The $6\times6$ matrix potential $V^{\ell=2}_\textnormal{pol}$ depends on the
frequency $\omega$ and the mass $\mu$, as well as on the BH mass $M$. The actual expressions of
$V^{\ell=2}_\textnormal{pol}$ and of the source vector ${\bf {\cal S}}^{\ell=2}_\textnormal{pol}$ are rather cumbersome, and for the sake of clarity
we will not show them here; they are explicitly reported within a \texttt{Mathematica}
notebook~\cite{grit}. Finally, the $(t,r,\theta)$ components of eqns.~\eqref{fieldeq2} and the relation
\eqref{fieldeq3}, also provided in the supplementary \texttt{Mathematica} files, yield four equations for
$(H_0,H_1,H_2,\eta_0)$ in terms of ${\bf \Xi}$ and the test-particle stress-energy tensor, which can be solved
after the solutions of eqns.~\eqref{poleq} have been found.
\subsubsection{$\ell=$ 0}
The number of perturbations to be determined for the monopole mode in the even-parity sector decreases
to four, i.e. $(H_0,H_1,H_2,K)$. We first use the time component $\nabla^\mu h_{\mu t}=\alpha\nabla_t \delta T$
and the trace equation $h=\alpha\delta T$ to eliminate $H_0$ and $H_2$.
We then replace their values in the $(tt)$ and $(tr)$ components of eq.~\eqref{fieldeq1},
as well as in the radial equation. From these, we obtain $H_1$ and its first and second derivatives as functions of $K$ (and its derivatives).
Applying this to the $(rr)$ component of eq.~\eqref{fieldeq1} we obtain a $2^{\text{nd}}$ order ODE for $K$.
As noted in Sec.~II-A of the main letter, using the following change of variable
\begin{equation}
K=\frac{\sqrt{-4 \mu ^2 M+\mu ^4 r^3+2 \mu ^2 r+4 r \omega ^2}}{r^{5/2}}\varphi_0\ ,
\end{equation}
we obtain a wave equation for $\varphi_0$ only:
\begin{equation}
\frac{d^2\varphi_0}{dr_*^2} + [\omega^2-V^{\ell=0}_\textnormal{pol}(r,\omega)]\varphi_0 = {\bf {\cal S}}^{\ell=0}_\textnormal{pol}\ .\label{l0polar_app}
\end{equation}
For radial infall motion in the highly relativistic regime, the expression for ${\bf {\cal S}}^{\ell=0}_\textnormal{pol}$ reduces to:
\begin{equation}
{\bf {\cal S}}^{\ell=0}_\textnormal{pol}=\frac{8 \sqrt{2} m_p\gamma (r-2 M) \left(\mu ^2 r+2 i \omega \right) e^{i \omega t_p(r)}}{\sqrt{r} \left(-4 \mu ^2 M+\mu ^4 r^3+2 \mu ^2 r+4 r \omega ^2\right)^{3/2}}\ .
\end{equation}
\subsubsection{$\ell=$ 1}\label{sec:polarequations_l1}
Finally, for the dipole term in the polar sector, the function $G$ vanishes, and the perturbations are
completely determined by two coupled equations for $K$ and $\eta_1$, which can be derived, after
some manipulations, from the $(tt)$ and $(rr)$ components of eqns.~\eqref{fieldeq1}.
It is possible to write the equations in a linear system:
\begin{equation}
\left[\frac{d}{dr_\star}+V^{\ell=1}_\textnormal{pol}(r)\right]{\bf \Sigma}={\bf {\cal S}}^{\ell=1}_\textnormal{pol} \ , \label{poleql1_v2}
\end{equation}
where ${\bf \Sigma}=(K,\eta_1,dK/dr_\star,d\eta_1/dr_\star)^\textnormal{T}$, and $V^{\ell=1}_\textnormal{pol}$ is a $4\times 4$
potential matrix.
Once the solution of the system is known, the remaining components of the metric $(H_0,H_1,\eta_0,H_2)$ are
obtained from the $(rr)$ component of eqns.~\eqref{fieldeq1}, from the $r$ and $\theta$ terms of eqns.~\eqref{fieldeq2},
and from the trace of the metric perturbation \eqref{fieldeq3}, respectively. The general expressions for the
source terms can be found in the online notebooks~\cite{grit}, while for highly relativistic
head-on collisions, the vector ${\bf {\cal S}}^{\ell=1}_\textnormal{pol}$ reduces to eqns.~(6)-(7) of the main letter.
\section{Components of the stress-energy tensor}\label{sec:appendixD}
In the case of radial plunge the form of the stress-energy tensor greatly simplifies. Moreover, the
axial sector is not excited, as ${\cal B}^{(0)}_{\ell m }={\cal B}_{\ell m }={\cal Q}^{(0)}_{\ell m }={\cal Q}_{\ell m }
={\cal D}_{\ell m }={\cal F}_{\ell m }={\cal G}_{\ell m }=0$, and $T_{\mu\nu}$ is given by
\begin{equation}
T^{\ell m}_{\mu\nu}=
\begin{bmatrix}
{\cal A}^{(0)}_{\ell m } & \frac{i}{\sqrt{2}}{\cal A}^{(1)}_{\ell m } & 0 & 0\\
\frac{i}{\sqrt{2}}{\cal A}^{(1)}_{\ell m } & {\cal A}_{\ell m } & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}Y_{\ell m }(\theta,\varphi)\ .
\end{equation}
The non-vanishing coefficients of the harmonic expansion then read:
\begin{eqnarray}
{\cal A}^{(0)}_{\ell m }&=&m_pu^0\left(\frac{dr_p}{dt}\right)^{-1}\frac{f(r)^2}{r^2}e^{i\omega t_p}Y_{\ell m }^\star(\theta_p,\varphi_p)\ , \nonumber\\
{\cal A}_{\ell m }&=&m_pu^0\frac{dr_p}{dt}\frac{e^{i\omega t_p}}{r^2f(r)^2}Y_{\ell m }^\star(\theta_p,\varphi_p)\ , \nonumber\\
{\cal A}^{(1)}_{\ell m }&=&\sqrt{2}i m_pu^0 \frac{e^{i\omega t_p}}{r^2}Y_{\ell m }^\star(\theta_p,\varphi_p)\ ,
\end{eqnarray}
where $u^0=dt_p/d\tau=\gamma f(r)^{-1}$, $dr_p/dt=-f(r) \gamma^{-1}\sqrt{\gamma^2-f(r)}$, and $t_p=t_p(r)$ is a function of the radial orbital coordinate of the test body, that evolves according
to
\begin{equation}
\frac{dt_p}{dr}=\left(\frac{dr_p}{dt}\right)^{-1}=-\frac{\gamma f(r)^{-1}}{\sqrt{\gamma^2-f(r)}}\,,\label{dtdr}
\end{equation}
with $\gamma$ being the particle's energy per unit mass (i.e., the Lorentz factor). Equation \eqref{dtdr} can be integrated together with the
equations of the metric perturbations.
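For instance, a minimal numerical integration of eq.~\eqref{dtdr} (with assumed illustrative values $M=1$ and $\gamma=1.5$ in geometrized units) reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M, gamma = 1.0, 1.5                      # assumed illustrative values
f = lambda r: 1.0 - 2.0*M/r

def dtp_dr(r, t):
    # dt_p/dr = (dr_p/dt)^{-1} = -gamma/(f*sqrt(gamma^2 - f))
    return [-gamma/(f(r)*np.sqrt(gamma**2 - f(r)))]

# integrate inward from r = 40M; t_p diverges at the horizon r = 2M
sol = solve_ivp(dtp_dr, (40.0*M, 2.001*M), [0.0],
                rtol=1e-10, dense_output=True)
\end{verbatim}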
The procedure to compute the form of $T_{\mu\nu}$ for circular orbits at a given radius $\hat{r}$ is also
straightforward. Since geodesics in the Schwarzschild background are all planar, we can choose, without loss
of generality, $\theta_p=\pi/2$, and then
\begin{equation}
r_p(t)=\hat{r}\ ,\quad \varphi_p(t)=\omega_p t=\sqrt{\frac{M}{\hat{r}^3}}\,t\ ,
\end{equation}
where $\omega_p$ is the Keplerian frequency. The 4-velocity of the test particle reads
\begin{equation}
u^\mu=\left(\sqrt{\frac{r}{r-3M}},0,0,\frac{1}{r}\sqrt{\frac{M}{r-3M}}\right)\ .
\end{equation}
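As a quick consistency check, the normalization $g_{\mu\nu}u^\mu u^\nu=-1$ of this 4-velocity can be verified symbolically:
\begin{verbatim}
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2)   # equatorial Schwarzschild, theta = pi/2
u = sp.Matrix([sp.sqrt(r/(r - 3*M)), 0, 0, sp.sqrt(M/(r - 3*M))/r])
assert sp.simplify((u.T*g*u)[0] + 1) == 0   # u.u = -1
\end{verbatim}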
For this configuration we have ${\cal A}_{\ell m}={\cal A}^{(1)}_{\ell m}={\cal B}_{\ell m}={\cal Q}_{\ell m}=0$, while the non-vanishing
coefficients of the expansion \eqref{harmonicexp} can be expressed in terms of the orbital parameters as follows:
\begin{eqnarray}
{\cal A}_{\ell m }^{(0)}&=&\frac{f(r)^2m_p N_\ell}{\sqrt{2r^3(r-3M)}} P_{\ell m }\delta_\omega\delta_r\,,\nonumber\\
{\cal B}_{\ell m }^{(0)}&=&\frac{\sqrt{2}m_pf(r)\omega_p m N_\ell}{\sqrt{(\Lambda+1)r(r-3M)}}\delta_\omega\delta_rP_{\ell m }\,,\nonumber\\
{\cal Q}_{\ell m }^{(0)}&=&\frac{m_pf(r)\omega_p(1+\ell-m )N_\ell}{\sqrt{(\Lambda+1)\pi r(r-3M)}}\delta_\omega\delta_rP_{\ell+1 m }\,,\nonumber\\
{\cal G}_{\ell m }&=&\frac{m_p\omega_p^2r N_\ell}{2\sqrt{r(r-3M)}}\delta_\omega\delta_rP_{\ell m }\,,\nonumber\\
{\cal D}_{\ell m }&=&\frac{m_p\omega_p^2r (1+\ell-m)mN_\ell}{2\sqrt{\Lambda(\Lambda+1)r(r-3M)}}\delta_\omega\delta_rP_{\ell+1 m }\,,\nonumber\\
{\cal F}_{\ell m }&=&\frac{m_p\omega_p^2r [2(\Lambda+1)-2m^2]N_\ell}{\sqrt{2\Lambda(\Lambda+1)r(r-3M)}}\delta_\omega\delta_rP_{\ell m }\,.
\end{eqnarray}
Here, $\delta_\omega\delta_r=\delta(\omega-m \omega_p)\delta(r-r_p)$, $\Lambda=\frac{\ell(\ell+1)-2}{2}$,
$P_{\ell m }(\theta_p) = P_{\ell m }\left(\pi/2\right)$ are the associated Legendre polynomials, and
$N_\ell=\sqrt{(2\ell+1)\frac{(\ell-m )!}{(\ell+m )!}}$ is a normalization factor.
\section{Numerical integration}\label{sec:appendixE}
In order to compute the metric perturbations produced by the infalling test particle, we need to solve the
inhomogeneous equations derived in Secs.~\ref{sec:axialequations}-\ref{sec:polarequations}.
To this aim, we employ a Green's function approach.
We first consider a single second-order equation of the form
\begin{equation}
\frac{d^2\varphi}{dr^2_\star}+(\omega^2-V)\varphi=S\,,\label{scalareq}
\end{equation}
as the one obtained for the monopole polar component \eqref{l0polar_app}. The homogeneous
problem associated to eq.~\eqref{scalareq} is solved by requiring purely
ingoing waves at the horizon and purely outgoing waves at spatial infinity. Let us assume that
$\varphi_\pm(\omega,r_\star)$ are the two solutions of the equation $$\frac{d^2\varphi}{dr^2_\star}+(\omega^2-V)\varphi=0$$
with boundary conditions
\begin{equation}
\lim_{r_\star\rightarrow -\infty}\varphi_- \sim e^{- i\omega r_\star}\,,\quad
\lim_{r_\star\rightarrow\infty}\varphi_+ \sim e^{k_\omega r_\star}\,,
\end{equation}
where $k_\omega =\sqrt{V(r_\star\rightarrow\infty)-\omega^2}=\sqrt{\mu^2-\omega^2}$ is a complex quantity.
For $r_\star\rightarrow \infty$ the $\varphi_-$ component tends to the following analytical expression:
\begin{equation}
\lim_{r_\star\rightarrow\infty}\varphi_{-}=\zeta(\omega)e^{k_\omega r_\star}+\beta(\omega)e^{-k_\omega r_\star}\,.
\end{equation}
It can be shown that far from the source the general solution for the inhomogeneous equation \eqref{scalareq} can be constructed starting from $\varphi_-(\omega,r_\star)$ as
\begin{equation}
\varphi_\textnormal{out}({\omega,r_\star})=\frac{e^{k_\omega r_\star}}{2k_\omega \beta(\omega)}\int_{-\infty}^{\infty}
S(\omega,r_\star) \varphi_{-}(\omega,r_\star)dr_\star\,,\nonumber
\end{equation}
with $\omega^2>V(r_\star\rightarrow\infty)$.
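As an illustration, this construction can be reproduced numerically for a toy potential and source (both assumed, standing in only for the true $V$ and source term), extracting $\beta(\omega)$ by matching the integrated $\varphi_-$ to its asymptotic form:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

mu, omega = 0.3, 0.5                          # propagating case, omega > mu
V = lambda x: mu**2/(1.0 + np.exp(-x))        # toy V(r_*): 0 -> mu^2
S = lambda x: np.exp(-x**2)                   # toy compact source
k = np.sqrt(complex(mu**2 - omega**2))        # k_omega (imaginary here)

def hom(x, y):                                # homogeneous equation
    return [y[1], (V(x) - omega**2)*y[0]]

xL, xR = -60.0, 60.0                          # ingoing data at the left
y0 = [np.exp(-1j*omega*xL), -1j*omega*np.exp(-1j*omega*xL)]
sol = solve_ivp(hom, (xL, xR), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

# match phi_- = zeta e^{k x} + beta e^{-k x} at two far points
x1, x2 = 50.0, 55.0
A = np.array([[np.exp(k*x1), np.exp(-k*x1)],
              [np.exp(k*x2), np.exp(-k*x2)]])
zeta, beta = np.linalg.solve(A, [sol.sol(x1)[0], sol.sol(x2)[0]])

xs = np.linspace(xL, xR, 8001)                # far-field amplitude
amp = trapezoid(S(xs)*sol.sol(xs)[0], xs)/(2.0*k*beta)
\end{verbatim}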
This procedure can be generalized for a system of coupled differential equations, as those
derived for the $\ell =1$ polar component \cite{Pani:2011ir}. For a given set of functions ${\bf \Psi}(r,\omega)=(\Psi_1,\ldots,\Psi_n)$ that satisfy the linear system
\begin{equation}
\left[\frac{d}{dr_\star}+V\right]{\bf \Psi}={\bf S} \ ,\label{lineareq}
\end{equation}
with $n$ source terms ${\bf S}=( S_1,\ldots, S_n)$, the general solution is given by:
\begin{equation}
{\bf \Psi}(r,\omega)=X\int_{-\infty}^{\infty}dr_\star X^{-1} {\bf S}\ .\label{greensystem}
\end{equation}
The fundamental matrix $X$ contains $2n$ independent solutions of the associated homogeneous problem
which satisfy suitable boundary conditions, and that can be constructed as follows. At the horizon $r_h$,
the proper solution has the following form:
\begin{equation}
{\bf \Psi}(r_h)=\sum_{i=0}^{N}{\bf b}^{(i)}(r-r_h)^ie^{-i\omega r_\star}\ ,\label{horizoncond}
\end{equation}
where, in general, the coefficients of the expansion depend on the leading vector ${\bf b}^{(0)}=(b_{1},\ldots,b_{n})$ only. The first $n$ columns of the matrix
$X$ can be obtained by choosing $n$ initial conditions corresponding to ${\bf b}^{(0)}_1=(1,0,\ldots,0)$, ${\bf b}^{(0)}_2=(0,1,\ldots,0)$, \ldots, ${\bf b}^{(0)}_n=(0,0,\ldots,1)$, and integrating the homogeneous equations {\it forward} from $r_h$.
In the same way, at infinity, we ask the solution to satisfy the boundary condition
given by
\begin{equation}
{\bf \Psi}(r\rightarrow \infty)=\sum_{i=0}^{N}{\bf c}^{(i)}\frac{1}{r^i}r^{\nu}e^{k_\omega r_\star}\ ,\label{infcon}
\end{equation}
where $\nu=M(k_\omega ^2-\omega^2)/k_\omega $ and, again, the coefficients ${\bf c}^{(i)}$ are all determined by the $i=0$ term only. Therefore,
the remaining $n$ columns of the fundamental matrix can be constructed by specifying $n$ linearly
independent vectors $[c^{(0)}_1=(1,\ldots,0),\ldots,c^{(0)}_n=(0,\ldots,1)]$ and integrating the homogeneous
system {\it inward} down to the horizon. With this algorithm we can easily generate the components of $X$
and then integrate eq.~\eqref{greensystem}.
As an example, for the $\ell=1$ circular motion (described in Sec.~III-B of the main letter), the linear
system \eqref{lineareq} is defined in terms of metric functions $K(\omega,r)$ and $\eta_1(\omega,r)$.
In this case, the boundary conditions \eqref{horizoncond}-\eqref{infcon} are specified by two parameters
$(K^{(0)h},\eta_1^{(0)h})$ and $(K^{(0)\infty},\eta_1^{(0)\infty})$ respectively.
The matrix $X$ can then be constructed from 4 solutions of eqns.~\eqref{lineareq}. Two are obtained solving
the ODEs from the horizon to infinity choosing the initial conditions as $(K^{(0)h},\eta_1^{(0)h})=(1,0)$ and
$(K^{(0)h},\eta_1^{(0)h})=(0,1)$. The other two can be derived integrating inward from infinity to the horizon with
$(K^{(0)\infty},\eta_1^{(0)\infty})=(1,0)$ and $(K^{(0)\infty},\eta_1^{(0)\infty})=(0,1)$. At the end we obtain:
\begin{equation}
X=
\begin{bmatrix}
K_h^{(1,0)} & K_h^{(0,1)} & K_\infty^{(1,0)} & K_\infty^{(0,1)}\\
\eta_h^{(1,0)} & \eta_h^{(0,1)} & \eta_\infty^{(1,0)} & \eta_\infty^{(0,1)}\\
K_h^{'(1,0)} & K_h^{'(0,1)} & K_\infty^{'(1,0)} & K_\infty^{'(0,1)}\\
\eta_h^{'(1,0)} & \eta_h^{'(0,1)} & \eta_\infty^{'(1,0)} & \eta_\infty^{'(0,1)}\\
\end{bmatrix}\ ,
\end{equation}
where primes denote derivatives with respect to the tortoise coordinate.
Once the numerical Fourier-domain solution is known, the gravitational waveform in the time domain is given by
\begin{eqnarray}
{\bf \Psi}(t)&=&\int_{-\infty}^{\infty}\frac{{\bf \Psi}^\textnormal{out}(r,\omega)e^{-i\omega t}}{\sqrt{2\pi}}d\omega\nonumber\\
&=&\int_{-\infty}^{\infty}\frac{{\bf A}^\textnormal{out}(\omega)e^{\sqrt{\mu^2-\omega^2}r}e^{-i\omega t}}{\sqrt{2\pi}}d\omega\label{timewave}\ ,
\end{eqnarray}
where ${\bf A}^\textnormal{out}(\omega)$ is the generic amplitude of the perturbation. Note that in order to compute the integral
\eqref{timewave} we need to specify an {\it extraction} radius, fixing $r=R\gg r_h$. The result of this approach is shown in
Fig.~1 and Fig.~3 of the letter for the $\ell=(0,1)$ polar sector. The waveforms have been produced using two independent
\texttt{Mathematica} codes, which yield the same output within the numerical accuracy of the software.
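As an illustration of eq.~\eqref{timewave}, the sketch below evaluates the integral for an assumed toy amplitude, restricting to propagating modes ($\omega>\mu$) and taking the real part as appropriate for a real signal (the overall $1/r$ factor is omitted); the $\omega$-dependent phase $e^{\sqrt{\mu^2-\omega^2}\,R}$ makes the dispersive distortion of the waveform explicit:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

mu, R = 0.3, 400.0                           # assumed values
w = np.linspace(mu + 1e-3, 4.0, 4000)        # propagating modes only
A = np.exp(-(w - 1.0)**2/0.1)                # toy stand-in for A_out
k = 1j*np.sqrt(w**2 - mu**2)                 # sqrt(mu^2 - w^2) for w > mu

def psi(t):
    # real-part form of the inverse transform at extraction radius R
    return np.sqrt(2.0/np.pi)*np.real(
        trapezoid(A*np.exp(k*R - 1j*w*t), w))
\end{verbatim}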
\section{The effective stress energy tensor}\label{sec:appendixF}
In this section we describe the basic equations to derive the explicit form of the GW luminosity for the dRGT theory.
We first compute an effective stress-energy tensor $T_{\mu\nu}^\textnormal{GW}$ for the metric perturbations
using the Noether theorem \cite{Finn:2002dd} applied to the Lagrangian \eqref{dgrtaction}, which reduces, in flat spacetime,
to the following expression:
\begin{eqnarray}
{\cal L}&=&\frac{1}{64\pi}\bigg[h_{\mu\nu,\lambda}h^{\mu\nu,\lambda}-2h_{\mu\nu}{^{,\nu}}h^{\mu\lambda}{_{,\lambda}}+2h_{\mu\nu}{^{,\nu}}h^{,\mu}\nonumber\\
&-&h^{,\mu}h_{,\mu}-32\pi h_{\mu\nu}T^{\mu\nu}+\mu^2(h_{\mu\nu}h^{\mu\nu}-h^2)\bigg]\,,\nonumber
\end{eqnarray}
such that
\begin{equation}
T_{\mu\nu}^\textnormal{GW}=\left\langle\frac{\partial {\cal L}}{\partial h^{\alpha\beta,\mu}}h^{\alpha\beta}{}_{,\nu}-\eta_{\mu\nu}{\cal L}\right\rangle \nonumber\,.
\end{equation}
Varying the Lagrangian, and using the field equations \eqref{fieldeq1}-\eqref{fieldeq3} to further simplify
our calculations, we find
\begin{equation}
T_{\mu\nu}^\textnormal{GW}
=\frac{1}{32\pi}\left\langle h_{\alpha\beta,\mu}h^{\alpha\beta}{_{,\nu}}-h_{,\mu}h_{,\nu}\right\rangle\ ,
\end{equation}
where angular brackets denote an average over several wavelengths. The GW luminosity is then given by integrating the
components $T^\textnormal{GW}_{0i}$, that provide the energy which flows across the unit surface orthogonal to the axis $x^i$ per
unit time, i.e.,
\begin{eqnarray}
L_\textnormal{GW}&=&\frac{dE}{dt}=\int T_{0i}^\textnormal{GW}n^ir^2d\Omega=\int T_{0r}^\textnormal{GW}r^2d\Omega\nonumber\\
&=&\frac{r^2}{32\pi}\int \langle h_{\alpha\beta,0}h^{\alpha\beta}{_{,r}}\rangle d\Omega=\frac{r^2}{32\pi} {\cal C}_{0r}\,,\label{LGW}
\end{eqnarray}
where $n^i=x^i/r$ is the unit vector specifying the normal to the surface element $dS^i$, and ${\cal C}_{0r}$ is the
angular average of the product of the metric perturbation derivatives.
We can write the previous expression directly in the frequency domain. We first note that the generic
perturbation $h_{\alpha\beta}$ depends on the angular variables through the spherical harmonics of the multipole decomposition.
Using the orthonormality properties of $Y_{\ell m }(\theta,\phi)$ we can immediately perform the integral over the solid angle in
eq.~\eqref{LGW}. Moreover, using the Fourier transform (in cartesian coordinates):
\begin{align}
h_{\alpha\beta}&(\vec{x},t)=\frac{1}{\sqrt{2\pi}}\sum_{\ell,m }\int_{-\infty}^{\infty}d\omega e^{-i\omega t}h^{\ell m}_{\alpha\beta}(\vec{x},\omega)\nonumber\\
&\simeq\frac{1}{\sqrt{2\pi}r}\sum_{\ell,m }\int_{-\infty}^{\infty}d\omega e^{-i\omega t}e^{k_\omega r}h^{\ell m}_{\alpha\beta}(\omega)\nonumber\\
&=\sqrt{\frac{2}{\pi}}\frac{1}{r}\sum_{\ell,m }\Re \int_{0}^{\infty}d\omega e^{-i\omega t}e^{k_\omega r} h^{\ell m}_{\alpha\beta}(\omega) \label{hrt}\,,
\end{align}
where $h^{\ell m }_{\alpha\beta}=h^{\textnormal{ax},\ell m }_{\alpha\beta}+h^{\textnormal{pol},\ell m }_{\alpha\beta}\ .$
We can now plug the former expressions into eq.~\eqref{LGW} and integrate over time, in order to compute the total emitted energy,
\begin{equation}
E=\frac{1}{16\pi}\sum_{\ell,m }\int_{0}^{\infty}d\omega \big[ {\cal C}^{\textnormal{ax},\ell m}_{0r}+{\cal C}^{\textnormal{pol},\ell m}_{0r} \big] \omega \sqrt{\omega^{2}-\mu^2}\,,\nonumber
\end{equation}
and the GW energy spectrum,
\begin{equation}
\frac{dE}{d\omega}=\frac{1}{16\pi}\sum_{\ell,m }[{\cal C}^{\textnormal{ax},\ell m}_{0r}+{\cal C}^{\textnormal{pol},\ell m}_{0r}]\omega \sqrt{\omega^{2}-\mu^2}\label{dEdo}\,,
\end{equation}
where,
\begin{eqnarray} {\cal C}_{0r}^{\text{pol},\ell m} =&\phantom{+}& \vert H_0(\omega)\vert^2 - 2\vert H_1(\omega)\vert^2 + \vert H_2(\omega)\vert^2 \nonumber\\
&-&\frac{2\lambda}{r^2}\vert \eta_0(\omega)\vert^2 + \frac{\lambda}{2}(\lambda-1)\vert G(\omega)\vert^2\nonumber\\
& +& 2\vert K(\omega)\vert^2 + \frac{2\lambda}{r^2}\vert \eta_1(\omega)\vert^2 \nonumber\\
&-& \lambda[K^*(\omega)G(\omega) + G^*(\omega)K(\omega)]\nonumber\\
{\cal C}_{0r}^{\text{ax},\ell m} = &-& \frac{2\lambda}{r^2}\vert h_0(\omega)\vert^2 + \frac{2\lambda}{r^2}\vert h_1(\omega)\vert^2 \nonumber\\
&+& \frac{\lambda(\lambda-2)}{2r^2}\vert h_2(\omega)\vert^2\,.\nonumber
\end{eqnarray}
For the polar $\ell=0$ case, we simply have:
\begin{equation}
\frac{dE^{\ell=0}}{d\omega}=3\frac{\vert\varphi_0(\omega)\vert^2}{8\pi}\omega\sqrt{\omega^2-\mu^2}\mu^4\ .\label{dedomegal0}
\end{equation}
As described in Sec.~\ref{sec:polarequations_l1}, for the dipolar mode the metric perturbations are completely determined by
$K$ and $\eta_1$. The remaining variables $(H_0,H_1,\eta_0,H_2)$ are therefore determined in terms of the former,
once the numerical solution is known.
Substituting these expressions within eq.~\eqref{dEdo} we find the
energy spectrum of the $\ell=1$ mode in the radial infall case:
\begin{align}
\frac{dE^{\ell=1}}{d\omega}&=\frac{\omega\sqrt{\omega^2-\mu^2}}{16\pi}\left(2\vert K\vert^2+4\frac{\mu^2}{\omega^2}\vert \eta_1\vert^2\right.\nonumber\\
&\left.+9\vert K +\eta_1\vert^2+\left(2\frac{\mu^2}{\omega^2}-1\right)\vert K +3\eta_1\vert^2\right)\ .
\end{align}
For circular orbits, we have also computed the luminosity $dE/dt$. The computation in this case is straightforward, as the
source terms of the field equations depend on $\delta(\omega- m\omega_p)$, where
$\omega_p=\sqrt{M/r_p^3}$ is the
Keplerian frequency of the test particle on the equatorial orbit specified by the radius $r_p$ (see Sec.~\ref{sec:appendixD}). Therefore,
we can write the generic perturbation as $h^{\ell m }_{\alpha\beta}(r,\omega)=\bar{h}^{\ell m }_{\alpha\beta}(r,\omega)\delta(\omega-m \omega_p)$, and integrate the Dirac's delta in eq.~\eqref{hrt}. Replacing this expression into the GW luminosity
\eqref{LGW}, and averaging over several periods, we finally obtain the energy per unit time emitted by the system, with
a test-body on circular geodesics,
\begin{align}
\frac{dE}{dt}=\frac{1}{16\pi} &\sum_{\ell,m }\sum_{\alpha,\beta=0}^4\bigg(\vert \bar{h}^{\textnormal{ax},\ell m }_{\alpha\beta}(\omega_p)\vert^2\nonumber\\
&+ \vert \bar{h}^{\textnormal{pol},\ell m }_{\alpha\beta}(\omega_p)\vert^2\bigg)
\omega_p \sqrt{\omega_p^{2}-\mu^2}.
\end{align}
Again, the $\ell=1$ component is explicitly given by
\begin{align}
\frac{dE^{\ell=1}}{dt}&=\frac{\omega_p\sqrt{\omega_p^2-\mu^2}}{32\pi^{2}}\left(2\vert \bar{K}\vert^2+4\mu^2/\omega_p^2\vert
\bar{\eta}_1\vert^2\right.\nonumber\\
&\left.+9\vert \bar{K} +\bar{\eta}_1\vert^2+(2\mu^2/\omega_p^2-1)\vert \bar{K} +3\bar{\eta}_1\vert^2\right)\ .
\end{align}
\section{Introduction}
General Relativity (GR) is special and unique, in a very precise mathematical sense~\cite{Berti:2015itd,Barack:2018yly}.
Nevertheless, several arguments suggest that such an elegant theory cannot easily accommodate either the ultraviolet or the infrared description of the universe. Simultaneously, observations of large-scale phenomena indicate that either the matter sector or the gravitational interaction requires a better understanding. In other words, extensions of GR are welcome. One of the possible extensions draws inspiration from the standard model of particle physics, and consists in
allowing for a massive graviton~\cite{Hinterbichler:2011tt,deRham:2014zqa,Barack:2018yly,Hassan:2012ka}.
Bounds on such theories can be imposed via gravitational-wave (GW) emission and propagation mechanisms. These include:
\noindent i. Modified dispersion relations for GWs, {\it assuming} that their generation is as in GR~\cite{Will:1997bb,TheLIGOScientific:2016src,Abbott:2017oio}.
\noindent ii. The spin-down of black holes (BHs), caused by superradiant instabilities~\cite{Brito:2013wya}.
\noindent iii. Changes in the orbital period of binary pulsars, caused by a different energy flux~\cite{Finn:2001qi}.
Other mechanisms may also help in bounding the graviton mass, such as modifications of the GW memory effect~\cite{Kilicarslan:2018bia}.
There are no constraints that use directly the measured properties of GWs, without any assumption on the production
mechanism. Our main concern here is precisely to compute the gravitational waveform and fluxes from the merger of two compact objects,
using the strong-field regime of massive gravity theories. We consider the ghost-free theory of two interacting spin-2 fields,
described in detail in the supplementary material.
Following all observational evidence thus far, we consider only BHs which are as similar as possible to those in GR; in particular, we study Schwarzschild BHs which are also exact solutions of massive bi-gravity theories. We focus on the truly unique features of massive gravity theories: the extra polarizations with respect to GR and their signatures on the GW emission.
We thus consider mergers of extreme-mass ratio objects in which the massive one is a Schwarzschild BH. We will show that
the extra degrees of freedom give rise to substantially different GW signals, even when the underlying backgrounds are exactly the same.
The calculations can, in principle, encompass also collapsing objects as long as the final state is a BH.
Finally, the extrapolation to nearly-equal mass objects
allows us to infer physics of interest to Earth-based detectors.
Throughout this work we use geometrized units, in which $G=c=1$.
\section{Formalism and master equations}\label{Sec:metricequations}
In our framework, a small point particle is orbiting, or merging with, a massive Schwarzschild BH of mass $M$. This system may
model the merger of a neutron star with a stellar-mass or a supermassive BH, and it may well describe qualitatively the merger of
two equal-mass BHs. In fact, the lesson from the two-body problem in GR is that perturbation
theory is able to account for this process even at a quantitative level~\cite{Cardoso:2014uka}. The point particle moves
on a spacetime geodesic $y_p^{\mu}(\tau)=(t_p(\tau),r_p(\tau),\theta_p(\tau),\varphi_p(\tau))$, with $\tau$ being the test body
proper time. The particle is taken to be pointlike and described by the stress-energy tensor
\begin{equation}
T^{\mu\nu}=m_p\int (-g)^{-1/2} u^\mu u^\nu \delta^{(4)}(x^\beta-y_p^\beta)d\tau\ ,\label{stressnergy}
\end{equation}
where $m_p$ is the rest mass of the test particle and $u^\mu=dy_p^{\mu}/d\tau$ its 4-velocity. The point particle stress
slightly disturbs the background geometry $\bar{g}_{\mu\nu},\, \bar{f}_{\mu\nu}$ (the theory has two metrics) describing
the BH and a graviton of mass $\mu$. The latter is given by a specific combination of the coupling parameters of the
theory~\cite{Brito:2013wya}. Here, we study backgrounds for which the two
metrics $\bar{g}_{\mu\nu}$ and $\bar{f}_{\mu\nu}$ are proportional, leading to geometries which coincide with those of GR~\cite{Hassan:2012wr}. The stress-energy tensor sources fluctuations $(\delta g_{\mu\nu}, \delta f_{\mu\nu})$, which we decompose in tensor spherical harmonics and Fourier modes. Details are left for the supplementary material.
\subsection{Head-on collisions} \label{sec:headon_eqs}
Hereafter, we consider two prototypical dynamical processes: radial infall corresponding to head-on collisions, and pure equatorial
motion corresponding to quasicircular inspirals (once radiation reaction is taken into account). The complete expressions for the
source components in these two specific configurations are shown in Sec.~IV of the supplementary material.
For radial motion, axial perturbations are not excited. The multipolar expansion then describes only polar-type perturbations with $
\ell\geq0$. Of these, the $\ell\geq 2$ equations contain small $\mu$-dependent corrections to the GR expressions.
We do not consider these any further~\footnote{Such corrections were studied in some detail in the weak-field, slow-motion limit elsewhere~\cite{Finn:2001qi}.} and focus on the truly unique properties of massive gravity: the presence of new degrees of freedom, described by the $\ell=0$ and $\ell=1$ modes.
For the monopole, $\ell=0$ mode, the number of perturbation functions reduces to the four metric components $(H_0,H_1,H_2,K)$
(see supplementary material and Ref.~\cite{Brito:2013wya}). Through the following transformation:
\begin{equation}
K=\frac{\sqrt{-4 \mu ^2 M+\mu ^4 r^3+2 \mu ^2 r+4 r \omega ^2}}{r^{5/2}}\varphi_0\ ,
\end{equation}
we obtain a single wave equation for $\varphi_0$:
\begin{equation}
\frac{d^2\varphi_0}{dr_{\star}^2} + [\omega^2-V^{\ell=0}_\textnormal{pol}(r,\omega)]\varphi_0 = {\bf {\cal S}}^{\ell=0}_\textnormal{pol}\ .\label{eq:l0polar}
\end{equation}
Here, $V^{\ell=0}_\textnormal{pol}(r,\omega)$ is a radial potential whose expression is lengthy and not very illuminating, while $r_\star$ is a tortoise coordinate defined by $dr_\star/dr=1/f$. The potential
$V^{\ell=0}_\textnormal{pol}(r,\omega)\sim \mu^2$ at large spatial distances, and it vanishes close to the BH horizon. The source
term $ {\bf {\cal S}}^{\ell=0}_\textnormal{pol}$ depends on the radial position and on the energy with which the point particle is colliding.
In the highly relativistic regime,
\begin{equation}
{\bf {\cal S}}^{\ell=0}_\textnormal{pol}=\frac{8 \sqrt{2} m_p\gamma (r-2 M) \left(\mu ^2 r+2 i \omega \right) e^{i \omega t_p(r)}}{\sqrt{r} \left(-4 \mu ^2 M+\mu ^4 r^3+2 \mu ^2 r+4 r \omega ^2\right)^{3/2}}\ ,\label{eq:l0polar_source}
\end{equation}
where $\gamma$ is the Lorentz boost factor of the test particle at large spatial separations. Note that the $z$-axis is chosen to coincide with the particle trajectory, hence only $m=0$ modes are excited.
For the dipole $\ell=1$ term the perturbations are completely determined by two coupled equations for $K$ and $\eta_1$, which
can be recast in a linear form as:
\begin{equation}
\left[\frac{d}{dr_\star}+V^{\ell=1}_\textnormal{pol}(r)\right]{\bf \Sigma}={\bf {\cal S}}^{\ell=1}_\textnormal{pol} \ , \label{poleql1}
\end{equation}
where ${\bf \Sigma}=(K,\eta_1,dK/dr_\star,d\eta_1/dr_\star)^\textnormal{T}$, and $V^{\ell=1}_\textnormal{pol}$ is a $4\times 4$
matrix given in the supplementary material.
For a radial infalling particle with a relativistic boost factor, the source vector is simply given by
\begin{equation}
{\bf {\cal S}}^{\ell=1}_\textnormal{pol}=(0,0,S_K,S_{\eta_1})=(0,0,f(r)/r,1)S_{\eta_1}\ ,\label{sourcel10radial}
\end{equation}
where $f(r)=(1-2M/r)$ and
\begin{equation}
S_{\eta_1}=-\frac{8\sqrt{6}m_p\gamma(2+r^2\mu^2+2i r\omega)e^{i\omega t_p(r)}}{4M r^2\mu^2-8M-6r^3\mu^2-r^5\mu^4-4r^3\omega^2
}\ .\label{sourcel1radial}
\end{equation}
\subsection{Quasi-circular inspirals} \label{sec:inspiral_eqs}
For circular motion, the only non-trivial new degree of freedom is the dipolar-polar component.
Our system of equations can be written as
\begin{eqnarray}
&&K''+ a_1 K'+a_2K+a_3\eta_1'+a_4\eta_1=S_1\delta(r-r_p)\,,\label{eq:K}\\
&&\eta_1''+ b_1 \eta_1'+b_2\eta_1+b_3K'+b_4K=S_2\delta(r-r_p)\,,
\end{eqnarray}
where primes denote derivatives with respect to the tortoise coordinate, and $r_p$ is the orbital radius of the test particle.
The system above can be cast in the form
\begin{equation}
\left[\frac{d}{dr_\star}+V^{\ell=1}_\textnormal{pol}(r)\right]{\bf \Sigma}={\bf {\cal S}}^{\ell=1}_\textnormal{circ} \,, \label{l1poleq}
\end{equation}
where ${\bf \Sigma}=(K,\eta_1,dK/dr_\star,d\eta_1/dr_\star)^\textnormal{T}$ and ${\bf {\cal S}}^{\ell=1}_\textnormal{circ}=(0,0,S^\textnormal{circ}_K,S^\textnormal{circ}_{\eta_1})$.
We solve eq.~\eqref{l1poleq} by first constructing a $4\times 4$ fundamental matrix $X$ built from the homogeneous solutions of
the previous system (see Sec.~V of the supplementary material), which yields the general solution:
\begin{equation}
{\bf \Sigma}(\omega,r)=X\int^{\infty}_{-\infty} X^{-1}{\bf {\cal S}}^{\ell=1}_\textnormal{circ} dr_\star\ .\label{circsol}
\end{equation}
Note that the source vector contains a linear combination of the Dirac delta and its first derivative.
Therefore, integrating eq.~\eqref{circsol} by parts we can immediately obtain an explicit form for the metric
functions, ${\bf \Sigma}(\omega,r)=X[{\bf A}+{\bf B}]$, where ${\bf A}$ and ${\bf B}$ are two
vectors given by:
\begin{subequations}
\begin{align}
{\bf A}=&\left(1-\frac{2M}{r_p}\right)X^{-1}(r_p){\bf {\cal S}}^{\ell=1}_\textnormal{circ}(r_p)\ ,\\
{\bf B}=&-\frac{d}{dr}\left[\left(1-\frac{2M}{r}\right)X^{-1}{\bf {\cal S}}^{\ell=1}_\textnormal{circ}\right]_{r=r_p}\ .
\end{align}
\end{subequations}
\section{Numerical results}\label{sec:numerical}
In this section we describe the numerical results obtained by solving the systems of ODEs for the
monopole and dipole component of the polar sector. As described in Sec.~\ref{Sec:metricequations},
we consider circular and radial trajectories: for both configurations, axial modes are
not excited, as the source terms vanish. We integrate eq.~\eqref{eq:l0polar} and eq.~\eqref{poleql1}
through a Green function approach, with appropriate boundary conditions at the BH horizon and at spatial
infinity (see the supplementary material for further details).
\subsection{Head-on collisions}\label{sec:numerical_headon}
\begin{figure}[th]
\centering
\includegraphics[width=5.5cm]{./myfig1}
\caption{GW energy spectrum $dE/d\omega$ for the $\ell=0$ polar mode, with $M\mu=(0.1,0.01)$
and a radial infalling particle.}
\label{fig:l0_amp}
\end{figure}
\begin{figure}[th]
\begin{tabular}{c}
\includegraphics[width=6cm]{./myfig2a}\\
\includegraphics[width=6cm]{./myfig2b}
\end{tabular}
\caption{Gravitational waveforms for the $\ell=0$ component of the polar sector, and a radial infalling particle, as a function of the retarded time $(t-r)/M$.
We consider $M\mu=0.01$ and different extraction radii $R=(10,100)$. The waveform scales trivially with the BH mass $M$ and the particle mass and boost $m_p, \gamma$, according to eq.~\eqref{peak_time}. \label{fig:l0_polar} }
\end{figure}
For head-on collisions, the waveform amplitude scales linearly with the mass of the infalling point particle, and the only free parameter is the relative velocity at large distances. We fix this to be relativistic, and we find, as expected, that the amplitude then scales linearly with the boost factor $\gamma$. Although our formalism includes the general case, relativistic collisions should mimic well the late stages of an inspiral. In addition, and perhaps more importantly for us here, they should also describe explosive events such as supernovae: in theories of massive gravity, even spherically symmetric explosive events release a non-negligible amount of radiation in the monopole mode.
The energy spectrum $dE/d\omega$ for the monopole perturbation is shown
in Fig.~\ref{fig:l0_amp} as a function of the frequency $\omega$, for a head-on collision.
The spectrum peaks close to the value of the graviton mass, and quickly decays to zero for higher frequencies. The total integrated energy is not shown but it scales like $E_{\rm tot}\sim 0.01 \mu m_p^2\gamma^2$ at small couplings $M\mu$.
Knowing the solution in the frequency domain, we can immediately compute the GW signal as a function of the retarded time
by applying an inverse Fourier transform to $\varphi_0(\omega)$. This is shown in Fig.~\ref{fig:l0_polar} for two values
of the extraction radii $R=r\mu=(10,100)$~\cite{extraction_radii}. It is important to highlight that GWs in theories of massive gravity are {\it dispersed}: the waveform at large distances is no longer a function of $t-r$ only. This property is apparent in Fig.~\ref{fig:l0_polar} and was also recently discussed in other setups~\cite{Sperhake:2017itk}. We find that the peak of the (time-domain) waveform can be described by the following scaling,
\begin{equation}
\varphi_0^{\rm peak}\sim \kappa\frac{m_p\gamma M^2}{(M\mu)^{5/2}R^{1/2}}\,,\label{scalingl0}
\end{equation}
where $\kappa\simeq0.055$, when the extraction radius $R>1$. This is not too surprising, given the $\mu$-dependence of the source term,
eq.~\eqref{eq:l0polar_source}, at low frequencies $\omega\sim \mu$. Our results indicate that the peak of the amplitude, with respect to the beginning of the signal, can be approximated by the following law,
\begin{eqnarray}
&&(t-r)^{\rm peak}\sim (M\mu)^{-0.34} MR\nonumber\\
&\sim& 1800 \left(\frac{M}{M_{\odot}}\right)^{0.66}\left(\frac{\mu}{10^{-23}{\rm eV}}\right)^{0.66}\frac{r}{8{\rm Kpc}}{\rm secs}\,.\nonumber
\end{eqnarray}
When expressed in terms of physical metric perturbations, we find
\begin{eqnarray}
K^{\rm peak}&=&\kappa\frac{m_p\gamma}{M}\left(\frac{M}{r}\right)^{3/2}\frac{1}{M\mu} \label{peak_time}\\
&\sim&10^{-16}\frac{m_p\gamma}{0.01M}\sqrt{\frac{M}{M_{\odot}}}\left(\frac{8\,{\rm Kpc}}{r}\right)^{3/2}\frac{10^{-23}\,{\rm eV}}{\mu}\nonumber\\
&\sim&10^{-22}\frac{m_p\gamma}{0.01M}\sqrt{\frac{M}{M_{\odot}}}\left(\frac{{\rm Gpc}}{r}\right)^{3/2}\frac{10^{-25}\,{\rm eV}}{\mu}\ .\nonumber
\end{eqnarray}
These numbers are encouraging; however, the large-amplitude signals carry a low-frequency content $\omega \sim \mu$, corresponding to a frequency~\cite{Brito:2015yfh}
\begin{equation}
f\sim 2.5\times 10^{-9}\left(\frac{\mu}{10^{-23}\,{\rm eV}}\right)\,{\rm Hz}\,.
\end{equation}
Thus, observations of these signals will require low-frequency sensitive detectors.
At late times and large extraction radii, the waveform is exponentially damped. We cannot rule out a power-law decay at very late times.
We have searched for the characteristic ringdown modes in this theory and find good agreement both with previously reported values~\cite{Brito:2013wya} and with those inferred from the time-domain waveforms. We note in particular the presence of an unstable mode, which does not seem to be significantly excited on these timescales.
\begin{figure}[th]
\includegraphics[width=6cm]{myfig3.pdf}
\caption{Waveforms obtained for the $\ell=1$ polar mode, derived for a radial infalling particle
with source term given by eq.~\eqref{sourcel1radial}, as a function of the retarded time $(t-r)/M$.
The panel refers to $M\mu=0.01$ at extraction radius $R=r\mu=10$. The overall behavior is the same as for the
monopole $\ell=0$ mode.
The maximum amplitudes of the two metric functions scale as $K^{\rm peak}\sim m_p\gamma \delta_1\sqrt{M\mu}/R^{1.3}$ and
$\eta_1^{\rm peak}\sim m_p\gamma M\delta_2(M\mu)^{-0.04}/R^{1/2}$
where $(\delta_1,\delta_2)\simeq(0.84,1.3)$ (these expressions also provide the scaling with $M, m_p, \gamma$).}
\label{fig:l1_radial_polar}
\end{figure}
Waveforms for the $\ell=1$ mode are shown in Fig.~\ref{fig:l1_radial_polar} (again for relativistic collisions).
The maximum value of the amplitudes can be described again with a scaling factor of the form given by
eq.~\eqref{scalingl0} (see caption of Fig.~\ref{fig:l1_radial_polar}).
Our results can also be applied to spherically symmetric collapse: in such a case, the point-particle source term is trivially replaced
by that of a spherically symmetric shell, leaving its form unchanged. Even if only $1\%$ of the star's rest mass is involved in the collapse, our results indicate that the peak waveform is detectable when $\mu$ is small enough. In fact, eq.~\eqref{peak_time} implies that stronger constraints can be obtained via (non-)observations of GWs from collapsing stars in our galaxy. Such conclusions are also consistent with recent results on core-collapse supernovae in massive scalar-tensor theories of gravity~\cite{Sperhake:2017itk}.
\subsection{Particles in circular motion}\label{sec:numerical_circular}
\begin{figure}[th]
\includegraphics[width=6.6cm]{myfig4.pdf}
\caption{GW luminosity $dE/dt$ for the $(\ell,m )=(1,1)$ polar mode as a function of the spin-2 field mass $M\mu$
for different radii $\bar{r}=r_p/M$ of the test particle on circular orbits around the BH.}
\label{fig:l1_circular_luminosity}
\end{figure}
Quasi-circular inspirals in the weak-field, slow-motion approximation have been used to impose constraints on massive theories of gravity
using pulsar timing observations~\cite{Finn:2001qi}. Those constraints used only corrections to the quadrupole formula, which scale like $\mu^2$. Our results include relativistic motion in strong-gravity situations. In GR, particles in circular motion excite only quadrupolar or higher modes. As we saw, a new, dipolar mode arises in massive gravity, the energy flux of which is shown in Fig.~\ref{fig:l1_circular_luminosity}.
For a particle in a circular orbit of radius $r_p$, our results indicate that the flux in the $\ell=m=1$ mode scales like $1/r_p^4$, a truly dipolar behavior, with BHs having a nontrivial dipolar charge in this theory. Furthermore, the charge is non-negligible at small $M\mu$. We find a flux $dE/dt\sim 0.6 m_p^2M^2/r_p^4$~\cite{vainshtein}. On the other hand, the quadrupole formula in GR predicts a quadrupolar emission $dE/dt=(32/5)m_p^2M^3/r_p^5$. This is one of our main results: the dipolar emission in massive gravity theories dominates the GR quadrupolar term, at arbitrarily small $\mu$. Thus, observations of binary BHs can potentially be used to rule out these theories~\cite{Barausse:2016eii}. We are extrapolating point-particle results to BH spacetimes. Such a procedure was shown to be robust in GR
when the interacting objects are both BHs~\cite{Berti:2010ce,Tiec:2014lba,Sperhake:2011ik}. When stars are involved, interference effects decrease the total energy output~\cite{Haugan:1982fb,1985ApJS...58..297P}.
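To make this comparison concrete, taking the two flux estimates above at face value, their ratio is
\[
\frac{(dE/dt)_{\rm dipole}}{(dE/dt)_{\rm quadrupole}}\sim\frac{0.6\,m_p^2M^2/r_p^4}{(32/5)\,m_p^2M^3/r_p^5}\simeq 0.09\,\frac{r_p}{M}\,,
\]
so the dipolar flux overtakes the quadrupolar one at orbital radii $r_p\gtrsim 11M$, i.e., during most of the inspiral of a widely separated binary.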
\section{Discussion} \label{sec:discussion}
We have worked out the details of gravitational radiation in theories of massive gravity
when two BHs merge. It is clear that a substantial fraction of the radiation emitted in this process decays slowly at large distances.
In fact, because the graviton is massive, low-energy GWs are confined. This was also noticed in Ref.~\cite{Sperhake:2017itk}.
Such radiation will clearly have an impact on any star or object located within its sphere of influence, but such effects are unknown to us so far. Circular motion at an orbital frequency $\omega=\mu$ will likely lead to resonant excitations of dipolar GWs.
Unfortunately, the numerical study of such resonances is a challenging task~\cite{Cardoso:2011xi,Yunes:2011aa,Fujita:2016yav}, upon which we did not embark.
Technically, our procedure is free of computational challenges. The perturbative framework that we use is an expansion in the mass ratio. All the observables that we extract are finite, and tend to zero as the mass ratio decreases. Thus, perturbation theory is applicable and never breaks down as long as mass ratios are sufficiently small (in a well-defined manner). The numerical results are convergent and very clear: we show that new modes are excited to a substantial amplitude,
both in head-on collisions and in quasi-circular motion. For head-on collisions, because new modes are excited at characteristically small frequencies, GW detectors sensitive to low-frequency radiation will be able to impose constraints on the mass of the graviton tighter than ever before. In fact, if we trust that our results carry over to two nearly equal-mass neutron stars, then the constraints on the mass of the graviton will be improved by two orders of magnitude or more. The dipolar mode excited by quasi-circular inspirals is in fact dominant with respect to the GR quadrupolar emission. Thus, accurate observations of binary BHs have the potential to tightly constrain massive gravity.
Alternatively, our results may be a manifestation of the fact that the background geometry does not describe astrophysical BHs. Indeed, it can be shown that Schwarzschild (and Kerr) BHs are unstable in theories with a massive graviton~\cite{Babichev:2013una,Brito:2013wya,Brito:2013xaa}. Nevertheless, for small mass coupling $M\mu$ the instability timescale is extremely large, and the spacetime responds to short-timescale phenomena ``unaware'' of the instability. Thus, sufficiently short-scale phenomena are expected to produce Schwarzschild BHs, and our methods and results apply in the regime where we would like them to, which is that of small graviton masses.
In addition, numerical results suggest that when one of the metrics is taken to be non-dynamical, hairy stationary BHs do not even exist~\cite{Brito:2013xaa,Volkov:2016ehx}. One cannot exclude the possibility that a viable astrophysical BH is described by a dynamical metric~\cite{Rosen:2017dvn}, in which case our results could change considerably. In particular, Vainshtein screening, which is absent in our background solutions, may play a critical role in more generic background BH solutions. Notwithstanding, it is clear that GW astronomy carries a huge potential to understand theories of massive gravity: the existence of extra degrees of freedom leads in general to substantially different dynamics and gravitational-wave emission. To fully realize this potential, several challenges (including the correct description of astrophysical BHs) need to be seriously tackled.
\begin{acknowledgments}
We are indebted to Evgeny Babichev, Claudia de Rham, Chris Moore, Andrew Tolley and Luis Lehner for many and very useful comments,
discussions and suggestions.
The authors acknowledge financial support provided under the European Union's H2020 ERC
Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's theory'' grant
agreement no. MaGRaTh--646597.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 690904.
We acknowledge financial support provided by FCT/Portugal through grant PTDC/MAT-APL/30043/2017.
We acknowledge the SDSC Comet and TACC Stampede2 clusters through NSF-XSEDE Award No.~PHY-090003.
The authors would like to acknowledge networking support by the GWverse COST Action CA16104, ``Black holes, gravitational waves and fundamental physics.''
\end{acknowledgments}
\section{Introduction}
A family of probability distributions is called a statistical model.
Maximum likelihood estimation is a method of estimating the true probability distribution as the one that maximizes the likelihood of the observed data.
The probability distribution (or often the point in an associated parameter space) that maximizes the likelihood is called a \emph{maximum likelihood estimate (MLE)}.
One important problem is to understand the minimal number of samples required such that, almost surely,
(1) the likelihood function is bounded from above,
(2) MLEs exist, and
(3) there is a unique MLE.
Surprising connections between sample size thresholds for a class of models called Gaussian group models and stability notions in invariant theory were recently discovered in~\cite{AKRS}.
In this paper, we study sample size thresholds for \emph{tensor normal models}, which fall under the purview of Gaussian group models and are hence amenable to techniques from invariant theory.
The setting of invariant theory that relates to tensor normal models is that of the so-called tensor actions, i.e., the natural action of the group ${\rm SL}_{d_1} \times {\rm SL}_{d_2} \times \dots \times {\rm SL}_{d_k}$ on ${\mathbb F}^{d_1} \otimes {\mathbb F}^{d_2} \otimes \dots \otimes {\mathbb F}^{d_k}$, where ${\mathbb F}$ is the underlying field (either~${\mathbb R}$ or ${\mathbb C}$) and ${\rm SL}_{d_i}$ denotes the group of $d_i \times d_i$ matrices with determinant one.
Tensor normal models are statistical models consisting of multivariate Gaussian distributions whose concentration matrix is a Kronecker (or tensor) product of several matrices.
These are particularly useful in studying data that naturally occurs as multi-dimensional arrays.
Examples include wood density in given growth rings and directions at several heights in a tree trunk~\cite{KZ}, monitoring of a vector of physiological variables in different organs over multiple days~\cite{RL}, and $3$-dimensional spatial glucose content data~\cite{Man-etal}. Moreover, tensors are ubiquitous in big data applications.
A special case of tensor normal models is the \emph{matrix normal model}, where the concentration matrix is a Kronecker product of exactly two matrices.
Sample size thresholds for matrix normal models have been investigated in~\cite{Dut99,Ros,Srivastava,Drton-etal,ST,AKRS,DM-mle}.
In particular, a complete answer for matrix normal models was obtained in~\cite{DM-mle} with techniques from quiver representations.
We do not use quiver representations in this paper, but instead we use castling transforms and results on stabilizers in general position.
It is worth mentioning that the invariant theory for tensor actions with two tensor factors (which corresponds to the matrix normal models) is well understood and we have efficient algorithms, see~\cite{GGOW,DM,IQS,IQS2,DM-arbchar,DM-oc,AZGLOW}, whereas the invariant theory gets significantly more difficult for three and more tensor factors, see~\cite{BGOWW,BFGOWW,DM-exp} for more details.
To find the MLE, one can use the so-called flip-flop algorithm~\cite{Dut99,LZ1,LZ2,Werner} for matrix normal models and its natural generalizations to tensor normal models, which is closely related to a recent alternating minimization algorithm in the invariant theory of tensor actions~\cite{AKRS,BGOWW,FORW}.
In general, MLEs for Gaussian group models can be found using the geodesic optimization algorithms in~\cite{BFGOWW}.
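For orientation, in the matrix normal case ($k=2$) the flip-flop iteration takes a particularly simple form. A sketch of it, with the samples $Y_i$ viewed as $d_1 \times d_2$ matrices, the covariance factors $\Sigma_j = \Psi_j^{-1}$, and up to the normalization conventions of the works cited above, alternates the updates
\[
\Sigma_1 \leftarrow \frac{1}{m d_2}\sum_{i=1}^m Y_i\,\Sigma_2^{-1}\,Y_i^\dag,
\qquad\qquad
\Sigma_2 \leftarrow \frac{1}{m d_1}\sum_{i=1}^m Y_i^\dag\,\Sigma_1^{-1}\,Y_i,
\]
where each update maximizes the likelihood in one factor while the other is held fixed.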
A separate motivation for studying the questions in this paper comes from quantum information.
Here tensors describe the states of a quantum mechanical system, and our invariant theoretic results characterize the existence of states with certain prescribed marginals, see~\cite{Klyachko,EntPoly,Walter,BRVR,BRVRquantum} for details.
\subsection{Tensor normal models}\label{subsec:models intro}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Let $\PD_n$ denote the cone of positive definite $n \times n$ matrices with entries in ${\mathbb F}$. For an $n$-dimensional centered Gaussian distribution with concentration matrix $\Psi \in \PD_n$, the density function is defined as
\[
f_\Psi(y) = \det \left( \frac{\Psi}{2 \pi} \right)^{1/2} e^{-\frac{1}{2} y^\dag \Psi y},
\]
where $y^\dag$ denotes the adjoint (conjugate transpose) of $y$.
Given a subset $\mathcal{M} \subseteq \PD_n$, we define the corresponding \emph{Gaussian model} as the statistical model consisting of the distributions with concentration matrix~$\Psi \in \mathcal M$.
Then the likelihood function $L_Y\colon \mathcal M \rightarrow {\mathbb R}$ is, for $m$ samples specified by an $m$-tuple $Y = (Y_1,\dots,Y_m) \in ({\mathbb F}^n)^m$, given by
\[
L_Y(\Psi) = \prod_{i=1}^m f_\Psi(Y_i) = \det \left(\tfrac{\Psi}{2\pi}\right)^{m/2} e^{-\frac{1}{2} \sum_{i=1}^m Y_i^\dag \Psi Y_i}.
\]
The log-likelihood function is then (up to an additive constant)
\begin{equation}\label{eq:l_Y intro}
l_Y(\Psi) = \frac{m}{2} \log \det (\Psi) - \frac{1}{2} {\rm Tr} \left(\Psi \sum_{i=1}^m Y_i Y_i^\dag \right).
\end{equation}
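Here we used that each summand $Y_i^\dag \Psi Y_i$ is a scalar, together with the cyclicity of the trace:
\[
\sum_{i=1}^m Y_i^\dag \Psi Y_i = \sum_{i=1}^m {\rm Tr}\bigl(\Psi\, Y_i Y_i^\dag\bigr) = {\rm Tr}\Bigl(\Psi \sum_{i=1}^m Y_i Y_i^\dag\Bigr).
\]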
A \emph{maximum likelihood estimate (MLE)} given~$Y$ is a concentration matrix~$\hat{\Psi} \in \mathcal{M}$ that maximizes the likelihood of observing the data~$Y$, i.e., $l_Y(\hat{\Psi}) \geq l_Y(\Psi)$ for all $\Psi \in \mathcal{M}$.
For an MLE to exist, it is therefore necessary (but not necessarily sufficient) that~$l_Y$ is bounded from above.
Even when they exist, MLEs need not be unique.
For $d_1,\dots,d_k \in {\mathbb Z}_{>0}$, the Gaussian model $\mathcal{M}(d_1,\dots,d_k) = \{\Psi_1 \otimes \Psi_2 \otimes \dots \otimes \Psi_k\ |\ \Psi_i \in \PD_{d_i} \} \subseteq \PD_n$ (where $n = d_1 d_2 \cdots d_k$) is called a \emph{tensor normal model}.
When we want to differentiate between the real and the complex model, we will write $\mathcal{M}_{\mathbb R}(d_1,\dots,d_k)$ and $\mathcal{M}_{\mathbb C}(d_1,\dots,d_k)$ respectively.
For the tensor normal model $\mathcal{M}(d_1,\dots,d_k)$, a sample can be thought of not only as a vector of size~$n$, but also as a $k$-tensor with local dimensions $d_1,d_2,\dots,d_k$.
The latter viewpoint will be particularly useful.
Accordingly, we define ${\mathbb F}^{d_1,\dots,d_k} \coloneqq {\mathbb F}^{d_1} \otimes {\mathbb F}^{d_2} \otimes \dots \otimes {\mathbb F}^{d_k}$.
Then a sample for the tensor normal model $\mathcal{M}(d_1,\dots,d_k)$ is simply a point in the tensor space ${\mathbb F}^{d_1,\dots,d_k}$. We also write ${\mathbb F}^{d_1,\dots,d_k;m}$ for $({\mathbb F}^{d_1,\dots,d_k})^{\oplus m}$.
\subsection{Main results on sample size thresholds}
Generalizing the quantity $R(d_1,\dots,d_k)$ defined in~\cite{BRVR}, we consider
\begin{align*}
R(d_1,\dots,d_k;m) \coloneqq m \prod_{i=1}^k d_i + \sum_{n=1}^k (-1)^n \sum_{1\leq i_1 < \ldots < i_n \leq k} \gcd(d_{i_1},\dots,d_{i_n})^2,
\end{align*}
as well as the following two quantities:
\begin{align*}
g_{\max}(d_1,\dots,d_k) \coloneqq \max_{i<j} \gcd(d_i,d_j), \qquad
\Delta(d_1,\dots,d_k; m) \coloneqq m \prod_{i=1}^k d_i - 1 - \sum_{i=1}^k (d_i^2 - 1).
\end{align*}
By convention, $g_{\max}(d) = 1$ for any $d\in{\mathbb Z}_{>0}$.
With this convention, all three quantities are invariant under leaving out dimensions equal to one.
The following theorem shows that these quantities precisely predict the almost sure behavior of the MLE.
By almost surely we mean that the stated property holds for all~$Y$ away from a subset of ${\mathbb F}^{d_1,\dots,d_k;m} \cong ({\mathbb F}^n)^m$ of Lebesgue measure zero.
\begin{theorem}\label{thm:main}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Consider $m$ samples $Y = (Y_1,\dots,Y_m)$ of the tensor normal model $\mathcal{M}(d_1,\dots,d_k)$.
Let $R = R(d_1,\dots,d_k;m)$, $\Delta = \Delta(d_1,\dots,d_k;m)$, and $g_{\max} = g_{\max}(d_1,\dots,d_k)$.~Then:
\begin{enumerate}
\item If $R > 0$, then almost surely an MLE exists. Furthermore:
\begin{itemize}
\item If $m \geq 2$,
the MLE is almost surely unique if and only if $R > g_{\max}^2$ or $g_{\max}\!=\!1$.
\item If $m = 1$,
the MLE is almost surely unique if and only if $\Delta \geq -1$.
\end{itemize}
\item If $R = 0$, then almost surely an MLE exists. It is almost surely unique if and only if~$g_{\max}\!=\!1$.
\item If $R < 0$, then the likelihood function is always unbounded from above.
\end{enumerate}
\end{theorem}
\begin{remark}
It was conjectured in~\cite{Drton-etal} and proved in~\cite{DM-mle} that for matrix normal models (tensor normal models with $k=2$), almost sure boundedness of the log-likelihood function implies almost sure existence of an MLE.
Theorem~\ref{thm:main} implies that the same holds for all tensor normal models.
\end{remark}
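To see how these quantities interact in a small case, take, purely for illustration, $(d_1,d_2,d_3;m)=(2,2,3;1)$. Then
\begin{align*}
R &= 12 - (2^2+2^2+3^2) + \bigl(\gcd(2,2)^2+\gcd(2,3)^2+\gcd(2,3)^2\bigr) - \gcd(2,2,3)^2\\
&= 12 - 17 + 6 - 1 = 0,
\end{align*}
while $g_{\max} = 2$ and $\Delta = 12 - 1 - (3+3+8) = -3$. By Theorem~\ref{thm:main}, an MLE thus exists almost surely already for a single sample, but it is almost surely not unique, since $g_{\max}\neq 1$.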
From Theorem~\ref{thm:main}, we can extract the following result.
Let us denote by ${\rm mlt}_b$ (resp.\ ${\rm mlt}_e$, ${\rm mlt}_u$) the smallest integer~$m_0$ such that, for all $m \geq m_0$, the log-likelihood function for $Y \in {\mathbb F}^{d_1,\dots,d_k;m}$ is almost surely bounded from above (resp.\ MLEs exist, the MLE exists uniquely).
\begin{corollary}\label{cor:threshold}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Consider the tensor normal model $\mathcal{M}(d_1,\dots,d_k)$.
Without loss of generality, assume $2 \leq d_1 \leq d_2 \leq \dots \leq d_k$, and assume $k \geq 3$.
Let $r = \frac{d_k}{d_1d_2\cdots d_{k-1}}$.
Then
\[
\lceil r \rceil \leq {\rm mlt}_b = {\rm mlt}_e \leq {\rm mlt}_u \leq \lceil r \rceil + 1.
\]
\end{corollary}
We note that for the case $k = 2$, a complete answer is known~\cite{DM-mle}.
The case $k = 1$ is trivial.
Corollary~\ref{cor:threshold} gives nearly tight bounds on sample size thresholds.
However, we note that for any particular choice of $d_1,\dots,d_k$, we can always use Theorem~\ref{thm:main} to get exact sample size thresholds.
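For instance, again purely as an illustration, consider $\mathcal{M}(2,3,24)$, for which $r = 24/6 = 4$. A direct computation gives
\[
R(2,3,24;m) = 144m - 576, \qquad g_{\max} = \gcd(3,24) = 3,
\]
so $R = -144, 0, 144$ for $m = 3,4,5$. Hence the likelihood function is always unbounded for $m = 3$; for $m = 4$ an MLE exists almost surely but is almost surely not unique (as $R = 0$ and $g_{\max} \neq 1$); and for $m \geq 5$ the MLE is almost surely unique (as $R > g_{\max}^2$). Thus ${\rm mlt}_b = {\rm mlt}_e = 4 = \lceil r\rceil$ and ${\rm mlt}_u = 5 = \lceil r\rceil + 1$, saturating both bounds in Corollary~\ref{cor:threshold}.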
\subsection{Main results in invariant theory}
Recently, Am\'endola, Kohn, Reichenbach and Seigal~\cite{AKRS} established a connection between a class of Gaussian models called Gaussian group models and the invariant theory of a corresponding group action (see Theorem~\ref{theo:AKRS}).
We revisit this connection in Section~\ref{sec:gg-inv}.
As mentioned previously, the group action that corresponds to tensor normal models is the tensor action.
Given natural numbers $d_1,\dots,d_k,m\in {\mathbb Z}_{>0}$, we denote by $\rho_{d_1,\dots,d_k;m}$ the natural representation of $G = {\rm SL}_{d_1}({\mathbb F}) \times \cdots \times {\rm SL}_{d_k}({\mathbb F})$ on $V = {\mathbb F}^{d_1,\dots,d_k;m}$.
Theorem~\ref{thm:main} is a consequence of the following invariant-theoretic result (see Section~\ref{sec:gg-inv} for the definitions of stability).
\begin{theorem}\label{thm:main-inv}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Consider the tensor representation $\rho = \rho_{d_1,\dots,d_k;m}$.
Let $R = R(d_1,\dots,d_k;m)$, $\Delta = \Delta(d_1,\dots,d_k;m)$, and $g_{\max} = g_{\max}(d_1,\dots,d_k)$.
Then:
\begin{enumerate}
\item If $R > 0$, then $\rho$ is generically polystable. Furthermore:
\begin{itemize}
\item If $m\geq2$, then $R \geq g_{\max}^2$, and $\rho$ is generically stable if and only if $R > g_{\max}^2$ or~$g_{\max}=1$.
\item If $m = 1$, then $\Delta \geq -2$, and $\rho$ is generically stable if and only if $\Delta \geq -1$.
\end{itemize}
\item If $R = 0$, then $\rho$ is generically polystable. It is generically stable if and only if~$g_{\max} = 1$.
\item If $R < 0$, then $\rho$ is unstable.
\end{enumerate}
\end{theorem}
While the preceding theorem gives a nice and uniform characterization, it is essentially a reformulation of the following result, which is \emph{recursive} in nature but more enlightening.
\begin{theorem}\label{thm:recursive}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Consider the tensor representation $\rho = \rho_{d_1,\dots,d_k;m}$.
Without loss of generality, assume $d_1 \leq d_2 \leq \dots \leq d_k$.
Then:
\begin{enumerate}
\item If $d_k > d_1 \cdots d_{k-1} m$, then $\rho$ is not generically semistable.
\item If $d_k = d_1 \cdots d_{k-1} m$, then $\rho$ is generically polystable.
It is generically stable if and only if $d_1 = \cdots = d_{k-1} = 1$.
\item\label{it:castling} If $\frac{d_1 \cdots d_{k-1} m}2 < d_k < d_1 \cdots d_{k-1} m$, then $\rho$ is generically semistable (polystable, stable) if and only if the same is true if we replace~$d_k$ by $d'_k = d_1 \cdots d_{k-1} m - d_k$.
Note that $1 \leq d'_k < d_k$.
\item If $d_k \leq \frac{d_1 \cdots d_{k-1} m}2$, then $\rho$ is generically polystable.
Further, it is not generically stable if and only if $(d_1,\dots,d_k;m) =(1,\dots,1,2,d,d;1)$ or $(1,\dots,1,1,d,d;2)$ for some $d\geq2$.
\end{enumerate}
Moreover, if $\rho$ is not generically semistable then it is unstable.
\end{theorem}
\noindent
Part~(\ref{it:castling}) of Theorem~\ref{thm:recursive} above is a reflection of the fact that the property of being generically semistable (polystable, stable) is unchanged under an operation known as a \emph{castling transform}.
Castling transforms played a crucial role in Sato and Kimura's classification of prehomogeneous vector spaces~\cite{SK} (see also~\cite{Venturelli}).
Its origins can be traced back to at least Elashvili's paper~\cite{Elashvili}.
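As a worked instance of the recursion, with a tuple chosen purely for illustration, consider $\rho_{2,3,5;1}$. Here $d_k = 5$ satisfies $\frac{d_1d_2m}{2} = 3 < 5 < 6 = d_1d_2m$, so part~(\ref{it:castling}) replaces $d_k$ by $d'_k = 6 - 5 = 1$, giving $\rho_{2,3,1;1}$, i.e., $\rho_{2,3;1}$. For the latter, $d_k = 3 > 2 = d_1 m$, so it is not generically semistable, and hence $\rho_{2,3,5;1}$ is unstable. This is consistent with Theorem~\ref{thm:main-inv}, since
\[
R(2,3,5;1) = 30 - (4+9+25) + (1+1+1) - 1 = -6 < 0.
\]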
As a corollary of Theorems~\ref{thm:main-inv} and~\ref{thm:recursive}, we can derive a formula for the dimension of the GIT quotient (see Section~\ref{sec:git quotient} for definition) of $V = {\mathbb F}^{d_1,\dots,d_k;m}$ for the action of $G = {\rm SL}_{d_1} \times \dots \times {\rm SL}_{d_k}$.
This generalizes the result of~\cite{BRVR}, where the dimension was computed in the case that $m=1$.
\begin{theorem}\label{thm:git}
Let ${\mathbb F}={\mathbb C}$.
Consider the natural action of $G = {\rm SL}_{d_1} \times \cdots \times {\rm SL}_{d_k}$ on $V = {\mathbb F}^{d_1,\dots,d_k;m}$.
Let $\delta$ denote the dimension of the GIT quotient $\mathbb PV\!\!\gitsymbol\!G$.
\begin{enumerate}
\item If $R < 0$, then
the GIT quotient is empty.
\item If $R = 0$, then $\delta = 0$. In fact, the GIT quotient is a single point.
\item If $R > 0$, then
\begin{align*}
\delta = \begin{cases}
\max(g_{\max}-3,0) & \text{ if $m = 1$ and $\Delta = -2$}, \\
g_{\max} & \text{ if $m = 2$ and $R = g_{\max}^2 > 1$}, \\
\Delta & \text{ otherwise}.
\end{cases}
\end{align*}
\end{enumerate}
\end{theorem}
\subsection*{Organization of the paper}
In Section~\ref{sec:gg-inv}, we revisit the general connection between Gaussian group models and invariant theory, and discuss the relevant notions of stability.
In Section~\ref{sec:castling}, we introduce castling transforms and discuss how they preserve stability.
In Section~\ref{sec:stability}, this is used as the key ingredient to derive our recursive characterization (Theorem~\ref{thm:recursive}).
In Section~\ref{sec:uniform}, we deduce our uniform characterization (Theorem~\ref{thm:main-inv}) from the former.
In Section~\ref{sec:mle}, we prove our main results on sample size thresholds for tensor normal models (Theorem~\ref{thm:main} and Corollary~\ref{cor:threshold}).
Finally, in Section~\ref{sec:git quotient} we compute the dimension of the GIT quotient (Theorem~\ref{thm:git}).
\subsection*{Acknowledgements} We would like to thank Carlos Am\'endola, Suguman Bansal, Christian Ikenmeyer, Kathl\'en Kohn, Siddharth Krishna, Mark Van Raamsdonk, Philipp Reichenbach, and Anna Seigal for interesting discussions.
\section{Gaussian group models and invariant theory} \label{sec:gg-inv}
In this section we first discuss the general setup of invariant theory.
Then we define Gaussian group models and their connection to notions of generic stability in invariant theory.
Finally, we discuss some general criteria from the literature useful for characterizing generic stability.
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Let $G$ be a group.
A \emph{representation} of $G$ is an action of $G$ on a (finite-dimensional) vector space $V$ (over ${\mathbb F}$) by linear transformations.
This is captured succinctly as a group homomorphism $\rho\colon G \rightarrow \operatorname{GL}(V)$.
In particular, an element $g \in G$ acts on $V$ by the linear transformation $\rho(g)$.
We write $g \cdot v$ or $gv$ to mean $\rho(g)v$.
The $G$-orbit of $v \in V$ is the set of all vectors that can be obtained from $v$ by applying elements of the group, i.e.,
\[
O_v \coloneqq \{gv\ |\ g \in G\} \subseteq V.
\]
Throughout this paper, we will only consider the setting where $G$ is a linear algebraic group (over~${\mathbb F}$) and where the action is rational, i.e., $\rho\colon G \rightarrow \operatorname{GL}(V)$ is a morphism of algebraic groups.
We denote by ${\mathbb F}[V]$ the ring of polynomial functions on $V$ (also known as the coordinate ring of~$V$).
A polynomial function $f \in {\mathbb F}[V]$ is called {\em invariant} if $f(gv) = f(v)$ for all $g \in G$ and $v \in V$.
In other words, a polynomial is called invariant if it is constant on orbits.
The invariant ring is
\[
{\mathbb F}[V]^G \coloneqq \{f \in {\mathbb F}[V]\ |\ f(gv) = f(v) \ \forall\ g \in G, v \in V\}.
\]
The invariant ring has a natural grading by degree, i.e., ${\mathbb F}[V]^G = \oplus_{d=0}^\infty {\mathbb F}[V]^G_d$ where ${\mathbb F}[V]^G_d$ consists of all invariant polynomials that are homogeneous of degree $d$.
For $v \in V$, we define the stabilizer subgroup $G_v \coloneqq \{g \in G \ |\ gv = v\}$ and we denote by $\overline{O}_v$ the closure of the orbit $O_v$.
\begin{remark}
To define the closure, we need to specify a topology on~$V$.
In this paper, we only use the fields ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Hence, we will use the standard Euclidean topology on $V$ for orbit closures, unless otherwise specified.
At times we will also need to use the Zariski topology, but we will be careful in specifying it each time.
For ${\mathbb F} = {\mathbb C}$, the orbit closure w.r.t.\ the Euclidean topology agrees with the orbit closure w.r.t.\ the Zariski topology (in the setting of rational actions of reductive groups).
\end{remark}
We make a few definitions, the significance of which will become clear in the following subsections.
\begin{definition}\label{def:stability}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$, and let $G$ be an algebraic group (over ${\mathbb F}$) with a rational action on a vector space $V$ (over ${\mathbb F}$), given by $\rho\colon G \rightarrow \operatorname{GL}(V)$.
Let $K$ denote the kernel of the homomorphism~$\rho$. Give $V$ the standard Euclidean topology. Then, for $v \in V$, we say $v$ is
\begin{itemize}
\item {\em unstable} if $0 \in \overline{O}_v$;
\item {\em semistable} if $0 \notin \overline{O}_v$;
\item {\em polystable} if $v \neq 0$ and $O_v$ is closed;
\item {\em stable} if $v$ is polystable and the quotient $G_v/K$ is finite.
\end{itemize}
\end{definition}
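As a standard example, consider the defining action of ${\rm SL}_2$ on ${\mathbb F}^2$: the orbit of any $v \neq 0$ is ${\mathbb F}^2 \setminus \{0\}$, whose closure contains $0$, so every vector is unstable. By contrast, for ${\rm SL}_2$ acting on $2 \times 2$ matrices by left multiplication, the orbit of an invertible matrix $A$ is the closed set $\{B \ |\ \det(B) = \det(A)\}$ and the stabilizer of $A$ is trivial, so every invertible matrix is stable.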
\subsection{Gaussian group models}
For a subgroup $G \subseteq \operatorname{GL}_n$, we define an associated \emph{Gaussian group model} by the following family of concentration matrices:
\[
\mathcal{M}_G \coloneqq \{g^\dag g\ |\ g \in G\},
\]
where $g^\dagger = \bar g^T$ denotes the adjoint.
So for a concentration matrix $\Psi = g^\dag g \in \mathcal{M}_G$ and an $m$-tuple of samples $Y = (Y_1,\dots,Y_m) \in ({\mathbb F}^n)^m$, the log-likelihood function~\eqref{eq:l_Y intro} simplifies to
\[
l_Y(\Psi) = \frac{m}{2} \log(\det(g^\dag g)) - \frac{1}{2} \lVert g \cdot Y\rVert^2,
\]
where $\lVert\cdot\rVert$ denotes the $\ell_2$-norm on $({\mathbb F}^n)^m \cong {\mathbb F}^{nm}$ and we note that $G$ acts on $({\mathbb F}^n)^m$ by the diagonal action $g \cdot Y = (g Y_1,\dots,g Y_m)$.
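Indeed, by the cyclicity of the trace applied to \eqref{eq:l_Y intro},
\[
{\rm Tr}\Bigl(g^\dag g \sum_{i=1}^m Y_i Y_i^\dag\Bigr) = \sum_{i=1}^m (gY_i)^\dag(gY_i) = \lVert g \cdot Y\rVert^2.
\]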
The following result was proved in \cite[Theorems~6.10 and 6.24]{AKRS}.
It connects maximum likelihood estimation in Gaussian group models to the stability notions introduced in Definition~\ref{def:stability}.
\begin{theorem}[\cite{AKRS}]\label{theo:AKRS}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Let $G \subseteq \operatorname{GL}_n$ be a Zariski-closed subgroup that is closed under adjoints and non-zero scalar multiples.
Let $G_{{\rm SL}} = \{g \in G \ |\ \det(g) = 1\} \subseteq G$ and let $Y \in ({\mathbb F}^n)^m$ be an $m$-tuple of samples.
Then, for the diagonal action of $G_{{\rm SL}}$, we have
\begin{itemize}
\item $Y$ is semistable $\Longleftrightarrow$ $l_Y$ is bounded from above;
\item $Y$ is polystable $\Longleftrightarrow$ an MLE exists (i.e., $l_Y$ has a maximum);
\item $Y$ is stable $\implies$ there exists a unique MLE (i.e., $l_Y$ has a unique maximum). \\
If ${\mathbb F} = {\mathbb C}$, the converse also holds, i.e., there exists a unique MLE $\implies Y$ is stable.
\end{itemize}
Moreover, if $\Psi$ is an MLE given $Y$, then the set of all MLEs given $Y$ is $\{g^\dag \Psi g \ |\ g \in (G_{{\rm SL}})_Y\}$.
\end{theorem}
\begin{remark}
In the setting of the above theorem, for $h \in G_{{\rm SL}}$, we also have
\[
\bigl\{\text{MLEs given } h \cdot Y\bigr\} = (h^{-1})^\dag \bigl\{\text{MLEs given } Y\bigr\} h^{-1}.
\]
Thus, for any $h \in G_{{\rm SL}}$, the MLE given $Y$ is unique if and only if the MLE given $h \cdot Y$ is unique.
\end{remark}
Now let ${\mathbb F} = {\mathbb R}$ and suppose $Y$ is already a point with minimal norm in its orbit.
Then for an appropriate $\lambda \in {\mathbb R}_{>0}$, we have that $\lambda I$ is an MLE and the set of all MLEs is $\{\lambda g^\dag g \ |\ g \in (G_{{\rm SL}})_Y\}$.
In particular, we have a unique MLE if and only if $(G_{{\rm SL}})_Y \subseteq O_n$, the orthogonal group.
Further, since $(G_{{\rm SL}})_Y$ is then a closed subgroup of the compact group $O_n$, it must be compact.
The stabilizer of any other point in its $G_{\rm SL}$-orbit is obtained by conjugation and remains compact.
In particular, if $Y$ is any tuple of samples such that the MLE exists uniquely, then $(G_{{\rm SL}})_Y$ is compact.
This will be important to us, so we record the statement for later use:
\begin{corollary} \label{cor:uniq->compact}
Suppose we are in the setting of Theorem~\ref{theo:AKRS}, with ${\mathbb F} = {\mathbb R}$.
If the MLE given $Y$ exists uniquely, then $(G_{{\rm SL}})_Y$ is compact.
\end{corollary}
\noindent
When ${\mathbb F}={\mathbb C}$, the same hypothesis and argument show that $(G_{{\rm SL}})_Y$ is finite.
However, we will only need Corollary~\ref{cor:uniq->compact} in the case that ${\mathbb F}={\mathbb R}$.
\subsection{Notions of generic stability}
Let $G$ be an algebraic group (over ${\mathbb F})$ and let $V$ be a rational representation (over ${\mathbb F}$). Then we define:
\begin{align*}
V^{{\rm ss}} & = \{v \in V \ |\ v \text{ is $G$-semistable}\}, \\
V^{{\rm ps}} & = \{v \in V \ |\ v \text{ is $G$-polystable}\}, \\
V^{{\rm st}} & = \{v \in V \ |\ v \text{ is $G$-stable}\}.
\end{align*}
We call $V^{{\rm ss}}$ (resp.\ $V^{{\rm ps}}, V^{{\rm st}}$) the \emph{semistable} (resp.\ \emph{polystable}, \emph{stable}) \emph{locus}. If the group is not clear from context then we write $V^{G\text{-}{\rm ss}}, V^{G\text{-}{\rm ps}}, V^{G\text{-}{\rm st}}$.
\begin{definition} \label{defn.gen.stable}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$, and let $G$ be an algebraic group (over ${\mathbb F}$) with a rational action on a vector space $V$ (over ${\mathbb F}$).
Then, we say $V$ is \emph{generically $G$-semistable} (resp.\ \emph{polystable}, \emph{stable}) if $V^{{\rm ss}}$ (resp.\ $V^{{\rm ps}}$, $V^{{\rm st}}$) contains a non-empty Zariski-open subset of $V$.
Further, we say that $V$ is \emph{unstable} if $V^{{\rm ss}} = \emptyset$.
\end{definition}
These notions are particularly well-behaved in the case that ${\mathbb F}={\mathbb C}$, as we will see in the following.
We refer to \cite[Corollary~2.15, Lemma~2.16]{DM-mle} for a succinct proof of the following standard result:
\begin{lemma}\label{lem.loc.const}
Suppose ${\mathbb F} = {\mathbb C}$. Let $V$ be a rational representation of a complex reductive group $G$.
Then, the subsets $V^{{\rm ss}}$ and $V^{{\rm st}}$ are Zariski-open and the subset $V^{{\rm ps}}$ is Zariski-constructible, i.e., it is a union of Zariski-locally closed subsets.
Moreover, $V$ is generically semistable if and only if it is not unstable.
\end{lemma}
Zariski-open subsets of a vector space, whenever non-empty, are complements of lower dimensional subvarieties, which have Lebesgue measure zero.
Zariski-constructible subsets of a vector space, on the other hand, have Lebesgue measure zero unless they contain a Zariski-open subset, in which case their complement has Lebesgue measure zero.
Hence, we can conclude the following:
\begin{corollary}\label{cor:mle-gen.stable}
Suppose we are in the setting of Theorem~\ref{theo:AKRS}, with ${\mathbb F} = {\mathbb C}$.
Fix a number of samples~$m$ and let $V=({\mathbb C}^n)^m$.
Then, for the diagonal action of $G_{\rm SL}$ we have
\begin{itemize}
\item $V$ is generically semistable $\Longleftrightarrow$ $l_Y$ is almost surely bounded from above;
\item $V$ is generically polystable $\Longleftrightarrow$ an MLE exists almost surely;
\item $V$ is generically stable $\Longleftrightarrow$ there exists a unique MLE almost surely;
\item $V$ is unstable $\Longleftrightarrow$ $l_Y$ is always unbounded from above.
\end{itemize}
Moreover, the first and the last conditions are complementary.
Here we say a property holds almost surely if it holds for all $Y$ in $V$ up to a set of Lebesgue measure zero.
\end{corollary}
Let us also mention one lemma that will be useful for us later:
\begin{lemma}\label{lem.increase}
Suppose $G$ is a complex algebraic group and let $V$ be a rational representation over~${\mathbb C}$.
If $V^{\oplus m}$ is generically $G$-stable (resp.\ $G$-semistable), then $V^{\oplus n}$ is generically $G$-stable (resp.\ $G$-semistable) for all $n \geq m$ with respect to the diagonal actions of $G$.
\end{lemma}
\begin{proof}
Suppose $V^{\oplus m}$ is generically $G$-stable.
We have an inclusion $(V^{\oplus m})^{{\rm st}} \subseteq (V^{\oplus n})^{{\rm st}}$ with respect to the diagonal actions of $G$.
So $(V^{\oplus n})^{{\rm st}}$ is non-empty and further it is Zariski open by Lemma~\ref{lem.loc.const}.
Thus, $V^{\oplus n}$ is generically $G$-stable.
The argument for semistability is similar.
\end{proof}
\subsection{Stabilizers in general position}\label{subsec:sgp}
Let ${\mathbb F} = {\mathbb C}$ for this section.
Let $V$ be a rational representation of a reductive group $G$.
We say that $H$ is a \emph{stabilizer in general position (s.g.p.)} if there is a non-empty Zariski-open subset $U \subseteq V$ such that for all $v \in U$, the stabilizer $G_v$ is conjugate to $H$.
The s.g.p.\ is unique up to conjugation.
Its existence is far from obvious and follows from Luna's slice theorem, see e.g., \cite[Theorem~7.2]{Popov-Vinberg}.
Indeed, when ${\mathbb F} = {\mathbb R}$, stabilizers in general position often do not exist.
Matsushima's criterion tells us that if an orbit of a point is closed, then the stabilizer is reductive.
Hence, if $V$ is generically polystable, then the s.g.p.\ must be reductive. The converse was proved by Popov:
\begin{theorem}[\cite{Popov}]\label{theo:Popov}
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be a rational representation of a reductive group. Then,~$V$ is generically polystable if and only if the stabilizer in general position is reductive.
\end{theorem}
\begin{corollary}\label{cor:crit gen stab}
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be a rational representation of a reductive group and let $K$ denote the kernel of $\rho$. Let $H$ be the stabilizer in general position. The following are equivalent.
\begin{enumerate}
\item $V$ is generically stable;
\item $\dim(H) = \dim(K)$;
\item $\dim(G_v) = \dim(K)$ for some $v \in V$.
\end{enumerate}
\end{corollary}
\begin{proof}
Clearly $(1) \implies (2) \implies (3)$.
For $(2) \implies (1)$, observe that $\dim(H) = \dim(K)$ implies that $G_v/K$ is finite for generic $v \in V$. The kernel of a morphism of (affine) algebraic groups between reductive groups is reductive, so $K$ is reductive. Since $G_v/K$ is finite (for generic $v \in V$), this means that $G_v$ and $K$ have the same identity component and hence $G_v$ is also reductive. In particular, it means that $H$ is reductive. Hence $V$ is generically polystable by Theorem~\ref{theo:Popov}, and further generically stable because $G_v/K$ is finite for generic $v \in V$.
For $(3) \implies (2)$, we observe that the set of points $U = \{v \in V\ |\ \dim(G_v) \leq \dim(K)\}$ is Zariski open.
Note that $U = \{v \in V\ |\ \dim(G_v) = \dim(K)\}$ since $K \subseteq G_v$ for all $v \in V$.
Since $U$ is non-empty Zariski open, it follows that $\dim(H) = \dim(K)$ as well.
\end{proof}
\subsection{A criterion for generic (poly)stability}\label{subsec:index}
We continue to take ${\mathbb F} = {\mathbb C}$ in this section.
Starting from the late 1960s, there has been an interest in classifying actions that are generically polystable or stable, see for example, \cite{ave,Elashvili,SK,AMPopov}.
From this line of research, we will recall a few results that will be important for us.
If $S$ is a simple algebraic group, then the Killing form defined by $(X,Y)\mapsto \tr({\rm ad}(X)\,{\rm ad}(Y))$ is a nondegenerate symmetric $S$-invariant bilinear form on the Lie algebra ${\mathfrak s}$ of $S$.
Up to a scalar,~${\mathfrak s}$~has only one $S$-invariant symmetric bilinear form. If $\rho\colon S\to \operatorname{GL}(V)$ is a representation
and $d\rho\colon {\mathfrak s}\to {\rm End}(V)$ is the corresponding representation of the Lie algebra, then $(X,Y)\mapsto\tr(d\rho(X)d\rho(Y))$ is a nonzero symmetric $S$-invariant bilinear form on~${\mathfrak s}$.
So there is a constant $\iota_S(V)$, called the \emph{index} of the representation, such that
\begin{align*}
\tr(d\rho(X)d\rho(Y))=\iota_S(V)\tr({\rm ad}(X)\,{\rm ad}(Y))
\end{align*}
for all $X,Y\in {\mathfrak s}$.
The index is additive under direct sums.
Furthermore, we have $\iota_{{\rm SL}_n}({\mathbb C}^n) = \frac1{2n}$ for the defining representation of ${\rm SL}_n$.
Andreev, Vinberg, and Elashvili proved the following criterion for generic stability in~\cite[Theorem]{ave}.
\begin{theorem}[\cite{ave}]\label{thm:ave}
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be a rational representation of a connected semisimple\footnote{Semisimple groups are reductive.}
group.
Let $H$ be the stabilizer in general position.
If $\iota_S(V) > 1$ for all simple normal subgroups $S \subseteq G$, then $\dim(H) = 0$.
In particular, $V$ is generically $G$-stable.
\end{theorem}
Elashvili proved a very similar criterion for generic polystability in \cite[Theorem 2]{Elashvili}.
\begin{theorem}[\cite{Elashvili}]\label{thm:elashvili}
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be a rational representation of a connected semisimple
group.
Let $H$ be the stabilizer in general position.
If $\iota_S(V) \geq 1$ for all simple normal subgroups $S\subseteq G$, then the Lie algebra of $H$ is the Lie algebra of a torus.
In particular, $H$ is reductive, so $V$ is generically $G$-polystable.
\end{theorem}
Just to put these results in context, let us consider the tensor action, i.e., the action of $G = \smash{\prod_{i=1}^k {\rm SL}_{d_i}}$ on ${\mathbb C}^{d_1,\dots,d_k;m}$. In this case, $G$ is a connected semisimple group, its simple normal subgroups are just ${\rm SL}_{d_1},{\rm SL}_{d_2},\dots,{\rm SL}_{d_k}$, and the index of each ${\rm SL}_{d_i}$ is $\frac{m \prod_{j \neq i} d_j}{2 d_i}$.
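For example, for $\rho_{2,2,2;1}$ each ${\rm SL}_2$ factor has index
\[
\frac{1 \cdot 2 \cdot 2}{2 \cdot 2} = 1,
\]
so Theorem~\ref{thm:elashvili} gives generic polystability, while Theorem~\ref{thm:ave} does not apply. This matches Theorem~\ref{thm:main-inv}: here $R(2,2,2;1) = 4 > 0$ but $\Delta(2,2,2;1) = -2$, so this action is generically polystable but not generically stable.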
Finally, Elashvili has classified all irreducible representations that satisfy the hypotheses of Theorem~\ref{thm:elashvili} but are not generically stable, see \cite[Theorem 9]{Elashvili} and Theorem~\ref{thm:elashvili classification} below.
\section{Castling transforms}\label{sec:castling}
In this section, we take ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be an $n$-dimensional representation of an algebraic group $G$.
We will assume $\rho(G) \subseteq {\rm SL}(V)$.
For $0 < k < n$, we have a natural action of $G \times {\rm SL}_k$ on $V \otimes {\mathbb F}^k$, where $G$ acts on $V$ and ${\rm SL}_k$ acts on ${\mathbb F}^k$.
Similarly, we have an action of $G$ on $V^*$ and of ${\rm SL}_{n-k}$ on ${\mathbb F}^{n-k}$, which together give an action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb F}^{n-k}$.
We refer to the action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb F}^{n-k}$ as a {\em castling transform} of the action of $G \times {\rm SL}_k$ on $V \otimes {\mathbb F}^k$.
The main feature of castling transforms is that we get a bijection between the $G \times {\rm SL}_k$-orbits in a non-empty Zariski-open subset of $V \otimes {\mathbb F}^k$ and the $G \times {\rm SL}_{n-k}$-orbits in a non-empty Zariski-open subset of $V^* \otimes {\mathbb F}^{n-k}$.
Moreover, this bijection of orbits preserves stabilizers up to isomorphism.
Hence, when ${\mathbb F} = {\mathbb C}$, the stabilizer in general position is preserved under castling transforms.
Moreover, generic semistability/polystability/stability will also be preserved under castling transforms.
We will now explain all this in more detail, but first we need to recall Grassmannians.
\subsection{Grassmannians}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Suppose $V$ is an $n$-dimensional vector space over ${\mathbb F}$.
Let~$\Gr(k,V)$ denote the Grassmannian of $k$-planes in $V$.
It is naturally embedded in ${\mathbb P}(\smash{\bigwedge^k(V)})$ as a closed subvariety cut out by the Pl\"ucker relations, where $\smash{\bigwedge^k(V)}$ denotes the $k^{\text{th}}$ exterior power of~$V$.
This embedding is constructed as follows.
Identify $V$ with ${\mathbb F}^n$ by choosing a basis $e_1,\dots,e_n$.
Then, a basis for $\smash{\bigwedge^k(V)}$ is $\{e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_k} \ |\ 1 \leq i_1 < i_2 < \dots < i_k \leq n\}$.
For any subset~$I \subseteq [n]$ of size $k$, we write $e_I$ to denote $e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_k}$ where $I = \{i_1,\dots,i_k\}$ with the $i_j$'s in increasing order.
We write $\Delta_I$ to denote the coordinate corresponding to $e_I$.
Now, for any subspace $L$ of $V$ of dimension $k$, take independent vectors $l_1,\dots,l_k$ in $L$ and consider the point $[l_1 \wedge l_2 \wedge \dots \wedge l_k] \in {\mathbb P}(\smash{\bigwedge^k(V)})$.
This point is independent of the choice of $l_i$ and only depends on the subspace $L$.
Thus, we obtain an injective map $\Gr(k,V) \rightarrow {\mathbb P}(\smash{\bigwedge^k(V)})$ whose image is a closed subvariety.
This map is called the Pl\"ucker embedding and endows the Grassmannian with the structure of a projective variety.
We refer to \cite{Fulton,Weyman,Procesi-book} for more details on Grassmannians.
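As a small example, let $V = {\mathbb F}^3$ and $k = 2$: for $l_1 = (a_1,a_2,a_3)$ and $l_2 = (b_1,b_2,b_3)$ we have
\[
l_1 \wedge l_2 = (a_1b_2 - a_2b_1)\, e_1 \wedge e_2 + (a_1b_3 - a_3b_1)\, e_1 \wedge e_3 + (a_2b_3 - a_3b_2)\, e_2 \wedge e_3,
\]
so the Pl\"ucker coordinates $\Delta_I$ are the $2 \times 2$ minors of the matrix with rows $l_1, l_2$; replacing $l_1, l_2$ by another basis of $L$ rescales all of them by the determinant of the change of basis, so the corresponding point in ${\mathbb P}(\bigwedge^2(V))$ depends only on $L$.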
The affine cone over the Grassmannian $\widehat\Gr(k,V)$ is a closed subvariety of $\smash{\bigwedge^k(V)}$.
Note that $\smash{\widehat\Gr(k,V)} = \{v_1 \wedge v_2 \wedge \dots \wedge v_k \ |\ v_i \in V\}$.
If the $v_i$'s are linearly dependent, then $v_1 \wedge v_2 \wedge \dots \wedge v_k = 0$, otherwise it is nonzero.
Let $\{e_1,\dots,e_k\}$ denote the standard basis for ${\mathbb F}^k$, and define
\begin{align}\label{eq:U}
U = \left\{{\textstyle\sum_{i=1}^k} v_i \otimes e_i \in V \otimes {\mathbb F}^k \ | \ v_1,\dots,v_k \text{ are linearly independent}\right\}.
\end{align}
Then, we have a map
\begin{align*}
\pi = \pi_{k,V} \colon U \longrightarrow \widehat\Gr(k,V) \setminus \{0\}, \qquad
{\textstyle\sum_{i=1}^k v_i \otimes e_i} \longmapsto v_1 \wedge v_2 \wedge \dots \wedge v_k.
\end{align*}
We claim that $U$ is a Zariski-locally trivial principal ${\rm SL}_k$-bundle over $\widehat\Gr(k,V) \setminus \{0\}$.
It is straightforward to see that it is a principal ${\rm SL}_k$-bundle, because $v_1 \wedge v_2 \wedge \dots \wedge v_k = w_1 \wedge w_2 \wedge \dots \wedge w_k$ if and only if there is a matrix $A = (a_{ij}) \in {\rm SL}_k$ such that $\sum_j a_{ij} v_j = w_i$ for all $i$.
That it is Zariski-locally trivial needs an explanation.
A similar result, namely that $U$ is a Zariski-locally trivial principal $\operatorname{GL}_k$-bundle over $\Gr(k,V)$, is well known, see e.g., \cite[pg.~511]{Procesi-book}.
We modify their argument appropriately.
First, we note that $\widehat\Gr(k,V) \setminus \{0\}$ is covered by affine open subsets $\{X_I : I \subseteq [n], |I| = k\}$, where $X_I \coloneqq \{p\ |\ \Delta_I(p) \neq 0\}$.
If we identify $V$ with ${\mathbb F}^n$ as mentioned above, $U$ can be viewed as the $k \times n$ matrices of full rank.
For a matrix $M \in \operatorname{Mat}_{k,n}$, and a subset $I \subseteq [n]$ of size $k$, let $M_I$ denote the $k \times k$ submatrix of $M$ obtained by considering the columns labeled by elements in $I$, and let $p_I(M) = \det(M_I)$.
Then $\pi^{-1}(X_I) = \{M \in \operatorname{Mat}_{k,n} \ |\ p_I(M) \neq 0\}$.
Without loss of generality, we can take $I = \{1,2,\dots,k\}$, so we have an isomorphism $\pi^{-1}(X_I) \rightarrow \operatorname{Mat}_{k,n-k} \times {\mathbb F}^* \times {\rm SL}_k$ given by $M = [A \ |\ B] \mapsto (DA^{-1}B ,\det(A), AD^{-1})$ where $D$ is the diagonal matrix with diagonal entries $(\det(A), 1,1,\dots,1)$. The map in the reverse direction is $(P,\lambda,Q) \mapsto [Q D \ |\ QP]$ where $D = {\rm diag}(\lambda,1,\dots,1)$. Next, observing that $X_I \cong \operatorname{Mat}_{k,n-k} \times {\mathbb F}^*$\footnote{It is well known in the projective setting that the locus where $\Delta_I(p) \neq 0$ is isomorphic to $\operatorname{Mat}_{k,n-k}$, and we are just pulling back to the affine cone.} gives us an isomorphism $\pi^{-1}(X_I) \longrightarrow X_I \times {\rm SL}_k$.
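For illustration, the trivialization over $X_I$ with $I = \{1,\dots,k\}$ can be checked numerically; the following Python sketch (our own, using NumPy; the function names are hypothetical) verifies the round trip on a random full-rank matrix.
\begin{verbatim}
import numpy as np

def to_triple(M, k):
    # M = [A | B] with det(A) != 0, mapped to (P, lam, Q) as in the text
    A, B = M[:, :k], M[:, k:]
    lam = np.linalg.det(A)
    D = np.diag([lam] + [1.0] * (k - 1))
    P = D @ np.linalg.inv(A) @ B
    Q = A @ np.linalg.inv(D)  # det(Q) = det(A)/lam = 1, so Q lies in SL_k
    return P, lam, Q

def from_triple(P, lam, Q):
    # inverse map (P, lam, Q) -> [Q D | Q P]
    k = Q.shape[0]
    D = np.diag([lam] + [1.0] * (k - 1))
    return np.hstack([Q @ D, Q @ P])

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 7))  # k = 3, n = 7; generically det(A) != 0
P, lam, Q = to_triple(M, 3)
assert np.isclose(np.linalg.det(Q), 1.0)
assert np.allclose(from_triple(P, lam, Q), M)
\end{verbatim}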
Everything said above also works in the Euclidean topology, because Zariski-open subsets are open in the Euclidean topology and polynomial maps are continuous there as well.
Hence, $U$ is a locally trivial principal ${\rm SL}_k$-bundle over $\widehat\Gr(k,V) \setminus \{0\}$ in the Euclidean topology as well.
The projection of a locally-trivial bundle onto its base is an open map. One can check this condition on a trivializing cover of the base. In other words, it suffices to check that projection of a trivial bundle onto its base is open. For the Euclidean topology, it is well known that projection maps are open. For the Zariski topology, projection maps are also open. When the underlying field is algebraically closed, this follows from flatness, but remains true even when the underlying field is not algebraically closed, see Appendix~\ref{app:projection} for a proof.
To summarize, we get the following result:
\begin{lemma} \label{lem.quot.map}
Let $V$, $U$, and $\pi_{k,V}$ be defined as above.
Then $U$ is a Zariski-locally trivial principal ${\rm SL}_k$-bundle over $\smash{\widehat\Gr(k,V)} \setminus \{0\}$ via the map $\pi_{k,V}$.
In particular, $\pi_{k,V}$ is an open map (and also a quotient map) when considering either the Zariski or Euclidean topology.
\end{lemma}
\subsection{Castling transforms}
Let $\rho\colon G \rightarrow \operatorname{GL}(V)$ be a representation of an algebraic group $G$; we will assume that $\rho(G) \subseteq {\rm SL}(V)$.
Let $\dim(V) = n$.
We have an action of $G \times {\rm SL}_k$ on $V \otimes {\mathbb F}^k$ and an action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb F}^{n-k}$. Let
\[
U = \left\{{\textstyle\sum_{i =1}^k} v_i \otimes e_i \in V \otimes {\mathbb F}^k\ | \ v_1,\dots,v_k \text{ are linearly independent} \right\} \subseteq V \otimes {\mathbb F}^k,
\]
as in~\eqref{eq:U} and let
\[
U' = \left\{{\textstyle\sum_{i=1}^{n-k}} w_i \otimes e_i \in V^* \otimes {\mathbb F}^{n-k}\ | \ w_1,\dots,w_{n-k} \text{ are linearly independent} \right\} \subseteq V^* \otimes {\mathbb F}^{n-k}.
\]
Since $U$ is a principal ${\rm SL}_k$-bundle over $\widehat\Gr(k,V) \setminus \{0\}$, we have a bijection between the ${\rm SL}_k$-orbits in~$U$ and the points of $\widehat\Gr(k,V) \setminus \{0\}$.
This bijection is $G$-equivariant since $\pi_{k,V}$ is $G$-equivariant
and the actions of $G$ and of ${\rm SL}_k$ on $V \otimes {\mathbb F}^k$ commute.
So, we have $G$-equivariant bijections:
\begin{align}\label{eq:bijections}
\text{${\rm SL}_k$-orbits in $U$}
\ \longleftrightarrow\ %
\widehat{\Gr}(k,V) \setminus \{0\}
\ \longleftrightarrow\ %
\widehat{\Gr}(n-k,V^*) \setminus \{0\}
\ \longleftrightarrow\ %
\text{${\rm SL}_{n-k}$-orbits in $U'$}
\end{align}
The first bijection was explained above and the last bijection follows by the same argument.
The middle bijection comes from the well understood ${\rm SL}(V)$-equivariant isomorphism $\smash{\bigwedge^k(V)} \cong \smash{\bigwedge^{n-k}(V^*)}$.
The following result is implicit in \cite{Elashvili}, but we furnish a proof for completeness.
\begin{lemma}\label{stab.equal}
Let $T \in U$.
Then, we have an isomorphism of algebraic groups
\[
\Stab_G(\pi_{k,V}(T)) \cong \Stab_{G \times {\rm SL}_k} (T).
\]
\end{lemma}
\begin{proof}
This holds since $\pi_{k,V}$ is a $G$-equivariant principal ${\rm SL}_k$-bundle.
Indeed, let $p\colon G \times {\rm SL}_k \rightarrow G$ denote the projection onto the first factor.
It is easy to see that $p(\Stab_{G \times {\rm SL}_k} (T)) \subseteq \Stab_G(\pi_{k,V}(T))$.
Now suppose $g \in \Stab_G (\pi_{k,V}(T))$.
Then, $\pi_{k,V}(T) = g \cdot \pi_{k,V}(T)$ implies that $\pi_{k,V}(T) = \pi_{k,V}(g \cdot T)$ by $G$-equivariance.
Since $\pi_{k,V}$ is a principal ${\rm SL}_k$-bundle, it follows that there exists a unique $A\in{\rm SL}_k$ such that $A \cdot (g \cdot T) = T$, i.e., $(g,A) \cdot T = T$.
Thus we have proved that every $g \in \Stab_G(\pi_{k,V}(T))$ has a unique preimage under $p$ in $\Stab_{G \times {\rm SL}_k} (T)$.
We conclude that $p$ restricted to $\Stab_{G \times {\rm SL}_k} (T)$ is a (group) isomorphism onto its image, which is $\Stab_G(\pi_{k,V}(T))$.
To establish that this is an isomorphism of algebraic groups (over ${\mathbb F}$), we need to establish that it is an isomorphism of varieties.
To do so, we give a map in the reverse direction as follows.
Write~$T = \sum_{i=1}^k v_i \otimes e_i$.
Let $g \in \Stab_G(\pi_{k,V}(T))$.
Since $g$ stabilizes the span of $v_1,\dots,v_k$, we get that $g \cdot v_i = \sum_j c_{i,j}(g) \, v_j$ where the $c_{i,j}(g)$ are regular functions on $\Stab_G(\pi_{k,V}(T))$.
Moreover, the matrix $C = (c_{i,j}(g))_{1\leq i,j \leq k}$ is invertible.
Then $(g,\smash{C^{-1}})$ is the unique preimage of $g$ in $\Stab_{G \times {\rm SL}_k} (T)$ under $p$.
Thus the map $g \mapsto (g,\smash{C^{-1}})$ is the inverse of $p$ restricted to $\Stab_{G \times {\rm SL}_k}(T)$, and it is clearly a morphism of algebraic varieties.
\end{proof}
As a consequence of the bijections~\eqref{eq:bijections} and Lemma~\ref{stab.equal}, we thus obtain the following corollaries.
\begin{corollary}\label{cor.cas.stab.pres}
We have a natural bijection between the $G\times {\rm SL}_k$-orbits in $U$ and the $G \times {\rm SL}_{n-k}$-orbits in $U'$ that preserves stabilizers (up to isomorphism).
\end{corollary}
\begin{corollary}\label{cor.cas.stab}
Let ${\mathbb F} = {\mathbb C}$.
Then the stabilizer in general position for the action of $G \times {\rm SL}_k$ on $V \otimes {\mathbb C}^k$ is isomorphic to the stabilizer in general position for the action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb C}^{n-k}$.
\end{corollary}
In fact, the invariant ring is also preserved by castling transforms~\cite{SK} (see also~\cite[Prop.~2.1]{Kac}).
\begin{lemma}[\cite{SK}]\label{lem.cas.inv.ring}
Let ${\mathbb F} = {\mathbb C}$.
Then the invariant ring for the action of $G \times {\rm SL}_k$ on $V \otimes {\mathbb C}^k$ is (canonically) isomorphic to the invariant ring for the action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb C}^{n-k}$.
\end{lemma}
The discussion above culminates in the following result that will be very important for us:
\begin{corollary}\label{cor:castling}
Let ${\mathbb F} = {\mathbb R}$ or ${\mathbb C}$.
Then $V \otimes {\mathbb F}^k$ is generically $G \times {\rm SL}_k$-semistable (polystable, stable) if and only if $V^* \otimes {\mathbb F}^{n-k}$ is generically $G \times {\rm SL}_{n-k}$-semistable (polystable, stable).
\end{corollary}
\begin{proof}
By \cite[Proposition~2.23]{DM-mle}, it suffices to prove the statement for ${\mathbb F} = {\mathbb C}$.
So, let us assume that~${\mathbb F} = {\mathbb C}$.
Generic semistability is the same as having a non-trivial invariant ring.
Hence, it follows from Lemma~\ref{lem.cas.inv.ring} that castling transforms preserve generic semistability.
The fact that castling transforms preserve generic polystability follows from Corollary~\ref{cor.cas.stab} and Theorem~\ref{theo:Popov}.
That castling transforms preserve generic stability follows similarly from Corollaries~\ref{cor.cas.stab} and~\ref{cor:crit gen stab}, provided we can show that the kernels of the two actions have the same dimension.
To see this, let $K = {\rm ker}(\rho)$, where $\rho\colon G \rightarrow \operatorname{GL}(V)$ is the action of $G$ on $V$.
Now, let us consider the kernel of $\tilde{\rho}\colon G \times {\rm SL}_k \rightarrow \operatorname{GL}(V \otimes {\mathbb C}^k)$.
For $(g,A) \in G \times {\rm SL}_k$, we have $\tilde{\rho}(g,A) = \rho(g) \otimes A$.
So, if $(g,A)$ is in the kernel, then $\rho(g) = cI$ and $A = c^{-1} I$ for some $c \in {\mathbb C}^*$.
But $A \in {\rm SL}_k$, so $c$ must be a $k^{\text{th}}$ root of unity. For each such $c$, the subvariety $H_c = \{g \in G \ |\ \rho(g) = cI\}$ is either empty or a coset of~$K$.
Since the kernel is a finite union of $H_c \times \{c^{-1} I\}$, its dimension equals the dimension of $K$.
On the other hand, the kernel for the action of $G$ on $V^*$ is also $K$, so the same argument shows that the kernel for the action of $G \times {\rm SL}_{n-k}$ on $V^* \otimes {\mathbb C}^{n-k}$ also has the same dimension as~$K$.
\end{proof}
For complex Gaussian group models, we saw in Theorem~\ref{theo:AKRS} that invariant-theoretic stability notions characterize the boundedness of the log-likelihood function and the existence and uniqueness of MLEs precisely.
However, for real models, the relation between generic stability and almost sure existence of a unique MLE is less tight.
To bridge this gap, we will need the following results:
\begin{lemma}\label{lem:open castling}
Suppose $P \subseteq V \otimes {\mathbb F}^k$ is a non-empty open subset in the Euclidean (resp.\ Zariski) topology; then $(\pi_{n-k,V^*})^{-1} \pi_{k,V} (P \cap U)$ is a non-empty open subset of $U'$ in the Euclidean (resp.\ Zariski) topology.
\end{lemma}
\begin{proof}
Let us first argue this for Euclidean topology.
Observe that $P \cap U$ is an open subset of $V \otimes {\mathbb F}^k$.
Further, since $U^c$ is a proper subvariety and hence has empty interior, $P \cap U$ must be non-empty. The statement now follows since $\pi_{k,V}$ is an open map by Lemma~\ref{lem.quot.map} and $\pi_{n-k,V^*}$ is continuous and surjective.
The argument for Zariski topology is analogous.
\end{proof}
An immediate corollary of the above lemma is the following:
\begin{corollary}\label{lem:castle compact}
Let $P = \{T \in V \otimes {\mathbb F}^k\ |\ \Stab_{G \times {\rm SL}_k}(T) \text{ is not compact} \}$.
Similarly, let $P' = \{S \in V^* \otimes {\mathbb F}^{n-k}\ |\ \Stab_{G \times {\rm SL}_{n-k}}(S) \text{ is not compact}\}$.
Then $P$ contains a non-empty Euclidean (resp.\ Zariski) open subset of $V \otimes {\mathbb F}^k$ if and only if $P'$ contains a non-empty Euclidean (resp.\ Zariski) open subset of $V^* \otimes {\mathbb F}^{n-k}$.
\end{corollary}
\begin{proof}
It suffices to prove one direction.
Suppose $P$ contains a non-empty Euclidean (resp. Zariski) open subset $\widetilde{P}$.
Then, by Lemma~\ref{lem:open castling}, $(\pi_{n-k,V^*})^{-1} \pi_{k,V} (\widetilde{P} \cap U)$ is a Euclidean (resp. Zariski) open subset of $V^* \otimes {\mathbb F}^{n-k}$, and it is contained in $P'$ by Corollary~\ref{cor.cas.stab.pres}.
\end{proof}
We need to give a technical clarification regarding the notion of compactness in the above corollary.
There are two natural topologies one can give a Lie subgroup $H$ of a Lie group $G$.
The first is the inherent topology on $H$ by virtue of being a Lie group in itself, and the second is the subspace topology by virtue of being a subspace of $G$.
In the proof above, we are really using the inherent topology because the isomorphism of stabilizers furnished by Corollary~\ref{cor.cas.stab.pres} is an abstract isomorphism.
However, we will later need to use the lemma in the context of Corollary~\ref{cor:uniq->compact}, which refers to the subspace topology.
While for immersed Lie subgroups the inherent topology can differ from the subspace topology, the two topologies coincide for embedded Lie subgroups.
Since stabilizer subgroups are closed, they are embedded Lie subgroups and there is no ambiguity.
\subsection{Castling transforms for tensor actions}
We now discuss explicitly the relevance of castling transforms to tensor actions and hence to tensor normal models.
Here we are interested in the action of $\prod_{i=1}^k {\rm SL}_{d_i}$ on ${\mathbb F}^{d_1,\dots,d_k;m}$, which we succinctly denote by $\rho_{d_1,\dots,d_k;m}$.
The ground field~${\mathbb F}$ is assumed to be either ${\mathbb R}$ or ${\mathbb C}$.
If we need to specify it, we will add a subscript.
Let $G = \prod_{i=1}^{k-1} {\rm SL}_{d_i}$ and consider its natural action on $V = {\mathbb F}^{d_1,\dots,d_{k-1};m}$, which in our notation is $\rho_{d_1,\dots,d_{k-1};m}$.
Then, the action of $G \times {\rm SL}_{d_k}$ on $V \otimes {\mathbb F}^{d_k}$ is simply $\rho_{d_1,\dots,d_k;m}$.
It is well known that $V$ and $V^*$ are related by an automorphism of the group~$G$, which does not affect any of the notions of stability.%
\footnote{If we compose a representation~$\rho$ of~${\rm SL}(d)$ with the automorphism $g \mapsto g^{-T}$, the result is isomorphic to the dual representation of $\rho$, and similarly for the product group $G$.}
Hence, we call $\rho_{d_1,\dots,N-d_k;m}$ the \emph{castling transform} of $\rho_{d_1,\dots,d_k;m}$, where~$N = \dim V = md_1\cdots d_{k-1}$ and we assume that $N > d_k$.
Thus Corollary~\ref{cor:castling} implies the following important result:
\begin{corollary}\label{cor.ten.cas.pres}
Let $d_1,\dots,d_k,m \in {\mathbb Z}_{>0}$ and suppose that $N = m \prod_{i=1}^{k-1} d_i > d_k$.
Then, $\rho_{d_1,\dots,d_k;m}$ is generically semistable (polystable, stable) if and only if $\rho_{d_1,\dots,N-d_k;m}$ is generically semistable (polystable, stable).
\end{corollary}
Given this result, we will make some definitions for later use.
For positive integers $d_1,\dots,d_k$ and~$m$, we call $(d_1,\dots,d_k;m)$ a \emph{datum} and $\rho_{d_1,\dots,d_k;m}$ the corresponding representation.
Observe that permuting the $d_i$ leaves the group and representation unchanged up to isomorphism, hence does not change the generic stability properties of the representation.
\begin{definition}
We say two data $(d_1,\dots,d_k;m)$ and $(d_1',\dots,d_k';m)$ are \emph{castling-equivalent} if $\rho_{d_1,\dots,d_k;m}$ and $\rho_{d'_1,\dots,d'_k;m}$ are related by a sequence of castling transforms (of the form described above) and permutations of the dimensions.
We say the datum $(d_1,\dots,d_k;m)$ is \emph{minimal} in its castling equivalence class if it minimizes $\prod_{i=1}^k d_i$.
\end{definition}
\begin{lemma}\label{lem:min-castle}
Consider the datum $(d_1,\dots,d_k;m)$.
Without loss of generality, we assume that $d_1 \leq d_2 \leq \dots \leq d_k$.
Let $N = m \cdot \smash{\prod_{i=1}^{k-1}} d_i$.
Then, if $\smash{\frac N2} < d_k < N$, the datum is not minimal in its castling equivalence class.
\end{lemma}
\begin{proof}
Suppose $\frac N2 < d_k < N$; we must show that the datum is not minimal.
To see this, observe that we have a castling transform that takes $(d_1,\dots,d_k;m)$ to $(d_1,\dots,d_{k-1}, N-d_k;m)$ and the latter is smaller since $N - d_k < d_k$.
\end{proof}
\begin{remark}\label{rmk:d1 redundant}
If $d_1 = 1$, then $\rho_{d_1,d_2,\dots,d_k;m}$ and $\rho_{d_2,\dots,d_k;m}$ are equal up to isomorphism of the group and representation, so we can often assume without loss of generality that $d_i \geq 2$.
\end{remark}
Even though it will not be relevant to us, we observe that each castling equivalence class contains a unique minimal datum (up to permutation).
This follows from the fact that if any two data are related by a (minimal) sequence of castling transforms, then the sequence of dimensions of the representations produced by these transforms is monotone; the proof of this is exactly the same as the proof of \cite[Proposition~29]{Manivel}.
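For illustration, the reduction to a minimal datum is entirely mechanical; the following Python sketch (our own, with hypothetical names; it removes dimensions equal to one as in Remark~\ref{rmk:d1 redundant} and castles as long as Lemma~\ref{lem:min-castle} applies) computes a minimal representative.
\begin{verbatim}
from math import prod

def minimal_datum(dims, m):
    # sort, drop d_i = 1, and castle d_k -> N - d_k while N/2 < d_k < N,
    # where N = m * prod(remaining dims); each step strictly decreases
    # prod(dims), so the loop terminates
    dims = sorted(d for d in dims if d > 1)
    while dims:
        N = m * prod(dims[:-1])
        dk = dims[-1]
        if dk < N and 2 * dk > N:
            dims = sorted(d for d in dims[:-1] + [N - dk] if d > 1)
        else:
            break
    return dims, m

# (2, 2, 3; 1) castles to (2, 2, 1; 1), i.e., to the datum (2, 2; 1)
assert minimal_datum([2, 2, 3], 1) == ([2, 2], 1)
\end{verbatim}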
\section{Stability for tensor actions}\label{sec:stability}
In this section, we will prove Theorem~\ref{thm:recursive}, which gives a recursive characterization of the generic stability properties for the tensor actions $\rho_{d_1,\cdots,d_k;m}$.
Without loss of generality, we may assume that $d_1 \leq d_2 \leq \dots \leq d_k$.
By Corollary~\ref{cor.ten.cas.pres}, we know that the properties we are looking for are invariant under the castling transform in part~(3) of the theorem, so the majority of our work will be spent on the terminal cases.
We now prove each part of the theorem separately.
For the first part, we need a simple lemma.
It follows from the first fundamental theorem of invariant theory for the special linear group, a result that dates back to Weyl~\cite{Weyl}, but also has an elementary proof (see also \cite[p.~7, Example]{KP}).
\begin{lemma}\label{lem:fft}
Consider the action of $G = {\rm SL}_d$ on $V = \operatorname{Mat}_{d,r}$ by left multiplication.
If $d > r$, then every point $v \in V$ is $G$-unstable.
In contrast, if $d \leq r$, then $V$ is generically $G$-stable.
\end{lemma}
\begin{proof}
Suppose $d>r$.
Then, any $v \in \operatorname{Mat}_{d,r}$ has rank at most $r$, so we can find $g\in{\rm SL}_d$ such that the range of $g v$ is a subspace of the span of the first $r<d$ standard basis vectors.
Then, $\varphi(t) := g^{-1} {\rm diag}(t^{d-r},\dots,t^{d-r}, t^{-r}, \dots,t^{-r}) g \in {\rm SL}_d$ for all~$t\neq0$, and $\varphi(t) v \to 0$ as $t\to0$.
Now suppose that $d \leq r$.
By Lemma~\ref{lem.increase}, it suffices to prove the claim in the case that $d=r$.
Suppose $v \in \operatorname{Mat}_{d,d}$ is invertible (a Zariski-open set).
Then its ${\rm SL}_d$-orbit is equal to $\det^{-1}(\det v)$, hence closed.
Since moreover its stabilizer is trivial, we conclude that $V$ is generically stable.
Note that Lemma~\ref{lem.increase} was stated only for ${\mathbb F} ={\mathbb C}$.
There are many ways to adapt the argument for ${\mathbb F} = {\mathbb R}$, e.g., one can use \cite[Proposition~2.23]{DM-mle}.
\end{proof}
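To make the destabilizing one-parameter subgroup concrete in the smallest case $d=2$, $r=1$: any $v \in \operatorname{Mat}_{2,1}$ can be moved by some $g \in {\rm SL}_2$ so that $gv = \binom{c}{0}$, and then
\[
\varphi(t) = g^{-1}\begin{pmatrix} t & 0 \\ 0 & t^{-1} \end{pmatrix} g, \qquad \varphi(t)\, v = g^{-1}\binom{tc}{0} \longrightarrow 0 \quad \text{as } t \to 0.
\]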
\begin{proof}[Proof of Theorem~\ref{thm:recursive}, part (1)]
As a representation of ${\rm SL}_{d_k}$, the tensor space ${\mathbb F}^{d_1,\dots,d_k;m}$ is isomorphic to $\operatorname{Mat}_{d_k,md_1d_2\cdots d_{k-1}}$ and hence every point is unstable by Lemma~\ref{lem:fft}, since $d_k > d_1 \cdots d_{k-1} m$.
Hence every point is also unstable for the action of the larger group $G = \smash{\prod_{i=1}^k {\rm SL}_{d_i}}$.
\end{proof}
For the second part, we will need the following result.
\begin{lemma} \label{lem:stab.case2}
Let $\pi\colon H \rightarrow {\rm SL}_d \subseteq \operatorname{GL}_d$ be a $d$-dimensional representation of an algebraic group~$H$.
Consider the action of $G = H \times {\rm SL}_d$ on $\operatorname{Mat}_{d,d}$ given by $(h,g) \cdot A = \pi(h) A g^{-1}$.
For any full-rank matrix $A \in \operatorname{Mat}_{d,d}$, the stabilizer is given by $G_A = \{(h,A^{-1}\pi(h)A) \ |\ h \in H\}$.
In particular, the stabilizer in general position is isomorphic to~$H$.
\end{lemma}
\begin{proof}
Straightforward: $(h,g)$ stabilizes $A$ if and only if $\pi(h) A g^{-1} = A$, i.e., $g = A^{-1}\pi(h)A$ (which indeed lies in ${\rm SL}_d$), and $h \mapsto (h, A^{-1}\pi(h)A)$ is an isomorphism of algebraic groups onto $G_A$.
\end{proof}
One point to note is that the kernel of the tensor action $\rho = \rho_{d_1,\dots,d_k;m}$ is finite.
So stability is equivalent to having a closed orbit and finite stabilizer.
In particular, for ${\mathbb F} = {\mathbb C}$, Corollary~\ref{cor:crit gen stab} shows that generic stability of $\rho$ is the same as the stabilizer in general position being finite.
\begin{proof}[Proof of Theorem~\ref{thm:recursive}, part (2), for ${\mathbb F} = {\mathbb C}$]
Let us define $H = {\rm SL}_{d_1} \times {\rm SL}_{d_2} \times \dots \times {\rm SL}_{d_{k-1}}$ and $W = {\mathbb C}^{d_1,\dots,d_{k-1};m}$.
Then we can view $G \cong H \times {\rm SL}_{d_k}$ and ${\mathbb C}^{d_1,\dots,d_k;m} \cong W \otimes {\mathbb C}^{d_k} \cong \operatorname{Mat}_{d_k,d_k}$, since $d_k = d_1 \cdots d_{k-1} m$.
So, the stabilizer in general position is $H$ by Lemma~\ref{lem:stab.case2}, which is reductive.
Hence, $\rho = \rho_{d_1,\dots,d_k;m}$ is generically polystable by Theorem~\ref{theo:Popov}.
As discussed above, the kernel of $\rho$ is a finite group, so $\rho$ is generically stable if and only if the stabilizer in general position~$H$ is finite.
This happens precisely when $d_1 = d_2 = \dots = d_{k-1} = 1$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:recursive}, part (2), for ${\mathbb F} = {\mathbb R}$]
This follows from \cite[Proposition~2.23]{DM-mle}.
\end{proof}
We already proved the third part of the theorem when we discussed the castling transforms of tensor actions.
\begin{proof} [Proof of Theorem~\ref{thm:recursive}, part (3)]
This follows from Corollary~\ref{cor.ten.cas.pres}.
\end{proof}
We now prove the fourth and last part of the theorem, which is perhaps the most complicated.
Here we wish to apply Theorems~\ref{thm:ave} and~\ref{thm:elashvili}.
Recall from Section~\ref{subsec:index} that the simple normal subgroups of $G = {\rm SL}_{d_1} \times {\rm SL}_{d_2} \times \dots \times {\rm SL}_{d_k}$ are just ${\rm SL}_{d_1},{\rm SL}_{d_2},\dots,{\rm SL}_{d_k}$.
To compute the index of $V = {\mathbb C}^{d_1,\dots,d_k;m}$ with respect to some~${\rm SL}_{d_i}$, note that $V \cong ({\mathbb C}^{d_i})^{\oplus M}$ as an ${\rm SL}_{d_i}$-representation, where $M = \frac{md_1\cdots d_k}{d_i}$.
Now, the index of ${\mathbb C}^{d_i}$ with respect to ${\rm SL}_{d_i}$ is $\frac{1}{2d_i}$, and the index is additive over direct sums.
It follows that the index of ${\mathbb C}^{d_1,\dots,d_k;m}$ with respect to ${\rm SL}_{d_i}$ is given by $\smash{\frac{M}{2d_i} = \frac{md_1d_2\cdots d_k}{2d_i^2}}$.
Since by assumption $d_1 \leq d_2 \leq \dots \leq d_k$, the smallest of these indices is the one for ${\rm SL}_{d_k}$, given by $\smash{\frac{md_1d_2\cdots d_{k-1}}{2d_k}}$.
When~$d_k \leq \frac12 m d_1 d_2 \cdots d_{k-1}$, as we assume in part~(4) of the theorem, all indices therefore are at least one, so Theorems~\ref{thm:ave} and~\ref{thm:elashvili} are applicable.
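As a concrete check, for ${\mathbb C}^{2,2,2;m}$ the index with respect to each ${\rm SL}_2$ equals $\frac{8m}{2\cdot 4} = m$: for $m=1$ all indices equal one (the borderline situation), while for $m\geq2$ they strictly exceed one and Theorem~\ref{thm:ave} applies directly.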
When $m=1$, then the representation of~$G$ on~$V$ is irreducible.
Elashvili has classified all irreducible representations of semisimple groups which are generically polystable, but not generically stable.
From the classification one can extract the following, see \cite[Theorem 9]{Elashvili} and also~\cite[p.~9]{BRVR}.
\begin{theorem}[\cite{Elashvili}]\label{thm:elashvili classification}
Consider the irreducible representation $V = {\mathbb C}^{d_1,\dots,d_k;m}$ of $G={\rm SL}_{d_1}({\mathbb C}) \times \cdots \times {\rm SL}_{d_k}({\mathbb C})$.
Assume that $2 \leq d_1 \leq \cdots \leq d_k \leq \frac{d_1 \cdots d_{k-1}}2$.
Then, $V$ satisfies the hypotheses of Theorem~\ref{thm:elashvili}, hence is generically $G$-polystable.
Moreover, $V$ is not generically $G$-stable if and only if $k=3$ and $(d_1,d_2,d_3) = (2,d,d)$ for some $d\geq2$.
\end{theorem}
Note that this result proves part~(4) of the theorem when ${\mathbb F} = {\mathbb C}$ and $m=1$.
To deal with the case that $m\geq2$, we will still make use of this theorem, together with knowledge of the s.g.p.'s.
For $(d_1,d_2,d_3)=(2,2,2)$, the stabilizer of $v=e_1^{\otimes 3}+e_2^{\otimes 3}$ is a s.g.p.
It includes and has the same Lie algebra as the two-dimensional torus
$\{ (s,t,u) \in G : s, t, u \text{ diagonal}, stu = 1 \}$.
For $(d_1,d_2,d_3)=(2,d,d)$, $d>2$, the stabilizer of $v = e_1 \otimes I + e_2 \otimes A$, where $I$ denotes the $d\times d$ identity matrix and $A$ is a generic $d\times d$ diagonal matrix, is a s.g.p.
It includes and has the same Lie algebra as the $(d-1)$-dimensional torus $\{ (1,t,t^{-1}) \in G : t \text{ diagonal} \}$.
\begin{proof} [Proof of Theorem~\ref{thm:recursive}, part (4) for ${\mathbb F} = {\mathbb C}$]
Since $d_k \leq \smash{\frac{d_1 \cdots d_{k-1} m}2}$, the index of~$V = {\mathbb C}^{d_1,\dots,d_k;m}$ with respect to any simple normal subgroup of~$G$ is greater than or equal to one (as discussed above).
When the inequality is strict, then $\rho$ is generically stable by Theorem~\ref{thm:ave}.
Now suppose that $d_k = \smash{\frac{d_1 \cdots d_{k-1} m}2}$.
Then $\rho$ is still generically polystable by Theorem~\ref{thm:elashvili}.
We now characterize when the representation is generically stable.
If $d_1 = \cdots = d_{k-1} = 1$ then $d_k \leq \frac m2 < m$, so $\rho$ is generically stable by Lemma~\ref{lem:fft}.
Now assume that $d_{k-1} \geq 2$.
Then, $d_k = \frac{m d_1 \cdots d_{k-1}}2 \geq m$.
This means that if we consider the action of the larger group $H = {\rm SL}_{d_1} \times \dots \times {\rm SL}_{d_k} \times {\rm SL}_m$ on $V = {\mathbb C}^{d_1} \otimes \dots \otimes {\mathbb C}^{d_k} \otimes {\mathbb C}^m$ then the dimension $d_k$ is still the largest among the dimensions $d_1,d_2,\dots,d_k,m$.
Accordingly, we can apply Theorem~\ref{thm:elashvili classification} to find that $V$ is generically $H$-stable (hence also generically $G$-stable\footnote{One way to see this is by using Corollary~\ref{cor:crit gen stab}.}), except if $(d_1,\dots,d_k;m)$ is one of the following cases:
\begin{enumerate}[label=(\alph*)]
\item $(1,\dots,1,2,d,d;1)$ for some $d\geq2$,
\item $(1,\dots,1,2,2;2)$,
\item $(1,\dots,1,d,d;2)$ for some $d>2$,
\item $(1,\dots,1,2,d;d)$ for some $d>2$.
\end{enumerate}
In case~(a), we have $m=1$ and hence $G\cong H$, so $V$ is not generically $G$-stable either.
To deal with the remaining cases, we observe that an s.g.p.\ for $G$ can be obtained by intersecting a generic $H$-conjugate of an s.g.p.\ for $H$ with the subgroup~$G$.
From the description of the s.g.p.'s above, we can observe the following.
In case~(b), the s.g.p.~for $G$ has dimension one (the dimension drops by one compared to $H$), while in case~(c) it has dimension $d-1$ (same as for the $H$-action).
Thus we see that $V$ is in either case not generically $G$-stable.
In contrast, in case~(d) the s.g.p.\ for $G$ is finite, so $V$ is generically $G$-stable.%
\footnote{Alternately, cases~(b), (c), and (d) follow from the results on matrix normal models in~\cite{DM-mle}.}
This concludes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:recursive}, part (4) for ${\mathbb F} = {\mathbb R}$]
This follows from \cite[Proposition~2.23]{DM-mle}.
\end{proof}
Finally, we need to prove that if $\rho$ is not generically semistable then it is unstable.
For~${\mathbb F}={\mathbb C}$, this statement is contained in Lemma~\ref{lem.loc.const}.
For ${\mathbb F}={\mathbb R}$, it then follows from \cite[Corollary 2.22 and Proposition~2.23]{DM-mle}.
This concludes the proof of Theorem~\ref{thm:recursive}.
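Although we will not need it, the recursive characterization is readily implemented; the following Python sketch (our own illustration, with hypothetical names; it encodes the statement of Theorem~\ref{thm:recursive} exactly as proved above) classifies a datum.
\begin{verbatim}
from math import prod

def generic_stability(dims, m):
    # returns 'unstable', 'polystable' (generically polystable but not
    # generically stable), or 'stable' (generically stable)
    dims = sorted(d for d in dims if d > 1)  # d_i = 1 is redundant
    if not dims:
        return 'stable'  # trivial group action
    dk, rest = dims[-1], dims[:-1]
    N = m * prod(rest)
    if dk > N:                     # part (1)
        return 'unstable'
    if dk == N:                    # part (2)
        return 'stable' if not rest else 'polystable'
    if 2 * dk > N:                 # part (3): castling transform
        return generic_stability(rest + [N - dk], m)
    # part (4): d_k <= N/2, generically polystable; the exceptions to
    # generic stability are (2, d, d; 1) and (d, d; 2)
    if m == 1 and len(dims) == 3 and dims[0] == 2 and dims[1] == dims[2]:
        return 'polystable'
    if m == 2 and len(dims) == 2 and dims[0] == dims[1]:
        return 'polystable'
    return 'stable'

assert generic_stability([2, 2, 2], 1) == 'polystable'
assert generic_stability([2, 2, 2], 2) == 'stable'
assert generic_stability([2, 5], 1) == 'unstable'
\end{verbatim}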
\section{A uniform characterization}\label{sec:uniform}
In this section we prove Theorem~\ref{thm:main-inv}, which gives a non-recursive characterization.
Following~\cite{BRVR}, we define the following quantities for positive integers $k$, $d_1,\dots,d_k$, and $m$:
\begin{align*}
R(d_1,\dots,d_k;m) &\coloneqq m \prod_{i=1}^k d_i + \sum_{n=1}^k (-1)^n G_n(d_1,\dots,d_k),
\shortintertext{where}
G_n(d_1,\dots,d_k) &\coloneqq \sum_{1\leq i_1 < \ldots < i_n \leq k} \gcd(d_{i_1},\dots,d_{i_n})^2,
\shortintertext{as well as}
g_{\max}(d_1,\dots,d_k) &\coloneqq \max_{i<j} \gcd(d_i,d_j),
\shortintertext{and}
\Delta(d_1,\dots,d_k; m) &\coloneqq m \prod_{i=1}^k d_i - 1 - \sum_{i=1}^k (d_i^2 - 1).
\end{align*}
By convention, we define $g_{\max}(d) = 1$ for any $d \in {\mathbb Z}_{>0}$, and we always assume that $k\geq1$.
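Since these quantities will be evaluated repeatedly, we record a direct computational sketch (our own, for illustration; the names are hypothetical, and the multi-argument \texttt{gcd} requires Python~3.9 or later).
\begin{verbatim}
from itertools import combinations
from math import gcd, prod

def R(dims, m):
    total = m * prod(dims)
    for n in range(1, len(dims) + 1):
        G_n = sum(gcd(*sub) ** 2 for sub in combinations(dims, n))
        total += (-1) ** n * G_n
    return total

def g_max(dims):
    return max((gcd(a, b) for a, b in combinations(dims, 2)), default=1)

def Delta(dims, m):
    return m * prod(dims) - 1 - sum(d * d - 1 for d in dims)

# spot checks for the datum (2, 2, 2; 1): R = 8 - 12 + 12 - 4 = 4
assert R([2, 2, 2], 1) == 4 and g_max([2, 2, 2]) == 2
assert Delta([2, 2, 2], 1) == -2   # the exceptional (2, d, d; 1) family
\end{verbatim}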
We saw earlier that generic semistability (polystability, stability) for tensor actions is symmetric in the $d_i$'s, as well as invariant under the castling transform in part~(3) of Theorem~\ref{thm:recursive}.
It is also invariant under removing dimensions $d_i$ that are equal to one.
It is not hard to verify that the quantities $R(d_1,\dots,d_k;m)$, $g_{\max}(d_1,\dots,d_k)$, and $\Delta(d_1,\dots,d_k;m)$ have the same invariance properties.
Hence, to prove Theorem~\ref{thm:main-inv}, it suffices to consider the case when $(d_1,\dots,d_k;m)$ is a minimal datum, and we may also assume that the $d_i$ are sorted.
Our analysis follows the same lines as the proof of~\cite[Proposition 5.3]{BRVR}.
\begin{lemma}\label{lem:R minimal}
Suppose $(d_1,\dots,d_k;m)$ is a minimal datum, and $d_1 \leq d_2 \leq \dots \leq d_k$.
Then:
\begin{enumerate}
\item $R < 0$ if and only if $d_k > md_1d_2\cdots d_{k-1}$;
\item $R = 0$ if and only if $d_k = md_1d_2\cdots d_{k-1}$;
\item $R > 0$ if and only if $d_k \leq \frac{1}{2} md_1d_2 \cdots d_{k-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
According to Lemma~\ref{lem:min-castle}, any minimal datum satisfies either $d_k > md_1d_2\cdots d_{k-1}$, $d_k = md_1d_2\cdots d_{k-1}$, or $d_k \leq \frac{1}{2} md_1d_2 \cdots d_{k-1}$.
If $d_1 = \dots = d_k = 1$ then the lemma is immediate, since $R = m - 1$.
Otherwise, we may assume that $d_1\geq2$ by removing all dimensions equal to one.
We may also assume that $m\geq2$, since when $m=1$ the lemma is already proved in~\cite[Proposition 5.3]{BRVR}.
Finally, observe that if we prove the ``if'' directions for all three statements, then the ``only if'' directions are automatic.
Hence, we proceed to prove the ``if'' directions in all three cases under the assumptions that $d_1\geq2$ and $m\geq 2$.
Let us write $B_n$ for the terms in $G_n$ that involve~$d_k$, and $A_n = G_n(d_1,\dots,d_{k-1})$ for all other terms.
Note that $A_k=0$ and $B_1 = d_k^2$.
Thus:
\begin{align}\label{eq:convenient}
R(d_1,\dots,d_k;m)
= m \prod_{i=1}^k d_i - d_k^2 + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1})
\end{align}
\emph{Case (1):} Suppose $d_k > d_1 \cdots d_{k-1} m$.
Then $d_k = d_1 \cdots d_{k-1} m + \alpha$ for some $\alpha\geq1$, and using~\eqref{eq:convenient},
\begin{align*}
R
&= -\alpha d_k + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}) \\
&= -\alpha^2 - \alpha d_1 \cdots d_{k-1} m + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1})
\end{align*}
Clearly, $A_n \geq B_{n+1}$ for all $n$, so we can leave out the terms for odd $n$ and obtain the bound
\begin{align*}
R
&\leq -\alpha^2 - \alpha d_1 \cdots d_{k-1} m + \sum_{n\geq2 \text{ even}} (A_n - B_{n+1}) \\
&\leq -\alpha^2 - \alpha d_1 \cdots d_{k-1} m + \sum_{n\geq2 \text{ even}} A_n \\
&< - 2 d_1 \cdots d_{k-1} + \sum_{n\geq2 \text{ even}} A_n,
\end{align*}
using that $m\geq2$ and $\alpha\geq1$.
Now we are in the same situation as in \cite[Eq.~(9)]{BRVR} and find that~$R<0$.
\smallskip
\emph{Case (2):} Suppose $d_k = d_1 \cdots d_{k-1} m$.
Here we have $B_{n+1} = A_n$ for all $n$, so using \eqref{eq:convenient},
\begin{align*}
R = m \prod_{i=1}^k d_i - d_k^2 = 0.
\end{align*}
\smallskip
\emph{Case (3):} Suppose $d_k \leq \frac12 d_1 \cdots d_{k-1} m$.
If $k=1$ then $d_1 \leq \frac m2$ and
\begin{align*}
R = md_1 - d_1^2 = d_1 (m - d_1) \geq \frac{m d_1}2 > 0.
\end{align*}
We now discuss the case that $k\geq2$.
Here,
\begin{align*}
R
&= m \prod_{i=1}^k d_i - d_k^2 + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}) \\
&= \frac14 d_1^2 \cdots d_{k-1}^2 m^2 - (\frac12 d_1 \cdots d_{k-1} m - d_k)^2 + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}) \\
&\geq \frac14 d_1^2 \cdots d_{k-1}^2 m^2 - (\frac12 d_1 \cdots d_{k-1} m - d_{k-1})^2 + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}) \\
&= d_{k-1}^2 \left( d_1 \cdots d_{k-2} m - 1 \right) + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}),
\end{align*}
where the inequality follows because $d_{k-1} \leq d_k \leq \frac{1}{2} d_1\cdots d_{k-1}m$.
Leaving out the even terms, which are non-negative since $A_n \geq B_{n+1}$, we obtain
\begin{align*}
R
&\geq d_{k-1}^2 \left( d_1 \cdots d_{k-2} m - 1 \right) + \sum_{n=1}^{k-1} (-1)^n (A_n - B_{n+1}) \\
&\geq d_{k-1}^2 \left( d_1 \cdots d_{k-2} m - 1 \right) - \sum_{n\geq1 \text{ odd}} (A_n - B_{n+1}) \\
&> d_{k-1}^2 \left( d_1 \cdots d_{k-2} m - 1 \right) - \sum_{n\geq1 \text{ odd}} A_n.
\end{align*}
Each of the $\binom{k-1}n$ GCDs contributing to $A_n$ is at most $d_{k-1}$, so
\begin{align*}
\sum_{n\geq1 \text{ odd}} A_n
\leq d_{k-1}^2 \sum_{n\geq1 \text{ odd}} \binom{k-1}n
= d_{k-1}^2 2^{k-2}
\end{align*}
and hence
\begin{align}\label{eq:R bound}
R
> d_{k-1}^2 \left( d_1 \cdots d_{k-2} m - 1 - 2^{k-2} \right)
\geq d_{k-1}^2 \left( 2^{k-2} m - 1 - 2^{k-2} \right)
\end{align}
For $m\geq2$ and $k\geq2$, it holds that
\begin{align}\label{eq:inner}
2^{k-2} m - 1 - 2^{k-2}
\geq 2^{k-2} - 1
\geq 0,
\end{align}
and hence we conclude that $R>0$.
\end{proof}
\begin{remark}
Write $\mathcal{Z}(d_1,\dots,d_k) := \sum_n (-1)^{n+1} \sum_{i_1 < i_2 < \dots < i_n} {\rm gcd}(d_{i_1}, d_{i_2},\dots,d_{i_n})$.
Then, for $d_1,\dots,d_k \in {\mathbb Z}_{\geq 1}$, one can interpret $\mathcal{Z}(d_1,\dots,d_k)$ as the cardinality of $\bigcup_{i=1}^k \left(\tfrac{1}{d_i}{\mathbb Z}/{\mathbb Z}\right)$ in ${\mathbb Q}/{\mathbb Z}$, by inclusion--exclusion.
In particular, $\mathcal{Z}(d_1,\dots,d_k) \geq 0$.
Further, observe that $R(d_1,\dots,d_k;m) = m \prod_{i=1}^k d_i - \mathcal{Z}(d_1^2,\dots,d_k^2)$.
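For instance, $\mathcal{Z}(2,3) = 2 + 3 - \gcd(2,3) = 4$, which is indeed the cardinality of $\tfrac12{\mathbb Z}/{\mathbb Z} \cup \tfrac13{\mathbb Z}/{\mathbb Z} = \{0, \tfrac12, \tfrac13, \tfrac23\}$ in ${\mathbb Q}/{\mathbb Z}$.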
\end{remark}
An alternate and short proof of the ``if'' statements in cases $(1)$ and $(2)$ of the above lemma is as follows.
Observe that the quantity $R$ is invariant under the transformation $(d_1,\dots,d_k;m) \rightarrow (d_1,\dots,d_{k-1}, d_k^*;m)$ where $d_k^* = m\prod_{i=1}^{k-1}d_i - d_k$ even in the case when some of the entries are negative or zero.
Thus in case $(1)$ we get $R(d_1,\dots,d_k;m) = R(d_1,\dots,d_{k-1},d^*_k;m) < 0$ since $md_1\dots d_{k-1} d^*_k < 0$ and $\mathcal{Z}(d_1^2,\dots,d_{k-1}^2, (d^*_k)^2) \geq 0$.
In case $(2)$, using that $\mathcal{Z}(d_1^2,\dots,d_{k-1}^2, 0) = 0$, one can deduce $R(d_1,\dots,d_k;m) = R(d_1,\dots,d_{k-1},0;m) = 0$.
Now, we can prove Theorem~\ref{thm:main-inv}.
\begin{proof}[Proof of Theorem~\ref{thm:main-inv}]
Generic semistability (polystability, stability) for tensor actions is invariant under the castling transform in part~(3) of Theorem~\ref{thm:recursive} and under permuting the dimensions~$d_i$.
The same is true for the quantities $R$, $\Delta$, and $g_{\max}$.
So, we can assume that~$d_1 \leq d_2 \leq \dots \leq d_k$ and that $(d_1,\dots,d_k;m)$ is a minimal datum.
\smallskip
\emph{Case (1):}
Suppose $R > 0$.
Then we know from Lemma~\ref{lem:R minimal} that $d_k \leq \frac12 md_1 d_2 \cdots d_{k-1}$.
If $k = 1$, then $d_1 \leq \frac m 2$, so we must have $m\geq 2$.
Further, $g_{\max} = 1$, so $R > 0$ implies that $R \geq g_{\max}^2$.
Finally, $\rho$ is always generically stable because the action of ${\rm SL}_{d_1}$ on $({\mathbb C}^{d_1})^{\oplus m}$ is generically stable as long as $m \geq d_1$ (we have $m \geq 2 d_1$).
This concludes the proof in case that $k=1$.
Now, we deal with $k\geq2$.
We may assume that $d_1 \geq 2$ by removing all dimensions equal to one (if all $d_i = 1$ then we can reduce to the case $k=1$ discussed above).
We now distinguish two cases:
\begin{itemize}
\item $m\geq 2$:
In this case we show that $R \geq g_{\max}^2$ and characterize equality.
If $k>2$ then \eqref{eq:inner} is not tight, and we see from \eqref{eq:R bound} that
\begin{align*}
R > d_{k-1}^2 \geq g_{\max}^2.
\end{align*}
For $k=2$, we are in the matrix case.
Since $2 \leq d_1 \leq d_2 \leq \frac12 m d_1$, we find that
\begin{align*}
R(d_1,d_2;m)
&= md_1d_2 - d_1^2 - d_2^2 + \gcd(d_1,d_2)^2
= (md_1 - d_2) d_2 - d_1^2 + g_{\max}^2 \\
&\geq \frac12 md_1^2 - d_1^2 + g_{\max}^2
= \left( \frac m2 - 1 \right) d_1^2 + g_{\max}^2
\geq g_{\max}^2,
\end{align*}
with equality if and only if $d_1 = d_2$ and $m=2$, in which case also $g_{\max} = d_1 = d_2 \geq 2$.
Thus we have proved that $R \geq g_{\max}^2$, with equality if and only if $k=2$ and $(d_1,d_2) = (d,d)$ for some $d\geq2$.
By part~(4) of Theorem~\ref{thm:recursive}, this is precisely the case where $\rho$ is generically polystable but not generically stable (when $d_k \leq \frac12m d_1 d_2 \cdots d_{k-1}$ and $m\geq2$).
\item $m=1$:
\cite[Proposition 6.1]{BRVR} shows that in this case $\Delta \geq -2$, with equality precisely in the case that $k=3$ and $(d_1,d_2,d_3)=(2,d,d)$ for some $d\geq 2$.
(If $\Delta > -2$ then in fact $\Delta \geq 2$, but we do not need this.)
By part~(4) of Theorem~\ref{thm:recursive}, this is precisely the case where $\rho$ is generically polystable but not generically stable (when $d_k \leq \frac12m d_1 d_2 \cdots d_{k-1}$ and $m=1$).
\end{itemize}
\smallskip
\emph{Case (2):}
Suppose $R = 0$.
Then we know from Lemma~\ref{lem:R minimal} that $d_k = md_1 d_2 \cdots d_{k-1}$.
By part~(2) of Theorem~\ref{thm:recursive}, $\rho$ is generically polystable, and generically stable if and only if $d_1 = \cdots = d_{k-1} = 1$.
When $k=1$, we have $g_{\max}=1$ (by definition) and this condition is always satisfied.
Otherwise, $d_k = md_1d_2 \cdots d_{k-1}$ means that $g_{\max} = \max_{i<j} \gcd(d_i,d_j) = \max_{i<k} d_i$.
Thus, we find that in either case, $g_{\max}=1$ if and only if $\rho$ is generically stable.
\smallskip
\emph{Case (3):}
Suppose $R < 0$.
By Lemma~\ref{lem:R minimal}, we know that $d_k > md_1 d_2 \cdots d_{k-1}$.
Hence $\rho$ is unstable by part~(1) of Theorem~\ref{thm:recursive}.
\end{proof}
\section{Maximum likelihood estimation for tensor normal models}\label{sec:mle}
In this section, we will prove Theorem~\ref{thm:main} which characterizes the boundedness of the likelihood function and the existence and uniqueness of MLEs for the tensor normal models.
The tensor normal models are the Gaussian group models corresponding to the tensor action.
Thus the results on generic stability for tensor actions translate directly to results on maximum likelihood estimation for tensor normal models via Theorem~\ref{theo:AKRS}.
This connection is perfect for ${\mathbb F} = {\mathbb C}$, whereas some more effort is required for ${\mathbb F} = {\mathbb R}$.
A technical point to note is that $G = \smash{\prod_{i=1}^k {\rm SL}_{d_i}}$ is not a subset of $\operatorname{GL}(V)$, $V = {\mathbb F}^{d_1,\dots,d_k;m}$, which is needed to apply Theorem~\ref{theo:AKRS} verbatim.
However, this is a small issue, as we may simply replace $G$ by its homomorphic image $\rho_{d_1,\dots,d_k;m}(G)$, and note that notions of semistability, polystability, and stability are the same for both groups.
\begin{proof} [Proof of Theorem~\ref{thm:main}]
We first consider the case of ${\mathbb F} = {\mathbb C}$.
Consider the action of $G = \prod_{i=1}^k {\rm SL}_{d_i}({\mathbb C})$ on ${\mathbb C}^{d_1,\dots,d_k}$.
The associated Gaussian group model is $\mathcal{M}_{\mathbb C}(d_1,\dots,d_k)$.
Thus, Corollary~\ref{cor:mle-gen.stable} implies that Theorem~\ref{thm:main-inv} translates precisely to Theorem~\ref{thm:main}.
We now discuss the relation between the real and the complex case.
For both ${\mathbb F} = {\mathbb R}$ and ${\mathbb C}$, Theorem~\ref{thm:main-inv} shows that generic semistability is equivalent to generic polystability.
Further, generic semistability (resp.\ polystability) over ${\mathbb C}$ is equivalent to generic semistability (resp.\ polystability) over ${\mathbb R}$, see \cite[Proposition~2.23]{DM-mle}.
Finally, for both ${\mathbb F} = {\mathbb R}$ and ${\mathbb C}$, generic semistability is equivalent to almost sure boundedness of the log-likelihood function because the semistable locus (over ${\mathbb F}$) is either empty or a (non-empty) Zariski-open subset (in particular, the complement of a measure zero subset), see \cite[Corollary~2.15, Proposition~2.21, Corollary~2.22]{DM-mle}.
In fact, we claim that the following are equivalent:
\begin{enumerate}
\item $\rho_{d_1,\dots,d_k;m}$ is generically semistable for ${\mathbb F} = {\mathbb C}$.
\item $\rho_{d_1,\dots,d_k;m}$ is generically semistable for ${\mathbb F} = {\mathbb R}$.
\item $\rho_{d_1,\dots,d_k;m}$ is generically polystable for ${\mathbb F} = {\mathbb C}$.
\item $\rho_{d_1,\dots,d_k;m}$ is generically polystable for ${\mathbb F} = {\mathbb R}$.
\item For the tensor normal model $\mathcal{M}_{\mathbb C}(d_1,\dots,d_k)$, the log-likelihood function is almost surely bounded for $m$ samples.
\item For the tensor normal model $\mathcal{M}_{\mathbb R}(d_1,\dots,d_k)$, the log-likelihood function is almost surely bounded for $m$ samples.
\item For the tensor normal model $\mathcal{M}_{\mathbb C}(d_1,\dots,d_k)$, an MLE exists almost surely for $m$ samples.
\item For the tensor normal model $\mathcal{M}_{\mathbb R}(d_1,\dots,d_k)$, an MLE exists almost surely for $m$ samples.
\end{enumerate}
The equivalence of (1)--(6) was discussed above. The implications $(3) \implies (7)$ and $(4) \implies (8)$ follow from Theorem~\ref{theo:AKRS} since the complement of a Zariski-open subset has Lebesgue measure zero. Further, it is also immediate that $(7) \implies (5)$ and $(8) \implies (6)$. This shows the equivalence of all eight statements.
Moreover, $\rho_{d_1,\dots,d_k;m}$ is generically stable over ${\mathbb F} = {\mathbb C}$ if and only if the same holds for ${\mathbb F} = {\mathbb R}$, see again~\cite[Proposition~2.23]{DM-mle}.
In either case, generic stability implies the almost sure existence of a unique MLE by Theorem~\ref{theo:AKRS}.
However, the converse is not necessarily true when ${\mathbb F} = {\mathbb R}$, and this is what needs to be investigated.
To summarize, the only cases we need to further study are the cases in which $\rho_{d_1,\dots,d_k;m}$ is generically polystable but not generically stable.
According to Theorem~\ref{thm:recursive}, these are the castling equivalence classes of the minimal data below:
\begin{enumerate}
\item $d_k = md_1d_2\cdots d_{k-1}$ and $d_1 \cdots d_{k-1} > 1$.
\item $(d_1,\dots,d_k;m) = (1,1,\dots,1,d,d;2)$ with $d \geq 2$.
\item $(d_1,\dots,d_k;m) = (1,1,\dots,1,2,d,d;1)$ with $d \geq 2$.
\end{enumerate}
To conclude the proof of Theorem~\ref{thm:main}, we need to show that for these cases the almost sure existence of a unique MLE fails over ${\mathbb F}={\mathbb R}$ as well.
By Corollary~\ref{cor:uniq->compact} and Corollary~\ref{lem:castle compact}, it suffices to prove that for each of these three minimal data there is a Euclidean open subset consisting of points with non-compact stabilizers.
Note that Euclidean open subsets have positive Lebesgue measure.
For case (1), observe that the proof of Lemma~\ref{lem:stab.case2} works even when the underlying field is ${\mathbb R}$.
So, in fact, there is a non-empty Zariski-open subset of $V$ (in particular, a set of positive measure) where the stabilizer is isomorphic to $\prod_{i=1}^{k-1} {\rm SL}_{d_i}({\mathbb R})$, which is non-compact unless $d_1 = \cdots = d_{k-1} = 1$.
We now address case (2) and distinguish two cases:
\begin{itemize}
\item $d\geq3$:
For generic $v \in \smash{\operatorname{Mat}_{d,d}^2 = ({\mathbb R}^d \otimes {\mathbb R}^d)^{\oplus 2}}$, we give a sequence of elements in the stabilizer with no convergent subsequence (hence proving that the stabilizer is not compact).
It was proved in \cite[Lemma~6.2]{DM-mle} that for generic $v \in \smash{\operatorname{Mat}_{d,d}^2}$, there exists $(g,h) \in G_v$ such that $g$ and $h$ have eigenvalues with absolute value not equal to $1$.
Since $G_v \subseteq {\rm SL}_d \times {\rm SL}_d$, this means that $\{(g^n,h^n)\}_{n \in {\mathbb Z}_{>0}}$ is a sequence of elements in $G_v$ with no convergent subsequence.
Hence $G_v$ is not compact. In fact, this gives a Zariski-open subset consisting of points with non-compact stabilizers.
\item $d=2$:
It is easy to see that the stabilizer of $v_{a,b} = \smash{\left(\begin{psmallmatrix} 1 &0 \\ 0 & 1 \end{psmallmatrix},\begin{psmallmatrix} a & 0 \\ 0 & b \end{psmallmatrix} \right) \in \operatorname{Mat}_{2,2}^2}$ is not compact for any $a,b\in{\mathbb R}$ (cf.~the discussion below Theorem~\ref{thm:elashvili classification}).
Now, let us consider $W = \{(A,B) \in \operatorname{Mat}_{2,2}^2 \ | \ \det(A) \neq 0, \det(tI -A^{-1}B) \text{ has distinct real roots}\}$.
Then, it is easy to see that every $w \in W$ is in the ${\rm SL}_2 \times {\rm SL}_2$-orbit of $v_{a,b}$ for an appropriate choice of $a$ and $b$ (indeed, just the eigenvalues of $A^{-1}B$).
Next, observe that $W$ is a full-dimensional semi-algebraic set: it is described by one Zariski-open condition ($\det(A) \neq 0$) and one inequality (the discriminant of $\det(tI - A^{-1}B)$ is positive).
Thus, $W$ is a Euclidean-open subset (hence, a set of positive Lebesgue measure), and every point in $W$ has a non-compact stabilizer.
\end{itemize}
Finally, case (3) follows from case (2) in view of Lemma~\ref{lem:res.comp} below.
\end{proof}
\begin{lemma}\label{lem:res.comp}
Let $H \subseteq G$ be a closed subgroup of an algebraic group and let $V$ be a rational representation of $G$. Let $v \in V$. If $G_v$ is compact, then so is $H_v$.
\end{lemma}
\begin{proof}
$H_v = G_v \cap H$ is a closed subset of $G_v$ and hence compact if $G_v$ is compact.
\end{proof}
We end this section with a proof of Corollary~\ref{cor:threshold}.
\begin{proof}[Proof of Corollary~\ref{cor:threshold}]
Since Theorem~\ref{thm:main} does not differentiate between ${\mathbb F} = {\mathbb R}$ and ${\mathbb F} = {\mathbb C}$, it suffices to prove this in the case of ${\mathbb F} = {\mathbb C}$.
Here, statistical notions correspond precisely to stability notions by Corollary~\ref{cor:mle-gen.stable}, so we will make our arguments in the language of stability.
First, observe that $\lceil r \rceil \leq {\rm mlt}_b (= {\rm mlt}_e)$ because $\rho_{d_1,\dots,d_k;m}$ is unstable unless $m \geq r$, by part~$(1)$ of Theorem~\ref{thm:main-inv}.
Now, let $c = \lceil r \rceil$, so $d_k = c d_1\cdots d_{k-1} - \alpha$ for some $0 \leq \alpha < d_1d_2\cdots d_{k-1}$.
To show ${\rm mlt}_u \leq c+1$, it suffices to show that $\rho_{d_1,\dots,d_k;c+1}$ is generically stable by Lemma~\ref{lem.increase}.
We see that $\rho_{d_1,\dots,d_k;c+1}$ is castling equivalent to $\rho_{d_1,\dots,d_{k-1},d_1d_2\cdots d_{k-1} + \alpha;c+1}$. It suffices to show that one of them is generically stable. Observe that both $A = d_k$ and $B = d_1d_2\cdots d_{k-1} + \alpha$ are larger than $d_{k-1}$, so the dimensions are already in order. Since $A + B = (c+1)d_1 \cdots d_{k-1}$, we get that either $A$ or $B$ is $\leq \frac{1}{2} (c+1)d_1\cdots d_{k-1}$. Hence, we get generic stability for $\rho_{d_1,\dots,d_k;c+1}$ by parts~(3) and (4) of Theorem~\ref{thm:recursive} unless $(d_1,\dots,d_k;c+1)$ (or $(d_1,\dots,d_{k-1},B;c+1)$) is one of $(2,d,d;1)$ or $(d,d;2)$.
The former is not possible because $c+1 \geq 2$ and the latter is not possible because $k \geq 3$.
\end{proof}
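In computational terms, the thresholds can be found by scanning over $m$; the following usage sketch (our own illustration, assuming the \texttt{generic\_stability} helper from Section~\ref{sec:stability} and that, as in the proof of Theorem~\ref{thm:main}, existence corresponds to generic polystability and uniqueness to generic stability) makes this concrete.
\begin{verbatim}
def mle_thresholds(dims, max_m=64):
    # smallest m with a.s. bounded likelihood / a.s. existence of an MLE,
    # and smallest m with a.s. existence of a *unique* MLE
    mlt_b = next(m for m in range(1, max_m)
                 if generic_stability(list(dims), m) != 'unstable')
    mlt_u = next(m for m in range(1, max_m)
                 if generic_stability(list(dims), m) == 'stable')
    return mlt_b, mlt_u

assert mle_thresholds((2, 2, 2)) == (1, 2)
\end{verbatim}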
\section{Dimension of the GIT quotient}\label{sec:git quotient}
In this section, let the underlying field be ${\mathbb F} = {\mathbb C}$.
Let $V$ be a rational representation of a reductive group $G$.
Then, the GIT quotient $\mathbb PV\!\!\gitsymbol\!G$ is defined as ${\rm Proj}({\mathbb C}[V]^G)$, the projective variety associated to the ring of invariants (with its natural grading).
Given what we have computed, we can also compute the dimension of the GIT quotient for the action of $G = \prod_i {\rm SL}_{d_i}$ on $V = {\mathbb F}^{d_1,\dots,d_k;m}$. This relies on Rosenlicht's theorem \cite[Theorem~2]{Rosenlicht} (see also the proof of \cite[Lemma~3.1]{BRVR}).
\begin{theorem}[Rosenlicht]
Let $V$ be a rational representation of a connected semisimple group $G$.
Let $H$ be the stabilizer in general position.
Then, $\dim (\mathbb PV\!\!\gitsymbol\!G) = \dim(\mathbb{P}V) - \dim(G) + \dim(H)$, where $\dim (\mathbb PV\!\!\gitsymbol\!G) = -1$ if and only if $\mathbb PV\!\!\gitsymbol\!G = \emptyset$.
\end{theorem}
For the tensor action, this means that
\begin{align}\label{eq:rosenlicht concrete}
\dim(\mathbb PV\!\!\gitsymbol\!G) = \Delta(d_1,\dots,d_k;m) + \dim H,
\end{align}
where $\Delta = \Delta(d_1,\dots,d_k;m) = m \prod_{i=1}^k d_i - 1 - \sum_{i=1}^k (d_i^2 - 1)$ as defined above and where $H$ is the stabilizer in general position.
\begin{proof} [Proof of Theorem~\ref{thm:git}]
By Lemma~\ref{lem.cas.inv.ring}, the dimension of the GIT quotient is invariant under castling transforms, so we may assume that $(d_1,\dots,d_k;m)$ is minimal.
We handle each case separately:
\emph{Case (1):} Suppose $R < 0$.
Then $\rho$ is unstable by Theorem~\ref{thm:main-inv}.
This means that the invariant ring is given by ${\mathbb C}[V]^G = {\mathbb C}$ and that $\mathbb PV\!\!\gitsymbol\!G$ is empty.
\emph{Case (2):} Suppose $R = 0$.
Then $m d_1 d_2 \cdots d_{k-1} = d_k$ by Lemma~\ref{lem:R minimal}.
We identify $V \cong \operatorname{Mat}_{d_k,d_k}$.
For the left-right action of ${\rm SL}_{d_k} \times {\rm SL}_{d_k}$, the ring of invariants is ${\mathbb C}[\det]$, where $\det$ denotes the determinant polynomial.
The same is true when we restrict to the second ${\rm SL}_{d_k}$, say.
Since $\{1\} \times {\rm SL}_{d_k} \subseteq G \subseteq {\rm SL}_{d_k} \times {\rm SL}_{d_k}$, the ring of invariants for $\rho$ is also ${\mathbb C}[\det]$.
Thus, $\mathbb PV\!\!\gitsymbol\!G$ is a single point.
\emph{Case (3):} Suppose $R > 0$.
Whenever $\rho$ is generically stable, \eqref{eq:rosenlicht concrete} implies that the dimension of the GIT quotient is $\Delta$ (recall that the kernel of $\rho$ is zero-dimensional), while if $\rho$ is only generically polystable we need to add the dimension of the stabilizer in general position.
There are two cases to consider:
\begin{itemize}
\item $m=1$ and $\Delta = -2$:
In this case, $k=3$ and $(d_1,d_2,d_3) = (2,d,d)$ for some $d\geq 2$, as we saw in the proof of Theorem~\ref{thm:main-inv}.
If $d=2$ then the s.g.p.\ is two-dimensional, while if $d>2$ it is $(d-1)$-dimensional (see proof of Theorem~\ref{thm:recursive}, part~(4)).
Thus, since $g_{\max} = d$,
\begin{align*}
\dim(\mathbb PV\!\!\gitsymbol\!G)
= \Delta + \dim H
= - 2 + \begin{lrdcases}
2 & \text{ if } g_{\max} = 2 \\
g_{\max} - 1 & \text{ if } g_{\max} > 2
\end{lrdcases}
= \max(g_{\max}-3,0),
\end{align*}
which is also contained in \cite[Theorem 1.2]{BRVR}.
\item $m=2$ and $R = g_{\max}^2 > 1$:
In this case, similarly, $k=2$ and $(d_1,d_2) = (d,d)$ for some $d\geq 2$, again by the proof of Theorem~\ref{thm:main-inv}.
If $d=2$ then the s.g.p.\ is one-dimensional, while if $d>2$ then the s.g.p.~is $(d-1)$-dimensional (see proof of Theorem~\ref{thm:recursive}, part~(4)).
Thus $\dim H = g_{\max} - 1$ in either case and hence
\begin{equation*}
\dim(\mathbb PV\!\!\gitsymbol\!G)
= \Delta + \dim H
= 1 + \left( g_{\max} - 1 \right)
= g_{\max}. \qedhere
\end{equation*}
\end{itemize}
\end{proof}
\section*{Appendices}
\section{Diquark and tetraquark operators} \label{app:tetraquark}
In this appendix we present some additional details of the diquark and tetraquark operators. Consider a diquark operator, $\delta(\vec{x},t) = \sum \text{CGs} \ q^T(\vec{x},t) (C \Gamma) q (\vec{x},t)$, where CGs refers to the Clebsch-Gordan coefficients in Equation~\eqref{eqn:diquark} and various indices have been suppressed. Under proper Lorentz transformations, this operator transforms in the same way as the analogous fermion bilinear and the continuum spin, $J$, of the diquark for different choices of $\Gamma$ is given in Table~\ref{tab:gamma}. Under a parity transformation, the operator transforms to $\mathcal{P}\delta(\vec{x},t)\mathcal{P}^{-1} = \eta_P q^T(-\vec{x},t) C \Gamma q(-\vec{x},t)$, where $\eta_P = \pm 1 $ is a parity factor that depends on the gamma matrix $\Gamma$ as given in the table. The analogous anti-diquark operator has the same transformation properties. The parity of the tetraquark operator is the product of the parity factors of the diquark and anti-diquark operators.
Taking the Hermitian conjugate of the diquark, we obtain $\delta^\dagger = h_\Gamma s_C s_F \bar{\delta}$, where the symmetry of the Dirac gamma matrix $h_\Gamma = \pm 1$ is shown in Table~\ref{tab:gamma} and $s = \xi_1 \xi_3$ denotes the phase arising from the exchange symmetry of the Clebsch-Gordan coefficients of the diquark operator, for colour $(s_C)$ and flavour $(s_F)$ respectively. The phase $\xi_1 = \pm 1$ arises from reversing the order of the SU(3) irreps,
\begin{equation}\langle D_1, d_1; D_2, d_2 | D, d \rangle = \xi_1 \langle D_2, d_2; D_1, d_1 | D, d \rangle, \end{equation}
and the phase $\xi_3 = \pm 1$ arises from complex conjugating the irreps,
\begin{equation}\langle D_1, d_1; D_2, d_2 | D, d \rangle = \xi_3 \langle \bar{D}_1, \bar{d}_1; \bar{D}_2, \bar{d}_2 | \bar{D}, \bar{d} \rangle.\end{equation}
We use the phase conventions of Refs.~\cite{deSwart:1963pdg,Kaeding:1995vq}. For tetraquark operators with $G$-parity symmetry as in Equation~\eqref{eqn:tetragparity}, the $G$-parity is given by $G = \tilde{G} \xi_J \xi_1 \xi_3$ where $\xi_J = (-1)^{J_1 + J_2 - J}$ is the phase arising when the arguments of the $SU(2)$ Clebsch-Gordan coefficients are interchanged and $\xi_1$ and $\xi_3$ here arise from the exchange symmetry of the $SU(3)_F$ Clebsch-Gordan coefficients of the tetraquark operator.
When the flavour irreps of the quarks in the diquark are identical, the overall colour-flavour-spin coupling in the diquark must be antisymmetric due to Fermi statistics. To see this schematically, consider the diquark $\delta = \sum C_{ab} \ q_a q_b$ with overall coupling coefficients $C$. If $C_{ab}$ were symmetric, the sum would vanish identically because $q_a q_b$ is antisymmetric. The symmetry arising from spin, $(C\Gamma)_{\alpha \beta} = s_\Gamma (C\Gamma)_{\beta \alpha}$, is given in Table~\ref{tab:gamma} and the symmetries arising from colour and flavour are discussed above.
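Explicitly, since $q_a q_b = - q_b q_a$,
\[
\sum_{a,b} C_{ab}\, q_a q_b = \frac12 \sum_{a,b} \left( C_{ab} - C_{ba} \right) q_a q_b \, ,
\]
so only the antisymmetric part of the coupling survives.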
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|cccccccc}
& 1 & $\gamma_5$ & $\gamma_0 \gamma_5$ & $\gamma_0$ & $\gamma_i$ & $\gamma_i \gamma_0$ & $\gamma_5 \gamma_i$ & $[\gamma_i,\gamma_j]$\\
\hline
$\Gamma$ & $a_0$ & $\pi$ & $\pi_2$ & $b_0$ & $\rho$ & $\rho_2$ & $a_1$ & $b_1$ \\
\hline
$J$ & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
\hline
$\eta_P$ & $-$ & $+$ & $+$ & $-$ & $+$ & $+$ & $-$ & $-$ \\
\hline
$h_\Gamma$ & $+$ & $-$ & $+$ & $+$ & $+$ & $-$ & $+$ & $-$\\
\hline
$s_\Gamma$ & $-$ & $-$ & $-$ & $+$ & $+$ & $+$ & $-$ & $+$
\end{tabular}
\caption{For different Dirac gamma matrices, we show the notation $\Gamma$ used to denote the gamma matrix, the continuum spin $J$, the parity factor $\eta_P$, the hermiticity factor $h_\Gamma$ and the spin coupling symmetry $s_\Gamma$.}
\label{tab:gamma}
\end{center}
\end{table}
\section{One-gluon exchange model}
\label{subsec:onegluon}
In a simple one-gluon exchange model of a diquark \cite{Jaffe:2004ph}, the two quarks interact via a colour-colour spin-spin interaction term,
\begin{equation}
\label{equ:onegluon}
H = - \alpha_s A_{12}(\lambda_1 \cdot \lambda_2) (\vec{S}_1 \cdot \vec{S}_2)
\end{equation}
where $A_{12}$ is a model-dependent mass term that behaves like $1/(m_1 m_2)$ in the heavy quark limit, $\lambda$ are the Gell-Mann matrices that span the Lie algebra of SU$(3)_C$ and $\vec{S}$ is the spin of the quark. The relative factors that arise for various colour irreps $R$ and spin $S$ are given in Table~\ref{tab:onegluon}. It can be seen that the most attractive diquark is the $(R,S) = (\underline{\bar{3}}, 0)$ configuration. Similarly, the most attractive anti-diquark is the $(\underline{3},0)$ configuration. Hence a scalar $J^P = 0^+$ tetraquark is expected to be the most favourable. Whilst other configurations are less favourable, this one-gluon exchange interaction is suppressed by the masses of the quarks such that in the heavy quark limit, a rich spectrum of tetraquark states with $J^P = 0^+, 1^+, 2^+$ is expected to be observed in models such as Ref.~\cite{Maiani:2004vq}.
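For reference, the entries of Table~\ref{tab:onegluon} follow from the usual Casimir identities; assuming the normalization in which the relative factor is $(\frac{\lambda_1}{2}\cdot\frac{\lambda_2}{2})(\vec{S}_1 \cdot \vec{S}_2)$ (i.e., generators $T^a = \lambda^a/2$),
\begin{equation*}
\frac{\lambda_1}{2}\cdot\frac{\lambda_2}{2} = \frac12\left[C_2(R) - 2\,C_2(\underline{3})\right], \qquad
\vec{S}_1 \cdot \vec{S}_2 = \frac12\left[S(S+1) - \frac32\right],
\end{equation*}
with $C_2(\underline{3}) = C_2(\underline{\bar{3}}) = \frac43$ and $C_2(\underline{6}) = \frac{10}{3}$. For example, for $(R,S) = (\underline{\bar{3}}, 0)$ this gives $\left(-\frac23\right)\left(-\frac34\right) = \frac12$, reproducing the table.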
In the case when the flavour irreps of the quarks within the diquark are identical, Fermi symmetry constrains the number of possible configurations. If the flavour irrep is antisymmetric, then the only allowed diquarks are the attractive configurations, $(\underline{\bar{3}}, 0)$ and $(\underline{6}, 1)$. On the other hand, when the flavour irrep is symmetric, the only allowed diquarks are the repulsive configurations, $(\underline{\bar{3}}, 1)$ and $(\underline{6}, 0)$. A consequence of this is that the doubly-charmed $cc$ diquark is always repulsive, with the least repulsive configuration being $(\underline{\bar{3}},1)$. However, the repulsive interaction is suppressed by the quark mass and so it is expected that such tetraquarks may exist in the heavy quark limit~\cite{MANOHAR199317}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{cc|cc}
&& \multicolumn{2}{c}{$S$}\\
& & 0 & 1 \\ \hline
\multirow{2}{*}{R}& $\underline{\bar{3}}$ & $\frac{1}{2}$ & $-\frac{1}{6}$ \\[0.25cm]
& $\underline{6}$ & $-\frac{1}{4}$ & $\frac{1}{12}$
\end{tabular}
\caption{The relative factors of the colour-colour spin-spin interaction within the diquark in Equation~\eqref{equ:onegluon} for various colour irreps and spins.}
\label{tab:onegluon}
\end{center}
\end{table}
\section{Non-relativistic quark model}
\label{subsec:nonrelativistic}
In a non-relativistic quark model, diquark states at rest with orbital angular momentum $L$ and spin angular momentum $S$ coupled to total angular momentum $J$ can be constructed as,
\begin{equation}
\big|\delta^{J,m}_{LS} \big\rangle = \sum_{m_L,m_S} \langle L, m_L; S, m_S| J,m \rangle \sum_{\alpha, \beta} \big\langle \tfrac{1}{2}, \alpha; \tfrac{1}{2}, \beta \big| S, m_S \big\rangle
\int\! \frac{d^3 q}{(2\pi)^3} \ Y^{m_L}_L (\hat{q}) f_{nL}(|\vec{q}|) b^\dagger_\alpha(\vec{q}) b^\dagger_\beta(-\vec{q}) | 0 \rangle \, ,
\end{equation}
where $b_\alpha^\dagger(\vec{q})$ is a creation operator for a quark of momentum $\vec{q}$ and $J_z$ component $\alpha$, and $f_{nL}(|\vec{q}|)$ is a model-dependent wavefunction that is determined by some interaction potential and is specified by $L$ and the principal quantum number $n$. Annihilating this state with the field expansion of the diquark operator, we obtain,
\begin{equation}
\begin{aligned}
\big\langle 0 \big| \delta^{J[\Gamma]} \big|\delta^{J,m}_{LS} \big\rangle =& \sum_{m_L,m_S} \langle L, m_L; S,m_S| J,m \rangle \sum_{\alpha,\beta} \big\langle \tfrac{1}{2}, \alpha; \tfrac{1}{2}, \beta \big| S, m_S \big\rangle \\
& \times \int \! \frac{d^3 q}{(2\pi)^3} \ Y^{m_L}_L (\hat{q}) f_{nL}(|\vec{q}|) \, u^T_{(\alpha)}(\vec{q}) C\Gamma u_{(\beta)} (-\vec{q}) \, ,
\end{aligned}
\end{equation}
where $u$ is a Dirac spinor. Expanding $u$ in the non-relativistic limit where $|\vec{q}|$ is much smaller than the mass of the quark, we find to leading order for $\Gamma = \gamma^5$,
\begin{equation}
\begin{aligned}
\big\langle 0 \big| \delta^{J[\gamma^5]} \big|\delta^{J,m}_{LS} \big\rangle =& \sum_{m_L,m_S} \langle L, m_L; S,m_S| J,m \rangle \\ &\times \underbrace{\left( \big\langle \tfrac{1}{2}, -\tfrac{1}{2}; \tfrac{1}{2}, \tfrac{1}{2} \big| S, m_S \big\rangle - \big\langle \tfrac{1}{2}, \tfrac{1}{2}; \tfrac{1}{2}, -\tfrac{1}{2} \big| S, m_S \big\rangle \right)}_{\sim\delta_{S0} \delta_{m_S 0}} \underbrace{\int \! \frac{d^3 q}{(2\pi)^3} \ Y^{m_L}_L (\hat{q})}_{\sim \delta_{L0}\delta_{m_L0}} f_{nL}(|\vec{q}|) \, .
\end{aligned}
\end{equation}
Hence, $\delta^{J[\gamma^5]}$ overlaps with the $qq({}^{2S+1}\!L_J={}^1\!S_0)$ diquark construction. Similar results for other $\Gamma$ are shown in Table~\ref{tab:nonrel}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|cccccccc}
& 1 & $\gamma_5$ & $\gamma_0 \gamma_5$ & $\gamma_0$ & $\gamma_i$ & $\gamma_i \gamma_0$ & $\gamma_5 \gamma_i$ & $[\gamma_i,\gamma_j]$\\
\hline
$qq({}^{2S+1}\!L_J)$ & ${}^3\!P_0$ & ${}^1\!S_0$ & ${}^1\!S_0$ & - & ${}^3\!S_1$ & ${}^3\!S_1$ & ${}^3\!P_1$ & ${}^1\!P_1$ \\
\end{tabular}
\caption{The non-relativistic overlap of the diquark operator $\delta^{J[\Gamma]}$ onto the diquark state $qq({}^{2S+1}\!L_J)$.}
\label{tab:nonrel}
\end{center}
\end{table}
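As a cross-check of the selection rules indicated by the underbraces in the equation above, both factors can be verified symbolically. The following minimal sketch (in Python using SymPy; the tooling is our illustrative choice, not part of the original calculation) confirms that the spin factor vanishes unless $S = m_S = 0$ and that the angular integral vanishes unless $L = m_L = 0$.
\begin{verbatim}
# Symbolic check of the selection rules in the non-relativistic
# overlap of the gamma^5 diquark operator.
from sympy import S, integrate, sin, pi, symbols
from sympy.physics.quantum.cg import CG
from sympy.functions.special.spherical_harmonics import Ynm

half = S(1)/2

# Spin factor: <1/2,-1/2;1/2,+1/2|S,mS> - <1/2,+1/2;1/2,-1/2|S,mS>
for spin in (0, 1):
    for mS in range(-spin, spin + 1):
        diff = (CG(half, -half, half, half, spin, mS).doit()
                - CG(half, half, half, -half, spin, mS).doit())
        print("S =", spin, " mS =", mS, " ->", diff)  # non-zero only for S=mS=0

# Angular factor: the integral of Y_L^mL over the sphere
theta, phi = symbols("theta phi", real=True)
for L in (0, 1):
    for mL in range(-L, L + 1):
        y = Ynm(L, mL, theta, phi).expand(func=True)
        val = integrate(integrate(y * sin(theta), (theta, 0, pi)),
                        (phi, 0, 2*pi))
        print("L =", L, " mL =", mL, " ->", val)      # non-zero only for L=mL=0
\end{verbatim}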
\section{Operator lists}\label{app:ops}
The interpolating operators used to calculate the spectra are listed in Table~\ref{tab:hiddencharmops} for the hidden-charm sector and Table~\ref{tab:doublecharmops} for the doubly-charmed sector.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c|c}
\hline
$T_1^{++}$ & $A_1^{+-}$ &$T_1^{+-}$\\
\hline
$\delta^{b_0}_{\bar{3},3} \bar{\delta}^{b_1}_{3,\bar{3}}$ & $\delta^{a_0}_{\bar{3},3} \bar{\delta}^{a_0}_{3,\bar{3}}$ & $\delta^{a_0}_{\bar{3},3} \bar{\delta}^{a_1}_{3,\bar{3}}$ \\
$\delta^{\rho}_{\bar{3},3} \bar{\delta}^{\pi}_{3,\bar{3}}$ & $\delta^{\pi}_{\bar{3},3} \bar{\delta}^{\pi}_{3,\bar{3}}$ & $\delta^{\rho}_{\bar{3},3} \bar{\delta}^{\pi}_{3,\bar{3}}$ \\
$\delta^{\rho}_{\bar{3},3} \bar{\delta}^{\rho}_{3,\bar{3}}$ & $\delta^{\rho}_{6,3} \bar{\delta}^{\rho}_{\bar{6},\bar{3}}$ & $\delta^{\rho}_{\bar{3},3} \bar{\delta}^{\rho_2}_{3,\bar{3}}$ \\
$D^{[000]}_{A_1} \bar{D}^\ast{}^{[000]}_{T_1}$ & $D^{[000]}_{A_1} \bar{D}^{[000]}_{A_1}$ & $D^{[000]}_{A_1} \bar{D}^\ast{}^{[000]}_{T_1}$ \\
$D^\ast{}^{[000]}_{T_1} \bar{D}^\ast{}^{[000]}_{T_1}$ & $D^{[100]}_{A_2} \bar{D}^{[100]}_{A_2}$ & $D^{[100]}_{A_2} \bar{D}^\ast{}^{[100]}_{A_1}$ \\
$\eta_c{}^{[000]}_{A_1} \rho^{[000]}_{T_1}$ & $D^\ast{}^{[000]}_{T_1} \bar{D}^\ast{}^{[000]}_{T_1}$ & $D^{[100]}_{A_2} \bar{D}^\ast{}^{[100]}_{E_2}$ \\
$J/\psi^{[000]}_{T_1} \pi^{[000]}_{A_1}$ & $\eta_c{}^{[000]}_{A_1} \pi^{[000]}_{A_1}$ & $J/\psi^{[000]}_{T_1} \rho^{[000]}_{T_1}$ \\
$J/\psi^{[100]}_{A_1} \pi^{[100]}_{A_2}$ & $\eta_c{}^{[100]}_{A_2} \pi^{[100]}_{A_2}$ & $J/\psi^{[100]}_{A_1} \rho^{[100]}_{E_2}$ \\
$J/\psi^{[100]}_{E_2} \pi^{[100]}_{A_2}$ & $J/\psi^{[000]}_{T_1} \rho^{[000]}_{T_1}$ & $J/\psi^{[100]}_{E_2} \rho^{[100]}_{A_1}$ \\
& & $J/\psi^{[100]}_{E_2} \rho^{[100]}_{E_2}$\\
&& $\chi_{c0}{}^{[100]}_{A_1} \pi^{[100]}_{A_2}$
\end{tabular}
\caption{The interpolating operators used to calculate the spectra in the isospin-1 hidden-charm sector. For the tetraquark operators, we use the notation $\delta_{R_1,F_1}^{\Gamma_1} \bar{\delta}_{R_2,F_2}^{\Gamma_2}$ where $R_1 (R_2)$ is the colour irrep, $\Gamma_1 (\Gamma_2)$ is the gamma matrix and $F_1 (F_2)$ is the flavour irrep of the diquark (anti-diquark) operator. For meson-meson operators, the optimised single-meson operators used are denoted by $M_\Lambda^{[n_1n_2n_3]}$, where $M$ indicates the meson, $\Lambda$ is the lattice irrep and $[n_1n_2n_3]$ is the momentum in units of $\frac{2\pi}{L}$. Note that all momenta related to $[n_1n_2n_3]$ by an allowed lattice rotation are summed over as shown in Equation~\eqref{eqn:mesonmeson}.}
\label{tab:hiddencharmops}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c|c|c || c |c}
\multicolumn{4}{c||}{$I=0$} & \multicolumn{2}{c}{$I=\frac{1}{2}$} \\
\hline
$A_1^{+}$ & $T_1^{+}$ & $E^{+}$ & $T_2^{+}$ & $A_1^+$ & $T_1^+$ \\
\hline
$\delta^{a_0}_{6,1} \bar{\delta}^{b_0}_{\bar{6},3}$ & $\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_0}_{3,3}$ & $\delta^{a_1}_{6,1} \bar{\delta}^{b_1}_{\bar{6},3}$ & $\delta^{a_1}_{6,1} \bar{\delta}^{b_1}_{\bar{6},3}$ & $\delta^{b_0}_{\bar{3},1} \bar{\delta}^{b_0}_{3,\bar{6}}$ & $\delta^{a_1}_{6,1} \bar{\delta}^{b_0}_{\bar{6},3}$\\
$\delta^{a_1}_{6,1} \bar{\delta}^{b_1}_{\bar{6},3}$ & $\delta^{\rho_2}_{\bar{3},1} \bar{\delta}^{\pi_2}_{3,3}$ & $\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_1}_{3,3}$ & $\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_1}_{3,3}$ & $\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_1}_{3,3}$ & $\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_0}_{3,3}$\\
$\delta^{b_0}_{\bar{3},1} \bar{\delta}^{a_0}_{3,3}$ &$\delta^{\rho}_{\bar{3},1} \bar{\delta}^{\pi_2}_{3,3}$ & $D^{[110]}_{A_2} D^\ast{}^{[110]}_{B_2} $ & $D^{[100]}_{A_2} D^\ast{}^{[100]}_{E_2}$ &$\delta^{\pi}_{6,1} \bar{\delta}^{\pi}_{\bar{6},\bar{6}}$ & $\delta^{\rho}_{\bar{3},1} \bar{\delta}^{\pi}_{3,3}$\\
$\delta^{b_1}_{\bar{3},1} \bar{\delta}^{a_1}_{3,3}$ &$\delta^{\rho}_{\bar{3},1} \bar{\delta}^{\pi}_{3,3}$ & & $D^{[110]}_{A_2} D^\ast{}^{[110]}_{A_1}$ & $\delta^{\rho}_{\bar{3},1} \bar{\delta}^{\rho}_{3,\bar{6}}$ &$\delta^{\rho}_{\bar{3},1} \bar{\delta}^{\rho}_{3,\bar{6}}$ \\
$D_{A_1}^{[000]} D(2S)_{A_1}^{[000]}$ & $D^{[000]}_{A_1} D^\ast{}^{[000]}_{T_1}$ & & $D^{[110]}_{A_2} D^\ast{}^{[110]}_{B_1}$ & $D^{[000]}_{A_1} D_s{}^{[000]}_{A_1}$ & $D^{[000]}_{A_1} D_s^\ast{}^{[000]}_{T_1}$\\
& $D^{[100]}_{A_2} D^\ast{}^{[100]}_{A_1}$ & & $D^\ast{}^{[100]}_{A_1} D^\ast{}^{[100]}_{E_2}$ & $D^{[100]}_{A_2} D_s{}^{[100]}_{A_2}$ & $D^\ast{}^{[000]}_{T_1} D_s{}^{[000]}_{A_1}$\\
& $D^{[100]}_{A_2} D^\ast{}^{[100]}_{E_2}$ && & $D^{[110]}_{A_2} D_s{}^{[110]}_{A_2}$ & $D^{[100]}_{A_2} D_s^\ast{}^{[100]}_{A_1}$\\
&$D^\ast{}^{[000]}_{T_1} D^\ast{}^{[000]}_{T_1}$ & & & $D^\ast{}^{[000]}_{T_1} D_s^\ast{}^{[000]}_{T_1}$ & $D^{[100]}_{A_2} D_s^\ast{}^{[100]}_{E_2}$\\
& $D^\ast{}^{[100]}_{A_1} D^\ast{}^{[100]}_{E_2}$ & & & $D^\ast{}^{[100]}_{A_1} D_s^\ast{}^{[100]}_{A_1}$ & $D^\ast{}^{[100]}_{A_1} D_s{}^{[100]}_{A_2}$\\
&$D^\ast{}^{[100]}_{E_2} D^\ast{}^{[100]}_{E_2}$ & & & $D^\ast{}^{[100]}_{E_2} D_s^\ast{}^{[100]}_{E_2}$ & $D^\ast{}^{[100]}_{E_2} D_s{}^{[100]}_{A_2}$\\
&& & && $D^\ast{}^{[000]}_{T_1} D_s^\ast{}^{[000]}_{T_1}$\\
&& & && $D^\ast{}^{[100]}_{A_1} D_s^\ast{}^{[100]}_{E_2}$\\
&&&&& $D^\ast{}^{[100]}_{E_2} D_s^\ast{}^{[100]}_{A_1}$ \\
&&&&& $D^\ast{}^{[100]}_{E_2} D_s^\ast{}^{[100]}_{E_2}$ \\
\end{tabular}
\caption{As Table~\ref{tab:hiddencharmops} but for the doubly-charmed sector with isospin-0 (left columns) and isospin-$\frac{1}{2}$ (right columns).}
\label{tab:doublecharmops}
\end{center}
\end{table}
\section{Summary}\label{sec:summary}
We have described the construction of a general class of operators resembling compact tetraquarks which have a range of different diquark--anti-diquark structures, transform irreducibly under the symmetries of the lattice and respect other relevant symmetries. As a first demonstration, these operators have been used in conjunction with meson-meson operators to compute correlation functions in the isospin-1 hidden-charm and doubly-charmed sectors using the distillation framework. Finite-volume spectra were extracted by analysing the correlation functions with the variational method. It was found that the addition of tetraquark operators to a basis of meson-meson operators did not significantly affect the finite-volume spectrum and, consequently, would not affect the scattering amplitudes. Because a diverse set of operators was used, for the first time we were able to reliably extract the multiple energy levels associated with degenerate non-interacting meson-meson levels. In all channels, we found that the number of energy levels is equal to the number of non-interacting meson-meson levels expected in the energy region considered and that the majority of energies were at most slightly shifted from the non-interacting levels. Hence, there are no strong indications that any bound states or narrow resonances are present.
This study sets out the groundwork and technology for future work. Calculations with larger lattice volumes and/or at non-zero overall momentum would be necessary to reliably determine scattering amplitudes via the L\"{u}scher method and so rigorously discern the bound-state and resonance content of the various channels. In addition, there is strong motivation to study how the results change when the light quark mass is varied. Calculations with lower quark masses would require larger volumes. As discussed in Ref.~\cite{Peardon:2009}, to maintain a given smearing radius in the distillation approach, the number of distillation vectors used must scale with the volume. Because tetraquark elementals are rank-4 tensors, increasing the number of distillation vectors vastly increases the computational cost and so, to make the calculations feasible, an extension of the distillation framework would be required, for example, a stochastic version~\cite{Morningstar:2011ka} or an alternative basis of vectors. Calculations with more distillation vectors would also enable the dependence of the results on the degree of tetraquark-operator smearing to be investigated further. Whilst the extracted spectra in the channels studied did not significantly change upon the addition of tetraquark operators to the operator basis, there are many other channels where tetraquarks have been suggested to exist and more detailed lattice QCD investigations are of interest, for example, in the isospin-0 hidden-charm sector, the open-charm sector, the bottom sector and the light scalar mesons.
\section{Discussion and comparison with previous studies}
\label{sec:interpretations}
In this section we discuss the results in the context of expectations from phenomenological models and compare with previous lattice calculations. In a simple one-gluon-exchange model of a diquark as described in Appendix~\ref{subsec:onegluon}, the two quarks interact via a colour-colour spin-spin interaction and the most attractive diquark and anti-diquark configurations have (colour irrep, spin) $= (\underline{\bar{3}},0)$ and $(\underline{3},0)$ respectively. Therefore, the most favourable tetraquark has $J^{P} = 0^{+}$ which subduces into $\Lambda^{P} = A_1^{+}$. However, a large quark mass suppresses the spin-spin interactions and so some models expect spin-1 diquark configurations involving heavy quarks to occur and form a tetraquark multiplet~\cite{Maiani:2004vq}. This multiplet contains tetraquarks with $J^P=1^+$ and $2^+$ which subduce into the $T_1^+$ irrep and the $E^+,T_2^+$ irreps respectively. Apart from its dependence on the quark masses, the one-gluon-exchange interaction does not depend on the flavours of the quarks. However, when the flavour irreps of the two quarks (antiquarks) are the same, Fermi symmetry requires the overall diquark (anti-diquark) configurations to be antisymmetric and this restricts the allowed structures.
In the hidden-charm isospin-1 sector, there are no identical quarks/antiquarks and so no constraints from symmetry on the allowed configurations. Models~\cite{Maiani:2004vq,Maiani:2014aja} suggest that the lightest tetraquark multiplet has $J^{PG} = 0^{+-}, 1^{++}, 1^{+-}, 2^{+-}$ and we have performed a thorough investigation in all these channels except for $J^{PG}=2^{+-}$. In the $\Lambda^{PG} = A_1^{+-}$ channel, expected to be the most attractive, and the $\Lambda^{PG} = T_1^{++}$ and $T_1^{+-}$ channels, there are no hints of a narrow state or any significant interactions in the computed spectra. No experimental candidate has been observed with $J^{PG}=0^{+-}$, nor is there currently any charged charmonium-like candidate with undetermined $J^{PG}$ that is light enough to be identified as the lowest-lying $J^{PG} = 0^{+-}$ tetraquark. The observed $Z_c^+(3900)$ has $J^{PG} = 1^{++}$~\cite{Olive:2016xmw} and has been suggested to be a candidate for a tetraquark. That we see no sign of it is consistent with previous lattice QCD calculations presented in Ref.~\cite{Prelovsek:2014swa} which also calculated the finite-volume spectrum using meson-meson and tetraquark operators. Our results are also consistent with other lattice QCD calculations \cite{Chen:2014afa,Chen:2015jwa,Ikeda:2016zwx} which do not find evidence of a bound state or narrow resonance in this channel. There is currently no well-established experimental candidate with $J^{PG} = 1^{+-}$, but if the $X(3872)$ is a tetraquark its isospin-1 partner would appear in this channel. Again, that we see no clear signal for a state here is consistent with previous lattice QCD calculations~\cite{Padmanath:2015era}. That study also found that the spectrum in this channel was insensitive to the addition of tetraquark-like operators to the operator basis.
In the doubly-charmed sector, possible diquark configurations are further constrained by Fermi statistics. The $(\underline{\bar{3}},0)$ and $(\underline{6},1)$ $cc$ diquarks are forbidden as they are symmetric under the interchange of quarks and only the $(\underline{\bar{3}},1)$ and $(\underline{6},0)$ diquarks are allowed. In the one-gluon exchange model given in Appendix~\ref{subsec:onegluon}, the colour-colour spin-spin interaction is repulsive for these allowed configurations and is least repulsive for $(\underline{\bar{3}},1)$. The attractive $(\underline{3},0)$ $\bar{q}\bar{q}$ anti-diquark configurations are required by Fermi symmetry to be antisymmetric in flavour, i.e.\ in $F=\underline{3}$, and the most attractive configuration has isospin $I=0$. Therefore, the most favourable tetraquark has $(I)J^P = (0)1^+$. Other attractive configurations include $(I)J^P = (0)0^+$, $(0)2^+$ containing a $(\underline{\bar{6}},1)$ anti-diquark and $(I)J^P = (\tfrac{1}{2})0^+, (\tfrac{1}{2})1^+$ from picking the $I=\tfrac{1}{2}$ components of the $(\underline{3},0)$ anti-diquark. However, no signs of these are seen in any of the computed spectra in the many doubly-charmed channels we studied. That we find no significant deviation between the spectra including and excluding tetraquark operators is consistent with the results presented in Ref.~\cite{Guerrieri:2014nxa} which computed the spectrum in the $(I)J^P = (0)1^+$ channel. That study used meson-meson and tetraquark operators but, because the operator basis was more restricted than ours, was unable to extract all of the multiple levels which correspond to degenerate meson-meson levels in the non-interacting limit. Computations presented in Ref.~\cite{Ikeda:2013vwa} find an attractive interaction in the $(I)J^P = (0)1^+$ channel using a less direct approach in which lattice QCD computations are used to extract a potential which is then used to determine scattering amplitudes. They do not find a bound state or resonance for a range of light quark masses corresponding to $m_\pi = 410$--$700$ MeV and conclude that this attractive interaction gets stronger with decreasing pion mass, further motivating studies of how the results vary as the light quark mass decreases towards the physical point.
In one-gluon exchange models, the colour-colour spin-spin interaction is always repulsive for the $cc$ diquark, but the repulsion is suppressed by the quark mass which suggests that doubly-bottomed tetraquarks may be more favourable than doubly-charmed tetraquarks. This is supported by lattice QCD calculations of finite-volume spectra using bases of meson-meson and tetraquark-like operators which suggest the existence of a $(I)J^P = (0)1^+$ doubly-bottomed tetraquark~\cite{Francis:2016hui}. Further support comes from lattice calculations of the potential between two static bottom quarks in the presence of two light antiquarks~\cite{Bicudo:2012qt,Bicudo:2015vta, Bicudo:2015kna, Peters:2016isf, Bicudo:2017szl}. This potential is found to lead to a bound state with $(I)J^P = (0)1^+$. Our doubly-charmed $(I)\Lambda^P = (0)T_1^+$ spectrum is not inconsistent with there being an attractive interaction although there were no obvious signs of a bound state in this channel. This is also consistent with recent phenomenological studies \cite{Karliner:2017qjm,Eichten:2017ffp,Czarnecki:2017vco} which suggest the doubly-bottom tetraquark is bound and the doubly-charmed tetraquark is unbound. Further calculations using bottom quarks and a L\"{u}scher analysis would be of interest. Computations involving the bottom quark with the fermion action used in this study are not straightforward since discretisation effects would be large. It is possible to implement the bottom quark with alternative actions such as Non-Relativistic QCD but this is beyond the scope of this study.
Overall, our study has improved on previous lattice QCD investigations of tetraquarks in two ways. The first is that we use a diverse set of tetraquark and meson-meson operators so that we can reliably obtain a large number of energy levels in each channel and, for the first time in a lattice QCD calculation, robustly extract the multiple energy levels associated with meson-meson energy levels which are degenerate in the non-interacting limit.\footnote{Recall that, as discussed in Section~\ref{subsec:mesonmeson}, these can occur when at least one of the mesons has non-zero spin.} This is important for future spectrum calculations involving the scattering of mesons with non-zero spin. The second is that we have computed spectra in a large number of channels proposed to contain the hypothetical lightest tetraquark multiplet -- some of these channels have not been studied before.
\section{Introduction}\label{sec:introduction}
There is an abundance of experimentally-observed mesons containing one or more heavy quarks~\cite{Olive:2016xmw} and these provide a window on a rich variety of strong-interaction physics. In particular, many of them, the so-called `$X,Y,Z$'s', are not compatible with quark model expectations. Clear examples of this incompatibility are the charged charmonium-like $Z_c^+(3900)$ and $Z_c^+(4430)$ which cannot be solely $c\bar{c}$ and must contain at least one additional quark-antiquark pair. One possible explanation for such exotic states is that they are tetraquarks, compact bound states of four quarks. Others, for example Ref.~\cite{Rupp:2016jdk}, suggest that compact tetraquarks are not required to explain the observed spectrum. Recent reviews of some of the $X,Y,Z$'s, with interpretations such as compact tetraquarks, molecular mesons, hybrid mesons and threshold cusps, can be found in Refs.~\cite{Brambilla:2014jmp,Chen:2016qju,Esposito:2016noz,Lebed:2016hpi,Guo:2017jvc, Olsen:2017bmm}. As well as hidden-charm $c\bar{c}q\bar{q}$ configurations, doubly-charmed $cc\bar{q}\bar{q}$ tetraquarks have been hypothesised~\cite{Moinester:1995fk, PhysRevD.71.014008, Hyodo:2012pm, Esposito:2013fma}, but there are currently no experimental candidates for these.
Quantum Chromodynamics (QCD) is the fundamental theory of the strong interaction and, in principle, should predict whether four-quark states exist and whether these are consistent with the expectations of tetraquark models or other interpretations. The only ab-initio framework for performing systematically-improvable calculations at the hadronic scale is lattice QCD: spacetime is discretised on a finite four-dimensional Euclidean lattice and Monte Carlo techniques are used to compute correlation functions from which observables can be extracted. The discrete spectrum of finite-volume energy eigenstates is obtained from calculations of two-point correlation functions involving interpolating operators which have the required quantum numbers. Hadron-hadron scattering amplitudes, and hence the properties of resonances and other scattering phenomena, can be calculated via the L\"uscher formalism~\cite{Luscher:1990ux,Luscher:1991cf} which relates finite-volume spectra to infinite-volume scattering amplitudes. There is currently no extension of this formalism to channels involving three or more hadrons that is practical to use in calculations, but this is an active area where progress is being made. A more in-depth review of the L\"uscher formalism and a discussion of its applications and extensions can be found in Ref.~\cite{Briceno:2017max}.
The Hadron Spectrum Collaboration has developed a range of interpolating operators resembling quark-antiquark~\cite{Dudek:2010,Thomas:2012} and meson-meson~\cite{Dudek:2012gj,Dudek:2012xn} structures which transform irreducibly under the symmetries of the lattice and efficiently interpolate the states of interest. These operators have proven very successful in recent computations of finite-volume spectra which are then used to determine scattering amplitudes~\cite{Dudek:2012gj, Dudek:2012xn,Dudek:2014qha, Wilson:2014cna, Wilson:2015dqa, Dudek:2016cru, Briceno:2016mjc, Moir:2016srx, Briceno:2017qmb}. As has been emphasised in studies such as those in Refs.~\cite{Dudek:2012xn,Wilson:2015dqa}, not including a sufficiently diverse set of relevant operators in the calculations could lead to an unreliable determination of finite-volume spectra and, in turn, incorrect scattering amplitudes. Hence, it is desirable to consider operators with other potentially-relevant colour-flavour-spatial-spin structures, resembling compact tetraquarks, and investigate whether their inclusion has any impact on the extracted spectra. The main goal of this work is to develop a very general class of operators with compact tetraquark structures, which transform irreducibly under the symmetries of the lattice and which respect other relevant symmetries. We will test these constructions in lattice QCD computations of spectra in hidden-charm and doubly-charmed channels. These include isospin-1 $J^{PG} = 0^{+-}, 1^{++}, 1^{+-}$ hidden-charm spectra\footnote{It is important to emphasise that $G$ refers to $G$-parity since $C$-parity is not a good quantum number for charged states. Note that $C = -G$ in isospin-1 for the neutral component.} which are relevant for exotic charged charmonium-like states and where the lightest tetraquark multiplet is expected to appear~\cite{Maiani:2004vq}, and isospin-0 $J^P = 0^+, 1^+, 2^+$ and isospin-$\tfrac{1}{2}$ strange $J^P = 0^+, 1^+$ exotic doubly-charmed spectra.
There have been a number of recent lattice QCD studies of tetraquarks containing one or more heavy quarks. Computations have not found any clear indication for the presence of hidden- or open-charm tetraquarks~\cite{Ikeda:2013vwa,Prelovsek:2014swa,Guerrieri:2014nxa,Padmanath:2015era}. Other recent lattice QCD calculations relevant for the channels we study can be found in Refs.~\cite{Chen:2014afa,Chen:2015jwa,Ikeda:2016zwx}. In the bottom sector, there is some evidence supporting the existence of a doubly-bottom $(I)J^P = (0)1^+$ tetraquark where finite-volume spectrum calculations find an energy level below the relevant meson-meson thresholds~\cite{Francis:2016hui}. In addition, a number of computations of the potential between two static quarks in the presence of two light quarks~\cite{Bicudo:2012qt, Brown:2012tm, Bicudo:2015vta, Bicudo:2015kna, Peters:2016isf, Bicudo:2016ooe, Bicudo:2017szl} have found evidence for a bound state~\cite{Bicudo:2012qt,Bicudo:2015vta, Peters:2016isf, Bicudo:2016ooe, Bicudo:2017szl}. We discuss these studies further in the context of our results in Section~\ref{sec:interpretations}.
The structure of the rest of this paper is as follows. We begin in Section~\ref{sec:tetraquark} by describing the construction of a general class of tetraquark operators which transform irreducibly under the symmetries of the lattice. In Section~\ref{sec:methodology}, the methodology for calculating the finite-volume spectrum with large bases of operators in the distillation framework is presented. The resulting spectra in the hidden-charm and doubly-charmed sectors are presented in Section~\ref{sec:results}. Some systematic effects and the stability of the extracted spectra are investigated in Section~\ref{sec:stability}. We discuss the results in light of phenomenological and other lattice QCD studies in Section~\ref{sec:interpretations} before giving a summary in Section~\ref{sec:summary}. Appendices present some additional properties of diquark and tetraquark operators, give quark model interpretations of the diquark structures and list the operators used to calculate the finite-volume spectra.
\section{Results} \label{sec:results}
As a first application of these tetraquark operator constructions, we perform calculations on an anisotropic lattice of volume $(L/a_s )^3 \times (T/a_t) = 16^3 \times 128$ where $L$ is the spatial extent of the lattice, $T$ is the temporal extent, $a_s \approx 0.12$ fm is the spatial lattice spacing and $a_t$ is the temporal lattice spacing such that the anisotropy $\xi = \frac{a_s}{a_t} \approx 3.5$. We use 478 configurations generated from a tree-level Symanzik-improved gauge action and a Clover fermion action with $N_f = 2+1$ flavours of dynamical quarks. The mass parameter of the two degenerate light quarks is such that $m_\pi = 391$ MeV while the strange quark is tuned so that its mass approximates the physical value~\cite{Edwards:2008,Lin:2009}. The charm quark is quenched and its Clover mass parameter is tuned to reproduce the physical $\eta_c$ meson mass \cite{Liu:2012}. When quoting results in physical units, we set the scale using the $\Omega$ baryon mass measured on this lattice, $a_t m^{\text{latt.}}_{\Omega} = 0.2951(22)$ \cite{Edwards:2011}, and the experimental mass $m_{\Omega}^{\text{exp.}} = 1672.45(29)$ MeV \cite{Olive:2016xmw}, giving $a_t^{-1} = \frac{m_{\Omega}^{\text{exp.}}}{a_t m_{\Omega}^{\text{latt.}}} = 5667$ MeV. Using several lattice volumes, the anisotropy was measured to be $\xi_{\pi} = 3.444(6)$ from the dispersion relation of the pion \cite{Dudek:2012gj} and $\xi_{D} = 3.454(6)$ from the $D$ \cite{Moir:2013ub}. Using only the $16^3$ volume, we find $\xi_{\eta_c} = 3.484(2)$ from the $\eta_c$. For the purposes of this study, where the anisotropy is only used to compute the location of non-interacting meson-meson energy levels, we will use the value of $\xi_{\pi}$.
To indicate the location of non-interacting meson-meson energy levels on plots, for stable mesons we use the relativistic dispersion relation, $E = \sqrt{m_1^2 + \vec{p}_1^{\,2}} + \sqrt{m_2^2 + \vec{p}_2^{\,2}}$, as discussed in Section~\ref{subsec:mesonmeson}. The masses of relevant stable mesons on this lattice ensemble are given in Table~\ref{tab:mesons} and the variationally-optimised operators for these mesons, $\Omega^M_{\Lambda, \mu}$, are constructed from linear combinations of single-meson operators as discussed above. For the $\rho$ meson, which is unstable on this lattice, we compute `non-interacting $M$-$\rho$ energy levels', where $M$ is a stable meson, using the relativistic dispersion relation for $M$ and the finite-volume $\rho$ energy levels obtained on this ensemble, $\rho_\Lambda^{\vec{p}}$, as given in Table~\ref{tab:mesons}, i.e.~${E = \sqrt{m_M^2 + \vec{p}_M^{\,2}} + \rho_\Lambda^{\vec{p}}}$. The optimised $\rho$ operators, $\Omega^\rho_{\Lambda, \mu}$, are linear combinations of meson-meson and single-meson operators~\cite{Dudek:2012xn}. It should be emphasised that in this study the only uses of these non-interacting energy levels are to show their location on plots and as an indication for which meson-meson operators should be included in the operator basis.
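These non-interacting levels are straightforward to evaluate. The short sketch below (in Python; an illustrative script, not part of the analysis code) computes non-interacting $D D^\ast$-type levels from the dispersion relation using the masses in Table~\ref{tab:mesons}, the scale setting $a_t^{-1} = 5667$ MeV and the anisotropy $\xi_\pi = 3.444$.
\begin{verbatim}
# Illustrative evaluation of non-interacting two-meson energy levels,
# E = sqrt(m1^2 + p^2) + sqrt(m2^2 + p^2), for back-to-back momenta
# p = (2 pi / L) n on the (L/a_s) = 16 anisotropic lattice.
import math

AT_INV = 5667.0   # temporal scale a_t^{-1} in MeV
XI     = 3.444    # anisotropy a_s/a_t from the pion dispersion relation
L_S    = 16       # spatial extent in units of a_s

def p_sq(n_sq):
    """|p|^2 in MeV^2 for n_sq = nx^2+ny^2+nz^2 (using 1/a_s = a_t^{-1}/xi)."""
    return (2.0 * math.pi * AT_INV / (XI * L_S)) ** 2 * n_sq

def level(m1, m2, n_sq):
    """Non-interacting energy of two stable mesons with momenta n and -n."""
    return math.sqrt(m1 ** 2 + p_sq(n_sq)) + math.sqrt(m2 ** 2 + p_sq(n_sq))

m_D, m_Dstar = 1885.1, 2008.9   # lattice masses in MeV (Table tab:mesons)
for n_sq in (0, 1, 2):
    print("D D* level, n^2 =", n_sq, ":",
          round(level(m_D, m_Dstar, n_sq)), "MeV")
\end{verbatim}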
The meson-meson and tetraquark operators used to calculate the spectrum in each channel are listed in Appendix~\ref{app:ops}. For this lattice volume, we use $\tilde{N}_{\text{vecs}} = 24$ for tetraquark operators and $N_{\text{vecs}} = 64$ for other operators unless stated otherwise. The choice of meson-meson operators has already been discussed in Section~\ref{subsec:mesonmeson}. For the tetraquark operators, ideally all relevant operators of the form described in Section~\ref{sec:tetraquark} would be included in the operator basis, but the computational cost can then become too high for the calculations to be practical for the purposes of this study. In the doubly-charmed isospin-0 $\Lambda^P = A_1^+, E^+, T_2^+$ channels we are able to include all the tetraquark operator constructions allowed by Fermi symmetry. For the remaining channels, we use a subset of tetraquark operators that are expected to overlap onto lower-lying states. Appendices~\ref{subsec:onegluon} and \ref{subsec:nonrelativistic} describe how a non-relativistic model can provide a guide to which diquark/anti-diquark configurations are expected to overlap most efficiently onto lower-lying tetraquarks. In short, tetraquark operators containing a diquark (anti-diquark) with a $\gamma^5$ or $\gamma^0 \gamma^5$ gamma matrix structure and in colour irrep $\underline{\bar{3}}$ $(\underline{3})$ are expected to overlap most efficiently onto a ground-state tetraquark,\footnote{Using Jaffe's terminology, this configuration is commonly known as the `good' diquark.} and at the very least our basis should include such operators. However, it was found that the $\gamma^5$ and $\gamma^0 \gamma^5$ structures do not overlap onto the energy eigenstates in sufficiently distinct ways (the correlation matrix contains approximately linearly-dependent rows/columns). Therefore, instead of having redundant operators, we included a selection of other tetraquark operators to give more diverse bases.
\begin{table}
\begin{center}
\begin{tabular}{c|c}
Meson & Mass (MeV)\\
\hline
$\pi$ & 391.4(7) \\
$D$ & 1885.1(4) \\
$D^\ast$ & 2008.9(6) \\
$D_s$ & 1950.9(3) \\
$D_s^\ast$ & 2071.2(5) \\
$\eta_c$ & 2964.4(2) \\
$J/\psi$ & 3044.7(2) \\
$\chi_{c0}$ & 3426.3(6) \\
\end{tabular}
\hspace{1cm}
\begin{tabular}{c|c}
& Energy (MeV) \\
\hline
$\rho^{[000]}_{T_1}$ & 890(5)\\
$\rho^{[100]}_{A_1}$ & 1027(4)\\
$\rho^{[100]}_{E_2}$ & 1089(5)\\
\end{tabular}
\caption{Ground state masses of stable mesons (left) and, for the $\rho$ meson, the energies of the lowest-lying finite-volume levels in lattice irrep $\Lambda$ with momentum $\vec{p}$, denoted $\rho_\Lambda^{\vec{p}}$ (right), as measured on our ensemble \cite{Dudek:2012gj, Dudek:2012xn, Liu:2012, Moir:2013ub}. Only statistical uncertainties are quoted. }
\label{tab:mesons}
\end{center}
\end{table}
We now present computed spectra for a range of channels, beginning with a detailed discussion of the $\Lambda^{PG} = T_1^{++}$ irrep in the isospin-1 hidden-charm sector before presenting other isospin-1 hidden-charm results and then moving to the doubly-charmed sector.
\subsection{Isospin-1 hidden-charm sector}
\label{subsec:hiddencharm}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{T1pP2I2S0corr.pdf}
\caption{$C_{ii}(t) e^{(m_{J/\psi} + m_{\pi})t}$ in arbitrary units for the tetraquark operators given in the legend in the $\Lambda^{PG} = T_1^{++}$ isospin-1 hidden-charm channel. Error bars are smaller than the size of the points shown. }
\label{fig:correlator}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.85\textwidth]{T1pP2I2S0heatmap.pdf}
\caption{Normalised magnitude of elements in the matrix of two-point correlation functions, $|C_{ij}|/\sqrt{C_{ii}C_{jj}}$, on timeslice 3 in the $\Lambda^{PG} = T_1^{++}$ isospin-1 hidden-charm channel. The first three operators are tetraquark operators and the remaining are meson-meson operators ordered as in Table~\ref{tab:hiddencharmops}.}
\label{fig:heatmap}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.86\textwidth]{combined_T1p_plot.pdf}
\caption{The central plot shows the spectrum in the hidden-charm isospin-1 $\Lambda^{PG} = T_1^{++}$ channel calculated using the basis of meson-meson and tetraquark operators given in Table~\ref{tab:hiddencharmops} of Appendix~\ref{app:ops}. Boxes give the computed energies with their vertical extent representing the one-sigma statistical uncertainty on either side of the mean and, solely as a visual aid, they are coloured according to their dominant meson-meson operator overlap. Horizontal lines denote the non-interacting meson-meson energy levels with an adjacent number indicating the degeneracy if it is larger than one.
The corresponding principal correlators are shown on the left ordered by increasing energy from bottom to top: the data (points) and fits (curves) for $t_0 = 9$ are plotted as $\lambda^{\mathfrak{n}}(t,t_0) e^{E_\mathfrak{n}(t-t_0)}$ showing the central values and one sigma statistical uncertainties; in each case the fit is reasonable with $\chi^2/N_\mathrm{d.o.f} \sim 1$.
The histograms on the right show the operator-state overlaps, $Z_i^\mathfrak{n} = \langle \mathfrak{n} | \mathcal{O}_i^\dagger | 0 \rangle$, for each energy level. The operators are given in the legend and the overlaps are normalised so that the largest value for one given operator across all energy levels is equal to one. }
\label{fig:combined}
\end{center}
\end{figure}
As an illustration of the results, we first discuss some features of the spectrum computed in the $\Lambda^{PG} = T_1^{++}$ irrep\footnote{The lowest spin in this irrep is $J^{PG} = 1^{++}$ and note that $C=-G$ in isospin-1 for the neutral component.} of the isospin-1 hidden-charm sector (with flavour content $c\bar{c} l \bar{l}$ where the light quark and antiquark are coupled to isospin-1). The basis of operators used is given in Table~\ref{tab:hiddencharmops} of Appendix~\ref{app:ops}. Note that, because we do not include contributions arising from a charm quark and antiquark annihilating, our operator bases do not contain any single-meson operators.
The diagonal elements of the matrix of correlators for the three tetraquark operators are shown in Figure~\ref{fig:correlator} -- signals are seen to be precise and significantly non-zero. In Figure~\ref{fig:heatmap} we present the two-point correlator matrix on timeslice $3$. This shows that some of the off-diagonal elements between tetraquark and meson-meson operators are non-zero.
Applying the variational method, we obtain the principal correlators for the lowest six energy levels, shown in Figure~\ref{fig:combined}. We fit these to Equation~\eqref{eqn:fit} and in each case find a reasonable description with $\chi^2/N_\mathrm{d.o.f} \sim 1$ -- the resulting spectrum is given in the figure. It can be seen that the number of energy levels in the computed spectrum is equal to the number of non-interacting meson-meson levels expected in the energy region considered and they all lie close to the non-interacting levels. As discussed in Section~\ref{subsec:mesonmeson} and indicated in the figure, some of the non-interacting meson-meson energy levels are degenerate. Because our basis of operators has sufficiently different structures, we are able to cleanly extract these nearly-degenerate energy levels.
Normalised operator-state overlaps are also shown in Figure~\ref{fig:combined} and we see that every energy level has a dominant overlap onto one meson-meson operator. Additionally, the third and fourth levels have dominant overlaps onto two linearly independent $J/\psi\pi$ operators -- this is not surprising since around this energy there are two degenerate non-interacting levels. We cannot draw strong quantitative conclusions about the tetraquark operator overlaps because the absolute normalisations are somewhat arbitrary and renormalisation factors would be needed to relate the overlaps to physical quantities, but we do see that most states have some overlap onto one or more tetraquark operators.
For comparison, Figure~\ref{fig:hiddencharm} (left panel) shows the $\Lambda^{PG} = T_1^{++}$ spectrum calculated with the full basis of meson-meson and tetraquark operators, with \emph{only meson-meson operators} and with \emph{only tetraquark operators}. No significant deviations are observed between the spectrum computed using the full basis and that computed using the basis of only meson-meson operators. If only tetraquark operators are used, some poorly determined energy levels are found but the spectrum is not reliably extracted and this suggests that these tetraquark operators alone do not constitute a sufficient basis of operators.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{ccbarplots.pdf}
\caption{As in the spectrum plot of Figure~\ref{fig:combined} but showing the spectra for the isospin-1 hidden-charm sector with $\Lambda^{PG} = T_1^{++}, A_1^{+-},T_1^{+-}$. Within each plot, the left, middle and right column shows the spectrum determined using the full basis of meson-meson and tetraquark operators, only meson-meson operators and only tetraquark operators respectively.}
\label{fig:hiddencharm}
\end{center}
\end{figure}
Moving to other channels in the isospin-1 hidden-charm sector, extracted spectra for the $\Lambda^{PG} = A_1^{+-},T_1^{+-}$ irreps\footnote{The lowest spin in each of these irreps is respectively $J^{PG} = 0^{+-}, 1^{+-}$.} are shown in Figure~\ref{fig:hiddencharm}. In general, a similar pattern of features is seen as was found for $\Lambda^{PG} = T_1^{++}$: there are no significant deviations between the spectra calculated using the full basis and using only meson-meson operators, the spectrum is not reliably determined if only tetraquark operators are used, and with a full basis of operators the number of energy levels is equal to the number of non-interacting meson-meson levels expected and they lie close to the non-interacting levels. Furthermore, the operator-state overlaps follow the same qualitative pattern as shown in Figure~\ref{fig:combined}.
Previous studies have shown that, when a narrow resonance is present in elastic scattering, an `extra' finite-volume energy level is observed in that energy region; no evidence for such an extra level is seen in our spectra. The results suggest that there are only weak hadron-hadron interactions and no strong indications of a bound state or narrow resonance in these channels. However, the situation is not as straightforward when one considers coupled-channel scattering or broad resonances~\cite{Dudek:2016cru,Briceno:2016mjc}. To draw rigorous conclusions and determine whether there are bound states or resonances present, a L\"{u}scher analysis, where the finite-volume spectra are related to the scattering amplitudes, is necessary. To reliably constrain the scattering amplitudes, this would require calculations at non-zero momentum and/or different volumes which is beyond the scope of this first study. An important conclusion is that the addition of a class of operators resembling compact tetraquarks has little consequence on the finite-volume spectrum and, in turn, the scattering amplitudes.
\subsection{Doubly-charmed sector}
\label{subsec:results:doubly}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\textwidth]{ccplots.pdf}
\caption{As Figure~\ref{fig:hiddencharm} but for the isospin-0 doubly-charmed sector with quark flavour $cc\bar{l}\bar{l}$. Dashed lines indicate kinematic thresholds where a non-interacting level is not expected. Dotted lines indicate non-interacting meson-meson levels where the corresponding operators have not been included in the operator basis. Ellipses indicate that additional energy levels have been extracted in/above these regions but we have not plotted them as we have not included all relevant meson-meson operators in these energy regions.}
\label{fig:doublycharmed1}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\textwidth]{ccqsplots.pdf}
\caption{As Figure~\ref{fig:doublycharmed1} but for the isospin-$\frac{1}{2}$ doubly-charmed sector with quark flavour $cc\bar{l}\bar{s}$.}
\label{fig:doublycharmed2}
\end{center}
\end{figure}
Turning to the doubly-charmed sector, Figure~\ref{fig:doublycharmed1} shows spectra for flavour content $cc\bar{l}\bar{l}$ in the $\Lambda^P = T_1^+, E^+, T_2^+$ isospin-0 channels\footnote{The lowest spin $J^P = 1^+$ appears in $T_1^+$ and the lowest spin $J^P = 2^+$ appears in $E^+, T_2^+$.} and Figure~\ref{fig:doublycharmed2} shows spectra for flavour $cc \bar{l}\bar{s}$ with isospin-$\frac{1}{2}$ in the irreps $\Lambda^P = A_1^+, T_1^+$. It can again be seen that there are no significant deviations between the spectra including and excluding tetraquark operators, and the spectra cannot be reliably extracted using only tetraquark operators. Using the full basis of operators (see Table \ref{tab:doublecharmops}), the number of energy levels in each spectrum is equal to the number of expected non-interacting meson-meson energy levels in the relevant energy region. Because the basis of operators used has sufficiently diverse structures, we are able to extract many nearly-degenerate energy levels. In addition, we find that every energy level has a dominant meson-meson operator overlap. As in the results of the hidden-charm sector, we emphasise that the addition of a class of operators resembling compact tetraquarks does not significantly alter the finite-volume spectrum extracted.
The lowest-lying $DD$ and $D^\ast D^\ast$ levels in $s$-wave are forbidden in the $J^P=0^+,2^+$ isospin-0 doubly-charmed channels: the flavour wavefunction is antisymmetric in isospin-0 whilst the spin and spatial wavefunctions are symmetric, giving an overall antisymmetric wavefunction which is forbidden by Bose symmetry. These channels are particularly appealing places to look for a tetraquark because if a low-lying state exists, it would lie far below the allowed non-interacting meson-meson energy levels and would be easily identified. Additionally, a low-lying $J^P=2^+$ stable tetraquark would subduce into both of the irreps $\Lambda^P = E^+, T_2^+$ and so appear with little ambiguity -- no such energy levels are seen in Figure~\ref{fig:doublycharmed1}. A $J^P=0^+$ tetraquark would appear in the $\Lambda^P = A_1^+$ irrep and, although a plot is not shown, we calculated the spectrum in this channel with the operators given in Table~\ref{tab:doublecharmops}. The first allowed non-interacting meson-meson energy level is $D \, D(2S)$, where $D(2S)$ is the first radial excitation of the $D$ meson.\footnote{We use a single-meson operator with structure $[\gamma^5 \overleftrightarrow{D} \overleftrightarrow{D}]$, where the two derivatives are coupled to $J=0$, for the $D(2S)$ rather than an optimised operator.} We do not find any energy levels below the $DD(2S)$ threshold at $\sim 4500$ MeV.
We now draw particular attention to the spectrum in the $\Lambda^P = T_1^+$ isospin-0 channel where the non-interacting $DD^\ast$ and $D^\ast D^\ast$ levels can have degeneracy two. We have reliably extracted two energy levels (the third and fourth) that have dominant overlap onto the two relevant $DD^\ast$ operators and two energy levels (fifth and sixth) that have dominant overlap onto the two relevant $D^\ast D^\ast$ operators. It can be seen that each pair of energy levels is non-degenerate which suggests there is some interaction. In order to quantify this, a further analysis requiring computations on different volumes and overall non-zero momentum is needed to relate the finite-volume spectrum to the scattering amplitudes via the L\"uscher formalism. It is also important to stress that a reliable determination of the coupled $s$ and $d$-wave scattering amplitudes in this channel depends on our ability to robustly extract these multiple energy levels.
\section{Calculation of the spectrum}\label{sec:methodology}
To determine the spectrum in each quantum-number channel, we calculate a matrix of two-point correlation functions using a basis of interpolating operators with appropriate quantum numbers,
\begin{equation}
C_{ij}(t) = \langle 0 | \mathcal{O}_{i}^{}(t) \mathcal{O}_{j}^{\dagger}(0) | 0 \rangle \, ,
\label{eqn:corr}
\end{equation}
between a creation operator $\mathcal{O}_j^\dagger(0)$ at the source with Euclidean time 0 and an annihilation operator $\mathcal{O}_i(t)$ at the sink with Euclidean time $t$. Inserting a complete set of energy eigenstates into this expression gives,
\begin{equation}
C_{ij}(t) = \sum_\mathfrak{n} \frac{1}{2E_\mathfrak{n}} Z_i^{\mathfrak{n}\ast} Z_j^\mathfrak{n} e^{-E_\mathfrak{n} t} \, ,
\label{eqn:2ptspectra}
\end{equation}
where $|\mathfrak{n}\rangle$ is an energy eigenstate with energy $E_\mathfrak{n}$ and the operator-state matrix elements, $Z_i^\mathfrak{n} \equiv \langle \mathfrak{n} | \mathcal{O}_i^\dagger(0) | 0 \rangle$, are also referred to as \emph{overlaps}. Note that in a finite volume, the set of energy eigenstates is discrete. The spectrum can be extracted by utilising the variational method \cite{Michael:1985, Luscher:1990, Blossier:2009kd}: a generalised eigenvalue problem $C_{ij}(t)v^\mathfrak{n}_j = \lambda^\mathfrak{n}(t,t_0)C_{ij}(t_0)v^\mathfrak{n}_j$ is solved for some appropriate choice of $t_0$, the eigenvalues $\lambda^\mathfrak{n}$, known as \emph{principal correlators}, are related to $E_{\mathfrak{n}}$ and the eigenvectors $v_i^\mathfrak{n}$ are related to the overlaps. In our implementation of the variational method described in Refs.~\cite{Dudek:2007,Dudek:2010}, we fit the principal correlators to the function,
\begin{equation}
\lambda^\mathfrak{n}(t,t_0) = (1-A_\mathfrak{n})e^{-E_\mathfrak{n}(t-t_0)} + A_\mathfrak{n}e^{-E_\mathfrak{n}'(t-t_0)} \, ,
\label{eqn:fit}
\end{equation}
where the fit parameters are $E_\mathfrak{n}$, $E_\mathfrak{n}'$, and $A_\mathfrak{n}$. The second exponential is used to account for possible contamination due to excited states. The eigenvectors can be used to construct \emph{optimised operators}, $\Omega_\mathfrak{n}^\dagger \sim \sum_i v^\mathfrak{n}_i \mathcal{O}_i^\dagger$~\cite{Dudek:2012gj}, the optimal linear combination of the operators that interpolates state $\mathfrak{n}$. As will be discussed later, these optimised operators are useful for the construction of operators resembling pairs of mesons.
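As an illustration of this procedure, a minimal sketch of the generalised eigenvalue problem and the principal-correlator fit (schematic Python, not the analysis code used here) is:
\begin{verbatim}
# Sketch of the variational (GEVP) analysis: given a Hermitian correlator
# matrix C[t] of shape (Nt, Nops, Nops), solve
#   C(t) v = lambda(t, t0) C(t0) v
# and fit each principal correlator to the two-exponential form.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import curve_fit

def principal_correlators(C, t0):
    """Return lambda^n(t, t0), largest first (largest <-> lowest energy)."""
    nt, nops, _ = C.shape
    lam = np.empty((nt, nops))
    for t in range(nt):
        evals = eigh(C[t], C[t0], eigvals_only=True)  # generalised problem
        lam[t] = evals[::-1]                          # eigh returns ascending
    return lam

def fit_energy(lam_n, t0, tmin, tmax):
    """Fit one principal correlator; returns E_n in temporal lattice units."""
    def model(t, E, Eprime, A):
        return (1.0 - A) * np.exp(-E * (t - t0)) \
               + A * np.exp(-Eprime * (t - t0))
    ts = np.arange(tmin, tmax)
    popt, _ = curve_fit(model, ts, lam_n[tmin:tmax], p0=[0.7, 1.4, 0.1])
    return popt[0]
\end{verbatim}
In practice the eigenvectors are also tracked across timeslices to identify states consistently and to form the optimised operators $\Omega_\mathfrak{n}$.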
In principle, any operator can interpolate every state with the same quantum numbers according to Equation~\eqref{eqn:2ptspectra} and one can use a basis containing the tetraquark operators described above to fully calculate the finite-volume spectrum. However, in practice, previous studies such as Refs.~\cite{Dudek:2012xn,Wilson:2015dqa} have highlighted that a sufficiently diverse set of interpolating operators must be used if the finite-volume spectrum is to be extracted reliably. Even in the absence of interactions, the spectra we study will contain meson-meson-like states or admixtures of such states (and other multi-hadron combinations at higher energies). It has been shown in previous work~\cite{Dudek:2012gj,Dudek:2012xn,Dudek:2014qha,Wilson:2014cna,Wilson:2015dqa,Moir:2016srx,Dudek:2016cru,Briceno:2016mjc, Briceno:2017qmb} that such states can be efficiently interpolated by including meson-meson-like operators. Therefore, to efficiently and reliably extract the finite-volume spectra, our operator bases will contain operators of meson-meson and tetraquark structure.
\subsection{Meson-meson operators}
\label{subsec:mesonmeson}
In this section we briefly review how operators with a meson-meson-like structure can be constructed from the product of two single-meson-like operators -- further details are given in Refs.~\cite{Dudek:2012gj,Dudek:2012xn}.
Following Refs.~\cite{Dudek:2010,Thomas:2012}, fermion-bilinear operators of continuum spin $J$ and momentum $\vec{p}$ are constructed as,
\begin{equation}
\mathcal{O}^{J,m}(\vec{p},t) = \sum_{\vec{x}} e^{i\vec{p} \cdot \vec{x}} \ \bar{q}(\vec{x},t) [\Gamma \overleftrightarrow{D} \dots \overleftrightarrow{D}]^{J,m} q(\vec{x},t) \, ,
\end{equation}
where we have suppressed colour, flavour and spinor indices for clarity. The quark and antiquark fields are distillation-smeared as discussed below, the quark and antiquark flavour representations are chosen and coupled to give the desired flavour quantum numbers, and $[\Gamma \overleftrightarrow{D} \dots \overleftrightarrow{D}]^{J,m}$ consists of a Dirac gamma matrix $\Gamma$ and gauge-covariant derivatives $\overleftrightarrow{D}$ coupled together to give spin $J$. In the case when $\vec{p} = 0$, $m$ refers to the $J_z$ component, whilst when $\vec{p} \neq 0$ we construct helicity operators using Wigner-D matrices as described in Ref.~\cite{Thomas:2012} and $m$ then refers to the helicity. Since we are working in a finite cubic spatial volume of extent $L$ with periodic boundary conditions, the momentum is quantised to ${\vec{p} = \frac{2\pi}{L}(n_x,n_y,n_z)}$ where $(n_x,n_y,n_z)$ is a triplet of integers -- we use $[n_x n_y n_z]$ as a shorthand notation to denote $\vec{p}$. As for tetraquark operators, lattice operators which transform in lattice irrep $\Lambda$ (row $\mu$) are constructed by subducing the continuum operators to obtain $\mathcal{O}^{[J]}_{\Lambda, \mu}(\vec{p},t) = \sum_m S^{J,m}_{\Lambda,\mu} \mathcal{O}^{J,m}(\vec{p},t)$. We refer to these as \emph{single-meson operators}.
\emph{Meson-meson operators}~\cite{Dudek:2012gj,Dudek:2012xn} are built from products of two single-meson operators,
\begin{equation}
\mathcal{O}_{\Lambda,\mu}^P(\vec{p}=\vec{0},t) = \sum_{\mu_1,\mu_2,\hat{q}} \mathbb{C}(\vec{p}=\vec{0},\Lambda^P,\mu; \vec{q} , \Lambda_1 , \mu_1; -\vec{q} , \Lambda_2 , \mu_2)
\; \Omega^{M_1}_{\Lambda_1,\mu_1}(\vec{q},t) \; \Omega^{M_2}_{\Lambda_2,\mu_2}(-\vec{q},t) \, ,
\label{eqn:mesonmeson}
\end{equation}
where we have restricted to overall zero momentum, $\Omega^{M_i}_{\Lambda_i, \mu_i}$ is an optimised operator for interpolating meson $M_i$ transforming in the lattice irrep $\Lambda_i$ (row $\mu_i$) and the sum runs over the lattice irrep rows and all momentum directions $\hat{q}$ related by an allowed lattice rotation to couple $\Lambda_1 (\vec{q}) \otimes \Lambda_2 (-\vec{q}) \to \Lambda^P (\vec{p} = \vec{0})$ using generalised Clebsch-Gordan coefficients $\mathbb{C}$. The construction of these meson-meson operators follows the methodology given in Refs.~\cite{Dudek:2012gj,Dudek:2012xn} and a more detailed discussion of operators containing mesons with non-zero spin will be presented in a forthcoming publication. Analogous constructions can be used for meson-meson operators with overall non-zero momentum, but in this study we only calculate spectra at overall zero momentum.
A guide to which meson-meson operators should be included in the basis is given by the non-interacting meson-meson energy levels in the energy regions we consider. These are calculated from the relativistic dispersion relation $E = \sqrt{ m_1^2 + \vec{p}_1^{\,2}} + \sqrt{m_2^2 + \vec{p}_2^{\,2}}$ for stable mesons. In cases when a single meson has non-zero spin, there can be multiple ways to couple the orbital and spin angular momenta together to a given meson-meson $J^P$ which subduce into the same irrep, leading to degenerate levels in the non-interacting limit. For example, a pseudoscalar and vector can be coupled to $J^P = 1^+$ in either $s$-wave or $d$-wave. To see how this manifests on the lattice, consider the pseudoscalar with $\vec{p} = [100]$ and the vector with $\vec{p} = [-100]$ coupled to the lattice irrep $\Lambda^P = T_1^+$. The mesons transform in the irreps of the little group $\text{Dic}_4$: the pseudoscalar subduces into the $A_2$ irrep while the helicity-0 and helicity-1 components of the vector subduce into respectively the $A_1$ and $E_2$ irreps. It is possible to obtain $\Lambda^P = T_1^+$ from both $A_2 \otimes A_1$ and $A_2 \otimes E_2$ \cite{Moore:2006ng} and two different linear combinations of these would correspond to $s$ and $d$ wave in the continuum and infinite volume limit. In general, we must include a sufficient number of relevant meson-meson operators that are capable of extracting and disentangling these multiple energy levels. The comparison of spectra calculated with different operator bases in Section~\ref{sec:stability} demonstrates the importance of including a sufficient basis of meson-meson operators.
\subsection{Calculation of correlation functions}
\label{subsec:distillation}
We choose a basis of meson-meson operators as described above and tetraquark operators as described later in Section~\ref{sec:results} and compute the two-point correlation functions using the \emph{distillation} framework~\cite{Peardon:2009}. The combination of distillation and the techniques described here has been demonstrated in, for example, Refs.~\cite{Dudek:2010,Dudek:2011, Dudek:2012gj, Dudek:2012xn,Liu:2012, Wilson:2014cna, Wilson:2015dqa, Dudek:2016cru, Briceno:2016mjc, Moir:2016srx}. In brief, the distillation operator on timeslice $t$ which acts in 3-space ($\vec{x}$ and $\vec{y}$) and colour space ($r$ and $s$) is defined as, $\Box(\vec{x}r,\vec{y}s;t) = \sum_{n=1}^{N_{\text{vecs}}} \xi_n(\vec{x}r;t) \xi_n^\dagger(\vec{y}s;t)$, where in the implementation used here $\xi_n$ are the lowest $N_{\text{vecs}}$ eigenvectors of the gauge-covariant Laplacian. The quark and antiquark fields in interpolating operators are smeared by the distillation operator, $q \rightarrow \Box q$, which removes high-frequency modes and increases the overlap onto lower-lying states. Distillation also allows for a factorisation of the two-point correlation functions as contractions of \emph{perambulators}, $\tau_{nm}(t',t) = \xi^\dagger_n(t') M^{-1} (t',t) \xi_m(t)$, where $M$ is the Dirac matrix, and \emph{elementals} which describe the operators with various structures projected onto definite momentum, and this enables the efficient computation of the correlation function matrix for a large basis of operators. Meson elementals $\Phi^{\alpha \beta}_{n_1 n_2}(\vec{p},t)$ are presented in Ref.~\cite{Dudek:2012xn} and tetraquark elementals are given by,
\begin{equation}
\Psi^{\alpha \beta \gamma \delta}_{n_1 n_2 n_3 n_4}(\vec{p}=\vec{0}, t) = \sum_{\vec{w}p, \vec{x}q, \vec{y}r, \vec{z}s} \mathbb{C}_{pqrs} \ \xi_{n_3}^\dagger(\vec{w}p;t) \xi_{n_4}^\dagger(\vec{x}q;t) (C \Gamma_1)^{\alpha \beta} (\Gamma_2 C)^{\gamma \delta} \xi_{n_1} (\vec{y} r;t) \xi_{n_2} (\vec{z} s;t) \, ,
\end{equation}
where $n_i$ index distillation vectors, Greek letters label the Dirac spinor indices and $\mathbb{C}_{pqrs}$ are combinations of SU$(3)_C$ Clebsch-Gordan coefficients that couple the colour representations $\underline{\bar{3}} \otimes \underline{\bar{3}} \otimes \underline{3} \otimes \underline{3} \rightarrow \underline{1}$ as in the tetraquark operators in Section~\ref{sec:tetraquark}. Meson elementals are matrices with $(4 N_{\text{vecs}})^2$ independent components and tetraquark elementals are rank-4 with $(4 N_{\text{vecs}})^4$ independent components. This means that the cost of calculations involving tetraquark operators (multiplying and tracing perambulators and elementals) increases rapidly when the number of vectors is increased. Therefore, if the calculation is to be feasible, the number of vectors must not be too large.
To keep the cost of contractions reasonable by using a relatively small number of vectors for tetraquark operators, whilst maintaining a larger number of vectors for other operators, we introduce a second distillation operator, $\tilde{\Box}(\vec{x}r,\vec{y}s;t) = \sum_{\tilde{n}=1}^{\tilde{N}_{\text{vecs}}} \xi_{\tilde{n}}(\vec{x}r;t) \xi_{\tilde{n}}^\dagger(\vec{y}s;t)$, composed of the lowest $\tilde{N}_{\text{vecs}}$ vectors where $\tilde{N}_{\text{vecs}} < N_{\text{vecs}}$. Quark/antiquark fields in tetraquark operators are smeared with $\tilde{\Box}$ whereas those in other operators are smeared with $\Box$. As an example, consider a meson-meson operator given by $\mathcal{O} \sim (\bar{c} \Box \Gamma \Box u) (\bar{d} \Box \Gamma \Box c)$ and a tetraquark operator, $\mathcal{T} \sim \left( (\tilde{\Box} c)^T C\Gamma (\tilde{\Box} u) \right) \left((\bar{c} \tilde{\Box}) \Gamma C (\bar{d} \tilde{\Box})^T\right)$, where we are suppressing various indices and factors which are not relevant for this discussion. One of the connected contributions to the correlation function between these two operators is, schematically,
\begin{equation}
\langle \mathcal{O}(t) \mathcal{T}(0)^\dagger \rangle \sim \Phi_{n_1 n_2}(t) \Phi_{n_3 n_4}(t) \tau_{\tilde{n}_3 n_1}(0,t) \tau_{\tilde{n}_4 n_3} (0,t) \tau_{n_4 \tilde{n}_1} (t,0) \tau_{n_2 \tilde{n}_2}(t,0) \Psi_{\tilde{n}_1 \tilde{n}_2 \tilde{n}_3 \tilde{n}_4}(0) \, ,
\label{eqn:distillation}
\end{equation}
where $n_i = 1, \ldots , N_\text{vecs}$, $\tilde{n}_i = 1, \ldots , \tilde{N}_\text{vecs}$, and we have suppressed spinor indices. Here, the perambulators $\tau(t,0)$ are $4 N_{\text{vecs}} \times 4\tilde{N}_{\text{vecs}}$ rectangular matrices, $\Phi(t)$ are $4N_{\text{vecs}} \times 4N_{\text{vecs}}$ square matrices and $\Psi(0)$ is of rank-4 with $(4\tilde{N}_{\text{vecs}})^4$ components. The viability of having a lower number of distillation vectors for tetraquark operators and some tests of varying the number of vectors are discussed in Section~\ref{subsec:varydistillation}. Although not utilised in this study, another possible application of employing more than one distillation operator is to enlarge the variational basis by including operators with different smearings.
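To make the index structure of Equation~\eqref{eqn:distillation} concrete, here is a minimal Python/NumPy sketch contracting random stand-in tensors of the stated shapes (spinor indices suppressed, exactly as in the schematic above); only the shapes and the contraction pattern carry information, the numerical values are arbitrary.
\begin{verbatim}
import numpy as np

N, Nt = 8, 4     # stand-ins for N_vecs and the smaller N~_vecs
rng = np.random.default_rng(0)
rnd = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)

Phi1, Phi2 = rnd(N, N), rnd(N, N)    # Phi_{n1 n2}(t), Phi_{n3 n4}(t)
tau1, tau2 = rnd(Nt, N), rnd(Nt, N)  # tau_{~n3 n1}(0,t), tau_{~n4 n3}(0,t)
tau3, tau4 = rnd(N, Nt), rnd(N, Nt)  # tau_{n4 ~n1}(t,0), tau_{n2 ~n2}(t,0)
Psi = rnd(Nt, Nt, Nt, Nt)            # Psi_{~n1 ~n2 ~n3 ~n4}(0)

# index key: a=n1 b=n2 c=n3 d=n4, e=~n3 f=~n4 g=~n1 h=~n2
C = np.einsum('ab,cd,ea,fc,dg,bh,ghef->',
              Phi1, Phi2, tau1, tau2, tau3, tau4, Psi)
print(C)   # one schematic contribution to <O(t) T(0)^dagger>
\end{verbatim}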
To further reduce the computation time, we only calculate one half of the off-diagonal elements in the matrix of two-point correlation functions between a tetraquark operator and meson-meson operator, and then obtain the other half using the Hermiticity of the correlation matrix. In addition, we neglect contributions where a charm quark and antiquark annihilate: these are expected to be small due to OZI suppression and this has been found to be the case empirically in lattice calculations~\cite{Levkova:2010ft}. The elements of the two-point correlation function matrix that we compute are shown in Figure~\ref{fig:wick} where we show a schematic representation of the types of Wick contractions required.
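A minimal sketch of the Hermiticity trick, assuming a hypothetical routine \texttt{compute\_corr} that performs the contractions for a given pair of operators:
\begin{verbatim}
import numpy as np

# Compute only the upper triangle of C_ij(t) = <O_i(t) O_j(0)^dagger>
# and fill the lower triangle using Hermiticity, C_ji = C_ij^*.
# compute_corr is a hypothetical stand-in for the contraction code.
def build_correlator_matrix(ops, t, compute_corr):
    n = len(ops)
    C = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(i, n):
            C[i, j] = compute_corr(ops[i], ops[j], t)
            if i != j:
                C[j, i] = np.conj(C[i, j])
    return C
\end{verbatim}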
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{wick.pdf}
\caption{A schematic representation of the types of Wick contractions required to compute the two-point correlation function matrices in this study. We use $\Phi$ (grey) to depict the single-meson elementals, $\Psi$ (red) to depict the tetraquark elementals and the lines joining them to depict perambulators.}
\label{fig:wick}
\end{center}
\end{figure}
\section{Systematics and stability of the extracted spectra}
\label{sec:stability}
Before discussing the results further, we consider some systematic effects which may have an impact on them and present some tests of varying the operator basis and the number of distillation vectors.
\subsection{Systematic uncertainties}
As a first application of these tetraquark operator constructions, we have performed calculations on a relatively small lattice volume, with spatial extent $L \sim 2$ fm, and this may be too small to distinguish the spatial structures of the extended meson-meson and compact tetraquark configurations. The tetraquark operator can be Fierz rearranged as a linear combination of meson-meson operators multiplied by a factor of $1/L^{3}$~\cite{Padmanath:2015era}, suppressing the overlap of a possible tetraquark state onto the meson-meson operators. Further calculations, beyond the scope of this study, would be required to give an indication of how the results vary with the volume.
As this is a first demonstration, we have performed calculations with unphysically-heavy light quarks, corresponding to $m_\pi = 391$ MeV, and the presence/absence of tetraquarks may depend on the mass of the light quarks. Ultimately, calculations with light-quark masses approaching their physical values are required for comparison with experiment. On the other hand, studying how the spectra change as the quark masses are varied would give insight into the relevant QCD interactions and could be compared with expectations in different models.
Other possible systematic uncertainties include discretisation effects and the tuning of the charm quark mass. These issues were addressed in Ref.~\cite{Liu:2012} and we do not repeat the discussion here.
\subsection{Varying the operator basis}
Some tests of how varying the operator bases affects the results have already been presented in Section~\ref{sec:results}. In summary, it was found that there were no significant changes in the low-lying spectra when only meson-meson operators were used compared to using the full basis of tetraquark and meson-meson operators. However, reliable spectra could not be extracted if only tetraquark operators were used.
As an illustration of what could happen if a sufficiently diverse set of meson-meson operators is not used, we show in Figure~\ref{fig:lessmes} spectra in the doubly-charmed isospin-0 $\Lambda^P = T_1^+$ channel computed using different operator bases. Note that degenerate meson-meson energy levels would be present here in the non-interacting limit as discussed in Section~\ref{subsec:mesonmeson}. Column $A$ shows the spectrum computed using the full basis of meson-meson and tetraquark operators (see Table~\ref{tab:doublecharmops}) and we see that the number of energy levels is equal to the number of expected non-interacting meson-meson energy levels in the energy region considered. We draw the same conclusion for column $B$ which shows the spectrum calculated using only meson-meson operators. Column $C$ shows the spectrum using only meson-meson operators without the $D^{[100]}_{A_2} D^\ast{}_{E_2}^{[100]}$ operator which is relevant for the $DD^\ast$ level at $\approx 4100$ MeV -- it is seen that one fewer energy level is now extracted and the second $D D^\ast$ level moves slightly higher in energy, as expected when not enough operators are used~\cite{Dudek:2012xn}. The rightmost column, $D$, shows the spectrum calculated with the operators as in $C$ supplemented with the tetraquark operators -- an additional level is found compared to $C$ high up in the spectrum. This demonstrates the necessity of accounting for all the relevant meson-meson energy levels in the energy region being considered and using a sufficient basis of operators of different structures. Otherwise there is the danger that this level could be mistakenly taken as a signal for the presence of a tetraquark.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{T1p2I0S0lessmes.pdf}
\caption{As Figure~\ref{fig:doublycharmed1} but for the $\Lambda^{P} = T_1^+$ isospin-0 $cc\bar{l}\bar{l}$ channel with different bases of operators: $A$ uses the full basis of meson-meson and tetraquark operators, $B$ uses only meson-meson operators, $C$ uses only meson-meson operators minus one $DD^\ast$ operator as described in the text, and $D$ uses the operators as in $C$ supplemented with the tetraquark operators.}
\label{fig:lessmes}
\end{center}
\end{figure}
\subsection{Varying the number of distillation eigenvectors}
\label{subsec:varydistillation}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{T1p2I0S032.pdf}
\caption{As Figure~\ref{fig:doublycharmed1} but showing the $\Lambda^{P} = T_1^+$ isospin-0 $cc\bar{l}\bar{l}$ spectrum calculated using tetraquark operators with different numbers of distillation vectors, $\tilde{N}_{\text{vecs}} = 16,24,32$. The left three columns are using the full basis of meson-meson and tetraquark operators, and the right three columns are using only tetraquark operators.}
\label{fig:hugedata}
\end{center}
\end{figure}
If the number of distillation eigenvectors used for the tetraquark operators is too low, the operator may not efficiently interpolate states of interest as it may be too smeared and so no longer resemble a compact tetraquark. However, as discussed in Section~\ref{subsec:distillation}, the computational cost involving tetraquark operators scales much more strongly with the number of distillation eigenvectors than that involving meson-meson operators, and therefore the number used cannot be too large if the calculations are to be feasible. In this section, we test how sensitive the results are to varying the number of distillation eigenvectors.
The spectrum in the doubly-charmed isospin-0 $\Lambda^{P} = T_1^+$ channel is shown in Figure~\ref{fig:hugedata} using different numbers of distillation vectors for tetraquark operators, $\tilde{N}_{\text{vecs}} = 16,24,32$, with both the full basis of meson-meson and tetraquark operators (see Table~\ref{tab:doublecharmops}) and only tetraquark operators. It can be seen that the results are not sensitive to the number of distillation eigenvectors used.
We also computed all the spectra using $N_{\text{vecs}} = \tilde{N}_{\text{vecs}} = 24$, i.e.\ the same number of distillation vectors for both meson-meson and tetraquark operators. The results were found to be consistent with the spectra presented in Section~\ref{sec:results} (which used $\tilde{N}_{\text{vecs}} = 24$ for tetraquark operators and $N_{\text{vecs}} = 64$ for meson-meson operators). This shows that the results are also not sensitive to the number of distillation vectors used for the meson-meson operators.
In summary, these tests suggest that the results are not very sensitive to the number of distillation vectors being used. In addition, a recent study in Ref.~\cite{Woss:2016tys} demonstrated that a small number of distillation vectors is sufficient to extract finite-volume spectra as long as one does not consider higher momenta, higher spin or highly excited states. Because we are considering overall zero momentum and relatively low-lying states, this gives further support to our conclusion.
\section{Tetraquark operator construction}\label{sec:tetraquark}
To construct interpolating operators which resemble a compact tetraquark, we combine a diquark operator with an anti-diquark operator. The diquark operator is built from two quark fields coupled together to obtain appropriate colour, flavour and spin quantum numbers and, analogously, the anti-diquark operator is built from two antiquarks. The diquark and anti-diquark are then combined to form a colour singlet with the desired flavour and spin. These constructions provide, with no loss of generality, a convenient way to build a diverse class of \emph{tetraquark operators} which have the required quantum numbers and respect appropriate symmetries. In this section we present an overview of these operators: we begin by describing the flavour and colour structures before presenting expressions for the diquark, anti-diquark and tetraquark operators. Further details and a discussion of some additional properties can be found in Appendix~\ref{app:tetraquark}. Further model-dependent understanding of how the different diquark configurations interpolate different states can be found in Appendices~\ref{subsec:onegluon} and \ref{subsec:nonrelativistic}.
The diquark operator is constructed by coupling two quark fields together to definite colour, flavour and continuum spin. In colour space, the quarks belong in the fundamental representation of SU$(3)_C$ and so the diquark is in either the antisymmetric $\underline{\bar{3}}$ representation or the symmetric $\underline{6}$ representation. In flavour space, we use SU$(3)_F$ constructions to form a convenient basis of operators, but this does not imply any assumption of SU$(3)_F$ symmetry in the theory -- as long as a sufficient basis is used, an arbitrary flavour combination can be constructed from a linear combination of these operators. The up, down and strange ($u,d$ and $s$) quarks belong in the fundamental representation of SU$(3)_F$ and the charm ($c$) quark is placed in a singlet. The quarks are coupled together to obtain the desired flavour irrep out of $\underline{1}, \underline{3}, \underline{\bar{3}}$ and $\underline{6}$. For example, coupling two $u,d,s$ quarks as $\underline{3} \otimes \underline{3} \rightarrow \bar{\underline{3}}$ gives a component with flavour quantum numbers (isospin, strangeness) $= (0,0)$ and flavour structure
$\frac{1}{\sqrt{2}} \left(u d - d u \right)$, and a $ (\frac{1}{2},-1)$ multiplet with flavour structure $ \frac{1}{\sqrt{2}} \left(us - su \right)$ and $\frac{1}{\sqrt{2}} \left(ds - s d \right)$. Alternatively, coupling a $c$ quark with a $u,d,s$ quark as $\underline{1} \otimes \underline{3} \rightarrow \underline{3}$ gives a $(0,-1)$ component with flavour structure $c s$, and a $ (\frac{1}{2}, 0)$ multiplet with flavour structure $c u$ and $c d$. If both quarks are in the same flavour representation, Fermi symmetry requires that the overall operator is antisymmetric under the interchange of the quarks and this constrains the allowed diquark configurations.
The diquark operator in colour irrep $R$ (row $r$), flavour irrep $F$ (row $f$) and continuum spin $J$ ($J_z$ component $m$) is,
\begin{equation}
\delta^{J[\Gamma]}_{ RF; rfm}(\vec{x},t) = \sum_{r_a,r_b} \langle \underline{3}, r_a; \underline{3}, r_b | R, r \rangle \sum_{f_a,f_b} \langle F_a, f_a; F_b, f_b | F, f\rangle \ q_{F_a;r_a f_a}^T(\vec{x},t) C \Gamma_m q_{F_b;r_b f_b} (\vec{x},t)
\label{eqn:diquark}
\end{equation}
where spinor indices have been suppressed, $q(\vec{x},t)$ is a quark field smeared with the distillation operator as discussed in Section~\ref{subsec:distillation}, $\langle D_a, d_a; D_b, d_b | D, d \rangle$ are the SU(2) or SU(3) Clebsch-Gordan coefficients that couple the irreps $D_a \otimes D_b \rightarrow D$ with $d_a$, $d_b$ and $d$ the irrep rows, $C$ is the charge conjugation matrix such that $\gamma_0 = C \gamma_0^T C$, and $\Gamma$ is a Dirac gamma matrix which determines $J$, $m$ and other properties of the diquark operator as shown in Table~\ref{tab:gamma} in Appendix~\ref{app:tetraquark}. Choosing an appropriate $\Gamma$ gives access to spins up to $J=1$ -- in order to access higher spins or excitations, this operator can be generalised by including gauge-covariant derivatives in a similar way to the fermion-bilinear operator constructions discussed in Ref.~\cite{Dudek:2010}.
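For orientation, in the colour-antitriplet case the SU$(3)_C$ Clebsch-Gordan coefficients in Equation~\eqref{eqn:diquark} reduce, up to a normalisation convention, to the antisymmetric Levi-Civita symbol; the following Python sketch (random stand-in fields at a single site, normalisation and field content purely illustrative) forms the three rows of such a diquark.
\begin{verbatim}
import numpy as np

# Levi-Civita symbol: the 3 x 3 -> 3bar coupling up to normalisation.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(1)
q1 = rng.standard_normal((3, 4))     # quark field at one site: (colour, spinor)
q2 = rng.standard_normal((3, 4))
CGamma = rng.standard_normal((4, 4)) # stand-in for a C*Gamma matrix

# colour-3bar diquark: delta_r ~ eps_{r a b} q1_a^T (C Gamma) q2_b
delta = np.einsum('rab,ai,ij,bj->r', eps, q1, CGamma, q2)
print(delta)   # three components, one per row of the 3bar irrep
\end{verbatim}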
In the anti-diquark operator, the antiquarks belong in the anti-fundamental representation of SU$(3)_C$ and therefore couple to colour irrep $\underline{3}$ or $\bar{\underline{6}}$. The up, down and strange antiquarks belong in the $\underline{\bar{3}}$ irrep of SU$(3)_F$ and the charm antiquark is in the singlet. Possible flavour irreps for the anti-diquark are therefore $\underline{1}, \underline{3}, \underline{\bar{3}}$ and $\underline{\bar{6}}$. If both antiquarks are in the same flavour irrep, Fermi symmetry again constrains the allowed configurations. The anti-diquark operator is defined in an analogous way to the diquark operator as,
\begin{equation}
\bar{\delta}^{J[\Gamma]}_{RF; rfm}(\vec{x},t) = \sum_{r_a,r_b} \langle \underline{\bar{3}}, r_a; \underline{\bar{3}}, r_b | R, r \rangle \sum_{f_a,f_b} \langle F_a, f_a; F_b, f_b | F, f\rangle \ \bar{q}_{F_a;r_af_a}(\vec{x},t) \Gamma_m C \bar{q}_{F_b;r_bf_b}^T(\vec{x},t)
\end{equation}
where $\bar{q}(\vec{x},t)$ is a (smeared) antiquark field. Here the charge conjugation matrix comes after the Dirac gamma matrix so that under the charge conjugation operator ${\mathcal{C} \, \bar{\delta}^{J[\Gamma]}_{RF} \, \mathcal{C}^{-1} = \delta^{J[\Gamma]}_{\bar{R}\bar{F}}}$, and this ensures a convenient definition of tetraquark operators with definite $G$-parity as discussed later.
Tetraquark operators are formed by coupling a diquark operator and an anti-diquark operator to a colour singlet with definite flavour and spin. The only possible diquark and anti-diquark colour combinations which give a colour singlet are $\underline{\bar{3}} \otimes \underline{3}$ and $\underline{6} \otimes \underline{\bar{6}}$ and this restricts the possible diquark--anti-diquark configurations. The flavour quantum numbers of the tetraquark operator are obtained by coupling the appropriate flavour irreps of the diquark and anti-diquark and then choosing the desired row. By projecting onto zero momentum, tetraquark operators have definite parity and, in channels where $G$-parity is a good quantum number, operators with definite $G$-parity can be constructed. The tetraquark operator, projected onto momentum $\vec{p}$, is,
\begin{equation}
\begin{aligned}
\mathcal{T}^{J[\Gamma_1,\Gamma_2]}_{[R_1,R_2]F[F_1,F_2];fm}(\vec{p},t) = \sum_{\vec{x}} e^{i\vec{p} \cdot \vec{x}} \ \sum_{m_1,m_2} \langle J_1,m_1;J_2,m_2 | J,m \rangle \ \sum_{r_1,r_2} \langle R_1, r_1; R_2, r_2 | \underline{1} \rangle \\
\times \sum_{f_1,f_2} \langle F_1, f_1; F_2, f_2 | F,f \rangle \ \delta^{J_1[\Gamma_1]}_{R_1 F_1; r_1 f_1 m_1} (\vec{x},t) \ \bar{\delta}^{J_2[\Gamma_2]}_{R_2 F_2; r_2 f_2 m_2} (\vec{x},t) \, .
\end{aligned}
\label{eqn:tet}
\end{equation}
For the remainder of this study, we only consider $\vec{p} = 0$ and so this operator has definite parity $P$ which is determined by the gamma matrices $\Gamma_1$ and $\Gamma_2$ as described in Appendix~\ref{app:tetraquark}. In channels where $G$-parity is a good quantum number, a tetraquark operator with definite $G$-parity is given by,
\begin{equation}
\mathcal{T}^{J[\Gamma_1,\Gamma_2],P G}_{[R_1,R_2]F[F_1,F_2];fm}(\vec{p}=\vec{0},t) = \mathcal{T}^{J[\Gamma_1,\Gamma_2]}_{[R_1,R_2]F[F_1,F_2];fm}(\vec{0},t) + \tilde{G} \; \mathcal{T}^{J[\Gamma_2,\Gamma_1]}_{[\bar{R}_2,\bar{R}_1]F[\bar{F}_2,\bar{F}_1];fm}(\vec{0},t) \, ,
\label{eqn:tetragparity}
\end{equation}
where $\tilde{G} = \pm 1 $. The $G$-parity of this operator is $G = \tilde{G} \xi_J \xi_1 \xi_3$ where $\xi_J, \xi_1, \xi_3$ are phases arising from the exchange symmetry of the Clebsch-Gordan coefficients in Equation~\eqref{eqn:tet} as described explicitly in Appendix~\ref{app:tetraquark}.
This tetraquark operator has definite spin $J$ in the continuum but the lattice discretisation breaks rotational symmetry and so $J$ is no longer a good quantum number. With a cubic lattice discretisation and volume, the symmetry group is reduced to the octahedral group $\mathrm{O}_h$ for states at rest~\cite{Johnson:1982yq} and broken further to the little group for states with non-zero momentum~\cite{Moore:2005dw}. The distribution, or \emph{subduction}, of $J \leq 4$ into the irreps of $\mathrm{O}_h$ is tabulated in Table~\ref{tab:subduction}. We construct lattice tetraquark operators which transform in lattice irrep $\Lambda$ (row $\mu$) by subducing the continuum operators as described in Ref.~\cite{Dudek:2010},
\begin{equation}
\mathcal{T}^{\Lambda[J[\Gamma_1,\Gamma_2]],P (G)}_{[R_1,R_2]F[F_1,F_2];f\mu}(\vec{p}=\vec{0},t) = \sum_m S_{\Lambda, \mu}^{J,m} \ \mathcal{T}^{J[\Gamma_1,\Gamma_2],P (G)}_{[R_1,R_2]F[F_1,F_2];fm}(\vec{p}=\vec{0},t) \, ,
\end{equation}
where $S$ are subduction coefficients. The generalisation to $\vec{p} \neq \vec{0}$ involves the construction of helicity operators and then the subduction to irreps of the little group of $\vec{p}$ as discussed in Ref.~\cite{Thomas:2012}. We will use these lattice tetraquark operators to calculate correlation functions in lattice QCD from which the finite-volume spectrum can be extracted.
\begin{table}
\begin{center}
\begin{tabular}{ c| c}
$J$ & $\Lambda$ \\
\hline
0 & $A_1$ \\
1 & $T_1$ \\
2 & $E \oplus T_2$ \\
3 & $A_2 \oplus T_1 \oplus T_2$ \\
4 & $A_1 \oplus E\oplus T_1 \oplus T_2$\\
\end{tabular}
\caption{Subduction of continuum spin $J$ into lattice irreps $\Lambda$ of $\mathrm{O}_h$ for $J\leq 4$.}
\label{tab:subduction}
\end{center}
\end{table}
\section{Introduction}
Given a finite group $G$ and a locally
compact, Hausdorff, paracompact $G$-space $X$,
the $n$-th direct product $X^n$ admits a natural action
of the wreath product $G_n = G \sim S_n$ which is a semi-direct product
of the $n$-th direct product $G^n$ of $G$ and the
symmetric group $S_n$. The main goal of the present paper is to
study the equivariant topological K-theory $K_{G_n} (X^n)$,
for all $n$ together, and discuss several applications which are
of independent interest.
We will first show that a direct sum
$$ {\cal F}_G (X) = \bigoplus_{ n \geq 0}K_{G_n} (X^n) \bigotimes {\Bbb C}
$$
carries several wonderful structures.
More explicitly,
we show that $ {\cal F}_G (X)$ admits a natural Hopf algebra structure
with a certain induction functor as multiplication and a certain
restriction functor as comultiplication (cf. Theorem~\ref{th_hopf}).
When $X$ is a point, $K_{G_n} (X^n)$ is the Grothendieck
ring $R(G_n)$, and we recover the standard Hopf algebra
structure of $\bigoplus_{n \geq 0} R(G_n)$
(cf. e.g. \cite{M1, M2, Z}). A key lemma used here is a straightforward
generalization to equivariant K-theory
of a statement in the representation theory of
finite groups concerning the restriction of an induced
representation to a subgroup.
We show that $ {\cal F}_G (X)$ is a free $\lambda$-ring generated by
$K_G(X)\bigotimes {\Bbb C} $ (cf. Proposition~\ref{prop_ring}).
We write down explicitly the Adams operations $\varphi^n$'s
in $ {\cal F}_G (X)$. Incidentally we also obtain an equivalent way
of defining the Adams operations in
$K_G(X) \bigotimes {\Bbb C} $ (not over $ {\Bbb Z} $) by means of the wreath
products, generalizing a definition by Atiyah \cite{A} in terms
of the symmetric group
in the ordinary (i.e. non-equivariant) K-theory setting.
When $X$ is a point we recover the $\lambda$-ring structure
of $\bigoplus_{n \geq 0} R(G_n)$ (cf. \cite{M1}).
As a graded algebra $ {\cal F}_G (X)$
has a simple description as a certain supersymmetric algebra in terms
of $K_G(X)\bigotimes {\Bbb C} $ (cf. Theorem~\ref{th_main}).
The proof uses a theorem in \cite{AS} and the structures
of the centralizer group of an element in $G_n$ and of the
fixed point set of the action of $a \in G_n$ on $X^n$ which we
work out in Sect.~\ref{sec_wreath}.
In particular, this description indicates that $ {\cal F}_G (X)$ has the size of
a Fock space of a certain infinite-dimensional Heisenberg superalgebra
which we will construct in terms of natural
additive maps in K-theory (cf. Theorem~\ref{th_heisenberg}).
Our results above generalize Segal's work \cite{S2},
and our proofs are direct generalizations of those in
\cite{S2} (also see \cite{Z, M1}).
What Segal studied in \cite{S2}, partly motivated by
remarks in Grojnowski \cite{Gr}, is the space
$\bigoplus_{ n \geq 0}K_{S_n} (X^n) \bigotimes {\Bbb C} $
for compact $X$ which corresponds to our special case when $G$ is trivial
and then $G_n$ is the symmetric group $S_n$.
The paper \cite{Gr} was in turn
motivated by a physical paper of Vafa and Witten \cite{VW}.
Our present work grew out of an attempt
to understand Segal's outlines \cite{S2} and was also stimulated by
Nakajima's lecture notes on Hilbert schemes \cite{N}
(also cf. \cite{N1, Gr}).
Our first main observation in this paper is
that there is a natural way to add the group $G$ into
Segal's scheme and this allows several different applications
as discussed below. These applications are of independent
interest on their own. We expect that there is also a natural
way to incorporate $G$ into the remaining part of \cite{S2}.
Our addition of $G$ already has highly non-trivial consequences
even in the case when $X$ is a point.
By tensoring ${\cal F}_G (pt)$ with the group algebra
of the Grothendieck ring $R(G)$, we obtain the underlying vector
space for the vertex algebra
associated to the lattice $R(G)$ \cite{B, FLM}.
When $G$ is a finite subgroup of $SL_2(\Bbb C)$, this
will lead to a group theoretic construction of
the Frenkel-Kac-Segal vertex
representation of an affine Kac-Moody Lie algebra, which can be
viewed as a new form of McKay correspondence \cite{Mc}.
Details along these lines will be developed in a forthcoming paper.
An interesting case of our study is that $X $ is the complex plane $ {\Bbb C} ^2$
acted upon by a finite subgroup $G$ of $SL_2(\Bbb C)$.
Let $\widetilde{ {\Bbb C} ^2 /G}$ denote
the minimal resolution of singularities of $ {\Bbb C} ^2 /G$
(cf. e.g. \cite{N}). Via the McKay correspondence \cite{Mc}, we show that
either $K_{G_n} (( {\Bbb C} ^2)^n) \otimes {\Bbb C} $
or $K_{S_n} ((\widetilde{ {\Bbb C} ^2 /G})^n) \otimes {\Bbb C} $
has the same dimension as the homology group of
the Hilbert scheme of $n$ points on $\widetilde{ {\Bbb C} ^2 /G}$
(cf. \cite{G}).
This fact has a straightforward generalization,
cf. Remark~\ref{rem_dim}.
Our message here is that the wreath product plays an important
role in the study of the Hilbert scheme of
$n$ points on $\widetilde{ {\Bbb C} ^2 /G}$ in exactly the way
a symmetric group $S_n$ does for the Hilbert scheme of
$n$ points on ${ {\Bbb C} ^2 }$ (cf. \cite{N, BG}), which
is in turn a special case of the former when $G$ is trivial.
We will discuss these in more detail on another occasion.
For a smooth manifold $X$ acted upon by $G$, Dixon, Harvey,
Vafa and Witten \cite{DHVW}
introduced a notion of orbifold Euler characteristic $e(X, G)$
in their study of string theory of orbifolds. We show that
the orbifold Euler characteristic $ e(X^n, G_n)$
is uniquely determined by $e(X, G)$ and $n$.
In terms of a generating function,
our formula reads (see Theorem~\ref{th_orbi}):
%
\begin{eqnarray} \label{eq_formu}
\sum_{ n \geq 1} e(X^n, G_n) q^n
= \prod_{ r =1}^{\infty} (1 -q^r)^{- e(X, G)}.
\end{eqnarray}
%
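The coefficients $e(X^n, G_n)$ can be read off by expanding the infinite product; a small Python sketch doing this for an (arbitrarily chosen) value $e(X, G) = 2$:
\begin{verbatim}
from math import comb

# Expand prod_{r>=1} (1 - q^r)^(-e) up to order N; the coefficient
# of q^n is then e(X^n, G_n) by the formula above.
def orbifold_euler_series(e, N):
    coeffs = [1] + [0] * N
    for r in range(1, N + 1):
        factor = [0] * (N + 1)
        k = 0
        while r * k <= N:                      # (1-q^r)^{-e}
            factor[r * k] = comb(e + k - 1, k)
            k += 1
        new = [0] * (N + 1)
        for i in range(N + 1):
            if coeffs[i]:
                for j in range(N + 1 - i):
                    new[i + j] += coeffs[i] * factor[j]
        coeffs = new
    return coeffs

print(orbifold_euler_series(2, 6))   # [1, 2, 5, 10, 20, 36, 65]
\end{verbatim}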
By putting $G =1$ and thus $e(X, G) = e(X)$,
we recover a formula of Hirzebruch-H\"ofer \cite{HH}.
By using Eq.~(\ref{eq_formu}) and G\"ottsche's
formula \cite{G}, we show that $X^n /G_n$
admits a resolution of singularities whose Euler characteristic coincides
with the orbifold Euler characteristic $e(X^n, G_n)$
assuming that $X$ is a smooth quasi-projective surface and $X/G$ has
a resolution of singularities whose Euler characteristic is $e (X, G)$.
In this paper the language of equivariant K-theory is used.
We should also mention the very relevant construction of
Heisenberg superalgebra on a direct sum over $n$
of the homology group $H(X^{[n]})$ of Hilbert
scheme of $n$ points on a smooth quasi-projective surface $X$,
due to Nakajima \cite{N1} and Grojnowski \cite{Gr} independently. However
the constructions and computations in terms of K-theory are
simpler and work for more general spaces. Bezrukavnikov and Ginzburg
\cite{BG} have proposed a way to obtain a direct isomorphism
from $K_{S_n} (X^n) \bigotimes \Bbb C$ to $H(X^{[n]})$ for an algebraic
surface $X$. Independently
M.~de Cataldo and L.~Migliorini have recently established this isomorphism
for complex surfaces \cite{CM}.
The plan of this paper is as follows. In Sect.~\ref{sec_wreath}
we give a presentation of the wreath product $G_n$ and
study its action on $X^n$. In Sect.~\ref{sec_hopf} we
construct a Hopf algebra structure on $ {\cal F}_G (X)$.
In Sect.~\ref{sec_main} we give a description of $ {\cal F}_G (X)$
as a graded algebra.
In Sect.~\ref{sec_ring}
we give a $\lambda$-ring structure on $ {\cal F}_G (X)$.
In Sect.~\ref{sec_heis} we
construct the Heisenberg superalgebra which
acts on $ {\cal F}_G (X)$ irreducibly. In Sect.~\ref{sec_orbi} we calculate
the orbifold Euler characteristic $e(X^n, G_n)$
and study in detail the special case when
$X$ is the complex plane acted upon by a finite subgroup of $SL_2 ( {\Bbb C} )$.
We have included some details which are probably
trivial to experts, hoping that this may benefit readers
with different backgrounds.
\section{The wreath product and its action on $X^n$}
\label{sec_wreath}
Let $G$ be a finite group.
We denote by $G_*$ the set of conjugacy classes of $G$ and
$R(G)$ the Grothendieck ring of $G$. $R(G) \bigotimes_{ {\Bbb Z} } {\Bbb C} $
can be identified with the ring of class functions $C(G)$ on $G$
by taking the character of a representation.
Denote by $\zeta_c$ the order of the centralizer
of an element lying in the conjugacy class
$c$ in $G$. We define an inner product on $C(G)$ as usual:
%
\begin{eqnarray} \label{eq_inner}
( \chi |\psi ) = \frac1{|G|} \sum_{g \in G} \chi (g) \overline{\psi}(g),
\quad \chi, \psi \in C(G).
\end{eqnarray}
Let $G^n = G \times \ldots \times G$ be the direct product of
$n$ copies of $G$. Denote by $|G|$ the order of $G$, and by
$[g]$ the conjugacy class of $g \in G$.
The symmetric group $S_n$ acts on $G^n$
by permuting the $n$ factors:
$ s (g_1, \ldots, g_n) = (g_{s^{-1}(1)} , \ldots, g_{s^{-1}(n)} )$.
The {\em wreath product} $G_n = G \sim S_n$ is defined to be
the semidirect product of $G^n$ and $S_n$, namely the multiplication on
$G_n$ is given by
$(g, s)(h, t) = (g \cdot s(h), st)$, where $g, h \in G^n, s, t \in
S_n$. Note that $G^n$ is a normal subgroup of
$ G_n$ by identifying $g \in G^n$ with
$(g,1) \in G_n$.
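For concreteness, the multiplication rule can be implemented directly; in the following Python sketch an element of $G_n$ is a pair of a tuple over $G$ and a permutation $s$ stored as a tuple with $s[i] = s(i)$ ($0$-indexed), and $G$ is specified only through a multiplication function.
\begin{verbatim}
def wreath_mul(x, y, mul):
    # (g, s)(h, t) = (g . s(h), s t) in G_n = G ~ S_n
    (g, s), (h, t) = x, y
    n = len(g)
    sinv = [0] * n
    for i in range(n):
        sinv[s[i]] = i                        # s^{-1}
    sh = tuple(h[sinv[i]] for i in range(n))  # (s(h))_i = h_{s^{-1}(i)}
    return (tuple(mul(g[i], sh[i]) for i in range(n)),
            tuple(s[t[i]] for i in range(n)))  # (s t)(i) = s(t(i))

# example with G = Z_2, written additively:
mul2 = lambda a, b: (a + b) % 2
x = ((1, 0, 0), (1, 2, 0))   # s is the 3-cycle (0 1 2)
y = ((0, 1, 1), (0, 2, 1))
print(wreath_mul(x, y, mul2))   # ((0, 0, 1), (1, 0, 2))
\end{verbatim}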
Given $a = (g, s) \in G_n$ where $g = (g_1, \ldots, g_n)$,
we write $s \in S_n$ as a product
of disjoint cycles: if $z= (i_1, \ldots, i_r)$ is one of them,
the {\em cycle-product} $g_{i_r} g_{i_{r-1}} \ldots g_{i_1} $
of $a$ corresponding to the cycle $z$
is determined by $g$ and $z$ up to conjugacy.
For each $c \in G_*$ and each integer $r \geq 1$, let
$m_r (c)$ be the number of $r$-cycles in $s$ whose cycle-product
lies in $c$. Denote by $\rho (c)$ the partition having
$m_r (c)$ parts equal to $r$ ($r \geq 1$) and
denote by $\rho = ( \rho (c) )_{c \in G_*}$
the corresponding partition-valued
function on $G_*$. Note that
$|| \rho || : = \sum_{c \in G_*} |\rho (c)|
= \sum_{c \in G_*, r \geq 1} r m_r (c) = n$,
where $| \rho (c) |$ is the size of the partition $\rho (c)$.
Thus we have defined a map from $G_n$ to ${\cal P}_n (G_*)$,
the set of partition-valued
function $\rho = ( \rho (c) )_{c \in G_*}$ on $G_*$
such that $|| \rho || = n$. The function $\rho$ is
called the {\em type} of $a = (g, s) \in G_n$.
Denote ${\cal P} (G_*) = \sum_{n \geq 0} {\cal P}_n (G_*)$.
Given a partition
$\lambda$ with $m_r$ parts equal to $r$ ($r \geq 1$), define
$z_{\lambda} = \prod_{r \geq 1} r^{m_r} m_r !$.
This is the order of the centralizer in $S_n$ of
an element of cycle-type $\lambda$.
We shall denote by $l(\lambda) = \sum_{ r \geq 1} m_r$
the length of $\lambda$.
Given a partition-valued function $\rho \in {\cal P}(G_*)$, we define
$l (\rho) = \sum_{ c \in G_*} l ( \rho (c))$ and
$$ Z_{\rho} = \prod_{c \in G_*} z_{ \rho (c)} \zeta_c^{l (\rho(c))}.$$
Denote by $\sigma_n (c)$ the class function of $G_n$ which takes
value $n \zeta_c$ at an $n$-cycle whose cycle-product lies in $c \in G_*$
and $0$ otherwise. For $\rho = \{m_r (c) \}_{c,r} \in {\cal P}(G_*)$,
we define
$$\sigma^{\rho} = \prod_{c \in G_*, r \geq 1} \sigma_r(c)^{m_r(c)}.
$$
We regard $\sigma^{\rho}$ as the class function on
$G_n$ which takes value $Z_{\rho}$ at elements of
type $\rho$ (where $n = || \rho ||$) and $0$ elsewhere.
We formulate some well-known facts below (cf. e.g. \cite{M2}) which
will be needed later.
\begin{proposition} \label{prop_type}
Two elements in $G_n$ are conjugate to each
other if and only if they have the same type.
The order of the centralizer in $G_n$ of
an element of type $\rho$ is $Z_{\rho}$.
\end{proposition}
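It is straightforward to evaluate $Z_{\rho}$ from a type; a small Python sketch, with a type represented as a map from conjugacy-class labels to partitions and the centralizer orders $\zeta_c$ supplied separately:
\begin{verbatim}
from collections import Counter
from math import factorial, prod

def z_lambda(partition):
    # z_lambda = prod_r r^{m_r} m_r!  (partition given as a list of parts)
    mult = Counter(partition)
    return prod(r ** m * factorial(m) for r, m in mult.items())

def Z_rho(rho, zeta):
    # rho: {class label c: partition rho(c)};  zeta: {c: zeta_c}
    return prod(z_lambda(lam) * zeta[c] ** len(lam)
                for c, lam in rho.items())

# example: G = Z_2 with classes '0','1', zeta_c = 2 for both;
# the type {'0': [2, 1], '1': [1]} has ||rho|| = 4, so it labels
# a conjugacy class of G_4, of centralizer order:
print(Z_rho({'0': [2, 1], '1': [1]}, {'0': 2, '1': 2}))   # 16
\end{verbatim}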
We want to calculate the centralizer $Z_{G_n} (a)$ of $a \in G_n$.
First we consider the typical case that $a$ has one $n$-cycle.
As the centralizers of conjugate elements are conjugate subgroups,
we may assume that $a$ is of the form
$a = ( (g, 1, \ldots, 1), \tau)$
by Proposition~\ref{prop_type}, where $ \tau = (1 2 \ldots n)$.
Denote by $Z_G^{\Delta}(g)$, or $Z_G^{\Delta_n}(g)$ when it is necessary
to specify $n$, the following diagonal subgroup of $G^n$
(and thus a subgroup of $G_n$):
%
\begin{eqnarray*}
Z_G^{\Delta}(g) = \left\{ ( (h, \ldots, h), 1) \in G^n \mid h \in Z_G(g)
\right\}.
\end{eqnarray*}
The following lemma follows from a direct computation.
%
\begin{lemma} \label{lem_perm}
The centralizer $Z_{G_n} (a)$ of $a$ in $G_n$ is equal to the product
$Z_G^{\Delta}(g) \cdot \langle a \rangle$, where $\langle a \rangle$
is the cyclic subgroup of $G_n$ generated by $a$.
Moreover, $a^n \in Z_G^{\Delta}(g) $ and $|Z_{G_n} (a)| = n |Z_G(g)|.$
\end{lemma}
Take a generic element $a = ( g, s) \in G_n$
of type $\rho = ( \rho (c) )_{c \in G_*}$, where
$\rho (c) $ has $m_r (c)$ $r$-cycles ($r \geq 1$).
By Proposition~\ref{prop_type},
we may assume (by taking a conjugation if necessary)
that the $m_r (c)$ $r$-cycles are of the form
$$
g_{ur}(c) = ( (g, 1, \ldots,1), (i_{u1}, \ldots, i_{ur}) ),
1 \leq u \leq m_r (c), g \in c.
$$
Denote $ g_r (c) = ( (g, 1, \ldots,1), (12 \ldots r ) ).$
Throughout the paper, $\prod_{c,r}$ is understood as the
product $\prod_{c \in G_*, r \geq 1}$.
\begin{lemma} \label{lem_cent}
The centralizer $ Z_{G_n} (a)$ of $a \in G_n$ is isomorphic to
a direct product of the wreath products
\begin{eqnarray} \label{eq_centra}
\prod_{c,r} \left(
Z_{G_r} ( g_r (c) )
\sim S_{m_r (c)} \right).
\end{eqnarray}
Furthermore $Z_{G_r} ( g_r (c) )$ is isomorphic to
$Z^{\Delta_r}_G (g ) \cdot \langle g_{r} (c) \rangle$.
\end{lemma}
\begin{demo}{Proof}
It follows from the first part of Lemma~\ref{lem_perm}
that the centralizer $ Z_{G_n} (a)$ should contain
a certain subgroup naturally isomorphic to (\ref{eq_centra}).
By the second part of Lemma~\ref{lem_perm} we can count
that the order of (\ref{eq_centra}) is equal to $Z_{\rho}$.
The lemma now follows by
comparing with the order of $ Z_{G_n} (a)$ given
in Proposition~\ref{prop_type}.
\end{demo}
We will use $\star$ to denote the multiplication
in $C(G_n)$ which corresponds to the tensor product in $R(G_n)$.
We denote by $\underline{n}$
the trivial representation of $G_n$, and $\underline{1^n}$
the sign representation of $G_n$ in which $G^n$ acts trivially
and $S_n$ acts by $\pm 1$ depending on whether a permutation is
even or odd.
By abuse of notation, we also use the same symbols to
denote the corresponding characters.
The following lemma follows easily from the definitions.
\begin{lemma} \label{lem_orth}
\begin{enumerate}
\item[1)] Given $\rho, \widetilde{\rho} \in {\cal P}_n (G_*)$,
$
\sigma^{\rho} \star\sigma^{\widetilde{\rho} }
= \delta_{\rho, \widetilde{\rho} } Z_{\rho} \sigma^{\rho}.
$
In particular,
\begin{eqnarray}
\underline{n} &= & \sum_{|| \rho || = n} Z_{\rho}^{ -1} \sigma^{\rho},
\label{eq_triv} \\
%
\underline{1^n} &= &
\sum_{|| \rho || = n} (-1)^{ n -l(\rho)}Z_{\rho}^{ -1} \sigma^{\rho}.
\label{eq_sign}
\end{eqnarray}
\item[2)] $\left(
\sigma^{\rho} | \sigma^{\widetilde{\rho} }
\right)
= \delta_{\rho, \widetilde{\rho} } Z_{\rho}. $ In other
words, $ \sigma^{\rho}$ takes value $Z_{\rho}$
at the elements in $G_n$
of type $\rho$ and $0$ elsewhere.
\end{enumerate}
\end{lemma}
For a $G$-space $X$, we define an action of
$G_n$ on $X^n$ as follows: given $ a = ( (g_1, \ldots, g_n), s)$,
we let
%
\begin{eqnarray} \label{eq_action}
a . (x_1, \ldots, x_n)
= (g_1 x_{s^{-1} (1)}, \ldots, g_n x_{s^{-1} (n)})
\end{eqnarray}
%
where $x_1, \ldots, x_n \in X$. Next we want to determine the
fixed point set $( X^n )^a$ for $a \in G_n$.
Let us first calculate in the typical case
$a = ( (g, 1, \ldots, 1), \tau) \in G_n$.
Note that the centralizer group $Z_G(g)$ preserves
the $g$-fixed point set $X^g$.
\begin{lemma} \label{lem_elem}
The fixed point set is
\begin{eqnarray*}
( X^n )^a = \left\{ (x, \ldots, x) \in X^n\mid x= g x
\right\}
\end{eqnarray*}
which can be naturally identified with $X^g$.
The action of $Z_{G_n} (a)$ on $(X^n)^a$ can be identified canonically with
that of $Z_G (g)$ on $X^g$ together with the trivial action of
the cyclic group $\langle a \rangle$ (cf. Lemma~\ref{lem_perm}).
Thus
$$ (X^n)^a / Z_{G_n} (a) \approx X^g / Z_G (g).
$$
\end{lemma}
\begin{demo}{Proof}
Let $(x_1, \ldots, x_n) $ be in the fixed point
set $(X^n)^a$. By Eq.~(\ref{eq_action}) we have
%
\begin{eqnarray*}
(x_1, x_2, x_3, \ldots, x_n)
&= & a. (x_1, x_2, x_3, \ldots, x_n) \\
&= & (g x_n, x_1, x_2, \ldots, x_{n-1}).
\end{eqnarray*}
So all the $x_i$ $(i =1, \ldots, n)$ are equal, say to $x$,
and $gx = x$. The remaining statements follow from
Lemma~\ref{lem_perm}.
\end{demo}
All $Z_G (g)$ are conjugate and all
$X^g$ are homeomorphic to each other for different
representatives $g$ in a fixed conjugacy class $c \in G_*$.
Also the orbit space $X^g /Z_G (g)$ can be identified
with each other by conjugation for different
representatives of $g$ in $c \in G_*$.
We make a convention to denote $Z_G (g)$ (resp. $X^g$, $X^g /Z_G (g)$)
by $Z_G (c)$ (resp. $X^c$, $X^c /Z_G (c)$) by abuse of notation
when the choice of a representative $g$ in $c$ is immaterial.
\begin{lemma} \label{lem_fix}
Retain the notations in Lemma \ref{lem_cent}.
The fixed point set $( X^n )^a$ can be naturally identified
with $ \prod_{c,r} (X^c )^{m_r (c)} $.
Furthermore the orbit space
$ ( X^n )^a / Z_{G_n}(a)$ can be naturally
identified with
$$ \prod_{c,r} S^{m_r (c)} \left( X^{c} / Z_G (c) \right) $$
where $S^{m}(\cdot)$ denotes the $m$-th symmetric product.
\end{lemma}
\begin{demo}{Proof}
The first part easily follows from Lemma~\ref{lem_elem}.
By Lemma \ref{lem_cent} and Lemma \ref{lem_elem},
the action of $Z_{G_n}(a)$ on $( X^n )^a$ can be naturally
identified with that of
$$
\prod_{c,r} \left(
\left(
Z^{\Delta_r}_G (g ) \cdot \langle g_{r} (c) \rangle
\right)
\sim S_{m_r (c)}
\right)
$$
on $ \prod_{c,r} (X^c )^{m_r (c)} $
where $S_{m_r (c)}$ acts by permutation and
$\langle g_{r} (c) \rangle$ acts on $X^{c}$ trivially.
Thus the second part of the lemma follows.
\end{demo}
\section{The Hopf algebra structure on $ {\cal F}_G (X)$}
\label{sec_hopf}
Given a compact Hausdorff $G$-space $X$, we recall \cite{A1, S1} that
$K^0_G(X)$ is the Grothendieck group of
$G$-vector bundles over $X$. One can define $K_G^1 (X)$
in terms of the $K^0$ functor and a certain suspension operation,
and one puts
$$ K_G(X) = K^0_G(X) \bigoplus K^1_G(X).
$$
The tensor product of vector bundles gives rise to
a multiplication on $ K_G(X)$ which is super (i.e. $ {\Bbb Z} _2$-graded)
commutative. In this paper we will only be concerned with
the free part $K_G(X) \otimes {\Bbb C} $, which will be
denoted by $\underline{K}_G(X)$ subsequently.
We denote by $\dim {K}_G^i (X) (i = 0, 1)$
the dimension of ${K}_G^i (X) \otimes {\Bbb C} $.
If $X$ is a locally compact, Hausdorff and
paracompact $G$-space, take the one-point
compactification $X^+$ with the extra point $\infty$ fixed by $G$.
Then we define $K_G^0 (X)$ to be the kernel of the map
$$ K^0_G(X^+) \longrightarrow K^0_G( \{\infty\} )
$$
induced by the inclusion map $\{\infty\} \hookrightarrow X^+$.
It is clear that this definition is equivalent to the
earlier one when $X$ is compact. We also define
$K^1_G(X) = K^1_G(X^+).$
Note that $K_{G} (pt)$ is isomorphic to the Grothendieck
ring $R(G)$ and $\underline{K}_{G} (pt)$ is
isomorphic to the ring $C(G)$ of class functions on $G$.
The bilinear map $\star$ induced from the tensor product
$$K_{G} (pt) \bigotimes K_{G} (X) \longrightarrow K_{G} (X) $$
gives rise to a natural $\underline{K}_{G} (pt)$-module structure
on $\underline{K}_{G} (X)$. Thus $\underline{K}_{G} (X)$
naturally decomposes into a direct sum over the set of
conjugacy classes $G_*$ of $G$.
The following theorem \cite{AS} (also cf. \cite{BC} )
gives a description of each direct summand.
\begin{theorem} \label{th_tech}
There is a natural $ {\Bbb Z} _2$-graded isomorphism
\begin{eqnarray*}
\phi : \underline{K}_{G} (X) \longrightarrow
\bigoplus_{[g]} \underline{K} \left( X^{g} /Z_{G} (g)
\right).
\end{eqnarray*}
\end{theorem}
Given $c \in G_*$, we denote by $\sigma_c$ the
class function which takes value
$\zeta_c$ at an element of $G$ lying in the conjugacy
class $c$ and $0$ otherwise. Then an element
in $\underline{K}_{G} (X) $ can be written as of the form
$\sum_{c \in G_*} \xi_{c} \sigma_c$, where
$\xi_c \in \underline{K} \left( X^{g} /Z_G (g) \right).$
More explicitly the isomorphism $\phi$ is defined as follows
when $X$ is compact:
if $E$ is a $G$-vector bundle over $X$ its restriction to
$X^g$ is acted on by $g$ with base points fixed.
Thus $E|_{X^g }$ splits into a direct sum of
subbundles $E_{\mu}$ consisting of eigenspaces of
$g$ fiberwise for each eigenvalue $\mu$ of $g$.
$Z_G (g)$ acts on $X^g$ and one may check that
$\sum_{\mu} \mu E_{\mu}$ indeed lies in the $Z_G (g)$-invariant
part of $\underline{K}^0 (X^g) $.
Put
$$
\phi_{g} (E) = \sum_{\mu} \mu E_{\mu}
\in \underline{K}^0 \left( X^{g} /Z_G (g) \right)
= \underline{K}^0 \left( X^{g} \right)^{Z_G (g)}.
$$
The isomorphism $\phi$ on the $K^0$ part is given by
$ \phi = \bigoplus_{[g] \in G_*} \phi_{g}$.
Then one easily extends the isomorphism $\phi$ to $K^1$
as $K^1_G (X)$ can be identified with the kernel of the
map from $K^0_G (X \times S^1)$ to $K^0_G (X)$
given by the inclusion of a point in $S^1$.
When $X$ is a point the isomorphism $\phi$
becomes the map from a representation of $G$ to its character.
By some standard arguments using compact pairs \cite{A1}, the isomorphism
remains valid for a locally compact, Hausdorff and paracompact $G$-space.
The following lemma is well known.
\begin{lemma}
Given a finite group $G$,
a subgroup $H$ of $G$, and a $G$-space $X$, there is
a natural {\em induction functor}
$ \mbox{Ind} = \mbox{Ind} _H^G : K_H (X) \longrightarrow K_G (X)$.
In particular when $X$ is a point, the functor $ \mbox{Ind} _H^G$
reduces to the familiar induction functor of representations.
\end{lemma}
\begin{demo}{Proof}
Note that there is a $G$-equivariant isomorphism
%
\begin{eqnarray} \label{eq_equiva}
G \times_H X \stackrel{\mbox{Iso}}{\longrightarrow} G/H \times X
\end{eqnarray}
by sending $(g, x) \in G \times_H X$ to $(gH, gx)$. We remark
that although both sides of (\ref{eq_equiva}) remain well-defined
for an $H$-space $X$ without a $G$-action, the map $\mbox{Iso}$
makes sense only for a $G$-space $X$.
Denote by $p: G\times_H X \longrightarrow X$ the
composition of the projection $G/H \times X$ to $X$ with
the isomorphism (\ref{eq_equiva}).
As is well known \cite{S1}, one has a natural isomorphism
$$K_H (X) \longrightarrow K_G(G \times_H X)$$
by sending an $H$-equivariant vector bundle $V$ on $X$
to the $G$-equivariant vector bundle $G \times_H V$.
The composition $p \circ \pi$
of the projection $p: G\times_H X \longrightarrow X$
with the bundle map $\pi : G\times_H V \longrightarrow G\times_H X$
sends $(g, v)$ to $(g, \pi (v) )$. One easily checks that this gives rise to
a well-defined $G$-equivariant vector bundle on $X$, which induces
the induction functor
$ \mbox{Ind} _H^G : K_H (X) \longrightarrow K_G (X)$.
\end{demo}
We denote by $ \mbox{Res} ^G_H$ (or $ \mbox{Res} _H$,
or even $ \mbox{Res} $, if there is no ambiguity)
the {\em restriction functor} from $K_G(X)$ to $K_H(X)$
by regarding a $G$-equivariant vector bundle as
an $H$-equivariant vector bundle. Denote
%
\begin{eqnarray*}
{\cal F}_G (X) = \bigoplus_{n \geq 0} \underline{K}_{G_n} (X^n),
\quad
{\cal F}_G^q (X) = {\bigoplus}_{n \geq 0}
q^n \underline{K}_{G_n} (X^n)
\end{eqnarray*}
where $q$ is a formal variable counting the graded structure of $ {\cal F}_G (X)$.
We introduce the notion of $q$-dimension:
$$\dim_q {\cal F}_G (X) = \sum_{n \geq 0} q^n \dim K_{G_n} (X^n).
$$
Define a multiplication $\cdot$ on $ {\cal F}_G (X)$ by a composition
of the induction map and the K\"unneth isomorphism $k$:
%
\begin{eqnarray} \label{eq_mult}
\underline{K}_{G_n} (X^n) \bigotimes \underline{K}_{G_m} (X^m)
\stackrel{k }{\longrightarrow} \underline{K}
_{G_n \times G_m} (X^{n +m} )
\stackrel{\mbox{Ind}}{\longrightarrow} \underline{K}_{G_{n+m}} (X^{n +m} ).
\end{eqnarray}
%
We denote by $ 1 $ the unit in $K_{G_0} (X^0) \cong {\Bbb C} $.
On the other hand we can define a comultiplication
$\Delta$ on $ {\cal F}_G (X)$, given by a composition of
the inverse of the K\"unneth isomorphism and the
restriction from $G_n$ to $ G_k \times G_{n -k}$:
%
\begin{eqnarray*}
\underline{K}_{G_n} (X^n)
\longrightarrow \bigoplus_{m =0}^n
\underline{K}_{G_{m } \times G_{n -m} } (X^n)
\stackrel{k^{-1}}{ \longrightarrow} \bigoplus_{m =0}^n
\underline{K }_{G_{m }} (X^m) \bigotimes
\underline{K}_{G_{n -m}} (X^{n -m}).
\end{eqnarray*}
%
We define the counit $\epsilon : {\cal F}_G (X) \longrightarrow {\Bbb C} $
by sending $\underline{K}_{G_n} (X^n)$ $(n >0)$ to $0$
and $ 1 \in K_{G_0} (X^0) \approx {\Bbb C} $ to $1$.
The antipode can be also easily defined (see Remark~\ref{rem_hopf}).
\begin{theorem} \label{th_hopf}
With various operations defined as above,
$ {\cal F}_G (X)$ is a graded Hopf algebra.
\end{theorem}
To prove Theorem \ref{th_hopf}, we will need some preparation.
Given two subgroups $H$ and $L$ of a finite group $\Gamma$
and a $\Gamma$-space $Y$,
and let $V$ be an $H$-equivariant vector bundle on $Y$.
We denote the action of $H$ on $V$ by $\rho$.
Choose a set of representatives
$S$ of the double coset $H \backslash \Gamma /L$.
$H_s = sHs^{-1} \cap L$ is a subgroup of $L$ for $s \in S$.
We denote by $V_s$ the $H_s$-equivariant vector bundle on $Y$
which is the same as $V$ as a vector bundle and has
the conjugated action
%
\begin{eqnarray}
\rho^s (x) = \rho (s^{-1} x s), \;\;x \in H_s. \label{eq_conj}
\end{eqnarray}
\begin{lemma} \label{lem_ser}
$ \mbox{Res} _L \mbox{Ind} _H^{\Gamma} V$ is isomorphic to the direct sum
of the $L$-equivariant vector bundles $ \mbox{Ind} ^L_{H_s} V_s$ for all
$s \in H \backslash \Gamma/ L$.
\end{lemma}
One easily shows that one can extend $V$ in Lemma~\ref{lem_ser} to
the whole $K_H(Y)$. In the case $Y = pt$,
an $H$-equivariant vector bundle is just an $H$-module,
and the induction and restriction functors become the more familiar
ones in representation theory. In such a case, the above lemma is
standard (cf. e.g. \cite{Ser}, Proposition 2.2). In view of
our construction of the induction functor and the restriction
functor, the proof of Lemma~\ref{lem_ser} is essentially
the same as in the case $Y = pt$, for which we refer to \cite{Ser}.
\begin{demo}{Proof of Theorem \ref{th_hopf}}
We will show below that the comultiplication $\Delta$ is
an algebra homomorphism. The other Hopf algebra axioms are
easy to check.
We apply Lemma~\ref{lem_ser}
to the case $Y = X^N$, $L = G_m \times G_n$, $H = G_l \times G_r$,
and $\Gamma = G_N$, where $l + r = m + n = N$. In this
case the double coset $H \backslash \Gamma/L$ is isomorphic
to $(S_l \times S_r) \backslash S_N / (S_{m} \times S_{ n})$ since
$G_N = G^N \cdot S_N$ and $G_l \times G_r = G^N \cdot (S_l \times S_r)$.
Furthermore $(S_l \times S_r) \backslash S_N / (S_{m} \times S_{ n})$
is parametrized by the $2 \times 2$ matrices
%
\begin{eqnarray} \label{matr}
\left[ \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array} \right]
\end{eqnarray}
%
satisfying
\begin{eqnarray}
a_{ij} \in {\Bbb Z} _+, & i, j = 1, 2, \nonumber \\
a_{11} + a_{12} =m , & a_{21} + a_{22} = n, \label{eq_condition} \\
a_{11} + a_{21} =l , & a_{12} + a_{22} = r. \nonumber
\end{eqnarray}
We denote by $\cal M$ the set of all the $2 \times 2$ matrices
of the form (\ref{matr}) satisfying the conditions (\ref{eq_condition}).
Given $E \in \underline{K}_{G_l} (X^l), F \in \underline{K}_{G_r} (X^r),$
we calculate by using Lemma~\ref{lem_ser} as follows:
\begin{eqnarray} \label{eq_hopf}
\mbox{Res} _{(m,n)} \mbox{Ind} _{(l, r)}^N (E \boxtimes F ) &= &
\bigoplus_{A \in \cal M}
\mbox{Ind} ^{(m,n)}_A (1_{a_{11} } \otimes T^{(a_{12}, a_{21})} \otimes 1_{a_{22} })
( \mbox{Res} _{A'} (E \boxtimes F)) \nonumber \\
%
&= &
\bigoplus_{A \in \cal M} \mbox{Ind} ^m_{(a_{11}, a_{12})} F_1 \boxtimes
\mbox{Ind} ^n_{(a_{21}, a_{22})} F_2 .
\end{eqnarray}
%
Here the superscript or subscript ${(a,b)}$
is a short-hand notation for $G_a \times G_b$. $1_a$
stands for the identity operator from $\underline{K}_{G_a} (X^a)$
to itself, and $ \mbox{Ind} ^{a +b}_{(a,b)}$ for the induction functor from
$\underline{K}_{ G_a \times G_b} (X^{a +b})$
to $\underline{K}_{ G_{a +b} } (X^{a +b})$. $T^{(a,b)}$
denotes the canonical functor from
$\underline{K}_{ G_a} (X^{a})\otimes \underline{K}_{ G_b} (X^{b})$
to $\underline{K}_{ G_b} (X^{b}) \otimes \underline{K}_{ G_a} (X^{a})$
by switching the factors with an appropriate sign coming from the
$ {\Bbb Z} _2$ grading of K-theory. Given $A \in \cal M$ of the form
(\ref{matr}), the $A$ in the expressions $ \mbox{Res} _A$,
$ \mbox{Ind} _A$ etc stands for
$G_A \equiv G_{a_{11}} \times G_{a_{12}} \times G_{a_{21}} \times G_{a_{22}}$
while ${A'}$ for
$G_{A'} \equiv
G_{a_{11}} \times G_{a_{21}} \times G_{a_{12}} \times G_{a_{22}}$.
We wrote $(1 \otimes T^{(a_{12}, a_{21})} \otimes 1)
( \mbox{Res} _{A'} (E \boxtimes F))$
as $F_1 \boxtimes F_2$ instead of a direct
sum of terms of the form $F_1 \boxtimes F_2$ in order to simplify notation,
with $F_i$ $ (i =1,2)$ the corresponding elements in
$K_{G_{a_{i1} } \times G_{a_{i2} } }(X^{a_{i1} +a_{i2}} ) $.
Now it is straightforward to check
that the statement that $\Delta$ is an algebra homomorphism
is just a reformulation of the identity obtained by summing
Eq.~(\ref{eq_hopf}) over all possible $(m, n)$ with $m +n = N$.
\end{demo}
\section{A description of $ {\cal F}_G (X)$ as a graded algebra}
\label{sec_main}
In this section, we give an explicit description of $ {\cal F}_G (X)$
as a graded algebra which in particular tells us the
dimension of $\underline{K}_{G_n}(X^n)$.
%
\begin{theorem} \label{th_main}
As a $( {\Bbb Z} _+ \times {\Bbb Z} _2)$-graded algebra
$ {\cal F}_G^q (X)$ is isomorphic to the supersymmetric algebra
$ {\cal S} \left( \bigoplus_{ n \geq 1} q^n \underline{K}_G(X)
\right)$. In particular,
$$
\dim_q { {\cal F}_G } (X) =
\frac{ \prod_{r \geq 1} (1 + q^r)^{ \dim K^1_G (X)} }{ \prod_{r \geq 1}
(1 - q^r)^{ \dim K^0_G (X)} }.
$$
\end{theorem}
%
The supersymmetric algebra here is equal to the tensor product
of the symmetric algebra
$ S \left( \bigoplus_{ n \geq 1} q^n \underline{K}^0_G(X)
\right)$ and the exterior algebra
$\Lambda \left( \bigoplus_{ n \geq 1} q^n \underline{K}^1_G(X) \right)$.
\begin{demo}{Proof}
Take $a \in G_n$ of type $\rho = \{m_r(c)\}_{c,r}$
as in Sect.~\ref{sec_wreath}.
By Lemma~\ref{lem_cent} and Lemma~\ref{lem_fix} and
the K\"unneth formula, we have
%
\begin{eqnarray}
K( (X^n)^a / Z_{G_n} (a) )
& \approx & \bigotimes_{c \in G_*, r \geq 1}
\left(
\left( K(X^{c})^{ Z_G (c)}
\right)^{\bigotimes m_r (c)}
\right)^{S_{m_r (c)} } \nonumber\\
%
& \approx & \bigotimes_{c \in G_*, r \geq 1}
{\cal S}^{m_r (c)} ( K(X^{c} / Z_G (c)) ). \label{eq_symm}
\end{eqnarray}
Thus if we take a summation of Eq. (\ref{eq_symm})
over all conjugacy classes of $G_n$
and over all $n \geq 0$, we obtain:
\begin{eqnarray*}
{\cal F}^q_G (X)
& \approx & \bigoplus_{ n \geq 0}
\bigoplus_{ \{m_r(c)\}_{c,r} \in {\cal P}_n ( G_*)}
q^n \bigotimes_{c, r}
{\cal S}^{m_r (c)} (\underline{K} (X^{c} / Z_G (c)) )
\quad\quad\quad\mbox{by Theorem } \ref{th_tech}, \label{eq_key} \\
%
& = & \bigoplus_{\{m_r(c)\}_{c,r} \in {\cal P} ( G_*)}
\bigotimes_{c, r} {\cal S}^{m_r (c)}
\left(q^r \underline{K}(X^{c} / Z_G (c)) \right) \nonumber \\
%
& = & \bigoplus_{\{m_r\}_{r} } \bigotimes_{ r \geq 1} {\cal S}^{m_r}
\left( \bigoplus_{c \in G_*} q^r \underline{K} (X^{c} / Z_G (c))
\right)
\mbox{ by letting $m_r = \sum_{ c \in G_*} m_r (c)$},
\label{eq_sum} \\
%
& = & \bigoplus_{\{m_r\}_{r} } \bigotimes_{ r \geq 1}
{\cal S}^{m_r} (q^r \underline{K}_G (X) )
\; \quad\quad\quad\quad\quad\quad\quad\quad\quad
\quad\quad\quad\mbox{by Theorem } \ref{th_tech}, \label{eq_again} \\
%
& = & {\cal S} \left( \bigoplus_{ r \geq 1} q^r \underline{K}_G(X) \right).
\nonumber
\end{eqnarray*}
The statement concerning $\dim_q { {\cal F}_G } (X)$ is an immediate consequence.
\end{demo}
\begin{remark} \rm \label{rem_spec}
\begin{enumerate}
\item[1)] Theorem \ref{th_main} in the case when
$G$ is the trivial group (and so $G_n = S_n$)
is due to Segal \cite{S2}. Our proof is adapted from his to the
wreath product setting.
\item[2)] If $G$ acts on $X$ freely, so does $G^n$ on $X^n$.
Then we have the isomorphism $K (X/G) \approx K_G (X) $. Note that
$ G^n$ is a normal subgroup of the wreath product $G_n$
and the quotient $G_n / G^n$ is isomorphic to $S_n$.
By Proposition 2.1 in Segal \cite{S1} we see that
%
\begin{eqnarray} \label{eq_free}
K_{G_n} (X^n) \approx
K_{G_n / G^n} (X^n / G^n) = K_{S_n} ( (X/G)^n).
\end{eqnarray}
Therefore Theorem \ref{th_main} follows from the special case
$G = 1$ of Theorem \ref{th_main} (applying to $X/G$) and Eq. (\ref{eq_free}).
\item[3)] When $X$ is a point, $ {\cal F}_G (pt) = {\bigoplus}_{n \geq 0} R( {G_n})$,
and $\sigma^{\rho}, \rho \in {\cal P}( G_*)$
form a linear basis for $ {\cal F}_G (pt)$ (cf. \cite{M2}). In particular,
$$\dim_q {\cal F}_G (pt) = \prod_{r \geq 1} (1 - q^r)^{ -|G_*|}.
$$
\item[4)] One may reformulate Theorem~\ref{th_main} in terms of the
de-localized equivariant cohomology \cite{BBM, BC} via the Chern character.
\end{enumerate}
\end{remark}
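The graded dimension in Theorem~\ref{th_main} is likewise easy to expand; in the Python sketch below, $d_0 = \dim K^0_G (X)$ and $d_1 = \dim K^1_G (X)$ are inputs, and the choice $(d_0, d_1) = (3, 0)$ reproduces $\dim_q {\cal F}_G (pt)$ for a group with three conjugacy classes.
\begin{verbatim}
from math import comb

def dim_q_series(d0, d1, N):
    # coefficients of prod_r (1+q^r)^{d1} / (1-q^r)^{d0} up to q^N
    coeffs = [1] + [0] * N
    def mul(factor):
        new = [0] * (N + 1)
        for i, ci in enumerate(coeffs):
            if ci:
                for j in range(N + 1 - i):
                    new[i + j] += ci * factor[j]
        return new
    for r in range(1, N + 1):
        f = [0] * (N + 1)
        for k in range(N // r + 1):            # (1-q^r)^{-d0}
            f[r * k] = comb(d0 + k - 1, k)
        coeffs = mul(f)
        f = [0] * (N + 1)
        for k in range(min(d1, N // r) + 1):   # (1+q^r)^{d1}
            f[r * k] = comb(d1, k)
        coeffs = mul(f)
    return coeffs

print(dim_q_series(3, 0, 5))   # [1, 3, 9, 22, 51, 108]
\end{verbatim}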
\begin{remark} \label{rem_hopf} \rm
The Hopf algebra defined in Sect.~\ref{sec_hopf}
can be identified via the isomorphism in Theorem~\ref{th_main}
with the standard one on the supersymmetric algebra
$ {\cal S} \left( \bigoplus_{ n \geq 1} \underline{K}_G(X) [n]
\right)$
by showing the sets of primitive vectors correspond to each other.
Here $\underline{K}_G(X) [n]$ denotes the
$n$-th copy of $\underline{K}_G(X)$ (see Theorem~\ref{th_main}). The
antipode of the former space can be transferred via the isomorphism
from the latter one.
\end{remark}
\section{The $\lambda$-ring structure on $ {\cal F}_G (X)$}
\label{sec_ring}
Let us denote by $c_n$ the conjugacy class in $G_n$ which
has the type of an $n$-cycle and whose cycle product lies in the
conjugacy class $c \in G_*$. We consider the following
diagram of K-theory
maps:
\begin{eqnarray*}
\underline{K}_G (X)
& \stackrel{\boxtimes n}{\longrightarrow} & \underline{K}_{G_n} (X^n)
%
\stackrel{\phi_n}{\longrightarrow}
%
\bigoplus_{[a] \in (G_n)_*}
\underline{K} \left( (X^n)^a / Z_{G_n} (a) \right) \\
& \stackrel{\stackrel{pr}{\rightleftarrows} }{\iota}&
%
\bigoplus_{c \in G_*}
\underline{K} \left( (X^n)^{c_n} / Z_{G_n} (c_n) \right)
\stackrel{\vartheta}{\longrightarrow}
%
\bigoplus_{c \in G_*} \underline{K} \left( X^{c} / Z_G (c) \right) \\
%
& \stackrel{\phi}{\longleftarrow} & \underline{K}_G (X) .
\end{eqnarray*}
Given a $G$-equivariant vector bundle $V$,
we define a $G_n$-action on the $n$-th outer
tensor product $V^{\boxtimes n}$ by letting
%
\begin{eqnarray} \label{eq_act}
( (g_1, \ldots, g_n), s). v_1 \otimes \ldots \otimes v_n
= g_1 v_{ s^{-1}(1)} \otimes \ldots \otimes g_n v_{ s^{-1}(n)}
\end{eqnarray}
%
where $g_1, \ldots, g_n \in G, s \in S_n$.
Clearly $V^{\boxtimes n}$ endowed with such a $G_n$ action
is a $G_n$-equivariant vector bundle over $X^n$.
Sending $V$ to $V^{\boxtimes n}$ gives rise to
the K-theory map $\boxtimes n$. $\phi_n$ is the isomorphism
in Theorem~\ref{th_tech} when applying to the case $X^n$ with the
action of $G_n$. $pr$ is the projection to the direct sum
over the conjugacy classes of $G_n$ which are of the type of an
$n$-cycle while $\iota$ denotes the inclusion map. $\vartheta$
denotes the natural identification given by Lemma~\ref{lem_fix}.
Finally the last map $\phi$ is the isomorphism given
in Theorem~\ref{th_tech}.
We shall now define a $\lambda$-ring
structure on ${\cal F}_G (X)$. It suffices to define the
Adams operations $\varphi^n$ on ${\cal F}_G (X)$. We also
introduce several other K-theory operations which will
be needed later.
\begin{definition}
We define the following composition maps:
%
\begin{eqnarray*}
\psi^n &:= &
n \phi^{-1} \circ \vartheta \circ pr \circ \phi_n \circ \boxtimes n:
\underline{K}_G(X) \longrightarrow \underline{K}_G(X), \\
%
\varphi^n &:= &
n \phi_n^{-1} \circ \iota \circ {pr} \circ \phi_n \circ \boxtimes n:
\underline{K}_G(X) \longrightarrow \underline{K}_{G_n}(X^n), \\
%
{ch}_n &:= & \phi^{-1} \circ \vartheta \circ pr \circ \phi_n:
\underline{K}_{G_n}(X^n) \longrightarrow \underline{K}_G(X), \\
%
{\omega}_n &:= & n \phi_n^{-1} \circ \iota \circ \vartheta^{-1} \circ \phi:
\underline{K}_G(X) \longrightarrow \underline{K}_{G_n}(X^n).
\end{eqnarray*}
\end{definition}
We list some properties of these K-theory maps; the proofs are straightforward.
\begin{proposition} \label{prop_property}
The following identities hold:
%
\begin{eqnarray*}
ch_n \circ \omega_n = n \;Id,\quad
\omega_n \circ \psi^n = n \varphi^n,\quad
ch_n \circ \varphi^n = n \psi^n.
\end{eqnarray*}
\end{proposition}
Recall that $\sigma_n (c)$ is the class function of $G_n$ which takes
value $n \zeta_c$ at an $n$-cycle whose cycle-product lies in $c \in G_*$
and $0$ otherwise.
%
\begin{lemma} \label{lem_iden}
\begin{enumerate}
\item[1)] $\varphi^n (V) =
\sum_{c \in G_*} \zeta_c^{-1} V^{ \boxtimes n} \star \sigma_n (c).$
%
\item[2)] Both $\psi^n$ and $\varphi^n$ are additive K-theory maps.
\end{enumerate}
\end{lemma}
Note that the order of the centralizer of an element
lying in the conjugacy class $c_n$ is equal to $n \zeta_c$.
The first part of the above lemma
now follows from the definition of $\varphi^n $ and
Lemma~\ref{lem_orth}. The second part can be proved in exactly
the same way as in the symmetric group case \cite{A}. We record
here only a useful formula in the proof:
Given $V, W $ two $G$-equivariant vector bundles,
let $[V]$ denote the element of $K_G(X)$ corresponding to $V$.
(In general we use $V$ itself to denote the corresponding
element in $K_G(X)$ by abuse of notation). Then
%
\begin{eqnarray}
([V] -[W])^{\boxtimes n} = \sum_{j =0}^n (-1)^j
\mbox{Ind} _{G_{n -j} \times G_j}^{G_n}
[V^{\boxtimes (n -j)} \boxtimes W^{\boxtimes j} ] \in K_{G_n}(X^n).
\end{eqnarray}
Here $V^{\boxtimes (n -j)}$ carries the standard $G_{n -j}$-action
given by substituting $n$ with $n -j$ in Eq. (\ref{eq_act}), and
$G_j$ acts on $W^{\boxtimes j}$ by the tensor product of the
standard $G_j$-action with the sign representation of $G_j$.
The $\psi^n$'s are the Adams operations on $\underline{K}_G (X)$,
giving rise to the $\lambda$-ring structure on $\underline{K}_G (X)$.
Theorem \ref{th_main} ensures that $ {\cal F}_G (X)$, as a $\lambda$-ring,
is free and generated by $\underline{K}_G(X)$.
%
\begin{proposition} \label{prop_ring}
$ {\cal F}_G (X)$ is a free $\lambda$-ring generated
by $K_G(X) \bigotimes {\Bbb C} $,
with $\varphi^n$'s as the Adams operations.
\end{proposition}
\begin{remark} \rm
If $X$ is a point, then $K_{G_n} (pt) = R(G_n)$ and
$ {\cal F}_G (pt) = \bigoplus_{n \geq 0} R(G_n).$ Our result reduces to
the fact that $ {\cal F}_G (pt)$ is a free $\lambda$-ring
generated by $G_*$ \cite{M1}. In the case when $G =1$, the
proposition was due to Segal \cite{S2}.
\end{remark}
Denote by $\widehat{\cal F}_G^q (X)$ the completion
of $ {\cal F}_G^q (X)$ which allows formal infinite sums.
Given $V \in \underline{K}_G(X)$,
we introduce $H(V, q), E(V, q) \in \widehat{\cal F}_G^q (X)$ as follows:
%
\begin{eqnarray*}
H(V, q) & =& \bigoplus_{n \geq 0} q^n V^{\boxtimes n}, \\
%
E(V, q) & =& \bigoplus_{n \geq 0}
q^n V^{\boxtimes n} \star \underline{1^n }.
\end{eqnarray*}
%
\begin{proposition}
One can express
$H(V, q)$ and $E(V, q)$ in terms of $\varphi^r (V)$ as follows:
%
\begin{eqnarray}
H(V, q) & =& \exp \left( \sum_{r >0} \frac1r \varphi^r (V) q^r \right),
\label{eq_vo} \\
%
E(V, -q) & =& \exp \left(- \sum_{r >0} \frac1r \varphi^r (V) q^r \right).
\nonumber
\end{eqnarray}
\end{proposition}
\begin{demo}{Proof}
We shall prove Eq.~(\ref{eq_vo}) by using Eq.~(\ref{eq_triv}).
The formula for $E(V, -q)$ can be similarly obtained by using
Eq.~(\ref{eq_sign}).
%
\begin{eqnarray*}
H(V, q) & =& \bigoplus_{n \geq 0} q^n V^{\boxtimes n} \star
\underline{n} \\
%
& =& \bigoplus_{n \geq 0} q^n V^{\boxtimes n} \star
\left( \sum_{|| \rho || = n} Z_{\rho}^{ -1} \sigma^{\rho}
\right) \\
%
& =& \bigoplus_{n \geq 0} \sum_{|| \rho || = n}
\left( Z_{\rho}^{ -1} q^n V^{\boxtimes n} \star \sigma^{\rho}
\right) \\
%
& \stackrel{(\star)}{=}& \sum_{\{ m_r(c) \}} \,
\prod_{c \in G_*, r \geq 1}\frac1{m_r (c)!}
\left( \frac1r q^r \zeta_c^{ -1} V^{\boxtimes r} \star \sigma_r (c)
\right)^{m_r (c)} \\
%
& =& \exp \left( \sum_{ r \geq 1} \frac 1r q^r
\sum_{c \in G_*} \zeta_c^{ -1}
V^{\boxtimes r} \star \sigma_r (c)
\right) \quad\quad \mbox{ by Lemma } \ref{lem_iden}, \\
%
& =& \exp \left( \sum_{ r \geq 1} \frac 1r q^r \varphi^{r} (V)
\right) .
\end{eqnarray*}
Here the equation ($\star$) is understood by means of the multiplication
in $ {\cal F}_G (X)$ given by the composition (\ref{eq_mult}).
\end{demo}
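Let us record a formal consequence of the proposition (a standard check,
not needed in the sequel): since the two exponentials cancel, we have
\begin{eqnarray*}
H(V, q) \, E(V, -q) = 1
\end{eqnarray*}
in $\widehat{\cal F}_G^q (X)$, the analogue of the classical identity
$\sum_{i +j = n} (-1)^j h_i e_j = 0$, $n \geq 1$, between the complete
and elementary symmetric functions.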
\begin{corollary}
The $\lambda$-operations $\lambda^n$ on the $\lambda$-ring
$ {\cal F}_G (X)$ send $V \in \underline{K}_G(X)$ to
$V^{\boxtimes n} \star \underline{1^n }$.
\end{corollary}
Combining with the additivity of $\varphi^r$, we have the following.
%
\begin{corollary}
The following equations hold for $V, W \in \underline{K}_G(X)$:
\begin{eqnarray*}
H( -V, q) & =& E(V, -q) \\
H( V \bigoplus W, q) & =& H( V, q)H( W, q).
\end{eqnarray*}
\end{corollary}
\section{$ {\cal F}_G (X)$ and a Heisenberg superalgebra}
\label{sec_heis}
We see from Theorem~\ref{th_main} that $ {\cal F}_G (X)$ has the
same size as the tensor product of the Fock space of an infinite-dimensional
Heisenberg algebra of rank $\dim K^0_G(X)$ and that of
an infinite-dimensional
Clifford algebra of rank $\dim K^1_G(X)$. Our next step is
to actually construct such a Heisenberg/Clifford algebra,
which we will simply refer to as a Heisenberg superalgebra from now on.
The dual of $\underline{K}_G(X)$, denoted by $\underline{K}_G(X)^*$,
is naturally $ {\Bbb Z} _2$-graded as
identified with $\underline{K}^0_G(X)^* \bigoplus \underline{K}^1_G(X)^*$.
Denote by $\langle \cdot, \cdot \rangle$ the pairing
between $\underline{K}_G(X)^*$ and $\underline{K}_G(X)$.
For any $n, m \geq 1$ and $\eta \in \underline{K}_G(X)^*$,
we define an additive map
%
\begin{eqnarray} \label{eq_ann}
a_{-m} (\eta) : \underline{K}_{G_n}(X^n) \longrightarrow
\underline{K}_{G_{n-m} }(X^{n-m} )
\end{eqnarray}
as the composition
%
\begin{eqnarray*}
\underline{K}_{G_n}(X^n)
& \stackrel{\mbox{Res} }{\longrightarrow}
& \underline{K}_{G_{m} \times G_{n -m} }(X^n)
%
\stackrel{k^{-1} }{\longrightarrow}
\underline{K}_{G_{m}} (X^{m} )\bigotimes
\underline{K}_{G_{n -m} }(X^{n -m} ) \\
%
&\stackrel{{ch}_m \otimes 1 }{\longrightarrow}
& \underline{K}_{G} (X ) \bigotimes \underline{K}_{G_{n -m}} (X^{n -m} )
%
\stackrel{\eta \otimes 1 }{\longrightarrow}
\underline{K}_{G_{n -m}} (X^{n -m} ).
\end{eqnarray*}
On the other hand, we define for any
$m \geq 1$ and $V \in \underline{K}_G (X)$ an additive map
%
\begin{eqnarray} \label{eq_creat}
a_{m} (V) : \underline{K}_{G_{n-m} }(X^{n-m} )
\longrightarrow \underline{K}_{G_n}(X^n)
\end{eqnarray}
as the composition
%
\begin{eqnarray*}
\underline{K}_{G_{n-m} }(X^{n-m} )
&\stackrel{\omega_m (V) \boxtimes \cdot}{\longrightarrow} &
\underline{K}_{G_m }(X^m) \bigotimes \underline{K}_{G_{n -m}} (X^{n -m} ) \\
%
& \stackrel{k }{\longrightarrow} &
\underline{K}_{G_m \times G_{n -m}}(X^n )
%
\stackrel{\mbox{Ind}}{\longrightarrow}
\underline{K}_{G_n}(X^n).
\end{eqnarray*}
Let $\cal H$ be the linear span of the
operators $a_{-m} (\eta), a_{m} (V), m \geq 1$,
$\eta \in \underline{K}_G(X)^*,$ $V \in \underline{K}_G(X)$.
Clearly $\cal H$ admits a natural $ {\Bbb Z} _2$-gradation
induced from that on $\underline{K}_G(X)$ and $ \underline{K}_G(X)^* $.
Below we shall use $[ \cdot , \cdot ]$ to denote the
supercommutator as well. It is understood that $[a, b]$ is
the anti-commutator $ab +ba$ when $a, b \in \cal H$ are both odd elements
according to the $ {\Bbb Z} _2$-gradation.
\begin{theorem} \label{th_heisenberg}
When acting on $ {\cal F}_G (X)$, $\cal H$ satisfies the
Heisenberg superalgebra commutation relations,
namely for $m , l \geq 1,$ $ \eta, \eta^{'} \in \underline{K}_G(X)^*, $
$V,W \in \underline{K}_G(X)$,
%
\begin{eqnarray}
[ a_{-m} (\eta), a_{l} (V)] & =& l \delta_{m,l} \langle \eta, V \rangle ,
\label{eq_heis} \\
%
{[ a_{m} (W) , a_{l} (V)]} & =& 0, \label{eq_induct} \\
%
{[ a_{-m} ({\eta}), a_{-l} ({\eta}^{'}) ] }& = & 0. \label{eq_restrict}
\end{eqnarray}
%
Furthermore $ {\cal F}_G (X)$ is an irreducible representation of the
Heisenberg superalgebra.
\end{theorem}
\begin{demo}{Proof}
We may assume that $V, \eta$ are homogeneous, say of
degree $v$ and $e$ where $v, e \in \{0, 1\}$, according to the
$ {\Bbb Z} _2$-grading of $\underline{K}_G(X)$ and its dual.
We keep using the notations in the proof of Theorem~\ref{th_hopf}.
Given $E \in \underline{K}_{G_r} (X^r)$, we first observe by the definitions
(\ref{eq_ann}) and (\ref{eq_creat}) that $a_{-m}(\eta) a_l (V) E$
(resp. $(-1)^{ve} a_l (V) a_{-m}(\eta) E$) is given by the composition
from the top to the bottom along the left (resp. right)
side of the diagram below:
%
\begin{eqnarray*}
\begin{array}{rcl}
& \underline{K}_{G_r} (X^r) & \\
& \downarrow \!{ \omega_l (V) \boxtimes \cdot} & \\
%
& \underline{K}_{G_l} (X^l)
\bigotimes \underline{K}_{G_r} (X^r) & \\
{\footnotesize Ind}\! \swarrow & &
%
\searrow\!{\footnotesize 1 \otimes \mbox{Res} } \\
\underline{K}_{G_N} (X^N) & & \underline{K}_{G_l} (X^l)
\bigotimes \underline{K}_{G_m} (X^m)
\bigotimes \underline{K}_{G_{r -m}} (X^{r -m}) \\
%
\Vert \;\;\;\quad & & \quad \downarrow T^{(l,m)} \otimes 1 \\
%
\underline{K}_{G_N} (X^N) & & \underline{K}_{G_m} (X^m)
\bigotimes \underline{K}_{G_l} (X^l)
\bigotimes \underline{K}_{G_{r -m}} (X^{r -m}) \\
%
{\footnotesize \mbox{Res} }\!{\searrow} & &
{\footnotesize P} %
{\swarrow}\!{\footnotesize \mbox{Ind} \otimes 1 } \\
%
& \underline{K}_{G_m} (X^m) \bigotimes \underline{K}_{G_n} (X^n) & \\
& \;\;\;\downarrow
\!{\footnotesize ch_m \otimes 1} & \\
%
& \underline{K}_{G} (X) \bigotimes \underline{K}_{G_n} (X^n) & \\
& \downarrow \!{\footnotesize \eta \otimes 1} & \\
& \underline{K}_{G_n} (X^n) &
\end{array}
\end{eqnarray*}
%
Here and below it is understood that when a negative integer
appears in indices the corresponding term is $0$.
To simplify notations, we put $ \mbox{Res} $ (resp. $ \mbox{Ind} $) instead of
the composition $k^{-1} \circ \mbox{Res} $ (resp. $ \mbox{Ind} \circ k$)
in the above diagram and below.
We denote by ${\cal M}'$ the set of all the
$2 \times 2$ matrices of the form (\ref{matr})
satisfying (\ref{eq_condition}) except the following two matrices
%
\begin{eqnarray*}
\left[ \begin{array}{cc}
0 & m \\
l & r -m
\end{array} \right],
%
\quad
\left[ \begin{array}{cc}
m & 0 \\
l-m & r
\end{array} \right].
\end{eqnarray*}
As in the proof of Theorem~\ref{th_hopf}, we apply Lemma~\ref{lem_ser}
to the case $Y = X^N, \Gamma = G_N, H = G_l \times G_r,$
and $L = G_m \times G_n$, where $l + r = m +n = N$.
%
\begin{eqnarray}
&& \mbox{Res} _{(m, n)} \mbox{Ind} _{(l, r)}^N (\omega_l (V) \boxtimes E ) \nonumber \\
&= &
\bigoplus_{A \in \cal M} \mbox{Ind} ^{(m,n)}_A
(1_{a_{11} } \otimes T^{(a_{12}, a_{21})} \otimes 1_{a_{22} } )
( \mbox{Res} _{A'} (\omega_l (V) \boxtimes E ) ) \nonumber \\
%
& = &
\bigoplus_{A \in \cal M} \mbox{Ind} ^m_{(a_{11}, a_{12})} F_1 \boxtimes
\mbox{Ind} ^n_{(a_{21}, a_{22})} F_2 \nonumber \\
%
& = &
\bigoplus_{A \in {\cal M}'} \mbox{Ind} ^m_{(a_{11}, a_{12})} F_1 \boxtimes
\mbox{Ind} ^n_{(a_{21}, a_{22})} F_2 \nonumber \\
& & \bigoplus F_1 \boxtimes \mbox{Ind} ^n_{(l, r-m)} F_2
\bigoplus F_1 \boxtimes \mbox{Ind} ^n_{(l-m, r)} F_2 \nonumber \\
%
& = &
\bigoplus_{A \in {\cal M}'} \mbox{Ind} ^m_{(a_{11}, a_{12})} F_1 \boxtimes
\mbox{Ind} ^n_{(a_{21}, a_{22})} F_2 \nonumber \\
%
& & \bigoplus ( 1_m \otimes \mbox{Ind} _{(l, r- m)}^n )
( T^{(l,m)} \otimes 1_{r -m} )
( 1_l \otimes \mbox{Res} _{(m, r -m)}) (\omega_l (V) \boxtimes E )
\label{eq_mess} \\
%
& & \bigoplus (1_m \otimes \mbox{Ind} _{(l -m, r)}^n )
( \mbox{Res} _{(m, l -m)} \otimes 1_r ) (\omega_l (V) \boxtimes E ) . \nonumber
\end{eqnarray}
We get $0$ when applying the map $ch_m \otimes 1 $ to
$ \mbox{Ind} ^m_{(a_{11}, a_{12})} F_1 \boxtimes \mbox{Ind} ^n_{(a_{21}, a_{22})} F_2$
for $A \in {\cal M}'$ in (\ref{eq_mess})
by Lemma~\ref{lem_orth}. When applying
$( \eta \otimes 1 ) \circ ( ch_m \otimes 1 )$ to the
second term of the r.h.s. of (\ref{eq_mess}), we obtain
$ ( -1)^{ve} a_l (V) a_{-m}(\eta) E$. When applying $ch_m \otimes 1$ to
the third term of the r.h.s. of (\ref{eq_mess}),
we get $0$ if $ m \neq l$ by Lemma~\ref{lem_orth}.
In the case $m =l$, the third term of the r.h.s.
of (\ref{eq_mess}) is simply $\omega_l (V) \boxtimes E$. When applying
$(\eta \otimes 1 ) \circ ( ch_m \otimes 1)$ to it,
we get $l \langle \eta, V \rangle E$
by Proposition~\ref{prop_property}. Putting all these
pieces together, we have proved Eq.~(\ref{eq_heis}).
We may assume that $W$ is homogeneous of degree $w \in \{0, 1\}$
according to the $ {\Bbb Z} _2$-grading of $\underline{K}_G(X)$.
Eq.~(\ref{eq_induct}) is a consequence
of the transitivity of the induction functor:
%
\begin{eqnarray*}
&& \mbox{Ind} _{G_m \times G_{l +r} }^{G_{m + l +r} }
\left( \omega_m (W) \boxtimes \mbox{Ind} _{G_l \times G_r}^{G_{l +r} }
( \omega_l (V) \boxtimes E)
\right) \\
%
&= & \mbox{Ind} _{G_m \times G_{l +r} }^{G_{m + l +r} }
\left( \mbox{Ind} _{G_m \times G_l \times G_r}^{G_m \times G_{l +r} }
(\omega_m (W) \boxtimes \omega_l (V) \boxtimes E)
\right) \\
%
%
&= & \mbox{Ind} _{G_m \times G_l \times G_r}^{G_{m + l +r} }
(\omega_m (W) \boxtimes \omega_l (V) \boxtimes E) \\
%
&= & ( -1)^{vw} \mbox{Ind} _{G_l \times G_{m +r} }^{G_{m + l +r} }
\left( \mbox{Ind} _{G_l \times G_m \times G_r}^{G_l \times G_{m +r} }
( \omega_l (V) \boxtimes \omega_m (W) \boxtimes E)
\right) \\
%
&= & \mbox{Ind} _{G_l \times G_{m +r} }^{G_{m + l +r} }
\left( \omega_l (V) \boxtimes \mbox{Ind} _{G_m \times G_r}^{G_{m +r} }
( \omega_m (W) \boxtimes E)
\right).
\end{eqnarray*}
Similarly Eq.~(\ref{eq_restrict}) is a consequence
of the transitivity of the restriction functor.
The irreducibility of ${\cal F}_G(X)$ as a representation
of $\cal H$ follows immediately from the
$q$-dimension formula for ${\cal F}_G(X)$ given in
Theorem~\ref{th_main}.
\end{demo}
In the special case $G =1$, a Heisenberg superalgebra which differs
slightly from ours was constructed by Segal \cite{S2}.
Our proof follows his strategy as well.
\begin{remark} \rm
One may consider the enlarged space
$$
V_G := {\cal F}_G (pt) \bigotimes {\Bbb C} [R(G)]
$$
where $ {\Bbb C} [R(G)]$ is the group
algebra of the lattice $R(G)$. Note that $V_G$ is the underlying
space for a lattice vertex algebra \cite{B, FLM}.
In particular, when $G$ is a finite
subgroup of $SL_2(\Bbb C)$,
the space $V_G$ is closely related to the Frenkel-Kac-Segal
vertex representation of an affine Lie algebra. In this way we
are able to obtain a new link, in the spirit of the McKay
correspondence, between the finite subgroups of $SL_2(\Bbb C)$
and the affine Lie algebras.
Connections among symmetric functions,
$ {\cal F}_G (pt)$, $V_G$, and vertex operators will be developed
in a forthcoming paper.
More generally, one may consider
%
\begin{eqnarray} \label{eq_latt}
V_G(X) : = {\cal F}_G(X) \bigotimes {\Bbb C} [ K_G(X) ]
\end{eqnarray}
%
when $K_G(X)$ is torsion-free, where $ {\Bbb C} [ K_G(X) ] $
is the group algebra of the lattice $ K_G(X) $
(if $K_G(X)$ is not torsion-free
we replace $K_G(X)$ in (\ref{eq_latt}) by the free
part of $K_G(X)$ over $ {\Bbb Z} $).
\end{remark}
\section{The orbifold Euler characteristic $e(X^n, G_n)$}
\label{sec_orbi}
In the study of string theory on orbifolds,
Dixon {\em et al} \cite{DHVW}
came up with a notion of {\em orbifold Euler characteristic} defined
as follows:
$$e (X, G) = \frac1{|G|} \sum_{g_1 g_2 = g_2 g_1} e(X^{g_1, g_2}),$$
where $X$ is a smooth $G$-manifold.
$X^{g_1, g_2}$ denotes the common fixed point
set of $g_1$ and $g_2$ and $e(\cdot)$ denotes
the usual Euler characteristic.
One easily shows \cite{HH} that the orbifold Euler characteristic
can be equivalently defined as
%
\begin{eqnarray} \label{eq_euler}
e (X, G) = \sum_{ [g] \in G_*} e (X^g / Z_G(g) ) .
\end{eqnarray}
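As a simple illustration, take $X = {\Bbb C}^2$ with $G = {\Bbb Z}_2
= \{ \pm 1\}$ acting by scalar multiplication. The class of $1$
contributes $e({\Bbb C}^2/{\Bbb Z}_2) = 1$ since the quotient is
contractible, while the class of $-1$ has fixed point set $\{0\}$ and
contributes $e(\{0\}) = 1$. Hence
\begin{eqnarray*}
e\left({\Bbb C}^2, {\Bbb Z}_2\right) = 2 = |({\Bbb Z}_2)_*| .
\end{eqnarray*}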
Denote by $X^{(n)}$ the $n$-th symmetric product of $X$.
Recall that Macdonald's formula \cite{M}
relates $e( X^{(n)})$ to $e(X)$ as follows:
%
\begin{eqnarray}
\sum_{n \geq 0} e( X^{(n)}) q^n = ( 1 -q)^{- e(X)}. \label{eq_mac}
\end{eqnarray}
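For instance, when $X = S^2$ the symmetric product $(S^2)^{(n)}$ is
${\Bbb C} P^n$, and Eq.~(\ref{eq_mac}) reads
\begin{eqnarray*}
\sum_{n \geq 0} e\left( {\Bbb C} P^n \right) q^n
= \sum_{n \geq 0} (n+1) q^n = (1-q)^{-2},
\end{eqnarray*}
in agreement with $e(S^2) = 2$.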
The following theorem relates the orbifold Euler
characteristic $e(X^n, G_n)$ to $e(X, G)$.
%
\begin{theorem} \label{th_orbi} We have
$\sum_{n \geq 0} e(X^n, G_n) q^n = \prod_{ r =1}^{\infty}
( 1 -q^r)^{- e(X, G)} .$
\end{theorem}
\begin{demo}{Proof}
For an alternative proof see Remark~\ref{rem_another} below.
By the definition of the orbifold Euler characteristic,
Lemma~\ref{lem_cent} and Lemma~\ref{lem_fix}, we have
%
\begin{eqnarray*}
\sum_{n \geq 0} e(X^n, G_n) q^n
&= & \sum_{ n \geq 0} \sum_{ [a] \in (G_n)_*}
e \left( (X^n)^a / Z_{G_n} (a)
\right) q^n \quad\quad\quad\quad\quad\quad
\mbox{ by Eq. }(\ref{eq_euler}), \\
%
&\stackrel{(A)}= & \sum_{ n \geq 0} \sum_{\sum_{c,r} r m_r(c) = n}
\prod_{c \in G_*} \prod_{r \geq 1}
e \left( (X^c /Z_G(c))^{(m_r(c))}
\right) q^n \label{eq_ele} \\
%
&= & \prod_{c \in G_*} \prod_{r \geq 1}
\left(
\sum_{ m \geq 0}
e \left( (X^c /Z_G(c))^{(m)}
\right) q^{r m}
\right) \nonumber \\
%
&= &
\prod_{c \in G_*} \prod_{r \geq 1} ( 1 -q^r)^{ - e(X^c /Z_G(c))}
\;\; \mbox{ by applying }(\ref{eq_mac}) \mbox{ to } X^c /Z_G(c),
\label{eq_macd} \\
%
&= & \frac1{ \prod_{ r =1}^{\infty}
( 1 -q^r)^{ e(X, G)} }
\;\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\mbox{ by Eq. }(\ref{eq_euler}).
\end{eqnarray*}
Here Eq.~(A) follows from
Lemma~\ref{lem_cent} and Lemma~\ref{lem_fix}.
\end{demo}
In the case when $G$ is trivial, we recover a formula given in \cite{HH}.
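At the other extreme, taking $X = pt$ we have $e(pt, G_n) = |(G_n)_*|$,
and the theorem recovers the classical fact that the conjugacy classes
of the wreath product $G_n$ are parametrized by partition-valued
functions on $G_*$ of total size $n$, with generating function
$\prod_{r \geq 1} (1 - q^r)^{-|G_*|}$.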
%
\begin{remark} \rm \label{rem_another}
According to Atiyah and Segal \cite{AS} the orbifold Euler
characteristic can be calculated in terms of
equivariant K-theory:
\begin{eqnarray} \label{eq_diff}
e(X, G)= \dim K^0_G (X) - \dim K^1_G (X).
\end{eqnarray}
Theorem~\ref{th_orbi} follows from Theorem~\ref{th_main}
by applying Eq.~(\ref{eq_diff}) to $K_{G_n} (X^n)$.
\end{remark}
One is interested \cite{DHVW, HH} in finding a resolution of
singularities
$$\widetilde{X/G} \longrightarrow X/G$$
with the property
$$
e(X, G) = e(\widetilde{X/G}).
$$
We assume that $X$ is a smooth
quasi-projective surface with such a property in the
following discussions. Denote by $X^{[n]}$ the Hilbert
scheme of $n$ points on $X$.
According to G\"ottsche \cite{G}, the Euler characteristic
of $X^{[n]}$ is given by:
%
\begin{equation} \label{eq_got}
\sum_{n \geq 0} e( X^{[n]}) q^n
= \prod_{ r =1}^{\infty} ( 1 -q^r)^{- e(X)}.
\end{equation}
We note that $X^n/ G_n $ is naturally identified with $ (X/G)^n /S_n$.
The following commutative diagram
\begin{eqnarray*}
\begin{array}{ccc}
{\widetilde{X/G} }^{[n]} & \rightarrow
& \left(\widetilde{X/G}\right)^n /S_n \\
\downarrow & & \downarrow \\
X^n/ G_n & \equiv & (X/G)^n /S_n
\end{array}
\end{eqnarray*}
implies that the Hilbert scheme
${\widetilde{X/G} }^{[n]}$ is a resolution of singularities of $X^n/ G_n$
(indeed it is semismall).
By applying Eq.~(\ref{eq_got}) to $\widetilde{X/G}$ and
comparing with Theorem~\ref{th_orbi}, we have the following corollary.
%
\begin{corollary} \label{cor_gen}
Let $X$ be a smooth quasi-projective surface and assume that
there exists a smooth resolution ${\widetilde{X/G} }$ of singularities
of the orbifold $X/G$ such that $e(\widetilde{X/G} ) = e(X, G)$.
Then there exists a resolution of singularities of $X^n/ G_n$
given by $ {\widetilde{X/G} }^{[n]}$ satisfying
\begin{eqnarray*}
e(X^n, G_n) = e \left( {\widetilde{X/G} }^{[n]} \right).
\end{eqnarray*}
\end{corollary}
The assumption of the existence of the
resolution ${\widetilde{X/G} }$ of singularities
of $X/G$ above is necessary as this is the special case of
$X^n /G_n$ for $n =1$.
In the setting of Corollary~\ref{cor_gen}, we conjecture
that ${\widetilde{X/G} }^{[n]} $ is a {\em crepant}
resolution of $X^n /G_n$ provided that
${\widetilde{X/G} }$ is a crepant resolution of singularities
of $X/G$.
We consider a special case in detail.
Let $X$ be the complex plane $ {\Bbb C} ^2$
acted upon by a finite subgroup $G$ of $SL_2(\Bbb C)$.
Via the McKay correspondence \cite{Mc}, there
is a one-to-one correspondence between the finite subgroups of
$SL_2(\Bbb C)$ and the Dynkin diagrams of simply-laced
types $A_n$, $D_n$, $E_6,$ $E_7$ and $E_8$. Let us denote
by $\frak g$ the simple Lie algebra corresponding to $G$.
Under this correspondence
the rank of $\frak g$ is $|G_*| -1$.
The quotient $ {\Bbb C} ^2/ G$ has an isolated simple singularity at $0$. There
exists a minimal resolution $\widetilde{ {\Bbb C} ^2/ G }$ of $ {\Bbb C} ^2/ G$,
well known as an ALE space (cf. e.g. \cite{N}).
It is known that the second homology group $H_2 (\widetilde{ {\Bbb C} ^2/ G }, {\Bbb Z} )$
is isomorphic to the root lattice of $\frak g$, cf. e.g. \cite{N}.
In particular
%
\begin{equation} \label{eq_wonder}
\dim K \left(\widetilde{ {\Bbb C} ^2/ G } \right)
= \dim H \left(\widetilde{ {\Bbb C} ^2/ G } \right) = |G_*|.
\end{equation}
%
So if we apply Theorem~\ref{th_main} to the case $\widetilde{ {\Bbb C} ^2/ G }$
with a trivial group action, we have
%
\begin{eqnarray*}
\sum_{n \geq 0} \dim K_{S_n}
\left( \widetilde{ {\Bbb C} ^2/ G }^{n} \right) q^n
= \prod_{r \geq 1} (1 - q^r)^{ - |G_*|} .
\end{eqnarray*}
On the other hand,
by the Thom isomorphism \cite{S1} we have
$$
K_{G_n} (X^n) \approx K_{G_n} (pt ) = R(G_n).
$$
It follows from
part~3) of Remark~\ref{rem_spec} and Eq. (\ref{eq_wonder}) that
%
\begin{eqnarray*}
\sum_{n \geq 0} \dim {K}_{G_n} \left( {\Bbb C} ^{2n} \right) q^n
= \prod_{r \geq 1} (1 - q^r)^{ - |G_*|} .
\end{eqnarray*}
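Concretely, for $G = {\Bbb Z}_2$ (type $A_1$) we have $|G_*| = 2$ and
the series begins
\begin{eqnarray*}
\prod_{r \geq 1} (1 - q^r)^{-2} = 1 + 2q + 5q^2 + 10 q^3 + \cdots,
\end{eqnarray*}
the generating function for partitions with parts in two colors.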
We can obtain another numerical coincidence from a
somewhat different point of view as follows.
For a general quasi-projective smooth surface $Y$, a well-known result
of Fogarty says that the Hilbert
scheme of $n$ points on $Y$, denoted by $Y^{[n]}$,
is a smooth $2n$-dimensional manifold.
The Betti numbers of $Y^{[n]}$ were computed by G\"ottsche \cite{G}.
In particular, G\"ottsche's formula yields the dimension of the
homology group of $Y^{[n]}$:
%
\begin{eqnarray} \label{eq_dim}
\sum_{n \geq 0} \dim H( Y^{[n]}) q^n
= \prod_{r \geq 1} (1 - q^r)^{ - \dim H(Y)}.
\end{eqnarray}
%
We apply Eq. (\ref{eq_dim}) to the case $Y = \widetilde{ {\Bbb C} ^2/ G }$.
It follows from Eq. (\ref{eq_wonder}) that
%
\begin{eqnarray*}
\sum_{n \geq 0} \dim H( \widetilde{ {\Bbb C} ^2/ G }^{[n]}) q^n
= \prod_{r \geq 1} (1 - q^r)^{ - |G_*|}.
\end{eqnarray*}
Therefore we have proved the following.
%
\begin{proposition} \label{prop_iden}
The spaces $ {K}_{G_n} \left( {\Bbb C} ^{2n} \right) \bigotimes {\Bbb C} $,
$ {K}_{S_n} ( \widetilde{ {\Bbb C} ^2/ G }^{n} ) \bigotimes {\Bbb C} $
and $ H( \widetilde{ {\Bbb C} ^2/ G }^{[n]}) $ have the same dimension.
\end{proposition}
Since the minimal resolution $\widetilde{ {\Bbb C} ^2 /G}$ of $ {\Bbb C} ^2 /G$
has no odd dimensional homology, we have (cf. \cite{HH, N})
$$
e(\widetilde{ {\Bbb C} ^2 /G}) = \dim H(\widetilde{ {\Bbb C} ^2 /G})
= e ( {\Bbb C} ^2, G) =|G_*|.
$$
The following corollary
is a special and important case of Corollary~\ref{cor_gen}.
%
\begin{corollary} \label{cor_elem}
Let $G$ be a finite subgroup of $SL_2(\Bbb C)$
and $\widetilde{ {\Bbb C} ^2 /G}$ be the minimal resolution of $ {\Bbb C} ^2 /G$.
Then the Hilbert scheme of $n$ points on $\widetilde{ {\Bbb C} ^2 /G}$
is a resolution of singularities of $ {\Bbb C} ^{2n} /G_n$ whose Euler
characteristic is equal to the orbifold Euler
characteristic $e( {\Bbb C} ^{2n}, G_n)$.
\end{corollary}
\begin{remark} \rm \label{rem_dim}
The fact that the (graded) dimension of $K_{S_n} \left( X^{n} \right)$
equals that of the homology group of the Hilbert scheme
of $n$ points of $X$ for a more general surface $X$ holds
by the same argument as above. Bezrukavnikov and Ginzburg
\cite{BG} have proposed a way to establish
a direct isomorphism between these two spaces for an
algebraic surface $X$.
M.~de Cataldo and L.~Migliorini have recently and independently
established such an isomorphism for any complex surface \cite{CM}.
Proposition~\ref{prop_iden} can be generalized as follows.
Assume that $X$ is a quasi-projective surface acted upon by $G$ and
there exists a smooth resolution of singularities $\widetilde{X/ G }$
of $X/G$ such that the dimension of $K_G^i (X)$ $(i =0, 1)$
equals that of the even (resp. odd) dimensional homology
group of $\widetilde{X/ G }$.
Then we conclude that the dimension
of $K_{G_n} \left( X^n \right)$ is equal to that
of the homology group of the Hilbert scheme of $n$ points
on $\widetilde{X/ G }$.
We conjecture the existence of a natural isomorphism from
$ {K}_{G_n} \left( X^{n} \right) \bigotimes {\Bbb C} $
to $ H( \widetilde{ X/ G }^{[n]}) $, assuming the (necessary)
existence of an isomorphism from $K_G (X) \bigotimes {\Bbb C} $
to $ H( \widetilde{ X/ G }) $ or $K( \widetilde{ X/ G }) \bigotimes {\Bbb C} $.
We believe that this is just a first
indication of intriguing relations between the Hilbert
scheme of $n$ points on $\widetilde{X/ G }$
and the wreath product $G_n$.
We will elaborate on this on another occasion.
When $G$ is trivial, this reduces to the
well-known relations between the Hilbert scheme
of $n$ points and the symmetric group $S_n$ (cf.~\cite{N, BG}).
\end{remark}
\begin{remark} \rm
Let us consider a special case of Corollary~\ref{cor_elem}
by putting $G = {\Bbb Z} _2$. The wreath product $G_n$ in this case is exactly
the Weyl group of $B_n$ or $C_n$.
It is interesting to compare with a ``Hilbert scheme''
associated to a reductive group of type $B_n$ (or $C_n$)
defined in \cite{BG}.
\end{remark}
\noindent{\bf Acknowledgement.} The starting point of
this work is the insight of Segal \cite{S2}.
I am grateful to him for his permission to use his unpublished results.
I am also grateful to Igor Frenkel, who
first emphasized to me the importance of \cite{S2},
and to Mikio Furuta, whose generous help was indispensable
for my understanding of \cite{S2}, for inspiring and fruitful discussions.
I thank Mark de Cataldo for helpful conversations
on Hilbert schemes. I also thank the referee for
his comments and suggestions which helped to
improve the presentation of the paper.
This paper is a modified version of my MPI preprint
``Equivariant K-theory and wreath products'' from August 1998.
It is a pleasure to acknowledge
the warm hospitality of the Max-Planck Institut f\"ur Mathematik at Bonn.
\vspace{.1in}
\noindent{\bf Note 1 added.} The idea here of relating the
representation rings of wreath products associated to finite
subgroups of $SL_2 ( {\Bbb C} )$ and vertex representations of affine Lie
algebras has been fully developed in a paper ``Vertex
representations via finite groups and the McKay correspondence''
by I.~Frenkel, N.~Jing and the author. \vspace{.1in}
\noindent{\bf Note 2 added.} The connections among Hilbert
schemes, wreath products and K-theory have been developed in my
recent paper ``Hilbert schemes, wreath products, and the McKay
correspondence''.
\frenchspacing
A conformal measure for a discrete dynamical system made its first
appearance in the work of D. Sullivan, \cite{S}, in connection with
rational maps on the Riemann sphere, before the notion was coined and introduced
in a more general setting by M. Denker and M. Urbanski in
\cite{DU}. Conformal measures corresponding to various potential
functions now play important roles in the study of dynamical systems,
e.g. in holomorphic dynamics where they are used
as a tool to study the structure of the Julia set, among others. In
the setting of topological Markov shifts conformal measures arise
naturally via the thermodynamic formalism introduced by D. Ruelle, and
they constitute a key ingredient in the study of topological
Markov shifts with a countable number of states, \cite{Sa1},\cite{Sa2}. During the last
decade it has been realised that they also make their appearance in
connection with quantum statistical models that are based on
$C^*$-algebras and one-parameter group actions arising from local
homeomorphisms. This relation, which until recently only involved probability
measures on the dynamical system side and KMS states on the operator
algebra side, was extended in \cite{Th4} to a bijective correspondence
between general (possibly infinite) conformal measures and KMS weights. It is
this new connection between dynamical systems and $C^*$-algebras which
motivates the present study. Conformal measures are often required to
be probability measures, but for dynamical systems on non-compact
spaces, such as countable state Markov shifts for example, it is crucial to allow the measures to be
infinite. Nonetheless our knowledge of general, possibly infinite conformal measures is
lacking when it comes to the problem of determining the KMS weights in
the $C^*$-algebraic setting mentioned above, perhaps because of a reluctance to consider measures that are not conservative.
The
present paper seeks to improve on this by introducing a method to
construct conformal measures for a local homeomorphism on a locally compact
non-compact Hausdorff space which exploits the possibility of taking
limits along sequences that go to infinity. The method works in a
quite general non-compact setup and produces a non-zero fixed
measure for the dual Ruelle operator when the pressure of the potential is non-positive, and not
only when it is zero. More precisely, given a locally compact second countable
Hausdorff space $X$ and a local homeomorphism $\sigma : X \to X$, we
shall say that $(X,\sigma)$ is \emph{cofinal} when the equality
\begin{equation}\label{a25}
X = \bigcup_{i,j \in \mathbb N} \sigma^{-i}\left(
\sigma^j\left(U\right)\right)
\end{equation}
holds for every open non-empty subset $U \subseteq X$, and that
$(X,\sigma)$ is \emph{non-compact} when
\begin{equation}\label{c25}
\bigcup_{k=0}^{\infty} \sigma^k(U)
\end{equation}
is not pre-compact (i.e. does not have compact closure) for any open
non-empty subset $U \subseteq X$. Assuming that $(X,\sigma)$ has these
two properties and given any continuous real-valued
function $\phi : X \to \mathbb R$, referred to in the following as the
\emph{potential}, there is a natural notion of
pressure $\mathbb P(\phi)$, and the method to be described produces regular non-zero $\phi$-conformal measures when $\mathbb
P(-\phi) \leq 0$. The measures can be finite or infinite, but they are always
dissipative when $\mathbb P(-\phi) < 0$.
This construction is an extension of the
method by which positive eigenvectors of an infinite non-negative
matrix can be produced, and has its roots in the way harmonic functions are
constructed for Markov chains. See \cite{Th4} and \cite{Th5} for more details on this
connection. The extension we present here is also inspired by the PhD thesis of
Van T. Cyr, \cite{Cy}, where a similar method was used to produce
conformal measures for transient potentials with zero pressure on mixing
countable state Markov shifts. In that setting our results are more
complete because we obtain also a necessary condition for the
existence of a $\phi$-conformal measure: For a locally compact non-compact irreducible
Markov shift and a uniformly continuous potential $\phi$, a $\phi$-conformal measure exists if and only if $\mathbb P(-\phi) \leq 0$.
Besides the countable state Markov shifts a large and interesting
class of examples comes from transcendental entire maps. When there are no critical
points in the Julia set the map is a local homeomorphism on its Julia
set, giving rise to a dynamical system which is both cofinal and
non-compact, possibly after the
deletion of a single exceptional fixed point. A prominent
example of this is given by the exponential family $\lambda e^z$ and in Section
\ref{exp} we consider the hyperbolic members of this family obtained by
choosing $\lambda$ real and between $0$ and $e^{-1}$, in order to show that the method
we introduce combines nicely with those developed
by Mayer and Urbanski in \cite{MU1}, \cite{MU2}, and gives rise to conformal
measures for the geometric potential $\log |z|$, not only when the
pressure function is zero, which by a result in \cite{MU1} occurs
exactly when the inverse temperature $t$ is equal to the Hausdorff
dimension $HD$ of the radial Julia set, but for all $t \geq HD$. The
additional conformal measures arise immediately from the general
results once we have shown that the notion of pressure employed here
agrees with that used by Mayer and Urbanski.
As indicated above our interest in conformal measures stems
from the bijective correspondence between $\beta
\phi$-conformal measures on $X$ and gauge invariant $\beta$-KMS
weights for the
one-parameter group arising from the
triple $(X,\sigma, \phi)$ by a well known canonical construction. In
the final section we give a brief introduction to this connection and summarise
the consequences of our results in the operator algebra setting. In
particular, they allow us to extend the results from \cite{Th4}
concerning the $\beta$-KMS weights of the gauge action on a simple graph
algebra to actions coming from an arbitrary uniformly continuous
potential.
\section{Conformal measures}\label{Sec1}
Throughout the paper $X$ is a locally compact second countable Hausdorff space, $\sigma : X \to
X$ is a local homeomorphism and $\phi : X \to \mathbb R$ is a
continuous function.
\begin{defn}\label{a38} A regular and non-zero Borel measure $m$ on
$X$ is a \emph{$\phi$-conformal measure} when
\begin{equation}\label{a39}
m\left(\sigma(A)\right) = \int_A e^{\phi(x)} \ d m(x)
\end{equation}
for every Borel subset $A \subseteq X$ with the property that $\sigma
: A \to X$ is injective.
\end{defn}
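A standard example, included here only for illustration (note that the
underlying space is compact), is the full one-sided shift $\sigma$ on
$X = \{1, \ldots, N\}^{\mathbb N}$ with the constant potential $\phi =
\log N$: the uniform Bernoulli measure $m$, which assigns mass $N^{-k}$
to each cylinder specified by $k$ coordinates, satisfies
$m(\sigma(A)) = N m(A) = \int_A e^{\phi} \ dm$ for every Borel set $A$
on which $\sigma$ is injective, and is therefore $\phi$-conformal.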
Let $C_c(X)$ be the space of continuous compactly supported functions
on $X$. We study conformal measures via the \emph{Ruelle operator}
$L_{\phi} : C_c(X) \to C_c(X)$ defined
such that
\begin{equation}\label{a47}
L_{\phi}(f)(x) = \sum_{y \in \sigma^{-1}(x)} e^{\phi(y)} f(y) .
\end{equation}
Since $L_{\phi} : C_c(X) \to C_c(X)$ is linear and takes non-negative
functions to non-negative functions it follows from the
Riesz representation theorem that $L_{\phi}$ defines a map
$m \mapsto L_{\phi}^*(m)$ on the set of regular Borel measures $m$ on $X$ by the
requirement that
\begin{equation*}\label{a40}
\int_X f \ d L_{\phi}^*(m) \ = \ \int_X L_{\phi}(f) \ dm
\end{equation*}
for all $f \in C_c(X)$.
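To get a feeling for the dual map, note that for a point mass
$\delta_x$ the definition gives $\int_X f \ d L_{\phi}^*(\delta_x) =
L_{\phi}(f)(x)$, i.e.
\begin{equation*}
L_{\phi}^*\left(\delta_x\right) = \sum_{y \in \sigma^{-1}(x)} e^{\phi(y)} \delta_y ;
\end{equation*}
the dual Ruelle operator spreads the mass at a point over its
$\sigma$-preimages with weights $e^{\phi}$.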
\begin{lemma}\label{a101} Let $m$ be a non-zero regular Borel
measure on $X$. The following are equivalent:
\begin{enumerate}
\item[1)] $m$ is ${\phi}$-conformal.
\item[2)] $ L_{-\phi}^*(m) = m$.
\item[3)] When $U \subseteq X$ is an open subset such that $\sigma: U
\to X$ is injective and $g \in C_c(U)$,
$$
\int_{\sigma(U)} g \circ \left(\sigma|_U\right)^{-1} \ dm = \int_U ge^{\phi} \ d m.
$$
\item[4)] There is a covering $X = \bigcup_i U_i$ of $X$ by open
subsets $U_i$ such that, for every $i$, $\sigma: U_i
\to X$ is injective and for all $g \in C_c(U_i)$,
$$
\int_{\sigma(U_i)} g \circ \left(\sigma|_{U_i}\right)^{-1} \ dm = \int_{U_i} ge^{\phi} \ d m.
$$
\end{enumerate}
\end{lemma}
\begin{proof} The equivalence between 1) and 2) was observed by
Renault in \cite{Re2}, and a version of it appeared, in a slightly
different setting, already in \cite{DU} where conformal measures
were first introduced. Since the lemma is a crucial tool in the
following, we present the proof here. We denote the characteristic function of a set $S$ by
$1_S$.
1) $\Rightarrow$ 3): Let $U$ be as in 3). By linearity and continuity it suffices to
establish the desired identity when $g = 1_A$ for some Borel
subset $A \subseteq
U$. In this case the identity is the same as (\ref{a39}).
3) $\Rightarrow$ 4) is trivial.
4) $\Rightarrow$ 2): By linearity it suffices to
show that
$$
\int_X L_{-\phi}(f) \ dm = \int_X f \ dm
$$
when $f$ is supported in $U_i$. In that case the definition of
$L_{-\phi}$ shows that
$$
\int_X L_{-\phi}(f) \ dm = \int_{\sigma(U_i)}
e^{-\phi\left(\left(\sigma|_{U_i}\right)^{-1}(x)\right)} f\left(
\left(\sigma|_{U_i}\right)^{-1}(x)\right) \ dm,
$$
which, thanks to 4) is equal to $\int_{U_i} f \ dm = \int_X f \ dm$.
2) $\Rightarrow$ 1): To establish (\ref{a39}) we write $A = \sqcup_i
A_i$ as a disjoint union of Borel sets $A_i$ such that for each $i$ there is an open and relatively compact subset $U_i \subseteq X$
where $\sigma$ is injective, and $A_i \subseteq U_i$. Then
$m\left(\sigma(A)\right) = \sum_i m\left(\sigma(A_i)\right)$
and
$$
\int_A e^{\phi(x)} \ dm(x) = \sum_i \int_{A_i} e^{\phi(x)} \ dm(x).
$$
It suffices therefore to establish (\ref{a39}) when $A \subseteq U$
for some open and relatively compact subset $U \subseteq X$ where $\sigma$ is injective. Let
$V \subseteq U$ be open. It
follows from 2) that
$$
\int_{\sigma(V)}
e^{-\phi\left(\left(\sigma|_U\right)^{-1}(x)\right)} f\left(
\left(\sigma|_U\right)^{-1}(x)\right) \ dm = \int_V f \ dm,
$$
when $f \in C_c(V)$. Inserting for $f$ an increasing sequence from $C_c(V)$
converging up to $e^{\phi}1_V$, we find that (\ref{a39}) holds when
$A=V$ is an open subset of $U$. Since both measures,
$$
A \mapsto m\left(\sigma (A)\right) \ \text{and} \ A \mapsto \int_{A} e^{\phi} \
dm
$$
are finite Borel measures on $U$, they are also regular. It follows therefore that (\ref{a39}) holds for every
Borel subset $A$ of $U$.
\end{proof}
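Returning to the example following Definition~\ref{a38}, the full
$N$-shift with $\phi = \log N$ and $m$ the uniform Bernoulli measure,
condition 2) can be checked directly on cylinder functions: for a word
$w = w_1 w_2 \cdots w_k$,
\begin{equation*}
\int_X L_{-\phi}\left( 1_{Z(w)}\right) \ dm
= N^{-1} m\left( Z(w_2 \cdots w_k)\right) = N^{-k} = \int_X 1_{Z(w)} \ dm ,
\end{equation*}
so that $L_{-\phi}^*(m) = m$, in accordance with Lemma~\ref{a101}.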
\section{Pressures associated with a cofinal local homeomorphism}
In this section we add first the assumption that $\sigma$ is cofinal,
i.e. for every open non-empty subset $U$ of $X$, the identity
(\ref{a25}) holds. We will use the notation $\|f\|$ for the supremum norm of
a bounded function $f$ on $X$, and we introduce the notation $\phi_n$
for the function
$$
\phi_n(x) = \sum_{j=0}^{n-1} \phi\left(\sigma^j(x)\right) .
$$
\begin{lemma}\label{b26} Assume that $(X,\sigma)$ is cofinal. Let $K$ be a compact subset of $X$ and $g \in
C_c(X)$ a non-zero and non-negative function. There is an $N \in \mathbb N$ and a constant $C > 0$ such that
\begin{equation}\label{b27}
\left|L^n_{\phi}(f)(x)\right| \leq C\left\|f\right\|
\sum_{1 \leq i,j \leq N} L^{n+ j-i}_{\phi}(g)(x)
\end{equation}
for all $n > N$, all $x \in X$, and every function $f \in C_c(X)$ with
support in $K$.
\end{lemma}
\begin{proof} There is a $\delta > 0$ and an open non-empty subset $U
\subseteq X$ such that $g(x) \geq \delta$ for all $x \in U$. Since
$\sigma$ is cofinal and $K$ compact there is an $N \in \mathbb N$ such that
\begin{equation}\label{c27}
K \subseteq \bigcup_{1 \leq i,j \leq N}
\sigma^{-i}\left(\sigma^j(U)\right) .
\end{equation}
Set
\begin{equation*}
\begin{split}
&C_1 = \sup \left\{ e^{-\phi_i(z)}: \ 1 \leq i \leq N, \ z \in \operatorname{supp} g\right\} , \\
& C_2 = \sup \left\{ e^{\phi_i(z)}: \
1 \leq i \leq N,
\ z \in K\right\}.
\end{split}
\end{equation*}
Consider an element $f \in C_c(X)$ with support in
$K$ and let $x \in X, n \in \mathbb N$, $n > N$. Since
$\sigma^{-n}(x) \cap K$ is a finite set, it follows from (\ref{c27})
that we can write $ \sigma^{-n}(x) \cap K$ as a disjoint union
$$ \sigma^{-n}(x) \cap K = \sqcup_{1 \leq i,j \leq N} B_{i,j},
$$
where $B_{i,j} \subseteq \sigma^{-i}\left(\sigma^j(U)\right)$ are finite sets.
For $y \in B_{i,j}$ we choose $a_y \in U$ such that $\sigma^i(y) = \sigma^j(a_y)$. Then
\begin{equation}\label{c30}
L_{\phi}^n(f)(x) = \sum_{y \in \sigma^{-n}(x) \cap K} e^{\phi_n(y)}
f(y) = \sum_{1 \leq i,j \leq N} \sum_{y\in B_{i,j}} e^{\phi_n(y)}
f(y) .
\end{equation}
For $y \in B_{i,j}$,
$$
e^{\phi_n(y)}|
f(y)| = e^{\phi_i(y)}e^{\phi_{n-i}(\sigma^i(y))} |f(y)| =
e^{\phi_i(y)}e^{\phi_{n-i}(\sigma^j(a_y))}|f(y)|.
$$
Since $|f(y)| \leq \delta^{-1} \left\|f\right\|g(a_y)$, we find that
\begin{equation}\label{c33}
\begin{split}
&e^{\phi_n(y)}|
f(y)| \leq \delta^{-1}\left\|f\right\|
e^{\phi_i(y)}e^{\phi_{n-i}(\sigma^j(a_y))}g(a_y)\\
& =\delta^{-1} \|f\|
e^{\phi_i(y)}e^{-\phi_j(a_y)}e^{\phi_{n-i+j}(a_y)}g(a_y) \leq
\left\|f\right\| \delta^{-1}C_1C_2e^{\phi_{n-i+j}(a_y)}g(a_y) .
\end{split}
\end{equation}
To control the ambiguity of the association $y \mapsto
a_y$,
note that
$$
M_i = \sup_{y \in K} \# \left\{ z \in K : \ \sigma^i(z) = \sigma^i(y)
\right\}
$$
is finite for every $i \in \mathbb N$. Set $M = \max_{1 \leq i \leq N}
M_i$. Then
\begin{equation}\label{c31}
\begin{split}
\sum_{y \in B_{i,j}} e^{\phi_{n-i+j}(a_y)}g(a_y) \leq
M \sum_{ z \in
\sigma^{-n-j+i}(x)}e^{\phi_{n-i+j}(z)}g(z) = M L^{n-i+j}_{\phi}(g)(x).
\end{split}
\end{equation}
By combining (\ref{c30}), (\ref{c33}) and (\ref{c31}) we obtain
(\ref{b27}) if we set
$C = \delta^{-1}C_1C_2 M$.
\end{proof}
\begin{cor}\label{b28} Assume that $\sigma$ is cofinal. Let $f,g \in
C_c(X)$ be non-zero and non-negative. It follows that
$$
\limsup_n \left(L^n_{\phi}(f)(x)\right)^{\frac{1}{n}} = \limsup_n
\left(L^n_{\phi}(g)(x)\right)^{ \frac{1}{n}}
$$
for all $x\in X$.
\end{cor}
\begin{proof} Let $K$ be a compact set containing the support of $f$ and
let $C$ and $N$ be the numbers from Lemma \ref{b26}. If $\lambda > 0$ and $L^n_{\phi}(g)(x) \leq \lambda^n$
for all $n \geq k$, it follows from (\ref{b27}) that for all $n \geq
k + N $ there is a $j \in [n-N,n+N]$ such that
$$
\left( C \|f\| N^2\right)^{-1} L^n_{\phi}(f)(x) \leq \lambda^j .
$$
Thus
$$
\left( C \|f\| N^2\right)^{-\frac{1}{n}}
\left(L^n_{\phi}(f)(x)\right)^{\frac{1}{n}} \leq \alpha_n \lambda,
$$
for all large $n$,
where $\alpha_n = \max \left\{ \lambda^{\frac{j}{n} -1} : \ n-N \leq j
\leq n+N \right\}$. Since $ \lim_{n \to \infty} \alpha_n =\lim_{n \to \infty} \left( C \|f\|
N^2\right)^{-\frac{1}{n}} = 1$ we
conclude first that
$\limsup_n
\left(L^n_{\phi}(f)(x)\right)^{ \frac{1}{n}} \leq \lambda$ and then
that
$$
\limsup_n
\left(L^n_{\phi}(f)(x)\right)^{ \frac{1}{n}} \leq \limsup_n
\left(L^n_{\phi}(g)(x)\right)^{ \frac{1}{n}}.
$$
This argument shows that if one of the two limits superior is finite
then so is the other, and by symmetry that they agree. Hence if one is
infinite, so is the other.
\end{proof}
In the following we denote by $C_c(X)^+$ the set of non-negative and
non-zero elements of $C_c(X)$. Using the convention that $\log 0 =
-\infty$ and $\log \infty = \infty$, it follows from Corollary
\ref{b28} that when $\sigma$ is cofinal,
we can define
$$
\mathbb P_x(\phi)= \log \left( \limsup_n
\left(L^n_{\phi}(f)(x)\right)^{\frac{1}{n}}\right) \ \in \ \left[-\infty,\infty\right],
$$
independently of which element $f \in
C_c(X)^+$ we use. We subsequently define the \emph{pressure} $\mathbb P(\phi)$ of
$\phi$ to be
$$
\mathbb P(\phi) = \sup_{x \in X} \mathbb P_x(\phi) ,
$$
which is again an extended real number, i.e. $\mathbb P(\phi) \in [-\infty,\infty]$.
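As an elementary illustration, with $X$ compact and hence outside the
non-compact setting considered below, take the full shift on two symbols
and a constant potential $\phi \equiv c$. With $f = 1_X$ we get
$L^n_{\phi}(f)(x) = 2^n e^{nc}$ for every $x$, and hence
$\mathbb P_x(\phi) = \mathbb P(\phi) = c + \log 2$.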
\subsection{The pressures associated with a cofinal Markov
shift}\label{MS1}
In this section we relate the pressure defined above to the Gurevich
pressure known from topological Markov shifts. Because we can, and since
it is important for the applications to graph $C^*$-algebras, we work
in the same generality as in \cite{Th4} and \cite{Th5}, rather than restricting
the attention to irreducible or mixing Markov shifts.
Let $G$ be a directed graph with vertex set $V_G$ and edge
set $E_G$. We assume that both $V_G$ and $E_G$ are countable, and that
$G$ is `row-finite', in the sense that the number
of edges emitted from any vertex is finite. Furthermore, we assume
that there are no sinks, i.e. every vertex emits an edge.
An \emph{infinite path} in $G$ is an element $p = (p_i)_{i=1}^{\infty} \in
\left(E_G\right)^{\mathbb N}$ such that $r(p_i) = s(p_{i+1})$ for all
$i$, where we have used the notation $r(e)$ and $s(e)$ for the range and source of an
edge $e \in E_G$, respectively. A finite path $\mu = e_1e_2 \dots e_n$ is
defined similarly, and we extend the range and source maps to finite
paths such that $s(\mu) = s(e_1)$ and
$r(\mu) = r(e_n)$. The number of edges in $\mu$ is its \emph{length}
and we denote it by $|\mu|$. We let $\mathcal P(G)$ denote the set of
infinite paths in $G$ and extend the source map to $\mathcal P(G)$ such that
$s(p) = s(p_1)$ when $p = (p_i)_{i=1}^{\infty}$. To describe the
topology of $\mathcal P(G)$, let $\mu =
e_1e_2\cdots e_n$ be a finite path in $G$. We can then consider the \emph{cylinder}
$$
Z(\mu) = \left\{p \in \mathcal P(G) : \ p_i = e_i, \ i =1,2, \dots,n
\right\}.
$$
$\mathcal P(G)$ is a totally disconnected second countable locally compact
Hausdorff space in the topology for which the collection of cylinders is
a base, \cite{KPRR}. The left shift $\sigma : \mathcal P(G) \to
\mathcal P(G)$ is the
map defined such that
$$
\sigma\left(e_0e_1e_2 e_3 \cdots\right) = e_1e_2e_3 \cdots .
$$
Note that $\sigma$ is a local homeomorphism. It is not difficult to
see that $(\mathcal P(G),\sigma)$ is cofinal if and only if $G$ is cofinal in the
sense introduced in \cite{KPRR}: If $v\in V_G$ and $p \in \mathcal P(G)$, there
is a finite path $\mu$ in $G$ and an $i \in \mathbb N$ such that
$s(\mu) = v$ and $r(\mu) = s(p_i)$. In the following we assume that
$G$ is a countable graph such that
\begin{enumerate}
\item[$\bullet$] $G$ is row-finite,
\item[$\bullet$] $G$ has no sinks, and
\item[$\bullet$] $G$ is cofinal.
\end{enumerate}
For simplicity we summarise these conditions by saying that $G$ is
\emph{cofinal} when they hold. We denote by $NW_G$ the set of vertexes
$v$ which are
contained in a loop, meaning that there is a finite path $\mu$ such that
$s(\mu) = r(\mu) = v$. These vertexes are called \emph{non-wandering},
and together with the edges they emit they constitute an irreducible
(or strongly connected) subgraph of $G$ which
we also denote by $NW_G$.
For a continuous real-valued function
$\phi : \mathcal P(G) \to \mathbb R$ the \emph{Gurevich pressure}
$P_{NW_G}(\phi)$ of the restriction of $\phi$ to $\mathcal P(NW_G)$ is defined to be
\begin{equation}\label{c24}
\limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) =y} e^{\phi_n(y)} 1_{[v]}(y),
\end{equation}
where $v$ is a vertex in $NW_G$ and $[v] = \left\{ x \in \mathcal P(G)
: \ s(x) = v\right\}$, cf. e.g. \cite{Sa2}. It has been shown in increasing
generality by O.Sarig, \cite{Sa1}, \cite{Sa2}, that the 'limsup' above
is a limit which is independent of the choice of $v$ when $G$ is mixing (or primitive), and $\phi$ satisfies some
condition on its variation. In the general irreducible case, the
sequence involved in (\ref{c24}) will not converge, but the
independence of the choice of vertex is a general fact, at least when
$\phi$ is uniformly continuous in an appropriate metric. To make this
precise, set
$$
\operatorname{var}_k(\phi) = \sup \left\{\left|\phi(x) -\phi(y)\right|: \ x_i =
y_i, \ i =1,2,\dots, k \right\} .
$$
We shall work with the assumption that
$\lim_{k \to \infty} \operatorname{var}_k (\phi) = 0$. Note that this condition
is implied by the uniform continuity of $\phi$ with respect to any
metric $d$ on $\mathcal P(G)$ with the property that for any $\delta > 0$
there is a $k \in \mathbb N$ such that $x_i
= y_i, i = 1,2,\cdots,k \ \Rightarrow \ d(x,y) \leq \delta$.
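For instance, the metric $d(x,y) = 2^{-\min\left\{ i : \ x_i \neq y_i
\right\}}$, with $d(x,y) = 0$ when $x = y$, has this property, so every
$\phi$ which is uniformly continuous with respect to $d$ satisfies
$\lim_{k \to \infty} \operatorname{var}_k(\phi) = 0$.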
The next lemma follows from Proposition 3.2 in \cite{Sa2} when $G$ is
primitive and $\phi$ has the Walters property.
\begin{lemma}\label{c19} Assume that $G$ is cofinal and that $\lim_{k \to \infty} \operatorname{var}_k(\phi) =
0$. The value of (\ref{c24}) does not depend on
the choice of $v \in NW_G$, and for every finite path $\mu$ in $NW_G$,
$$
P_{NW_G}(\phi) = \limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{Z(\mu)}(y) .
$$
\end{lemma}
\begin{proof} Let $v = s(\mu)$. Then
$$
\sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{Z(\mu)}(y) \leq \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{[v]}(y)
$$
for all $n$, and hence
\begin{equation}\label{c20}
\limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{Z(\mu)}(y) \ \leq \ \limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{[v]}(y) .
\end{equation}
Let $w $ be any vertex in $NW_G$ and let $\epsilon > 0$. Since $NW_G$ is irreducible, there is a finite path $p$ in $G$ such that $s(p) = r(\mu), \ r(p)
= w$. By
assumption there is a $ k \in \mathbb N$ such that $\operatorname{var}_k(\phi) \leq
\epsilon$. Let $w_1,w_2, \cdots, w_l$ be the vertexes that can be reached from
$w$ by a path of length $k$. For each $w_i$ we choose a finite path
$q_i$ such that $s(q_i) = w_i, \ r(q_i) = s(\mu)$. Let $n > k$ and set
$$
M_i = \left\{y \in [w]: \ \sigma^n(y) = y, \ r(y_k) = w_i \right\},
$$
and define $\chi_i : M_i \to \left\{y \in Z(\mu) : \ \sigma^{n+L_i}(y)
= y \right\}$, where $L_i = |\mu p| + |q_i|+ k$, such that
$\chi_i(y)$ is the infinite path which repeats the loop consisting of
$\mu$, followed by $p$, then the first $n+k$ edges of $y$, and ending with
$q_i$. In symbols,
$$
\chi_i(y) = \left(\mu py_{[1,n+k]}q_i\right)^{\infty} .
$$
By using that $\operatorname{var}_k(\phi) \leq \epsilon$ we find that
$$
e^{\phi_n(y)} \leq e^{n \epsilon} e^{\phi_n\left(\sigma^{|\mu p|}\left(\chi_i(y)\right)\right)} .
$$
By comparing $\phi_n\left(\sigma^{|\mu p|}\left(\chi_i(y)\right)\right)$ to
$\phi_{n+L_i}\left(\chi_i(y)\right)$ one sees that
$$
e^{\phi_n\left(\sigma^{|\mu p|}\left(\chi_i(y)\right)\right)} \leq
C_ie^{\phi_{n+L_i}\left(\chi_i(y)\right)},
$$
where
$$
C_i = \sup \left\{ e^{-\phi_{|q_i|+k}(z)} : \ z \in
[w] \right\} \cdot \sup \left\{
e^{-\phi_{|\mu p|}(z)} : \ z \in Z(\mu) \right\} .
$$
Since $\left\{y \in [w] : \ \sigma^n(y) = y\right\} = \sqcup_{i=1}^l
M_i$ and each $\chi_i$ is injective, this leads to the estimate
$$
\sum_{ \sigma^n(y) = y} e^{\phi_n(y)}1_{[w]}(y) \ \leq \ Ce^{n\epsilon} \sum_{i=1}^l
\ \ \sum_{\sigma^{n+L_i}(y) = y } e^{\phi_{n+L_i}(y)}1_{Z(\mu)}(y) ,
$$
where $C = \max_{1 \leq i \leq l} C_i$, and we conclude therefore that
$$
\limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)}1_{[w]}(y) \leq \epsilon + \limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)}1_{Z(\mu)}(y) .
$$
Since $\epsilon > 0$ was arbitrary we can combine with (\ref{c20}) to
get
\begin{equation*}\label{c23}
\begin{split}
&\limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)}1_{[w]}(y) \leq \limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{Z(\mu)}(y) \\
& \ \ \ \ \ \ \ \ \ \ \ \ \leq \limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y}
e^{\phi_n(y)} 1_{[v]}(y).
\end{split}
\end{equation*}
The desired conclusion follows from this by using the freedom in the choice of $\mu,v$ and $w$.
\end{proof}
\begin{prop}\label{c5} Let $X = \mathcal P(G)$ be the space of infinite paths
in a cofinal graph $G$,
and let $\sigma$ be the left shift on $\mathcal P(G)$. Let $\phi :
\mathcal P(G) \to
\mathbb R$ be a continuous function such that $\lim_{k \to \infty}
\operatorname{var}_k(\phi) = 0$. Then
\begin{enumerate}
\item[a)] $\mathbb P(\phi) = -\infty$ when $NW_G = \emptyset$, and
\item[b)] $\mathbb P(\phi)$ is the Gurevich pressure $P_{NW_G}(\phi)$ of
$\phi|_{\mathcal P(NW_G)}$ when
$NW_G \neq \emptyset$.
\end{enumerate}
\end{prop}
\begin{proof} Let $x \in \mathcal P(G)$ and let $\epsilon > 0$ be
arbitrary. Choose $k \in \mathbb N$ such that $\operatorname{var}_k(\phi) \leq
\epsilon$ and let $\mu$ be the path of length $k$ with $x \in
Z(\mu)$. If $x \notin \mathcal P(NW_G)$, the set $\sigma^{-n}(x)
\cap Z(\mu)$ is empty for all $n$ and hence
$L^n_{\phi}\left(1_{Z(\mu)}\right)(x) = 0$ for all $n$, leading to
the conclusion that $\mathbb P_x(\phi) = - \infty$. Assume then that
$x \in \mathcal P(NW_G)$. Let $n > k$. There is an
obvious bijection $\chi : \sigma^{-n}(x) \cap Z(\mu) \to \left\{ y \in
Z(\mu) : \sigma^n(y) = y\right\}$ such that $\chi(z)_i = z_i, \ i =
1,2, \cdots, n+k$. Since $\operatorname{var}_k(\phi) \leq \epsilon$ this leads first
to the estimates
\begin{equation}\label{h61}
\sum_{\sigma^n(y)=y} e^{\phi_n(y)- n\epsilon}1_{Z(\mu)}(y) \leq L^n_{\phi}\left(
1_{Z(\mu)} \right)(x) \leq \sum_{ \sigma^n(y)=y} e^{\phi_n(y)+
n\epsilon}1_{Z(\mu)}(y),
\end{equation}
and then by Lemma \ref{c19} to the conclusion that $P_{NW_G}(\phi) -
\epsilon \leq \mathbb P_x(\phi) \leq P_{NW_G}(\phi)+\epsilon$. Hence
$$
\mathbb P_x(\phi) = \begin{cases} - \infty, & \ x \notin \mathcal
P(NW_G) \\ P_{NW_G}(\phi) , & \ x \in \mathcal P(NW_G) . \end{cases}
$$
\end{proof}
\section{Constructing conformal measures}
In this section we return to the setting where $X$ is a second
countable locally compact Hausdorff space, $\sigma: X \to X$ is a
cofinal local homeomorphism and $\phi : X \to \mathbb R$ is
continuous. We add the assumption that $(X,\sigma)$
is \emph{non-compact}, in the sense that there is no open non-empty subset $U
\subseteq X$ such that $\bigcup_{n=0}^{\infty}\sigma^n(U)$ is
pre-compact. Since $(X,\sigma)$ is assumed cofinal, this additional
condition is satisfied when there is just a single point $x$ whose
orbit closure $\overline{\bigcup_{k=1}^{\infty} \sigma^k(x)}$ is not
compact.
In the following we write $\lim_{k \to \infty} x_k = \infty$
when $\{x_k\}$ is a sequence in $X$ with the property that for any
compact subset $K \subseteq X$ there is an $N \in \mathbb N$ such that
$x_k \notin K \ \forall k \geq N$.
\begin{lemma}\label{a69} Assume that $(X,\sigma)$ is cofinal and
non-compact. Let $h \in C_c(X)^+$. There is a sequence $\{x_k\}$ in $X$ such that
$$
\sum_{n=0}^{\infty} L_{\phi}^n(h)(x_k) > 0
$$
for all $k$ and $\lim_{k \to \infty} x_k = \infty$.
\end{lemma}
\begin{proof} Set $U = \left\{ x \in X : \ h(x) > 0 \right\}$ and let
$V_1 \subseteq V_2 \subseteq V_3 \subseteq \cdots$ be a sequence of
open pre-compact subsets in $X$ such that $X = \bigcup_k V_k$. Since
$(X,\sigma)$ is non-compact there is for every $k \in \mathbb N$ an element
$$
x_k \in \left( \bigcup_{n=0}^{\infty} \sigma^n\left(U\right)
\backslash V_k \right).
$$
The sequence $\{x_k\}$ has the stated properties.
\end{proof}
A sequence $\{x_k\}$ with the properties stipulated in Lemma \ref{a69}
will be called \emph{$h$-diverging}.
\begin{lemma}\label{a16} Assume that $(X,\sigma)$ is cofinal and
non-compact, and that
$$
\sum_{n=0}^{\infty} \left|L_{\phi}^n(f)(x)\right| < \infty
$$
for all $x \in X$ and all $f \in C_c(X)$. Let $h \in C_c(X)^+$. For every $h$-diverging sequence $\{x_k\} \subseteq X$ there is a sub-sequence
$\left\{x_{k_i}\right\}$ such that the limit
\begin{equation}\label{a71}
\lim_{i \to \infty} \ \frac{\sum_{n=0}^{\infty} L_{\phi}^n(f)(x_{k_i})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k_i})}
\end{equation}
exists for all $f \in C_c(X)$.
\end{lemma}
\begin{proof} Let $V_1 \subseteq V_2 \subseteq V_3 \subseteq \dots$ be
a sequence of
open relatively compact sets such that $X = \bigcup_i V_i$. Then
\begin{equation}\label{a62}
C_c(X) = \bigcup_i C_0(V_i),
\end{equation}
where $C_0(V_i)$ denotes the Banach space of continuous functions on
$V_i$ that vanish at infinity.
Since $C_0(V_i)$ is separable there is a sequence $\{g_n\}
\subseteq C_c(X)$ such that $ \left\{g_n\right\} \cap
C_0(V_i) $ is dense in $C_0(V_i)$ for all $i$. It follows from Lemma
\ref{b26} that for each $i$ there are $N_i \in \mathbb N$ and $C_i >0$
such that
\begin{equation*}\label{c39}
\left|\sum_{n=0}^{\infty} L^n_{\phi}(f)(x)\right| \leq C_iN_i^2\left\|f\right\|
\sum_{n=0}^{\infty} L^n_{\phi} (h)(x)
\end{equation*}
for all $f \in C_0(V_i)$ and all $x \notin \bigcup_{j=0}^{N_i}
\sigma^j\left(\overline{V_i}\right)$. Since $\lim_{k \to \infty} x_k =
\infty$ this implies that
\begin{equation}\label{a70}
\left|\frac{\sum_{n=0}^{\infty} L_{\phi}^n(f)(x_k)}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_k)} \right| \leq C_iN_i^2 \left\|f\right\|
\end{equation}
for all $f \in C_0(V_i)$ and all sufficiently large $k \in \mathbb
N$. A diagonal sequence argument shows that there is a
sub-sequence $\{x_{k_i}\}$ such that
$$
\lim_{i \to \infty} \frac{\sum_{n=0}^{\infty} L_{\phi}^n(g_j)(x_{k_i})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k_i})}
$$
exists for all $j$. Let $f \in C_0(V_i)$. It follows from (\ref{a70}) that
\begin{equation*}\label{c40}
\begin{split}
&\left|\frac{\sum_{n=0}^{\infty} L_{\phi}^n(g_j)(x_{k_l})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k_l})} \ - \ \frac{\sum_{n=0}^{\infty} L_{\phi}^n(f)(x_{k_l})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k_l})}\right| = \left|\frac{\sum_{n=0}^{\infty} L_{\phi}^n(g_j-f)(x_{k_l})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k_l})}\right| \\
&\leq C_iN_i^2 \left\|g_j -f\right\|
\end{split}
\end{equation*}
for all large $l$ when $g_j \in C_0(V_i)$. Since $\left\{g_n\right\}
\cap C_0(V_i)$ is dense in $C_0(V_i)$ it follows from this, in
combination with
(\ref{a62}), that the limit (\ref{a71})
exists for all $f \in C_c(X)$.
\end{proof}
\begin{lemma}\label{a17} Assume that $(X,\sigma)$ is cofinal and
non-compact, and that $\mathbb P(\phi) < 0$. Let $h \in C_c(X)^+$. There
is a regular Borel measure $m$ on $X$ such that $L_{\phi}^*(m) = m$
and $\int_X h \ dm = 1$.
\end{lemma}
\begin{proof} It follows from the definition of $\mathbb P(\phi)$ that
$\sum_{n=0}^{\infty} \left|L_{\phi}^n(f)(x)\right| < \infty$ for all
$x \in X$ and all $f \in C_c(X)$. By Lemma \ref{a16} and Lemma
\ref{a69} there is a sequence
$\left\{x_k\right\}$ in $X$ such that $\lim_{k \to \infty} x_k =
\infty$ and the limit
$$
\lim_{k \to \infty} \ \frac{\sum_{n=0}^{\infty} L_{\phi}^n(f)(x_{k})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k})}
$$
exists for all $f \in C_c(X)$. Riesz' representation theorem provides
us therefore with a regular Borel measure $m$ on $X$ such that
$$
\int_X f \ dm =\lim_{k \to \infty} \ \frac{\sum_{n=0}^{\infty} L_{\phi}^n(f)(x_{k})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k})}
$$
for all $f \in C_c(X)$. Note that
$$
\int_X f \ dm - \int_X L_{\phi}(f) \ dm = \lim_{k \to \infty} \ \frac{f(x_{k})}{\sum_{n=0}^{\infty}
L_{\phi}^n(h)(x_{k})} \ = \ 0
$$
for all $f \in C_c(X)$ since $\lim_{k \to \infty} x_k =
\infty$.
\end{proof}
\begin{thm}\label{a63} Assume that $(X,\sigma)$ is cofinal and non-compact,
and that $\mathbb P(-\phi) \leq 0$. Let $h \in C_c(X)$ be
non-negative and non-zero. There is a $\phi$-conformal measure $m$ such that $\int_X h \ dm
= 1$.
\end{thm}
\begin{proof} Let $n \in \mathbb N$ and note that $\mathbb P\left(-\phi -
\frac{1}{n}\right) = \mathbb P(-\phi) - \frac{1}{n} < 0$. Hence Lemma
\ref{a101} and Lemma \ref{a17} give us a $\left(\phi +
\frac{1}{n}\right)$-conformal measure $m_n$ for each $n \in \mathbb N$, with
the additional property that $\int_X h \ dm_n =1$. Let $V_1
\subseteq V_2 \subseteq \cdots$ be the sets from the proof of Lemma
\ref{a16}. It follows from the way the
$m_n$'s were constructed, in particular from (\ref{a70}), that there are numbers $M_k >
0$, not depending
of $n$, such that
\begin{equation}\label{d1}
\left|\int_X f \ d m_n \right| \leq M_k\|f\|
\end{equation}
for all $f \in C_0(V_k)$ and all $n$. (It is necessary here to check
that the constant $C$ in Lemma \ref{b26} can be chosen independently
of $n$.) Thus the sequence of linear functionals on $C_0(V_k)$
arising from integration with respect to the $m_n$'s are contained
in the ball of radius $M_k$ in the dual space of $C_0(V_k)$. By
compactness of this ball in the weak$^*$-topology we deduce that the
sequence has a convergent subsequence for each $k$. Combining
(\ref{a62}) with a diagonal sequence argument this leads to the
conclusion that there is a subsequence $\left\{m_{n_i}\right\}$ such
that the limit $\lim_{i \to \infty} \int_X f \ d m_{n_i}$ exists for
all $f \in C_c(X)$. By the Riesz representation theorem this gives us
a regular Borel measure $m$ on $X$ such that
$$
\lim_{i \to \infty} \int_X f \ dm_{n_i} = \int_X f \ dm
$$
for all $f\in C_c(X)$. In particular, $\int_X h \ dm =
1$. To check
that $m$ is ${\phi}$-conformal, let $U$ be an open subset of
$X$ such that $\sigma$ is injective on $U$. For each $i$ and each $g
\in C_c(U)$ we have that
$$
\int_{\sigma(U)} g\circ \left(\sigma|_U\right)^{-1}(x) \ dm_{n_i}(x) = \int_U g(x)
e^{\phi(x) + n_i^{-1}} \ dm_{n_i}(x)
$$
by Lemma \ref{a101}. Since $g \in C_c(V_k)$ for some sufficiently large
$k$ and since $ge^{\phi + n_i^{-1}}$ converges uniformly to
$ge^{\phi}$, it follows from (\ref{d1}) that we can take the limit $i \to \infty$ to find that
$$
\int_{\sigma(U)} g\circ \left(\sigma|_U\right)^{-1}(x) \ dm(x) = \int_U g(x)
e^{\phi(x)} \ dm(x) .
$$
Hence $m$ is ${\phi}$-conformal, again by Lemma \ref{a101}.
\end{proof}
\subsection{Conformal measures for cofinal Markov shifts}\label{MS2}
We return now to the setting of Section \ref{MS1}.
\begin{lemma}\label{c16} Let $X = \mathcal P(G)$ be the space of infinite paths
in a cofinal graph $G$,
and let $\sigma$ be the left shift on $\mathcal P(G)$. Let $\phi :
\mathcal P(G) \to
\mathbb R$ be a function such that $\lim_{k \to \infty}
\operatorname{var}_k(\phi) = 0$. Assume that there is a
${\phi}$-conformal
measure $m$. Then $\mathbb P(-\phi) \leq 0$.
\end{lemma}
\begin{proof} Thanks to Proposition \ref{c5} we can assume that $NW_G
\neq \emptyset$. Let $v \in NW_G$ and $\epsilon > 0$ be arbitrary. There is a
$k \in \mathbb N$ such that $\operatorname{var}_k(\phi) \leq \epsilon$. Consider a finite
path $\mu$ in $NW_G$ of length $k$ with $s(\mu) = v$. Then
$$
\sum_{\sigma^n(y) = y} e^{-\phi_n(y)} 1_{Z(\mu)}(y) \leq
\sum_{y \in \sigma^{-n}(x)} e^{-\phi_n(y) +n\epsilon} 1_{Z(\mu)}(y)
$$
for all $x \in Z(\mu)$ and all $n > k$, cf. (\ref{h61}). It follows that
\begin{equation}\label{c98}
\begin{split}
&m(Z(\mu)) \sum_{\sigma^n(y) = y} e^{-\phi_n(y)} 1_{Z(\mu)}(y) \leq
e^{n \epsilon} \int_{Z(\mu)} L^n_{-\phi}\left(1_{Z(\mu)}\right) \ d m
\\
& \leq e^{n\epsilon} \int_{\mathcal P(G)} L^n_{-\phi}\left(1_{Z(\mu)} \right) \ d m =
e^{n\epsilon} m(Z(\mu))
\end{split}
\end{equation}
for all large $n$.
Note that $m(Z(\mu)) > 0$ since $m$ is ${\phi}$-conformal and
$(X,\sigma)$ is cofinal. It follows therefore from (\ref{c98}) that
$$
\limsup_n \frac{1}{n} \log \sum_{\sigma^n(y) = y} e^{-\phi_n(y)}
1_{Z(\mu)}(y) \leq \epsilon .
$$
Thanks to Lemma \ref{c19} and Proposition \ref{c5}, this completes the proof.
\end{proof}
To formulate the next theorem we need a stronger condition on $\phi$
when $NW_G$ is non-empty and finite. Following Walters, \cite{W}, we say
that $\phi$ \emph{satisfies Bowen's condition} on $NW_G$ when there is
a $C > 0$ such that
$$
\left| \sum_{i=0}^{n-1} \left[ \phi\left(\sigma^i(x)\right) -
\phi\left(\sigma^i(y)\right)\right]\right| \leq C
$$
for all $(x_i)_{i=0}^{\infty} ,(y_i)_{i=0}^{\infty} \in \mathcal
P(NW_G)$ such that $x_i = y_i, i = 0,1,2, \cdots, n-1$, and all $n
\geq 1$.
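For instance, any $\phi$ with summable variation on $\mathcal P(NW_G)$
satisfies Bowen's condition there. Indeed, with the convention that
$\operatorname{var}_k(\phi)$ is the supremum of $\left|\phi(x) -
\phi(y)\right|$ over pairs of paths agreeing in their first $k$
coordinates, if $x_i = y_i, i = 0,1, \cdots, n-1$, then $\sigma^j(x)$
and $\sigma^j(y)$ agree in their first $n-j$ coordinates, and hence
$$
\left| \sum_{j=0}^{n-1} \left[ \phi\left(\sigma^j(x)\right) -
\phi\left(\sigma^j(y)\right)\right]\right| \leq \sum_{j=0}^{n-1}
\operatorname{var}_{n-j}(\phi) \leq \sum_{k=1}^{\infty} \operatorname{var}_k(\phi) ,
$$
so that $C = \sum_{k=1}^{\infty} \operatorname{var}_k(\phi)$ will do.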
\begin{thm}\label{c41} Assume $G$ is a cofinal graph and that
$\phi : \mathcal P(G) \to \mathbb R$ is a function such that
$\lim_{k \to \infty} \operatorname{var}_k(\phi) =0$.
\begin{enumerate}
\item[1)] Assume that $NW_G = \emptyset$. There is a
${\phi}$-conformal measure for the left shift on $\mathcal P(G)$.
\item[2)] Assume that $NW_G$ is non-empty and finite. Assume that
$\phi$ satisfies Bowen's condition on $NW_G$. There is a
${\phi}$-conformal measure for the left shift on $\mathcal P(G)$ if and only if $\mathbb P(-\phi) = 0$,
and it is then unique up to multiplication by a scalar.
\item[3)] Assume that $NW_G$ is infinite. There is a
${\phi}$-conformal measure for the left shift if and only if $\mathbb P(-\phi) \leq
0$.
\end{enumerate}
\end{thm}
\begin{proof} It is easy to see that in cases 1) and 3) there is an element $x \in \mathcal P(G)$ such that $\lim_{j \to \infty}
\sigma^j(x) = \infty$. Hence $(X,\sigma)$ is non-compact in these cases
and the stated conclusions follow from a) of Proposition \ref{c5},
Lemma \ref{c16} and Theorem \ref{a63}.
Consider then case 2). Let $p$ be the global period of $NW_G$. Then
$\mathcal P(NW_G)$ is the disjoint union of compact and open sets $X_i, i = 1,2,
\cdots ,p$, such that $\sigma(X_i) = X_{i+1}$, mod $p$. Furthermore,
the restriction of $\sigma^p$ to $X_p$ is a mixing subshift of finite
type. Since $\phi$ satisfies Bowen's condition on $NW_G$ by assumption,
it follows that $\phi_p$ satisfies Bowen's condition on $X_p$ with
respect to $\sigma^p$. It follows therefore from Theorem 1.3 and Theorem 2.16 in \cite{W} that there is a
$\phi_p$-conformal measure for $\sigma^p$ on $X_p$ if and only if the
pressure of the restriction of $-\phi_p$ to $X_p$ (with respect to
$\sigma^p$) is zero. It follows from b) of Proposition \ref{c5} that
this pressure is $p \mathbb P(-\phi)$. Since a
$\phi$-conformal measure for $\sigma$ on $\mathcal P(G)$ will
restrict to a $\phi_p$-conformal measure for $\sigma^p$ on $X_p$, we
deduce that there can only be a $\phi$-conformal measure for $\sigma$ on
$\mathcal P(G)$ if $\mathbb P(-\phi) = 0$. Assume then that $\mathbb
P(-\phi) = 0$. It follows from the theorems of Walters quoted
above that there
is a $\phi_p$-conformal measure for $\sigma^p$ on $X_p$, and that it is
unique up to multiplication by a scalar.
It suffices now
to show that a $\phi_p$-conformal measure $\nu$ for $\sigma^p$ on $X_p$ extends
uniquely to a $\phi$-conformal measure $\mu$ for $\sigma$ on $\mathcal
P(G)$. For each $i < p$ there is a partition $X_i = \sqcup_j B_{i,j}$
of $X_i$ into compact and open sets
such that $\sigma^{p-i} : B_{i,j} \to X_p$ is injective for each
$j$. To extend $\nu$ to a $\phi$-conformal measure $\mu$ on $\mathcal
P(NW_G)$ we must therefore define $\mu$ on $X_i$ by the requirement that
\begin{equation}\label{u1}
\int_{X_i} g \ d \mu = \sum_j \int_{\sigma^{p-i}(B_{i,j})} g \left(
\left(\sigma^{p-i}|_{B_{i,j}}\right)^{-1}(x)\right)
e^{-\phi_{p-i}\left(\left(\sigma^{p-i}|_{B_{i,j}}\right)^{-1}(x)\right)}
\ d \nu(x)
\end{equation}
when $g \in C(X_i)$. In particular, we see that an extension of $\nu$ to a
$\phi$-conformal measure for $\sigma$ on $\mathcal P(NW_G)$ is unique. To see
that such an extension exists it is possible to show that the recipe
(\ref{u1}) provides the required extension by using that $\nu$ is
$\phi_p$-conformal on $X_p$. Alternatively, one can first extend $\nu$
to a measure on $\mathcal P(NW_G)$, still denoted by $\nu$, such that $\nu(X_i) = 0, i
\neq p$, and then take
$$
\mu = \sum_{i=0}^{p-1} \left(\mathcal L_{-\phi}^*\right)^i(\nu),
$$
where $\mathcal L_{-\phi}$ denotes the compression of $L_{-\phi}$ to
$C(\mathcal P(NW_G))$, i.e.
$$
\mathcal L_{-\phi}(g)(x) \ = \sum_{y \in \sigma^{-1}(x) \cap \mathcal
P(NW_G)} \ e^{-\phi(y)} g(y)
$$
when $g \in C(\mathcal P(NW_G))$. To extend $\mu$ from $\mathcal
P(NW_G)$ to $\mathcal P(G)$ note that
\begin{equation}\label{c57}
\int_{\sigma([v])} g \circ \left(\sigma|_{[v]}\right)^{-1} \ d\mu \ =
\ \int_{[v]} ge^{\phi} \ d\mu
\end{equation}
when $v \in NW_G$ and $g \in C([v])$. Set $H_0 = NW_G$ and
$$
H_{n} = \left\{ w \in V_G : \ r\left(s^{-1}(w)\right) \subseteq H_{n-1}
\right\}
$$
for $n\geq 1$. Then $H_n \subseteq H_{n+1}$ for all $n$ and $\bigcup_n H_n = V_G$ because $G$ is cofinal,
cf. the proof of Lemma 2.4 in \cite{Th5}. When $w \in H_1$ we can define a regular
Borel measure $\mu_w$ on
$[w]$ by the requirement that
$$
\int_{[w]} f \ d\mu_w = \int_{\sigma([w])} e^{-\phi \left( \left(\sigma|_{[w]}\right)^{-1}(x)\right)} f \circ
\left(\sigma|_{[w]}\right)^{-1}(x) \ d\mu(x)
$$
for all $f \in C([w])$.
We extend $\mu$ to a regular Borel measure on
$\left\{x \in \mathcal P(G) : \ s(x) \in H_1 \right\}$,
by setting
$$
\mu(B) = \mu\left(
B \cap \mathcal P(NW_G)\right) + \sum_{w \in H_1 \backslash H_0} \mu_w(B\cap [w]).
$$
Then (\ref{c57}) holds for all $v \in H_1$. Continuing by induction
we get a regular Borel measure on $\mathcal P(G)$ such that
(\ref{c57}) holds for $v \in V_G$. It follows then from Lemma
\ref{a39} that $\mu$ is ${\phi}$-conformal, and it is clear from the
construction that it is the only
$\phi$-conformal measure extending $\nu$.
\end{proof}
\begin{remark} In the cases 1) and 3) of Theorem \ref{c41} a $\phi$-conformal measure is
generally not unique.
The reason for the introduction of Bowen's condition in case 2) is
that we do not know (more precisely, the author does not know) if there can be a $\phi$-conformal measure on a
mixing one-sided subshift of finite type when $\mathbb P(-\phi) < 0$ and the potential
$\phi$ is continuous, but does not satisfy Bowen's condition. By
Theorem 2.16 in \cite{W} it is possible to use a condition slightly
weaker than Bowen's, but beyond that nothing seems to be known. When $\mathbb P(-\phi) = 0$
there \emph{is} a $\phi$-conformal measure, also when $\phi$ is only assumed to
be continuous. This follows from Theorem 6.9 in \cite{Th2} since the spectral
radius of the Ruelle operator is $1$ when $\mathbb P(-\phi) = 0$ by
Theorem 1.3 in \cite{W}. It is, however, not clear if the measure is unique in
general. Therefore, without assuming Bowen's condition or the slightly
weaker
condition used by Walters in Theorem 2.16 of \cite{W}, the only thing
we can say in case 2) is that there is a $\phi$-conformal measure if
$\mathbb P(-\phi) = 0$, and none if $\mathbb P(-\phi) > 0$.
\end{remark}
For a mixing topological Markov shift O. Sarig has shown
the existence of an $e^{\mathbb P(\phi)}$-eigenmeasure for the dual
Ruelle operator when the potential has summable variation and is recurrent,
\cite{Sa1},\cite{Sa2}. Van Cyr extended this to transient potentials
in his thesis, \cite{Cy}. We can now supplement their results as
follows.
\begin{thm}\label{h20} Let $G$ be a countable connected directed graph
with finite out-degree at each vertex. Assume that $G$ is not
finite. Let $\phi : \mathcal P(G) \to
\mathbb R$ be a function such that $\lim_{k \to \infty} \operatorname{var}_k (\phi)
= 0$, and let $t \in \mathbb R$. There is a non-zero regular Borel measure $m$ on $\mathcal P(G)$ such that
$$
L_{\phi}^*(m) = e^t m
$$
if and only if $t \geq \mathbb P(\phi)$.
\end{thm}
\begin{proof} $\mathcal P(G)$ is locally compact because $G$ has
finite out-degree at each vertex, and second countable because $G$
is countable. The dynamical system $(\mathcal P(G),\sigma)$ is
cofinal because $G$ is connected, and non-compact because $G$ is
also infinite. Therefore the
theorem follows from Lemma \ref{a101} and Theorem \ref{c41} 3) after the
observation that $\mathbb P(\phi - t) = \mathbb P(\phi) - t$, which
holds because $(\phi - t)_n = \phi_n - nt$ and hence $L^n_{\phi - t}(f)
= e^{-nt} L^n_{\phi}(f)$.
\end{proof}
It follows from Sarig's results that there is a unique regular conservative
$e^t$-eigenmeasure for the dual Ruelle operator when $t = \mathbb P(\phi)$, provided $\phi$ is
recurrent, $G$ is aperiodic, and $\sum_{k=2}^{\infty} \operatorname{var}_k(\phi)
< \infty$, cf. \cite{Sa2}. His results do not require $G$ to have
finite out-degree at the vertices. As will be shown in the next
section, at least in the locally
compact setting, $e^t$-eigenmeasures
must be dissipative when $ t > \mathbb P(\phi)$, and when $t= \mathbb
P(\phi)$ and $\phi$ is transient.
\section{Dissipativity}\label{dis}
In this section we assume only that $X$ is a second countable locally
compact Hausdorff space, $\sigma : X \to X$ is a local homeomorphism and
$\phi : X \to \mathbb R$ is continuous.
\begin{lemma}\label{a34} Assume that $m$ is
${\phi}$-conformal. Then $m \circ \sigma^{-1}$ is absolutely
continuous with respect to $m$.
\end{lemma}
\begin{proof} Write $X = \sqcup_{i \in \mathbb N} A_i$ as a disjoint
union where the
$A_i$'s are Borel subsets of $X$ such that
$\sigma$ is injective on each $A_i$, and let $B \subseteq X$ be a
Borel set. Since the
conformality assumption implies that
\begin{equation*}\label{a35}
m(B \cap \sigma(A_i)) = m\left(\sigma\left(\sigma^{-1}(B) \cap
A_i\right)\right) = \int_{\sigma^{-1}(B) \cap A_i} e^{\phi(x)}
\ dm(x),
\end{equation*}
it follows that $m\left(B\right) = 0 \ \Rightarrow \
m\left(\sigma^{-1}(B) \cap A_i\right) = 0 \ \forall i \ \Rightarrow
\ m(\sigma^{-1}(B)) = 0$.
\end{proof}
In general $m \circ \sigma^{-1}$ is not equivalent to $m$; it is if
and only if $m(X \backslash \sigma(X)) = 0$. It follows from Lemma \ref{a34} that a ${\phi}$-conformal measure
$m$ gives rise to a \emph{Hopf decomposition} of $X$. That is,
$$
X = C \sqcup D,
$$
where $C$ and $D$ are disjoint Borel sets with the following
properties, cf. \S 1.3 in \cite{Kr}:
\begin{enumerate}
\item[1)] $\sigma(C) \subseteq C$,
\item[2)] For every Borel subset $A \subseteq C$,
$$
m \left( A \backslash \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} \sigma^{-k}(A)\right) = 0.
$$
\item[3)] $D = \bigcup_{n=1}^{\infty} D_n$ where each $D_n$ is
wandering in the sense that
$$
D_n \cap \sigma^{-k}(D_n) = \emptyset, \ k = 1,2,3, \cdots .
$$
\end{enumerate}
The set $C$ is the \emph{conservative part} of $\sigma$.
For a given ${\phi}$-conformal measure $m$, the Hopf decomposition
is unique modulo $m$-null sets, and we say that $m$
is \emph{dissipative} when the conservative part is an $m$-null set. Thus
$m$ is dissipative if and only if $X$ is the union of a countable collection
of wandering Borel sets, up to an $m$-null set.
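A degenerate example may help to fix ideas; it is included only as an
illustration. Take $X = \mathbb N$ with the discrete topology,
$\sigma(k) = k+1$, $\phi = 0$ and $m$ the counting measure. Then $m$
is $0$-conformal, since $m(\sigma(A)) = m(A) = \int_A e^{0} \ dm$ for
every Borel set $A$, while every singleton $\{k\}$ is wandering
because $\{k\} \cap \sigma^{-j}(\{k\}) \subseteq \{k\} \cap \{k-j\} =
\emptyset$ for $j \geq 1$. Hence $X$ is a countable union of wandering
sets and $m$ is dissipative.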
\begin{lemma}\label{a45} Let $m$ be
a ${\phi}$-conformal measure. Assume that $\sum_{n=1}^{\infty}
L^n_{-\phi}(f)(x) < \infty$ for $m$-almost every $x$ when $f\in
C_c(X)$ is non-negative. It follows that $m$ is dissipative.
\end{lemma}
\begin{proof} Let $C$ be the conservative part of $\sigma$. Assume for a contradiction that $m(C) >
0$. Since $m$ is regular there is a non-negative $f \in C_c(X)$ such
that
$$
C_1 = \left\{ x \in C : \ f(x) \geq 1 \right\}
$$
has positive
$m$-measure. Note that $m(C_1) < \infty$. Since $\sum_{k=1}^{\infty} L_{-\phi}^k(f)(x) < \infty$ for
$m$-almost all $x$ by assumption, there is an $N \in \mathbb N$ such that
$$
C_2 = \left\{ x \in C_1 : \ \sum_{k=1}^{\infty} L^k_{-\phi}(f)(x) \leq N
\right\}
$$
has positive $m$-measure. Note that
\begin{equation}\label{d3}
Nm(C_2) \geq \int_{C_2} \sum_{k=1}^{\infty}
L^k_{-\phi}(f)(x) \ dm(x) = \sum_{k=1}^{\infty} \int_X 1_{C_2}
L^k_{-\phi}(f) \ dm .
\end{equation}
Let $\{g_n\}$ be a uniformly bounded sequence from $C_c(X)$ which converges $m$-almost
everywhere to $1_{C_2}$. Then
\begin{equation*}\label{d4}
\begin{split}
& \int_X 1_{C_2}
L^k_{-\phi}(f) \ dm = \lim_{n \to \infty} \int_X g_n
L^k_{-\phi}(f) \ dm = \lim_{n \to \infty} \int_X
L^k_{-\phi}(f g_n \circ \sigma^k) \ dm \\
& = \lim_{n \to \infty} \int_X f g_n \circ \sigma^k \ dm = \int_X f
1_{C_2}\circ \sigma^k \ dm \geq \int_{C_2}
1_{C_2}\circ \sigma^k \ dm .
\end{split}
\end{equation*}
Inserted into (\ref{d3}) we get that
\begin{equation}\label{d5}
\int_{C_2}
\sum_{k=1}^{\infty} 1_{C_2}\circ \sigma^k \ dm \leq Nm(C_2) < \infty.
\end{equation}
However, since $\sigma$ is infinitely recurrent on $C$ by 2) above, we know that
$$
\sum_{k=1}^{\infty} 1_{C_2} \circ \sigma^k (x) = \infty
$$
for $m$-almost every $x \in C_2$. This contradicts (\ref{d5}) since
$m(C_2) > 0$.
\end{proof}
The following is an immediate consequence of Lemma \ref{a45}, since
$\mathbb P(-\phi) < 0$ implies that $\sum_{n=1}^{\infty}
L^n_{-\phi}(f)(x) < \infty$ for every $x \in X$ and every non-negative
$f \in C_c(X)$.
\begin{prop}\label{c43} Assume that $(X,\sigma)$ is cofinal and that
$\mathbb P(-\phi) < 0$. It follows that every
${\phi}$-conformal measure for $\sigma$ is dissipative.
\end{prop}
We remark that Lemma \ref{a45} can also be used to show that for a
transient potential, as defined by O. Sarig in \cite{Sa1}, any
conformal measure is dissipative.
Although the conformal measures for a potential $\phi$ with $\mathbb
P(-\phi)$ negative must be dissipative, they may very well be finite. This
occurs already for constant potentials on certain locally compact
mixing Markov shifts.
\section{Conformal measures in the exponential family}\label{exp}
Let $h : \mathbb C \to \mathbb C$ be a holomorphic map and $J(h)$
the Julia set of $h$. We assume that $h$ is transcendental, i.e. is
not a polynomial, and then $J(h)$ is closed, unbounded and totally invariant under $h$, viz. $h^{-1}(J(h))
= J(h)$. If we assume that $h'(z) \neq 0$ for all $z \in J(h)$, it
follows that $h$ is locally
injective on $J(h)$ and hence that $h : J(h) \to J(h)$ is a local
homeomorphism. By Montel's theorem, Theorem 3.7 in
\cite{Mi}, there is a set $\mathcal E(h) \subseteq \mathbb C$,
consisting of at most one point $x$, which must be totally
$h$-invariant in the sense that $h^{-1}(x) = \{x\}$, such that for any
open subset $U$ of $\mathbb C$ with $U \cap J(h) \neq \emptyset$,
$\bigcup_{i,j \in \mathbb N} h^{-j}\left(h^i(U)\right) = \mathbb C \backslash \mathcal E(h)$.
It follows that $J(h)\backslash \mathcal E(h)$ is
totally $h$-invariant, locally compact in the relative topology and
that $h : J(h) \backslash \mathcal E(h) \to J(h)\backslash \mathcal
E(h)$ is cofinal. Another application of Montel's theorem shows that
$\left(J(h)\backslash \mathcal E(h),h\right)$ is also non-compact. Hence the results of the previous
sections apply to this dynamical system.
In this setting probability conformal measures have been constructed by Mayer and Urbanski in
\cite{MU1} and \cite{MU2} for a large class of entire functions when
the potential $\phi$ is chosen carefully. When $h$ comes from the
exponential family $h(z) = \lambda e^z$, the potential considered by
Mayer and Urbanski is
\begin{equation}\label{g7}
\phi(z) = \log |z|,
\end{equation}
or some ``tame'' perturbation of $\phi$. The inverse temperature $\beta$
for
which a finite $\beta \phi$-conformal measure exists is
invariably unique, but we can now show that at least for the hyperbolic members
of the exponential family $\lambda e^z$ the situation is very different when
infinite conformal measures are also considered. To substantiate this we assume that $h = E_{\lambda}$ where $E_{\lambda}(z) = \lambda
e^{z}$ for some $0 < \lambda <
e^{-1}$. In this case $\mathcal E(E_{\lambda}) = \emptyset$.
It follows from Proposition 4.5 in \cite{MU1} that
$$
A_{\beta} : = \sup_{x \in J(E_{\lambda})} \sum_{y \in
E_{\lambda}^{-1}(x)} |y|^{-\beta} < \infty
$$
for all $\beta > 1$, a fact which is also easy to verify directly in
the present case. This implies, in particular, that we can define $L_{-\beta \phi}$ by
the same formula as above, viz.
$$
L_{-\beta \phi}(g)(x) = \sum_{y \in E_{\lambda}^{-1}(x)} |y|^{-\beta}g(y),
$$
as a positive linear operator on the vector space of bounded functions on
$J(E_{\lambda})$ for all $\beta > 1$. In order to combine the methods
and results of this paper with those of \cite{MU1}, we only have to prove the
following lemma.
\begin{lemma}\label{g6} When $E_{\lambda} : J(E_{\lambda})
\to J(E_{\lambda})$ for some $\lambda \in ]0,e^{-1}[$ and $\phi$ is
the potential (\ref{g7}),
$$
\mathbb P(- \beta\phi) = \limsup_n \frac{1}{n} \log L^n_{-\beta \phi}(1) (x)
$$
for all $x \in J(E_{\lambda})$ and all $\beta > 1$.
\end{lemma}
\begin{proof} It is shown in \cite{MU1} that $\limsup_n \frac{1}{n}
\log L^n_{-\beta \phi}(1) (x)$ is independent of $x \in
J(E_{\lambda})$. Since we clearly have the inequality
$$
\limsup_n \frac{1}{n} \log L^n_{-\beta \phi}(1) (x) \geq \limsup_n
\frac{1}{n} \log L^n_{-\beta \phi}(f) (x)
$$
for any non-negative $f \in C_c(J(E_{\lambda}))$ with $\|f\| \leq 1$, it suffices
therefore to find a single element $x_0 \in J(E_{\lambda})$ and a non-negative
function $f \in C_c(J(E_{\lambda}))$ such that
\begin{equation}\label{g10}
\limsup_n
\frac{1}{n} \log L^n_{-\beta \phi}(f) (x_0) \geq \limsup_n \frac{1}{n}
\log L^n_{-\beta \phi}(1) (x_0) .
\end{equation}
For this purpose we need the following observation which follows from
Lemma 5.3 in \cite{MU1}:
\begin{obs}\label{g3} Set $B_R = \left\{z \in \mathbb C: \ |z| \leq
R\right\}$. Then
$$
\lim_{R \to\infty} \sup_{x \in J(E_{\lambda})} \sum_{y \in E_{\lambda}^{-1}(x)
\backslash B_R } |y|^{-\beta} \ = \ 0
$$
when $\beta > 1$.
\end{obs}
It is well-known that $J(E_{\lambda})$ contains a fixed point $x_0$
for $E_{\lambda}$. It follows
from Observation \ref{g3} that there is an $R > 0$ such that
\begin{equation}\label{g8}
m : = \sup_{x \in J(E_{\lambda})} \sum_{y \in E_{\lambda}^{-1}(x) \backslash B_R
} |y|^{-\beta} < \left|x_0\right|^{-\beta} .
\end{equation}
Let $f \in C_c(J(E_{\lambda}))$ be a non-negative function such that $f(z) = 1$ when
$z \in B_R \cap J(E_{\lambda})$. Then $f + 1_{J(E_{\lambda})
\backslash B_R} \geq 1$ on $J(E_{\lambda})$ and hence
\begin{equation}\label{f22}
L_{-\beta \phi}(f)(x) + L_{-\beta \phi}\left( 1_{J(E_{\lambda})
\backslash B_R} \right)(x) \geq L_{-\beta \phi}(1)(x)
\end{equation}
for all $x\in J(E_{\lambda})$.
It follows from (\ref{f22}) first that
$$
L_{-\beta \phi}(f)(x) \geq L_{-\beta \phi}(1)(x) - m,
$$
and then that
$$
L^n_{-\beta \phi}(f)(x) \geq L^n_{-\beta \phi}(1)(x) - mL^{n-1}_{-\beta \phi}(1)(x)
$$
for all $n$ and $x$. By using that
$$
L^n_{-\beta \phi}(1)(x_0) = \sum_{y \in
E^{-1}_{\lambda}(x_0)}|y|^{-\beta} L^{n-1}_{-\beta \phi}(1)(y) \
\geq \
\left|x_0\right|^{-\beta} L^{n-1}_{-\beta \phi}(1)(x_0),
$$
we obtain the estimate
$$
L^n_{-\beta \phi}(f)(x_0) \geq \left(|x_0|^{-\beta} -m \right) L^{n-1}_{-\beta \phi}(1)(x_0)
$$
for all $n$. Thanks to (\ref{g8}) this proves (\ref{g10}).
\end{proof}
It follows from Lemma \ref{g6} that $\mathbb P(-\beta \phi) =
P(\beta)$, where $P$ is the pressure function considered
in Proposition 7.2 of \cite{MU1}. We can therefore deduce from
Proposition 7.2 in \cite{MU1} that there is a unique $\beta_0 > 1$ such that $\mathbb
P(-\beta_0 \phi) = 0$, and that $\mathbb P(-\beta \phi) < 0$ for all
$\beta > \beta_0$. As shown in \cite{MU1} the number $\beta_0$ is the
Hausdorff dimension $HD(J_r(E_{\lambda}))$ of the radial Julia set
$$
J_r(E_{\lambda}) = \left\{ z \in J(E_{\lambda}) : \ \liminf_n
|E_{\lambda}^n(z)| < \infty \right\} .
$$
In particular, $\beta_0 < 2$ by Corollary 1.4 in \cite{MU1}. A main
tool in \cite{MU1} for the study of $J_r(E_{\lambda})$ is a
${\beta_0 \phi}$-conformal measure. It follows now from Theorem
\ref{a63} that ${\beta \phi}$-conformal measures exist for all $\beta \geq HD(J_r(E_{\lambda}))$,
and from Proposition \ref{c43} that they are dissipative when $\beta >
HD(J_r(E_{\lambda}))$, i.e. we have the following.
\begin{prop}\label{g11} For each $\lambda \in ]0,e^{-1}[$ and each
$\beta \geq HD(J_r(E_{\lambda}))$ there is a ${\beta
\phi}$-conformal measure for $E_{\lambda} : J(E_{\lambda}) \to
J(E_{\lambda})$. For $\beta > HD(J_r(E_{\lambda}))$ the measures are dissipative.
\end{prop}
As shown in \cite{MU1} there is a Borel probability measure on
$J(E_{\lambda})$ which is a $\beta
\phi$-conformal measure when $\beta = HD(J_r(E_{\lambda}))$, and
in \cite{MU2} it is shown that it is unique. The paper \cite{MU2}
contains much more information on this Borel probability measure.
If we consider the potential
$$
\psi(z) = \log \left|E_{\lambda}(z)\right|,
$$
there is a bijective correspondence between the ${\beta
\phi}$-conformal and the ${\beta \psi}$-conformal measures given
by sending the ${\beta \phi}$-conformal measure $m$ to the ${\beta
\psi}$-conformal measure $|z|^{\beta} dm(z)$. It follows from the
argument in
Remark 5.7 in \cite{Th3} that there are no finite ${\beta
\psi}$-conformal measures for any $\beta$, but Proposition \ref{g11}
shows that infinite conformal measures exist for all $\beta \geq HD(J_r(E_{\lambda}))$:
\begin{cor}\label{g12} For each $\lambda \in ]0,e^{-1}[$ and each
$\beta \geq HD(J_r(E_{\lambda}))$ there is a ${\beta
\psi}$-conformal measure for $E_{\lambda} : J(E_{\lambda}) \to
J(E_{\lambda})$. For $\beta > HD(J_r(E_{\lambda}))$ the measures are dissipative.
\end{cor}
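The correspondence between ${\beta \phi}$-conformal and ${\beta
\psi}$-conformal measures used above is verified by a direct
computation from the characterization in Lemma \ref{a101}: if $m$ is
${\beta\phi}$-conformal and $dm'(z) = |z|^{\beta} dm(z)$, then for any
open set $U$ on which $E_{\lambda}$ is injective and any $g \in
C_c(U)$,
$$
\int_{E_{\lambda}(U)} g \circ \left(E_{\lambda}|_U\right)^{-1}(w) \
dm'(w) = \int_U g(z) \left|E_{\lambda}(z)\right|^{\beta} e^{\beta
\phi(z)} \ dm(z) = \int_U g(z) e^{\beta \psi(z)} \ dm'(z) ,
$$
since $e^{\beta \phi(z)} = |z|^{\beta}$ and $e^{\beta \psi(z)} =
\left|E_{\lambda}(z)\right|^{\beta}$.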
\section{KMS weights from conformal measures}\label{last}
A local homeomorphism $\sigma$ on a locally compact Hausdorff space
$X$ gives rise to an
\'etale groupoid $\Gamma_{\sigma}$ and hence a $C^*$-algebra
$C^*(\Gamma_{\sigma})$ by a construction introduced in
increasing generality by Renault, Deaconu and Anantharaman-Delaroche,
\cite{Re1}, \cite{De}, \cite{An}. To
describe the construction, set
$$
\Gamma_{\sigma} = \left\{ (x,k,y) \in X \times \mathbb Z \times X :
\ \exists n,m \in \mathbb N, \ k = n -m , \ \sigma^n(x) =
\sigma^m(y)\right\} .
$$
This is a groupoid with the set of composable pairs being
$$
\Gamma_{\sigma}^{(2)} \ = \ \left\{\left((x,k,y), (x',k',y')\right) \in \Gamma_{\sigma} \times
\Gamma_{\sigma} : \ y = x'\right\}.
$$
The multiplication and inversion are given by
$$
(x,k,y)(y,k',y') = (x,k+k',y') \ \text{and} \ (x,k,y)^{-1} = (y,-k,x)
.
$$
$\Gamma_{\sigma}$ is a locally compact Hausdorff space in the topology where
open
sets $U,V$ in $X$ with the property that $\sigma^n$ is injective on $U$ and
$\sigma^m$ is injective on $V$ give rise to the open set
$$
\left\{ (x,k,y) \in U \times \{n-m\} \times V : \ \sigma^n(x) =
\sigma^m(y) \right\}
$$
in $\Gamma_{\sigma}$, and where sets of this form constitute a base for the topology. The
set $C_c(\Gamma_{\sigma})$ of compactly supported functions on
$\Gamma_{\sigma}$ is a
$*$-algebra with product
$$
fg(x,k,y) = \sum_{z, \ n- m =k } f(x,n,z)g(z,m,y)
$$
and involution $f^*(x,k,y) = \overline{f(y,-k,x)}$. The $C^*$-algebra
$C^*(\Gamma_{\sigma})$ is the completion of $C_c(\Gamma_{\sigma})$
with respect to a natural norm, cf. e.g. \cite{An} or
\cite{Th1}. Since we have been working with the condition that $\sigma$
is cofinal it is worth pointing out that cofinality of $\sigma$ is
almost equivalent to the simplicity of $C^*(\Gamma_{\sigma})$. In
fact, by
Theorem 4.16 in \cite{Th1}, $C^*(\Gamma_{\sigma})$ is simple if and only
if $\sigma$ is cofinal (a condition which was called ``irreducibility'' in
\cite{Th1}) and $\left\{ x \in X : \ \sigma^k(x) = x \right\}$ has
non-empty interior for all $k \in \mathbb N$. In particular, if
$(X,\sigma)$ is non-compact, $C^*(\Gamma_{\sigma})$ is simple if and
only if $\sigma$ is cofinal.
A continuous
function $\phi : X \to \mathbb R$ gives rise to a continuous
one-parameter automorphism group $\alpha^{\phi}_t, \ t \in \mathbb R$,
on $C^*(\Gamma_{\sigma})$
defined such that
$$
\alpha^{\phi}_t(f)(x,k,y) = \lim_{n \to \infty} e^{i t \left(\phi_{k+n}(x) - \phi_n(y)\right)} f(x,k,y)
$$
when $f \in C_c(\Gamma_{\sigma})$. The case where $\phi$ is the constant function
$1$ yields the so-called \emph{gauge action}.
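Explicitly, when $\phi = 1$ we have $\phi_n = n$, so the exponent
above equals $t\left((k+n) - n\right) = tk$ and
$$
\alpha^{1}_t(f)(x,k,y) = e^{itk} f(x,k,y) .
$$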
Any regular Borel measure $m$ on $X$ gives rise to a densely defined
lower semi-continuous weight $\varphi_m$ on $C^*(\Gamma_{\sigma})$ such
that
$$
\varphi_m(f) = \int_{X} f(x,0,x) \ dm(x)
$$
when $f \in C_c(\Gamma_{\sigma})$. This construction is the link
between conformal measures and KMS-weights because it turns out that
for any $\beta \in \mathbb R$, a regular Borel measure $m$ is $\beta\phi$-conformal if and
only if the weight $\varphi_m$ is a $\beta$-KMS-weight for the one-parameter
group $\alpha^{\phi}$, cf. Proposition 2.1 and Lemma 3.2 in
\cite{Th4}. Furthermore, all gauge-invariant $\beta$-KMS weights arise
in this way by Proposition 3.1 in \cite{Th4}. Thanks to this relation
between KMS-weights and conformal measures the preceding methods and
results have consequences for KMS-weights, some of which we now
summarise. For example we get the following from Theorem \ref{a63}.
\begin{thm}\label{e2} Assume that $(X,\sigma)$ is non-compact and
cofinal, and that $\phi : X \to \mathbb R$ is a continuous
function. Let $\beta \in \mathbb R$ be a real number such that
$\mathbb P(-\beta \phi) \leq 0$. Then there is a gauge-invariant $\beta$-KMS weight for the
one-parameter group $\alpha^{\phi}$ on $C^*(\Gamma_{\sigma})$.
\end{thm}
When $G$ is a cofinal graph, as those considered in Sections \ref{MS1}
and \ref{MS2}, the $C^*$-algebra $C^*(\Gamma_{\sigma})$ coming from
the shift $\sigma$ on $\mathcal P(G)$ is
known as the graph $C^*$-algebra associated with $G$, cf. \cite{KPRR},
and it is usually denoted by $C^*(G)$.
From the results above regarding more general
potentials we easily
get the following consequences.
\begin{thm}\label{e3} Let $G$ be a cofinal graph and $\phi :
\mathcal P(G) \to \mathbb R$ a function such that $\lim_{k
\to \infty} \operatorname{var}_k(\phi) =0$. Consider the one-parameter group
$\alpha^{\phi}$ on $C^*(G)$.
\begin{enumerate}
\item[1)] Assume that $NW_G = \emptyset$. There is a gauge-invariant
$\beta$-KMS weight for $\alpha^{\phi}$ for
all $\beta \in \mathbb R$.
\item[2)] Assume that $NW_G$ is non-empty and finite. Assume that
$\phi$ satisfies Bowen's condition on $NW_G$. There is a gauge-invariant
$\beta$-KMS weight for $\alpha^{\phi}$ if and only if $\mathbb P(-\beta
\phi) = 0$, and if it exists this $\beta$-KMS weight is
unique up to multiplication by a scalar.
\item[3)] Assume that $NW_G$ is infinite. There is a gauge-invariant
$\beta$-KMS weight for $\alpha^{\phi}$ if and only if $\mathbb P(-\beta
\phi) \leq 0$.
\end{enumerate}
\end{thm}
\begin{proof} Combine Theorem \ref{c41} above with Proposition 3.1 in \cite{Th4}.
\end{proof}
It should be noted that in case 2) of Theorem \ref{e3} the function $\beta \mapsto
\mathbb P(-\beta \phi)$ may not have any zeroes, and hence KMS weights
(or states if $G = NW_G$) may not exist, cf. Example 3.7 in \cite{KR}.
\begin{cor}\label{e4} Let $G$ be a cofinal graph and $\phi :
\mathcal P(G) \to ]0,\infty[$ a function such that $\lim_{k
\to \infty} \operatorname{var}_k(\phi) =0$. Consider the one-parameter group
$\alpha^{\phi}$ on $C^*(G)$.
\begin{enumerate}
\item[1)] Assume that $NW_G = \emptyset$. There is a gauge-invariant $\beta$-KMS weight for $\alpha^{\phi}$ for
all $\beta \in \mathbb R$.
\item[2)] Assume that $NW_G$ is non-empty and finite. Assume that
$\phi$ satisfies Bowen's condition on $NW_G$. There is a
$\beta_0 \in ]0,\infty[$ such that there is a gauge-invariant $\beta$-KMS weight for
$\alpha^{\phi}$ if and only if $\beta = \beta_0$, and it is then
unique up to multiplication by a scalar.
\item[3)] Assume that $NW_G$ is infinite. Assume also that $\phi$ is bounded
away from $0$ and $\infty$, i.e. there are $0 < a \leq b < \infty$
such that $\phi(y) \in [a,b]$ for all $y\in \mathcal P(G)$. There is
a $\beta_0 \in ]0,\infty]$ such that a gauge-invariant $\beta$-KMS weight
for $\alpha^{\phi}$ exists if and only if $\beta \geq \beta_0$.
\end{enumerate}
\end{cor}
\begin{proof} Only 2) and 3) require proof. Consider first case 3) and
assume that the Gurewich entropy $P_{NW_G}(0) = \mathbb P(0)$ of $NW_G$
is infinite. For any $\epsilon > 0$, any $x \in \mathcal P(NW_G)$ and any $\beta \in \mathbb R$ there is a finite path $\mu$ in
$NW_G$ such that
$$
\min \left\{e^{-\beta a}, e^{-\beta b}\right\}^ne^{-n\epsilon}
\sum_{\sigma^n(y) = y} 1_{Z(\mu)}(y) \ \leq
\ L^n_{-\beta \phi}\left(1_{Z(\mu)}\right)(x)
$$
for all large $n$, cf. (\ref{h61}). Since $e^{\mathbb P(0)} =
\limsup_n \left( \sum_{\sigma^n(y) = y}
1_{Z(\mu)}(y)\right)^{\frac{1}{n}} = \infty$, it follows that
$\mathbb P_x(-\beta \phi) = \infty$. By Lemma \ref{c16} there are
therefore no $\beta \phi$-conformal measures and we have to set
$\beta_0 = \infty$ in this case.
Case 2) and case 3) with $\mathbb P(0)$ finite can be handled
together. In both cases the proof depends
on the finiteness and continuity of the function $\beta \mapsto
\mathbb P(-\beta \phi)$. To establish these properties, note that there are
constants $ 0 < a
\leq b < \infty$
such that $a \leq \phi(x) \leq b$ for all $x\in \mathcal
P(NW_G)$. In case 3) this is an assumption, and in case 2) it
follows from compactness of $\mathcal P(NW_G)$. Let $\beta, \beta'
\in \mathbb R, \ \beta \leq \beta'$, and consider a non-negative $f \in
C_c(\mathcal P(G))$. Then
$$
\sum_{y \in
\sigma^{-n}(x)} e^{(\beta - \beta')nb } e^{-\beta
\phi_n(y)}f(y) \leq \sum_{y \in \sigma^{-n}(x)} e^{- \beta' \phi_n(y)}f(y) \leq \sum_{y \in
\sigma^{-n}(x)} e^{(\beta - \beta')n a} e^{-\beta
\phi_n(y)}f(y),
$$
leading to the estimates
$$
(\beta - \beta') b + \mathbb P(- \beta \phi) \leq \mathbb P(-\beta' \phi) \leq (\beta -
\beta') a + \mathbb
P(-\beta \phi) .
$$
It follows first that $\mathbb P(-\beta \phi) \in \mathbb
R$ for all $\beta \in \mathbb R$ since $\mathbb P(0)$ is finite; in case 2)
because $NW_G$ is finite, and in case 3) by assumption. Once this is
established the estimates above show that $\beta \mapsto \mathbb P(-\beta \phi)$
is continuous, strictly decreasing and converges to $-\infty$ when
$\beta \to \infty$. In this way 2) and 3) follow from the
corresponding cases of Theorem \ref{e3}.
\end{proof}
If we take $\phi$ to be constant $1$ in Corollary \ref{e4}, we recover
Theorem 4.3 in \cite{Th4}.
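Indeed, for $\phi = 1$ we have $\phi_n(y) = n$, so that $L^n_{-\beta
\phi}(f) = e^{-\beta n} L^n_{0}(f)$ and $\mathbb P(-\beta \phi) =
\mathbb P(0) - \beta$; when $\mathbb P(0)$ is finite the critical
inverse temperature above is therefore $\beta_0 = \mathbb P(0)$.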
\section{Introduction}
A fundamental expectation of a quantum theory of gravity is that it will cure the problems
that plague classical general relativity. One hopes, for example, that singularities get resolved in quantum
gravity \cite{cm69,bb84,kr97,ymo00,bt06}, that quantum gravity will provide the theoretical foundation for
cosmic censorship \cite{rp79} and that it will give a better understanding of the relationship, already
predicted on the semi-classical level, between gravity and thermodynamics \cite{sh71,jb72,bch73,sh75}.
In the absence of a generally agreed upon framework for such a theory, a useful approach is to quantize
simplified classical gravitational models using canonical techniques, in the expectation that this will
lead to new ways to look at some of the issues raised above while, at the same time, pointing to what one
may expect out of the full theory. In this spirit, we recently developed \cite{vaz10} a novel approach to
Hawking evaporation taking place during the collapse of a self-gravitating dust ball. This approach,
based on an exact canonical quantization of the non-rotating, marginally bound gravity-dust
system \cite{ltb}, exploited
the matching conditions that must be satisfied at the apparent horizon by the wave functionals describing
the collapse and differs from the traditional approach in which a pre-existing black hole
is imagined to be surrounded by a tenuous field and the Bogoliubov transformation of the field operators
is computed in the black hole background \cite{vksw02,kmsv07,fgk10}.
The geometrodynamic constraints of all Lema\^\i tre-Tolman-Bondi (LTB) models in any dimension, with or
without a cosmological
constant, are expressible in terms of a canonical chart consisting of the area radius, $R$, the dust proper
time, $\tau$, the mass function, $F$, and their conjugate momenta \cite{vws01,tsv08}. After a series of
canonical transformations in the spirit of Kucha\v r \cite{kk94}, the Hamiltonian constraint can be shown
to yield a Klein-Gordon-like Wheeler-DeWitt equation for the wave-functional. This equation can be solved
by quadrature and, in the simplest cases, closed form solutions can be obtained after regularization is
implemented on a lattice in a self-consistent manner. Self-consistency requires that the lattice
decomposition is compatible with the diffeomorphism constraint \cite{kmv06}. In these models, the dust
ball may be viewed as being made up of shells and the wave functional is described as the continuum limit
of an infinite product over the shell wave functions.
For the special case of marginal collapse with a vanishing cosmological constant in 3+1 dimensions, the
Wheeler-DeWitt equation can be solved explicitly. We showed in \cite{vaz10} that matching the shell
wave-functions across the apparent horizon requires ingoing modes in the exterior to be accompanied by
outgoing modes in the interior and, vice-versa, ingoing modes in the interior to be accompanied by outgoing
modes in the exterior. In each case the relative amplitude of the outgoing wave is suppressed by the square
root of the Boltzmann factor at a ``Hawking'' temperature given by $T_H=(4\pi F)^{-1}$, where $F$ represents
twice the mass contained within the shell. Thus the temperature varies from shell to shell, decreasing from
the interior to the exterior, but it has the Hawking form for any given shell.
Two separate solutions are possible: one in which there is a flow of matter toward the apparent horizon both in
the exterior and in the interior, and another in which the flow is away from the horizon, again in both regions.
Matter undergoing continual collapse across the apparent horizon is described by a linear superposition of these
solutions and then, because ingoing waves in the interior are accompanied by outgoing waves in the exterior, the
horizon appears, to the external observer with no access to the interior, to possess a reflectivity given by the
Boltzmann factor at the above Hawking temperature. A different interpretation is also possible when the entire
shell wave functions are taken into account. Ingoing waves in the exterior must be accompanied by outgoing waves
in the interior, whose amplitude is also suppressed by the square root of the Boltzmann factor at the Hawking
temperature. We showed that the transmittance of the horizon is unity, whether for waves incident from the
exterior or the interior. Thus this outgoing wave in the interior passes through the apparent horizon unhindered
but, because its amplitude is suppressed by the Boltzmann factor at the Hawking temperature relative to the
ingoing modes in the exterior, the emission probability of the horizon is given by the same factor. The net
effect is therefore reminiscent of the quasi-classical tunneling of particles through the horizon in the
semi-classical theory \cite{kw95,sp99,gv99,pw00,mp04}.
The solutions just described relied on explicit solutions for the shell wave functions. These are available
only in the case of the marginal models with a vanishing cosmological constant. Our aim here is to extend
these results to non-marginally-bound LTB models in arbitrary dimension with a negative cosmological constant.
No explicit solutions can be given in this case. Nevertheless, we will show that the results mentioned in the
previous paragraphs are indeed generic and that they are a consequence only of the essential singularity of the
Klein-Gordon equation for shells at the apparent horizon. We will then discuss how diffeomorphism invariant
wave functionals may be reconstructed out of the shell wave functions. Hawking radiation from the apparent
horizon then appears as a consequence of the generic form of the Wheeler-DeWitt equation describing
dust collapse and not of any particular solution discussed in the earlier work. This will provide a novel way
to compute the entropy of the final state black hole.
The plan of this paper is as follows. In section II we recall some key results for classical dust collapse
with a negative cosmological constant in arbitrary dimensions. In section III we present the exact wave
functional that is factorizable on a lattice and solves the Wheeler-DeWitt equation for dust collapse.
The (collapse) wave functional can be thought of as an
infinite product of shell wave functions, each occupying a lattice site. Matching shell wave functions across
the horizon by analytic continuation in section IV, we argue that ingoing waves in one region must be
accompanied by outgoing waves in the other. We superpose the two solutions to conserve the flux of shells
across the horizon and then reconstruct the wave functional from the shell wave functions by going to the
continuum limit. A consequence of matching shells across the apparent horizon is that the amplitude for
outgoing waves relative to ingoing ones is given by $e^{-S/2}$, where $S$ is the Bekenstein-Hawking entropy of the final state black hole.
We close with a brief discussion of our results in section V.
\section{Non-Marginal Dust Collapse with $\Lambda\neq 0$ in $d$ Dimensions}
\subsection{The classical models}
Let us begin by briefly recalling some pertinent facts about spherical dust collapse in the presence of a
negative cosmological constant (see, for example, \cite{tgsv08} for details). The LTB models describe self-gravitating
time-like dust whose energy-momentum tensor is $T_{\mu\nu} = \varepsilon(x) U_\mu U_\nu$, where $U^\mu(\tau,\rho)$
is the four-velocity of the dust particles, which are labeled by $\rho$ and carry proper time $\tau$. The line
element can be taken to be
\begin{equation}
ds^2 = -g_{\mu\nu} dx^\mu dx^\nu = d\tau^2 - \frac{{\widetilde R}^2(\tau,\rho)}{1+2 E(\rho)}d\rho^2-R^2(\tau,\rho)d\Omega_n^2
\label{ltbmetric}
\end{equation}
where $R(\tau,\rho)$ is the area radius, $E(\rho)$ is an arbitrary function of the shell label coordinate,
called the energy function, and $\Omega_n$ is the $n=d-2$ dimensional solid angle. Einstein's equations
in the presence of a negative cosmological constant, which we call $-\Lambda$,
\begin{equation}
G_{\mu\nu} + \Lambda g_{\mu\nu} = - \kappa_d T_{\mu\nu},
\end{equation}
yield one dynamical equation for the area radius,
\begin{equation}
{R^*}^2 = 2E(\rho) + \frac{F(\rho)}{R^{n-1}}-\frac{2\Lambda R^2}{n(n+1)},
\label{rdoteq}
\end{equation}
where the star refers to a derivative with respect to $\tau$ and $F(\rho)$ is a second arbitrary function of the
shell label coordinate, called the mass function. Above, $\kappa_d$ is given in terms of the $d-$dimensional
gravitational constant $G_d$ as $\kappa_d = 8\pi G_d$.
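For orientation, when $n=2$ ($d=4$) equation \eqref{rdoteq} takes the familiar form
\begin{equation*}
{R^*}^2 = 2E(\rho) + \frac{F(\rho)}{R} - \frac{\Lambda R^2}{3},
\end{equation*}
which is the standard LTB equation with cosmological constant $-\Lambda$.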
One also finds the energy density,
\begin{equation}
\varepsilon(\tau,\rho) = \frac n{2\kappa_d} \frac{{\widetilde F}}{R^n {\widetilde R}},
\label{energydensity}
\end{equation}
in terms of $F$, where the tilde refers to a derivative with respect to the label coordinate, $\rho$. Specific
models are obtained by making choices of the mass and energy functions. For the solutions of \eqref{rdoteq}
to describe gravitational collapse (as opposed to an expansion) one must impose the additional condition that
$R^*(\tau,\rho)<0$. The solutions to \eqref{rdoteq} have been explicitly given in \cite{tgsv08} and analyzed in
detail for the marginally bound case, $E(\rho)=0$. Each shell reaches a zero area radius, $R(\tau,\rho)=0$,
in a finite proper time, $\tau=\tau_s(\rho)$, which leads to a curvature singularity. Thus the proper time parameter lies in the interval $(-\infty,\tau_s]$. In general both naked singularities and black hole end states can form.
Trapped surfaces occur when
\begin{equation}
{\mathcal F}~ \stackrel{\text{def}}{=}~ 1-\frac F{R^{n-1}} + \frac{2\Lambda R^2}{n(n+1)} = 0,
\end{equation}
which determines the physical radius, $R_h$, of the apparent horizon. ${\mathcal F}$ is positive outside, {\it i.e.,}
when $R>R_h$, and negative inside, when $R<R_h$.
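For example, when $n=2$ the condition ${\mathcal F} = 0$ reads $1 - F/R_h + \Lambda R_h^2/3 = 0$, i.e.
\begin{equation*}
\frac{\Lambda}{3} R_h^3 + R_h = F,
\end{equation*}
the usual horizon condition for a Schwarzschild-anti-de Sitter geometry with mass function $F$.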
\subsection{The Canonical Formulation}
To develop a canonical formulation of the LTB models, one begins with the spherically symmetric
Arnowitt-Deser-Misner (ADM) metric
\begin{equation}
ds^2 = N^2 dt^2 - L^2 (dr+N^r dt)^2 - R^2 d\Omega_n^2,
\end{equation}
where $N(t,r)$ and $N^r(t,r)$ are, respectively, the lapse and shift functions, and with the Einstein-Hilbert
action for a self-gravitating dust ball
\begin{equation}
S_\text{EH} = \frac 1{2\kappa_d} \int d^d x \sqrt{-g} ({}^d{\mathcal R} + 2\Lambda) -\frac 12 \int d^d x\sqrt{-g}
(g_{\alpha\beta} U^\alpha U^\beta +1),
\end{equation}
where $U_\alpha=-\tau_{,\alpha}$ for non-rotating dust, where $\tau$ is the dust proper time. The phase space
consists of the dust proper time, $\tau(t,r)$, the area radius, $R(t,r)$, the radial function, $L(t,r)$, and
their conjugate momenta, respectively $P_\tau(t,r)$, $P_R(t,r)$ and $P_L(t,r)$.
When the ADM metric is embedded in the spacetime described by \eqref{ltbmetric} it becomes possible, through
a series of canonical transformations described in detail in \cite{tsv08}, to re-express the canonical constraints
in terms of a new canonical chart consisting of the dust proper time, the area radius and the mass density function,
$\Gamma(r)$, defined by
\begin{equation}
F(r) = \frac{2\kappa_d}{n\Omega_n}\left[M_0 + \int_0^r \Gamma(r') dr'\right]
\label{massdensity}
\end{equation}
and new conjugate momenta, $P_\tau(t,r)$, ${\overline P}_R(t,r)$ and $P_\Gamma(t,r)$. The energy function is
expressible in this chart as
\begin{equation}
\frac 1{\sqrt{1+2E}} = \frac{2P_\tau}\Gamma
\end{equation}
and the transformations also absorb a boundary term, which is present in the original chart. The constraints
for the dust-gravity system in any dimension are
\begin{eqnarray}
{\mathcal H}^g &=& P_\tau^2 + {\mathcal F} {\overline P}_R^2 -\frac{\Gamma^2}{{\mathcal F}} \approx 0\cr\cr
{\mathcal H}_r &=& \tau'P_\tau + R' {\overline P}_R -\Gamma P_\Gamma' \approx 0,
\label{constraints}
\end{eqnarray}
where the prime denotes a derivative with respect to the ADM label coordinate, $r$. The Hamiltonian constraint
in \eqref{constraints} will be seen to contain no derivative terms, which makes it easier to quantize.
However, the Poisson bracket of the Hamiltonian constraint with itself vanishes, indicating that the
Hamiltonian constraint does not generate hypersurface deformations. Rather, the transformations generated
by the Hamiltonian constraint act along the dust flow lines.
Of importance in what follows will be the following relationship between the dust proper time and the
remaining canonical variables (see, for example, \cite{vgks07})
\begin{equation}
\tau' = \frac{2P_\Gamma'}a \pm R' \frac{\sqrt{1-a^2 {\mathcal F}}}{a{\mathcal F}},
\end{equation}
where $a = 1/\sqrt{1+2E}$.\footnote{Einstein's equations guarantee that $1-a^2{\mathcal F} >0$ for $0<a<1$,
which corresponds to the case $E>0$.} The positive sign describes a collapsing dust cloud in the exterior
and an expanding dust cloud in the interior whereas the negative sign describes an expanding cloud in the exterior and a collapsing cloud in the interior. Integrating on a hypersurface of constant $t$ we have
the formal solution
\begin{equation}
\tau = 2\int_{t=\text{const.}} \frac{dP_\Gamma}a \pm \int_{t=\text{const.}}^R dR \frac{\sqrt{1-a^2 {\mathcal F}}}
{a{\mathcal F}} + \widetilde{\tau}(t),
\label{pt}
\end{equation}
where $\widetilde{\tau}(t)$ is undetermined. This integral can be difficult to solve for arbitrary
($r$-dependent) mass and energy functions, but when they are both constant beyond some boundary, $r_b$, then
the solution may be expressed as
\begin{equation}
a\tau = 2P_\Gamma \pm \int^R dR \frac{\sqrt{1-a^2{\mathcal F}}}{\mathcal F} + \widetilde{\tau}(t).
\label{ptforsads}
\end{equation}
In this case we are dealing with the static Schwarzschild-AdS geometry for which $2P_\Gamma$ may be associated
with the Killing time \cite{kk94}, so \eqref{ptforsads} gives the relationship between Schwarzschild-AdS and
Painlev\'e-Gullstrand time \cite{mp01}. Solutions in the absence of a cosmological constant have been given in
\cite{kmv06,vgks07}.
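As a simple illustration of \eqref{ptforsads}, consider the marginally
bound case $a=1$ (i.e., $E=0$) with $n=2$ and $\Lambda = 0$, so that
${\mathcal F} = 1 - F/R$. The integral can then be evaluated in closed
form,
\begin{equation*}
\int^R dR\ \frac{\sqrt{1-{\mathcal F}}}{{\mathcal F}} = \int^R dR\
\frac{\sqrt{FR}}{R-F} = 2\sqrt{FR} + F \ln\left|\frac{\sqrt{R} -
\sqrt{F}}{\sqrt{R}+\sqrt{F}}\right| ,
\end{equation*}
and \eqref{ptforsads} reduces to the standard relation between the
Killing time $2P_\Gamma$ and the Painlev\'e-Gullstrand time $\tau$.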
\section{Quantum States in a Lattice Decomposition}
When Dirac's quantization condition is used to raise the classical constraints to operator constraints, which
act on a wave functional, the Hamiltonian constraint turns into the Wheeler-DeWitt equation and the momentum
constraint imposes spatial diffeomorphism invariance on the wave functional. One sees that the second is solved
automatically by a wave-functional of the form
\begin{equation}
\Psi[\tau,R,F] = U\left[\int_{-\infty}^\infty dr \Gamma(r) {\mathcal W}(\tau(r),R(r),F(r))\right]
\label{wavefunctionaldef}
\end{equation}
provided that ${\mathcal W}$ contains no explicit dependence on $r$ and where $U:\mathbb{R} \rightarrow \mathbb{C}$ is
an arbitrary differentiable function of its argument.
The wave functional is factorizable on a lattice placed on the real line (e.g., see \cite{vaz10}) if $U$ is
chosen to be the exponential map, for then, taking the lattice spacing to be $\sigma$,
\eqref{wavefunctionaldef} can be written as
\begin{equation}
\Psi[\tau,R,F] = \lim_{\sigma\rightarrow 0} \prod_i \psi_i(\tau_i,R_i,F_i)
\end{equation}
where
\begin{equation}
\psi_i(\tau_i,R_i,F_i) = e^{\sigma\Gamma_i {\mathcal W}(\tau_i,R_i,F_i)}
\end{equation}
and we have used $X_i=X(r_i)$. Thus we can think of $\Psi[\tau,R,F]$ as an infinite product of shell wave
functions, each occupying a lattice site. Each shell wave function satisfies
\begin{equation}
\left[\hbar^2\left(\frac{\partial^2}{\partial\tau_i^2} + {\mathcal F}_i \frac{\partial^2}{\partial R_i^2} + A_i
\frac{\partial}{\partial R_i} + B_i\right) + \frac{\sigma^2\Gamma_i^2}{{\mathcal F}_i}\right]\psi_i = 0
\label{kgeqn}
\end{equation}
where $A_i=A(R_i,F_i)$ and $B_i=B(R_i,F_i)$ are functions capturing the factor ordering ambiguities
that are always present in the canonical approach. They can be uniquely determined by requiring the
above equation to be independent of the lattice spacing; one finds the general positive energy
solutions \cite{tsv08}
\begin{equation}
\psi_i = e^{\omega_i b_i}\times\exp\left\{-\frac{i\omega_i}\hbar \left[a_i\tau_i \pm \int^{R_i} dR_i
\frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\},
\label{shellwavefunctions}
\end{equation}
where $\omega_i = \sigma \Gamma_i/2$ and the factor $e^{\omega_i b_i}$ amounts to a normalization. Note that
shell ``$i$'' crosses the apparent horizon when
${\mathcal F}_i=0$, which is an essential singularity of the wave equation. From the shell wave functions in
\eqref{shellwavefunctions} one reconstructs the wave functionals with the ansatz \eqref{wavefunctionaldef}
and $U=\exp$ as
\begin{equation}
\Psi[\tau,R,\Gamma] = e^{\frac 12\int dr \Gamma b(F(r))}\times \exp\left\{-\frac i{2\hbar}
\int dr\Gamma\left[a(F(r))\tau\pm \int^R_{r=\text{const.}} dR \frac{\sqrt{1- a^2(F(r)){\mathcal F}}}
{\mathcal F}\right]\right\}\\
\label{collapsewavefunctional}
\end{equation}
where we have set $a(r)=a(F(r))$ and $b(r)=b(F(r))$ as is required for diffeomorphism invariance.
\section{Tunneling}
The wave-functions of the previous section are defined in the interior as well as the exterior of the
apparent horizon, but the Wheeler-DeWitt equation has an essential singularity at the apparent horizon along
the path of integration. In order to match interior to exterior solutions, it is necessary to deform the path
in the complex $R$-plane. The direction of the deformation is chosen so that positive energy solutions decay.
While this deformed path does not correspond to the trajectory of any classical particle it represents a
tunneling of $s$-waves across the gravitational barrier represented by the apparent horizon. This is analogous
to the quasi-classical tunneling approach employed in semi-classical analyses \cite{kw95,sp99,gv99,pw00,mp04}.
\subsection{Shell Wave Functions}
From the expression for the phase in \eqref{shellwavefunctions},
\begin{equation}
{\mathcal W}^{(\pm)}_i = \frac{\omega_i}\hbar \left[a_i\tau_i \pm \int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}
\right],
\label{phase}
\end{equation}
the phase velocity of the $i^\text{th}$ shell wave function is given by
\begin{equation}
\dot R_i = \mp \frac{a_i {\mathcal F}_i}{\sqrt{1-a_i^2{\mathcal F}_i}}.
\end{equation}
Thus the positive sign in \eqref{phase} describes ingoing waves in the exterior (${\mathcal F}_i>0$), whereas it describes
outgoing waves in the interior (${\mathcal F}_i<0$) and, likewise, the negative sign describes outgoing waves in the
exterior and ingoing waves in the interior.
A closed form solution for the integrals appearing in \eqref{collapsewavefunctional} and \eqref{pt} cannot in
general be given when the energy function and/or the cosmological constant are non-vanishing. We may
however analyze their properties near the apparent horizon in the following way. Noting that ${\mathcal F}_i = 0$,
equivalently $R_i=R_{i,h}$, is a singularity of the integral appearing in the phase, ${\mathcal W}_i$, we define the integral
by analytically continuing to the complex plane and deforming the integration path so as to go around the pole at
$R_{i,h}$ in a semi-circle of radius $\epsilon$ drawn in the upper half plane. Let $L_\epsilon$ denote the
deformed path and let $S_\epsilon$ denote the semi-circle of radius $\epsilon$ around $R_{i,h}$, then
\begin{equation}
\int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}~ \stackrel{\text{def}}{=}~ \lim_{\epsilon
\rightarrow 0} {\int\limits_{L_\epsilon}}^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}.
\end{equation}
Performing the integration from left to right,\footnote{The same result is obtained if the integration
is performed from right to left.} for $R_i = R_{i,h}+\epsilon$ we have
\begin{equation}
{\int\limits_{L_\epsilon}}^{R_{i,h}+\epsilon}dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i} =
{\int\limits_{L_\epsilon}}^{R_{i,h}-\epsilon}dR_i
\frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i} + \int\limits_{S_\epsilon} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}
\end{equation}
and, for the integral over the semi-circle, a Laurent series expansion about ${\mathcal F}_i=0$ gives to lowest
order
\begin{equation}
\int_{S_\epsilon} dR_i \frac{\sqrt{1-a_i^2 {\mathcal F}_i}}{{\mathcal F}_i} = \frac 1{{\mathcal F}'_i(R_{i,h})} \int_{S_\epsilon}
\frac{dR_i}{R_i-R_{i,h}}.
\end{equation}
This integral is half the integral over a complete circle taken in a clockwise manner,
\begin{equation}
\int\limits_{S_\epsilon} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i} = \frac 1{2{\mathcal F}'_i(R_{i,h})} \oint_{C_\epsilon}
\frac{dR_i }{R_i-R_{i,h}} = -\frac{i\pi}{2g_{i,h}},
\end{equation}
where $g_{i,h} = {\mathcal F}'_i(R_{i,h})/2$ is the surface gravity of the horizon. Therefore we find
\begin{equation}
{\int\limits_{L_\epsilon}}^{R_{i,h}+\epsilon}dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i} =
{\int\limits_{L_\epsilon}}^{R_{i,h}-\epsilon}dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i} - \frac{i\pi}{2g_{i,h}}.
\end{equation}
The expression defining the proper time in \eqref{pt} involves a similar integral, but taken over a spatial slice.
Even so, the same argument can be made for this integral (see \cite{aas06,apgs09} for an analogous argument in the
semi-classical context). If we assume, moreover, that $a(F(r))$ and $P_\Gamma(r)$
are both regular across the horizon, the net result is that the phases in the exterior get matched
to the phases in the interior by the addition of a constant imaginary term,
\begin{equation}
{\mathcal W}_\text{out}^{(\pm)}(\tau_i,R_i,F_i) = {\mathcal W}_\text{in}^{(\pm)}(\tau_i,R_i,F_i) \mp \frac{i\pi\omega_i}
{\hbar g_{i,h}}.
\end{equation}
Thus we can give two independent solutions with support everywhere in the spacetime: an ingoing wave in the
exterior that is matched to an outgoing wave in the interior
\begin{equation}
\psi_i^{(1)}(\tau_i,R_i,F_i) = \left\{\begin{matrix}
e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar \left[a_i \tau_i + \int^{R_i} dR_i
\frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i>0\cr\cr
e^{-\frac{\pi\omega_i}{\hbar{g_{i,h}}}}\times e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar
\left[a_i \tau_i + \int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i < 0
\end{matrix}\right.
\end{equation}
and an outgoing wave in the exterior that is matched to an ingoing wave in the interior according to
\begin{equation}
\psi_i^{(2)}(\tau_i,R_i,F_i) = \left\{\begin{matrix}
e^{-\frac{\pi\omega_i}{\hbar{g_{i,h}}}}\times e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar
\left[a_i \tau_i - \int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i > 0 \cr\cr
e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar \left[a_i \tau_i - \int^{R_i} dR_i
\frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i < 0
\end{matrix}\right.
\end{equation}
The first of these solutions represents a flow towards the apparent horizon both in the exterior as well as
in the interior whereas the second represents a flow away from the apparent horizon in both regions.
While each solution is self-consistent, neither wave function accurately reflects the physical situation one
expects from semi-classical collapse, in which an ingoing shell proceeds all the way to the center and the
horizon emits thermal radiation into the exterior at the Hawking temperature.
To recover this picture we consider a linear superposition of the two wave functions
\begin{equation}
\psi_i = \psi^{(1)}_i + A_i \psi^{(2)}_i,
\end{equation}
where $A_i$ are complex valued constants. Following \cite{vaz10}, we fix these constants by requiring that
the current density is constant across the horizon. This implies that $|A_i|^2=1$ and we take $A_i=1$ for
every shell, which gives an absorption probability of unity for a shell to cross the apparent horizon from
the exterior. We therefore have
\begin{equation}
\psi_i = \left\{\begin{matrix}
\hskip -1.0in e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar \left[a_i \tau_i + \int^{R_i}
dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} + & \cr
\hskip 1.0in +~ e^{-\frac{\pi\omega_i}{\hbar{g_{i,h}}}}\times e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar
\left[a_i \tau_i - \int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i > 0 \cr\cr
\hskip -1.0in e^{-\frac{\pi\omega_i}{\hbar{g_{i,h}}}}\times e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar
\left[a_i \tau_i + \int^{R_i} dR_i \frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} + & \cr
\hskip 1.0in +~ e^{\omega_ib_i} \times \exp\left\{-\frac{i\omega_i}\hbar \left[a_i \tau_i - \int^{R_i} dR_i
\frac{\sqrt{1-a_i^2{\mathcal F}_i}}{{\mathcal F}_i}\right]\right\} & {\mathcal F}_i < 0
\end{matrix}\right.
\label{fullwavefunctions}
\end{equation}
The second term in the expression for $\psi_i$ in the exterior (${\mathcal F}_i>0$) is an outgoing wave that, to an external
observer, would represent a reflection with relative probability
\begin{equation}
\frac{P_{\text{ref},i}}{P_{\text{abs},i}} = e^{-\frac{2\pi\omega_i}{\hbar {g_{i,h}}}}.
\end{equation}
This is precisely the Boltzmann factor for the shell at temperature $T_{i,H}=\frac{\hbar {g_{i,h}}}
{2\pi k_B}$, where ${g_{i,h}}$ is the surface gravity of the apparent horizon.
This reflected piece is a purely quantum effect, necessitated by the
existence of an ingoing wave in the interior, {\it i.e.,} by requiring the continued collapse of the shell
beyond its apparent horizon. This continued collapse is represented by the second term in the expression
for $\psi_i$ in the interior (${\mathcal F}_i<0$). On the other hand, the ingoing wave in the exterior, represented
by the first term in the expression for $\psi_i$ when ${\mathcal F}_i>0$, is necessarily accompanied by an outgoing wave
in the interior occurring with a relative amplitude of $e^{-\frac{\pi\omega_i}{\hbar{g_{i,h}}}}$, which is
equal to the amplitude for ``reflection'' at the apparent horizon. This leads to the alternate picture
mentioned in the Introduction, in which the Hawking process can be viewed as an effective emission from
the apparent horizon.
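As a purely illustrative aside, the thermal reading of this result is easy to check numerically. The following Python sketch (our own illustration, in units with $\hbar=k_B=1$ and with arbitrary trial values of $\omega_i$ and $g_{i,h}$) verifies that the relative reflection probability coincides with the Boltzmann factor at the Hawking temperature:
\begin{verbatim}
import numpy as np

hbar = kB = 1.0  # natural units; only the combinations below matter

def reflection_ratio(omega, g):
    """P_ref/P_abs = exp(-2*pi*omega/(hbar*g)) for surface gravity g."""
    return np.exp(-2.0 * np.pi * omega / (hbar * g))

def hawking_temperature(g):
    """T_H = hbar*g/(2*pi*kB)."""
    return hbar * g / (2.0 * np.pi * kB)

omega, g = 0.7, 1.3  # arbitrary shell energy and surface gravity
assert np.isclose(reflection_ratio(omega, g),
                  np.exp(-omega / (kB * hawking_temperature(g))))
\end{verbatim}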
\subsection{Phase Transition}
We can use our results above to examine what happens near the Hawking-Page transition point \cite{hp83}.
It is worth noting that the Hawking temperature is independent of the energy function. This was first noted
in \cite{vgks07} in connection with non-marginal LTB models without a cosmological constant. Here we have shown
that the result is robust, holding even in the presence of a negative cosmological constant.
Now the apparent horizon is given for each shell as the solution of the equation
\begin{equation}
2x_{i,h}^{n+1} + n(n+1) x_{i,h}^{n-1} - n(n+1)F_i\Lambda^{\frac{n-1}2} = 0,
\label{hor}
\end{equation}
where $x_i = R_i\sqrt{\Lambda}$ is dimensionless, and it is straightforward to show that the surface gravity
for each shell is
\begin{equation}
{g_{i,h}} = \frac{\sqrt{\Lambda}}2\left[\frac{2x_{i,h}}n + \frac{n-1}{x_{i,h}}\right].
\label{surfgrav}
\end{equation}
For fixed $n$ and $\Lambda$,
\begin{equation}
\frac{d{g_{i,h}}}{dF_i} = \frac{\sqrt{\Lambda}}2 \frac{dx_{i,h}}{dF_i} \left[\frac 2n - \frac{n-1}{x_{i,h}^2}
\right],
\end{equation}
so using the fact that $dx_{i,h}/dF_i>0$, which follows directly from \eqref{hor}, we find that $d{g_{i,h}}/dF_i>0$
when $2x_{i,h}^2> n(n-1)$. This is the condition for positive specific heat. When $2x_{i,h}^2 < n(n-1)$ the
specific heat is negative and the two regimes are separated by the Hawking-Page (phase) transition \cite{hp83}, which
occurs at $2x_{i,h}^2=n(n-1)$. The wave functions of collapse in \eqref{fullwavefunctions} are
well behaved at the transition point.
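For concreteness, \eqref{hor} and \eqref{surfgrav} are straightforward to explore numerically. The following Python sketch (an illustration only; the helper names and the choice $\Lambda=1$ are ours) solves \eqref{hor} by bisection and checks the sign of $d{g_{i,h}}/dF_i$ against the specific heat criterion $2x_{i,h}^2\gtrless n(n-1)$:
\begin{verbatim}
import numpy as np

def horizon_radius(F, n, Lam=1.0):
    """Root x_h of 2 x^(n+1) + n(n+1) x^(n-1) - n(n+1) F Lam^((n-1)/2) = 0."""
    rhs = n * (n + 1) * F * Lam ** ((n - 1) / 2)
    h = lambda x: 2 * x ** (n + 1) + n * (n + 1) * x ** (n - 1) - rhs
    lo, hi = 1e-12, 1.0
    while h(hi) < 0:                # bracket the unique positive root
        hi *= 2.0
    for _ in range(200):            # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def surface_gravity(x, n, Lam=1.0):
    """g = (sqrt(Lam)/2) * (2 x / n + (n - 1)/x)."""
    return 0.5 * np.sqrt(Lam) * (2.0 * x / n + (n - 1) / x)

n = 4
for F in (0.1, 1.0, 10.0, 100.0):
    x = horizon_radius(F, n)
    dg = surface_gravity(horizon_radius(1.001 * F, n), n) - surface_gravity(x, n)
    heat = "positive" if 2 * x**2 > n * (n - 1) else "negative"
    print(F, x, heat, "dg/dF > 0" if dg > 0 else "dg/dF < 0")
\end{verbatim}
The two printed diagnostics agree for every $F$, as they must.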
\subsection{Wave Functionals}
We now turn to the question of how the collapsing shell wave functions described in the previous section may be combined
to yield wave functionals. Obviously the superposed wave functions of \eqref{fullwavefunctions} cannot be directly
used for this purpose as they are not the simple exponentials required for diffeomorphism invariance by
\eqref{wavefunctionaldef}. Instead, we take the continuum limit of the product of the shell wave functions
$\psi_i^{(1)}$ and $\psi_i^{(2)}$ separately to form two corresponding diffeomorphism invariant wave functionals,
$\Psi_1$ and $\Psi_2$. Then, we take a linear combination of these wave functionals to form the full wave functional describing the collapse \cite{vaz10}.
Accordingly, the functional equivalent of the superposed wave functions in \eqref{fullwavefunctions} is
$\Psi=\Psi_1+\Psi_2$, or
\begin{equation}
\Psi= \left\{\begin{matrix}
\hskip -0.5in e^{\frac 12\int dr \Gamma b} \times \exp\left\{-\frac i{2\hbar} \int dr \Gamma \left[a \tau +
\int_{r=\text{const.}}^{R} dR \frac{\sqrt{1-a^2{\mathcal F}}}{{\mathcal F}}\right]\right\} + & \cr\cr
\hskip 0.5in +~ e^{-S/2}\times e^{\frac 12\int dr \Gamma b} \times \exp\left\{-\frac i{2\hbar}
\int dr \Gamma \left[a \tau - \int_{r=\text{const.}}^R dR \frac{\sqrt{1-a^2{\mathcal F}}}{{\mathcal F}}\right]\right\} & {\mathcal F} > 0 \cr\cr\cr
\hskip -0.5in e^{-S/2}\times e^{\frac 12\int dr\Gamma b} \times \exp\left\{-\frac
i{2\hbar} \int dr \Gamma \left[a \tau + \int_{r=\text{const.}}^R dR \frac{\sqrt{1-a^2{\mathcal F}}}{{\mathcal F}}\right]\right\} + & \cr\cr
\hskip 0.5in +~ e^{\frac 12\int dr \Gamma b} \times \exp\left\{-\frac i{2\hbar}\int dr \Gamma \left[a \tau -
\int_{r=\text{const.}}^R dR \frac{\sqrt{1-a^2{\mathcal F}}}{{\mathcal F}}\right]\right\} & {\mathcal F} < 0
\end{matrix}\right.
\end{equation}
When use is made of \eqref{hor} and \eqref{surfgrav}, the relative amplitude, $e^{-\frac\pi{2\hbar}
\int dr \Gamma/{g_h}}$, works out to precisely $e^{-S/2}$, where $S=A_h/4\hbar G_d$ is the
Bekenstein-Hawking entropy of the black hole. Thus the ratio of the reflection probability to the
probability for absorption is determined only by the entropy of the black hole,
\begin{equation}
\frac{P_\text{ref}}{P_\text{abs}} = e^{-S}
\end{equation}
and we have recovered the results of \cite{vaz10} in the more general setting of $d-$dimensional, non-marginal
collapse and in the presence of a (negative) cosmological constant.
\section{Conclusions}
In this paper we have used the wave functionals of an exact midi-superspace quantization of the
non-marginally-bound LTB models in the presence of a cosmological constant and in an arbitrary number of
spatial dimensions to study the Hawking evaporation process. As in previous works, regularization was
performed on a lattice and the wave functionals were shown to be constructed out of wave functions
describing individual shells of collapsing dust. The apparent horizon is an essential singularity of
the Wheeler-DeWitt equation and the solutions of the latter could only be given by quadrature separately
in the exterior and in the interior of the apparent horizon. The central issue discussed here was how
the interior and exterior solutions can be matched across the horizon. To accomplish the required matching
we defined the integrals appearing in the solution of the Wheeler-DeWitt equation
by deforming the integration path in the complex plane to go around the pole at the apparent horizon. This
implied that crossing the horizon involved a rotation of the dust proper time in the complex plane and had
the effect of introducing an imaginary constant into the phase of the outgoing wave
functions. We were then able to show that an ingoing shell wave function in one region is required to be accompanied by an outgoing shell wave function in the other region. The relative amplitude of the outgoing
wave function in each case was shown to be given by the square root of the Boltzmann factor at the Hawking
temperature appropriate to the shell.
The approach in this paper enjoys several advantages over the approach via Bogoliubov coefficients, while also
producing an alternative and attractive view of the evaporation process. In the first place, no near-horizon
expansion of the wave functional is necessary. Secondly, the approach via Bogoliubov coefficients does not
get much beyond the semi-classical level because it is necessary to approximate the mass function in
such a way that it represents a massive black hole surrounded by tenuous dust. Hence one is effectively
looking at the semi-classical radiation from the event horizon of a static black hole. By contrast, no such
approximation to the mass function is necessary here, so we are genuinely examining the radiation from the
apparent horizon {\it during} collapse. Thirdly, the inner product used in the calculation of the Bogoliubov
coefficient is not the one that is uniquely determined by the lattice regularization (see Appendix B of
\cite{kmv06}) but one that is determined from the DeWitt supermetric. This can be justified only in
the approximation described above because, for this case alone, no measure is uniquely determined by the
regularization scheme \cite{vw09}. In all other cases, the measure is uniquely determined and different
from that provided by the DeWitt supermetric. The results we report here are independent of the inner product.
If the matter is assumed to undergo continued collapse we showed that the relative probability for the shell
wave function to cross the apparent horizon is unity, whether it is incident from the interior or the exterior.
This was argued to lead to two pictures of the evaporation process. In the first picture, one takes the point of
view of an external observer with no access to the interior. To this observer the horizon appears to possess a non-zero
reflectivity. On the other hand, the observer who has access to the entire wave function sees the outgoing
exterior portion of the shell wave function as a transmission of an outgoing interior wave across the horizon,
which exists because of the collapsing (ingoing) wave in the exterior. Thus the horizon can also be thought of as an emitter.
We showed that the Hawking temperature is independent of the energy function, {\it i.e.,} of the initial velocity
distribution of the shells, and that the shell wave functions are well behaved at the Hawking-Page transition
point during the collapse. Moreover, the relative amplitude for outgoing wave functionals is $e^{-S/2}$, where
$S$ is the Bekenstein-Hawking entropy of the final state black hole, even in the presence of the cosmological
constant. This generalizes \cite{vaz10}, for which closed form solutions were available. The results are therefore
generic to dust collapse.
\bigskip\bigskip
\noindent{\bf Acknowledgements}
\bigskip
\noindent C.V. is grateful to T.P. Singh and to the Tata Institute of Fundamental Research
for their hospitality during the time this work was completed. K.L. and C.V. acknowledge useful conversations with
T.P. Singh and the partial support of the Templeton Foundation under Project ID $\#$ 20768.
\bigskip\bigskip
Complex systems, e.g., meteorological, economical or physiological systems, generally exhibit nonequilibrium processes, one of whose manifestations is a lack of time reversibility. Time reversibility is the property of invariance with respect to time reversal, and its lack is defined as time irreversibility (TIR) \cite{Weiss1975,Kelly1979}. TIR is widely used to detect nonlinearity, a necessary condition for chaotic behaviour; therefore, a quantitative TIR measure for complex processes is of particular interest.
Statistically speaking, to quantify TIR, one could measure the probabilistic differences either between forward and backward series or between symmetric vectors \cite{Ramsey1995,Yao2020ND}; the latter approach is preferable because of its real-time advantage. However, joint probability estimation for symmetric vectors is not trivial, particularly for unknown complex systems. Model-based probability estimators are generally linear ones that adopt the assumption of a particular distribution for the observed signals \cite{Xiong2017}, which is sometimes not appropriate for complex systems. Model-free probability estimators, such as kernel probability estimators, have been widely employed in informational analysis \cite{Xiong2017,Hlav2007,Marina2008}; however, they reconstruct the time series, in the course of which the symmetric or corresponding vectors can no longer be matched for TIR quantification. Due to the limitations of traditional probability estimators, scholars have proposed some simplified methods for TIR quantification. Costa et al. \cite{Costa2005} quantified the TIR of heartbeats by measuring the difference between the average activation and relaxation energies, and they then simplified the parameter by distinguishing only the probabilistic difference between the ups and downs of the signal waveforms \cite{Costa2008}, a simplification similar to the approaches employed by Porta \cite{Porta2008}, Guzik \cite{Guzik2006} and Ehlers et al. \cite{Ehlers1998} for quantifying temporal asymmetry. Lacasa et al. \cite{Lacasa2012,Lacasa2015,Flanagan2016} quantified the TIR by measuring the divergence between the in- and out-degree distributions of either the original or the horizontal visibility graph. Additionally, symbolic approaches \cite{Daw2003}, such as approaches based on data compression \cite{Kennel2004}, false flipped symbols \cite{Daw2000}, and permutation \cite{Yao2020ND,Yao2019Ys,Yao2019E,Martin2019}, have received considerable attention for their simplicity, fast execution, noise insensitivity, etc. These simplified methods for TIR quantification play important roles in nonequilibrium analyses of complex systems.
Among these simplifications, approaches based on order patterns have been gaining popularity due to their close relation to multi-dimensional vectors and the inherent temporal structural information captured by the ordinal scheme \cite{Bandt2002,Bian2012,Yao2020}. However, in permutation-based measures, ignorance of particular issues might lead to failures in the quantification of the TIR. Due to the existence of forbidden permutations (i.e., permutations that cannot occur), some vectors or order patterns might not have corresponding symmetric forms \cite{Yao2020ND,Yao2019Ys,Yao2019E}, making the measurement of the probabilistic differences using division-based parameters (e.g., the relative entropy) unreliable. Equal values may lead to self-symmetric order patterns \cite{Yao2020ND,Yao2019E,Yao2020}; therefore, the equal states should not be broken by random perturbations or sorted according to their order of occurrence in quantitative TIR. Some of \emph{the authors of the present paper} have incorrectly employed symmetric permutations (SPs) instead of symmetric vectors in TIR quantification, and found that the SP-based methods are effective for characterizing nonlinearity in model series and enable reliable detection of nonlinearity in real-world physiological signals \cite{Yao2019Ys}. However, it is conceptually incorrect to employ SPs in TIR quantification because the SPs and the permutations of symmetric vectors (PSVs) are not always the same \cite{Yao2020ND}. In other words, the simplified approach based on SPs quantifies other characteristics rather than the TIR.
In this contribution, we conduct comparative research on the relationship between SPs and PSVs, and on this basis, we define a novel statistical parameter, the amplitude irreversibility (AIR), for characterizing nonequilibrium from the perspective of amplitude fluctuations. This joint analysis of the AIR and TIR contributes to a better understanding of quantitative nonequilibrium and enhances current knowledge regarding the significance of equal values in permutation-based TIR and AIR analyses. Furthermore, a comprehensive discussion of the strong similarity between the AIR and TIR improves the understanding of quantitative nonequilibrium from a broader perspective.
\section{Materials \& methods}
\subsection{Quantitative time irreversibility and symmetric permutations}
Statistically speaking, a process $X(t)$ is said to be time reversible if $\{X(t_{1}),X(t_{2}),\cdots,X(t_{m})\}$ and $\{X(-t_{1}),X(-t_{2}),\cdots,X(-t_{m})\}$ have the same joint probability distribution for every $t_{1},t_{2},\cdots,t_{m}$ and $m$; otherwise, it is time irreversible \cite{Weiss1975}. In addition, $\{X(t_{1}),X(t_{2}),\cdots,X(t_{m})\}$ and $\{X(-t_{1}+n),X(-t_{2}+n),\cdots,X(-t_{m}+n)\}$ will also have the same joint probability distribution for every $n$ and $m$ if $X(t)$ is time reversible; in particular, if $n$=$t_{1}+t_{m}$, $\{X(t_{1}),X(t_{2}),\cdots,X(t_{m})\}$ have the same probability distribution as the symmetric process $\{X(t_{m}),\cdots,X(t_{2}),X(t_{1})\}$, i.e., the process will exhibit temporal symmetry \cite{Kelly1979,Ramsey1995}. Therefore, the probabilistic difference between the forward and backward processes and between the symmetric joint distributions of a process are equivalent for measuring the TIR \cite{Yao2020ND,Yao2019E}.
If the forward-backward method is employed, it is necessary to obtain and then reverse the whole process, which is not trivial and may even be impossible for a system that generates uninterrupted data. Therefore, methods based on symmetric vectors are preferable due to their advantage of real-time analysis \cite{Yao2020ND}.
As mentioned in the Introduction, the calculation of the probabilistic difference between symmetric vectors is not trivial, and the raw time series are generally simplified or reconstructed to quantify the TIR. Permutation-based methods are of particular interest because the ordinal scheme \cite{Bandt2002,Bian2012} does not require any modelling assumptions and is closely related to the TIR \cite{Yao2020ND,Yao2019Ys,Yao2019E}. In the ordinal scheme, the elements in the multi-dimensional vector $X_{m}^{\tau}(i)=\{ x(i),x(i+\tau),\ldots,x(i+(m-1)\tau)\}$, where $m$ is the dimension and $\tau$ is the delay, are reorganized (e.g., in ascending order, $x(j_1)<x(j_2)<\cdots<x(j_m)$), and the indexes of the reordered elements are used to formulate the permutation $\pi_{i}=\{j_{1},j_{2}, \cdots, j_{m}\}$.
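As a minimal illustration of the ordinal map (our own sketch; the helper names below are not part of the formalism), the pattern of a vector and the set of delay vectors can be computed as follows:
\begin{verbatim}
def pattern(v):
    """1-based positions of the entries of v in ascending order,
    e.g. pattern([3, 1, 9, 5, 7]) == (2, 1, 4, 5, 3)."""
    return tuple(k + 1 for k in sorted(range(len(v)), key=lambda k: v[k]))

def windows(series, m, tau):
    """All delay vectors (x(i), x(i+tau), ..., x(i+(m-1)tau))."""
    return [list(series[i:i + m * tau:tau])
            for i in range(len(series) - (m - 1) * tau)]
\end{verbatim}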
As an alternative to raw symmetric vectors, SPs are directly employed by some of \emph{the present authors} for quantifying the TIR in some reports \cite{Yao2019Ys}, which proved to be an effective approach for detecting nonlinearity in complex processes. However, we find that the PSVs are not always equivalent to the SPs and that methods based on SPs therefore do not correctly characterize the statistical concept of TIR.
Let us consider the difference between SPs and PSVs for an example case in which $m$=5, as illustrated in Fig.~\ref{fig1}.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm,height=7cm]{1.eps}
\caption{Relationships between symmetric vectors and their permutations. The vectors in \textbf{A} and \textbf{B} and those in \textbf{C} and \textbf{D} are symmetric with respect to the Y-axis; i.e., they exhibit temporal symmetry in terms of the X-axis angle. The vectors in \textbf{A} and \textbf{C} and those in \textbf{B} and \textbf{D} are symmetric with respect to the X-axis; i.e., they exhibit amplitude symmetry in terms of the Y-axis angle.}
\label{fig1}
\end{figure}
Consider the vector \{3,1,9,5,7\} in Fig.~\ref{fig1}A; its permutation is (2,1,4,5,3), and its symmetric vector, depicted in Fig.~\ref{fig1}B, is \{7,5,9,1,3\}, whose order pattern is (4,5,2,1,3). However, (2,1,4,5,3) and (4,5,2,1,3) are not symmetric. The same is true for the vectors \{-3,-1,-9,-5,-7\} and \{-7,-5,-9,-1,-3\} shown in Fig.~\ref{fig1}C and~\ref{fig1}D; their permutations (i.e., (3,5,4,1,2) and (3,1,2,5,4)) are not symmetric either. For the example shown in Fig.~\ref{fig1}, to quantify the TIR, one should measure the probabilistic differences in the PSVs, i.e., the differences between (2,1,4,5,3) and (4,5,2,1,3) and between (3,5,4,1,2) and (3,1,2,5,4), rather than the probabilistic differences in the SPs, i.e., between (2,1,4,5,3) and (3,5,4,1,2) or between (4,5,2,1,3) and (3,1,2,5,4).
Therefore, it is incorrect to employ the probabilistic differences of SPs for quantifying TIR \cite{Yao2020ND}.
\subsection{The definition of amplitude reversibility}
Nevertheless, SPs have been reported to effectively characterize nonlinear activities in the related literature \cite{Yao2019Ys}. Therefore, we assume that there might be some associations between the PSVs, SPs and time reversibility. To determine the possible associations, let us dig further into the relationships among the vectors and permutations in Fig.~\ref{fig1}.
For the symmetric patterns (2,1,4,5,3) and (3,5,4,1,2), their corresponding vectors in Fig.~\ref{fig1}A and \ref{fig1}C are symmetric with respect to the X-axis; the same is true for the vectors corresponding to the symmetric patterns (4,5,2,1,3) and (3,1,2,5,4), as shown in Fig.~\ref{fig1}B and \ref{fig1}D. Whereas the vectors that characterize the TIR show Y-axis symmetry, i.e., time symmetry, the vectors that correspond to the SPs show X-axis symmetry, in other words, amplitude symmetry. Inspired by this finding, we define a novel statistical concept called amplitude reversibility, the quantification of which can be simplified by means of SPs.
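These relations are easy to verify mechanically; a minimal check of the Fig.~\ref{fig1} example, re-using the \texttt{pattern} function sketched earlier:
\begin{verbatim}
v = [3, 1, 9, 5, 7]
assert pattern(v) == (2, 1, 4, 5, 3)
# PSV: the pattern of the time-reversed vector is NOT the reversed pattern
assert pattern(v[::-1]) == (4, 5, 2, 1, 3) != pattern(v)[::-1]
# the amplitude-reversed vector, by contrast, yields exactly the SP
assert pattern([-x for x in v]) == (3, 5, 4, 1, 2) == pattern(v)[::-1]
\end{verbatim}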
\textbf{The definition of amplitude reversibility.} Given a process $X(t)$, we first subtract its mean $\mu$, i.e., we replace $X(t)$ by $X(t)-\mu$. If $\{X(t_{1}),X(t_{2}),\cdots,X(t_{m})\}$ and $\{-X(t_{1}),-X(t_{2}),\cdots,-X(t_{m})\}$ have the same joint probability distribution for every $t_{1},t_{2},\cdots,t_{m}$ and $m$, then $X(t)$ is amplitude reversible; otherwise, $X(t)$ is amplitude irreversible. Thus, the AIR, which targets fluctuations in the amplitude of a dynamic process, is also a parameter that characterizes complex activities.
As in the case of quantifying the TIR, in some real-world situations, it is not feasible to obtain and take the negative of the whole process of interest. Therefore, to enable real-time processing, the probabilistic difference between $\{X(t_{1}),X(t_{2}),\cdots,X(t_{m})\}$ and $\{-X(t_{1}),-X(t_{2}),\cdots,-X(t_{m})\}$ could instead be simplified to a probabilistic difference between SPs.
AIR is a mathematical concept derived from the incorrect quantification of the TIR and from the relationship between the SPs and vectors, yet it carries important physical significance, i.e., the nonequilibrium character of amplitude fluctuations.
Fluctuation theorems provide analytical expressions that describe nonequilibrium states \cite{Sevick2008}. In the Evans-Searles fluctuation theorem \cite{Evans1994,Evans2002,Searles2004}, the dissipation function $\Omega$ of observed trajectories is a dimensionless dissipated energy; for arbitrary values $A$ and $-A$ of $\Omega$, the probabilistic ratio $p(\Omega=A)/p(\Omega=-A)$ describes the asymmetry in the distribution of $\Omega$ over a particular ensemble of trajectories. In the spirit of the Evans-Searles fluctuation theorem, the vector and its amplitude-reverse form in the AIR could be identified with $A$ and $-A$, and the AIR then describes amplitude fluctuations through the probabilistic difference $p(A)/p(-A)$. Costa et al. \cite{Costa2005} assumed that transitions in a system require a specific amount of energy and quantified the nonequilibrium by the difference between activation and relaxation energies, later simplified to counting increments and decrements \cite{Costa2008}, which is a special case of the permutation AIR when $m$=2. Order patterns, as a reliable alternative to the original vectors, arise naturally from the time series and inherit the structural information of the vectors, and the permutation AIR is a reliable parameter for characterizing the asymmetry in the distribution of permutations over an ensemble of ordinal trajectories.
\subsection{Equal-value permutation and vectors}
For the permutation TIR, equal values may lead to self-symmetric order patterns with important statistical implications, i.e., time reversibility or temporal symmetry \cite{Yao2020ND,Yao2019E,Yao2020}. Hence, equal values might also be expected to have similar implications in the case of the AIR measure.
In the original ordinal scheme and some variants thereof, equal values can be eliminated by adding small random perturbations or can be arranged in the order of occurrence; however, these approaches might yield false findings or misleading conclusions if there are a large number of equal values \cite{Yao2019E,Yao2020,Yao2019P}. Bian et al. \cite{Bian2012} proposed a desirable alternative for considering the contributions of equal states. In this improved permutation scheme, equal values are first placed in adjacent positions according to their order of occurrence, i.e., $\cdots<x(j_k)=x(j_l)< \cdots <x(j_x)=x(j_y)=x(j_z)<\cdots$, and the indexes of all the members of each such group of equal values in the original permutation, i.e., $\pi_{i}=\{\cdots,j_k,j_l,\cdots,j_x,j_y,j_z,\cdots\}$, are then set to be equal to the lowest index in the group, i.e., $\pi_{i}=\{\cdots,j_k,j_k,\cdots,j_x,j_x,j_x,\cdots\}$.
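A compact sketch of this improved scheme (an illustration under our own naming; Python's sort is stable, so equal values are automatically kept in order of occurrence):
\begin{verbatim}
def ev_pattern(v):
    """Equal-value order pattern: stable ascending sort, then the indexes
    of each group of equal values are set to the lowest index in the group."""
    order = sorted(range(len(v)), key=lambda k: v[k])
    out, i = [], 0
    while i < len(v):
        j = i
        while j < len(v) and v[order[j]] == v[order[i]]:
            j += 1
        out += [order[i] + 1] * (j - i)  # lowest (first-occurring) 1-based index
        i = j
    return tuple(out)

assert ev_pattern([3, 1, 7, 1, 5]) == (2, 2, 1, 5, 3)       # Fig. 2A
assert ev_pattern([-3, -1, -7, -1, -5]) == (3, 5, 1, 2, 2)  # Fig. 2C
\end{verbatim}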
The example illustrated in Fig.~\ref{fig2} suggests that equal-value permutations play a crucial role in the formulation of the order patterns for the quantitative AIR measure.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm,height=7cm]{2.eps}
\caption{Example of the effects of equal values on symmetric vectors and their permutations. The vectors in \textbf{A} and \textbf{B} and those in \textbf{C} and \textbf{D} are temporally symmetric, and the vectors in \textbf{A} and \textbf{C} and those in \textbf{B} and \textbf{D} are amplitude symmetric. The values in red are equal, and those shown in green are the alternatives if we organize the equal values according to the order of occurrence. The original (denoted by 'OP') and equal-value (underlined and denoted by 'EP') permutations are shown below each subplot.}
\label{fig2}
\end{figure}
For the vector depicted in Fig.~\ref{fig2}A, i.e., \{3,1,7,1,5\}, if we organize the two equal values (the '1's) according to the order of occurrence, then the permutation is (2,4,1,5,3), which is also the order pattern of the vector \{3,1,7,2,5\}. Therefore, the original order pattern does not reliably reflect the relationship among the elements of the vector, an issue shared by the other three subplots. More importantly, for the vectors symmetric about the X-axis in Fig.~\ref{fig2}A and in Fig.~\ref{fig2}C, i.e., \{3,1,7,1,5\} and \{-3,-1,-7,-1,-5\}, respectively, their original permutations (2,4,1,5,3) and (3,5,1,2,4) are not symmetric. If we instead employ the equal-value ordinal scheme, the order patterns of the vectors in Fig.~\ref{fig2}A and ~\ref{fig2}C are (2,2,1,5,3) and (3,5,1,2,2), respectively, which are symmetric; similarly, the order patterns of the vectors in Fig.~\ref{fig2}B and~\ref{fig2}D (i.e., (2,2,5,1,3) and (3,1,5,2,2), respectively) are also symmetric. Therefore, equal-value permutations are crucial for the simplified quantitative AIR measure.
The equal-value ordinal scheme more reliably represents the temporal structure of a dynamic process and more accurately reflects the relationship among the elements in a vector than the original ordinal scheme. Because the only distinctive feature of the equal-value permutation is that the indexes of equal values are rewritten, the equal-value ordinal method reduces to the original one when there are no equal states.
Overall, the equal-value ordinal scheme is particularly relevant for the determination of the permutation TIR and AIR and should be the preferred choice in permutation analyses.
\subsection{Permutation TIR and AIR}
Let us further analyze the difference between the AIR and TIR, and elucidate the physical meaning of the AIR. Basically speaking, the TIR quantifies the probabilistic difference from the time reversal or temporal symmetry perspective, while the AIR measures the probabilistic divergence in amplitude fluctuations. As for the permutation TIR and AIR, the relationships between vectors and their order patterns when $m$=2 and 3 are illustrated in Fig.~\ref{fig3}.
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm,height=7cm]{3.eps}
\caption{Vectors and their permutations when $m$=2 and 3. \textbf{A} and \textbf{B} display centrally symmetric vectors when $m$=2 and $m$=3, and \textbf{C} shows two groups of time- and amplitude-reverse vectors. The pair of amplitude-reverse vectors in \textbf{D} are both temporal self-symmetric. Red elements indicate equal values in vectors.}
\label{fig3}
\end{figure}
The PSVs and the SPs are generally different; however, in some special cases, i.e., in the case of centrally symmetric vectors, they are the same. When $m$=2 in Fig.~\ref{fig3}A, the three order patterns, namely, up (1,2), down (2,1) and equality (1,1), are all centrally symmetric, as is also the case for $m$=3 in Fig.~\ref{fig3}B and for other dimensions. As $m$ increases, there are more pairs of centrally symmetric vectors. Note that the whole-equality vector, whose permutation is (1,1,1,$\cdots$), implies both time and amplitude reversibility.
The difference between the AIR and TIR lies in Fig.~\ref{fig3}C and Fig.~\ref{fig3}D, where the vectors are not centrally symmetric. We need to calculate the probabilistic differences of time-reverse vectors for the TIR and those of amplitude-reverse vectors for the AIR. The patterns (1,3,2) and (2,3,1) form a pair of SPs whose vectors are amplitude reversed, while the PSV of (1,3,2) is (3,1,2) and that of (2,3,1) is (2,1,3), and their vectors are time reversed, i.e., temporally symmetric. As for the pair of special patterns (1,1,2) and (2,1,1) in Fig.~\ref{fig3}D, their vectors are temporally self-symmetric, i.e., their time-reversed vectors are themselves, which has a particular physical implication, namely time reversibility; however, the quantitative AIR requires calculating their probabilistic difference. The different pairs of vectors targeted by the AIR and TIR constitute their crucial difference.
Theoretically speaking, the larger the vector dimension, the more permutations there are, and the larger the possible difference between the permutation TIR and AIR. In real-world applications, whether and to what extent the TIR and AIR differ is affected by the probability distributions of the permutations, which will be analyzed in the following sections.
For the permutation TIR and AIR, the existence of forbidden permutations makes subtraction-based parameters, e.g., the parameter Ys defined in Eq.~\ref{eq1}, more reliable for calculating the probabilistic differences between paired order patterns. In the expression for Ys \cite{Yao2020ND,Yao2019Ys,Yao2019E,Yao2020}, $p(\pi_{i})$ and $p(\pi_{j})$ denote the probabilities of a pair of SPs (for the AIR) or PSVs (for the TIR), and $p(\pi_{i})$ should not be smaller than $p(\pi_{j})$.
\begin{eqnarray}
\label{eq1}
Ys \langle p(\pi_{i}),p(\pi_{j}) \rangle= p(\pi_{i})\frac{p(\pi_{i})-p(\pi_{j})}{p(\pi_{i})+p(\pi_{j})}
\end{eqnarray}
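One way to aggregate Eq.~\ref{eq1} over all pattern pairs is sketched below (our own illustration, building on the \texttt{windows} and \texttt{ev\_pattern} helpers above; since order patterns are invariant under constant shifts, the mean subtraction in the AIR definition can be omitted here):
\begin{verbatim}
from collections import Counter

def pattern_dist(series, m, tau, transform=lambda w: w):
    """Relative frequencies of equal-value patterns of (transformed) vectors."""
    c = Counter(ev_pattern(transform(w)) for w in windows(series, m, tau))
    n = sum(c.values())
    return {pat: cnt / n for pat, cnt in c.items()}

def ys_sum(p, q):
    """Sum of Ys<p_i, p_j> with p_i >= p_j; counting only p > q terms
    visits each symmetric pair of patterns once."""
    return sum(a * (a - b) / (a + b)
               for pat, a in p.items()
               for b in [q.get(pat, 0.0)] if a > b)

def tir(series, m, tau):   # pair each pattern with its PSV
    return ys_sum(pattern_dist(series, m, tau),
                  pattern_dist(series, m, tau, lambda w: w[::-1]))

def air(series, m, tau):   # pair each pattern with its SP
    return ys_sum(pattern_dist(series, m, tau),
                  pattern_dist(series, m, tau, lambda w: [-x for x in w]))
\end{verbatim}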
\section{Results}
In this section, model series and their surrogate data are generated to test the permutation TIR and AIR, which are then applied to quantify the nonequilibrium characteristics of two kinds of real-world physiological signals.
\subsection{TIR and AIR in model series}
The logistic, Henon and Gaussian model series are employed to test the permutation TIR and AIR, and the results are illustrated in Fig.~\ref{fig4}. The logistic equation, $x_{t+1}=r \cdot x_{t} (1- x_{t})$, and the coupled Henon equations, $x_{t+1}=1-\alpha \cdot x^{2}_{t}+y_{t}$ and $y_{t+1}=\beta \cdot x_{t}$, were used to generate nonlinear chaotic series, and zero-mean Gaussian series were employed as linear series. We also generated 500 sets of linear surrogate data for each set of model data using the improved amplitude-adjusted Fourier transform algorithm \cite{Schreiber1996} to test the nonlinearity detection abilities of the TIR and AIR.
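The model series themselves are straightforward to generate; a sketch with the parameters of Fig.~\ref{fig4} (our own helper names; the estimators of the previous section can then be applied directly):
\begin{verbatim}
import numpy as np

def logistic_x(N, r=4.0, x0=0.01):
    xs, x = [], x0
    for _ in range(N):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def henon_x(N, a=1.4, b=0.3, x0=0.01, y0=0.01):
    xs, x, y = [], x0, y0
    for _ in range(N):
        x, y = 1.0 - a * x * x + y, b * x   # keep only the x-component
        xs.append(x)
    return xs

N = 20 * 5040                               # 20 x 7! = 100800 points
gauss = np.random.default_rng(0).standard_normal(N).tolist()
# e.g. compare tir(logistic_x(N), m=5, tau=1) with air(logistic_x(N), m=5, tau=1)
\end{verbatim}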
\begin{figure}[htb]
\centering
\includegraphics[width=14cm,height=5cm]{4.eps}
\caption{TIR and AIR measures for logistic, Henon and Gaussian series. For the chaotic logistic and Henon series, $x_{1}$=$y_{1}$=0.01, $r$=4, $\alpha$=1.4, $\beta$=0.3, and x-components with a data length of 20$\times$(7!)=100800 are applied.}
\label{fig4}
\end{figure}
As shown in Fig.~\ref{fig4}, the TIR and AIR values for the logistic and Henon series are all larger than the 97.5th percentile of the surrogate data, whereas those for the Gaussian series are between the 2.5th and 97.5th percentiles of the surrogate data, suggesting that both the TIR and AIR enable effective nonlinearity detection according to surrogate theory \cite{Yao2020ND,Yao2019Ys}.
The permutation TIR and AIR yield consistent results but different trends in the three model series. When $m$=2, the permutation TIR and AIR are the same for each model series, and when $m$ increases from 3 to 5, the TIR and AIR values show different increasing trends. For the two chaotic series, the probabilistic differences between the PSVs are larger than those between the SPs, so that the TIR is larger than the AIR. Moreover, due to the influence of forbidden permutations \cite{Yao2020ND,Yao2019Ys}, when $m$ is greater than or equal to 5, the differences between the permutation AIR and TIR decrease. When $m$=7, the permutation TIR is equal to 1, indicating that no order pattern of a corresponding symmetric vector occurs, while the AIR is nonzero, suggesting that SPs still occur, although very rarely. The different trends of the permutation TIR and AIR in the chaotic series suggest that the AIR contains information about the nonequilibrium properties of the system that differs from that of the TIR, and the extent of their difference is determined by the probability distributions of the permutations.
The permutation AIR and TIR share strong similarities as well as differences in the model series, and both are effective for the detection of nonlinearity. We should note that the TIR measure, i.e., the probabilistic differences in the time-reversible process, and the AIR measure, i.e., the probabilistic differences in the amplitude fluctuations, extract different types of characteristics of complex systems, although they yield similar results.
\subsection{TIR and AIR in physiological time series}
In previous studies, an AIR measure based on the original permutation scheme has been applied to brain signals \cite{Yao2019Ys}, and a TIR measure based on the equal-value permutation scheme has been used to characterize heartbeat signals \cite{Yao2019E,Yao2020}. In this paper, we conduct a comparative analysis of the permutation TIR and AIR for these physiological time series.
To collect electroencephalograms (EEGs), 22 epileptic patients (aged 15 to 49 yrs. old, mean 26.95$\pm$8.91 yrs. old) were recruited from Jinling Hospital (JLH); all the patients had been seizure-free for approximately 20 to 30 days. In addition, 22 healthy people (aged 4 to 51 yrs. old, mean 30.0$\pm$13.1 yrs. old) were also enrolled for EEG collection. Following the standard 10-20 EEG system, 16 scalp electrodes were placed on the subjects in their idle states; the duration of data collection was approximately 1 minute, and the sampling frequency was 512 Hz. Briefly, these epileptic data were from our previous study, and detailed information can be found in \cite{Yao2019Ys}.
For the heartbeat signals, data were obtained from the public PhysioNet database \cite{Goldb2000}. Of these heartbeat data, 44 sets were derived from electrocardiograms (ECGs) recorded from patients with congestive heart failure (CHF) (aged 22 to 79, mean 55.5$\pm$11.4 yrs. old), 20 sets were obtained from healthy elderly people (aged 68 to 85, mean 74.5$\pm$4.4 yrs. old), and 20 sets were obtained from healthy young people (aged 21 to 34, mean 25.8$\pm$4.3 yrs. old).
The AIR and TIR values for the EEGs and heartbeats are displayed in Fig.~\ref{fig5}.
\begin{figure}[htb]
\centering
\includegraphics[width=15cm,height=7cm]{5.eps}
\caption{AIR and TIR results based on equal-value permutations for two sets of EEGs and three sets of heartbeats. \textbf{A} and \textbf{B} show the TIR results for the EEGs of the patients with epilepsy and healthy people for $\tau$=1 and 2, respectively, and $m$ from 2 to 6; \textbf{C} and \textbf{D} present the corresponding AIR results. \textbf{E} and \textbf{F} show the TIR results for the heartbeats of the patients with CHF, the healthy elderly individuals and healthy young individuals when $m$=2 and 3, respectively, and $\tau$ ranging from 1 to 5; \textbf{G} and \textbf{H} present the corresponding AIR results.}
\label{fig5}
\end{figure}
Similar to the results for the model series, the permutation AIR and TIR show some differences but still enable highly similar nonequilibrium detections in both EEGs and heartbeats.
For the two groups of EEGs, the permutation AIR and TIR share consistent outcomes: the seizure-free EEGs of the patients with epilepsy show lower nonequilibrium than those of their healthy counterparts (p$<$0.005). Moreover, the equal-value AIR results for the EEGs of the patients with epilepsy from JLH are similar to and yield the same conclusion as those for the AIR measure based on the original permutation scheme \cite{Yao2019Ys}. For example, when $m$=3 and $\tau$=2, the AIR values of the EEGs of the healthy people based on the equal-value and original-order patterns are 0.0123 and 0.0122, respectively, whereas those of the EEGs of the patients with epilepsy are 0.0080 and 0.0072, respectively. The reason for this similarity lies in the fact that equal values occur very rarely in neuro-electrical signals, accounting for only 0.36\% and 0.05\% of the values in the EEGs of the patients with epilepsy and the healthy controls, respectively, and the distributions of these equal values therefore have no significant effects on the results. Nevertheless, the equal-value ordinal scheme is recommended due to its physical importance for both AIR and TIR \cite{Yao2020ND,Yao2020}.
Epilepsy is a life-threatening neurological disorder characterized by recurrent seizures. The characteristic seizures manifest as the sudden development of abnormal, excessive, and synchronous neuronal firing. Abnormally large nonlinearities in seizure EEGs have been widely reported \cite{Yao2020ND}; however, the lower AIR and TIR values observed in our work physiologically suggest long-term negative effects on brain activity, as the neural disorder causes declines in the nonequilibrium of brain electrical activity \cite{Yao2019Ys,Yao2014}.
For the three groups of heartbeat signals, complexity-loss theory can be applied to both the TIR and AIR measures based on equal-value permutation. The central postulate of complexity-loss theory \cite{Costa2005,Costa2008,Yao2019E,Yao2020,Goldb2002CL,Ivanov1999} is that healthy physiological systems exhibit a type of nonlinear complexity that degrades with age and disease along with the accompanying reduction in adaptive capability. According to this theory, healthy young heartbeats should exhibit the largest nonlinearity, while CHF heartbeats should exhibit the smallest nonlinearity, and the complexity of healthy elderly heartbeats should lie in between. The results are consistent with these expectations, and the difference between each pair of heartbeat groups is statistically significant (p$<$0.001), particularly when $\tau$$>$1 \cite{Yao2019E,Yao2020}.
The permutation AIR and TIR in the two kinds of physiological signals, although yielding highly similar results, differ in the manner in which they detect nonequilibrium characteristics, in line with the findings for the model series. The highly consistent findings regarding the TIR and AIR, i.e., temporal asymmetry and amplitude irreversibility, indicate that the joint analysis of different nonequilibrium aspects might further broaden quantitative nonequilibrium analysis, which will be discussed in the following section.
According to the test results for the model series and physiological data, the simplified AIR and TIR measures both effectively characterize the nonequilibrium property of complex systems. These two closely related parameters, although they yield highly consistent findings, differ in the manner in which they quantify nonequilibrium: the TIR measure is a traditional parameter that targets time-reversible probabilistic differences, while the AIR measure involves nonequilibrium amplitude fluctuations.
\section{Discussion}
In this paper, we analyzed the relationship between permutations and vectors and conducted a comparative analysis of the TIR and AIR measures. From our tests, we find that the similarities and differences between the permutation AIR and TIR and their physical significance in quantitative nonequilibrium require further discussion.
\begin{figure}[htb]
\centering
\includegraphics[width=15cm,height=6cm]{7.eps}
\caption{Order patterns ($m$=3, $\tau$=3) of the heartbeats of the patients with CHF, the healthy elderly individuals and the healthy young individuals. The vectors of the corresponding permutations above the figure correspond to those in Fig.~\ref{fig3}, and equal values in the vectors are indicated in red. The dashed double arrows above the plot point to pairs of PSVs for TIR quantification, and the solid double arrows below the plot point to pairs of SPs for AIR quantification. The three order patterns above the figure and the order pattern below it represented by circular arrows correspond to time reversibility and amplitude reversibility, respectively.}
\label{fig7}
\end{figure}
Let us further identify the connections between SPs and PSVs and between the AIR and TIR measures based on the distributions of the order patterns. As mentioned in Section 2.4, when $m$=2, there are only three relationships, namely, up, down and equality, and they are all centrally symmetric; i.e., the SPs and PSVs, as well as the simplified quantitative AIR and TIR measures, are the same, which has been demonstrated by the test results. When $m$=3, we illustrate the exemplary probability distributions of the order patterns ($\tau$=3) of the three groups of heartbeats in Fig.~\ref{fig7}. The relationships between vectors and permutations further confirm the differences and connections between the PSVs and SPs shown in Fig.~\ref{fig3}. The SPs and PSVs only share the pair comprising (123) and (321) as well as (111), whose vectors are centrally symmetric; however, the permutation TIR and AIR still yield highly consistent results for the heartbeats. The TIR values for the heartbeats of the patients with CHF, the healthy elderly individuals and healthy young individuals are 0.0141, 0.0230 and 0.0448, respectively, and the corresponding AIR values are 0.0164, 0.0238 and 0.0462, respectively, which are very close to the TIR values. The TIR measures the probabilistic differences of time-reversible permutations and the AIR calculates the probabilistic differences of amplitude-reversible order patterns; i.e., the two parameters target different aspects of nonequilibrium, as suggested by the differences between the paired SPs and PSVs and between the AIR and TIR. In addition, the similarity of the probabilistic differences between the SPs and between the PSVs contributes to the similarity of the AIR and TIR measures. For different dimensions and delays in the ordinal scheme, the SPs and PSVs show slight differences and high similarity; therefore, the different aspects of nonequilibrium account for the difference between the AIR and TIR measures, while the consistency between the PSVs and SPs for the model series, EEGs and heartbeats is the direct reason for the high similarity of the permutation TIR and AIR.
However, although the similarities between the SPs and PSVs are consistent with those between the AIR and TIR measures, they are not the root cause of the latter. The TIR measure quantifies the probabilistic differences from the time-reversal perspective, and the AIR measure quantifies the differences in amplitude fluctuations; both target the same characteristic, i.e., nonequilibrium, which is the fundamental reason for the highly consistent results. This similarity inspires us to focus on the nonequilibrium itself rather than on particular measures thereof, i.e., the TIR or AIR. In the Introduction, we noted that the traditional kernel estimators are not suitable for the TIR due to the reconstruction of probability distributions; however, according to our findings, we can still detect probabilistic differences under time series transformations, such as between the positive and negative parts in the Heaviside kernel function \cite{Xiong2017}. Moreover, visibility graphs, although they have been gaining popularity in the field of TIR quantification \cite{Lacasa2012,Lacasa2015,Flanagan2016,Donges2013}, also reconstruct the time series. From this perspective, the difference between the in- and out-degree distributions of the visibility graph is also not directly consistent with the statistical concept of TIR, and it in fact measures a kind of networked nonequilibrium. Following the visibility graph TIR, we can derive other kinds of local or global networked nonequilibrium by constructing weighted and directed networks and by calculating the probabilistic differences between the in- and out-degrees of nodes. These approaches all effectively characterize nonlinear dynamic processes because they detect nonequilibrium rather than TIR. Based on this fact, we should broaden our view to the measurement of nonequilibrium rather than limiting ourselves to the concept of time reversibility or amplitude reversibility.
Fluctuation theorems contribute to our understanding of how irreversibility emerges from reversible dynamics \cite{Sevick2008}, and both the TIR and AIR are effective parameters for characterizing the fluctuations of nonequilibrium processes. The differences and similarities between the permutation TIR and AIR contribute to improving the understanding and quantification of nonequilibrium characteristics from a broader perspective and to a wider exploration of the essential nonequilibrium nature of complex systems.
\section{Conclusions}
To conclude, we have corrected the conceptual error underlying the use of SPs in the quantification of the TIR, have defined a novel concept of amplitude reversibility for quantitative nonequilibrium analogous to time reversibility, and have demonstrated the significance of equal-value permutations for both the TIR and AIR. The AIR and TIR target different aspects of nonequilibrium fluctuations; however, the permutation TIR and AIR show high consistency on both model series and real-world data. Inspired by this observation, we have clarified the relationship between the AIR, the TIR and quantitative nonequilibrium.
Our findings have a more profound implication: we should not allow ourselves to be limited by the statistical definition of the TIR; instead, we can directly target the nonequilibrium of a system by considering probabilistic differences under various time series transformations, thus broadening the scope of nonequilibrium analysis.
\section{Acknowledgment}
The project is supported by the National Natural Science Foundation of China (Grant Nos. 61527815, 31771149, 81571770, 61933003), the CAMS Innovation Fund for Medical Sciences CIFMS (Grant No. 2019-I2M-5-039), Sichuan Science and Technology Program (Grant No. 2018HH0003), China Postdoctoral Science Foundation (2020M683279), and by the Slovenian Research Agency (Grant Nos. J4-9302, J1-9112, and P1-0403).
\section{Conflict of interest}
The authors declare that they have no conflict of interest.
\nocite{*}
Let $F$ be an algebraically closed field of characteristic $p\geq 0$ and $G$ be a group. In general, given irreducible $FG$-representations $V$ and $W$, the tensor product $V\!\otimes\! W$ is not irreducible. We say that $V\!\otimes\! W$ is a non-trivial irreducible tensor product if $V\!\otimes\! W$ is irreducible and neither $V$ nor $W$ has dimension 1. One motivation for this question is the Aschbacher-Scott classification of maximal subgroups of finite classical groups, see \cite{a} and \cite{as}. In particular, in view of class $\mathcal{C}_4$, a classification of non-trivial irreducible tensor products is needed to understand which subgroups appearing in class ${\mathcal S}$ are maximal, see \cite{Ma} for more details.
Non-trivial irreducible tensor products of representations of symmetric groups have been fully classified (see \cite{bk}, \cite{gj}, \cite{gk}, \cite{m1} and \cite{z1}). In particular, non-trivial irreducible tensor products for symmetric groups only exist in characteristic 2 for $n\equiv 2\Md 4$. For alternating groups in characteristic 0 or $p\geq 5$ non-trivial irreducible tensor products have been classified in \cite{bk3}, \cite{bk2}, \cite{m2} and \cite{z1}.
In this paper we will consider the case where $G=A_n$ is an alternating group and $p=2$ or $3$. Our main result, which extends \cite[Main Theorem]{bk2} and \cite[Theorem 1.1]{m2} in a slightly modified version, is the following. For an explanation of the notations used see \S\ref{sim} and the last part of \S\ref{sb}.
\begin{theor}\label{mt}
Let $V$ and $W$ be irreducible $FA_n$-modules of dimension larger than 1. If $V\otimes W$ is irreducible then one of the following holds up to exchange of $V$ and $W$:
\begin{enumerate}
\item $p\nmid n$, $V\cong E^\la_\pm$ where $\la$ is a JS-partition and $W\cong E^{(n-1,1)}$. In this case $V\otimes W$ is always irreducible and
$V\otimes W\cong E^{(\la\setminus A)\cup B}$, where $A$ is the top removable node of $\la$ and $B$ is the second bottom addable node of $\la$.
\item $p=3$, $V\cong E^{(4,1^2)}_+$ and $W\cong E^{(4,1^2)}_-$. In this case $V\otimes W\cong E^{(4,2)}$.
\item $p=2$, $V$ is basic spin and at least one of $V$ or $W$ cannot be extended to a $F\s_n$-module.
\end{enumerate}
\end{theor}
Note that in the first two cases the tensor products are irreducible. This, however, does not always hold in the third case. A classification of irreducible tensor products with a basic spin module for alternating groups in characteristic 2 is currently not known. In Section \ref{s2} we will consider case (iii) in more detail and give some conditions for such products to be irreducible.
In the next section we will give an overview of known results which will be used in the paper. In Section \ref{s1} as well as in Sections \ref{geq2n} to \ref{sJS} we study, in different ways, certain submodules of the modules $\Hom_{\s_n}(D^\la)$ and $\Hom_{A_n}(E^\la_\pm)$, using results from Sections \ref{sbr} and \ref{sph} and Section \ref{spm} respectively. These results will then be used in Sections \ref{sns} and \ref{ds} to study tensor products of a non-split and a split modules and of two split modules. Together with results on tensor products for modules of symmetric groups this will allow us to prove Theorem \ref{mt} in Section \ref{s3}. Although we cannot completely classify irreducible tensor products in characteristic 2 with a basic spin module, we will give some more restrictions for such tensor products to be irreducible in Section \ref{s2}.
\section{Notations and basic results}\label{snot}
Throughout the paper $F$ will be an algebraically closed field of characteristic $p$.
Given modules $M$ and $N_1,\ldots,N_h$ we will write
\[M\sim N_1|\ldots|N_h\]
if $M$ has a filtration with subquotients $N_j$ counted from the bottom and
\[M\sim (N_{1,1}|\ldots|N_{1,h_1})\,\,\oplus\,\,\ldots\,\,\oplus\,\, (N_{k,1}|\ldots|N_{k,h_k})\]
if there exists modules $M_i,N_{j,\ell}$ such that $M\cong M_1\oplus\ldots\oplus M_k$ and $M_j\sim N_{j,1}|\ldots|N_{j,h_j}$ for $1\leq j\leq k$. Further if modules $V_1,\ldots,V_h$ are simple, we will write
\[M\cong V_1|\ldots|V_h\]
if $M$ is uniserial with factors $V_j$ counted from the bottom and then similarly to above we will also write
\[M\cong (V_{1,1}|\ldots|V_{1,h_1})\,\,\oplus\,\,\ldots\,\,\oplus\,\, (V_{k,1}|\ldots|V_{k,h_k}).\]
For certain specific modules $V$, where $V$ is a simple or (dual of a) Specht module or direct sum of such, we will sometimes write $V\subseteq M$. When writing this we will always mean that $V$ is contained in $M$ up to isomorphism.
\subsection{Irreducible modules}\label{sim}
It is well known that irreducible representations of symmetric groups in characteristic $p$ are indexed by $p$-regular partitions and that they are self-dual. For $\la\in\Par_p(n)$ a $p$-regular partition, let $D^\la$ be the corresponding simple $F\s_n$-module. The module $D^\la$ can be defined as the head of $S^\la$, see \cite[Corollary 12.2]{JamesBook}. Further let $\la^\Mull\in\Par_p(n)$, the Mullineux dual of $\la$, be the unique partition with $D^{\la^\Mull}\cong D^\la\otimes \sgn$ (where $\sgn$ is the sign representation of $F\s_n$).
For $p\geq 3$ it is well known that if $\lambda\not=\lambda^\Mull$ then $D^\lambda\da_{A_n}=E^\lambda$ is irreducible (and in this case $E^\lambda\cong E^{\lambda^\Mull}$), while if $\lambda=\lambda^\Mull$ then $D^\lambda\da_{A_n}=E^\lambda_+\oplus E^\lambda_-$ is the direct sum of two non-isomorphic irreducible representations of $A_n$. Further all irreducible representations of $A_n$ are of one of these two forms (see for example \cite{f}). If $p=2$ there is a different description of splitting irreducible representations (see Lemma \ref{split2}). Also in this case either $D^\la\da_{A_n}$ is irreducible or it is the direct sum of two non-isomorphic irreducible representations and any irreducible representation of $A_n$ is of one of these two forms.
For any $p$ let
\[\Parinv_p(n):=\{\la\in\Par_p(n)|D^\la\da_{A_n}\mbox{ splits}\}.\]
If $p\geq 3$ we have from the previous paragraph that $\la\in\Parinv_p(n)$ if and only if $\la=\la^\Mull$. For $p=2$ we have the following result:
\begin{lemma}{\cite[Theorem 1.1]{Benson}}\label{split2}
Let $p=2$ and $\la\in\Par_2(n)$. Then $\la\in\Parinv_2(n)$ if and only if the following hold
\begin{itemize}
\item $\la_{2i-1}-\la_{2i}\leq 2$ for each $i\geq 1$ and
\item $\la_{2i-1}+\la_{2i}\not\equiv 2\Md 4$ for each $i\geq 1$.
\end{itemize}
\end{lemma}
When considering splitting modules for $p\geq 3$ we have the following result, where $h(\lambda)$ is the number of parts of $\lambda$:
\begin{lemma}\label{Mull}{\cite[Lemma 1.8]{ks2}}
Let $p\geq 3$ and $n\geq 5$. If $\lambda\in\Parinv_p(n)$ then $h(\la)\geq 3$.
\end{lemma}
If $p=2$ a special role will be played by the irreducible modules indexed by the partition $\be_n:=(\lceil(n+1)/2\rceil,\lfloor (n-1)/2\rfloor)$. Such modules (for $\s_n$) can be obtained by reducing modulo 2 a basic spin module of the covering group of $\s_n$ and are therefore also called basic spin modules (see \cite{Benson}).
It easily follows from Lemmas \ref{split2} and \ref{Mull} that for large $n$, splitting modules cannot be indexed by partitions with at most two rows, unless possibly $p=2$ and the module is a basic spin module.
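For concreteness, the conditions of Lemma \ref{split2} are easy to test by machine; the following Python sketch (an illustration with our own helper names, padding $\la$ with zero parts) checks them for the basic spin partitions $\be_n$:
\begin{verbatim}
def beta(n):
    """Basic spin partition (ceil((n+1)/2), floor((n-1)/2))."""
    return ((n + 2) // 2, (n - 1) // 2)

def splits_p2(la):
    """Conditions of Lemma `split2' for a 2-regular partition la."""
    la = list(la) + [0]                      # pad so each odd row has a partner
    return all(la[i] - la[i + 1] <= 2 and (la[i] + la[i + 1]) % 4 != 2
               for i in range(0, len(la) - 1, 2))

for n in range(5, 13):
    print(n, beta(n), splits_p2(beta(n)))   # fails exactly when n = 2 mod 4
\end{verbatim}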
\subsection{Branching}\label{sb}
Since we will often study restrictions of modules to Young subgroups, we will now give a review of the needed branching results.
Given a node $(a,b)$ define its residue by $\res(a,b)=b-a\Md p$. Given a partition $\la$ define its content to be the tuple $(c_0,\ldots,c_{p-1})$, where $c_i$ is the number of nodes of $\la$ of residue $i$, for each residue $i$. Two simple $F\s_n$-modules are in the same block if and only if the corresponding partitions have the same content. Thus we may define the content of a block and distinct blocks have distinct contents. For a residue $i$ and a module $M$ contained in the block with content $(c_0,\ldots,c_{p-1})$, let $e_iM$ (resp. $f_iM$) be the block component of $M\da_{\s_{n-1}}$ (resp. $M\ua^{\s_{n+1}}$) contained in the block with content $(c_0,\ldots,c_{i-1},c_i-1,c_{i+1},\ldots,c_{p-1})$ (resp. $(c_0,\ldots,c_{i-1},c_i+1,c_{i+1},\ldots,c_{p-1})$) if such a block exists or let $e_iM:=0$ (resp. $f_iM:=0$) otherwise. The definitions of $e_iM$ and $f_iM$ can then be extended to arbitrary modules additively. Then
\begin{lemma}\label{l45}
For $M$ an $F\s_n$-module we have
\[M\da_{\s_{n-1}}\cong e_0M\oplus\ldots\oplus e_{p-1}M\hspace{18pt}\mbox{and}\hspace{18pt}M\ua^{\s_{n+1}}\cong f_0M\oplus\ldots\oplus f_{p-1}M.\]
\end{lemma}
\begin{proof}
We may assume that $M$ has only one block component. For $M$ simple the result holds by \cite[Theorems 11.2.7, 11.2.8]{KBook}. The result then holds in general by definition of $e_i$ and $f_i$ (there are no other block components).
\end{proof}
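In practice the content of a partition is immediate to compute; a small Python sketch (with our own naming) following the definition above:
\begin{verbatim}
def content(la, p):
    """Content (c_0, ..., c_{p-1}) of la, where c_i counts the nodes
    (a, b) of the Young diagram with residue (b - a) mod p equal to i."""
    cont = [0] * p
    for a, row in enumerate(la, start=1):
        for b in range(1, row + 1):
            cont[(b - a) % p] += 1
    return tuple(cont)

# blocks are separated by their contents, e.g. content((3, 1), 2) == (2, 2)
\end{verbatim}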
The following properties of $e_i$ and $f_i$ can be seen as special cases of \cite[Lemma 8.2.2]{KBook}.
\begin{lemma}\label{l57}
If $M$ is self dual then so are $e_iM$ and $f_iM$.
\end{lemma}
\begin{lemma}\label{l48}
The functors $e_i$ and $f_i$ are left and right adjoint of each other.
\end{lemma}
For $r\geq 1$ define $e_i^{(r)}:F\s_n\md\rightarrow F\s_{n-r}\md$ and $f_i^{(r)}:F\s_n\md\rightarrow F\s_{n+r}\md$ to be the divided power functors (see \cite[\S11.2]{KBook} for the definitions). For $r=0$ define $e_i^{(0)}D^\lambda$ and $f_i^{(0)}D^\lambda$ to be equal to $D^\lambda$. For a partition $\lambda$ let $\epsilon_i(\lambda)$ be the number of normal nodes of $\lambda$ of residue $i$ and $\phi_i(\lambda)$ be the number of conormal nodes of $\lambda$ of residue $i$ (see \cite[\S11.1]{KBook} or \cite[\S2]{bk2} for definitions of normal and conormal nodes). Normal and conormal nodes of partitions will play a crucial role throughout the paper. If $\epsilon_i(\lambda)\geq 1$ we will denote by $\tilde{e}_i(\lambda)$ the partition obtained from $\lambda$ by removing the $i$-good node, that is the bottom $i$-normal node. Similarly, if $\phi_i(\lambda)\geq 1$ we denote by $\tilde{f}_i(\lambda)$ the partition obtained from $\lambda$ by adding the $i$-cogood node, that is the top $i$-conormal node. The next two lemmas will be used throughout the paper and show that the modules $e_i^rD^\lambda$ and $e_i^{(r)}D^\lambda$ (and similarly $f_i^rD^\lambda$ and $f_i^{(r)}D^\lambda$) are closely connected. For $r=0$ the lemmas hold trivially. For $r>0$ see \cite[Theorems 11.2.10, 11.2.11]{KBook}.
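The quantities $\epsilon_i(\lambda)$ and $\phi_i(\lambda)$ are purely combinatorial; the sketch below is our own illustration of one standard signature convention (reading addable and removable $i$-nodes from the bottom row up, each addable node cancelling the nearest uncancelled removable node below it; cf. \cite[\S11.1]{KBook}), and is not a substitute for the cited definitions:
\begin{verbatim}
def i_nodes(la, i, p):
    """Removable ('-') and addable ('+') i-nodes of la, bottom row to top."""
    la, out = list(la), []
    for a in range(len(la) + 1, 0, -1):
        part = la[a - 1] if a <= len(la) else 0
        above = la[a - 2] if a >= 2 else part + 1   # row 1 is always addable
        below = la[a] if a < len(la) else 0
        if part > below and (part - a) % p == i:
            out.append(('-', (a, part)))
        if part < above and (part + 1 - a) % p == i:
            out.append(('+', (a, part + 1)))
    return out

def normal_conormal(la, i, p):
    """Normal and conormal i-nodes via the reduced i-signature; the bottom
    normal node is good, the top conormal node is cogood."""
    pending, conormal = [], []
    for sign, node in i_nodes(la, i, p):
        if sign == '-':
            pending.append(node)        # candidate normal node
        elif pending:
            pending.pop()               # cancel a removable/addable pair
        else:
            conormal.append(node)
    return pending, conormal

# e.g. for la = (2,1), p = 3: eps_1 = phi_1 = 1, while the removable and
# addable 2-nodes cancel, so eps_2 = phi_2 = 0; the totals always have one
# more conormal than normal node (cf. the lemma on total counts below).
\end{verbatim}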
\begin{lemma}\label{l39}
Let $\lambda\in\Par_p(n)$, $r\geq 0$ and $i$ be a residue. Then $e_i^rD^\lambda\cong(e_i^{(r)}D^\lambda)^{\oplus r!}$. Further $e_i^{(r)}D^\lambda\not=0$ if and only if $\epsilon_i(\lambda)\geq r$. In this case
\begin{enumerate}
\item\label{l39a}
$e_i^{(r)}D^\lambda$ is a self-dual indecomposable module with head and socle isomorphic to $D^{\tilde{e}_i^r(\lambda)}$,
\item\label{l39b}
$[e_i^{(r)}D^\lambda:D^{\tilde{e}_i^r(\lambda)}]=\binom{\epsilon_i(\lambda)}{r}=\dim\End_{\s_{n-r}}(e_i^{(r)}D^\lambda)$,
\item\label{l39c}
if $D^\psi$ is a composition factor of $e_i^{(r)}D^\lambda$ then $\epsilon_i(\psi)\leq \epsilon_i(\lambda)-r$, with equality holding if and only if $\psi=\tilde{e}_i^r(\lambda)$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{l40}
Let $\lambda\in\Par_p(n)$, $r\geq 0$ and $i$ be a residue. Then $f_i^rD^\lambda\cong(f_i^{(r)}D^\lambda)^{\oplus r!}$. Further $f_i^{(r)}D^\lambda\not=0$ if and only if $\phi_i(\lambda)\geq r$. In this case
\begin{enumerate}
\item\label{l40a}
$f_i^{(r)}D^\lambda$ is a self-dual indecomposable module with head and socle isomorphic to $D^{\tilde{f}_i^r(\lambda)}$,
\item\label{l40b}
$[f_i^{(r)}D^\lambda:D^{\tilde{f}_i^r(\lambda)}]=\binom{\phi_i(\lambda)}{r}=\dim\End_{\s_{n+r}}(f_i^{(r)}D^\lambda)$,
\item\label{l40c}
if $D^\psi$ is a composition factor of $f_i^{(r)}D^\lambda$ then $\phi_i(\psi)\leq \phi_i(\lambda)-r$, with equality holding if and only if $\psi=\tilde{f}_i^r(\lambda)$.
\end{enumerate}
\end{lemma}
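As an illustration, consider again $p=3$ and $\la=(4,2)$. All addable nodes of $\la$ have residue $1$, so both removable nodes are $0$-normal and $\epsilon_0(\la)=2$, the $0$-good node being the bottom one, $(2,2)$. Thus $\tilde{e}_0(\la)=(4,1)$ and $\tilde{e}_0^2(\la)=(3,1)$. By Lemma \ref{l39}, $e_0D^{(4,2)}$ is a self-dual indecomposable module with head and socle isomorphic to $D^{(4,1)}$ and $[e_0D^{(4,2)}:D^{(4,1)}]=\binom{2}{1}=2$, while $[e_0^{(2)}D^{(4,2)}:D^{(3,1)}]=\binom{2}{2}=1$, so that in fact $e_0^{(2)}D^{(4,2)}\cong D^{(3,1)}$ by Lemma \ref{l39}(i).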
For $r=1$ it follows that $e_i=e_i^{(1)}$ and $f_i=f_i^{(1)}$. In this case more composition factors of $e_iD^\lambda$ and $f_iD^\lambda$ are known by \cite[Theorem E(iv)]{bk6} and \cite[Theorem 1.4]{k4}.
\begin{lemma}\label{l56}
Let $\lambda\in\Par_p(n)$. If $A$ is an $i$-normal node of $\lambda$ and $\lambda\setminus A$ is $p$-regular then $[e_iD^\lambda:D^{\lambda\setminus A}]$ is equal to the number of $i$-normal nodes of $\lambda$ weakly above $A$.
Similarly if $B$ is an $i$-conormal node of $\lambda$ and $\lambda\cup B$ is $p$-regular then $[f_iD^\lambda:D^{\lambda\cup B}]$ is equal to the number of $i$-conormal nodes of $\lambda$ weakly below $B$.
\end{lemma}
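In the example $p=3$, $\la=(4,2)$ above, Lemma \ref{l56} applied to the normal node $A=(1,4)$ gives $[e_0D^{(4,2)}:D^{(3,2)}]=1$, while applied to $A=(2,2)$ it recovers $[e_0D^{(4,2)}:D^{(4,1)}]=2$, since both $0$-normal nodes lie weakly above $(2,2)$.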
Since the modules $e_iD^\lambda$ (resp. $f_iD^\lambda$) for distinct residues $i$ lie in pairwise distinct blocks, the following holds by combining Lemmas \ref{l45}, \ref{l39}(ii) and \ref{l40}(ii).
\begin{lemma}\label{l53}
For $\la\in\Par_p(n)$ we have that
\begin{align*}
\dim\End_{\s_{n-1}}(D^\lambda\da_{\s_{n-1}})&=\epsilon_0(\lambda)+\ldots+\epsilon_{p-1}(\lambda),\\
\dim\End_{\s_{n+1}}(D^\lambda\ua^{\s_{n+1}})&=\phi_0(\lambda)+\ldots+\phi_{p-1}(\lambda).
\end{align*}
\end{lemma}
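For example, for $p=3$ and $\la=(4,2)$ we obtain $\dim\End_{\s_5}(D^{(4,2)}\da_{\s_5})=\epsilon_0(\la)+\epsilon_1(\la)+\epsilon_2(\la)=2+0+0=2$.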
When considering the operators $\tilde{e}_i$ and $\tilde{f}_i$ on partitions, the following holds easily by definition (alternatively see \cite[Lemma 5.2.3]{KBook} for the first part and Lemmas \ref{l39}(iii) and \ref{l40}(iii) for the second part).
\begin{lemma}\label{l47}
For $r\geq 0$ and $p$-regular partitions $\lambda,\nu$ we have that $\tilde{e}^r_i(\lambda)=\nu$ if and only if $\tilde{f}^r_i(\nu)=\lambda$. In this case $\epsilon_i(\nu)=\epsilon_i(\lambda)-r$ and $\phi_i(\nu)=\phi_i(\lambda)+r$.
\end{lemma}
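For example, for $p=3$ we have $\tilde{e}_0((4,2))=(4,1)$ and correspondingly $\tilde{f}_0((4,1))=(4,2)$, with $\epsilon_0((4,1))=\epsilon_0((4,2))-1=1$ and $\phi_0((4,1))=\phi_0((4,2))+1=1$.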
The total numbers of normal and conormal nodes of a partition are related by the following result, which holds by the corresponding result for removable and addable nodes, together with the definition of normal and conormal nodes (the set of normal and conormal nodes is obtained by recursively removing pairs consisting of a removable and an addable node from the set of removable and addable nodes).
\begin{lemma}\label{l52}
Any $p$-regular partition has 1 more conormal node than it has normal nodes.
\end{lemma}
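For example, for $p=3$ the partition $(4,2)$ has two normal nodes (the removable nodes $(1,4)$ and $(2,2)$, both of residue $0$) and three conormal nodes (the addable nodes $(1,5)$, $(2,3)$ and $(3,1)$, all of residue $1$).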
The following result connects branching and the Mullineux bijection (see \cite[Theorem
4.7]{kMull} or \cite[Lemma 5.10]{m2}).
\begin{lemma}\label{l17}
For any partition $\lambda\in\Par_p(n)$ and for any residue $i$ we have $\epsilon_i(\lambda)=\epsilon_{-i}(\lambda^\Mull)$ and $\phi_i(\lambda)=\phi_{-i}(\lambda^\Mull)$.
If $\epsilon_i(\lambda)>0$ then $\tilde{e}_i(\lambda)^\Mull=\tilde{e}_{-i}(\lambda^\Mull)$, while if $\phi_i(\lambda)>0$ then $\tilde{f}_i(\lambda)^\Mull=\tilde{f}_{-i}(\lambda^\Mull)$.
\end{lemma}
We conclude by defining JS-partitions. A JS-partition is a partition $\la\in\Par_p(n)$ for which $D^\lambda\da_{\s_{n-1}}$ is irreducible. In view of Lemmas \ref{l45} and \ref{l39} a $p$-regular partition is a JS-partition if and only if it has exactly one normal node. By Lemma \ref{l52} we then also have that JS-partitions have exactly 2 conormal nodes. JS-partitions will play a special role in this paper. They have a nice combinatorial description, see \cite[Section 4]{JS} and \cite[Theorem D]{k2}:
\begin{lemma}\label{L221119}
Let $\lambda=(a_1^{b_1},\ldots,a_h^{b_h})$ with $a_1>a_2>\ldots>a_h\geq 1$ and $1\leq b_i\leq p-1$ for $1\leq i\leq h$. Then $\lambda$ is a JS-partition if and only if $a_i-a_{i+1}+b_i+b_{i+1}\equiv 0\Md p$ for each $1\leq i<h$.
\end{lemma}
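For example, for $p=3$ the partition $(5,1)$ is a JS-partition, since $5-1+1+1=6\equiv 0\Md 3$, while $(4,2)$ is not, since $4-2+1+1=4\not\equiv 0\Md 3$ (indeed, as seen above, $(4,2)$ has two normal nodes).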
For $p=2$ this simplifies to:
\begin{lemma}\label{L151119}
Let $p=2$ and $\lambda\in\Par_2(n)$. Then $\lambda$ is a JS-partition if and only if all parts of $\la$ have the same parity.
\end{lemma}
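For example, $(5,3,1)$ and $(6,4,2)$ are JS-partitions for $p=2$. In particular, for $n$ even the partition $\be_n=(n/2+1,n/2-1)$ is a JS-partition, while for $n$ odd $\be_n=((n+1)/2,(n-1)/2)$ is not.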
\subsection{Permutation modules}
For any composition $\la$ of $n$ let $\s_\la=\s_{\la_1}\times\s_{\la_2}\times\ldots\subseteq\s_n$ be the corresponding Young subgroup and define $M^\la:=\1\ua_{\s_\la}^{\s_n}$. Clearly the modules $M^\la$ are self-dual, as is any permutation module. Note that if $\la$ and $\mu$ can be obtained from each other by rearranging their parts, then $M^\la\cong M^\mu$. So from now on we will assume that $\la\in\Par(n)$ is a partition. In this case let $S^\la$ be the Specht module indexed by $\la$. It is well known that $S^\la\subseteq M^\la$ (this holds for example by comparing standard bases of $M^\la$ and $S^\la$). Further let $Y^\la$ be the corresponding Young module, that is the module given by the following lemma (see \cite{JamesArcata} and \cite[\S4.6]{Martin}). In the lemma $\rhd$ denotes the dominance order.
\begin{lemma}\label{LYoung}
There exist indecomposable $F \s_n$-modules $\{Y^\lambda\mid \lambda\in\Par(n)\}$ such that $M^\lambda\cong Y^\lambda\,\oplus\, \bigoplus_{\mu\rhd\lambda}(Y^\mu)^{\oplus m_{\mu,\lambda}}$ for some $m_{\mu,\lambda}\geq 0$. Moreover, $Y^\lambda$ can be characterized as the unique direct summand of $M^\lambda$ containing $S^\lambda$. Further $Y^\lambda$ is self-dual for any $\lambda\in\Par(n)$.
\end{lemma}
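For example, if $p=2$ and $n\geq 7$ is odd then $M^{(n-1,1)}\cong D^{(n)}\oplus D^{(n-1,1)}$ and $S^{(n-1,1)}\cong D^{(n-1,1)}$ (see Lemma \ref{L12o} below), so that $Y^{(n-1,1)}\cong D^{(n-1,1)}$ and $m_{(n),(n-1,1)}=1$, while if $n\geq 6$ is even then $M^{(n-1,1)}\cong D^{(n)}|D^{(n-1,1)}|D^{(n)}$ is indecomposable (see Lemma \ref{L12e} below), so that $Y^{(n-1,1)}=M^{(n-1,1)}$.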
The above lemma will be used in Section \ref{spm} to study the structure of certain small permutation modules. The structure of such permutation modules, together with the next lemma, which holds by Frobenius reciprocity, will then be used in Sections \ref{geq2n} to \ref{sJS} to study the submodule structure of $\End_F(V)$, for $V$ a simple $F\s_n$- or $F A_n$-module. For any partition $\al\in\Par(n)$ let $A_\al:=A_n\cap\s_\al$. It is easy to check, by the Mackey induction-restriction theorem, that if $\al\not=(1^n)$ then $M^\al\da_{A_n}\cong\1\ua_{A_\al}^{A_n}$.
\begin{lemma}\label{l2}
For any $F\s_n$-module $V$ and any $\alpha\in\Par(n)$ we have that
\[\dim\Hom_{\s_n}(M^\alpha,\End_F(V))=\dim\End_{\s_\alpha}(V\da_{\s_\alpha}).\]
Similarly for any $F A_n$-module $W$ and $\al\not=(1^n)$ we have that
\[\dim\Hom_{A_n}(M^\alpha\da_{A_n},\End_F(W))=\dim\End_{A_\alpha}(W\da_{A_\alpha}).\]
\end{lemma}
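For example, taking $\alpha=(n-1,1)$ and $V=D^\la$, the first formula together with Lemma \ref{l53} gives
\[\dim\Hom_{\s_n}(M^{(n-1,1)},\End_F(D^\la))=\epsilon_0(\la)+\ldots+\epsilon_{p-1}(\la).\]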
The following lemma will play a crucial role in Sections \ref{sns} and \ref{ds} to prove that in most cases $V\otimes W$ is not simple, see \cite[Lemma 5.3]{m2} for a proof (for $p=2$ and $H=A_n$ the proof is similar).
\begin{lemma}\label{l15}
Let $H=\s_n$ or $H=A_n$ and let $V$ and $W$ be $FH$-modules. For $\alpha\in\Par(n)$
let $m_{V^*,\alpha},m_{W,\alpha}\in\Z_{\geq 0}$ be such that there exist $\phi^\alpha_1,\ldots,\phi^\alpha_{m_{V^*,\alpha}}\in\Hom_H(M^\alpha,V^*)$ with $\phi^\alpha_1|_{S^\alpha},\ldots,\phi^\alpha_{m_{V^*,\alpha}}|_{S^\alpha}$ linearly independent and similarly there exist $\psi^\alpha_1,\ldots,\psi^\alpha_{m_{W,\alpha}}\in\Hom_H(M^\alpha,W)$ with $\psi^\alpha_1|_{S^\alpha},\ldots,\psi^\alpha_{m_{W,\alpha}}|_{S^\alpha}$ linearly independent. If $H=\s_n$ let $A:=\Par_p(n)$. If $H=A_n$ and $p=2$ let $A:=\Par_2(n)\setminus\Parinv_2(n)$. If $H=A_n$ and $p\geq 3$ let $A$ be the set of partitions $\al\in\Par_p(n)\setminus\Parinv_p(n)$ with $\al>\al^\Mull$. Then
\[\dim\Hom_H(V,W)\geq\sum_{\alpha\in A}m_{V^*,\alpha}m_{W,\alpha}.\]
\end{lemma}
Since we will often work with permutation modules $M^\la$ with $\la=(n-m,\mu)=(n-m,\mu_1,\mu_2,\ldots)$ for certain fixed partitions $\mu\in\Par(m)$ with $m$ small, we will write $M_\mu$, $S_\mu$ and $Y_\mu$ for $M^{(n-m,\mu)}$, $S^{(n-m,\mu)}$ and $Y^{(n-m,\mu)}$ respectively, provided $(n-m,\mu)\in\Par(n)$. Similarly, if $(n-m,\mu)\in\Par_p(n)$ is $p$-regular, we will write $D_\mu$ for the simple module $D^{(n-m,\mu)}$.
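For example, $M_1=M^{(n-1,1)}$ is the natural $n$-dimensional permutation module, $M_{1^2}=M^{(n-2,1,1)}$ and $M_{2^2}=M^{(n-4,2,2)}$, while $D_1=D^{(n-1,1)}$ and $D_0=D^{(n)}\cong\1$.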
\section{Branching recognition}\label{sbr}
In this section we will show that under certain assumptions on $n$, if $\la\in\Par_p(n)$ is of certain special forms, then (some) restrictions $D^\la\da_{\s_{n-m}}$ have composition factors indexed by partitions with similar forms as $\la$. These results will be used in Section \ref{s1} to show existence of homomorphisms $M_\mu\to\End_F(D)$ which do not vanish on $S_\mu$ (for certain small partitions $\mu$), where $D$ is a simple $F\s_n$- or $F A_n$-module.
\begin{lemma}\label{L5}
Let $p=2$ and $\la\in\Par_2(n)$. If $n>h(\la)(h(\la)+1)/2$ there exists a composition factor $D^\mu$ of $D^\la\da_{\s_{n-1}}$ with $h(\mu)=h(\la)$. In particular $D^{(h(\la),h(\la)-1,\ldots,1)}$ is a composition factor of $D^\la\da_{\s_{h(\la)(h(\la)+1)/2}}$.
\end{lemma}
\begin{proof}
Note that for any $\la\in\Par_2(n)$ we have that $n\geq h(\la)(h(\la)+1)/2$, with equality holding if and only if $\la=(h(\la),h(\la)-1,\ldots,1)$. So the second part of the lemma follows from the first. Assume now that $n>h(\la)(h(\la)+1)/2$. Let $1\leq k\leq h(\la)$ be minimal such that $\la_k\geq\la_{k+1}+2$ (such a $k$ exists since $n>h(\la)(h(\la)+1)/2$). Then $(k,\la_k)$ is normal and $\mu=\la\setminus(k,\la_k)\in\Par_2(n-1)$ with $h(\mu)=h(\la)$. The lemma then follows from Lemma \ref{l56}.
\end{proof}
\begin{lemma}\label{l34}
Let $p=3$, $n\geq 9$ and $\la\in\Par_3(n)$. If $h(\la),h(\la^\Mull)\geq 4$ then $D^\la\da_{\s_{n-1}}$ has a composition factor $D^\mu$ with $h(\mu),h(\mu^\Mull)\geq 4$.
\end{lemma}
\begin{proof}
Assume first that $h(\la),h(\la^\Mull)\geq 5$ and let $A$ be a good node of $\la$. Then $(\la\setminus A)^\Mull=\la^\Mull\setminus B$ for a good node $B$ of $\la^\Mull$ (see Lemma \ref{l17}). Then $D^{\la\setminus A}$ is a composition factor of $D^\la\da_{\s_{n-1}}$ by Lemma \ref{l39} and $h(\la\setminus A),h((\la\setminus A)^\Mull)\geq 4$. So, up to exchange of $\la$ and $\la^\Mull$, we may assume that $h(\la^\Mull)\geq h(\la)=4$. For any partition $\alpha$ let $G_1(\alpha)$ be the first column of the Mullineux symbol of $\alpha$ (see \cite[Section 1]{FK} for the definition of the Mullineux symbol of $\la$ and how to obtain the Mullineux symbol of $\la^\Mull$ from that of $\la$). If $\la$ has a normal node $C$ such that $\la\setminus C$ is 3-regular and $G_1(\la)=G_1(\la\setminus C)$ then the lemma holds, by Lemma \ref{l56} and the combinatorial definition of $\la^\Mull$.
{\sf Case 1.} $\la_1=\la_2$. Then $\la_2>\la_3$ and we can take $C=(2,\la_2)$.
{\sf Case 2.} $\la_1=\la_2+1=\la_3+1=\la_4+2$. In this case $\la^\Mull=(2\la_1-1,2\la_1-3)$ by \cite[Lemma 2.3]{bkz}, contradicting the assumptions.
{\sf Case 3.} $\la_1=\la_2+1=\la_3+1=\la_4+3$. If $\la_1=4$ then $\la=(4,3,3,1)$ and $D^{(5,2,2,1)}$ is a composition factor of $D^{(4,3,3,1)}\da_{\s_{10}}$ by \cite[Tables]{JamesBook}. Since $h((5,2,2,1))=h((5,2,2,1)^\Mull)=4$, we may assume that $\la_1\geq 5$. In this case $D^{(\la_1,\la_2,\la_3,\la_4-1)}$ is a composition factor of $D^\la\da_{\s_{n-1}}$ and $h((\la_1,\la_2,\la_3,\la_4-1)),h((\la_1,\la_2,\la_3,\la_4-1)^\Mull)\geq4$.
{\sf Case 4.} $\la_1=\la_2+1=\la_3+1>\la_4+3$. In this case we can take $C=(3,\la_3)$.
{\sf Case 5.} $\la_1=\la_2+1>\la_3+1$. In this case we can take $C=(1,\la_1)$.
{\sf Case 6.} $\la_1=\la_2+2=\la_3+2$. Then $\la_3>\la_4$ and we can take $C=(3,\la_3)$.
{\sf Case 7.} $\la_1=\la_2+2=\la_3+3=\la_4+3$. If $\la_1=4$ then $n=8$, so we may assume that $\la_1\geq 5$. In this case $D^{(\la_1,\la_2,\la_3,\la_4-1)}$ is a composition factor of $D^\la\da_{\s_{n-1}}$ and $h((\la_1,\la_2,\la_3,\la_4-1)),h((\la_1,\la_2,\la_3,\la_4-1)^\Mull)\geq4$.
{\sf Case 8.} $\la_1=\la_2+2=\la_3+3\geq\la_4+4$. In this case we can take $C=(2,\la_2)$.
{\sf Case 9.} $\la_1=\la_2+2=\la_3+4=\la_4+4$. If $\la_1\geq 6$ we can take $C=(4,\la_4)$. If $\la_1=5$ then $\la=(5,3,1,1)$, $D^{(5,2,1,1)}$ is a composition factor of $D^{(5,3,1,1)}\da_{\s_9}$ and $h((5,2,1,1)),h((5,2,1,1)^\Mull)=4$.
{\sf Case 10.} $\la_1=\la_2+2=\la_3+4>\la_4+4$. In this case $D^{(\la_1,\la_2,\la_3-1,\la_4)}$ is a composition factor of $D^\la\da_{\s_{n-1}}$ and $h((\la_1,\la_2,\la_3-1,\la_4)),h((\la_1,\la_2,\la_3-1,\la_4)^\Mull)\geq4$.
{\sf Case 11.} $\la_1=\la_2+2\geq \la_3+5$. In this case we can take $C=(2,\la_2)$.
{\sf Case 12.} $\la_1\geq\la_2+3$. In this case we can take $C=(1,\la_1)$.
\end{proof}
\begin{lemma}\label{l13}
Let $p=3$, $n\geq 7$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. Then $D^\la\da_{\s_{n-1}}$ has a composition factor $D^\mu$ with $\mu=(n-1-\ell,\ell)$, where $n-1-2\ell\geq 2$ and $\ell\geq 2$. In particular $D^{(4,2)}$ is a composition factor of $D^\la\da_{\s_6}$.
\end{lemma}
\begin{proof}
If $n-2k\geq 3$ then we can take $\mu=(n-k-1,k)$ by Lemma \ref{l56}. If $n-2k=2$ then $k\geq 3$ since $n\geq 7$ and, again by Lemma \ref{l56}, we can take $\mu=(n-k,k-1)$. The result for $D^\la\da_{\s_6}$ follows by induction.
\end{proof}
\begin{lemma}\label{l10}
Let $p=3$, $n>6$ and $\la\in\Parinv_3(n)$ be a JS-partition with $h(\la)=3$. Then $n\equiv 0\Md 6$ and $D^\la\da_{\s_{n-6}}$ has a composition factor $D^\mu$ with $\mu\in\Parinv_3(n-6)$ a JS-partition with $h(\mu)=3$. Further $D^{(5,1^2)}$ is a composition factor of $D^\la\da_{\s_7}$.
\end{lemma}
\begin{proof}
We have that $\la\in\Parinv_3(m)$ if and only if $\la\in\Par_3(m)$ is Mullineux-fixed. From \cite[Theorem 4.1]{bo} we have that Mullineux-fixed partitions with 3 parts are exactly the partitions with Mullineux symbols
\[\left(\begin{array}{ccccc}6&\ldots&6&5&1\\3&\ldots&3&3&1\end{array}\right).\]
So $\la=\la^\Mull$ and $h(\la)=3$ if and only if $n\equiv 0\Md 6$ and $\la=(n/2+1,(n/2-1)^\Mull)$. Assume that this is the case. From Lemma \ref{l56} it follows that $D^{(n/2,(n/2-1)^\Mull)}$, $D^{(n/2,(n/2-2)^\Mull)}$, $D^{(n/2,(n/2-3)^\Mull)}$, $D^{(n/2-1,(n/2-3)^\Mull)}$, $D^{(n/2-1,(n/2-4)^\Mull)}$ and $D^{(n/2-2,(n/2-4)^\Mull)}$ are composition factors of $D^\la\da_{\s_{n-k}}$ for $k=1,\ldots,6$ respectively. We can then take $\mu=(n/2-2,(n/2-4)^\Mull)=((n-6)/2+1,((n-6)/2-1)^\Mull)$. For the last claim note that by induction $D^{(7,3,2)}$ is a composition factor of $D^\la\da_{\s_{12}}$ and by the first part $D^{(7,3,2)}\da_{\s_7}$ has a composition factor $D^{(5,1^2)}$.
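For example, for $n=12$ the above description gives $\la=(7,(5)^\Mull)=(7,3,2)$, since $(5)^\Mull=(3,2)$.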
\end{proof}
\section{Special homomorphisms}\label{sph}
In this section
we will give conditions under which there exist homomorphisms $M_\mu\to\End_F(D)$ which do not vanish on $S_\mu$. Such conditions will then be checked to hold in some cases in the next section.
\begin{lemma}\label{l18}
Let $n\geq 6$ and $V$ be an $FA_n$-module. For pairwise distinct $a,b,c$ define $[a,b,c]:=(a,b,c)+(a,c,b)$. If
\[x_3:=[1,2,3]+[1,5,6]+[2,4,6]+[3,4,5]-[1,2,6]-[1,3,5]-[2,3,4]-[4,5,6]\]
and $x_3V\not=0$ then there exists $\psi\in\Hom_{A_n}(M_3\da_{A_n},\End_F(V))$ which does not vanish on $S_3\da_{A_n}$.
\end{lemma}
\begin{proof}
Let $\{v_{\{x,y,z\}}\,|\,x,y,z\mbox{ distinct elements of }\{1,\ldots,n\}\}$ be the standard basis of $M_3$. Define $\psi\in\Hom_{A_n}(M_3\da_{A_n},\End_F(V))$ through
\[\psi(v_{\{x,y,z\}})(w)=(x,y,z)w+(x,z,y)w\]
for each $w\in V$ (it can be easily checked that $\psi$ is a homomorphism). Let
\[e:=v_{\{1,2,3\}}+v_{\{1,5,6\}}+v_{\{4,2,6\}}+v_{\{4,5,3\}}-v_{\{1,2,6\}}-v_{\{1,5,3\}}-v_{\{4,2,3\}}-v_{\{4,5,6\}}.\]
Then $e$ generates $S_3$ (see \cite[Section 8]{JamesBook}), since it corresponds to the tableau
\[\begin{array}{ccccc}
4&5&6&\cdots&n.\\
1&2&3
\end{array}\]
Notice that $\psi(e)(w)=x_3 w$.
Similarly to \cite[Lemma 6.1]{kmt}, $\psi$ vanishes on $S_3\da_{A_n}$ if and only if $x_3V=0$, so the lemma follows.
\end{proof}
\begin{lemma}\label{l16}{\cite[Lemma 6.1]{kmt}}
Let $n\geq 8$ and $V$ be an $F\s_n$-module. For pairwise distinct $a,b,c,d$ define $[a,b,c,d]$ to be the sum of all elements of $\s_{\{a,b,c,d\}}$ which do not fix any element. If
\begin{align*}
x_4&=[1,2,3,4]+[5,6,3,4]+[5,2,7,4]+[5,2,3,8]+[1,6,7,4]+[1,6,3,8]\\
&\hspace{12pt}+[1,2,7,8]+[5,6,7,8]-[5,2,3,4]-[1,6,3,4]-[1,2,7,4]-[1,2,3,8]\\
&\hspace{12pt}-[5,6,7,4]-[5,6,3,8]-[5,2,7,8]-[1,6,7,8]
\end{align*}
and $x_4V\not=0$ then there exists $\psi\in\Hom_{\s_n}(M_4,\End_F(V))$ which does not vanish on $S_4$.
\end{lemma}
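Note that each $[a,b,c,d]$ is a sum of $9$ permutations: the six $4$-cycles and the three products of two disjoint transpositions in $\s_{\{a,b,c,d\}}$.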
\begin{lemma}{\cite[Lemma 6.1]{m2}}\label{l12}
Let $p\geq 3$, $n\geq 6$ and $V$ be an $F\s_n$-module. If
\begin{align*}
x_{2^2}&=(2,5)(3,6)-(3,5)(2,6)-(1,5)(3,6)+(1,6)(3,5)-(2,5)(1,6)\\
&\hspace{12pt}+(1,5)(2,6)-(2,4)(3,6)+(3,4)(2,6)+(1,4)(3,6)-(1,6)(3,4)\\
&\hspace{12pt}+(2,4)(1,6)-(1,4)(2,6)-(2,5)(3,4)+(3,5)(2,4)+(1,5)(3,4)\\
&\hspace{12pt}-(1,4)(3,5)+(2,5)(1,4)-(1,5)(2,4)
\end{align*}
and $x_{2^2}V\not=0$ then there exists $\psi\in\Hom_{\s_n}(M_{2^2},\End_F(V))$ which does not vanish on $S_{2^2}$.
\end{lemma}
\section{Homomorphism rings}\label{s1}
With the help of the two previous sections we will now show that in many cases there exist homomorphisms $M_\mu\to\End_F(D)$ which do not vanish on $S_\mu$. Existence of such homomorphisms will then be used to prove that often $V\otimes W$ is not irreducible. In the next lemma remember that $\be_n=(\lceil (n+1)/2\rceil,\lfloor(n-1)/2\rfloor)$ is the partition labeling the basic spin modules in characteristic 2.
\begin{lemma}{\cite[Corollary 6.4]{kmt}}\label{L2}
Let $p=2$ and $n\geq 5$. If $\la\in\Par_2(n)$ with $\la\not=(n),\be_n$, then there exists $\psi\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ which does not vanish on $S_2$.
\end{lemma}
\begin{lemma}{\cite[Corollary 6.10]{kmt}}\label{L3}
Let $p=2$ and $n\geq 6$. If $\la\in\Par_2(n)$ with $h(\la)\geq 3$, then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$.
\end{lemma}
\begin{lemma}\label{L1}
Let $p=2$ and $n\geq 7$. If $\la\in\Par_2(n)$ with $h(\la)\geq 3$ and $\la\in\Parinv_2(n)$, then there exists $\psi\in\Hom_{A_n}(M_3\da_{A_n},\End_F(E^\la_\pm))$ which does not vanish on $S_3\da_{A_n}$.
\end{lemma}
\begin{proof}
From \cite[Lemma 3.17]{kmt} and Lemma \ref{split2} we have that $E^{(4,2,1)}$ is a composition factor of $E^\la_\pm\da_{A_7}$. From Lemma \ref{l18} it is enough to prove that $x_3 E^\la_\pm\not=0$ (where $x_3$ is as in \cite[\S6.1]{kmt} or Lemma \ref{l18}), which follows from $x_3E^{(4,2,1)}\cong x_3 D^{(4,2,1)}\not=0$ by \cite[Lemma 6.9]{kmt}.
\end{proof}
\begin{lemma}\label{L9}
Let $p=2$ and $n\geq 10$. If $\la\in\Par_2(n)$ with $h(\la)\geq 4$, then there exists $\psi\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{lemma}
\begin{proof}
By Lemma \ref{L5}, $D^{(4,3,2,1)}$ is a composition factor of $D^\la\da_{\s_{10}}$. Since $(4,3,2,1)$ is a 2-core we have that $D^{(4,3,2,1)}\cong S^{(4,3,2,1)}$. From Lemma \ref{l16} it is enough to prove that $x_4D^\la\not=0$, where $x_4$ is as in Lemma \ref{l16}.
In particular it is enough to prove that $x_4S^{(4,3,2,1)}\not=0$. If $v_t$ and $e_t$ are the standard basis elements of $M^{(4,3,2,1)}$ and $S^{(4,3,2,1)}$ respectively (see \cite[Section 8]{JamesBook}), it can be easily checked that the coefficient of $v_y$ in $x_4e_s$ is non-zero, where
\[s=\begin{array}{cccc}
1&2&3&4\\
5&6&7\\
8&9\\
10
\end{array}\quad\mbox{and}\quad y=\begin{array}{cccc}
1&2&7&9\\
4&6&8\\
3&10\\
5
\end{array}
\]
and so the lemma follows.
\end{proof}
\begin{lemma}{\cite[Lemma 3.8]{ks}}\label{psi2}
Let $p=3$ and $n\geq 4$. If $\la\in\Par_3(n)$ with $\la\not=(n),(n)^\Mull$, then there exists $\psi\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ which does not vanish on $S_2$.
\end{lemma}
\begin{lemma}{\cite[Corollary 6.7]{kmt}}\label{psi3}
Let $p=3$ and $n\geq 6$. If $\la\in\Par_3(n)$ with $h(\la),h(\la^\Mull)\geq 3$, then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$.
\end{lemma}
\begin{lemma}\label{l33}
Let $p=3$ and $n\geq 8$. If $\la\in\Par_3(n)$ with $h(\la),h(\la^\Mull)\geq 4$, then there exists $\psi\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{lemma}
\begin{proof}
By Lemma \ref{l16} in order to prove the lemma it is enough to prove that $x_4D^\la\not=0$ (where $x_4$ is as in Lemma \ref{l16}). Using Lemma \ref{l34} it is enough to prove the lemma when $n=8$. So we may assume that $\la=(4,2,1,1)$. Since $(4,2,1,1)$ is a 3-core, $D^{(4,2,1,1)}\cong S^{(4,2,1,1)}$. Let
\[\{v_{\{i,j\},k,l}|i,j,k,l\mbox{ distinct elements of }\{1,\ldots,8\}\}\]
be the standard basis of $M^{(4,2,1,1)}$. Let $e$ be the basis element of $S^{(4,2,1,1)}$ corresponding to the tableau
\[\begin{array}{cccc}
1&5&7&8\\
2&6&&\\
3&&&\\
4&&&
\end{array}\]
(see \cite[Section 8]{JamesBook} for the definition of $e$). Then it can be checked that the coefficient of $v_{\{2,3\},1,8}$ in $x_4 e$ is non-zero and so the lemma holds.
\end{proof}
\begin{lemma}\label{l9}
Let $p=3$, $n\geq 6$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. Then there exists $\psi\in\Hom_{\s_n}(M_{2^2},\End_F(D^\la))$ which does not vanish on $S_{2^2}$.
\end{lemma}
\begin{proof}
Similar to the previous lemmas, from Lemmas \ref{l13} and \ref{l12} it is enough to prove that $x_{2^2}D^{(4,2)}\not=0$ (with $x_{2^2}$ as in Lemma \ref{l12}). Notice that $D^{(4,2)}\cong S^{(4,2)}$. Let $\{v_{\{i,j\}}:1\leq i<j\leq 6\}$ be the standard basis of $M^{(4,2)}$ and $e$ be the basis element of $S^{(4,2)}$ corresponding to the tableau
\[\begin{array}{cccc}
1&3&5&6\\
2&4&&
\end{array}\]
(see \cite[Section 8]{JamesBook} for the definition of $e$). It can be computed that the coefficient of $v_{\{1,5\}}$ in $x_{2^2}e$ is non-zero, thus proving the lemma.
\end{proof}
\begin{lemma}\label{l30}
Let $p=3$ and $n\geq 9$. If $\la\in\Parinv_3(n)$ then there exists $\psi\in\Hom_{A_n}(M_3\da_{A_n},\End_F(E^\la_\pm))$ which does not vanish on $S_3\da_{A_n}$.
\end{lemma}
\begin{proof}
From Lemma \ref{Mull} we have that $h(\la)\geq 3$. Note that there are no Mullineux-fixed partitions for $p=3$ and $n=9$.
In view of \cite[Lemma 3.16]{kmt} there exists $\mu\in\Par_3(9)$ with $h(\mu),h(\mu^\Mull)\geq 3$ and $E^\mu$ a composition factor of $E^\la_\pm\da_{A_9}$. By \cite[Lemma 6.6]{kmt} we have that $x_3E^\mu\cong x_3D^\mu\not=0$. In particular $x_3E^\la_\pm\not=0$ and so the lemma holds by Lemma \ref{l18}.
\end{proof}
\begin{lemma}\label{l8}
Let $p=3$, $n> 6$ and $\la\in\Parinv_3(n)$ be a JS-partition with $h(\la)=3$. Then there exists $\psi\in\Hom_{\s_n}(M_{2^2},\End_F(D^\la))$ which does not vanish on $S_{2^2}$.
\end{lemma}
\begin{proof}
From Lemmas \ref{l10} and \ref{l12} it is enough to prove that $x_{2^2}D^{(5,1^2)}\not=0$ (with $x_{2^2}$ as in Lemma \ref{l12}). Notice that $D^{(5,1^2)}\cong S^{(5,1^2)}$ (see \cite[Tables]{JamesBook}). Let $\{v_{i,j}:i\not=j\in\{1,\ldots,7\}\}$ be the standard basis of $M^{(5,1^2)}$ and $\{e_{i,j}:2\leq i<j\leq 7\}$ be the standard basis of $S^{(5,1^2)}$ (see \cite[Section 8]{JamesBook}). It can be checked that the coefficient of $v_{2,5}$ in $x_{2^2}e_{2,4}$ is non-zero and so the lemma follows.
\end{proof}
\section{Permutation modules}\label{spm}
In this section we consider the structure of certain permutation modules $M^\alpha$. The structure of many of the modules considered here has already been studied in other papers; in some cases dual filtrations to those presented here were found. Note that if $M\sim N_1|\ldots|N_h$ then $M^*\sim N_h^*|\ldots|N_1^*$. As noted in Section \ref{snot}, the modules $M^\la$, $Y^\la$ and $D^\la$ are self-dual. Remember that $M_\mu:=M^{(n-m,\mu)}$ and similarly for $S_\mu$, $D_\mu$ and $Y_\mu$ if $\mu\in\Par(m)$.
\begin{lemma}\label{L12e}{\cite[Lemmas 4.7 and 4.9]{kmt}}
Let $p=2$. If $n\geq 6$ is even then $M_1\cong D_0|D_1|D_0\sim S_1|D_0$ and $M_2\sim S_2|(D_0\oplus S_1)$. Further if $n\equiv 0\Md 4$ then
\[M_3\cong M_1\oplus(\overbrace{D_2|D_1|D_3}^{S_3}|\overbrace{D_1|D_2}^{S_2}).\]
\end{lemma}
\begin{lemma}\label{L12o}
Let $p=2$. If $n\geq 7$ is odd then
\[M_1\cong D_0\oplus D_1,\quad M_2\sim S_2|M_1,\quad M_3\sim S_3|M_2.\]
\end{lemma}
\begin{proof}
This follows from \cite[Lemma 4.6]{kmt}, since $\hd(S_k)\cong D_k$ for $0\leq k<n/2$ (in particular in these cases $S_k$ is indecomposable) and $S_k\subseteq M_k$.
\end{proof}
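As a check on dimensions, note that $\dim M_1=n=1+(n-1)=\dim D_0+\dim D_1$, since for $p=2$ and $n$ odd $S_1\cong D_1$ has dimension $n-1$.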
\begin{lemma} \label{L160817_2}
Let $p=3$, $n\geq 8$ with $n\equiv 2\pmod{3}$. Then
$$M_1\cong D_0\oplus D_1,\quad M_2\sim S_2|M_1,\quad M_3\sim S_3|M_2.$$
\end{lemma}
\begin{proof}
This holds by \cite[Lemma 4.5]{kmt}, since $\hd(S_k)\cong D_k$ for $0\leq k\leq n/2$ and $S_k\subseteq M_k$.
\end{proof}
\begin{lemma} \label{L160817_0}
Let $p=3$, $n\equiv 0\pmod{3}$ with $n\geq 9$. Then
\begin{align*}
&M_1\cong \overbrace{D_0|D_1}^{S_1}|D_0,&&M_2\cong D_2\oplus M_1,\\
&M_3\sim D_2\oplus (S_3|(D_0\oplus S_1)),&&M_4\sim S_4|S_1|A,\\
&M_{1^2}\sim M_2\oplus (S_{1^2}|S_1),
\end{align*}
for a module $A\subseteq M_3$ with $M_3/A\cong S_1$.
\end{lemma}
\begin{proof}
For the structure of $M_1$, $M_2$ and $M_3$ see \cite[Lemma 4.3]{kmt}, together with $S_k\subseteq M_k$ and $\hd(S_k)\cong D_k$ for $k\leq n/2$. It then also follows that $S_2\cong D_2$.
For the structure of $M_{1^2}$ note that by Lemma \ref{L160817_2} and \cite[Corollary 17.14]{JamesBook}
\[M_{1^2}\cong M_1\oplus (D^{(n-2,1)}\ua^{\s_n})\cong M_1\oplus (S^{(n-2,1)}\ua^{\s_n})\sim M_1\oplus S_2\oplus (S_{1^2}|S_1).\]
We then only still have to study the structure of $M_4$.
For $0\leq k\leq n/2$ let
\[\{v_I:I\subseteq \{1,\ldots,n\}\text{ with }|I|=k\}\]
be the standard basis of $M_k$. Given $0\leq k\leq\ell\leq n/2$ define $\eta_{\ell,k}:M_\ell\to M_k$ by
\[\eta_{\ell,k}v_I=\sum_{J\subseteq I,\ |J|=k}v_J\]
for any element $v_I$ of the standard basis of $M_\ell$.
From \cite[Theorem 1]{Wil} we have that $\dim\Im\eta_{4,3}=\dim M_3-(n-1)$, $\dim\Im\eta_{4,1}=\dim M_1$ and $\dim\Im\eta_{3,1}=n-1$. In particular there exist submodules $X,Y\subseteq M_4$ and $A\subseteq M_3$ with $\dim A=\dim M_3-(n-1)$ such that $M_4\sim X|A$ and $M_4\sim Y|M_1$. Further $\eta_{3,1}\circ\eta_{4,3}=0$ by \cite[(3.1)]{Wil}. So $A\cong \ker\eta_{3,1}$. So $M_3/A\cong\Im\eta_{3,1}\subseteq M_1$ has dimension $n-1$. Since $M_1\cong D_0|D_1|D_0\sim S_1|D_0$ is uniserial and $D_0\cong \1_{\s_n}$, it then follows that $M_3/A\cong S_1$. Since $D_3\cong\hd(S_3)$ is not a composition factor of $S_1$ and $S_3\subseteq M_3$, it follows that $S_3\subseteq A$. From \cite[Example 17.17, Theorem 24.15]{JamesBook} we also have that $D_1\cong \hd(S_1)$ is not a composition factor of $A/S_3$. Since $D_0$ is contained exactly once in the head of $M_k$ for each $k$, it follows that $M_4\sim (X\cap Y)|S_1|A$. As $S_4\subseteq M_4$ and $D_4\cong \hd(S_4)$ is not a composition factor of $S_1$ or $A$, it follows by comparing dimensions that $M_4\sim S_4|S_1|A$.
\end{proof}
\begin{lemma} \label{L160817_1}
Let $p=3$, $n\equiv 1\pmod{3}$ with $n\geq 10$. Then
\begin{align*}
&M_1\cong D_0\oplus D_1,&&M_2\cong D_1\oplus (\overbrace{D_0|D_2}^{S_2}|D_0),\\
&M_3\sim D_1\oplus (S_3|(D_0\oplus S_2)),&&M_4\sim S_4|M_3.
\end{align*}
\end{lemma}
\begin{proof}
For the structure of $M_1$, $M_2$ and $M_3$ see \cite[Lemma 4.4]{kmt} and use that $\hd(S_k)\cong D_k$ for $k\leq n/2$. Notice that $D_1$ and $D_4$ are in the same block, while $D_0$, $D_2$ and $D_3$ are in a different block. Further $S_4\cong D_4$ or $S_4\cong D_1|D_4$ from \cite[Theorem 24.15]{JamesBook}. In particular
$Y_4\cong D_4$ if $S_4\cong D_4$ or $Y_4\cong D_1|D_4|D_1$ if $S_4\cong D_1|D_4$. The lemma then follows from Lemma \ref{LYoung} and by comparing composition factors (see \cite[Example 17.17, Theorem 24.15]{JamesBook}).
\end{proof}
\section{Partitions with at least 2 normal nodes}\label{geq2n}
In the next three sections we will study in more detail the endomorphism rings of the modules $D^\la$, $E^\la$ or $E^\la_\pm$ for certain particular classes of partitions. We start here by considering the case where $\la$ has at least 2 normal nodes.
\begin{lemma}\label{L7}
Let $p=2$ and $n\geq 10$ be even. If $\la\in\Par_2(n)$ with $\epsilon_0(\la)+\epsilon_1(\la)\geq 3$ then there exist $\psi,\psi',\psi''\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ such that $\psi|_{S_2}$, $\psi'|_{S_2}$ and $\psi''|_{S_2}$ are linearly independent.
\end{lemma}
\begin{proof}
By Lemma \ref{L12e} we have that $M_2\sim S_2|(D_0\oplus S_1)$. By Lemma \ref{l2} if
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})=\dim\Hom_{\s_n}(S_1,\End_F(D^\la))+c,\]
then there exist homomorphisms $\phi_i\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ for $1\leq i\leq c-1$ such that $\phi_1|_{S_2},\ldots,\phi_{c-1}|_{S_2}$ are linearly independent. Since $\la$ has at least 3 normal nodes we have by \cite[Lemma 5.4]{kmt} and Lemmas \ref{l2} and \ref{l12} that
\[\dim\End_{\s_{n-2}}(D^\la\da_{\s_{n-2}})>2\dim\Hom_{\s_n}(S_1,\End_F(D^\la))+7\]
and so by \cite[Lemma 4.14]{m1}
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq \dim\Hom_{\s_n}(S_1,\End_F(D^\la))+4,\]
from which the lemma follows.
\end{proof}
\begin{lemma}\label{L8}
Let $p=2$ and $n\geq 10$ be even. Assume that $\la\in\Parinv_2(n)$ with $\epsilon_0(\la)+\epsilon_1(\la)=2$. Then there exist $\psi,\psi'\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ such that $\psi|_{S_2}$ and $\psi'|_{S_2}$ are linearly independent.
\end{lemma}
\begin{proof}
By Lemma \ref{L12e} we have that $M_2\sim S_2|(D_0\oplus S_1)$ and so by Lemma \ref{l2} it is enough to prove that
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq \dim\Hom_{\s_n}(S_1,\End_F(D^\la))+3.\]
From \cite[Lemma 5.5]{kmt} we have that $\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq 4$. By \cite[Lemmas 3.12, 3.13]{kmt} we may then assume that $\dim\Hom_{\s_n}(S_1,\End_F(D^\la))=2$ and that for some residue $\ell$ we have $\eps_\ell(\la),\phi_\ell(\la)>0$ and $(\la\setminus X)\cup Y$ is not $p$-regular, where $X$ is the $\ell$-good node and $Y$ the $\ell$-cogood node of $\la$. By \cite[Lemma 2.13]{kmt} we then have that $h(\la)\geq 3$ and that there exists $1\leq j\leq h(\la)$ with $\la_j=\la_{j+1}+2$ and
\[\la_1\equiv\ldots\equiv\la_{j-1}\not\equiv\la_j\equiv\la_{j+1}\not\equiv\la_{j+2}\equiv\ldots\equiv\la_{h(\la)}\Md 2.\]
If $j$ is odd then there exists $k\geq 1$ such that $\la_{2k+1}\geq 1$ and
\[\la_1\equiv \la_2\not\equiv\la_{2k+1}\equiv\la_{2k+2}\Md 2.\]
From Lemma \ref{split2} this contradicts the assumption that $D^\la\da_{A_n}$ splits. So $j$ is even. If $j=h(\la)$ then $\la_{h(\la)}=2$ and the other parts of $\la$ are odd, contradicting $n$ being even. If $j=h(\la)-1$ then, from Lemma \ref{split2}, $\la_{h(\la)-1}=3$, $\la_{h(\la)}=1$ and the other parts of $\la$ are even. So again from Lemma \ref{split2}, $\la=(4,3,1)$, contradicting $n\geq 10$. Thus $2\leq j\leq h(\la)-2$ is even. Notice that the normal nodes of $\la$ are on rows 1 and $j$ and so they have the same residue $i$. It then follows from Lemmas \ref{l45} and \ref{l39} that $D^\la\da_{\s_{n-2,2}}\cong A\oplus B$ with $A\da_{\s_{n-2}}\cong e_i^2D^\la$ and $B\da_{\s_{n-2}}\cong e_{1-i}e_iD^\la$. From \cite[Lemma 4.15]{m1} we have that $A\cong (D^{\tilde{e}^2_i(\la)}\otimes D^{(2)})|(D^{\tilde{e}^2_i(\la)}\otimes D^{(2)})$. So it is enough to prove that
\[\dim\End_{\s_{n-2,2}}(B)\geq\dim\Hom_{\s_n}(S_1,\End_F(D^\la))-\dim\End_{\s_{n-2,2}}(A)+3=3.\]
Notice that $B$ is self-dual, since it is a block component of a self-dual module of $\s_{n-2,2}$. Further
\[\tilde{e}_i(\la)=(\la_1,\ldots,\la_{j-1},\la_j-1,\la_{j+1},\ldots,\la_{h(\la)})\]
and then from $2\leq j\leq h(\la)-2$,
\[\tilde{e}_i(\la)_1\equiv\ldots\equiv\tilde{e}_i(\la)_j\not\equiv\tilde{e}_i(\la)_{j+1}\not\equiv\tilde{e}_i(\la)_{j+2}\equiv\ldots\equiv\tilde{e}_i(\la)_{h(\la)}\Md 2.\]
So $\epsilon_{1-i}(\tilde{e}_i(\la))=2$ (the corresponding normal nodes are on rows $j+1$ and $j+2$). Let $\mu:=\tilde{e}_{1-i}\tilde{e}_i(\la)$. From Lemma \ref{l39} it follows that
\begin{align*}
e_{1-i}e_iD^\la&\sim e_{1-i}D^{\tilde{e}_i(\la)}|\ldots|e_{1-i}D^{\tilde{e}_i(\la)}\sim\overbrace{D^{\mu}|\ldots|D^{\mu}}^{C}|\ldots|\overbrace{D^{\mu}|\ldots|D^{\mu}}^{C},
\end{align*}
with $C=e_{1-i}D^{\tilde{e}_i(\la)}$ indecomposable with simple head and socle and $[C:D^{\mu}]=2$. So $B$ is not semisimple. If the socle of $B$ is not simple then $\dim\End_{\s_{n-2,2}}(B)\geq 3$ (since head and socle of $B$ are isomorphic and $B$ is not semisimple). So we may assume that the socle of $B$ is simple. Since
\[\dim\Hom_{\s_{n-2,2}}(D^{\mu}\otimes M^{(1^2)},B)=\dim\Hom_{\s_{n-2}}(D^{\mu},B\da_{\s_{n-2}})\geq 1\]
and any composition factor of $M^{(1^2)}$ is of the form $D^{(2)}$, we then have that $\soc(B)\cong D^{\tilde{e}_{1-i}\tilde{e}_i(\la)}\otimes D^{(2)}$. Further
\begin{align*}
\dim\Hom_{\s_{n-2,2}}(C\otimes M^{(1^2)},B)&=\dim\Hom_{\s_{n-2}}(C,B\da_{\s_{n-2}})\\
&>\dim\Hom_{\s_{n-2}}(D^{\mu},B\da_{\s_{n-2}})\\
&=\dim\Hom_{\s_{n-2,2}}(D^{\mu}\otimes M^{(1^2)},B)\\
&\geq 1.
\end{align*}
Note that
\[\soc(B)\cong D^{\mu}\otimes D^{(2)}\cong \hd(C\otimes M^{(1^2)}).\]
So there exists a quotient $\overline{C}$ of $C\otimes M^{(1^2)}$ not isomorphic to $D^{\mu}\otimes D^{(2)}$ such that $\overline{C}\subseteq B$. Further $\soc(B)\subsetneq \overline{C}\subseteq B$ and $\overline{C}$ has simple head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$. If $\overline{C}\cong C\otimes M^{(1^2)}$ then $\overline{C}$ is self-dual, as is $B$. So $C\otimes M^{(1^2)}$ is also a quotient of $B$ and then
\[\dim\End_{\s_{n-2,2}}(B)\geq\dim\End_{\s_{n-2,2}}(C\otimes M^{(1^2)})=4\]
(using Lemma \ref{l39}). So we may assume that $\overline{C}\not\cong C\otimes M^{(1^2)}$. Notice that $[C\otimes M^{(1^2)}:D^{\mu}\otimes D^{(2)}]=4$, that $C\otimes M^{(1^2)}$ has simple head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$ and that $C\otimes D^{(2)}$ and $D^{\mu}\otimes M^{(1^2)}$ are distinct submodules of $C\otimes M^{(1^2)}$ with $[C\otimes D^{(2)}:D^{\mu}\otimes D^{(2)}]=2$, $[D^{\mu}\otimes M^{(1^2)}:D^{\mu}\otimes D^{(2)}]=2$ and both $C\otimes D^{(2)}$ and $D^{\mu}\otimes M^{(1^2)}$ have simple head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$. So $[\overline{C}:D^{\mu}\otimes D^{(2)}]=2$.
Note that when $\mu\in\Par_p(m)$ and $D^\mu$ is defined as a $K\s_m$-module (with $K$ a field which is not necessarily algebraically closed), any block component of the restriction of $D^\mu$ to a Young subgroup is self-dual. Further any permutation module of $KG$ is self-dual, for any field $K$ and group $G$. In particular the previous part, about the structure of $B$, holds not only over the algebraically closed field $F$ but also over $\mathbb{F}_2$ (since $\mathbb{F}_2$ is a splitting field of $\s_m$ for any $m$), so until the end of the proof we will work over the field $\mathbb{F}_2$.
In this case there exist exactly three submodules $E_1,E_2,E_3\subseteq C\otimes M^{(1^2)}$ with $[E_j:D^{\mu}\otimes D^{(2)}]=2$ and head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$. Similarly $C\otimes M^{(1^2)}$ has exactly three quotients $F_1,F_2,F_3$ with $[F_j:D^{\mu}\otimes D^{(2)}]=2$ and head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$. We may assume that $E_1\cong F_1\cong C\otimes D^{(2)}$ and that $E_2\cong F_2\cong D^{\mu}\otimes M^{(1^2)}$. Let $g_1,g_2\in\End_{\s_{n-2,2}}(C\otimes M^{(1^2)})$ with $\Im\, g_j=E_j$ and $(C\otimes M^{(1^2)})/\Ker\, g_j=F_j$. Since $C\otimes M^{(1^2)}$ has simple head and socle isomorphic to $D^{\mu}\otimes D^{(2)}$, then so does $\Im(g_1+g_2)$, if it is non-zero. Since $E_j\not\subseteq E_k$ if $j\not=k$, we then have that $E_3=\Im (g_1+g_2)$ and $(C\otimes M^{(1^2)})/\Ker (g_1+g_2)=F_3$. So $E_3\cong F_3$. By self-duality of $C\otimes M^{(1^2)}$, there exists $\sigma\in\s_3$ with $F_{\sigma(j)}^*\cong E_j$ for $1\leq j\leq 3$. Since $E_1\cong F_1$ and $E_2\cong F_2$ are self-dual, it then follows that also $E_3\cong F_3$ is self-dual. In particular $\overline{C}$ is self-dual, since it is isomorphic to some $E_j$.
Since $\soc(B)\subsetneq \overline{C}\subsetneq B$ and any of these three modules is self-dual, it then follows that $\dim\End_{\s_{n-2,2}}(B)\geq 3$.
\end{proof}
\section{Two-row partitions}
Modules indexed by two-row partitions will play a special role in the proof of Theorem \ref{mt}, since in this case not all results from Section \ref{s1} apply. So we will consider them in more detail in this section. We start by citing a branching result for two-row partitions, which is part of the main result of \cite{sh} and will be used in this section.
Recall that when we write, for example, $D_1\subseteq\End_F(V)$, we mean that $\End_F(V)$ has a submodule which is isomorphic to $D_1$.
\begin{lemma}\label{tsh}
Let $\la=(n-k,k)$ with $k\geq 1$ and $n-2k\geq 1$. Write $n-2k=\sum_j s_jp^j$ with $0\leq s_j<p$ and let $t$ be minimal such that $s_t<p-1$. If $t\geq 1$ then, in the Grothendieck group, $[D^\la\da_{\s_{n-1}}]$ is equal to
\[[D^{(n-k-1,k)}]+\de[D^{(n-k-1+p^t,k-p^t)}]+\sum_{j=0}^{t-1}2[D^{(n-k-1+p^j,k-p^j)}],\]
where $D^{(n-k-1+r,k-r)}:=0$ if $(n-k-1+r,k-r)\not\in\Par_p(n-1)$, and where $\de=1$ if $s_t<p-2$, while $\de=0$ otherwise.
\end{lemma}
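For example, let $p=2$ and $\la=(7,4)$, so that $n=11$ and $n-2k=3=1+2$. Here $s_0=s_1=1$ and $s_2=0$, so $t=2$, and $\de=0$ (as always for $p=2$). The lemma then gives
\[[D^{(7,4)}\da_{\s_{10}}]=[D^{(6,4)}]+2[D^{(7,3)}]+2[D^{(8,2)}].\]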
\begin{lemma}\label{L4}
Let $p=2$ and $n\geq 7$ be odd. If $\la=(n-k,k)$ with $k\geq 2$ and $n-2k\geq 3$ then there exist $\psi_2,\psi_2'\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ such that $\psi_2|_{S_2}$, $\psi_2'|_{S_2}$ are linearly independent or there exists $\psi_3\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$.
\end{lemma}
\begin{proof}
From Lemma \ref{L12o}, $M_2\sim S_2|M_1$ and $M_3\sim S_3|M_2$. So if
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq \dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})+2\]
there exist $\psi,\psi'\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ such that $\psi|_{S_2}$, $\psi'|_{S_2}$ are linearly independent, by Lemma \ref{l2}. If
\[\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq \dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})+1\]
there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$, again by Lemma \ref{l2}.
Since $n$ is odd, both removable nodes are normal and so, by Lemma \ref{l53}, $\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})=2$. It is then enough to prove that at least one of
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq 4\quad\text{or}\quad\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 4\]
holds. Note that $n-2k$ is odd.
{\bf Case 1:} $n-2k\equiv 3\Md 4$, so $t\geq 2$ in Lemma \ref{tsh}. Then by block decomposition (Lemma \ref{l45}), Lemmas \ref{l39} and \ref{tsh}
\[D^\la\da_{\s_{n-2}}\cong (D^{(n-k-1,k-1)})^{\oplus 2}\oplus A\]
where $[A:D^{(n-k-2,k)}]=1$ and $[A:D^{(n-k,k-2)}]=2$.
It easily follows that $D^\la\da_{\s_{n-2,2}}$ has (at least) 2 block components with at least 2 composition factors each and then $\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq 4$, since simple $F\s_n$- and $F\s_{n-2,2}$-modules are self-dual.
{\bf Case 2:} $n-2k\equiv 1\Md 4$, so $t=1$ in Lemma \ref{tsh}.
Then by block decomposition (Lemma \ref{l45}), Lemmas \ref{l39} and \ref{tsh}
\begin{align*}
D^\la\da_{\s_{n-1}}\cong &D^{(n-k,k-1)}|D^{(n-k-1,k)}|D^{(n-k,k-1)},\\
D^\la\da_{\s_{n-2}}\cong &(D^{(n-k-1,k-1)})^{\oplus 2}\oplus D^{(n-k-2,k)},\\
D^\la\da_{\s_{n-3}}\sim &(\overbrace{D^{(n-k-1,k-2)}|D^{(n-k-2,k-1)}|D^{(n-k-1,k-2)}}^B)^{\oplus 2}\\
&\oplus (\overbrace{D^{(n-k-2,k-1)}|\ldots|D^{(n-k-3,k)}|\ldots|D^{(n-k-2,k-1)}}^C),
\end{align*}
where $B$ and $C$ are indecomposable with simple head and socle. To see this, note that $D^\la\da_{\s_{n-1}}$ is indecomposable with simple head and socle each isomorphic to $D^{(n-k,k-1)}$ by Lemma \ref{l39} and the composition factors, with multiplicities, of $D^\la\da_{\s_{n-1}}$ are known by Lemma \ref{tsh}. The structure of $D^\la\da_{\s_{n-2}}$ then follows by Lemmas \ref{l45} and \ref{l39}. For $D^\la\da_{\s_{n-3}}$ use again Lemmas \ref{l39} and \ref{tsh}.
Notice that $D^\la\da_{\s_{n-3,3}}\cong F\oplus G$, where all composition factors of $F\da_{\s_{1^{n-3},3}}$ are of the form $\1\otimes D^{(3)}$ and all composition factors of $G\da_{\s_{1^{n-3},3}}$ are of the form $\1\otimes D^{(2,1)}$ (since $D^{(3)}$ and $D^{(2,1)}$ are in different blocks). From \cite[Lemma 1.11]{bk5} we have that $D^{(n-k-2,k-1)}\otimes D^{(2,1)}$ and $D^{(n-k-3,k)}\otimes D^{(3)}$ are composition factors of $D^\la\da_{\s_{n-3,3}}$. So $F$ has a composition factor isomorphic to $D^{(n-k-2,k-1)}\otimes D^{(2,1)}$. Since $D^{(2,1)}$ has dimension 2 and $D^{(n-k-2,k-1)}$ appears only once in the socle of $D^\la\da_{\s_{n-3}}$, it follows that $F$ is non-zero and not simple. Similarly $G$ is non-zero and not simple, since it has a composition factor $D^{(n-k-3,k)}\otimes D^{(3)}$ and $D^{(n-k-3,k)}$ does not appear in the socle of $D^\la\da_{\s_{n-3}}$. Further $F$ and $G$ are self-dual, since they are block components of $D^\la\da_{\s_{n-3,3}}$. So $\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 4$.
\end{proof}
\begin{lemma}\label{Lemma7.1}
Let $p=2$ and $n\geq 4$ be even. If $\la=(n-k,k)$ with $1\leq k<n/2$ and $\dim\Hom_{\s_n}(S_1,\End_F(D^\la))\geq 1$ then $\la=\be_n$. In this case if $n\equiv 0\Md 4$ then $D_1\subseteq\End_F(D^\la)$, while if $n\equiv 2\Md 4$ then $S_1\subseteq\End_F(D^\la)$.
\end{lemma}
\begin{proof}
This follows from \cite[Lemma 7.1]{m1}, since $(n-k,k)$ is JS by Lemma \ref{L151119}.
\end{proof}
\begin{lemma}\label{L10}
Let $p=2$ and $n\geq 8$ with $n\equiv 0\Md 4$. If $\la=(n-k,k)$ with $k\geq 2$ and $n-2k\geq 3$ then one of the following happens:
\begin{itemize}
\item $D_2^{\oplus 2}\subseteq \End_F(D^\la)$,
\item $S_3\subseteq \End_F(D^\la)$,
\item $D_2\oplus D_3\subseteq\End_F(D^\la)$,
\item there exists $\psi\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{itemize}
\end{lemma}
\begin{proof}
Note that $\la$ is JS by Lemma \ref{L151119}, since $n$ is even and $\la$ has two parts. From Lemmas \ref{l39} and \ref{tsh} we have that
\begin{align*}
D^\la\da_{\s_{n-1}}&\cong D^{(n-k-1,k)},\\
D^\la\da_{\s_{n-2}}&\sim D^{(n-k-1,k-1)}|\overbrace{B|D^{(n-k-2,k)}|C}^N|D^{(n-k-1,k-1)},
\end{align*}
where all composition factors of $B$ and $C$ are of the form $D^{(n-k-2+2^i,k-2^i)}$ with $i\geq 1$. Let $0\leq j\leq \lfloor k/2\rfloor$ with $D^{(n-k-2+2j,k-2j)}\subseteq N$ (such a $j$ exists since any composition factor of $N$, and so also of its socle, is of the form $D^{(n-k-2+2\overline{j},k-2\overline{j})}$ for some $0\leq \overline{j}\leq \lfloor k/2\rfloor$).
By Lemma \ref{l39} and block decomposition we then have that
\begin{align*}
D^\la\da_{\s_{n-3}}&\cong (D^{(n-k-2,k-1)})^{\oplus 2}\oplus N\da_{\s_{n-3}},\\
D^\la\da_{\s_{n-4}}&\supseteq D^{(n-k-2,k-2)}\oplus D^{(n-k-3+m,k-m-1)},
\end{align*}
where $m=2j$ if $j<k/2$ or $m=2j-1$ if $j=k/2$.
{\bf Case 1:} $k\geq 3$. Fix $j,m$ as above. We may assume that there is no $\psi\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$. By \cite[Example 17.17]{JamesBook} we have that $M_4\sim S_4|A$ with $A\sim S_3|S_2|S_1|S_0$. Further $\dim\Hom_{\s_n}(S_1,\End_F(D^\la))=0$ by Lemma \ref{Lemma7.1}. In particular $D_1\not\subseteq\End_F(D^\la)$, since $D_1\cong\hd(S_1)$. By Lemma \ref{l2} it then follows that
\begin{align*}
&\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})\\
&=\dim\Hom_{\s_n}(M_4,\End_F(D^\la))\\
&=\dim\Hom_{\s_n}(A,\End_F(D^\la))\\
&\leq\dim\Hom_{\s_n}(S_3,\End_F(D^\la))+\dim\Hom_{\s_n}(S_2,\End_F(D^\la))\\
&\hspace{11pt}+\dim\Hom_{\s_n}(S_1,\End_F(D^\la))+\dim\Hom_{\s_n}(S_0,\End_F(D^\la))\\
&=\dim\Hom_{\s_n}(S_3,\End_F(D^\la))+\dim\Hom_{\s_n}(S_2,\End_F(D^\la))+1.
\end{align*}
From Lemma \ref{L12e}, $S_3\cong D_2|D_1|D_3$ and $S_2\cong D_1|D_2$. From Lemma \ref{L2} we have that $D_2\subseteq \End_F(D^\la)$, since $D_1\not\subseteq\End_F(D^\la)$. For the same reasons, if $\dim\Hom_{\s_n}(S_2,\End_F(D^\la))\geq 2$ then $D_2^{\oplus 2}\subseteq\End_F(D^\la)$, while if $\dim\Hom_{\s_n}(S_3,\End_F(D^\la))\geq 1$ then $D_3$ or $S_3$ is contained in $\End_F(D^\la)$ (and so in this case $D_2\oplus D_3$ or $S_3$ is contained in $\End_F(D^\la)$). Thus it is enough to prove that $\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})\geq 3$.
We have that $(n-k-2,k-2)\not=(n-k-3+m,k-m-1)$, since $k\geq 3$. If $\mu$ is either of these two partitions then
\[\dim\Hom_{\s_{n-4,4}}(D^\mu\otimes M^{(1^4)},D^\la\da_{\s_{n-4,4}})=\dim\Hom_{\s_{n-4}}(D^\mu,D^\la\da_{\s_{n-4}})\geq 1.\]
It then follows that there are at least two non-isomorphic simple modules appearing in the socle of $D^\la\da_{\s_{n-4,4}}$. Further $D^\la\da_{\s_{n-4}}$ is not semisimple, since it contains $D^{(n-k-2,k-1)}\da_{\s_{n-4}}$ which is not semisimple (by Lemma \ref{l39}). So the same holds for $D^\la\da_{\s_{n-4,4}}$. In particular $\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})\geq 3$, since $D^\la\da_{\s_{n-4,4}}$ is self-dual, it is not semisimple and its socle is not simple.
{\bf Case 2:} $k=2$. By Lemma \ref{L12e} we have that $M_3\sim M_1\oplus (S_3|S_2)$. So by Lemmas \ref{l2} and \ref{l53}
\begin{align*}
&\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\\
&=\dim\Hom_{\s_n}(M_3,\End_F(D^\la))\\
&\leq\dim\Hom_{\s_n}(M_1,\End_F(D^\la))+\dim\Hom_{\s_n}(S_3,\End_F(D^\la))\\
&\hspace{11pt}+\dim\Hom_{\s_n}(S_2,\End_F(D^\la))\\
&=\dim\Hom_{\s_n}(S_3,\End_F(D^\la))+\dim\Hom_{\s_n}(S_2,\End_F(D^\la))+1.
\end{align*}
In this case it is then enough to prove that $\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 3$. Since $n\equiv 0\Md 4$, from Lemma \ref{tsh} we have that $[N]=2[D^{(n-2)}]+[D^{(n-4,2)}]$ and so $[N\da_{\s_{n-3}}]=2[D^{(n-3)}]+[D^{(n-5,2)}]$. Since $D^{(n-3)}$ and $D^{(n-5,2)}$ are not in the same block as $D^{(n-4,1)}$, it follows that $D^\la\da_{\s_{n-3,3}}$ has at least two non-zero block components and that at least one of the block components is not simple. So $\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 3$, since block components of $D^\la\da_{\s_{n-3,3}}$ are self-dual, as are simple $F\s_{n-3,3}$-modules.
\end{proof}
\begin{lemma}\label{l46}
Let $p=3$, $n\equiv 0\Md 3$
and $\la=(n-k,k)$ with $1\leq k<n/2$. Then
\[\dim\Hom_{\s_n}(S_1,\End_F(D^\la))=\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})-1.\]
\end{lemma}
\begin{proof}
From Lemma \ref{L160817_0} and self-duality of $M_1$ and $D_0$, we have that $M_1\sim D_0|S_1^*$. So
\[D^\la\otimes M_1\sim (D^\la\otimes D_0)|(D^\la\otimes S_1^*)\sim D^\la|(D^\la\otimes S_1^*)\]
and then there exists $D\subseteq D^\la\otimes M_1$ with $D\cong D^\la$ such that $D^\la\otimes S_1^*\cong (D^\la\otimes M_1)/D$. Since $p=3$ and $h(\la)=2$, from \cite[Theorem 2.10]{ks3} we have that $\Ext^1(D^\la,D^\la)=0$. So
\begin{align*}
\dim\Hom_{\s_n}(S_1,\End_F(D^\la))&=\dim\Hom_{\s_n}(D^\la,D^\la\otimes S_1^*)\\
&=\dim\Hom_{\s_n}(D^\la,D^\la\otimes M_1)-1\\
&=\dim\Hom_{\s_n}(M_1,\End_F(D^\la))-1\\
&=\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})-1.
\end{align*}
\end{proof}
\begin{lemma}\label{l44}
Let $p=3$, $n\geq 10$ with $n\equiv 1\Md 3$ and $\la=(n-k,k)$ with $1\leq k<n/2$. Then
\begin{align*}
&\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})-\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})\\
&=\dim\Hom_{\s_n}(S_2,\End_F(D^\la)).
\end{align*}
\end{lemma}
\begin{proof}
From Lemma \ref{L160817_1} and self-duality of $M_2$, $M_1$ and $D_0$ we have that $M_2\oplus D_0\sim M_1\oplus (D_0|S_2^*)$. Similarly to the previous lemma we then have that
\begin{align*}
&\dim\Hom_{\s_n}(S_2,\End_F(D^\la))\\
&=\dim\Hom_{\s_n}(M_2\oplus D_0,\End_F(D^\la))-\dim\Hom_{\s_n}(M_1,\End_F(D^\la))-1\\
&=\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})-\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}}).
\end{align*}
\end{proof}
\begin{lemma}\label{l41}
Let $p=3$, $n\geq 9$ and $\la=(n-k,k)$ with $1\leq k\leq n/2$. If
\[\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})>\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\]
then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$.
\end{lemma}
\begin{proof}
If $n\equiv 2\Md 3$ we have by Lemma \ref{L160817_2} that $M_3\sim S_3|M_2$ and so the lemma follows by Lemma \ref{l2} applied to both $\alpha=(n-2,2)$ and $(n-3,3)$.
If $n\equiv 0\Md 3$ then by Lemma \ref{L160817_0} we have that $M_3\sim D_2\oplus (S_3|(D_0\oplus S_1))$ and that $M_2\cong D_2\oplus M_1$. Again by Lemma \ref{l2} applied to both $\alpha=(n-2,2)$ and $(n-3,3)$ and by assumption
\begin{align*}
\dim\Hom_{\s_n}(M_3,\End_F(D^\la))&=\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\\
&>\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\\
&=\dim\Hom_{\s_n}(M_2,\End_F(D^\la)).
\end{align*}
Since $M_2\cong D_2\oplus M_1$ we then have
\[\dim\Hom_{\s_n}(M_3,\End_F(D^\la))>\dim\Hom_{\s_n}(D_2\oplus M_1,\End_F(D^\la))\]
and then from Lemma \ref{l46}
\begin{align*}
\dim\Hom_{\s_n}(M_3,\End_F(D^\la))&>\dim\Hom_{\s_n}(D_2\oplus S_1,\End_F(D^\la))+1\\
&=\dim\Hom_{\s_n}(D_2\oplus D_0\oplus S_1,\End_F(D^\la)).
\end{align*}
Since $M_3\sim S_3|(D_2\oplus D_0\oplus S_1)$, the lemma follows.
If $n\equiv 1\Md 3$ then by Lemma \ref{L160817_1} we have that $M_3\sim D_1\oplus (S_3|(D_0\oplus S_2))$ and $M_1\cong D_0\oplus D_1$. The result then follows by Lemma \ref{l44} similarly to the previous case.
\end{proof}
\begin{lemma}\label{l35}
Let $p=3$, $n\equiv 1\Md 3$ with $n\geq 10$ and $\la=(n-k,k)$ with $2\leq k<n/2$ and $n-2k\geq 2$. If \[\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})>\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\]
then there exists $\psi\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{lemma}
\begin{proof}
From Lemma \ref{L160817_1} we have that $M_4\sim S_4|M_3$. The result then follows by Lemma \ref{l2} applied to both $\alpha=(n-3,3)$ and $(n-4,4)$.
\end{proof}
\begin{lemma}\label{l43}
Let $p=3$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If the two removable nodes of $\la$ are both normal and have different residues then
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})=3,\quad\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 4.\]
\end{lemma}
\begin{proof}
In this case $n-k\equiv k\Md 3$, so $n-2k\geq 3$. Also if $i$ is the residue of the removable node on the first row of $\la$, then the residue of the removable node on the second row of $\la$ is $i-1$. Considering residues of removable/addable nodes of the corresponding partitions, it follows easily from Lemmas \ref{l45} and \ref{l39} that $e_i D^\la\cong D^{(n-k-1,k)}$, $e_{i-1} D^\la\cong D^{(n-k,k-1)}$ and
\begin{align*}
D^\la\da_{\s_{n-2}}&\cong D^{(n-k-1,k)}\da_{\s_{n-2}}\oplus D^{(n-k,k-1)}\da_{\s_{n-2}}\\
&\cong e_{i-1}D^{(n-k-1,k)}\oplus e_iD^{(n-k,k-1)}.
\end{align*}
Further $e_iD^{(n-k,k-1)}\cong D^{(n-k-1,k-1)}$, while $e_{i-1}D^{(n-k-1,k)}$ has simple socle isomorphic to $D^{(n-k-1,k-1)}$ and $\dim\End_{\s_{n-2}}(e_{i-1}D^{(n-k-1,k)})=2$.
Note that
\begin{align*}
D^\la\da_{\s_{n-2,2}}&\subseteq D^\la\da_{\s_{n-2}}\ua^{\s_{n-2,2}}\\
&\cong (D^{(n-k-1,k-1)}\oplus e_{i-1}D^{(n-k-1,k)})\otimes (D^{(2)}\oplus D^{(1^2)}).
\end{align*}
From \cite[Lemma 1.11]{bk5} we have that $D^{(n-k-2,k)}\otimes D^{(2)}$ and $D^{(n-k-1,k-1)}\otimes D^{(1^2)}$ are both composition factors of $D^\la\da_{\s_{n-2,2}}$. Since $\soc(D^\la\da_{\s_{n-2}})\cong (D^{(n-k-1,k-1)})^{\oplus 2}$, it follows (by block decomposition) that
\[\soc(D^\la\da_{\s_{n-2,2}})\cong D^{(n-k-1,k-1)}\otimes (D^{(2)}\oplus D^{(1^2)})\]
and that $D^\la\da_{\s_{n-2,2}}\cong M\oplus N$ with $M$ and $N$ indecomposable with simple socle. Since $D^{(n-k-1,k-1)}\subseteq e_{i-1}D^{(n-k-1,k)}$, by \cite[Lemma 1.2]{bk2} we have that, up to exchange, $M\subseteq e_{i-1}D^{(n-k-1,k)}\otimes D^{(2)}$ and $N\subseteq e_{i-1}D^{(n-k-1,k)}\otimes D^{(1^2)}$. By the same lemma we also have that $e_{i-1}D^{(n-k-1,k)}\subseteq M\da_{\s_{n-2}}$ or $N\da_{\s_{n-2}}$. Thus
\[D^\la\da_{\s_{n-2,2}}\cong (D^{(n-k-1,k-1)}\otimes D^{(2)})\oplus (e_{i-1}D^{(n-k-1,k)}\otimes D^{(1^2)})\]
or
\[D^\la\da_{\s_{n-2,2}}\cong (D^{(n-k-1,k-1)}\otimes D^{(1^2)})\oplus (e_{i-1}D^{(n-k-1,k)}\otimes D^{(2)}).\]
So
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})=1+\dim\End_{\s_{n-2}}(e_{i-1}D^{(n-k-1,k)})=3.\]
Since $k\geq 2$, from Lemma \ref{l39} we also have that $e_{i\pm1} D^{(n-k-1,k-1)}\not=0$. Since $D^{(n-k-1,k-1)}$ appears with multiplicity larger than 1 in $D^\la\da_{\s_{n-2}}$ and all simple $F\s_3$-modules are 1-dimensional, it follows that $D^\la\da_{\s_{n-3,3}}$ has at least two block components which are non-zero and not simple. As block components of $D^\la\da_{\s_{n-3,3}}$ are self-dual, we then have that
\[\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})\geq 4.\]
\end{proof}
\begin{lemma}\label{l38}
Let $p=3$, $n\geq 9$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If the two removable nodes of $\la$ are both normal and have different residues then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$.
\end{lemma}
\begin{proof}
From Lemma \ref{l43} we have that
\[\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})>\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}}).\]
The result then holds by Lemma \ref{l41}.
\end{proof}
\begin{lemma}\label{l42}
Let $p=3$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If the two removable nodes of $\la$ have the same residue then
\[\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})=2,\quad\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq 4.\]
\end{lemma}
\begin{proof}
In this case $n-k\equiv k+2\Md 3$ and both removable nodes are normal. It then follows that $\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})=2$ by Lemma \ref{l53}.
Let $i$ be the residue of the removable nodes of $\la$. From Lemma \ref{l39}, considering the structure of the corresponding partitions, we have that $[e_i^2 D^\la:D^{(n-k-1,k-1)}]=2$ and
\begin{align*}
[e_{i-1}e_i D^\la:D^{(n-k,k-2)}]&\geq [e_i D^\la:D^{(n-k,k-1)}]\cdot [e_{i-1}D^{(n-k,k-1)}:D^{(n-k,k-2)}]\\
&=2.
\end{align*}
So by block decomposition $D^\la\da_{\s_{n-2}}\cong A\oplus B$ with $A$ and $B$ non-zero, non-simple and self-dual. Since any simple $F\s_2$-module is 1-dimensional, it is easy to see that a similar decomposition exists for $D^\la\da_{\s_{n-2,2}}$. The lemma then follows.
\end{proof}
\begin{lemma}\label{l37}
Let $p=3$
and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If the two removable nodes of $\la$ have the same residue then there exist $\psi,\psi'\in\Hom_{\s_n}(M_2,\End_F(D^\la))$ such that $\psi|_{S_2}$ and $\psi'|_{S_2}$ are linearly independent.
\end{lemma}
\begin{proof}
From Lemma \ref{l42} we have that
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})\geq \dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})+2.\]
We have that $M_2\sim S_2|M_1$ by Lemmas \ref{L160817_2}, \ref{L160817_0} and \ref{L160817_1} (if $n\equiv 0\Md 3$ then $S_2\cong D_2$ by \cite[Theorem 24.15]{JamesBook}). The result then follows from Lemma \ref{l2}.
\end{proof}
\begin{lemma}\label{l36}
Let $p=3$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If $\la$ is a JS-partition then
\[\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})=2,\quad\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})\geq 3.\]
\end{lemma}
\begin{proof}
Since $\la$ is a JS-partition we have that $n-k\equiv k+1\Md 3$. So by assumption $n-k\geq k+4$. Repeated use of Lemmas \ref{l45}, \ref{l39} and \ref{l56} gives
\begin{align*}
D^\la\da_{\s_{n-1}}\cong &D^{(n-k-1,k)},\\
D^\la\da_{\s_{n-2}}\cong &D^{(n-k-2,k)}\oplus D^{(n-k-1,k-1)},\\
D^\la\da_{\s_{n-3}}\sim &(D^{(n-k-2,k-1)}|\ldots|D^{(n-k-3,k)}|\ldots|D^{(n-k-2,k-1)})\oplus D^{(n-k-2,k-1)},\\
D^\la\da_{\s_{n-4}}\sim &(D^{(n-k-2,k-2)}|\ldots|D^{(n-k-4,k)}|\ldots|D^{(n-k-2,k-2)})\oplus D^{(n-k-2,k-2)}\\
&\oplus (D^{(n-k-3,k-1)}|\ldots|D^{(n-k-3,k-1)})\oplus D^{(n-k-3,k-1)}\oplus\ldots.
\end{align*}
So $D^\la\da_{\s_{n-2,2}}$ is semisimple with two non-isomorphic direct summands and then $\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}})=2$. Further, comparing residues of the removed nodes, it can be checked that $D^{(n-k-2,k-2)}$ and $D^{(n-k-4,k)}$ are in the same block, but $D^{(n-k-3,k-1)}$ is in a distinct block. So $D^\la\da_{\s_{n-4,4}}$ has at least two non-zero block components, at least one of which is not simple. Since $D^\la\da_{\s_{n-4,4}}$ is self-dual, as are all simple $F\s_{n-4,4}$-modules, we also have that $\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})\geq 3$.
\end{proof}
\begin{lemma}\label{l32}
Let $p=3$, $n\equiv 1\Md 3$ with $n\geq 10$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If $\la$ is a JS-partition then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$ or there exists $\psi'\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{lemma}
\begin{proof}
From Lemma \ref{l36} we have that
\[\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})>\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}}).\]
The lemma then holds by Lemmas \ref{l41} and \ref{l35}.
\end{proof}
\begin{lemma}\label{l29}
Let $p=3$, $n\equiv 0\Md 3$ with $n\geq 9$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If $\la$ is a JS-partition then the only normal node of $\la$ has residue 1 and $f_1e_1D^\la\cong D^\la|D^{(n-k-1,k,1)}|D^\la$.
\end{lemma}
\begin{proof}
It follows easily from Lemma \ref{L221119} and the assumptions on $n$ and $\la$ that the only normal node of $\la$ has residue 1. So from Lemmas \ref{l45} and \ref{l39}
\[D^\la\otimes M_1\cong f_1e_1D^\la\oplus f_0e_1D^\la\oplus f_2e_1D^\la.\]
Notice that $f_1e_1D^\la\cong f_1D^{\tilde{e}_1(\la)}$ has simple socle and head isomorphic to $D^\la$ from Lemmas \ref{l39} and \ref{l40}.
From Lemma \ref{L160817_0} we have that $M_1\sim D_0|S_1^*$ and that $S_1^*\subseteq M_{1^2}$. So $D^\la\otimes S_1^*\subseteq D^\la\otimes M_{1^2}$. Further
\[D^\la\otimes M_1\sim (\overbrace{D^\la\otimes D_0}^{\cong D^\la})|(D^\la\otimes S_1^*)\]
so that $D^\la\otimes S_1^*\cong (D^\la\otimes M_1)/D$ for some $D\subseteq D^\la\otimes M_1$ with $D\cong D^\la$. Let $B$ be the block component of $D^\la\otimes S_1^*$ corresponding to the block of $D^\la$. Then $B\cong (f_1e_1D^\la)/D^\la$. We will now show that $\soc(B)\cong D^{(n-k-1,k,1)}$. From Lemmas \ref{l45} and \ref{l39} we have that
\[D^\la\da_{\s_{n-2}}\cong\overbrace{D^{(n-k-2,k)}}^{e_0e_1D^\la}\oplus \overbrace{D^{(n-k-1,k-1)}}^{e_2e_1D^\la}.\]
Since $B\subseteq D^\la\otimes S_1^*\subseteq D^\la\otimes M_{1^2}\cong D^\la\da_{\s_{n-2}}\ua^{\s_n}$, comparing blocks we have that the socle of $B$ is contained in the socle of
\[f_1f_0 D^{(n-k-2,k)}\oplus f_0f_1 D^{(n-k-2,k)}\oplus f_1f_2 D^{(n-k-1,k-1)}\oplus f_2f_1 D^{(n-k-1,k-1)}.\]
From Lemma \ref{l40} we have that
\begin{align*}
&\soc(f_0f_1 D^{(n-k-2,k)}\oplus f_1f_2 D^{(n-k-1,k-1)}\oplus f_2f_1 D^{(n-k-1,k-1)})\\
&\cong\soc(f_0 D^{(n-k-2,k,1)}\oplus f_1 D^{(n-k-1,k)}\oplus f_2 D^{(n-k-1,k-1,1)})\\
&\cong D^\la\oplus (D^{(n-k-1,k,1)})^{\oplus 2}.
\end{align*}
Consider now $\soc(f_1f_0 D^{(n-k-2,k)})$. We have that $f_0D^{(n-k-2,k)}\cong e_0 D^{(n-k-1,k+1)}$ by \cite[Lemma 3.4]{m1}. Thus by Lemmas \ref{l39} and \ref{tsh}
\[f_0D^{(n-k-2,k)}\sim D^{(n-k-1,k)}|C|D^{(n-k-1,k)}\]
for a certain module $C$ such that all composition factors of $C$ are of the form $D^{(n-k-2+3j,k+1-3j)}$ with $j\geq 0$. Let $\mu\in\Par_3(n)$ with $D^\mu\subseteq f_1f_0D^{(n-k-2,k)}$. Then by Lemma \ref{l48}
\[\dim\Hom_{\s_{n-1}}(e_1 D^\mu,f_0D^{(n-k-2,k)})=\dim\Hom_{\s_n}(D^\mu,f_1f_0D^{(n-k-2,k)})\geq 1.\]
By Lemma \ref{l39} there exists a composition factor $D^\nu$ of $f_0D^{(n-k-2,k)}$ such that $\tilde e_1\mu=\nu$ and then, by Lemma \ref{l47}, $\mu=\tilde f_1\nu$. Thus $\mu=\la$ or $\mu=(n-k-2+3j,k+2-3j)$ for some $j\geq 0$. In the second case $e_1D^\mu\cong D^{(n-k-2+3j,k+1-3j)}$, contradicting $\dim\Hom_{\s_{n-1}}(e_1 D^\mu,f_0D^{(n-k-2,k)})\geq 1$ by Lemma \ref{l40}. Thus $\mu=\la$.
In particular the only simple modules appearing in the socle of
\[f_1f_0 D^{(n-k-2,k)}\oplus f_0f_1 D^{(n-k-2,k)}\oplus f_1f_2 D^{(n-k-1,k-1)}\oplus f_2f_1 D^{(n-k-1,k-1)}\]
are $D^\la$ and $D^{(n-k-1,k,1)}$. From Lemma \ref{l56} we have that $B$ has exactly one composition factor of the form $D^\la$, one composition factor of the form $D^{(n-k-1,k,1)}$ and possibly other composition factors. So, in view of Lemma \ref{l40}, $\soc(B)\cong D^{(n-k-1,k,1)}$ and then
\[f_1e_1D^\la\sim D^\la|\overbrace{D^{(n-k-1,k,1)}|E|D^\la}^B\]
for a certain module $E$. Since $f_1e_1 D^\la$ and $B$ both have simple socle and $f_1e_1D^\la$ is self-dual (by Lemma \ref{l57}), the lemma follows.
\end{proof}
\begin{lemma}\label{l27}
Let $p=3$, $n\equiv 0\Md 3$ with $n\geq 6$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If $\la$ is a JS-partition then
\begin{align*}
\dim\Ext^1_{\s_n}(S^{(n-k-1,k,1)},D^\la)=0\quad\text{and}\quad\dim\Ext^1_{\s_n}(D^{(n-k-1,k,1)},D^\la)\leq 1.
\end{align*}
\end{lemma}
\begin{proof}
Let $\mu:=(n-k-1,k,1)$. Since $\mu$ is 3-regular, $D^\mu$ is the head of $S^\mu$, and hence $S^\mu\sim\rad(S^\mu)|D^\mu$. So there exists an exact sequence
\[\Hom_{\s_n}(\rad (S^\mu),D^\la)\to\Ext^1_{\s_n}(D^\mu,D^\la)\to\Ext^1_{\s_n}(S^\mu,D^\la).\]
From \cite{jw} we have that $[S^\mu:D^\la]=1$, and then $\dim\Hom_{\s_n}(\rad (S^\mu),D^\la)\leq 1$. It is then enough to prove that $\dim\Ext^1_{\s_n}(S^\mu,D^\la)=0$.
Notice that by assumption $n-k\equiv 2\Md 3$, $k\equiv 1\Md 3$ and $n-2k\geq 4$. In particular $\la$ has no normal node of residue 0, so $e_0D^\la=0$ by Lemma \ref{l39}. Further
\[f_0 S^{(n-k-2,k,1)}\sim \overbrace{S^{(n-k-2,k,1^2)}|S^{(n-k-2,k+1,1)}}^A|S^\mu\]
by \cite[Corollary 17.14]{JamesBook}. Since $(n-k-2,k,1^2)$ and $(n-k-2,k+1,1)$ are 3-regular, we have that $\dim\Hom_{\s_n}(A,D^\la)=0$. Thus there exists an exact sequence
\[0=\Hom_{\s_n}(A,D^\la)\to \Ext^1_{\s_n}(S^\mu,D^\la)\to \Ext^1_{\s_n}(f_0 S^{(n-k-2,k,1)},D^\la).\]
From $e_0D^\la=0$ and \cite[Lemma 1.4]{ks3} it then follows that
\begin{align*}
\dim\Ext^1_{\s_n}(S^{\mu},D^\la)&\leq\dim\Ext^1_{\s_n}(f_0 S^{(n-k-2,k,1)},D^\la)\\
&=\dim\Ext^1_{\s_{n-1}}(S^{(n-k-2,k,1)},e_0 D^\la)\\
&=0.
\end{align*}
\end{proof}
\begin{lemma}\label{l28}
Let $p=3$, $n\equiv 0\Md 3$ with $n\geq 9$ and $\la=(n-k,k)$ with $n-2k\geq 2$ and $k\geq 2$. If $\la$ is a JS-partition then there exists $\psi\in\Hom_{\s_n}(M_3,\End_F(D^\la))$ which does not vanish on $S_3$ or there exists $\psi'\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$.
\end{lemma}
\begin{proof}
In view of Lemma \ref{l41} we may assume that
\[\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}})=\dim\End_{\s_{n-2,2}}(D^\la\da_{\s_{n-2,2}}).\]
Thus by Lemma \ref{l36}
\[\dim\End_{\s_{n-4,4}}(D^\la\da_{\s_{n-4,4}})>\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}}).\]
By Lemma \ref{L160817_0} we have that $M_1\sim D_0|S_1^*$ and that $M_4\sim S_4|S_1|A$ for a certain submodule $A\subseteq M_3$ with $M_3/A\cong S_1$. In view of Lemma \ref{l2} it is then enough to prove that
\[\dim\End_{\s_n}(A\oplus S_1,\End_F(D^\la))\leq\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}}).\]
Since $\la$ is JS we have by Lemma \ref{l46} that
\[\dim\End_{\s_n}(S_1,\End_F(D^\la))=0.\]
From Lemmas \ref{l45} and \ref{l29} we have that $D^\la\otimes M_1\cong D^\la\da_{\s_{n-1}}\ua^{\s_n}\cong B\oplus C$ where $B\cong D^\la|D^{(n-k-1,k,1)}|D^\la$ is the block component of $D^\la\otimes M_1$ of the block containing $D^\la$ and $C$ is the sum of the other block components. It follows from $M_1\sim D_0|S_1^*$ that $D^\la\otimes S_1^*\cong (B/D^\la)\oplus C$. Thus there exists $M\subseteq D^\la\otimes M_3$ with $N\subseteq M\cong (B/D^\la)\oplus C$ such that $N\cong B/D^\la$ and $(D^\la\otimes M_3)/M\cong D^\la\otimes A^*$. Considering block decomposition we then have that
\begin{align*}
\dim\End_{\s_n}(A,\End_F(D^\la))&=\dim\End_{\s_n}(D^\la,D^\la\otimes A^*)\\
&=\dim\End_{\s_n}(D^\la,(D^\la\otimes M_3)/M)\\
&=\dim\End_{\s_n}(D^\la,(D^\la\otimes M_3)/N).
\end{align*}
Since $h(\la)=2<3=p$ we have from Lemma \ref{l27} and \cite[Theorem 2.10]{ks3} that
\[\dim\Ext^1_{\s_n}(D^{(n-k-1,k,1)},D^\la)\leq 1\quad\text{and}\quad\dim\Ext^1_{\s_n}(D^\la,D^\la)=0.\]
Since $N\cong D^{(n-k-1,k,1)}|D^\la$ it follows that
\begin{align*}
&\dim\End_{\s_n}(A,\End_F(D^\la))\\
&=\dim\End_{\s_n}(D^\la,(D^\la\otimes M_3)/N)\\
&\leq\dim\End_{\s_n}(D^\la,D^\la\otimes M_3)+\dim\Ext^1_{\s_n}(D^{(n-k-1,k,1)},D^\la)\\
&\hspace{11pt}+\dim\Ext^1_{\s_n}(D^\la,D^\la)-1\\
&\leq \dim\End_{\s_n}(D^\la,D^\la\otimes M_3)\\
&=\dim\End_{\s_{n-3,3}}(D^\la\da_{\s_{n-3,3}}).
\end{align*}
So the lemma holds.
\end{proof}
\section{Splitting JS partitions}\label{sJS}
Splitting modules indexed by JS-partitions also play a special role in the proof of Theorem \ref{mt}, so they and the corresponding partitions will be studied in more detail in this section.
\begin{lemma}\label{L6}
Let $p=2$. If $\la\in\Parinv_2(n)$ is a JS-partition then the parts of $\la$ are odd. Further $n\equiv h(\la)^2\Md 4$.
\end{lemma}
\begin{proof}
Since $\la$ is a JS-partition all parts have the same parity by Lemma \ref{L151119}. It then easily follows by Lemma \ref{split2} that all parts are odd. Let $k$ be maximal with $2k\leq h(\la)$. For $1\leq i\leq k$ we have by Lemma \ref{split2} that $\la_{2i-1}-\la_{2i}=2$, and so $\la_{2i-1}+\la_{2i}\equiv 0\Md 4$; further, if $h(\la)$ is odd then $\la_{h(\la)}=1$. So $n\equiv h(\la)^2\Md 4$.
\end{proof}
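For instance, purely as an arithmetic illustration of the congruence: a partition such as $\la=(5,3,1)$ satisfying the constraints in the proof (odd parts, $\la_1-\la_2=2$ and $\la_{h(\la)}=1$) has $n=9\equiv 1\equiv 3^2=h(\la)^2\Md 4$.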
\begin{lemma}\label{L11}
Let $p=2$ and $n\geq 6$ be even. Let $\la\in\Parinv_2(n)$ be a JS-partition with $\la\not=\be_n$. Then $n\equiv 0\Md 4$ and $D_2\subseteq\End_F(D^\la)$. Further $D_2\da_{A_n}\subseteq\End_F(E^\la_\pm)$ or $S_3^*\da_{A_n}\subseteq\End_F(E^\la_\pm)$.
\end{lemma}
\begin{proof}
From Lemma \ref{L6} we have that $n\equiv 0\Md 4$. From \cite[Lemma 7.5]{m1} we then have that $D_2\subseteq\End_F(D^\la)$.
From Lemma \ref{L12e} we have that $M_3\cong M_1\oplus A$, where
\[A\cong\overbrace{D_2|D_1|D_3}^{S_3}|\overbrace{D_1|D_2}^{S_2}.\]
From Lemma \ref{LYoung} we have that $A\cong Y_3$ is self-dual, so we also have
\[A\cong\overbrace{D_2|D_1}^{S_2^*}|\overbrace{D_3|D_1|D_2}^{S_3^*}.\]
By Lemma \ref{L1} it then easily follows that $\dim\Hom_{A_n}(A,\End_F(E^\la_\pm))\geq 1$.
From Lemma \ref{Lemma7.1}, by Frobenius reciprocity and since $D_1\cong\hd S_1$,
\begin{align*}
\dim\Hom_{A_n}(D_1\da_{A_n},\End_F(E^\la_\pm))&\leq\dim\Hom_{\s_n}(D_1,\End_F(D^\la))\\
&\leq\dim\Hom_{\s_n}(S_1,\End_F(D^\la))\\
&=0.
\end{align*}
The lemma then follows.
\end{proof}
\begin{lemma}\label{l23}{\cite[Lemma 8.1]{m2}}
Let $p\geq 3$ and $\lambda\in\Parinv_3(n)$ be a JS-partition. Then $n\equiv h(\lambda)^2\Md p$.
\end{lemma}
\section{Split-non-split case}\label{sns}
In this section we study irreducible tensor products of the form $E^\lambda_\pm\otimes E^\mu$. We will use $E^\la_\pm$ to refer to $E^\la_+$ or $E^\la_-$, and $E^\la_\mp$ to refer to the other.
For $p\geq 3$ the following lemma holds by \cite[Lemma 3.1]{bk2}.
\begin{lemma}\label{l14}
Let $\la\in\Parinv_p(n)$ and $\mu\in\Par_p(n)\setminus\Parinv_p(n)$. If $E^\lambda_\pm\otimes E^\mu$ is irreducible then
\begin{align*}
\dim\Hom_{\s_n}(\End_F(D^\lambda),\End_F(D^\mu))&\leq 2.
\end{align*}
\end{lemma}
\begin{proof}
Notice that, by Frobenius reciprocity,
\begin{align*}
&\dim\Hom_{\s_n}(\End_F(D^\lambda),\End_F(D^\mu))\\
&=\dim\Hom_{A_n}(\Hom_F(E^\la_\pm,E^\la_+\oplus E^\la_-),\End_F(E^\mu))\\
&=\dim\End_{A_n}(E^\la_\pm\otimes E^\mu)+\dim\Hom_{A_n}(E^\la_\pm\otimes E^\mu,E^\la_\mp\otimes E^\mu).
\end{align*}
The lemma then follows, since $E^\la_+\otimes E^\mu$ and $E^\la_-\otimes E^\mu$ have the same dimension.
\end{proof}
\begin{theor}\label{sns2a}
Let $p=2$, $\la\in\Parinv_2(n)$ and $\mu\in\Par_2(n)\setminus\Parinv_2(n)$. If $E^\la_\pm$ and $E^\mu$ are not 1-dimensional and $E^\la_\pm\otimes E^\mu$ is irreducible, then $\la$ or $\mu$ is equal to $(n-1,1)$ or $\be_n$.
\end{theor}
\begin{proof}
For $n\leq 9$ the theorem holds by comparing dimensions using \cite[Tables]{JamesBook}. So we may assume that $n\geq 10$ and $\la,\mu\not\in\{(n),(n-1,1),\be_n\}$. By Lemma \ref{split2} we then have that $h(\la)\geq 3$. Note that there always exist $\psi_{0,\la}\in\Hom_{\s_n}(M_0,\End_F(D^\la))$ and $\psi_{0,\mu}\in\Hom_{\s_n}(M_0,\End_F(D^\mu))$ which do not vanish on $S_0$. Further for $2\leq k\leq 3$, from Lemmas \ref{L2} and \ref{L3} there exist $\psi_{k,\la}\in\Hom_{\s_n}(M_k,\End_F(D^\la))$ which do not vanish on $S_k$. Similarly there exists $\psi_{2,\mu}\in\Hom_{\s_n}(M_2,\End_F(D^\mu))$ which does not vanish on $S_2$.
Assume first that $h(\mu)\geq 3$. Then there similarly also exists $\psi_{3,\mu}\in\Hom_{\s_n}(M_3,\End_F(D^\mu))$ which does not vanish on $S_3$. So by Lemma \ref{l15}
\[\dim\Hom_{\s_n}(\End_F(D^\lambda),\End_F(D^\mu))\geq 3,\]
contradicting $E^\la_\pm\otimes E^\mu$ being irreducible by Lemma \ref{l14}.
So we may now assume that $\mu=(n-k,k)$ with $n-2k\geq 3$ and $k\geq 2$. Consider first $n$ odd. Then by Lemma \ref{L4} there exist $\psi_{2,\mu},\psi_{2,\mu}'\in\Hom_{\s_n}(M_2,\End_F(D^\mu))$ with $\psi_{2,\mu}|_{S_2},\psi_{2,\mu}'|_{S_2}$ linearly independent or there exists $\psi_{3,\mu}\in\Hom_{\s_n}(M_3,\End_F(D^\mu))$ which does not vanish on $S_3$. In either case by Lemma \ref{l15}
\[\dim\Hom_{\s_n}(\End_F(D^\la),\End_F(D^\mu))\geq 3,\]
again leading to a contradiction by Lemma \ref{l14}.
If $n$ is even and $\la$ has at least two normal nodes we can similarly conclude by Lemma \ref{L2} applied to $\mu$ and Lemma \ref{L7} or \ref{L8} applied to $\la$.
So assume now that $n$ is even and $\la$ is JS. Then $h(\la)\geq 4$ by Lemma \ref{L6}. So by Lemma \ref{L9} there exists $\psi_{4,\la}\in\Hom_{\s_n}(M_4,\End_F(D^\la))$ which does not vanish on $S_4$. In view of Lemma \ref{L2} (and arguing as above), we may then assume that there does not exist any $\psi_{4,\mu}\in\Hom_{\s_n}(M_4,\End_F(D^\mu))$ which does not vanish on $S_4$. So by Lemma \ref{L10} we may assume that $A\subseteq\End_F(D^\mu)=\End_F(E^\mu)$ with $A\in\{D_2^2,S_3,D_2\oplus D_3\}$. By Lemma \ref{L11} we have that $D_2\subseteq\End_F(D^\la)$. Since $D_0\subseteq\End_F(D^\la),\End_F(D^\mu)$, we may thus assume that $A\in\{S_3,D_2\oplus D_3\}$. From Lemma \ref{L11} we also have that there exists $B\subseteq\End_F(E^\la_\pm)$ with $B\in\{D_2\da_{A_n},S_3^*\da_{A_n}\}$. Further from Lemma \ref{L12e}, $D_2\subseteq S_3$. It then follows that
\[\dim\End_{A_n}(E^\la_\pm\otimes E^\mu)=\dim\Hom_{A_n}(\End_F(E^\la_\pm),\End_F(E^\mu))\geq 2,\]
which also contradicts $E^\la_\pm\otimes E^\mu$ being irreducible.
\end{proof}
\begin{theor}\label{sns2b}
If $p=2$, $n\geq 3$ and $\la\in\Parinv_p(n)$ then $E^\la_\pm\otimes E^{(n-1,1)}$ is irreducible if and only if $n$ is odd and $\la$ is a JS-partition, in which case $E^\la_\pm\otimes E^{(n-1,1)}\cong E^\nu$, where $\nu$ is obtained from $\la$ by removing the top removable node and adding the second bottom addable node.
\end{theor}
\begin{proof}
If $E^\la_\pm\otimes E^{(n-1,1)}$ is irreducible then $D^\la$ is not a composition factor of $D^\la\otimes D_1$ (since $E^{(n-1,1)}$ is not 1-dimensional). In particular, using Lemmas \ref{L12e} and \ref{L12o},
\[[D^\la\otimes M_1:D^\la]=[M_1:D_0]=\left\{\begin{array}{ll}
2,&n\mbox { is even},\\
1,&n\mbox { is odd}.
\end{array}\right.\]
From \cite[Lemma 3.5]{kmt}, Lemma \ref{l45} and block decomposition,
\[[D^\la\otimes M_1:D^\la]=\epsilon_0(\la)(\phi_0(\la)+1)+\epsilon_1(\la)(\phi_1(\la)+1).\]
Assume first that $n$ is even. Then $\la$ has at most two normal nodes. If $\la$ has exactly two normal nodes then we have from \cite[Lemma 6.2]{m1} that $[D^\la\otimes M_1:D^\la]>2$ (notice that $\phi_0(\la)+\phi_1(\la)=3$ by Lemma \ref{l52}). If $\la$ is a JS-partition then the only normal node is the top removable node and the only conormal nodes are the two bottom addable nodes. From Lemma \ref{L6} all these nodes have residue 0, so $\epsilon_0(\la)=1$, $\phi_0(\la)=2$ and $\epsilon_1(\la)=0$, thus $[D^\la\otimes M_1:D^\la]=1\cdot(2+1)=3$.
So we may now assume that $n$ is odd, in which case it easily follows from $[D^\la\otimes M_1:D^\la]=1$ that $\la$ is a JS-partition. In this case by Lemma \ref{L6} the normal node has residue 0 and the two conormal nodes both have residue 1. Let $A$ be the top removable node of $\la$, $B$ be the second bottom addable node of $\la$ and $C$ be the bottom addable node of $\la$. Then $A$ is the normal node of $\la$ and $B$ and $C$ are the conormal nodes of $\la$. From Lemmas \ref{split2} and \ref{L6} (or Lemma \ref{L151119}) we easily have that $h(\la)\geq 3$. In particular $B$ and $C$ are the two bottom addable nodes of $\tilde{e}_0(\la)=\la\setminus A$. So $B$ and $C$ are conormal in $\tilde{e}_0(\la)$. From Lemma \ref{l47} we have that $A$ is also conormal in $\tilde{e}_0(\la)$. Since $\la$ is a JS-partition it is easy to check that the normal nodes of $\tilde{e}_0(\la)$ are exactly the two top removable nodes. From Lemma \ref{l52} it follows that $A$, $B$ and $C$ are the only conormal nodes of $\tilde{e}_0(\la)$. So, from Lemmas \ref{l45} and \ref{l40},
\[D^\la\otimes M_1
\cong f_0D^{\tilde{e}_0(\la)}\oplus f_1D^{\tilde{e}_0(\la)}\cong D^\la\oplus (\underbrace{D^{(\la\setminus A)\cup B}|\overbrace{\rlap{$\phantom{D^\lambda}$}\ldots\rlap{$\phantom{D^\lambda}$}}^{\text{no }D^{(\la\setminus A)\cup B}}|D^{(\la\setminus A)\cup B}}_{\text{indec. w. simple head and socle}}).
\]
From Lemma \ref{L12o} it then follows that
\[D^\la\otimes D_1\cong \underbrace{D^{(\la\setminus A)\cup B}|\overbrace{\rlap{$\phantom{D^\lambda}$}\ldots\rlap{$\phantom{D^\lambda}$}}^{\text{no }D^{(\la\setminus A)\cup B}}|D^{(\la\setminus A)\cup B}}_{\text{indec. w. simple head and socle}}.
\]
Notice that $\la$ has an odd number of parts, all of which are odd. Since $D^\la\da_{A_n}$ splits, it follows from Lemma \ref{split2} that $\la_{h(\la)}=1$ and then that $D^{(\la\setminus A)\cup B}\da_{A_n}$ does not split (the corresponding partition has an odd number of parts and the last part is 2). Since $\soc(D^\la\otimes D_1)\cong D^{(\la\setminus A)\cup B}$ it follows that
\[\soc((D^\la\otimes D_1)\da_{A_n})\cong\soc((E^\la_+\otimes E^{(n-1,1)})\oplus (E^\la_-\otimes E^{(n-1,1)}))\cong (E^{(\la\setminus A)\cup B})^{\oplus k}\]
for some $k\geq 2$.
So
\begin{align*}
&\dim\Hom_{\s_n}(D^\la\otimes D_1,E^{(\la\setminus A)\cup B}\ua^{\s_n})\\
&=\dim\Hom_{A_n}((D^\la\otimes D_1)\da_{A_n},E^{(\la\setminus A)\cup B})\\
&\geq 2.
\end{align*}
Since $E^{(\la\setminus A)\cup B}\ua^{\s_n}\cong D^{(\la\setminus A)\cup B}|D^{(\la\setminus A)\cup B}$ and the socle of $D^\la\otimes D_1$ is simple, we have that $E^{(\la\setminus A)\cup B}\ua^{\s_n}\subseteq D^\la\otimes D_1$ and then that $D^\la\otimes D_1\cong D^{(\la\setminus A)\cup B}|D^{(\la\setminus A)\cup B}$,
from which the theorem follows.
\end{proof}
\begin{theor}\label{sns3}
Let $p=3$, $\la\in\Parinv_3(n)$ and $\mu\in\Par_3(n)\setminus\Parinv_3(n)$. If $E^\la_\pm$ and $E^\mu$ are not 1-dimensional then $E^\lambda_\pm\otimes E^\mu$ is irreducible if and only if $\mu\in\{(n-1,1),(n-1,1)^\Mull\}$, $\la$ is a JS-partition and $n\not\equiv 0\Md 3$. In this case $E^\lambda_\pm\otimes E^{(n-1,1)}\cong E^\nu$, where $\nu$ is obtained from $\lambda$ by removing the top removable node and adding the bottom addable node.
\end{theor}
\begin{proof}
If $\mu\in\{(n-1,1),(n-1,1)^\Mull\}$ the theorem holds by \cite[Theorem 3.3]{bk2} and Lemma \ref{l23}. So we may now assume that $\mu\not\in\{(n),(n)^\Mull,(n-1,1),(n-1,1)^\Mull\}$. For $n\leq 9$ the theorem can be checked separately, using \cite[Tables]{JamesBook}. So we may also assume that $n\geq 10$. By \cite[Lemma 2.2]{bkz}, and checking small cases separately, it then follows that $\al>\al^\Mull$ for $\al\in\{(n),(n-2,2),(n-3,3),(n-4,2^2)\}$.
From \cite[Theorem 9.2]{m2} we may assume that $\la$ is a JS-partition. From Lemma \ref{Mull} we have that $h(\la)\geq 3$. Assume first that $h(\mu),h(\mu^\Mull)\geq 3$. Then for $k=0$ and $k=3$ (the second case by Lemmas \ref{psi3} and \ref{l30}) there exist $\psi_{k,\la}\in\Hom_{A_n}(M_k,\End_F(E^\la_\pm))$ and $\psi_{k,\mu}\in\Hom_{A_n}(M_k,\End_F(E^\mu))$ which do not vanish on $S_k$. By Lemma \ref{l15},
\[\dim\End_{A_n}(E^\la_\pm\otimes E^\mu)=\dim\Hom_{A_n}(\End_F(E^\la_\pm),\End_F(E^\mu))\geq 2,\]
contradicting $E^\la_\pm\otimes E^\mu$ being irreducible.
So, up to exchange of $\mu$ and $\mu^\Mull$, we may assume that $\mu=(n-k,k)$ with $k\geq 2$ and $n-2k\geq 2$. If the removable nodes of $\mu$ have distinct residues then apply Lemmas \ref{psi2} and \ref{psi3} to $\la$ and Lemmas \ref{psi2} and \ref{l38} to $\mu$. If the removable nodes of $\mu$ have the same residue apply Lemma \ref{psi2} to $\la$ and Lemma \ref{l37} to $\mu$. Similarly to the above case we then have by Lemma \ref{l15} that in either case
\[\dim\Hom_{\s_n}(\End_F(D^\lambda),\End_F(D^\mu))\geq 3,\]
again contradicting $E^\la_\pm\otimes E^\mu$ being irreducible, due to Lemma \ref{l14}.
So we may assume that $\mu$ is also a JS-partition. From Lemma \ref{l23} we have that $n\equiv h(\la)^2\equiv 0\mbox{ or }1\Md 3$. If $n\equiv 1\Md 3$, then $h(\la)\geq 4$ by Lemmas \ref{Mull} and \ref{l23}. In this case apply Lemmas \ref{psi2}, \ref{psi3} and \ref{l33} to $\la$ and Lemmas \ref{psi2} and \ref{l32} to $\mu$. If $n\equiv 0\Md 3$ and $h(\la)>3$ apply Lemmas \ref{psi2}, \ref{psi3} and \ref{l33} to $\la$ and Lemmas \ref{psi2} and \ref{l28} to $\mu$. If $n\equiv 0\Md 3$ and $h(\la)=3$ then apply Lemmas \ref{psi2} and \ref{l8} to $\la$ and Lemmas \ref{psi2} and \ref{l9} to $\mu$. In each of these cases we then again contradict $E^\la_\pm\otimes E^\mu$ being irreducible by Lemmas \ref{l15} and \ref{l14}.
\end{proof}
\section{Double split case}\label{ds}
In this section we study irreducible tensor products of the form $E^\la_\pm\otimes E^\mu_\pm$.
\begin{theor}\label{ds2}
Let $p=2$. If $\la,\mu\in\Parinv_2(n)$ and $E^\la_\pm\otimes E^\mu_\pm$ is irreducible, then $n\not\equiv 2\Md 4$ and $\la=\be_n$ or $\mu=\be_n$.
\end{theor}
\begin{proof}
For $n\leq 8$ the theorem can be checked separately. So we may assume $n\geq 9$. By Lemma \ref{split2} we then have that $(n-3,3)\in\Par_2(n)\setminus\Parinv_2(n)$ and that if $\la,\mu\not=\be_n$ then $h(\la),h(\mu)\geq 3$. In this case by Lemmas \ref{l15} and \ref{L1}
\[\dim\End_{A_n}(E^\la_\pm\otimes E^\mu_\pm)=\dim\Hom_{A_n}(\End_F(E^\la_\pm),\End_F(E^\mu_\pm))\geq 2\]
(similar to the proofs of Theorems \ref{sns2a} and \ref{sns3}), contradicting $E^\la_\pm\otimes E^\mu_\pm$ being irreducible.
So $\la=\be_n$ or $\mu=\be_n$, and then $n\not\equiv 2\Md 4$ by Lemma \ref{split2}.
\end{proof}
\begin{theor}\label{ds3}
Let $p=3$ and $\lambda,\mu\in\Parinv_3(n)$ and assume that $E^\la_\pm$ and $E^\mu_\pm$ are not 1-dimensional. Then $E^\lambda_\pm\otimes E^\mu_\pm$ is irreducible if and only if, up to exchange, $E^\la_\pm=E^{(4,1,1)}_+$ and $E^\mu_\pm=E^{(4,1,1)}_-$. Further $E^{(4,1,1)}_+\otimes E^{(4,1,1)}_-\cong E^{(4,2)}$.
\end{theor}
\begin{proof}
For $n\leq 8$ it can be proved using \cite[Tables]{JamesBook} that if $E^\la_\pm\otimes E^\mu_\pm$ is irreducible then $n=6$ and $\la,\mu=(4,1,1)$, in which case the theorem can be checked using \cite{Atl}. So we may now assume that $n\geq 9$. Then $(n-3,3)>(n-3,3)^\Mull$ by \cite[Lemma 2.2]{bkz} and so by Lemmas \ref{l15} and \ref{l30},
\[\dim\End_{A_n}(E^\la_\pm\otimes E^\mu_\pm)=\dim\Hom_{A_n}(\End_F(E^\la_\pm),\End_F(E^\mu_\pm))\geq 2.\]
In particular $E^\la_\pm\otimes E^\mu_\pm$ is not irreducible.
\end{proof}
\section{Proof of Theorem \ref{mt}}\label{s3}
We will now prove our main result. We will consider the cases $p=2$ and $p\geq 3$ separately.
{\bf Case 1:} $p=2$.
If $\la,\mu\in\Par_2(n)\setminus\Parinv_2(n)$ and $E^\la\otimes E^\mu$ is irreducible as $FA_n$-module, then $D^\la\otimes D^\mu$ is irreducible as $F\s_n$-module. If $E^\la$ and $E^\mu$ are not 1-dimensional then by \cite[Main Theorem]{bk} and \cite[Theorems 1.1 and 1.2]{m1} we have that $n\equiv 2\Md 4$ and $D^\la\otimes D^\mu\cong D^\nu$ with $\nu=(n/2-j,n/2-j-1,j+1,j)$ for some $0\leq j\leq (n-6)/4$. By Lemma \ref{split2} it follows that $\nu\in\Parinv_2(n)$, contradicting $E^\la\otimes E^\mu$ being irreducible. If $\la\in\Parinv_2(n)$ and $\mu\in\Par_2(n)\setminus\Parinv_2(n)$ the theorem holds by Theorems \ref{sns2a} and \ref{sns2b}. If $\la,\mu\in\Parinv_2(n)$ the theorem holds by Theorem \ref{ds2}.
{\bf Case 2:} $p\geq 3$.
Note that from Lemma \ref{l23} if $\la\in\Parinv_p(n)$ is JS, then $n\equiv h(\la)^2\Md p$. In particular in this case $n\equiv 0\Md p$ if and only if $h(\la)\equiv 0\Md p$. Assume that $\la\in\Parinv_p(n)$ is JS and that $n\not\equiv 0\Md p$. Let $A$ be the top removable node of $\la$ and $B$ and $C$ be the two bottom addable nodes of $\la$. Then $A$ is the only normal node of $\la$ and $B$ and $C$ are the only conormal nodes of $\la$. Since $h(\la)\not\equiv 0\Md p$, the bottom addable node of $\la$ has residue different from 0. In view of Lemma \ref{l17} we then have that $\res(A)=0$ and that $\res(B)=i=-\res(C)$ for some residue $i\not=0$. By \cite[Lemma 2.9]{bk2} we further have that $A,B,C$ are the only conormal nodes of $\la\setminus A$. Comparing residues we have that $(\la\setminus A)^\Mull=\la\setminus A$ and that $((\la\setminus A)\cup B)^\Mull=(\la\setminus A)\cup C$ by Lemma \ref{l17}. So $(\la\setminus A)\cup B,(\la\setminus A)\cup C\in\Par_p(n)\setminus\Parinv_p(n)$ and $E^{(\la\setminus A)\cup B}\cong E^{(\la\setminus A)\cup C}$. For $p\geq 5$ the theorem then holds by \cite[Main Theorem]{bk2} and \cite[Theorem 1.1]{m2}. So assume now that $p=3$. If $\la,\mu\in\Par_3(n)\setminus\Parinv_3(n)$ and $E^\la$ and $E^\mu$ are not 1-dimensional, then $E^\la\otimes E^\mu$ is not irreducible by \cite[Main Theorem]{bk}. If $\la\in\Parinv_3(n)$ and $\mu\in\Par_3(n)\setminus\Parinv_3(n)$ the theorem holds by Theorem \ref{sns3} and the above observation. If $\la,\mu\in\Parinv_3(n)$ the theorem holds by Theorem \ref{ds3}.
\section{Tensor products with basic spin}\label{s2}
In this section we give some restrictions on tensor products with the basic spin module in characteristic 2 which might be irreducible.
\begin{lemma}\label{L13}
Let $p=2$ and $\la,\nu\in\Par_2(n)$. If $[D^\la\otimes D^{\be_n}:D^\nu]=2^ib$ with $b$ odd then $h(\nu)\leq 4i+2$ if $n$ is odd or $h(\nu)\leq 4i+4$ if $n$ is even. Further if $D^\nu\subseteq D^\la\otimes D^{\be_n}$ then $h(\la)\leq 2h(\nu)$.
\end{lemma}
\begin{proof}
For $\gamma\in\Par(n)$ let $\xi^\gamma$ be the Brauer character of $M^\gamma$. For $\psi\in\Par_2(n)$ let $\phi^\psi$ be the Brauer character of $D^\psi$. If $\alpha\in\Par(n)$ is the cycle partition of a 2-regular conjugacy class and $\phi$ is any Brauer character of $\s_n$, let $\phi_\alpha$ be the value that $\phi$ takes on the conjugacy class indexed by $\alpha$.
Let $c:=2i+1$ if $n$ is odd or $c:=2i+2$ if $n$ is even. Let $\al\in\Par(n)$ correspond to a 2-regular conjugacy class of $\s_n$. We have that $\phi^{\be_n}_{\al}=\pm 2^{\lfloor (h(\al)-1)/2\rfloor}$ by \cite[VII, p.203]{s5}. In particular if $\phi^{\be_n}_\al$ is not divisible by $2^{i+1}$ then $h(\al)\leq c$ (note that $h(\al)\equiv n\Md 2$ since $\al$ is the cycle partition of a 2-regular conjugacy class).
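Explicitly, $2^{i+1}\nmid\phi^{\be_n}_\al$ means $\lfloor (h(\al)-1)/2\rfloor\leq i$, that is $h(\al)\leq 2i+2$; combined with $h(\al)\equiv n\Md 2$ this gives $h(\al)\leq 2i+1=c$ when $n$ is odd and $h(\al)\leq 2i+2=c$ when $n$ is even.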
For $\gamma\in\Par(n)$ and $1\leq j\leq n$ let $a_j=a_j(\gamma)$ be the number of parts of $\gamma$ equal to $j$. Further let $A=A(\gamma):=a_1!\cdots a_n!$ and $\overline{A}=\overline{A}(\gamma)$ be the largest power of 2 dividing $A$. Since $M^\gamma=1\ua_{\s_{\gamma}}^{\times_j\s_j\wr \s_{a_j}}\ua^{\s_n}$ we have that $A\mid \xi^\gamma_\alpha$ for each $\al\in\Par(n)$ corresponding to a 2-regular conjugacy class. Since irreducible Brauer characters are linearly independent modulo 2 (see for example \cite[Theorem 15.5]{isaacs}), we then have that $\xi^\gamma=\overline{A}\,\overline{\xi}^\gamma$ with $\overline{\xi}^\gamma$ a Brauer character. Further, whenever they are defined, $\xi^\gamma_\gamma=A$ (and so $\overline{\xi}^\gamma_\gamma$ is odd) and $\xi^\gamma_\psi=0$ if $h(\psi)\leq h(\gamma)$ and $\psi\not=\gamma$. In particular there exists $b_\gamma\in\N$ such that if
\[\phi=\phi^{\be_n}(\phi^\la+\sum_{\gamma:h(\gamma)\leq c}b_\gamma\overline{\xi}^\gamma)\]
then $\phi_\al$ is divisible by $2^{i+1}$ for each $\al\in\Par(n)$ corresponding to a 2-regular conjugacy class (start by choosing $b_{(n)}$ so that this holds for $\al=(n)$, if $n$ is odd, then consider $b_\al$ for partitions $\al$ with two parts and so on).
Again since irreducible Brauer characters are linearly independent modulo 2, it follows that $\phi=2^{i+1}\overline{\phi}$ for some Brauer character $\overline{\phi}$. If $m$ is the multiplicity of $\phi^\nu$ in $\overline{\phi}$ then
\begin{align*}
m&=1/2^{i+1}([D^{\be_n}\otimes D^\la:D^\nu]+\sum_{\gamma:h(\gamma)\leq c}b_\gamma/\overline{A}(\gamma)[D^{\be_n}\otimes M^\gamma:D^\nu])\\
&=b/2+\sum_{\gamma:h(\gamma)\leq c}b_\gamma/(2^{i+1}\overline{A}(\gamma))[D^{\be_n}\otimes M^\gamma:D^\nu].
\end{align*}
Since $b$ is odd and $m\in\N$, there then exists $\gamma\in\Par(n)$ with $h(\gamma)\leq c$ such that
\[[S^{\be_n}\otimes M^\gamma:D^\nu]\geq[D^{\be_n}\otimes M^\gamma:D^\nu]\geq 1.\]
Note that $S^{\be_n}\otimes M^\gamma\cong S^{\be_n}\da_{\s_\gamma}\ua^{\s_n}$. In view of the Littlewood-Richardson rule, in characteristic 0, any composition factor of $S^{\be_n}\da_{\s_{\gamma}}$ is of the form $S^{\al^1}\otimes\ldots\otimes S^{\al^{h(\gamma)}}$ with $\al^j\in\Par(\gamma_j)$ such that $h(\al^j)\leq h(\be_n)=2$ and then any composition factor of $S^{\be_n}\otimes M^\gamma$ is of the form $S^\al$ with $h(\al)\leq h(\gamma)h(\be_n)\leq 2c$. Considering reduction modulo 2 we then have that any composition factor of $S^{\be_n}\otimes M^\gamma$, and so in particular also any composition factor of $D^{\be_n}\otimes M^\gamma$, is of the form $D^\zeta$, where $\zeta\in\Par_2(n)$ has at most $2c$ parts. It follows that $h(\nu)\leq 2c$.
Assume now that $D^\nu\subseteq D^\la\otimes D^{\be_n}$. Since
\[\dim\Hom_{\s_n}(D^\la,D^{\be_n}\otimes D^\nu)=\dim\Hom_{\s_n}(D^\nu,D^\la\otimes D^{\be_n})\geq 1\]
it follows that
\[[S^{\be_n}\otimes M^\nu:D^\la]\geq[D^{\be_n}\otimes D^\nu:D^\la]\geq 1.\]
So, similarly to the above, $h(\la)\leq h(\be_n)h(\nu)=2h(\nu)$.
\end{proof}
\begin{theor}
Let $p=2$, $\la\in\Par_2(n)$ and assume that $D^\la$ and $D^{\be_n}$ are not 1-dimensional and that exactly one of them splits when restricted to $A_n$. Then $E^{\be_n}_\pm\otimes E^\la$ or $E^\la_\pm\otimes E^{\be_n}$ is irreducible if and only if $D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$. In this case $h(\nu)\leq 6$ if $n$ is odd, $h(\nu)\leq 8$ if $n$ is even and $h(\la)\leq 2h(\nu)$. Further $\la$ has at most 2 normal nodes if $n$ is odd or at most 3 normal nodes if $n$ is even.
\end{theor}
\begin{proof}
From \cite[Main Theorem]{bk} and \cite[Theorems 1.1 and 1.2]{m1} if $D^\la\otimes D^{\be_n}$ is simple as $F\s_n$-module, then $n\equiv 2\Md 4$ and $h(\la)=h(\be_n)=2$. So from Lemma \ref{split2} neither $D^\la$ nor $D^{\be_n}$ splits in this case. Thus we may assume that $D^\la\otimes D^{\be_n}$ is not simple as $F\s_n$-module. Let $\{\al,\ga\}=\{\la,\be_n\}$ such that $\al\in\Parinv_2(n)$ and $\ga\not\in\Parinv_2(n)$. Then $E^\al_+\otimes E^\ga\cong(E^\al_-\otimes E^\ga)^\sigma$ with $\sigma\in\s_n\setminus A_n$. In particular $E^\al_+\otimes E^\ga$ is irreducible if and only if $E^\al_-\otimes E^\ga$ is irreducible. So, since $D^\la\otimes D^{\be_n}$ is not simple as $F\s_n$-module, $E^\al_\pm\otimes E^\ga$ is irreducible if and only if $D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$. In this case $h(\nu)\leq 6$ if $n$ is odd, $h(\nu)\leq 8$ if $n$ is even and $h(\la)\leq 2h(\nu)$ by Lemma \ref{L13}.
If $n$ is odd then $M_1\cong D_0\oplus D_1$ by Lemma \ref{L12o}. Since $\be_n$ is not a JS-partition in this case, we have that $D_1\subseteq\End_F(D^{\be_n})$ by Lemma \ref{l53}. If $\la$ has at least 3 normal nodes then $D_1^{\oplus 2}\subseteq \End_F(D^\la)$ from Lemma \ref{l53}.
If $n$ is even then $M_1\cong D_0|D_1|D_0\sim D_0|S_1^*$ by Lemma \ref{L12e} and self-duality of $M_1$. From Lemma \ref{Lemma7.1} we also have that $D_1$ or $S_1$ is contained in $\End_F(D^{\be_n})$. If $\la$ has at least 4 normal nodes we have from Lemma \ref{l53} that
\begin{align*}
&\dim\Hom_{\s_n}(S_1^*,\End_F(D^\la))\\
&\geq\dim\Hom_{\s_n}(M_1,\End_F(D^\la))-\dim\Hom_{\s_n}(D_0,\End_F(D^\la))\\
&=\dim\End_{\s_{n-1}}(D^\la\da_{\s_{n-1}})-\dim\End_{\s_n}(D^\la)\\
&\geq 3.
\end{align*}
Since $S_1^*\cong D_1|D_0$ and $\dim\Hom_{\s_n}(D_0,\End_F(D^\la))=1$, we then have that $(S_1^*)^{\oplus 2}\subseteq \End_F(D^\la)$.
From $D_0\subseteq\End_F(D^{\be_n}),\End_F(D^\la)$, it follows that in either case
\[\dim\Hom_{\s_n}(\End_F(D^\la),\End_F(D^{\be_n}))\geq 3\]
and so $E^\al_\pm\otimes E^\ga$ is not irreducible by Lemma \ref{l14}.
\end{proof}
\begin{theor}
Let $p=2$, $n\not\equiv 2\Md 4$, $\la\in\Parinv_2(n)$ and $\epsilon,\delta,\epsilon',\delta'\in\{\pm\}$. If $E^\la_\pm$ and $E^{\be_n}_\pm$ are not 1-dimensional and $E^\la_\epsilon\otimes E^{\be_n}_\delta$ is irreducible then one of the following holds:
\begin{itemize}
\item
$D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu|D^\nu|D^\nu$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$. In this case $E^\la_{\epsilon'}\otimes E^{\be_n}_{\delta'}\cong E^\nu$ is irreducible and $h(\nu)\leq 10$ if $n$ is odd or $h(\nu)\leq 12$ if $n$ is even.
\item
$D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu$ with $\nu\in\Parinv_2(n)$. In this case $E^\la_{\epsilon'}\otimes E^{\be_n}_{\delta'}\in\{E^\nu_+,E^\nu_-\}$ is irreducible and $h(\nu)\leq 6$ if $n$ is odd or $h(\nu)\leq 8$ if $n$ is even.
\item
$[D^\la\otimes D^{\be_n}:D^\nu]=2$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$ and $E^\la_\epsilon\otimes E^{\be_n}_\delta\cong E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\cong E^\nu$, while $E^\la_{-\epsilon}\otimes E^{\be_n}_\delta\not\cong E^\nu\not\cong E^\la_{\epsilon}\otimes E^{\be_n}_{-\delta}$. Further $h(\nu)\leq 6$ if $n$ is odd or $h(\nu)\leq 8$ if $n$ is even.
\item
$n\equiv 0\Md 4$, $[D^\la\otimes D^{\be_n}:D^\nu]=1$ with $\nu\in\Parinv_2(n)$, $\{E^\la_\epsilon\otimes E^{\be_n}_\delta,E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\}=\{E^\nu_+,E^\nu_-\}$, while $E^\la_{-\epsilon}\otimes E^{\be_n}_\delta,E^\la_{\epsilon}\otimes E^{\be_n}_{-\delta}\not\in\{E^\nu_+,E^\nu_-\}$. Further $h(\nu)\leq 4$.
\end{itemize}
In each of the above cases $h(\la)\leq 2h(\nu)$. Further $\la$ has at most 3 normal nodes if $n$ is odd or at most 4 normal nodes if $n$ is even.
\end{theor}
\begin{proof}
Note that if $\sigma\in\s_n\setminus A_n$ then $E^\la_{\epsilon'}\otimes E^{\be_n}_{\delta'}\cong(E^\la_{-\epsilon'}\otimes E^{\be_n}_{-\delta'})^\sigma$.
In particular if $E^\la_\epsilon\otimes E^{\be_n}_\delta\cong E^\nu$ then $E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\cong E^\nu$ and either both or neither of $E^\la_\epsilon\otimes E^{\be_n}_{-\delta}$ and $E^\la_{-\epsilon}\otimes E^{\be_n}_\delta$ is isomorphic to $E^\nu$. Similarly if $E^\la_\epsilon\otimes E^{\be_n}_\delta\cong E^\nu_\pm$ then $E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\cong E^\nu_\mp$ and $\{E^\la_\epsilon\otimes E^{\be_n}_{-\delta},E^\la_{-\epsilon}\otimes E^{\be_n}_\delta\}$ is either equal to or disjoint from $\{E^\nu_+,E^\nu_-\}$.
So we are in one of the following cases:
\begin{enumerate}
\item
$D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu|D^\nu|D^\nu$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$ and $E^\la_{\epsilon'}\otimes E^{\be_n}_{\delta'}\cong E^\nu$.
\item
$D^\la\otimes D^{\be_n}\sim D^\nu|D^\nu$ with $\nu\in\Parinv_2(n)$ and $E^\la_{\epsilon'}\otimes E^{\be_n}_{\delta'}\in\{E^\nu_+,E^\nu_-\}$.
\item
$[D^\la\otimes D^{\be_n}:D^\nu]=2$ with $\nu\in\Par_2(n)\setminus\Parinv_2(n)$ and $E^\la_\epsilon\otimes E^{\be_n}_\delta\cong E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\cong E^\nu$, while $E^\la_{-\epsilon}\otimes E^{\be_n}_\delta\not\cong E^\nu\not\cong E^\la_{\epsilon}\otimes E^{\be_n}_{-\delta}$.
\item
$[D^\la\otimes D^{\be_n}:D^\nu]=1$ with $\nu\in\Parinv_2(n)$, $\{E^\la_\epsilon\otimes E^{\be_n}_\delta,E^\la_{-\epsilon}\otimes E^{\be_n}_{-\delta}\}=\{E^\nu_+,E^\nu_-\}$, while $E^\la_{-\epsilon}\otimes E^{\be_n}_\delta,E^\la_{\epsilon}\otimes E^{\be_n}_{-\delta}\not\in\{E^\nu_+,E^\nu_-\}$.
\end{enumerate}
If $[D^\la\otimes D^{\be_n}:D^\nu]=2^i$ then, from Lemma \ref{L13}, $h(\la)\leq 2h(\nu)$ and $h(\nu)\leq 4i+2$ if $n$ is odd or $h(\nu)\leq 4i+4$ if $n$ is even (note that we always have $D^\nu\subseteq D^\la\otimes D^{\be_n}$, since $E^\nu_{(\pm)}\cong E^\la_\eps\otimes E^{\be_n}_\de\subseteq(D^\la\otimes D^{\be_n})\da_{A_n}$). In case (iv) if $n$ is odd then $h(\nu)\leq 2$ and so $\nu=\be_n$ by Lemma \ref{split2}, contradicting $E^\la_\pm$ not being 1-dimensional.
This proves the theorem, up to the bound on the number of normal nodes of $\la$. Notice that if $n$ is odd then $M_1\cong D_0\oplus D_1$, while if $n$ is even then $M_1\cong D_0|D_1|D_0$ by Lemma \ref{L12e}. If $n$ is odd then $D_1\subseteq\End_F(D^{\be_n})$ since in this case $\be_n$ is not a JS-partition. If $n$ is even then $n\equiv 0\Md 4$ by Lemma \ref{split2} and so $D_1\subseteq\End_F(D^{\be_n})$ from Lemma \ref{Lemma7.1}. If $\la$ has at least 4 normal nodes when $n$ is odd, or at least 5 normal nodes when $n$ is even, then $D_1^{\oplus 3}\subseteq\End_F(D^\la)$. It then follows that there exist $\epsilon'',\delta''\in\{\pm\}$ such that
\begin{align*}
D_1&\subseteq\Hom_F(E^{\be_n}_\delta,E^{\be_n}_{\delta''}),\\
D_1^{\oplus 2}&\subseteq\Hom_F(E^\la_{\epsilon''},E^\la_\epsilon),
\end{align*}
and so $E^\la_\eps\otimes E^{\be_n}_\de$ is not irreducible by \cite[Lemma 3.4]{bk2}.
\end{proof}
\section{Acknowledgements}
The author thanks Alexander Kleshchev for comments and for pointing out some references. The author also thanks the referee for comments.
The author was supported by the DFG grant MO 3377/1-1.
\section*{Acknowledgment}
We gratefully acknowledge the support of NVIDIA Corporation with a donation of a Titan XP GPU used in this research.
\section{Conclusion}
Detecting action units is an important task in face analysis, especially in facial expression recognition. This is due, in part, to the idea that expressions can be decomposed into multiple action units. Considering this, we have proposed detecting action units using 3D facial landmarks to train a convolutional neural network. Experiments on binary and 3-class classification show encouraging AU detection performance on the BP4D and BP4D+ datasets. We have also conducted cross-database validation of the proposed approach by training on BP4D and testing on BP4D+, as well as training on BP4D+ and testing on BP4D. We report state-of-the-art results on BP4D using 3-fold and 10-fold cross-validation.
\section{Introduction}
\label{sec:intro}
Facial geometry carries a great deal of information about an individual and has been used for various applications \cite{jeng1998facial}, \cite{kotsia2007facial}. It can also convey expressions, such as happy, sad, pain, and embarrassment \cite{zhang2016multimodal}, which can vary from subject to subject. Considering this, the Facial Action Coding System \cite{FACS} was developed, which represents fundamental facial activity in terms of Action Units (AUs). During an expression (i.e. facial activity), a single AU can occur, or multiple AUs can occur at the same time. This allows FACS to represent the large variety of facial expressions that exist between subjects.
Recently, there has been encouraging progress in automatically detecting AUs. Zeng et al. \cite{Zeng_2015_ICCV} developed a confidence preserving machine (CPM) for the task. In their proposed method, the CPM learns two classifiers. First the positive classifier separates all positive classes, and the negative classifier does the same for the negative classes. The CPM then learns a person-specific classifier to detect the AUs. Chu et al. \cite{chu2017learning} used a combination of convolutional neural networks (CNN) with long short-term networks to learn the spatial and temporal cues from images. Their proposed approach achieved promising results on the GFT \cite{cohn2010spontaneous} and BP4D \cite{zhang2014bp4d} datasets. Li et al. \cite{li2017action} used temporal fusing for AU detection. They developed a deep learning framework where regions of interest are learned independently so each sub-region has a local CNN; multi-label learning is then used.
\begin{figure}[t]
\includegraphics[width=8.5cm, height=6cm]{Images/Overview10.jpg}
\caption{Proposed convolutional neural network pipeline for detecting action units from 3D facial landmarks.}
\label{fig:overview}
\end{figure}
Current work for detecting AUs largely focuses on 2D images; however, it has been shown that 3D facial landmarks are important for understanding facial geometry when recognizing emotion \cite{fabiano2018}. Motivated by this, we propose to detect AUs using 3D facial landmarks. The contributions of this work are 3-fold and can be summarized as follows.
\begin{enumerate}
\item 3D facial landmarks are used to detect action unit occurrences. Binary and 3-class classification experiments, with 3-fold and 10-fold cross-validation, are conducted to validate the proposed approach.
\item The proposed approach of using 3D facial landmarks, outperforms current state-of-the-art, 2D image-based approaches, on the BP4D dataset \cite{zhang2014bp4d}.
\item To test the generalizability of the proposed approach, cross dataset experiments are conducted, for AU detection, on the BP4D \cite{zhang2014bp4d}, and BP4D+ \cite{zhang2016multimodal} datasets.
\end{enumerate}
\section{Proposed Approach and Experimental Design}
\label{sec:method}
We propose detecting action units using 3D facial landmarks to train a convolutional neural network (CNN) for binary and 3-class classification.
\subsection{Preprocessing 3D Facial Landmarks}
Given a set of 3D facial landmarks, \emph{L}, of size \emph{N}, where
\begin{equation}
L=(X_{1},Y_{1},Z_{1}),(X_{2},Y_{2},Z_{2}),....,(X_{N},Y_{N},Z_{N}),
\end{equation}
we first represent \emph{L} as a 2D matrix of size $N\times 3$. We then normalize the matrix of landmarks to the range $[0,1]$ using \emph{min-max} normalization. This is done independently for each of the $(X,Y,Z)$ axes of the frame as
$X_{norm}=\frac{X_i-X_{min}}{X_{max}-X_{min}}$; $Y_{norm}=\frac{Y_i-Y_{min}}{Y_{max}-Y_{min}}$; $Z_{norm}=\frac{Z_i-Z_{min}}{Z_{max}-Z_{min}}$.
These normalized values of $X_{norm}$, $Y_{norm}$, $Z_{norm}$ are then multiplied by a constant value $C-1$ to scale the landmarks into the range $[0,C-1]$, giving us $X_{scaled}, Y_{scaled}, Z_{scaled}$. This is done to scale all faces into the uniform range $[0,C-1]$.
For the CNN to learn features for AU detection, we need $X_{scaled}, Y_{scaled}, Z_{scaled}$ to be scaled properly. Considering this, we used a value of $C=24$, as we have empirically found that this gives a good representation of the face (Figure \ref{fig:FaceComparison} (d)). If we try to map the normalized (i.e. non-scaled) coordinates directly to a 3D array then the array will be of size $2 \times 2 \times 2$, with each axis going from 0 to 1. For example, if two or more normalized landmarks $(X_{norm}, Y_{norm}, Z_{norm})$ are in the range $[0,0.5)$ then they all set the same cell $(0,0,0)$ to $1$, which can lead to a loss of landmark information (Figure \ref{fig:FaceComparison} (b)). Also, if $C$ is not large enough, information can still be lost (Figure \ref{fig:FaceComparison} (c)).
Given a set of scaled 3D facial landmarks, we then create a 3D representation of them, which can be used to train CNNs. To achieve this we create a 3D array \emph{A} of size $C \times C \times C$ initialized with all zeroes. We then set the locations where the landmarks are present to 1; $A[X_{scaled_i}][Y_{scaled_i}][Z_{scaled_i}]=1$. Scaling and mapping the landmarks creates a 3D array which is representative of the 3D locations of the landmarks (Figure \ref{fig:FaceComparison} (d)), which allows us to explicitly model the 3D shape of the landmarks, and subsequent AUs. As can be seen in Figure \ref{fig:FaceComparison}, the normalized 3D mapping does have some visual differences compared to the original 3D landmarks, however, the general shape, and more importantly AU activation, of the original 3D landmarks is the same. As we will show, this 3D representation of each set of 3D landmarks (i.e. face) can train CNNs to detect changes to the AUs across subjects, achieving high detection accuracy.
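For concreteness, the following is a minimal Python/NumPy sketch of this preprocessing step (our own illustration, not the implementation used in the experiments; in particular, the rounding scheme is an assumption, since the text only fixes the scaling range):
\begin{verbatim}
import numpy as np

def voxelize_landmarks(L, C=24):
    # L: (N, 3) array of 3D landmark coordinates (X, Y, Z).
    L = np.asarray(L, dtype=float)
    mins, maxs = L.min(axis=0), L.max(axis=0)
    norm = (L - mins) / (maxs - mins)             # per-axis min-max to [0, 1]
    scaled = np.rint(norm * (C - 1)).astype(int)  # scale to [0, C-1]
    A = np.zeros((C, C, C), dtype=np.uint8)       # binary C x C x C array
    A[scaled[:, 0], scaled[:, 1], scaled[:, 2]] = 1
    return A
\end{verbatim}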
\begin{figure}
\centering
\begin{subfigure}[b]{4 cm}
\centering
\includegraphics[width=4cm]{Images/OriginalFace.png}
\caption{Original 3D landmarks.}
\label{fig:OriginaFace}
\end{subfigure}
\hfill
\begin{subfigure}[b]{4 cm}
\centering
\includegraphics[width=4cm]{Images/C=2-2.png}
\caption{Non-scaled, normalized 3D mapping.}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{4 cm}
\centering
\includegraphics[width=4cm]{Images/C=12-2.png}
\caption{Scaled, normalized 3D mapping with $C=12$.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{4 cm}
\centering
\includegraphics[width=4 cm]{Images/C=24Face.png}
\caption{Scaled, normalized 3D mapping with $C=24$.}
\label{fig:3DFace}
\end{subfigure}
\caption{Comparison of the original 3D landmarks (a) with the non-scaled, normalized 3D mapping (b) and the scaled, normalized 3D mappings with $C=12$ (c) and $C=24$ (d).}
\label{fig:FaceComparison}
\end{figure}
\subsection{Facial expression databases}
\label{sec:DB}
\textbf{BP4D} was used in the Facial Expression Recognition and Analysis (FERA) challenge in 2015 \cite{valstarFERA15} and 2017 \cite{valstarFERA17}; it contains 2D and 3D data for 41 subjects, each with eight dynamic expressions plus neutral. The dataset contains 18 male and 23 female subjects, ages 18-29, with a range of ethnicities. For each sequence in this database, approximately 500 frames contain AU occurrences; we used all labeled frames, which is over 140,000.\\
\textbf{BP4D+} consists of 140 subjects (58 males and 82 females); ages 18-66, each with highly varied emotional responses. It includes thermal, physiological, 2D, and 3D data. It was also used in the FERA challenge 2017 \cite{valstarFERA17}. Like BP4D, approximately 500 frames per sequence contain AU occurrences; we used all labeled frames, which is over 190,000.
\subsection{Experimental Design}
\label{sec:expDesign}
We detected 83 3D facial landmarks on all AU labeled frames from BP4D and BP4D+, using a shape-index-based statistical shape model \cite{canavan2015landmark}. We then normalized the 83 landmarks to [0,23] ($C=24$). While any number of landmarks that represent a face with AU occurrences can be used, we chose 83 to be consistent with other related works \cite{fabiano2018, valstarFERA17, zhang2014bp4d, zhang2016multimodal}. We then transform the normalized landmarks to a binary representation in a 3 dimensional array of $24 \times 24 \times 24$ (Figure \ref{fig:overview} (d)).
We consider both binary \cite{li2017action} and 3-class \cite{chu2017learning} classification. For binary classification, the presence/absence of an action unit was coded as 1/0. For the 3-class classification, the presence/absence of an action unit was coded as 1/-1 and 0 when no information was available \cite{chu2017learning}.
For the binary-class, our CNN consists of 8 layers; two convolutional layers followed by a max pooling layer, two more convolutional layers, another max pooling layer and finally three fully connected layers. The output layer has 12 binary outputs for the 12 AUs being predicted. The activation function for each of the hidden layers is relu \cite{jarrett2009best} and the activation function for the output layer is sigmoid \cite{han1995influence}. The loss function for this network was binary cross-entropy and the optimizer used was adam \cite{kingma2014adam}, where all training is done with 250 epochs. See Figure \ref{fig:overview} for the proposed pipeline and network architecture.
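The following is a minimal \texttt{tf.keras} sketch of this binary-class network, reconstructed from the description above (it is not the code used for the experiments, and the filter counts and fully connected widths, which the text does not specify, are illustrative assumptions):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_binary_au_cnn(c=24, n_aus=12):
    # Two conv layers, max pooling, two more conv layers,
    # max pooling, then three fully connected layers.
    model = models.Sequential([
        layers.Input(shape=(c, c, c, 1)),
        layers.Conv3D(32, 3, activation='relu', padding='same'),
        layers.Conv3D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 3, activation='relu', padding='same'),
        layers.Conv3D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling3D(2),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dense(128, activation='relu'),
        layers.Dense(n_aus, activation='sigmoid'),  # one output per AU
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

# model = build_binary_au_cnn()
# model.fit(X, y, epochs=250)  # X: (num_frames, 24, 24, 24, 1)
\end{verbatim}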
For the 3-class, we use a similar network with common convolutional
layers, but different fully connected layers for each AU (a common output layer can not predict the 3-classes for all the AUs). The loss function was categorical cross-entropy and the activation function for the initial layers was still relu, but for the output layers it was softmax \cite{bridle1990probabilistic}, as it performs well for multi-class classification \cite{duan2003multi}. Again, all training is done with 250 epochs.
For binary-class, we used 3-fold and 10-fold cross-validation, for 3-class problem we used 3-fold cross-validation. We also balanced the distribution of positive and negative samples (i.e. occurrence of AUs), as the distribution of AUs is not consistent (Table \ref{Distribution}). We chose this experimental design to be consistent with related works \cite{chu2017learning}, \cite{li2017action}. As can be seen from Table \ref{Distribution}, some of the AUs have a small number of frames where the AU occurred, especially in BP4D+.
\begin{table}
\centering
\captionsetup{justification=centering}
\begin{tabular}{|p{0.5cm}|p{1.2 cm}|p{1.2 cm}|}
\hline
AU &BP4D &BP4D+ \\ \hline
1 &21.07 &9.54\\ \hline
2 &17.04 &8.01\\ \hline
4 &20.22 &0.67\\ \hline
6 &46.10 &65.88\\ \hline
7 &54.90 &3.46\\ \hline
10 &59.39 &57.37\\ \hline
12 &56.18 &59.99\\ \hline
14 &46.60 &32.46\\ \hline
15 &16.96 &12.96\\ \hline
17 &34.37 &0.67\\ \hline
23 &16.56 &2.35\\ \hline
24 &15.16 &-\\ \hline
\end{tabular}
\centering
\caption{Percentage of AU labeled frames with occurrences in BP4D and BP4D+.}
\label{Distribution}
\end{table}
For binary classification, we conducted 4 experiments for each 3-fold and 10-fold: (1) training and testing on BP4D; (2) training and testing on BP4D+; (3) training on BP4D and testing on BP4D+; and (4) training on BP4D+ and testing on BP4D. For our 3-class problem, we performed 2 experiments: (1) training and testing on BP4D; and (2) training and testing on BP4D+. For BP4D we detected 12 AUs, and 11 for BP4D+, as AU 24 did not occur in the labeled frames of this dataset. We refer the reader to Tables \ref{table:BinaryF1}, \ref{table:Cross Validation}, or \ref{table:3ClassResults} for the list of AUs detected for each dataset.
With this type of classification, especially with imbalanced data (Table \ref{Distribution}), F1 score can be a better indicator of performance compared to classification accuracy \cite{valstarFERA15}. Considering this, we calculated the frame-based F1 score as $F1\mbox{-}frame=\frac{2RP}{R+P}$, where $R$ is recall and $P$ is precision. This approach is also consistent with related works, allowing us to compare our results.
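As a sanity check, the frame-based F1 score for a single AU can be computed as in the following sketch (equivalent to the standard binary F1):
\begin{verbatim}
import numpy as np

def f1_frame(y_true, y_pred):
    # Frame-based F1 = 2RP/(R+P) over binary per-frame labels.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    precision = tp / max(np.sum(y_pred == 1), 1)
    recall = tp / max(np.sum(y_true == 1), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)
\end{verbatim}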
\section{Results and Analysis}
\label{sec:results}
\subsection{Binary classification}
\label{sec:binClass}
For binary classification on BP4D, we achieved an average F1 binary score of 92.90 and 94.08 for 3-fold and 10-fold, respectively. On BP4D+, we achieved an average F1 binary score of 86.01 and 87.63 for 3-fold and 10-fold, respectively. See Table \ref{table:BinaryF1} for more details.
\begin{table}
\centering
\begin{tabular}{ |p{0.7cm}||p{1.3cm}|p{1.3cm}||p{1.3cm}|p{1.3cm}| }
\hline
\multirow{2}{*}{AU} & \multicolumn{2}{c||}{BP4D} & \multicolumn{2}{c|}{BP4D+}\\ \cline{2-5}&3 Fold &10 Fold &3 Fold &10 Fold \\ \hline
1 &91.16 &92.68 &82.78 &85.24\\ \hline
2 &90.31 &92.11 &82.82 &85.21\\ \hline
4 &93.12 &94.14 &75.57 &78.07\\ \hline
6 &96.24 &97.01 &97.18 &97.50\\ \hline
7 &96.40 &96.98 &80.62 &83.67\\ \hline
10 &97.59 &97.95 &96.97 &97.35\\ \hline
12 &97.89 &98.31 &95.85 &96.40\\ \hline
14 &95.47 &96.29 &91.22 &92.31\\ \hline
15 &87.63 &89.66 &83.26 &85.19\\ \hline
17 &91.14 &92.35 &72.39 &73.62\\ \hline
23 &85.87 &88.23 &87.40 &89.39\\ \hline
24 &91.93 &93.19 &- &- \\ \hline
\textbf{Avg} &\textbf{92.90} &\textbf{94.08} &\textbf{86.01} &\textbf{87.63}\\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{Binary F1 scores for 3-fold and 10-fold on BP4D and BP4D+.}
\label{table:BinaryF1}
\end{table}
We also investigated cross-database validation between BP4D and BP4D+. When training on BP4D and testing on BP4D+, we achieved an average F1 score of 42.9 and 42.84 for 3-fold and 10-fold, respectively. When training on BP4D+ and testing on BP4D, we achieved an average F1 score of 39.1 and 40.02 for 3-fold and 10-fold, respectively (Table \ref{table:Cross Validation}). We found that the best performing AUs were 6, 10, and 12, while the worst performing were 4, 17, and 23. This difference in F1 scores can be explained by the disparity in the occurrence of AUs. The best performing AUs are present in a similar percent of the frames, whereas the poor performing AUs have a large difference between BP4D and BP4D+ (Table \ref{Distribution}). The biggest difference in occurrence is for AU 17; BP4D has 34.37\% and BP4D+ has just 0.67\% of frames with the AU.
\begin{table}
\centering
\captionsetup{justification=centering}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\begin{tabular} { |p{0.7cm}||p{1.3cm}|p{1.3cm}||p{1.3cm}|p{1.3cm}| }
\hline
\multirow{2}{*}{AU} & \multicolumn{2}{C{2.6cm}||}{Trained BP4D+ / Tested BP4D} & \multicolumn{2}{C{2.6cm}|}{Trained BP4D/ Tested BP4D+}\\ \cline{2-5}&3 Fold &10 Fold &3 Fold &10 Fold \\ \hline
1 &47.30 &54.27 &54.28 &53.18 \\ \hline
2 &43.50 &47.11 &44.05 &42.53 \\ \hline
4 &12.99 &11.44 &11.83 &13.06 \\ \hline
6 &70.05 &69.79 &79.24 &79.05 \\ \hline
7 &23.85 &20.72 &26.73 &26.74 \\ \hline
10 &72.54 &73.33 &80.46 &80.41 \\ \hline
12 &75.61 &75.21 &76.60 &76.77 \\ \hline
14 &36.49 &36.30 &41.54 &41.81 \\ \hline
15 &23.55 &24.30 &22.53 &22.72 \\ \hline
17 &10.03 &12.32 &14.61 &14.26 \\ \hline
23 &14.26 &15.38 &20.06 &20.68 \\ \hline
\textbf{Avg} &\textbf{39.10} &\textbf{40.02} &\textbf{42.90} &\textbf{42.84} \\ \hline
\end{tabular}
\centering
\caption{Cross-database F1 scores for 3-fold and 10-fold.}
\label{table:Cross Validation}
\end{table}
\subsection{3-class classification}
We performed 3-fold cross-validation on BP4D and BP4D+, reporting the F1-macro and F1-micro scores (Table \ref{table:3ClassResults}). F1-macro is the unweighted average of the F1 scores of the 3 classes, whereas F1-micro is the weighted average of the 3 F1 scores. On BP4D, we achieved an average F1-macro score of 94.55 and an F1-micro score of 96.09. For BP4D+, an F1-macro score of 87.90 and an F1-micro score of 97.03 were achieved. Some of the lowest performing AUs on BP4D+ were again 4, 17, and 23. This is consistent with the cross-database validation and can be explained by the low number of AU occurrences (Table \ref{Distribution}).
\begin{table}
\centering
\begin{tabular}{ |p{0.6cm}||p{1.4cm}|p{1.4cm}||p{1.4cm}|p{1.4cm}| }
\hline
\multirow{2}{*}{AU} & \multicolumn{2}{c||}{BP4D} & \multicolumn{2}{c|}{BP4D+}\\ \cline{2-5}&F1 Macro &F1 Micro &F1 Macro &F1 Micro \\ \hline
1 &93.81 &96.01 &86.90 &96.57\\ \hline
2 &93.67 &96.52 &87.06 &97.07\\ \hline
4 &95.12 &96.93 &85.83 &99.61\\ \hline
6 &96.18 &96.21 &91.69 &96.02\\ \hline
7 &95.66 &95.71 &86.62 &98.60\\ \hline
10 &96.79 &96.90 &90.51 &96.22\\ \hline
12 &97.33 &97.37 &90.80 &94.66\\ \hline
14 &95.60 &95.62 &89.17 &94.15\\ \hline
15 &91.88 &95.65 &87.39 &95.54\\ \hline
17 &92.80 &93.58 &82.19 &99.54\\ \hline
23 &90.79 &95.17 &88.71 &99.30\\ \hline
24 &94.94 &97.46 &- &-\\ \hline
\textbf{Avg} &\textbf{94.55} &\textbf{96.09} &\textbf{87.90} &\textbf{97.03}\\ \hline
\end{tabular}
\caption{F1 scores for 3-class, 3-fold cross validation.}
\label{table:3ClassResults}
\end{table}
\subsection{Comparisons to state of the art}
On BP4D, many works use 2D images for 3-fold binary \cite{li2017action}, 10-fold binary \cite{Zeng_2015_ICCV}, and 3-fold 3-class classification \cite{chu2017learning}. For each of these experimental designs, the proposed method outperforms the state of the art, demonstrating the power of explicitly using 3D facial landmarks compared to 2D images.
For 3-fold binary classification, the proposed method achieves a significant increase in average F1 score over the current state of the art (Table \ref{table:Comparison}). For 10-fold binary classification, the proposed method achieved an average F1 score of 94.08, compared to Zeng et al. \cite{Zeng_2015_ICCV}, who achieved an average F1 score of 56.5. We also compare our 3-fold, 3-class results to the state of the art. The proposed method achieved average F1-micro and F1-macro scores of 96.09 and 94.55, respectively, compared to the work of Chu et al. \cite{chu2017learning}, which reported an average F1 score of 82.5. These increases can be attributed to the proposed 3D representation of the landmarks.
\begin{table}
\centering
\begin{tabular}{|p{2.5cm}|p{2cm}|}
\hline
Method &Avg F1 Score\\ \hline
\textbf{Proposed} &\textbf{92.9}\\ \hline
R-T1\cite{li2017action} &66.1\\ \hline
FERA\cite{jaiswal2016deep} &61.4\\ \hline
CNN+LSTM\cite{chu2016modeling} &53.2\\ \hline
CPM\cite{Zeng_2015_ICCV} &50.0\\ \hline
DRML\cite{Zhao_2016_CVPR} &48.3\\ \hline
JPML\cite{Zhao_2015_CVPR} &45.9\\ \hline
\end{tabular}
\caption{Comparison of proposed method with state of the art on BP4D, for 3-fold binary classification.}
\label{table:Comparison}
\end{table}
For BP4D+, a subset of the data was used in FERA 2017 \cite{valstarFERA17}. BP4D was used as training data, and BP4D+ was used for the development and testing sets. They report results, using maximum likelihood, on both of these sets. When training on BP4D and testing on BP4D+, the proposed method achieved average F1 scores of 42.9 and 42.84 for 3-fold and 10-fold cross-validation, respectively. This compares to average F1 scores of 41.6 and 45.2 on the FERA development and test sets, respectively. As the two experimental designs are \textit{not} the same, we \textit{do not} claim this as a direct comparison; it is included to contextualize results reported on BP4D+.
|
2,877,628,091,186 | arxiv |
\section{Introduction}
It is now well understood that entanglement is the key resource for the implementation of many quantum information protocols like teleportation, cryptography, logic operations and quantum communication \cite{niel, duetch, shor, bennett, zei, divincenzo}. Bi-partite entanglement, i.e.\ entanglement between two quantum mechanical systems each envisaged as a quantum bit (a quantum mechanical two-level system analogous to a classical bit), has been found to be particularly important in this context. Numerous methods of producing qubit-qubit entanglement have been investigated during the past decade. A method which is of particular interest in the context of quantum logic gate operations with systems like ion traps and semiconductor nanostructures relies on the coherent interactions among the qubits \cite{zoller, wineland, barenco, vin,li,cal,atac,petta,hans,rob}. An earlier proposal by Barenco \textit{et al.} \cite{barenco} has shown how one can implement a fundamental quantum gate like the C-NOT gate using the dipole-dipole interaction between two quantum dots modeled as two qubits. This was followed by another proposal from DiVincenzo and Loss \cite{vin} in which they showed how the Heisenberg exchange interaction between two quantum dots can be used to implement universal one- and two-qubit quantum gates. In their model the qubit is realized as the spin of the excess electron on a single-electron quantum dot. They proposed electrical gating of the tunneling barrier between neighbouring quantum dots to create a Heisenberg coupling between the dots, and showed explicitly how, by controlling the exchange coupling, one can implement a quantum swap gate and an XOR operation. Moreover, they also showed the implementation of single-qubit rotations using a pulsed magnetic field. Further, in a related work Cirac and Zoller \cite{zoller} showed that by using the Coulomb interaction between two ions one can implement a two-qubit quantum logic gate operation. Clearly many proposals require interacting qubits for two-qubit quantum gates. \\
\indent{}However, for a computation to progress efficiently one needs sustained entanglement among the qubits as they dynamically evolve in time. This can be achieved effectively if the quantum mechanical system under evolution is weakly interacting with its surroundings. In practice, though, as the system evolves the system-environment interaction becomes stronger, thereby leading to loss of its initial coherence. This loss of quantum coherence is known as decoherence \cite{zurek} and leads to degradation of entanglement. Thus the study of the dynamical evolution of two entangled qubits coupled to environmental degrees of freedom is of fundamental importance in quantum information science. In recent years numerous studies have been done in this respect \cite{hor, raj, diosi, daf, dod, tin, mint}. One study in particular predicted a remarkable new behavior in the entanglement dynamics of a bi-partite system. It reported that a mixed state of an initially entangled two-qubit system, under the influence of a purely dissipative environment, becomes completely disentangled in a finite time \cite{tin}. This was termed \textit{Entanglement Sudden Death} (ESD) \cite{eberly} and was recently observed in two elegantly designed experiments with photonic qubits \cite{almeida} and an atomic ensemble \cite{kimble}. Note that an earlier proposal has discussed plausible experiments to observe ESD in cavity QED and trapped-ion systems \cite{sant_pr}. The phenomenon of ESD has motivated numerous theoretical investigations in other bipartite systems involving pairs of atomic, photonic, and spin qubits \cite{mar, gong,tol,chou}, multiple qubits \cite{lopez} and spin chains \cite{cor, lai, abliz}. Further, ESD has also been studied for different environments including collective vacuum noise \cite{ficek1}, classical noise \cite{eberly1} and thermal noise \cite{ikram,liu,james}. Moreover, random matrix environments have been studied \cite{gorin,pin}. The authors of \cite{pin} also point out the differences
in the time evolution of concurrence arising from the internal dynamics of two entangled qubits due to the level splitting of each qubit.\\
\indent{}ESD in continuous variable systems has also been extensively studied. In particular, the problem of oscillators interacting with different environments has attracted much interest \cite{dod, illuminati, paris, prauz, benatti,paz}. Note that the conditions leading to ESD and possible ways of suppressing it are currently being actively investigated \cite{sant_pr,lidar}. In particular, it has been shown how ESD can be avoided by using external modulation with an electromagnetic field \cite{gordon, tahira, paraoanu2}, which can even lead to sudden birth of entanglement in some cases \cite{yu}. Moreover, sudden birth of entanglement has also been predicted for structured heat baths \cite{maz, zell} and certain choices of initial conditions of the entangled qubits \cite{ficek}. In another recent work it has been shown that under a pure dephasing environment ESD does not occur for a general two-mode N-photon state \cite{asma}. This result was explicitly proven for a general 3-photon state of the form $|\Psi\rangle = a|30\rangle+b|21\rangle+c|12\rangle+d|03\rangle$. \\
\indent{}Even though numerous investigations of ESD in a variety of systems have been carried out so far, the question of ESD in interacting qubits remains open. In this paper we investigate this question for a system of interacting qubits in contact with various models of the environment. We show that due to the coherent qubit-qubit interaction two initially entangled qubits get repeatedly disentangled and entangled as they dynamically evolve, leading to dark and bright periods of entanglement \cite{das}. Moreover, we find that the amplitude of the bright periods decreases with time and eventually vanishes completely at some finite time, thereby causing death of entanglement. Our investigations also reveal that the length of the dark periods depends on the initial condition of the entangled qubits and on the interaction strength. Further, we find dark and bright periods of entanglement in the presence of interaction among the qubits even for initial states which, in the absence of the interaction, exhibit not sudden death but a simple asymptotic decay of entanglement. We find \textit{the existence of dark and bright periods to be generic for interacting qubits, occurring for a wide variety of models of the environment}. We show this explicitly by considering various models of the environment which induce correlated decays, and pure and correlated dephasing of the qubits. All of these models exhibit the phenomenon of dark and bright periods even though some of them do not show ESD.\\
\indent{}The organization of this paper is as follows. In section II we discuss the model of two interacting qubits in contact with a simple dissipative environment and formulate their dynamical evolution by solving the quantum-Liouville equation of motion. In section III we develop the theory needed to study the dynamics of entanglement of the two interacting qubits and calculate the time evolution of the concurrence under the influence of environmental perturbations. In section IV we study the entanglement dynamics of two interacting qubits under the influence of a pure dephasing environment. We find that the coherent qubit-qubit interaction not only leads to dark and bright periods of entanglement but also delays the onset of ESD. Further, in section V we study in detail the dynamics of the qubit-qubit entanglement for both non-interacting and interacting qubits for two different correlated models of the environment. In section V A we focus on dissipative environments inducing correlated decay of the qubits. Here we find that for non-interacting qubits there is no ESD: even though the entanglement vanishes at some instant for certain initial conditions, it is quickly partially regenerated and then decays very slowly. When we include the interaction among the qubits we find that the entanglement exhibits the phenomenon of dark and bright periods. We further study the behavior of the two-qubit entanglement for a purely correlated dephasing environment in section V B. We find that correlated dephasing delays the onset of ESD in the absence of qubit-qubit interactions, the degree of delay depending on the strength of the correlation. Here again, when we include the qubit-qubit interaction we observe dark and bright periods of entanglement with a much later onset of ESD. In each section we relate our results to earlier work. Finally, in section VI we summarize our findings and conclude with a future outlook.\\
\section{Qubit-Qubit Interaction}
The model that we consider for our study consists of two initially entangled, interacting qubits, labeled A and B. Each qubit is characterized by a two-level system with an excited state $|e\rangle$ and a ground state $|g\rangle$. Further, we assume that the qubits interact independently with their respective environments. This leads to local decoherence as well as to loss of entanglement of the qubits. The decoherence can arise, for instance, from spontaneous emission from the excited states. Figure 1 shows a schematic diagram of our model.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale = 0.35]{model.ps}
\caption{(Color online) Schematic diagram of two qubits modelled as two two-level atoms coupled to each other by an interaction parameter $v$. Here $|e\rangle, |g\rangle$ signify the excited and ground states and $\omega_{0}$ their corresponding transition frequency. The qubits A and B independently interact with their respective environments (baths), which leads to local decoherence as well as loss of entanglement.}
\end{center}
\end{figure}
The Hamiltonian for our model is then given by,
\begin{equation}
\label{1}
\mathcal{H} = \hbar \omega_{0}(S^{z}_{A}+S^{z}_{B})+\hbar v(S^{+}_{A}S^{-}_{B}+S^{+}_{B}S^{-}_{A}),
\end{equation}
where $v$ is the interaction between the two qubits, and $S^{z}_{i},S^{+}_{i},S^{-}_{i}$ ($i = $A,B) are the atomic energy, raising and lowering operators defined as $S^{z}_{i} = 1/2(|e_{i}\rangle\langle e_{i}|-|g_{i}\rangle\langle g_{i}|), S^{+}_{i} = |e_{i}\rangle\langle g_{i}| = (S^{-}_{i})^{\dagger}$ respectively, which obey the angular momentum commutation algebra. We use the two-qubit product basis given by,
\begin{eqnarray}
\label{2}
|1\rangle = |e\rangle_{A}\otimes|e\rangle_{B}&\qquad& |2\rangle = |e\rangle_{A}\otimes|g\rangle_{B}\nonumber\\
|3\rangle = |g\rangle_{A}\otimes|e\rangle_{B}&\qquad& |4\rangle = |g\rangle_{A}\otimes|g\rangle_{B}
\end{eqnarray}
Now as each qubit independently interacts with its respective environment, the dynamics of this interaction can be treated in the general framework of master equations. The time evolution of the density operator $\rho$ which gives us information about the dynamics of the system can then be evaluated from the quantum-Liouville equation of motion,
\begin{eqnarray}
\label{3}
\dot{\rho}=-\frac{i}{\hbar}[\mathcal{H},\rho]+\mathcal{L}\rho ,
\end{eqnarray}
where $\mathcal{L}\rho$ includes the effect of the interaction of the environment with the qubits. Note that in its simplest form this can be taken to be a spontaneous emission process induced by the vacuum fluctuations of the radiation field. For the case of a simple dissipative environment with which the qubits interact independently, the effect is a decay of the excited state and of any initial coherences of the qubit. As an example, for qubit A this can be written as,
\begin{eqnarray}
\label{3a}
\dot{\rho}_{ee} & =& -\gamma_{A}\rho_{ee} \nonumber\\
\dot{\rho}_{eg} & = & - \frac{\gamma_{A}}{2}\rho_{eg}.
\end{eqnarray}
The above equations, together with the normalization $\mathrm{Tr}[\rho] =1$ and the hermiticity of the density matrix, completely define the dynamical system. The effect of the environment as elucidated in equation (\ref{3a}) can be written in a compact form in terms of the atomic operators $S^{+},S^{-}$ as,
\begin{equation}
\label{4}
\mathcal{L}\rho = -\sum_{j = A, B}\frac{\gamma_{j}}{2}(S^{+}_{j}S^{-}_{j}\rho-2S^{-}_{j}\rho S^{+}_{j}+\rho S^{+}_{j}S^{-}_{j}),
\end{equation}
where $\gamma_{A}$ ($\gamma_{B}$) gives the decay rate of qubit A (B) into its environment.
We give the complete analytical solution of equation (\ref{3}) in the basis defined by (\ref{2}) for coupling to a dissipative environment (\ref{4}) in appendix A.
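For readers who wish to reproduce the dynamics numerically, the following minimal sketch (our illustration, not part of the original analysis; it assumes the open-source QuTiP library and anticipates the concurrence measure introduced in the next section) integrates the master equation (\ref{3}) with the Hamiltonian (\ref{1}) and the dissipator (\ref{4}):
\begin{verbatim}
import numpy as np
from qutip import (Qobj, tensor, qeye, sigmam, sigmaz,
                   mesolve, concurrence)  # sigmaz used in later fragments

gamma, v = 1.0, 5.0                  # decay rate and qubit-qubit coupling
a, chi = 0.4, np.pi/2                # initial-state parameters
b = c = 1.0; d = 1.0 - a
z = np.exp(1j*chi)*np.sqrt(b*c)

# initial mixed state, Eq. (9), in the basis {|1>,|2>,|3>,|4>}
rho0 = Qobj(np.array([[a, 0, 0, 0],
                      [0, b, z, 0],
                      [0, np.conj(z), c, 0],
                      [0, 0, 0, d]], dtype=complex)/3.0,
            dims=[[2, 2], [2, 2]])

smA = tensor(sigmam(), qeye(2))      # S^-_A
smB = tensor(qeye(2), sigmam())      # S^-_B

# interaction part of Eq. (1); the omega_0 term is a local unitary
# and does not affect the concurrence, so it is dropped here
H = v*(smA.dag()*smB + smB.dag()*smA)

# collapse operators sqrt(gamma_j) S^-_j reproduce Eq. (4) as printed
c_ops = [np.sqrt(gamma)*smA, np.sqrt(gamma)*smB]

tlist = np.linspace(0.0, 3.0, 300)
states = mesolve(H, rho0, tlist, c_ops).states
C = [concurrence(rho) for rho in states]  # dark/bright periods in C(t)
\end{verbatim}
The list \texttt{C} can be compared directly with the analytical concurrence derived in the following section.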
\section{Concurrence Dynamics}
To investigate the effect of the interaction between the two qubits on decoherence we need to study the dynamics of the two-qubit entanglement. The entanglement of any bipartite system of qubits is conveniently quantified by the concurrence \cite{wot,buch}, an entanglement measure computed from the density matrix of the system $\rho$. The concurrence for two qubits is defined as,
\begin{equation}
\label{5}
C(t) = \max\{0, \sqrt{\lambda_{1}}-\sqrt{\lambda_{2}}-\sqrt{\lambda_{3}}-\sqrt{\lambda_{4}}\},
\end{equation}
where the $\lambda$'s are the eigenvalues of the non-hermitian matrix $\rho(t)\tilde{\rho}(t)$ arranged in non-increasing order of magnitude. Here $\rho(t)$ is the density matrix of the two qubits and $\tilde{\rho}(t)$ is defined by,
\begin{equation}
\label{6}
\tilde{\rho}(t) = (\sigma^{(1)}_{y}\otimes\sigma^{(2)}_{y})\rho^{\ast}(t)(\sigma^{(1)}_{y}\otimes\sigma^{(2)}_{y}),
\end{equation}
where $\rho^{\ast}(t)$ is the complex conjugate of $\rho(t)$ and $\sigma_{y}$ is the well known time-reversal operator for spin-half systems in quantum mechanics.
Note that the concurrence varies from $C = 0$ for a separable state to $C = 1$ for a maximally entangled state. Though in general the two-qubit density matrix $\rho$ has sixteen elements, here we consider the initially entangled qubits to be in a mixed state \cite{tin} given by the density matrix,
\begin{eqnarray}
\label{8a}
\rho &\equiv& 1/3(a|1\rangle\langle 1|+d|4\rangle\langle 4|+(b+c)|\psi\rangle\langle\psi|);\nonumber\\
|\psi\rangle & = & \frac{1}{\sqrt{b+c}}(\sqrt{b}|2\rangle+e^{i\chi}\sqrt{c}|3\rangle);\nonumber\\
& &\frac{a+b+c+d}{3} = 1;
\end{eqnarray}
where $a,b,c$ are independent parameters governing the nature of the initial state of the two entangled qubits. Note that the entangled part of the state depends on the initial phase $\chi$. Following (\ref{8a}) one can see that the initial two-qubit density matrix has only six nonzero elements. In matrix form $\rho$ is then given by,
\begin{eqnarray}
\label{9}
\rho(0) = \frac{1}{3}
\left(\begin{array}{cccc} a & 0 & 0 & 0\\
0 & b & z & 0\\
0 & z^{\ast} & c & 0 \\
0 & 0 & 0 & d\ \end{array}\right).
\end{eqnarray}
Here $z = e^{i\chi}\sqrt{bc}$ is the single-photon coherence. Using the solution of the quantum-Liouville equation (\ref{4a}) it can be shown that the \textit{initial density matrix} (\ref{9}) \textit{preserves its form for all t}. Finally we calculate the concurrence defined by (\ref{5}) and (\ref{6}) for the two qubits as,
\begin{equation}
\label{10}
C(t) = \mathsf{Max}\lbrace 0, \tilde{C}(t)\rbrace,
\end{equation}
where $\tilde{C}(t)$ is given by,
\begin{equation}
\label{11}
\tilde{C}(t) = 2\left\lbrace|\rho_{23}(t)|-\sqrt{\rho_{11}(t)\rho_{44}(t)}\right\rbrace
\end{equation}
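Equation (\ref{11}) follows from the block structure of (\ref{9}); this is a known result for density matrices of the `X' form, which we quote here for completeness. With $\rho_{14}=0$, the matrix $\rho\tilde{\rho}$ is block diagonal and the square roots of its eigenvalues can be read off directly,
\begin{eqnarray*}
\sqrt{\lambda_{1,2}} & = & \sqrt{\rho_{22}\rho_{33}}\pm|\rho_{23}|,\\
\sqrt{\lambda_{3,4}} & = & \sqrt{\rho_{11}\rho_{44}},
\end{eqnarray*}
where positivity of $\rho$ guarantees $|\rho_{23}|\leq\sqrt{\rho_{22}\rho_{33}}$. Substituting these into (\ref{5}), the $\sqrt{\rho_{22}\rho_{33}}$ contributions cancel and one is left with (\ref{10}) and (\ref{11}).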
Let us now consider a particular class of mixed states with a single parameter $a$ satisfying initially $a \geq 0$, $b = c = |z| = 1$ and $d = 1-a$ \cite{tin}. Note that (\ref{8a}) then has a structure similar to a Werner state \cite{wer}. On using the dynamical evolution of the density matrix elements from appendix (A) and this set of initial conditions in (\ref{11}), we obtain,
\begin{eqnarray}
\label{12}
\tilde{C}(t) & = &\frac{2}{3}e^{-\gamma t}\lbrack(\cos^{2}\chi+\sin^{2}\chi\cos^{2}(2vt))^{1/2}\nonumber\\
&-&\sqrt{a(1-a+2w^{2}+w^{4}a)}\rbrack,
\end{eqnarray}
\begin{figure}[!h]
\vspace{0.3in}
\begin{center}
\includegraphics[scale = 0.55]{CR1.eps}
\caption{(Color online) Concurrence as a function of time for two initially entangled, interacting qubits with initial conditions $a = 0.4$, $b = c = |z| = 1.0$, interaction strength $v = 5\gamma$, and two different initial phases $\chi = \pi/4$ (black curve) and $\chi = \pi/2$ (red curve). The inset shows the long-time behavior.}
\end{center}
\end{figure}
where $w = \sqrt{1-e^{-\gamma t}}$. One can clearly see the dependence of $\tilde{C}(t)$ on the interaction $v$ between the qubits and on the initial phase $\chi$. We see from (\ref{12}) that in the absence of the interaction $v$ the concurrence becomes independent of the initial phase and yields the well established result of Yu and Eberly \cite{tin}.\\
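As an elementary consistency check, at $t = 0$ we have $w = 0$ and equation (\ref{12}) gives
\[
\tilde{C}(0) = \frac{2}{3}\left[1-\sqrt{a(1-a)}\right],
\]
independent of $\chi$, which is just the concurrence of the initial state (\ref{9}) obtained directly from (\ref{11}) with $|\rho_{23}(0)| = 1/3$ and $\rho_{11}(0)\rho_{44}(0) = a(1-a)/9$.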
Note that $\tilde{C}(t)$ can become negative if,
\begin{eqnarray}
\label{13}
a(1-a+2w^{2}+w^{4}a) > (1-\sin^{2}\chi\sin^{2}(2vt)),
\end{eqnarray}
in which case the concurrence is zero and the qubits are disentangled. In figure (2) we show the time dependence of the entanglement by plotting equation (\ref{12}) for $v = 5\gamma$, $a = 0.4$, and different values of the initial phase $\chi$.
\begin{figure}
\vspace{0.2in}
\begin{center}
\begin{tabular}{ccccc}
\includegraphics[scale = 0.42]{p3.eps} & & \includegraphics[scale= 0.42]{p4.eps}\\
(a) & & (b)
\end{tabular}
\caption{(Color online) Evolution of the concurrence for two initially entangled, interacting qubits with initial conditions $a = 0.4, b = c = |z| = 1.0 $ and different initial phases $\chi$. Here $\gamma = 0$. In the absence of the environment the magnitude of the bright periods does not diminish. }
\end{center}
\end{figure}
The inset of figure 2 shows the long time behavior of the entanglement for this case. We see from figure 2 that non-interacting qubits ($v/\gamma = 0$) exhibit sudden death of entanglement (ESD) [visible more clearly in the inset], whereas when they interact ($v/\gamma \neq 0$) the concurrence oscillates between zero and non-zero values. Thus the initially entangled qubits in the presence of the interaction $v$ get repeatedly disentangled and entangled, leading to \textit{dark and bright} periods in the concurrence. The magnitude of the bright periods diminishes with time and eventually, at longer times, this behavior vanishes completely, leading to death of entanglement (ESD). The length of a dark period is determined by the condition (\ref{13}). We have found that this behavior of the entanglement prevails for other values of the parameter $a$ as well \cite{das}. In figures 3(a) and (b) we plot the dynamical evolution of the entanglement for $\gamma = 0$, i.e.\ in the absence of any environmental perturbation. This is the ideal case of closed quantum systems whose dynamics is influenced only by the initial condition of the entangled qubits and the inter-qubit interactions. In this case we get,
\begin{equation}
\tilde{C}(t) = \frac{2}{3}[\sqrt{\cos^{2}\chi+\sin^{2}\chi\cos^{2}(2vt)}-\sqrt{a(1-a)}]
\end{equation}
For both $a = 0.2$ and $a = 0.4$ with an initial phase of $\chi = \pi/4$, we observe a sinusoidal behavior of the entanglement, as seen in panels (a) and (b) of figure 3. Thus there is no ESD in the absence of the environment in this case. For the initial phase $\chi = \pi/2$ we instead observe dark and bright periods of entanglement. The periods of disentanglement (dark periods) are governed by the condition $\sqrt{a(1-a)} > |\cos(2vt)|$. It is clearly visible from the plots that in the absence of any environment the amplitude of the bright periods does not diminish at all, and thus the qubits fully regain their initial entanglement. This regeneration of entanglement is due to the inter-qubit interactions. Note that similar behavior of the concurrence dynamics (collapse and revival of entanglement) has been predicted in earlier studies of non-interacting qubits in atom-cavity systems. For example, it was shown for the double Jaynes-Cummings (JC) model \cite{jaynes_c}, with completely undamped, non-interacting cavities, that entanglement shows a periodic death and re-birth feature \cite{eberly_jb}. This was attributed to the exchange of information between the finite number of cavity modes and the atoms, a new kind of temporary decoherence mechanism. In another work the pairwise concurrence was calculated among four qubits, where the qubits were formed by the cavity modes and the atoms \cite{eberly_jb1}. Here again a JC-like interaction between the atoms and cavities gives rise to dark and bright periods in the entanglement dynamics of the qubits. It was shown that during the period when the concurrence between the cavities vanishes, the concurrence between the atoms reaches its peak and vice versa. This only happens because the cavities were assumed to be lossless with a finite number of modes, and thus without environmental decoherence. Further, it was shown that for qubits remotely located and in contact with their respective environments, when driven independently by a single-mode quantized field, one gets dark and bright periods of entanglement instead of ESD, a feature similar to single-atom behavior in cavity quantum electrodynamics \cite{luis,eberly_ol}. \textit{These works} \cite{jaynes_c, eberly_jb, eberly_jb1, luis, eberly_ol} \textit{differ from ours as we focus on the effect of interaction among the qubits in the presence of a decohering environment}. Note that in a more recent work it was shown how oscillators interacting with a correlated finite-temperature Markovian bath can exhibit dark and bright periods of entanglement for certain initial conditions \cite{paz}.
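The dark-period timing can be made concrete with a worked example (our numbers). For $a = 0.4$ and $\chi = \pi/2$ the closed-system expression above vanishes whenever $|\cos(2vt)| < \sqrt{0.4\times0.6} \approx 0.49$, i.e.\ whenever
\[
2vt \in \left(\arccos 0.49,\ \pi-\arccos 0.49\right) \approx (1.06,\ 2.08) \pmod{\pi},
\]
so each dark period lasts $\Delta t \approx 0.51/v$ and recurs with period $\pi/2v \approx 1.57/v$: the stronger the inter-qubit coupling, the shorter and more frequent the dark periods.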
\section{Pure Dephasing of the Qubits}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale = 0.4]{model01.ps}
\caption{(Color online) Schematic diagram of two qubits modelled as two two-level atoms coupled to each other by an interaction parameter $v$. Here $|e\rangle, |g\rangle$ signify the excited and ground states and $\omega_{0}$ their corresponding transition frequency. The qubits A and B independently dephase to their respective environments (baths), which leads to decoherence and thus loss of entanglement. The corresponding dephasing rates are given by $\Gamma_{A}$ and $\Gamma_{B}$ respectively.}
\end{center}
\end{figure}
In order to demonstrate the generic nature of our results, we consider other models of the environment. A model which has been successfully used in experiments \cite{kwait} involves pure dephasing. The mathematical formulation for this kind of environmental model can again be given via a master equation,
\begin{eqnarray}
\label{14}
\mathcal{L}\rho = - \sum_{i = A,B}\Gamma_{i}(S^{z}_{i}S^{z}_{i}\rho-2S^{z}_{i}\rho S^{z}_{i}+\rho S^{z}_{i}S^{z}_{i})
\end{eqnarray}
where $\Gamma_{A} (\Gamma_{B})$ is the dephasing rate of qubit A (B). Substituting (\ref{14}) in (\ref{3}) we get the equation for dynamical evolution of the qubits under the influence of this kind of an environment. Note that in this model the populations do not decay as a result of the interaction with the environment whereas the coherences like $\rho_{23}(t)$ decay as $\rho_{23}(0)e^{-(\Gamma_{A}+\Gamma_{B})t}$.
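The quoted decay of the coherences can be verified in one line (our explicit check): for matrix elements between joint eigenstates of $S^{z}_{A}$ and $S^{z}_{B}$ with eigenvalue pairs $(m_{A},m_{B})$ and $(m'_{A},m'_{B})$, the dissipator (\ref{14}) gives
\[
\dot{\rho}_{mm'} = -\sum_{i = A,B}\Gamma_{i}\left(m_{i}-m'_{i}\right)^{2}\rho_{mm'},
\]
so the populations ($m_{i}=m'_{i}$) are untouched, while for $\rho_{23}$ both qubits contribute $(m_{i}-m'_{i})^{2}=1$, giving the decay rate $\Gamma_{A}+\Gamma_{B}$.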
\begin{figure}
\begin{center}
\includegraphics[scale = 0.45]{CR5.ps}\\
\caption{(Color online) Concurrence as a function of time with initial conditions $ b = c = |z| =1$ and two different values of the initial phase $\chi$ for the dephasing model. The red and black curves are for $\chi = \pi/4$ and $\pi/2$ respectively. Here the interaction parameter is taken to be $v/\Gamma = 4$.}
\end{center}
\end{figure}
Let us now study the effect of the interaction $v$ between the qubits on the dynamics of entanglement. We assume the same initial density matrix of equation (\ref{9}) with the initial conditions $ d = 1-a, b= c = |z| = 1$ and $a \geq 0$ to calculate the concurrence. One can clearly see from the solution of the quantum-Liouville equation given in appendix (B) that under pure dephasing the \textit{form of the matrix in} (\ref{9}) \textit{is preserved for all time}. Using the solutions of the master equation (\ref{3}) derived in appendix (B) for the environment effects given by (\ref{14}) and substituting in equations (\ref{10}), (\ref{11}), we get the time dependent concurrence for this model to be,
\begin{eqnarray}
\label{15}
\tilde{C}_{D}(t)& = &\frac{2}{3}\lbrack e^{-\tau}\lbrace e^{-2\tau}\cos^{2}\chi+\sin^{2}\chi\lbrace\cos(\Omega_{1}\tau)\nonumber\\
& &-\frac{1}{\Omega_{1}}\sin(\Omega_{1}\tau)\rbrace^{2}\rbrace^{1/2}-\sqrt{a(1-a)}\rbrack,
\end{eqnarray}
where the suffix $D$ signifies that the concurrence is calculated for a dephasing environment and we assume $\Gamma_{A} = \Gamma_{B} = \Gamma$. Here $\tau = \Gamma t$ and $\Omega_{1} = \sqrt{(2v/\Gamma)^{2} -1}$.
For $v = 0$ we get $\tilde{C}_{D}(t) = 2/3\lbrack e^{-2\tau} -\sqrt{a(1-a)}\rbrack$, which is independent of the initial phase $\chi$. We find \textit{death of entanglement} for $\tau > (1/2)\ln [1/\sqrt{a(1-a)}]$. Note that Yu and Eberly \cite{eberly} considered this case earlier but only for $a = 1$, in which case there is no ESD. In figure (5) we show the time dependence of the entanglement for the purely dephasing model, for $a = 0.2$ and initial coherences governed by the phase $\chi$. From the figure we see that for $v \neq 0$ the two-qubit entanglement exhibits the phenomenon of dark and bright periods. Further, we also see that for $v \neq 0$ the dark and bright periods continue beyond the time when ESD occurs for noninteracting qubits. This kind of behavior of the entanglement dynamics is found for other values of the parameter $a$ as well, and interested readers are referred to \cite{das} for further discussion.
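For orientation (our numbers): with $a = 0.2$ and $v = 0$ the death time follows immediately from the condition above,
\[
\tau_{ESD} = \frac{1}{2}\ln\frac{1}{\sqrt{0.2\times 0.8}} = \frac{1}{2}\ln 2.5 \approx 0.46,
\]
i.e.\ roughly half a dephasing time $1/\Gamma$; the $v \neq 0$ curves in figure (5) remain intermittently entangled well beyond this point.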
\section{Concurrence Dynamics in Correlated Environmental Models}
\subsection{Effect of Correlated Dissipative Environment}
\indent{}We next consider an environment involving correlated decay and show how coupling to such an environment can lead to new effects in the entanglement dynamics of two-qubit systems. We will consider the cases of both non-interacting and interacting qubits for this model of the environment. To keep the analysis simple and to get a better physical insight into the decoherence effect of this environment, we first study the case of non-interacting qubits. We assume as before that the qubits interact independently with their respective environments with decay rates $\gamma_{A}$ and $\gamma_{B}$. Further, we assume that the qubits are close enough ($r \ll \lambda$, $r$ being the inter-qubit distance and $\lambda$ the wavelength of the radiation emitted in a decay) that they can undergo a correlated decay with rates $\Gamma_{AB} (\Gamma_{BA})$ for qubit $A (B)$. Whether this leads to further decoherence is a question we want to investigate. Note that the entanglement dynamics of two non-interacting two-level atoms in the presence of dissipation caused by spontaneous emission was studied earlier in detail by Jak\'{o}bczyk and Jamr\'{o}z \cite{jj}. They also considered a correlated model of the dissipative environment and showed possible destruction of the initial entanglement and possible creation of a transient entanglement between the atoms. Further, they discussed the question of non-locality and how it is influenced by the spontaneous emission, explicitly showing the violation of the Bell-CHSH inequality. One of the chief differences between that work and ours is the initial density matrix $\rho$ considered and the interaction introduced between the qubits. While we consider the possibility of both qubits (atoms) being initially excited and show its important consequences for the decay dynamics, they neglected this effect by putting $\rho_{11} (0) = 0$. We show later in this paper (as can also be seen from their results) that the dissipative environment preserves the form of the initial $\rho$; hence $\rho_{11} (t) = 0$ for all time in their case. Moreover, in a recent work the entanglement dynamics of two initially entangled qubits for a collective decay model was studied in the context of ESD by Ficek and Tanas \cite{ficek1}. They considered an initial density matrix of the form,
\begin{eqnarray}
\rho & = &|\Psi_{0}\rangle\langle\Psi_{0}| ;\nonumber\\
|\Psi_{0}\rangle & =& \sqrt{p}|e_{1},e_{2}\rangle + \sqrt{(1-p)}|g_{1},g_{2}\rangle
\end{eqnarray}
It can be clearly seen that in this case the two qubits are initially prepared in an entangled state by the two-photon coherences. They further show that for this initial condition the single-photon coherences are never generated. Moreover, the dipole-dipole interaction that they consider for the two-qubit system has no influence for this initial condition. Ficek and Tanas predicted dark periods and revival of the two-qubit entanglement in their work due to the correlated nature of the bath; we, on the other hand, consider an initial density matrix of the form (\ref{9}) with single-photon coherences and show that a coherent interaction among the qubits does influence the entanglement dynamics at all later times. \\
We now include the effect of a dissipative environment with both independent and correlated decay of the qubits via a master equation technique given by,
\begin{eqnarray}
\label{16}
\mathcal{L}\rho & =& - \sum_{j,k = A, B}\frac{\Gamma_{jk}}{2}(S^{+}_{j}S^{-}_{k}\rho-2S^{-}_{k}\rho S^{+}_{j}+\rho S^{+}_{j}S^{-}_{k}), \nonumber\\
\Gamma_{jj} & = & \gamma_{j}
\end{eqnarray}
The time evolution of the density operator $\rho$, which contains the information about the dynamics of the system, can then be evaluated by solving the quantum-Liouville equation (\ref{3}) with the environmental effect included through equation (\ref{16}) and taking $v = 0$. Next, as before, we consider the qubits to be initially entangled, with their initial state the mixed state defined by the density matrix (\ref{9}). We then solve the quantum-Liouville equation to study the dynamical evolution of the system. The reader is referred to appendix (C) for the explicit solution of the time dependent density matrix elements. One can clearly see from appendix (C) that for this kind of model of the environment, as before, \textit{the initial density matrix preserves its form for all time t}. Now using appendix (C) in equations (\ref{10}) and (\ref{11}) and the initial conditions $ a \geq 0, d = 1-a$, $b = c = |z| = 1$, we obtain the concurrence dynamics of two initially entangled non-interacting qubits for this model of the environment as,
\begin{eqnarray}
\label{24}
\tilde{C}(t) & = &\frac{2}{3}e^{-\gamma t}\{\lbrack\{\cos\chi\cosh(\Gamma t)-\sinh(\Gamma t)+a\zeta(t)\}^{2}\nonumber\\
& &+\sin^{2}\chi\rbrack^{1/2}-\sqrt{3a\lbrack1-\kappa(t)\rbrack}\},\nonumber\\
\end{eqnarray}
where $\zeta(t)$ and $\kappa(t)$ are given by,
\begin{eqnarray}
\label{25}
\zeta(t) & = & e^{-\gamma t}\{ \left (\frac{1+\Gamma/\gamma}{1-\Gamma/\gamma}\right )(e^{(1-\Gamma/\gamma)\gamma t}-1)\nonumber\\
& & - \left (\frac{1-\Gamma/\gamma}{1+\Gamma/\gamma}\right )(e^{(1+\Gamma/\gamma)\gamma t}-1) \},
\end{eqnarray}
\begin{eqnarray}
\label{26}
\kappa(t)& = & \frac{1}{3}a e^{-2\gamma t}\{ 1+\left (\frac{1+\Gamma/\gamma}{1-\Gamma/\gamma}\right )(e^{(1-\Gamma/\gamma)\gamma t}-1)\nonumber\\
&+ &\left (\frac{1-\Gamma/\gamma}{1+\Gamma/\gamma}\right )(e^{(1+\Gamma/\gamma)\gamma t}-1)\}+ \frac{2}{3}e^{-\gamma t}\{\cosh(\Gamma t)\nonumber\\
& & - \cos\chi\sinh(\Gamma t)\},
\end{eqnarray}
For simplicity we have assumed equal decay rates for the two qubits, $\gamma_{A} = \gamma_{B} = \gamma$ and $\Gamma_{AB} = \Gamma_{BA} = \Gamma$. One can clearly see in equation (\ref{24}) the dependence of $\tilde{C}(t)$ on the correlated environmental effect given by $\Gamma$ and on the initial phase $\chi$. We see from (\ref{24}), (\ref{25}) and (\ref{26}) that for $\Gamma = 0$ the concurrence becomes independent of the initial phase and yields the result of Yu and Eberly \cite{tin}. Note that $\tilde{C}(t)$ can become negative if,
\begin{eqnarray}
\label{27}
3a\lbrack1-\kappa(t)\rbrack & > & \lbrack\{\cos\chi\cosh(\Gamma t)-\sinh(\Gamma t)+a\zeta(t)\}^{2}\nonumber\\
& &+\sin^{2}\chi\rbrack
\end{eqnarray}
in which case the concurrence is zero and the qubits are disentangled.
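Numerically, the correlated dissipator (\ref{16}) with $\gamma_{A} = \gamma_{B} = \gamma$ and $\Gamma_{AB} = \Gamma_{BA} = \Gamma$ is diagonalized by the collective modes $(S^{-}_{A} \pm S^{-}_{B})/\sqrt{2}$, which decay at the rates $\gamma \pm \Gamma$. A minimal sketch extending the QuTiP fragment of section II (again our illustration, reusing the operators defined there) reads:
\begin{verbatim}
# collective decay channels diagonalizing Eq. (16) for
# gamma_A = gamma_B = gamma, Gamma_AB = Gamma_BA = Gamma;
# positivity requires Gamma <= gamma
Gamma = 0.8*gamma
s_sup = (smA + smB)/np.sqrt(2)  # superradiant mode, rate gamma + Gamma
s_sub = (smA - smB)/np.sqrt(2)  # subradiant mode,   rate gamma - Gamma
c_ops = [np.sqrt(gamma + Gamma)*s_sup,
         np.sqrt(gamma - Gamma)*s_sub]
states = mesolve(H, rho0, tlist, c_ops).states
\end{verbatim}
The slow decay of the concurrence found below can be traced to the subradiant channel, whose rate $\gamma-\Gamma$ becomes small as $\Gamma \rightarrow \gamma$.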
\begin{figure}
\begin{center}
\includegraphics[scale = 0.43]{p1.eps}
\caption{(Color online) Time evolution of the concurrence for $a = 0.2, b = c =|z| = 1$ and two different initial phases $\chi$ for two non-interacting qubits in contact with a correlated dissipative environment. Here $\Gamma/\gamma = 0$ signifies the absence of a common bath for the qubits. }
\end{center}
\end{figure}
\begin{figure}
\vspace{0.25in}
\begin{center}
\includegraphics[scale = 0.43]{p2.eps}
\caption{(Color online) Time evolution of the concurrence governed by the initial condition $a = 0.4$ for two non-interacting qubits in contact with a correlated dissipative environment. All other initial parameters remain the same as in fig. (6). $\Gamma/\gamma = 0$ signifies the absence of a common bath, in which case entanglement sudden death (ESD) is observed. }
\end{center}
\end{figure}
To understand how correlated decay of the qubits might affect their entanglement we study the analytical result of equation (\ref{24}) for different values of the parameters $a$ and $\chi$. In figure (6) we show the time dependence of the entanglement for $a = 0.2$, two different values of the initial phase $\chi$, and a correlated decay rate of $\Gamma = 0.8\gamma$. Note that for $\Gamma = 0$ there is no ESD in this case \cite{tin} and the concurrence goes to zero monotonically as $t \longrightarrow \infty$. For $\Gamma \neq 0$ we observe new behavior in the entanglement of the qubits. The concurrence is seen to decay much more slowly than for $\Gamma = 0$. For an initial phase of $\chi = \pi/4$ we observe that the condition in equation (\ref{27}) is satisfied and the entanglement vanishes temporarily, \textit{i.e.}\ the qubits become disentangled. The entanglement is regenerated at some later time and finally goes to zero very slowly as $t \longrightarrow \infty$. Note that this disentanglement and re-entanglement phenomenon is non-periodic and is very sensitive to the initial coherence of the qubits; for example, it does not occur when the initial coherence is governed by the phase $\chi = \pi/2$. In figure (7) we plot the concurrence for $a = 0.4$. For this value of $a$, ESD is observed for $\Gamma = 0$ but not for $\Gamma \neq 0$. Instead we observe disentanglement and regeneration of entanglement for $\chi = \pi/4$. Here again we find neither dark and bright periods nor any ESD for an initial phase of $\chi = \pi/2$. Further, note that for the initial phase $\chi = \pi/4$ the time interval during which the qubits remain disentangled before becoming entangled again is longer than for $a = 0.2$. Thus we find that the time interval between disentanglement and regeneration of entanglement, as well as the magnitude of the regeneration, depends strongly on the initial coherences of the initially entangled qubits. Hence we can conclude that for non-interacting qubits in contact with a dissipative correlated environment no ESD occurs.\\
\indent{}Let us now consider the case of two initially entangled interacting qubits in contact with the correlated environment. The dynamical evolution of the system in the presence of the interaction $v$ for the correlated model of the environment is evaluated in detail in appendix (D). We use the solutions of appendix (D) in (\ref{11}) to calculate the concurrence for this environment. Note that the solutions are valid under the assumption that our initial two-qubit density matrix $\rho$ is given by equation (\ref{9}). Further, we assume as before that the evolution of the two entangled qubits is governed by the initial conditions $a \geq 0, d =1-a$, $b = c = |z| = 1$. The time dependent concurrence for two initially entangled interacting qubits then becomes,
\begin{eqnarray}
\label{27a}
\tilde{C}(t) & = &\frac{2}{3}e^{-\gamma t}\{\lbrack\{\cos\chi\cosh(\Gamma t)-\sinh(\Gamma t)+a\zeta(t)\}^{2}\nonumber\\
& &+\cos^{2}(2vt)\sin^{2}\chi\rbrack^{1/2}-\sqrt{3a\lbrack1-\kappa(t)\rbrack}\},\nonumber\\
\end{eqnarray}
\begin{equation}
C(t) = \mathsf{Max}\lbrace 0, \tilde{C}(t)\rbrace;
\end{equation}
where $\zeta(t)$ and $\kappa(t)$ are given by equations (\ref{25}) and (\ref{26}) respectively. The dependence of the concurrence on the interaction strength $v$ between the qubits is clearly visible in equation (\ref{27a}) for $\tilde{C}(t) > 0$. Further, the condition for complete disentanglement of the qubits now reads,
\begin{eqnarray}
\label{27b}
3a\lbrack1-\kappa(t)\rbrack & > & \lbrack\{\cos\chi\cosh(\Gamma t)-\sinh(\Gamma t)+a\zeta(t)\}^{2}\nonumber\\
& &+\cos^{2}(2vt)\sin^{2}\chi\rbrack
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.47]{corr2.eps}
\caption{(Color online) Time evolution of the concurrence for interacting qubits in contact with a correlated dissipative environment with correlated decay rate $\Gamma/\gamma = 0.8$. Here $a = 0.2$, $b = c = |z| = 1$. A long period of disentanglement is observed for the initial phase $\chi = \pi/4$. The interaction strength between the qubits is taken to be $v/\gamma = 5.0$.}
\end{center}
\end{figure}
\begin{figure}
\vspace{0.3in}
\begin{center}
\includegraphics[scale = 0.47]{corr4.eps}
\caption{(Color online) Time evolution of the concurrence for interacting qubits in contact with a correlated dissipative environment for the same parameters as figure (8) but $a = 0.4$. The dark and bright periodic features persist for a longer time for the initial phase $\chi = \pi/2$. A much longer period of disentanglement is now observed for $\chi = \pi/4$.}
\end{center}
\end{figure}
When condition (\ref{27b}) is satisfied, $\tilde{C}(t) < 0$ and hence $C(t) = 0$. Next, to study the effect of the qubit-qubit interaction on the entanglement dynamics, we plot the time dependent concurrence for different values of $a$, initial phase $\chi$, and a correlated decay rate of $\Gamma = 0.8\gamma $ in figures (8) and (9). We observe in the figures that for an initial phase of $\chi = \pi/2$ the concurrence exhibits dark and bright periods at early times for both $a = 0.2$ and $a= 0.4$. At longer times the concurrence shows a damped oscillatory behavior. We attribute this effect to the competition between the fast inter-qubit interaction $v$ and the environmental decays. At longer times the correlated decay becomes dominant and leads to a slow, damped oscillatory decay of the entanglement. For $\chi = \pi/4$ the dark and bright periods are not very pronounced and are overshadowed very quickly by the correlated decay. Note that for this value of the initial phase there exists a long period of time during which the qubits remain disentangled. At a much later time the entanglement is regenerated; it first increases and then decays very slowly thereafter. This behavior is quite different from the dark and bright periods seen for the other models of the environment. Thus we see that for interacting qubits there is no ESD for this model of the environment. Instead we find dark and bright periods with long periods of disentanglement whose occurrence depends on the initial coherence.
\subsection{Delay of ESD by Correlated Dephasing Environment}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale =0.38]{model03.ps}
\caption{(Color online) Schematic diagram of two qubits modelled as two two-level atoms. Here $\omega_0$ is the transition frequency from the excited state $|e\rangle$ to the ground state $|g\rangle$. The qubits A and B independently dephase to their environments (baths) with dephasing rates $\Gamma_{A}, \Gamma_{B}$ respectively. The qubits can also interact with the environment collectively when they are in close proximity, giving rise to correlated dephasing represented by the rate $\Gamma_{0}$.}
\end{center}
\end{figure}
\indent{}Finally we consider a purely correlated dephasing model of the environment and study the effect of such an environment on the entanglement dynamics of the two qubits. Note that this kind of model is relevant for solid-state systems like semiconductor quantum dots. We will study the behavior of the entanglement for both non-interacting and interacting qubits. As before, to keep our analysis simple and to get a better physical insight into the question of decoherence for this kind of environment, we first study the case of non-interacting qubits. We then generalize our results by introducing the interaction among the qubits. For non-interacting qubits the Hamiltonian of our model is given by (\ref{1}) with $v = 0$. The effect of the dephasing environment on the qubits is included via a master equation technique and is given by,
\begin{eqnarray}
\label{28}
\mathcal{L}\rho & = &-\sum_{i = A,B}\Gamma_{i}(S^{z}_{i}S^{z}_{i}\rho-2S^{z}_{i}\rho S^{z}_{i}+\rho S^{z}_{i}S^{z}_{i})\nonumber\\
&&-2\Gamma_{0}(S^{z}_{A}S^{z}_{B}\rho-S^{z}_{B}\rho S^{z}_{A}+\rho S^{z}_{A}S^{z}_{B}-S^{z}_{A}\rho S^{z}_{B}),\nonumber\\
\end{eqnarray}
where $\Gamma_{A} (\Gamma_{B})$ and $2\Gamma_{0}$ are respectively the independent dephasing rate of qubit A (B) and the correlated dephasing rate. The dynamical evolution of this system can then be studied by solving the quantum-Liouville equation (\ref{3}) for $v = 0$, including the effect of the environment through (\ref{28}). We consider as earlier that the initial state of the two qubits is defined by the density matrix $\rho$ (\ref{9}). The solution of the quantum-Liouville equation for this model of the environment is then given by,
\begin{equation}
\label{29}
\rho_{11}(t) = \frac{1}{3}a,\quad \rho_{22}(t) = \frac{1}{3}b,\quad \rho_{33}(t) = \frac{1}{3}c,
\end{equation}
\begin{equation}
\label{30}
\rho_{23}(t) = \frac{1}{3}|z|e^{-\left(\Gamma_{A}+\Gamma_{B}-2\Gamma_{0}\right)t}e^{i\chi},
\end{equation}
\begin{equation}
\label{31}
\rho_{32}(t) = \rho^{\ast}_{23}(t),\quad \rho_{44}(t) = 1-\rho_{11}(t)-\rho_{22}(t)-\rho_{33}(t).
\end{equation}
All other matrix elements of the two-qubit density matrix $\rho$ are zero. Using the solutions (\ref{29})--(\ref{31}) it is straightforward to show that, for pure dephasing of the qubits, the \textit{form of the matrix in} (\ref{9}) \textit{is preserved for all time}. Note that in such a model the populations do not decay as a result of the interaction with the environment, whereas the coherences decay as $\rho_{23}(t) = \rho_{23}(0)e^{-(\Gamma_{A}+\Gamma_{B}-2\Gamma_{0})t}$. Let us now study the effect of correlated dephasing of the qubits on the dynamics of entanglement. For the initial conditions $ d = 1-a, b= c = |z| = 1$ and $a \geq 0$, on using (\ref{29})--(\ref{31}) in (\ref{11}) we get the expression for the time dependent concurrence as,
\begin{eqnarray}
\label{32}
\tilde{C}_{D}(t) = \frac{2}{3}\left\{e^{-2(\Gamma -\Gamma_{0})t} -\sqrt{a(1-a)}\right \}
\end{eqnarray}
where we have assumed $\Gamma_{A} = \Gamma_{B} = \Gamma$ for simplicity.
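The exponent in (\ref{30}) can also be checked numerically: for $\Gamma_{A} = \Gamma_{B} = \Gamma$ the dissipator (\ref{28}) is diagonalized by the collective operators $S^{z}_{A} \pm S^{z}_{B}$. A sketch in the spirit of the earlier QuTiP fragments (our illustration, reusing the imports of section II) reads:
\begin{verbatim}
# collective dephasing channels diagonalizing Eq. (28)
# for Gamma_A = Gamma_B = Gamma; requires Gamma >= Gamma_0
szA = tensor(sigmaz(), qeye(2))/2
szB = tensor(qeye(2), sigmaz())/2
Gamma, Gamma0 = 1.0, 0.2
c_ops = [np.sqrt(Gamma + Gamma0)*(szA + szB),
         np.sqrt(Gamma - Gamma0)*(szA - szB)]
# |2> and |3> are eigenstates of szA - szB with eigenvalues +1 and -1
# (and of szA + szB with eigenvalue 0), so rho_23 decays at the rate
# 2(Gamma - Gamma_0), reproducing Eq. (30)
\end{verbatim}
For $\Gamma_{0} \rightarrow \Gamma$ the antisymmetric channel switches off, which is precisely the decoherence-free limit discussed below.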
\begin{figure}[!h]
\vspace{0.3 in}
\begin{center}
\includegraphics[scale = 0.45]{con_c1.eps}
\caption{(Color online) Time evolution of concurrence for two non-interacting qubits in contact with a purely dephasing environment for initial condition given by $a= 0.2$ and $b = c = |z| = 1$. The effect of correlated dephasing shows up as delay in the onset of ESD.}
\end{center}
\end{figure}
From equation (\ref{32}) it is clearly seen that in a purely dephasing environment the entanglement between the qubits is independent of the initial coherence phase $\chi$ and depends only on $a$ and $\Gamma_{0}$. In figure (11) we plot the time dependence of the concurrence for $a = 0.2$. We find that the effect of correlated dephasing is manifested as a delay of the onset of ESD. The time for the onset of ESD is given by $t \ge \frac{1}{2(\Gamma-\Gamma_{0})}\ln\left[1/\sqrt{a(1-a)}\right]$. From the figure it is clearly visible that with increasing correlated dephasing $\Gamma_{0}$ the onset of ESD is delayed further, until at $\Gamma_{0} = \Gamma$ the concurrence becomes independent of the dephasing rates and is given by $C = 2/3[1-\sqrt{a(1-a)}]$. This situation represents a decoherence-free subspace where the concurrence depends solely on the value of $a$, \textit{i.e.}\ the population of the doubly excited state of the two qubits. Note that this kind of situation has already been tailored to study entanglement in decoherence-free subspaces \cite{kwait}.\\
\indent{}Let us now include the interaction between the qubits and study how this interaction influences the entanglement dynamics for this model of the environment. The Hamiltonian of the two-qubit system and its coupling to the environment are then given by equations (\ref{1}) and (\ref{28}) respectively. To study the dynamics of entanglement we follow a procedure similar to that described earlier. We use the solution of the quantum-Liouville equation derived explicitly in appendix (E) and substitute it in equation (\ref{11}) to calculate the time dependence of the concurrence $C$. With the initial conditions $d = 1-a$, $b = c = |z| = 1$, we get,
\begin{eqnarray}
\label{33}
\tilde{C}_{D}(t) &=& \frac{2}{3}\lbrack e^{-(\Gamma-\Gamma_{0})t}\lbrace e^{-2(\Gamma-\Gamma_{0})t}\cos^{2}\chi\nonumber\\
&+&\sin^{2}\chi(\cos(\Omega^{\prime} t)-\frac{(\Gamma-\Gamma_{0})}{\Omega^{\prime}}\sin(\Omega^{\prime} t))^{2}\rbrace^{1/2}\nonumber\\
&-&\sqrt{a(1-a)}];\\
C(t) & = &\mathsf{Max}\lbrace 0, \tilde{C}_{D}(t)\rbrace
\end{eqnarray}
where $\Omega^{\prime} = \sqrt{4v^{2}-(\Gamma-\Gamma_{0})^{2}}$ and we have assumed $\Gamma_{A} = \Gamma_{B} = \Gamma$. One can clearly see the dependence of the concurrence on the interaction $v$ between the qubits for $\tilde{C}_{D} > 0$. Note that due to the interaction the concurrence now becomes dependent on the initial phase $\chi$; for $\chi = 0$ (mod $\pi$), equation (\ref{33}) reduces to the non-interacting result (\ref{32}), as it must.
\begin{figure}[!h]
\vspace{0.35 in}
\begin{center}
\includegraphics[scale = 0.45]{corr5.eps}
\caption{(Color online) Time evolution of the concurrence for two interacting qubits with interaction strength $v/\Gamma = 5.0$ in contact with a purely correlated dephasing environment, for the initial condition $a = 0.2$, $\chi = \pi/4$, $b = c = |z| = 1$. The red curve corresponds to the concurrence of non-interacting qubits. The concurrence is seen to exhibit initial oscillations followed by dark and bright periods with eventual death of entanglement in the presence of interaction. The interaction also leads to a delayed death of entanglement. Here $\Gamma_{0}$ is the correlated dephasing rate. }
\end{center}
\end{figure}
\begin{figure}[!h]
\vspace{0.35 in}
\begin{center}
\includegraphics[scale = 0.45]{corr6.eps}
\caption{(Color online) Time evolution of the concurrence for two interacting qubits in contact with a correlated dephasing environment with the same parameters as figure (12) but higher correlated dephasing rates. The effect of higher correlated dephasing manifests itself by increasing the periodicity of the dark and bright features in the concurrence. Here again we find that the dark and bright periods are followed by death of entanglement. }
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale = 0.45]{corr7.eps}
\caption{(Color online) Time evolution of the concurrence for two interacting qubits in contact with a correlated dephasing environment with the initial condition $a = 0.2$, $\chi = \pi/2$, $b = c = |z| = 1$. The concurrence is seen to be sensitive to the initial coherence of the two qubits. For this value of $\chi$ it does not exhibit initial oscillations but dark and bright periods with eventual death of entanglement in the presence of interaction. Here the interaction strength is taken to be $v/\Gamma = 5.0$. }
\end{center}
\end{figure}
To understand the behavior of the entanglement in the presence of interaction $(v/\Gamma = 5.0)$ between the qubits we plot the time dependence of the concurrence for different initial phases $\chi$ and correlated dephasing rates $\Gamma_{0}$ in figures (12-14). We consider only the case $a = 0.2$, to allow a comparative study of the behavior of the concurrence in the presence and absence of inter-qubit interactions. Note that we have already discussed the effect of correlated dephasing on the two-qubit entanglement for this value of $a$. Let us now focus on the new features that arise due to the qubit-qubit interaction. We can see clearly from figure (12) that for $v \neq 0, \chi = \pi/4$ the two-qubit concurrence shows a damped oscillatory behavior which leads to dark and bright periods at longer times before the eventual death of entanglement. The generation of dark and bright periods is seen to delay the death of entanglement even further compared with the delay induced by correlated dephasing in the absence of qubit-qubit interactions. Moreover, in figure (13) we see that both the oscillatory behavior and the dark and bright periods are enhanced with an increase of the correlated dephasing rate. When we change the initial phase to $\pi/2$ for $\Gamma_{0} = 0.2\Gamma$ we find (figure 14) no initial oscillatory behavior of the entanglement but rather a purely dark and bright periodic structure with eventual, delayed death. Thus the onset of dark and bright periods for this kind of environment model is profoundly influenced by the initial coherence of the two-qubit system. \\
\indent{}The phenomenon of dark and bright periods in entanglement should have direct consequences for systems like ion traps and quantum dots, the latter currently being the forerunner in the implementation of quantum logic gates. The interactions between qubits considered in this paper are inherently present in these systems. In quantum dots, for example, $\gamma^{-1}\sim$ a few ns, and the parameter $\Gamma^{-1}$ can span a very large range (1--100's of ps) \cite{bori}. Further, the interaction strength $v$ can range between $1\ \mu$eV and $1$ meV depending on the gate biasing \cite{atac, kren,beirne, taylor}. An earlier study \cite{gert} reports $\gamma \sim 40-100\ \mu$eV and coupling strengths of $\sim 100-400\ \mu$eV, thereby making $v/\gamma \sim 1- 10$ for quantum dot molecules. Thus experimental parameters are in the range used in our numerical calculations.
\section{Summary}
In summary, we have carried out a detailed study of decoherence effects for non-interacting and interacting, initially entangled qubits in contact with different environments at zero temperature. We have shown how the interaction between the qubits generates the phenomenon of dark and bright periods in the entanglement dynamics of an initially entangled two-qubit system. We found this feature of dark and bright periods to be generic, occurring for various models of the environment. For a correlated dissipative environment we found no sudden death of entanglement; rather, depending on the initial coherences in the system, the entanglement can show a substantially slower decay together with the phenomenon of dark and bright periods. For a simple pure dephasing environment as well as for a correlated dephasing environment we have shown the existence of sudden death of entanglement, with correlated dephasing leading to a delayed death; in these purely dephasing models the dark and bright periods persist longer and delay the sudden death further. In the correlated dephasing model we found, moreover, that the onset of dark and bright periods is sensitive to the initial coherence in the system. The frequency of the dark and bright periods was found to depend on the strength of the interaction between the qubits as well as on the correlated decay and dephasing rates. As a future perspective it would be interesting to study the effect of the qubit-qubit interaction for environments at finite temperature. It would also be interesting to extend our study to multi-qubit entanglement; important classes of states that can be treated for this purpose are the GHZ and W states, and cluster states \cite{clus} can be considered as further candidates for the study of decoherence and loss of entanglement.\\
This work was supported by NSF grant no CCF-0829860.
|
2,877,628,091,187 | arxiv | \section{Introduction}
Recently, many efforts have been devoted to the study of
single-photon sources (SPS) based on quantum dots (QDs), which
offer potential applications in quantum cryptography and quantum
computing$^{1-3}$. For small QDs, the strong three-dimensional
confinement leads to a large energy-level separation.
Consequently, one can manipulate electrons and holes so that they
occupy the lowest energy level of the QD, and control the photons
emitted from the exciton, trion, or biexciton states formed in the
QD. The antibunching feature of an SPS was demonstrated in
refs.\ [1,2], where the electrons and holes of the QD are excited
by optical pumping; only a handful of studies have employed
electrical pumping to demonstrate the antibunching behavior of an SPS.
From the practical point of view, an electrically driven SPS was
suggested by Imamoglu and Yamamoto, who considered individual QDs
embedded in a semiconductor p-n junction$^{4}$. Fabricating a
single QD of nanometer size at a specific location is one of the
challenging techniques in the realization of an electrically driven
SPS. Self-assembled quantum dots (SAQDs) combined with a selective
formation method may overcome this difficulty$^{5}$. Apart from
electrical driving, it is very important to manipulate the
spontaneous emission rate of the SPS in applications of quantum
cryptography. According to the Purcell effect$^{6-9}$, the
enhancement of the spontaneous emission rate due to the cavity
($1/\gamma_{cav}$) with respect to free space is
\begin{equation}
\frac{1}{\gamma_{cav}}=
\frac{1}{\gamma_{free}}\frac{3Q(\lambda_c/n)^3}{4\pi^2 V_{eff}}
\frac{\Delta\omega^2_c}{ 4(\omega-\omega_c)^2+\Delta \omega^2_c}
\frac{|\textbf{E}({\bf r}_e)|^2}{|E_{max}|^2} \eta^2 ,
\end{equation}
where $\eta=\textbf{d}
.\textbf{E}(\textbf{r})/|\textbf{d}||E(\textbf{r})|$ describes the
orientation matching of the dipole $\textbf{d}$ and the electric field of
the emitted light $\textbf{E}$, and where the factor of 3 comes from the 1/3
averaging factor accounting for the random polarization of
free-space modes with respect to the dipole. The quality factor
of the cavity is given by $Q=\omega_c/\Delta\omega_c$, where
$\omega_c$ and $\Delta \omega_c$ denote, respectively, the
angular frequency and linewidth of a cavity supporting a
single mode. The remaining notations $V_{eff}$ and $n$ are the effective
volume of the cavity and the refractive index at the field maximum.
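To make Eq.~(1) concrete, the following minimal Python sketch evaluates the
cavity enhancement factor. It is our illustration only; all numerical values
below (the quality factor, wavelength, and mode volume) are arbitrary
placeholders, not parameters taken from this work.
\begin{verbatim}
import numpy as np

def purcell_enhancement(Q, lam_c, n, V_eff, omega, omega_c,
                        field_ratio_sq=1.0, eta=1.0):
    # Eq. (1): ratio of cavity to free-space emission rate.
    # field_ratio_sq = |E(r_e)|^2 / |E_max|^2, eta = orientation matching.
    d_omega_c = omega_c / Q                       # cavity linewidth
    lorentz = d_omega_c**2 / (4.0*(omega - omega_c)**2 + d_omega_c**2)
    prefactor = 3.0 * Q * (lam_c / n)**3 / (4.0 * np.pi**2 * V_eff)
    return prefactor * lorentz * field_ratio_sq * eta**2

# On resonance with perfect spatial/orientation matching
# (lengths in micrometers; placeholder values only).
print(purcell_enhancement(Q=2000.0, lam_c=0.95, n=3.5, V_eff=0.1,
                          omega=1.0, omega_c=1.0))
\end{verbatim}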
A number of experiments have already demonstrated the
potential of QD-based solid-state cavity quantum electrodynamics
(QED) in applications such as single-photon sources$^{1-2,10-11}$.
Nevertheless, because it is very difficult to pre-determine the
exact resonance energy and location of an optically or
electrically active QD, all of the prior QD-based cavity QED
experiments relied on a random spectral and spatial overlap
between QDs and cavity modes. Although Badolato et al.\ have
most recently proposed a method to overcome this difficulty$^{12}$,
studies of the optimization of the Purcell effect for electrically driven
SPS are still lacking.
Compared with optical pumping, an electrically driven SPS provides
a tunable emission spectrum of exciton complexes arising from the
Stark effect. This extra degree of freedom (tunable wavelengths)
provides a possibility to reach the condition $\omega=\omega_c$ of Eq.
(1). Although the Stark effect on exciton peaks formed in
QDs has been well investigated experimentally and
theoretically$^{13-19}$, the Stark effect on the trion and
biexciton peaks is still not very clear. Thompson et al.
demonstrated that a single photon generated from the biexciton
state has a significantly shorter emission time than one from the single
exciton state$^{20}$. From this point of view, the biexciton state
is favored for the application of single-photon generation.
Therefore, it is worth studying the physical parameters of the
biexciton state.
The modest purpose of this article is to theoretically study the
Stark effect on the transition energies of exciton complexes.
Besides, we also investigate the electric field effect on the
particle Coulomb interactions and oscillator strengths of QDs.
The former determines the charging energies for electrons and
holes, which are crucial parameters for the electrically driven
SPS. The latter influences the spontaneous emission rate of QDs in
free space.
\section{Formalism}
Because a number of experiments used the InAs/GaAs SAQD system to
fabricate SPS, we consider such SAQDs embedded in a p-n junction as
shown in Fig. 1. To understand the emission spectrum of
InAs/GaAs SAQDs, we need to calculate the electronic structures
for various electron-hole complexes, including exciton ($X$),
biexciton ($X^2$), negative trion ($X^-$), and positive trion
($X^+$), which can occur in this system. Due to the large
strain-induced splitting between heavy-hole and light-hole band
for InAs on GaAs, we only have to consider the heavy hole band
(with $J_z=\pm 3/2$) and ignore its coupling with light-hole band
caused by the QD potential. Thus, we can treat the heavy hole as a
spin-1/2 particle with $\sigma=\uparrow, \downarrow$ representing
$J_z=\pm 3/2$. Meanwhile, we adopt the Hartree-Fock (HF)
approximation to describe the many-particle systems for trions and
biexciton. This is a reasonable approximation for many particles
confined in a small QD, because the particle correlation effect is
greatly suppressed by the lack of available low-energy
excitations, which are coupled to the HF ground state via the
configuration interaction. In two-electron atomic systems such as
H$^-$, He and Li$^+$, it is well known that the correlation energy
for all these systems is around 0.11 Ry$^{21}$. A major part of it
can be accounted for within the unrestricted Hartee-Fock (UHF)
approximation by the so called ``in-and-out correlation'', which
arises because the two electrons can occupy two orbitals with very
different radii. The remainder of the correlation is due to the
``angular correlation'', which may be described by the coupling of
the HF ground state to the two-particle excited states with
non-zero single-particle orbital angular momentum (while keeping
the total orbital angular momentum to be zero)$^{22}$. Both the
``in-and-out correlation'' and the ``angular correlation'' are
significantly suppressed in the QD system with strong particle
confinement.
The strengths of $U_e$, $U_h$, and $U_{eh}$ in excitons, trions,
or biexcitons are determined via the self-consistent HF
calculation within a simple but realistic effective-mass model. We
consider an InAs/GaAs self-assembled QD (SAQD) with conical shape.
The QD resides on an InAs wetting layer of one monolayer thickness, and the whole
system is embedded in a slab of GaAs with finite width. The slab
is then placed in contact with heavily doped p-type and n-type
GaAs to form a p-i-n structure for single photon generation.
Within the effective-mass model, the electron and hole in the
exciton ($X$), trions ($X^-$ or $X^+$), or biexciton ($X^2$) in
the QD are described by the coupled equations
\begin{eqnarray} && [-\nabla \frac {\hbar^2} {2m_e^*(\rho,z)} \nabla +
V^e_{QD}(\rho,z) - eFz+ V_{sc}(\rho,z)] \nonumber \\
&& \psi_e(\rho,\phi,z) = (E_e-E_c) \psi_e(\rho,\phi,z),
\label{HFe}
\end{eqnarray} \begin{eqnarray} && [-\nabla \frac {\hbar^2} {2m_h^*(\rho,z)}
\nabla +
V^h_{QD}(\rho,z) + eFz-V_{sc}(\rho,z)]\nonumber \\
&& \psi_h(\rho,\phi,z) = E_h \psi_h (\rho,\phi,z), \label{HFh}
\end{eqnarray} and
\begin{equation} V_{sc}({\bf
r})= \int d{\bf r}' \frac {e^2 [n_e({\bf r'})-n_h({\bf r}')]}
{\epsilon_0 |{\bf r}'-{\bf r}|}, \end{equation} where $E_c$
denotes the energy of the GaAs conduction band minimum, which is
$1.518$ eV above the GaAs valence band maximum (defined as the
energy zero). ${m_e^*(\rho,z)}$ (a scalar) denotes the
position-dependent electron effective mass, which takes on values
of $m_{eG}^* = 0.067 m_e$ for GaAs and $m_{eI}^* = 0.04
m_e$$^{23}$ for InAs. ${m_h^*(\rho,z)}$ denotes the
position-dependent effective mass tensor for the hole. Due to the
strong spin-orbit coupling and the large strain-induced splitting
between heavy-hole and light-hole band, we can neglect the
coupling of the heavy-hole band with the split-off band and
light-hole band. Consequently, it is a fairly good approximation
to describe ${m_h^*(\rho,z)}$ in InAs/GaAs QD as a diagonal tensor
with the $x$ and $y$ components given by
${m^*_t}^{-1}=(\gamma_1+\gamma_2)/m_e$ and the $z$ component given
by ${m^*_l}^{-1}=(\gamma_1-2\gamma_2)/m_e$. $\gamma_1$ and
$\gamma_2$ are the Luttinger parameters. Their values for InAs and
GaAs are taken from Ref. [24]. $V^e_{QD}(\rho,z)$
($V^h_{QD}(\rho,z)$) is approximated by a constant attractive
potential in the InAs region with value determined by the
conduction-band (valence-band) offset and the deformation
potential shift caused by the biaxial strain in the QD. These
values have been determined by comparison with results obtained
from a microscopic model calculation$^{25}$, and we have $V^e_{QD}
\sim -0.42$ eV and $V^h_{QD} \sim -0.3$ eV. The $eFz$ term in
Eqs.~(\ref{HFe}) and (\ref{HFh}) arises from the applied voltage,
where $F$ denotes the strength of the electric field. $V_{sc}({\bf
r})$ denotes the self-consistent potential caused by the
electrostatic interaction with the charge densities [$n_e({\bf
r})$ and/or $n_h({\bf r})$] associated with the other particles
in the system. $\epsilon_0$ is the static dielectric constant of
InAs. The image force is ignored here due to the small difference
in dielectric constant between InAs and GaAs.
Eqs.~(\ref{HFe}) and (\ref{HFh}) are solved self-consistently via
the Ritz variational method by expanding the ground-state wave
functions in terms of a set of nearly complete basis functions,
which are chosen to be products of Bessel functions (with axial
symmetry) and sine waves
\begin{equation}
\psi_{nm}(\rho,z)= J_0(\alpha_n\rho) \sin (k_m
(z+\frac{L}{2})),
\end{equation}
where $k_m = m\pi/L$, $m=1,2,3,\ldots$, and $L$ is taken to be 300~\AA. $J_0$
is the Bessel function of zeroth order and $\alpha_nR$ is the
$n$-th zero of $J_0(x)$, with $R$ taken to be 400~\AA, which is
large enough to give convergent numerical results. Forty sine
functions multiplied by fifteen Bessel functions are used to
diagonalize the Hartree-Fock Hamiltonian.
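As a hedged illustration of this basis construction (our sketch, not the
authors' code), the preceding product basis functions can be generated in
Python with SciPy's Bessel-zero routines:
\begin{verbatim}
import numpy as np
from scipy.special import j0, jn_zeros

R, L = 400.0, 300.0            # lateral radius and slab width (Angstrom)
N_BESSEL, N_SINE = 15, 40      # basis truncation quoted in the text

alphas = jn_zeros(0, N_BESSEL) / R   # alpha_n: n-th zero of J0, scaled by R

def basis(n, m, rho, z):
    # J0(alpha_n * rho) * sin(k_m (z + L/2)), with n, m >= 1
    k_m = m * np.pi / L
    return j0(alphas[n - 1] * rho) * np.sin(k_m * (z + L / 2.0))

# lowest basis function evaluated at the dot center
print(basis(1, 1, rho=0.0, z=0.0))   # -> 1.0
\end{verbatim}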
The ground state energy of system $i \; (i=X,X^2,X^-,X^+)$ is
given by
\begin{eqnarray} E_i=N^i_e E^i_e + N^i_h E^i_h + N^i_eN^i_h U^i_{eh}\nonumber
\\-N^i_eN^i_e U^i_e -N^i_hN^i_h U^i_h ,
\end{eqnarray} where $E^i_e$ and $E^i_h$ denote the ground state HF
single-particle energy for electron and hole obtained from solving
Eqs.~(\ref{HFe}) and (\ref{HFh}), respectively. $N^i_e$ and
$N^i_h$ denote the electron and hole particle number in system
$i$. $U^{i}_e$, $U^{i}_h$ and $U^{i}_{eh}$ are the
electron-electron interaction, hole-hole interaction and
electron-hole interaction, respectively. The transition energies
for the electron-hole recombination in $X$, $X^2$, $X^-$, and
$X^+$ are given by $E_X$, $E_{X^2}-E_X$, $E_{X-}-E^0_e$, and
$E_{X+}-E^0_h$, respectively. Here $E^0_e$ and $E^0_h$ are the
free-particle ground-state eigenvalues of Eqs.~(\ref{HFe}) and
(\ref{HFh}) with $V_{sc}$ set to zero.
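The bookkeeping of the preceding energy expression and of these transition
energies can be transcribed literally into a few lines of Python. This is our
illustrative sketch only; all inputs are the system-dependent Hartree-Fock
quantities (e.g.\ $U_e = 0$ for complexes containing a single electron).
\begin{verbatim}
# particle numbers (N_e, N_h) for each exciton complex
COMPLEXES = {"X": (1, 1), "X2": (2, 2), "X-": (2, 1), "X+": (1, 2)}

def total_energy(name, E_e, E_h, U_eh, U_e, U_h):
    # literal transcription of the ground-state energy formula;
    # all arguments come from the self-consistent HF calculation
    Ne, Nh = COMPLEXES[name]
    return (Ne * E_e + Nh * E_h + Ne * Nh * U_eh
            - Ne * Ne * U_e - Nh * Nh * U_h)

def transition_energy(name, E, E_X, E0_e, E0_h):
    # E: dict of total energies E_i; E0_e, E0_h: free-particle energies
    return {"X": E_X, "X2": E["X2"] - E_X,
            "X-": E["X-"] - E0_e, "X+": E["X+"] - E0_h}[name]
\end{verbatim}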
\section{Results}
For electrically driven SPS, electrons and holes are injected from
the electrodes. To create biexciton state, the applied voltage
needs to overcome the charging energies of electrons and holes,
which are determined by $U_e$ and $U_h$. The calculated values of
$U_{e}$, $U_{eh}$, and $U_{h}$ are shown in Fig. 2, where the solid
curves denote $X^2$, the dotted curves denote $X^-$ , and the
dashed curves denote $X^+$ in conical SAQDs with base radius $R_0$
varying between 60 \AA \, and 110 \AA. Here, the QD height $h$ is
varied between 15 \AA \, and 65 \AA, while keeping the ratio
$(h-15\AA)/(R_0-60\AA)=1 $. The electron-hole Coulomb energy
$U_{eh}$ in the exciton $X$ is almost the same as that in $X^2$
and is omitted in this plot. The strengths of Coulomb interactions
are, in general, inversely proportional to the QD size, since the
charge densities in smaller QDs are more localized. However as the
QD size decreases below a threshold value (around $R_0$=70 \AA),
the Coulomb interactions becomes reduced as a result of the leak
out of charge density for small QDs, which tend to be more serious
for the electron than for the hole due to the smaller effective
mass for the electron. This manifests the effects of finite
potential barrier of the QD$^{26}$. This effect will not appear if
one adopts an infinite barrier approximation as in some previous
studies of excitons in QDs$^{27-29}$. In the large dot limit
$U_{e}$, $U_{eh}$ and $U_{h}$ all approach the same value,
indicating similar degree of localization for the electron and
hole in the large dot. Note that the magnitude of the charging
energies for electrons or holes is of the same order as the thermal
energy at room temperature, $k_B T$, where $k_B$ is the Boltzmann
constant. This indicates that the operation temperature of the system
should be much lower than room temperature.
Owing to the bias applied across the QD, the effect of the electric
field on the particle interactions is not negligible.
We adopt a QD size with radius 7.5 nm and
height 3 nm to study the electric field effect on the particle
Coulomb interactions. We assume that the z axis is directed from
the base to the apex of the dot. Fig. 3 shows the Coulomb
interaction strengths as functions of electric field for different
exciton complexes. We see that $U_{h}$, $U_{eh}$ and $U_e$ display
asymmetric behavior with respect to the electric field as a result of the
geometry of the dots. With increasing electric field, the reduction of $U_{eh}$
indicates that the electron-hole separation increases. However, we
note that $U_h$ increases in the positive direction of the electric
field since the wave functions of the holes become more localized. Even
though an enhancement of $U_e$ is observed in the negative
direction of the electric field, it exists only in a very small electric
field region. When the electric field is larger than a threshold
value, the wave functions of the electrons become delocalized and leak
out of the quantum dot. Consequently, the electron-electron Coulomb
interaction becomes weak. As mentioned, $U_e$ and $U_h$ denote
the charging energies of the QD for electrons and holes, respectively.
Therefore, the constant interaction model used extensively with the
Anderson model is valid only for the small electric field case$^{30}$;
otherwise we should take into account bias-dependent Coulomb
interactions.
According to Eq. (1), the larger $1/\gamma_{free}$ is, the larger
$1/\gamma_{cav}$ becomes. Hence we also investigate the
oscillator strength of QDs, which is proportional to the
electron-hole overlap squared. Fig. 4 shows the squared
electron-hole overlap function $A(F=0)$ for $X,X^2,X^-$, and $X^+$
as functions of the QD radius, $R_0$ (with the height $h$ varying
in the same way as in Fig. 2). For all four complexes, the
electron-hole overlap increases with the QD size, indicating that
the electron and hole wave functions approach each other as they
become fully confined in the QD. The electron-hole overlaps in
the exciton and biexciton are very similar since they are both charge
neutral. Meanwhile, the $A(F=0)$ of biexciton is slightly above
that of the exciton. For small size QDs, the electron-hole overlap
is the smallest in negative trion ($X^-$), and the largest in
positive trion ($X^+$). This is due to the fact that the Coulomb
repulsion between the two electrons in $X^-$ causes the electron
wave function to be more delocalized than in the charge-neutral
complexes ($X$ and $X^2$), thus reducing the electron-hole
overlap. However, the same effect in $X^+$ causes the hole wave
function to be more delocalized, which becomes closer to the
electron wave function, and therefore enhances the electron-hole
overlap. The behavior is reversed for large QDs with a cross-over
occurring at $R_0 \sim 95$ \AA.
In Fig. 4 we did not include the electric field effect on the
electron-hole overlap function $A(F)$. Using the QD size
considered in Fig. 3, we show $A(F)$ as functions of electric
field for the four complexes in Fig. 5. $A(F)$ of each complex is
enhanced in the small-field region for $F>0$, whereas the decline of
$A(F)$ appears when the electric field is larger than a threshold
field $F_s$, which depends on exciton complexes. For example,
$F_s$ is about $45$~kV/cm for $X^{+}$, but $30$~kV/cm for $X$. On
the other hand, $A(F)$ declines as the electric field is increased
from 0 to $120$ kV/cm for the negative applied field direction. We
observe that the variation of $A(F)$ is the largest in negative
trion $X^{-}$ since the electron wave function is more delocalized
for $X^{-}$ than for the other complexes. The results of Fig.
5 show that $A(F)$ of the positive trion and biexciton states is
larger than that of the exciton at a given electric field. However,
the creation of the biexciton (trion) state requires an applied field
higher than that of the exciton in order to overcome the charging energies,
so that the spontaneous emission rate of the biexciton or trion may be
observed to be smaller than that of the exciton in this
electrically driven SPS.
Finally, we attempt to study the transition energies for exciton
complexes without and with electric field. Fig. 6 shows the
transition energy for the electron-hole recombination in a
biexciton, positive trion and negative trion relative to the
exciton transition energy ($E_X$) as functions of the QD radius,
$R_0$ (with the height $h$ varying in the same way as in Fig. 2).
The exciton transition energy ($E_X$) as a function of $R_0$ is
also displayed (dashed curve) with the scale indicated on the
right side of Fig. 6. The biexciton transition energy is
consistently above the exciton transition energy ($E_X$) for the
range of $R_0$ considered here, which is still considerably
smaller than the free exciton Bohr radius in InAs ($\sim 300$\AA).
For very large QDs (with $R_0 > 300$\AA), the correlation effect
will become important, and the biexciton transition energy can
become lower than $E_X$. The positive (negative) trion appears at
significantly higher (lower) energy than the exciton for small
QDs, but approaches the biexciton quickly as the QD size
increases. This behavior can be understood by examining the
difference in $U_h$ ($U_e$) and $U_{eh}$ as shown in Fig. 2. The
biexciton peak displaying a blue shift with respect to the exciton
peak (showing an antibinding biexciton) is also consistent with
the observation reported in ref.[3,20]. Recently, studies of the
binding and antibinding of biexcitons were reported in Ref. 31.
Our calculation given by Eqs.~(2) and (3) provides only the
antibinding feature of biexciton. In Ref. 31 it is pointed out
that the biexciton complex changes from antibinding to binding as
the QD size increases. For QDs with dimension larger than the
exciton Bohr radius, the correlation energy becomes significant.
In this study, we have not taken into account the correlation
energy. This is justified as long as we restrict ourselves to
small QDs.
According to Eq. (1), achieving $\omega=\omega_c$ is one of the
challenging techniques. If the frequencies of the photons
emitted from the exciton complexes can be widely tuned, the
possibility of optimizing the spontaneous emission rate will be
enhanced. Figs. 7 (a) and (b) show the Stark shifts of the exciton
complexes as functions of the strength and direction of the electric field
for different quantum dot sizes. In diagram (a) the size of the QD
corresponds to that of Fig. 3. In diagram (b) we adopt the size of
QD with the radius $R_0=8.5 nm$ and height $h=3 nm$. The exciton
complexes display asymmetric shift around zero field and red shift
in the negative field direction. We see that in the positive field
direction the exciton complexes display the blue shifts at small
field region. The blue shift of the positive trion is relatively large
compared with that of the other exciton complexes. For the charge-neutral
complexes $X$ and $X^2$, the responses to the Stark effect are
almost the same. The comparison of diagrams (a) and (b) shows that
the tunable range of emitted photon frequencies depends on the
size of the quantum dot. In our case, the Stark shifts of the exciton
complexes are near $2.8$~meV and $3.5$~meV for the QD with
radius $R_0=7.5$ nm and height $h=3$~nm and the QD with radius
$R_0=8.5$ nm and height $h=4$~nm, respectively. These remarkable
frequency shifts arising from the Stark effect allow the SPS with
electrical pumping to readily reach the resonance condition
$\omega=\omega_c$.
\section{Conclusion}
In this article we have calculated the following three physical
parameters of exciton complexes formed in the ground state of QD
embedded into a p-n junction: particle Coulomb interactions,
electron-hole overlaps and transition energies. In the typical
volume of grown SAQDs, we found that the magnitude of
electron-electron Coulomb interactions as well as hole-hole
Coulomb interactions is just near the thermal energy of room
temperature. Therefore, it is difficult for the InAs/GaAs system
to provide single-photon sources at room temperature. We note that
the suppression of the electron-hole overlaps always exists for
electrically driven SPS. Although the decay of the biexciton to the
exciton can generate single photons, we need to apply a relatively
higher voltage to overcome the charging energies for the electrons and
holes to create the biexciton state. Such a higher voltage can
suppress the spontaneous emission rate of the
biexciton state and make it smaller than that of the exciton at a lower
voltage. Consequently, the exciton state is preferable for
single-photon generation.
Owing to the asymmetric shape of QDs, the direction-dependent
Stark effects are observed in the measurable transition energies
of the exciton complexes. We note that the Stark shifts of the complexes
are smaller for the positive field direction than for the negative
field direction. Besides, the Stark shifts also depend on the size
of the QDs. In particular, the largest blueshifts occur in the
positive trion state; on the other hand, the largest redshifts are
observed in the exciton state. The Stark shifts of exciton
complexes may be applied to manipulate the resonant condition of
eq. (1) and to control the spontaneous emission rate of individual
QDs embedded in a cavity.
\mbox{}
{\bf Acknowledgments}
This work was supported by the National Science Council of the Republic of
China under Contract No. NSC 92-2112-M-008-053.
|
2,877,628,091,188 | arxiv | \section{Introduction}
\label{sec:intro}
The use of large-scale multiple antenna systems, known as Massive MIMO, has emerged as a leading concept for advancing wireless communications infrastructure. The design of such systems, however, comes with inherent challenges in terms of analog hardware and computational complexity. The rapidly growing scale of antenna systems for communications necessitates methods to reduce the RF hardware complexity and power consumption, such as reducing the resolution of the data converters and operating the power amplifiers with constant-envelope signals. In particular, the power amplifier (PA) typically accounts for a substantial amount of the power consumption in a base station (BS) \cite{Blume2010}. Therefore, it is desirable in terms of power efficiency to run the PA in the saturation region, where most of the available PA power is radiated and less power is lost as heat.
Operating PAs in the saturation region and using low-resolution D/A-converters (DACs), however, implies high distortions and nonlinearities that are introduced to the signals, calling for new processing techniques to mitigate their effects.
In \cite{mollen_2016}, the impact of power amplifier nonlinearity has been analyzed and it was shown that both single-carrier (SC) and OFDM transmission are affected similarly if conventional linear precoding is used since the peak-to-average power ratio (PAPR) increases with multi-user processing \cite{Jedda2019}. Standard and improved linear zero-forcing and minimum-mean-squared error (MMSE) precoder designs for (massive) MIMO with low-resolution DACs and their performance have been studied in \cite{Mezghani_2009, Usman_2016,Saxena_2016_2,DeCandido2019,Li2017,amodh_2020}, showing satisfactory performance for small loading factors and well-conditioned channels such as i.i.d. channels. Achievable rates for multi-user MISO systems with low-resolution D/A converters were considered in \cite{Kakkavas_2016}. Nonlinear \emph{Tomlinson-Harashima Precoding} has been considered in \cite{Mezghani_2008_G} for low-resolution DACs, and achieves better performance than pure linear methods. Other works such as \cite{Jedda_2016,Jacobsson_2016,Jacobsson_2016_2,Swindlehurst_2017,Studer2013,Shao_2019,Lopes2020} show that nonlinear symbol-by-symbol precoding schemes outperform linear precoders under hardware impairments at the cost of an increased computational complexity.
Motivated by the simplicity of linear methods, we reconsider the problem of optimizing linear precoding techniques and present a new approach where SC transmission can still preserve its advantage in terms of PAPR compared to OFDM despite the multi-user processing. The new precoders are used in particular for mitigating the effects of constant-envelope transmission as well as 1-bit DACs. To this end, a new cost function with $\ell_1$ regularization is investigated. The new class of linear precoders based on $\ell_1$ matrix norm optimization significantly outperforms conventional designs for QPSK as well as higher-order modulations and allows for reduced computational complexity.
\section{System Model}
\label{sec:format}
Consider a downlink system with nonlinear transmitters:
\begin{equation}
\hat{s}=\B{H} \cdot \mathcal{Q} (\B{P}\cdot \B{s}) +\B{\eta},
\end{equation}
where $\B{H} \in \mathbb{C}^{K\times N}$ is the channel matrix with $K$ single-antenna users and $N$ transmit antennas, while $\B{P} \in \mathbb{C}^{N\times K}$ is the linear precoder and $\mathcal{Q}(\cdot)$ represents the transmit nonlinearity. The transmit symbol vector $\B{s}$ is assumed to be either an OFDM or a single-carrier signal, and the noise vector $\B{\eta}$ is i.i.d. circularly symmetric complex Gaussian with variance $\sigma_\eta^2$.
One possible design criterion for $\B{P}$ is that it be zero-forcing, i.e., $\B{H}\cdot \B{P}= {\bf I}$ if quantization is ignored. However, since usually $N>K$, the solution is not unique. Thus the question that arises is what an appropriate solution is for mitigating nonlinearities such as clipping at the power amplifier or one-bit quantization at the DAC. We will focus on two types of nonlinearities: (i) one-bit DAC quantization:
\begin{equation}
\mathcal{Q}(\B{y})=\frac{1}{\sqrt{2}}{\rm sign}({\rm Re} \{{\B{y}}\})+\frac{\rm j}{\sqrt{2}}{\rm sign}({\rm Im} \{{\B{y}}\}).
\end{equation}
and (ii) constant envelope transmission
\begin{equation}
\mathcal{Q}(\B{y})=\exp\left( {\rm j} \angle \B{y} \right),
\end{equation}
where the nonlinear operation is applied element-wise. In both cases, we obtain constant instantaneous power per antenna equal to one. We study in the next section the impact of OFDM or single-carrier processing on the PAPR and how standard linear multiuser precoding techniques affect those signals.
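Both nonlinearities are easily reproduced numerically. The following
element-wise Python sketch (our illustration, not part of the original work)
implements the two models above and yields unit instantaneous power per
antenna:
\begin{verbatim}
import numpy as np

def quantize_1bit(y):
    # 1-bit DACs on the in-phase and quadrature branches
    # (note: sign(0) = 0 here; real DACs would break the tie)
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2.0)

def constant_envelope(y):
    # phase-only (constant-envelope) transmission
    return np.exp(1j * np.angle(y))

y = np.array([0.3 - 1.2j, -0.7 + 0.1j])
print(np.abs(quantize_1bit(y)), np.abs(constant_envelope(y)))  # all ones
\end{verbatim}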
\section{PAPR Comparison between single-carrier and OFDM systems}
\label{sec:pagestyle}
In this section, we show that for the same power spectral density and the same performance under AWGN (with perfect orthogonal transmission), the worst-case peak power in the single-carrier (SC) systems grows only logarithmically with the spectral rolloff (steepness of the spectrum in the transmission band), whereas the worst-case peak power in OFDM grows linearly with the rolloff (i.e., proportional to the number of sub-carriers).
\subsection{OFDM vs. block-wise single-carrier transmission}
We compare OFDM and block-wise single-carrier modulation (also known as SC-FDMA in LTE) assuming an oversampling (interpolation) factor of two. In OFDM, a block of $M$ data symbols (consisting in general of QAM symbols) is mapped to the lower frequency points of a $2M$-point IFFT with zero-padding. In the block-wise single-carrier scheme, an additional $M$-point FFT is used to precode the data, resulting in a kind of data mapping in the time domain. The single-carrier processing at the transmitter can also be regarded as a cyclic convolution of data blocks with the impulse response of a discrete rectangular filter with a window size of $M$ out of $2M$ frequency points,
\begin{equation}
P_k=\left\{
\begin{array}{ll}
\frac{1}{\sqrt{M}} & \textrm{~for~} k < \frac{M}{4} \textrm{~or~} 2M-k < \frac{M}{4}-1 \\
\frac{1}{2\sqrt{M}} & \textrm{~for~} k = \frac{M}{4} \textrm{~or~} 2M-k = \frac{M}{4}-1,
\end{array}
\right.
\end{equation}
and zero elsewhere, with $0 \leq k < 2M$. The impulse response of the cyclic pulse shaping filter is given by applying an IFFT
\begin{equation}
p[n] =\left\{
\begin{array}{ll}
1 & \textrm{~for~} n =0 \\
\frac{\sin\left(\pi \frac{n}{2}\right)}{ M \sin\left(\pi \frac{n}{2M}\right) } & \textrm{~otherwise} \end{array} \right. , \quad 0 \leq n < 2M.
\end{equation}
Both systems have the same spectral density since the additional $M$-point FFT in SC-FDMA is a unitary transformation that does not change the second-order statistics. The resulting steepness of the spectral confinement is proportional to $M$. Also, in the AWGN case, both systems provide perfect orthogonality between data symbols and thus achieve the same BER performance. However, we show that these systems have significantly different behavior in terms of PAPR.
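For readers who wish to reproduce this comparison, a minimal Python sketch of
the two modulators is given below. The mapping of the $M$ data symbols to the
lowest IFFT bins is one plausible reading of the description above, and the
normalizations are our own choices for illustration:
\begin{verbatim}
import numpy as np

def ofdm_mod(data):
    # map M symbols to the low-frequency bins of a 2M-point IFFT
    M = len(data)
    grid = np.zeros(2 * M, dtype=complex)
    grid[:M] = data
    return np.fft.ifft(grid) * np.sqrt(2 * M)

def sc_fdma_mod(data):
    # additional unitary M-point FFT precoding (block-wise single carrier)
    M = len(data)
    return ofdm_mod(np.fft.fft(data) / np.sqrt(M))

qpsk = (np.random.choice([1, -1], 64)
        + 1j * np.random.choice([1, -1], 64))
print(np.max(np.abs(ofdm_mod(qpsk))**2),      # typically the larger peak
      np.max(np.abs(sc_fdma_mod(qpsk))**2))
\end{verbatim}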
\subsection{Peak-to-average power analysis}
For simplicity, in this brief analysis we assume the input to be BPSK. For OFDM, the maximum peak-to-average-power is $M$ and is achieved when all the inputs are identical. In the single-carrier case the peak power is achieved at the intermediate time instants between two consecutive symbols, when all symbols add coherently. Therefore, we get the peak power
\begin{equation}
\begin{aligned}
\left(\sum_{n=0}^{M-1} |p[2n+1]|\right)^2 &= \left(\sum_{n=0}^{M-1} \left| \frac{\sin\left(\pi \frac{2n+1}{2}\right)}{ M \sin\left(\pi \frac{2n+1}{2M}\right) } \right|\right)^2 \\
&= \left(\sum_{n=0}^{M-1} \frac{1}{ M \sin\left(\pi \frac{2n+1}{2M}\right) } \right)^2 \\
&\leq \left( \int\limits_{\frac{1}{4M}}^{1-\frac{1}{4M}} \frac{1}{\sin(\pi x)} {\rm d}x \right)^{\! 2} \!\! + \! \frac{1}{M \sin (\frac{\pi}{2M}) }\\
&\approx \frac{4}{\pi^2}\left(\log\left(\tan\left(\frac{\pi}{8M}\right)\right) \right)^2 \\
&\approx \frac{4}{\pi^2}\left(\log\left(\frac{\pi}{8M}\right) \right)^2,
\end{aligned}
\end{equation}
where the approximations hold for large $M$. Clearly, we see that the peak power of the single-carrier system increases only as $\log^2 M$, showing its superiority compared to OFDM in terms of PAPR. Standard linear multiuser precoding, however, diminishes this property due to the central limit theorem \cite{Jedda2019}. Our goal is to preserve this advantage in massive MIMO using sparse precoding.
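The scaling can be checked numerically. The short sketch below (ours) sums
the odd-indexed filter taps and compares the result with the
$\frac{4}{\pi^2}\log^2\!\big(\frac{8M}{\pi}\big)$ estimate:
\begin{verbatim}
import numpy as np

def sc_peak_power(M):
    # coherent sum of |p[2n+1]| for BPSK input (worst case)
    n = np.arange(M)
    taps = 1.0 / (M * np.sin(np.pi * (2 * n + 1) / (2 * M)))
    return np.sum(np.abs(taps)) ** 2

for M in (16, 64, 256, 1024):
    estimate = (2.0 / np.pi) ** 2 * np.log(8 * M / np.pi) ** 2
    print(M, sc_peak_power(M), estimate)   # grows like log^2(M), not M
\end{verbatim}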
\section{Sparse precoding design}
\label{sec:typestyle}
The most common choice for the zero-forcing precoder minimizes the Frobenius norm of $\B{P}$, i.e., the transmit power:
\begin{equation}
\min\limits_{\B{P}} \left\| \B{P} \right\|_{2,2}^2 \quad \textrm{s.t. } \B{H}\cdot \B{P}= {\bf I},
\end{equation}
where the $\ell_{p,q}$ matrix norm is defined as
\begin{equation}
\left\| \B{A} \right\|_{p,q}^q= \sum_j \left( \sum_i |a_{i,j}|^p \right)^{q/p}.
\end{equation}
The solution to this problem is famously given by the pseudo-inverse of $\B{H}$
\begin{equation}
\B{P}= \B{H}^{\rm H} \left( \B{H} \B{H}^{\rm H} \right)^{-1},
\end{equation}
which aims at minimizing the transmit power while fulfilling the zero-forcing requirement, or equivalently it maximizes the signal-to-noise ratio at the user terminals for a given power constraint. However, this solution results in a dense matrix $\B{P}$ (even when $\B{H}$ is sparse), so that the distribution of $\B{P}\cdot \B{s}$ converges element-wise to a Gaussian distribution with high peak values. This happens even when $\B{s}$ is QPSK, resulting in high distortion at the DACs or amplifiers and eventually producing an error floor at high SNR. Therefore, an alternative formulation is presented in the next section.
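For reference, the conventional zero-forcing precoder above is a one-liner.
The sketch below (ours, with an i.i.d.\ random channel as a stand-in) also
verifies the zero-forcing property:
\begin{verbatim}
import numpy as np

K, N = 5, 30
H = (np.random.randn(K, N) + 1j * np.random.randn(K, N)) / np.sqrt(2)

P_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # pseudo-inverse of H
print(np.allclose(H @ P_zf, np.eye(K)))             # True: H P = I
\end{verbatim}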
\section{New approach: Formulation based on the $\ell_{1,2}$ norm}
If the input $\B{s}$ is a QPSK signal and the desired output of the precoder $\B{Ps}$ should be constant envelope or even QPSK (in the case of one-bit DACs), then we propose to design the precoder to have sparse rows, in order to preserve the statistical properties of $\B{s}$ as much as possible after precoding. To this end, we use the $\ell_{1,2}$ matrix norm
\begin{equation}
\min\limits_{\B{P}} \left\| \B{P}^{\rm T} \right\|_{1,2}^2=\sum_i \Big( \sum_j |p_{i,j}| \Big)^2 \quad \textrm{s.t. } \B{H}\cdot \B{P}= {\bf I}.
\end{equation}
This problem is convex and can be solved efficiently. For constant-envelope signaling, $|p_{i,j}|$ denotes the usual complex modulus (Euclidean norm).
For the one-bit DAC case, as the quantization is applied separately to the in-phase and quadrature components, it is appropriate to use a real-valued representation of the channel
and to redefine the norm as
\begin{equation}
|p_{i,j}| \stackrel{\rm 1-bit}{=} |{\rm Re}\{p_{i,j}\}| + |{\rm Im}\{p_{i,j}\}| .
\end{equation}
Another important advantage of sparse precoding is the reduced computation for the matrix-vector multiplication.
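Since the design problem above is convex, it can be prototyped directly with
a modeling tool. The following sketch uses the CVXPY package (our choice; any
convex solver would do) for the constant-envelope case with the complex
modulus; we have not verified solver behavior for large dimensions:
\begin{verbatim}
import cvxpy as cp
import numpy as np

K, N = 5, 30
H = (np.random.randn(K, N) + 1j * np.random.randn(K, N)) / np.sqrt(2)

P = cp.Variable((N, K), complex=True)
row_l1 = cp.norm(P, 1, axis=1)          # per-antenna (row-wise) l1 norms
objective = cp.Minimize(cp.sum(cp.square(row_l1)))
problem = cp.Problem(objective, [H @ P == np.eye(K)])
problem.solve()
print(np.round(np.abs(P.value), 2))     # rows tend to be near-sparse
\end{verbatim}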
\subsection{Combined $\ell_{1,2}/\ell_{2,2}$ regularization}
As we will see in the simulations, the $\ell_{2,2}$ formulation performs better at low SNR while the $\ell_{1,2}$ approach performs better at high SNR. This is because the quantization effects dominate the noise only for high SNR values. This suggests the combination of both norms in a way that the $\ell_{2,2}$ norm is effective at low SNR while the $\ell_{1,2}$ norm becomes active at high SNR values, similar to the \emph{elastic net} approach of \cite{Zou2005}:
\begin{equation}
\min\limits_{\B{P}} \left\| \B{H}\B{P} -{\bf I} \right\|_{2,2}^2 + \frac{1}{2} \left( \frac{K\cdot \sigma_\eta^2}{N} \left\| \B{P}^{\rm T} \right\|_{2,2}^2 +\lambda\left\| \B{P}^{\rm T} \right\|_{1,2}^2 \right),
\end{equation}
where $\lambda$ is a constant corresponding to the value $K\cdot \sigma_{\eta,\rm cross}^2/N$ at exactly the crossing point of both BER curves (c.f. Fig.~\ref{fig_2}).
This combined $\ell_{1,2}/\ell_{2,2}$ regularized optimization can be solved iteratively using the iterative shrinkage-thresholding algorithm (ISTA) \cite{Yang_2010} with an appropriate stepsize $\mu$
\begin{equation}
\begin{aligned}
&\B{P}^{\ell+1}= {\rm exp}\left({\rm j}\angle(\B{P}^\ell - \mu \B{\Delta}^\ell)\right) \circ \\
&~~~~~~ \max \Big( \left| \B{P}^\ell -\mu \B{\Delta}^\ell \right|-\mu \frac{\lambda}{2} \cdot (\sum_k |\B{p}_{k}^{\ell}|)\cdot \B{1}^{\rm T}, \B{0} \Big),
\end{aligned}
\label{grad_up}
\end{equation}
where $\circ$ represents the Hadamard product and
\begin{equation}
\B{\Delta}^\ell= \left( \B{H}^{\rm H} \B{H} + \frac{1}{2} \frac{K\cdot \sigma_\eta^2}{N} \cdot {\bf I} \right) \B{P}^\ell -\B{H}^{\rm H}.
\end{equation}
In the iterative formula (\ref{grad_up}), the absolute values are taken elementwise (note that for 1-bit we define $|x+y{\rm j}|=|x|+|y|$) and $\B{p}_{k}$ are the column vectors of $\B{P}$.
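A compact NumPy realization of the update (\ref{grad_up}) reads as follows.
This is our sketch, with an arbitrary initialization $\B{P}^0=\B{H}^{\rm H}$
and the constant-envelope (complex-modulus) case; it is not the authors'
reference implementation:
\begin{verbatim}
import numpy as np

def ista_precoder(H, sigma2_eta, lam, iters=500, mu=0.01):
    K, N = H.shape
    P = H.conj().T.copy()                         # arbitrary initialization
    A = H.conj().T @ H + 0.5 * (K * sigma2_eta / N) * np.eye(N)
    for _ in range(iters):
        grad = A @ P - H.conj().T                 # gradient term Delta^l
        G = P - mu * grad
        row_l1 = np.sum(np.abs(P), axis=1, keepdims=True)  # sum_k |p_k|
        mag = np.maximum(np.abs(G) - mu * 0.5 * lam * row_l1, 0.0)
        P = np.exp(1j * np.angle(G)) * mag        # shrinkage step
    return P
\end{verbatim}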
\subsection{Superposition coding for higher-order modulation}
Using higher-order modulation such as 16-QAM for better spectral efficiency while the base station only employs 1-bit DACs at each antenna may seem to be contradictory at first glance. However since the number of antennas is much larger than the number of users, such an idea can be achieved in principle with an appropriate transmit strategy. We propose such a method, still based on linear precoding, which is applicable for square QAM constellations. To this end, we apply the concept of superposition coding \cite{DeCandido2019}, to reformulate the problem again based on QPSK symbols that superimpose "over-the-air" through the channel $\B{H}$ to form the desired QAM constellation at the user terminals. For simplicity, let us consider the 16-QAM case, which can be represented as the sum of two QPSK signals: One least significant symbol (LSB) and one most significant symbol (MSB)
\begin{equation}
\B{s}_{\rm 16QAM}= ( {\bf I}_K \otimes [1,~2]) \cdot \B{s}_{\rm QPSK} = \B{\Pi}\cdot \B{s}_{\rm QPSK},
\end{equation}
where $\B{s}_{\rm QPSK} \in \{\pm 1\pm {\rm j} \}^{2K}$. Now, instead of designing a precoder matrix $\B{P}$ to be applied directly to $\B{s}_{\rm 16QAM}$, we design a larger precoder $\B{P} \in \mathbb{C}^{N\times 2K}$ that is applied to $\B{s}_{\rm QPSK}$, and we let the corresponding signals superimpose through the channel and form the desired signal $\B{s}_{\rm 16QAM}$ at the user terminals. By doing so and additionally encouraging $\B{P}$ to be sparse, we can produce a binary-like signal at the output of the precoder and mitigate the issue of the 1-bit DACs, while at the receiver side a higher-order modulated signal is generated through the channel.
Unfortunately, with the new constraint $\B{H}\B{P}=\B{\Pi}$, the Restricted Isometry Property (RIP) condition is not valid for this problem meaning that the $\ell_1$ solution might not be sparse as desired. In fact, due to symmetry considerations (same channel for both MSB and LSB bits), the solution will present the same symmetry, i.e., the precoding vector for the most significant bit is just twice the one for the LSB bit. To break this symmetry, we propose to add some small non-convex perturbation of the original problem in order to encourage sparse solutions having different precoding vectors for both bits:
\begin{equation}
\begin{aligned}
&\min\limits_{\B{P}} \left\| \B{H}\B{P} -\B{\Pi} \right\|_{2,2}^2 + \\
&\frac{1}{2} \left( \frac{K\cdot \sigma_\eta^2}{N} \left\| \B{P}^{\rm T} \right\|_{2,2}^2 +\lambda \left(\left\| \B{P}^{\rm T} \right\|_{1,2}^2 + {\rm tr} (\B{P}{\bf J}\B{P}^{\rm H}) \right) \right),
\end{aligned}
\end{equation}
with $ {\bf J}= {\bf I}_K \otimes ({\bf 1} {\bf 1}^{\rm T}-{\bf I}_2)$. The term $ {\rm tr} (\B{P}{\bf J}\B{P}^{\rm H}) $ represents the non-convex perturbation for breaking the symmetry. Again, the iterative shrinkage-thresholding algorithm can be used to solve this optimization (locally).
\section{Simulation Results}
\label{sec:foot}
We first consider a system with $N=30$ antennas, $K=5$ users, and QPSK constellations. The channel entries are $h_{ij} \sim \mathcal{CN}(0,1)$. In the simulations, we used a conservative value for the stepsize, $\mu=0.01$. Accelerated ISTA versions (such as FISTA) can also be used for a significantly faster convergence rate \cite{Yang_2010}. In Fig.~\ref{fig_1}, the complementary cumulative distribution function (CCDF) of the output power is shown for the new precoding technique with $\ell_1$ regularization compared to the conventional ZF technique for different modulation formats. For comparison, the CCDF without precoding is also shown. With the new precoder, the PAPR is only increased marginally, while SC-FDMA and RRC pulse shaping still preserve their advantage compared to OFDM.
The BER performance is shown in Fig.~\ref{fig_2} and Fig.~\ref{fig_3} under constant envelope and 1-bit DAC constraints, respectively. In both cases, we observe substantial improvements at high SNR with sparse precoding. As explained earlier, the elastic net approach with the combined $\ell_1$ and $\ell_2$ regularization provides the best performance over the entire SNR range as it optimizes the trade-off between beamforming gain and PAPR reduction.
Finally, we consider 16-QAM modulation with superposition coding and 1-bit DACs for $K=8$ and $N=400$ in Fig.\ref{fig_5}. At the receive side, we use the following blind estimation method for the scaling factor for each user prior to detection, as proposed in \cite{Hela2017}:
\begin{equation}
f_k= T \cdot \frac{{\rm E}\left[|{\rm Re}\{s\}|+|{\rm Im}\{s\}|\right]}{ \sum_{t=1}^T |{\rm Re}\{\hat{s}_k[t]\}|+|{\rm Im}\{\hat{s}_k[t]\}|},
\end{equation}
where $T$ is the length of the received sequence. Again, sparse linear precoding combined with superposition coding is clearly advantageous at high SNR. The proposed 16-QAM precoder design also exploits the higher antenna count better than standard ZF at higher SNR values.
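The scaling-factor estimate above is straightforward to implement; a short
sketch (ours, with placeholder data) reads:
\begin{verbatim}
import numpy as np

def blind_scale(s_hat_k, constellation):
    # f_k = E[|Re s|+|Im s|] / time-average of |Re s_hat|+|Im s_hat|
    num = np.mean(np.abs(constellation.real) + np.abs(constellation.imag))
    den = np.mean(np.abs(s_hat_k.real) + np.abs(s_hat_k.imag))
    return num / den

# example: 16-QAM reference alphabet, synthetic received samples
ref = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
s_hat = 0.05 * (ref[np.random.randint(0, 16, 1000)]
                + 0.1 * np.random.randn(1000))
print(blind_scale(s_hat, ref))   # approx. 1/0.05 = 20
\end{verbatim}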
\section{Conclusion}
\label{sec:copyright}
We introduced a novel sparse precoding method to cope with the PA and DAC issues in large-scale MIMO. We found the method to substantially improve the performance of standard linear precoders while reducing the amount of computation required to implement the linear precoder. With the increased demands for antenna element counts, we believe techniques offering reduced analog and digital hardware complexity such as the method presented here will be an essential ingredient in the development of future ultra-massive MIMO systems.
\begin{figure}
\centering
\psfrag{eta}[c][c]{$\eta$}
\psfrag{P}[c][c]{$P({\rm output~power} > \eta)$}
\centerline{\includegraphics[width=6.4cm]{sc_vs_ofdm_PAPR_10x64_3_rev}}
\caption{OFDM vs. single-carrier in terms of the CCDF of instantaneous output power for QPSK input and $M = 128$, $N=30$ antennas, and $K=5$ users.}
\label{fig_1}
\end{figure}
\begin{figure}
\psfrag{-10*log10}[c][c]{\tiny $-10\log_{10} \sigma_\eta^2$}
\psfrag{BER}[c][c]{\tiny BER}
\centerline{\includegraphics[width=5.5cm]{QPSK_5x30}}
\vspace{-0.5cm}
\caption{BER Performance with constant envelope precoding for the downlink with $K = 5$ users and $N = 30$ antennas, QPSK.}
\label{fig_2}
\end{figure}
\begin{figure}
\psfrag{-10*log10}[c][c]{\tiny $-10\log_{10} \sigma_\eta^2$}
\psfrag{BER}[c][c]{\tiny BER}
\centerline{\includegraphics[width=5.5cm]{precoding_res4}}
\vspace{-0.5cm}
\caption{BER for $K = 5$ and $N = 30$, QPSK, with 1-bit DACs.}
\label{fig_3}
\end{figure}
\begin{figure}
\psfrag{-10*log10}[c][c]{\tiny $-10\log_{10} \sigma_\eta^2$}
\psfrag{BER}[c][c]{\tiny BER}
\centerline{\includegraphics[width=5.5cm]{16QAM_8x400}}
\vspace{-0.5cm}
\caption{BER for $K = 8$ and $N = 400$, 16QAM, 1-bit DACs}
\label{fig_5}
\end{figure}
\pagebreak
\bibliographystyle{IEEEbib}
|
2,877,628,091,189 | arxiv | \section{Introduction}
Undoubtedly one of the most powerful theorems in topology is Tychonoff's theorem, saying that arbitrary products of compact (not necessarily Hausdorff) spaces are again compact.
As observed by Kelley \cite{Kelley} already in the early fifties of the last century, this theorem is equivalent to the full Axiom of Choice (AC), while its restriction to
Hausdorff spaces or even to the much larger class of sober spaces was later shown to be equivalent to the weaker Prime Ideal Theorem (PIT) (see \L o\'s and Ryll-Nardzewski \cite{LosRyll}, Johnstone \cite{Johnstone}).
In the present paper we are dealing primarily with arbitrary products of spaces and make permanent use of the surjectivity of the projections; thus, it will be unavoidable to invoke the full strength of (AC),
and we shall do so without particular emphasis. However, if the Continuum Hypothesis (CH) or even the General Continuum Hypothesis (GCH) is involved, this will be mentioned explicitly.
Of course, many variants of compactness have been studied in the past with respect to their stability under the formation of products.
Since some weaker forms of compactness like paracompactness or countable compactness behave
badly already under the formation of products of two factors (cf. \cite{Dugundji, Novak,counterexamples}), we shall exclude such properties from our present study.
On the other hand, it will be reasonable to include several product-stable properties that are not typically modified compactness properties. A list of relevant topological properties is added at the end of this note.
Our primary concern is the study of {\em local} topological properties and their behavior under the formation of arbitrary products.
There is a weak and a strong form of localization, which sometimes coincide, but sometimes differ essentially.
Let $\T$ be a class of topological spaces, referred to as {\em $\T$-spaces}.
Then, by a \textit{basic $\T$-space}\index{Basic T-space@Basic $\T$-space} we mean a topological space having an open base consisting of subsets that are $\T$-spaces with respect to the induced topology,
and by a \textit{local $\T$-space}\index{Local T-space@Local $\T$-space} a topological space in which every point has a neighborhood base of subsets being $\T$-spaces with respect to their subspace topology.
Of course, every basic $\T$-space is a local $\T$-space, but the converse fails, for example, if $\T$ is the class of compact spaces, whereas both notions agree in case $\T$ is the class of connected spaces.
If we speak of local compactness or local connectedness, respectively, we refer to the above definition for the class $\T$ of compact or connected spaces, respectively. This definition is the one commonly adopted for local compactness in the absence of higher separation axioms (cf.\ \cite{Erne2009, Gierz, Johnstone}), because to require only at least one compact neighborhood for each point would be too weak in order to derive substantial conclusions. However, in the Hausdorff setting of $T_2$-spaces, both notions of local compactness are equivalent.
Given an infinite cardinal $\kappa$, a union of fewer than $\kappa$ $\T$-spaces will be referred to as a {\em $\T_\kappa$-space}. Our main results are:
\vspace{1ex}
{\em Let $\S,\,\T$ be classes of topological spaces such that $\T$ is closed under continuous images and
a product of topological spaces is a $\T$-space iff all factors are $\T$-spaces and fewer than $\kappa$ are not $\S$-spaces.
Then a product of topological spaces is a local (basic) $\T$-space iff all factors are local (basic) $\T$-spaces, all but finitely many are $\T$-spaces, and fewer than $\kappa$ are not $\S$-spaces.}
\vspace{1ex}
{\em Assume the class $\T$ is closed under continuous images and arbitrary products.
If a product of spaces is a $\T_{\kappa}$-space then all factors are $\T_{\kappa}$-spaces and for some $\lambda < \kappa$, fewer than $\lambda$ factors are not $\T$-spaces.
The converse holds if {\rm (GCH)} is assumed and $\kappa$ is a regular limit cardinal or the successor of a regular cardinal.}
\vspace{1ex}
Applying these two general results to various specific classes of topological spaces, we immediately arrive at several known and some new Tychonoff-like theorems concerning global or local properties of topological spaces.
Often the properties under consideration admit a point-free description, that is, a characterization by properties of the open set frames that are \mbox{invariant} under lattice isomorphisms.
This aspect together with the familiar observation that sums of spaces turn into products of their open set frames enables us to establish product decomposition theorems for certain classes of frames, like supercontinuous (=\,completely distributive) lattices and hypercontinuous frames.
\vspace{1ex}
Parts of our results are contained in the first author's 2012 Bachelor thesis.
\newpage
\section{Set- and order-theoretical preliminaries}
\label{set}
For later use, let us start by recalling a few set-theoretical conventions.
Each ordinal (number) is regarded as the set of all smaller ordinals. Thus, $\lambda < \kappa$ and $\lambda \in \kappa$ are equivalent statements about ordinals $\kappa$ and $\lambda$.
We denote by $|A|$ the cardinality of a set $A$; it is the minimal ordinal equipollent to $A$. A cardinal (number) is such a minimal ordinal, equal to its own cardinality.
Given a cardinal number $\kappa$, a set $A$ is said to be \textit{$\kappa$-small}\index{kappa-small@$\kappa$-small} if $|A| < \kappa$,
and, on the contrary, \textit{$\kappa$-large}\index{kappa-large@$\kappa$-large} if $|A| \geq \kappa$. The cardinal successor of $\kappa$ is denoted by $\kappa^+$.
If $P$ is any property and $(X_i : i\in I)$ is a family of sets or spaces, we say (by slight abuse of language) that {\em fewer than $\kappa$ of the $X_i$'s have property P} if we mean that the set
$\{ i\in I : X_i \mbox{ has } P\}$ is $\kappa$-small (whereas the requirement that the set $\{ X_i : X_i \mbox{ has } P\}$ be $\kappa$-small may be weaker if some or all of the $X_i$'s coincide).
By definition of $\omega$, the least infinite cardinal, ``$\omega$-small'' means ``finite'', and ``fewer than $\omega$'' means ``finitely many''. For the cardinal successor $\omega_1 = \omega^+$
(the least uncountable cardinal), ``$\omega^+$-small'' means ``countable''.
A subset $Q$ of a preordered set $P$ is called {\em cofinal} if for each $x\in P$ there exists a $y \in Q$ with $x\leq y$.
The cofinality ${\rm cf}(\kappa)$ of a cardinal number $\kappa$ is defined as the minimal cardinality of a cofinal subset, and $\kappa$ is said to be {\em regular} iff $\kappa= {\rm cf}(\kappa)$.
In that case, unions of fewer than $\kappa$ sets that are $\kappa$-small are $\kappa$-small.
We write (GCH) to indicate that the Generalized Continuum Hypothesis is assumed, postulating the equation $\kappa^+\! = 2^\kappa$ for all infinite cardinals $\kappa$.
The special case of the classical Continuum Hypothesis $\omega^+\! = 2^\omega$ is indicated by (CH).
Notice the following properties of infinite sets and cardinals:
\begin{lem}
\label{infinite}
{\rm (1)} For each infinite set $I$ there is a partition into disjoint subsets \mbox{$J_i \subseteq I$} with $|I|=|J_i|$ for all $i \in I$.
{\rm (2)} Each infinite cardinal $\kappa$ is (isomorphic to) the ordinal sum of all smaller ordinals.
{\rm (3)} {\rm (GCH)} assures $\kappa^\lambda = \kappa$ for all infinite cardinal numbers $\kappa, \lambda$ with $\lambda < {\rm cf}(\kappa)$.
\end{lem}
\BP
(1) Note $I\times I = \Sigma_{i \in I}I = \bigcup \{ I \times \{i\}: i\in I\}$ and $|I|=|I\times I|=|\Sigma_{i \in I}I|$;
so there is a bijection $f:\Sigma_{i \in I}I \rightarrow I$, and one may put $J_i = f[I\times \{ i\}]$.
(2) For $\omega\! \leq \! \nu \! < \! \kappa$, the ordinal sum $\sum_{\iota < \nu} \iota$ is (isomorphic to) a unique ordinal $\lambda_{\nu} < \kappa$
(as $|\sum_{\iota < \nu} \iota| \leq |\nu |^2 = |\nu | < \kappa$),
whence \mbox{$\sum_{\iota < \kappa }\iota = \sup_{\omega\leq \nu < \kappa} \sum_{\iota < \nu} \iota \leq \kappa$.}\\
The reverse inequality is obvious.
For (3), see e.g.\ Jech \cite{Jech}.
\EP
Let us fix a few order-theoretical standard concepts and notations we shall need in due course.
Given a preordered set $P$, a subset $A$ of $P$ and an element $x \in P$,
\begin{itemize} \itemsep0pt
\item[] $\uparrow\! A:=\{y\! \in\! P : \exists\, x\! \in\! A\, ( x \leq y) \}$ is the \textit{upper set}\index{Upper set} ({\em upset}) generated by\,$A$,
\item[] $\downarrow\! A:=\{y\! \in\! P : \exists\, x\! \in\! A\, ( y \leq x) \}$ is the \textit{lower set}\index{Lower set} ({\em downset}) generated by\,$A$,
\item[] $\downarrow\! x:= {\downarrow\! \{x\}}$ is the \textit{principal ideal}\index{Principal ideal} generated by $x$,
\item[] $\uparrow\! x:= {\uparrow\! \{x\}}$ is the \textit{principal filter}\index{Principal filter} generated by $x$.
\end{itemize}
The \textit{specialization order}\index{Specialization order} on a space $X$ with topology $\O (X)$ is defined by
\[x \leq y \ \Leftrightarrow \ \forall \,U \in \O(X)\ ( x\in U \Rightarrow y \in U) \ \Leftrightarrow \ x\in cl\{ y\}.\]
This relation is always a {\em preorder} or {\em quasiorder} (reflexive and transitive). It is an order (antisymmetric) iff $X$ is $T_0$; and it is the identity relation iff $X$ is $T_1$.
Unless otherwise specified, all order-theoretical statements about topological spaces will refer to the specialization order.
Thus, in a topological space, principal ideals and point closures coincide: $\downarrow \! y =\{x \in X : x \leq y\}= cl\{y\}.$
Hence, every open set is an upper set and every closed set is a lower set.
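As a quick illustration (added here for the reader; it is not part of the surrounding development), consider the Sierpi\'nski space
\[ X=\{0,1\},\quad \O(X)=\{\emptyset,\{1\},X\}: \qquad cl\{0\}=\{0\},\ \ cl\{1\}=X, \]
so that $0\leq 1$ but $1\not\leq 0$; the point $1$ is dense, ${\downarrow\! 1} = cl\{1\} = X$, and the only nontrivial open set $\{1\}={\uparrow\! 1}$ is indeed an upper set.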
For a subset $A$ of a space $X$, the {\em saturation} ${\uparrow\! A}$ is the intersection of all neighborhoods of $A$, while the closure $cl\,A$ may properly contain ${\downarrow\! A}$;
however, in {\em A-spaces}, where arbitrary intersections of open sets are open, $cl\,A = {\downarrow\! A}$.
It is also helpful to notice that continuous maps $f$ between spaces are monotone with respect to the specialization order, i.e., $x\leq y$ implies $f(x) \leq f(y)$.
The specialization functor, sending a space to the underlying set endowed with the specialization order, preserves initial structures, and in particular products:
\begin{prop}
\label{produktspezord}
The product of the specialization orders of a family of topological spaces is the specialization order of the product.
\end{prop}
Indeed, the equivalence $\ x \leq y \ \Leftrightarrow \ \forall\, i \in I \, (x_i \leq y_i)\ $
is just a reformulation of the well known identity \ \mbox{$cl \{ y \}= cl \prod_{i\in I} \{y_i\} = \prod_{i\in I} cl \{ y_i \}$.}
\vspace{1ex}
Given a cardinal number $\kappa$, a preordered set or a topological space is said to be {\em $\kappa$-filtered} or {\em $\kappa$-down-directed} if every $\kappa$-small subset has a lower bound.
Topologically speaking, a space is $\kappa$-filtered if and only if each $\kappa$-small set of point closures has nonempty intersection.
The $\omega$-filtered preordered sets are just the down-directed ones. The $\omega$-filtered spaces are the {\em ultraconnected} ones,
i.e. those nonempty spaces in which the intersection of any two nonempty closed sets is nonempty (see e.g.\ Steen and Seebach \cite{counterexamples}).
More generally, the $\kappa$-filtered spaces are those in which every $\kappa$-small set of nonempty closed subsets has nonempty intersection ({\em $\kappa$-ultraconnected spaces}).
\begin{theo}
\label{kfilt}
{\rm (1)} Monotone maps send $\kappa$-filtered sets to $\kappa$-filtered sets.
{\rm (2)} A product of preordered sets is $\kappa$-filtered iff each factor is $\kappa$-filtered.
\end{theo}
\BP
(1) is straightforward: monotone maps preserve lower bounds.
(2) Suppose $(X_i\! : i\in I)$ is a family of $\kappa$-filtered preordered sets, and \mbox{$X = \prod_{i\in I} X_i$} is their product.
For a $\kappa$-small $A\subseteq X$, each projection set $\pi_i[A]$ is a $\kappa$-small subset of $X_i$, so there exist lower bounds $x_i$ of $\pi_i[A]$, and then $x =(x_i : i\in I)$ is a lower bound of $A$ in $X$.
The other implication is clear by (1).
\EP
\begin{coro}
\label{filtspaces}
{\rm (1)} Continuous maps send $\kappa$-filtered spaces to $\kappa$-filtered spaces.
{\rm (2)} A product of topological spaces is $\kappa$-filtered iff each factor is $\kappa$-filtered.
\end{coro}
Note that a subset of a space is {\em supercompact} (in the sense that it has a dense point) iff it is $\kappa$-filtered for all $\kappa$. Spaces with a base of supercompact open sets are also called {\em B-spaces},
and locally supercompact spaces (in which each point has a neighborhood base consisting of supercompact sets) are also termed {\em C-spaces}.
A-, B- and C-spaces play a central role in the interplay between order and topology (see e.g.\ \cite{Erne1991, Erne2005, Erne2009}).
\newpage
\section{Products of spaces with local properties}
\label{Prod}
In order to avoid undesired (but trivial) exceptions, all products considered in the sequel are tacitly assumed to be nonempty, and consequently, their factors are nonempty, too. By (AC), all projections
\[\pi_j : \prod_{i\in I} X_i \rightarrow X_j,\ x \mapsto x_j\]
are surjective, and so are all {\em generalized projections}
\[\pi_J : \prod_{i\in I} X_i \rightarrow \prod_{j\in J} X_j, \ x \mapsto x|_J \ \ (J\subseteq I).\]
For the sake of later use, we note a well-known fact, which holds in arbitrary categories having products; for the special case of topological spaces, see e.g.\ Dugundji \cite{Dugundji}.
\begin{lem}\label{productassociative}
Let $(X_i : i \in I)$ be a family of topological spaces and $\{ J_k : k \in K \}$ a partition of $I$ into nonempty subsets.
Then
$$\prod_{i \in I} X_i \cong \prod_{k\in K}(\prod_{j \in J_k} X_j).$$
\end{lem}
\vspace{-3ex}
Henceforth,
\BIT
\IT[] {\em $\T$ always denotes a class of topological spaces (so-called $\T$-spaces) that is closed under images of continuous surjections.}
\EIT
In particular, the continuity and surjectivity of the generalized projections guarantees that
\BIT
\IT[] {\em if a product is in $\T$ then so is any subproduct and each factor.}
\EIT
Recall that a {\em basic $\T$-space} has an open base consisting of $\T$-subspaces, while in a {\em local $\T$-space} every point has a neighborhood base consisting of $\T$-subspaces.
The following easy but fundamental result is due to R.-E.\ Hoffmann \cite{REHoffmann}.
\begin{prop}\label{localTspace}
The image of any local $\T$-space under a continuous open surjection is a local $\T$-space.
If a product of topological spaces is a local $\T$-space, all factors are local $\T$-spaces and all but finitely many are $\T$-spaces.
The converse holds as well if $\T$ is closed under products.
The same statements are valid with ``local'' substituted by ``basic''.
\end{prop}
By the classical Tychonoff theorem, a product of topological spaces is compact iff all factors are compact. Hence, Proposition \ref{localTspace} immediately provides the well-known fact that
a product is locally compact iff all factors are locally compact and all but finitely many are compact, and similar conclusions for (path-)connectedness instead of compactness. But there are also less familiar
applications of that very flexible proposition.
For example, calling a space {\em compactly based} if it has a base of compact open sets, we see that a product of topological spaces is compactly based
iff each factor is compactly based and only a finite number of the factors are not compact. A {\em spectral space} is compactly based, sober (see e.g.\ \cite{Gierz} or \cite{Johnstone})
and coherent, where coherence means that finite intersections of compact saturated subsets are compact (in particular, the entire space must be compact).
Up to isomorphism, the open set frames of such spaces are exactly the {\em coherent} or {\em arithmetic frames}, or equivalently,
the ideal lattices of bounded distributive lattices. Hence, the spectral spaces are the duals of bounded distributive lattices in the classical Stone duality \cite{Sto2}.
A local variant is the following: according to \cite{Gierz}, a {\em stably compact} space is a locally compact, sober and coherent space. Via the patch functor, these spaces are in one-to-one
correspondence with the {\em compact pospaces} (i.e.\ compact topological spaces equipped with an additional closed order),
and via the open set functor, they are dual to the {\em stably continuous frames} (see \cite{Gierz} for details).
Since a product of spaces is sober or coherent, respectively, iff each factor has the corresponding property,
we have the following interesting consequences of Proposition \ref{localTspace}:
\begin{coro}
\label{spectral}
\BIT
\IT[{\rm (1)}] A product of spaces is spectral iff all factors are spectral.
\IT[{\rm (2)}] A product of spaces is stably compact iff all factors are stably compact.
\IT[{\rm (3)}] A product of ordered topological spaces is a compact pospace iff all factors are compact pospaces.
\EIT
\end{coro}
Our first major theorem extends Proposition \ref{localTspace} and will serve as the clue for many Tychonoff-like product theorems involving local properties.
\begin{theo}
\label{localSTspace}
Let $\S$ be a class of topological spaces and $\kappa$ an infinite cardinal such that
a product of topological spaces is a $\T$-space iff all factors are $\T$-spaces and fewer than $\kappa$ of them are not $\S$-spaces.
Then a product of topological spaces is a local $\T$-space iff all factors are local $\T$-spaces, all but finitely many are $\T$-spaces, and fewer than $\kappa$ are not $\S$-spaces.
The same conclusion holds with ``basic'' instead of ``local''.
\end{theo}
\BP
Let $X=\prod_{i \in I}X_i$ be a local $\T$-space. Then, by Proposition \ref{localTspace}, all factors are local
$\T$-spaces and all but finitely many are $\T$-spaces. Suppose that for a set $K \subseteq I$ having cardinality $\kappa$, no $X_k$ with $k\in K$ is an $\S$-space.
Since $\kappa$ is infinite, there exists a partition of $K$ into $\kappa$-large subsets $K_l$, $l\in L$, $|L|=\kappa$ (Lemma \ref{infinite}).
By Lemma \ref{productassociative}, the spaces $Y _l =\prod_{k \in K_l}X_k$ satisfy $\prod_{l\in L} Y_l \cong X$.
As this is a local $\T$-space, we may again apply Proposition \ref{localTspace}
to see that $Y_l$ is a $\T$-space for all but finitely many $l \in L$.
Since $L$ is infinite, we find indices $l \in L$ such that $Y_l$ is a $\T$-space.
Thus, by the hypothesis on $\S$ and $\T$, the subset $\{k \in K_l : X_k \not\in \S\}$ is $\kappa$-small.
But $K_l$ is $\kappa$-large, hence some $X_k$ is an $\S$-space -- a contradiction.
The other implication is almost routine: suppose each factor $X_i$ is a local $\T$-space, $J = \{i \in I: X_i\not\in \T\}$ is finite and $\{i \in I : X_i \not\in \S\}$ is $\kappa$-small.
Let $U=\prod_{i\in I}U_i$ be a basic open neighborhood of $x\in X\! = \! \prod_{i \in I}X_i$.
Then \mbox{$K =\{i \in I:U_i \neq X_i\}$} is finite. For $i \in J \mathop{\cup} K$, pick a $\T$-neighborhood $V_i \subseteq U_i$ of $x_i$, and set \mbox{$V_i=X_i\in \T$} for $i \in I \setminus (J \mathop{\cup} K)$.
By the hypotheses on $\S$ and $\T$, the product $\prod_{i\in I}V_i$ is a $\T$-space and is a $\T$-neighborhood of $x$ contained in $U$.
The proof for the ``basic'' case is quite similar.
\EP
To deduce Theorem \ref{localSTspace}, we have used a part of Proposition \ref{localTspace}; of course, the latter in turn is a consequence of the former, obtained simply by taking the special case $\S = \T$.
Let us note a further consequence of Proposition \ref{localTspace} combined with Corollary \ref{filtspaces}:
{\em A product of topological spaces is locally $\kappa$-filtered iff all factors are locally $\kappa$-filtered and only finitely many are not $\kappa$-filtered.
In particular, a product of topological spaces is locally ultraconnected iff all factors are locally ultraconnected and only finitely many are not ultraconnected.}
\section{Products of unions of $\T$-spaces}
\label{kappaunion}
Let $\kappa$ be an infinite cardinal number. By a {\em $\T_{\kappa}$-space} we mean a topological space that is representable as a union of fewer than $\kappa$ many $\T$-subspaces.
For example, if $\T$ is the class of compact spaces, the $\T_{\omega}$-spaces are still compact, whereas the $\T_{\omega^+}$-spaces are just the $\sigma$-compact ones.
By our general assumption that $\T$ is closed under continuous images, we have:
\begin{lem}
\label{imageTunion}
The image of a $\T_{\kappa}$-space under a continuous map is a $\T_{\kappa}$-space.
\end{lem}
\begin{prop}\label{Tunion1}
If a product of topological spaces is a $\T_{\kappa}$-space then each factor is a $\T_{\kappa}$-space, and for some cardinal $\lambda\! < \!\kappa$, fewer than $\lambda$ factors are not $\T$-spaces.
\end{prop}
\BP
Consider a product $ X = \prod_{i \in I} X_i$ and suppose $X=\bigcup_{j \in J} Y_j$ for some $\kappa$-small set $J$ and certain $Y_j\in\T$.
That the factors $X_i$ must be $\T_{\kappa}$-spaces is clear by Lemma \ref{imageTunion}. By way of contraposition, assume that for each $\lambda < \kappa$, at least $\lambda$ many factors are not $\T$-spaces.
Then, in particular, this holds for $\lambda = |J|$. Hence, there exists a subset $K$ of $I$ and a bijection $g : K \rightarrow J$ such that $X_k\not\in \T$ for all $k \in K$.
For each $k\in K$, the set $Z_k = X_k\setminus \pi_k [Y_{g(k)}]$ must be nonempty, since $Y_{g(k)}$ and so $\pi_k [Y_{g(k)}]$ is a $\T$-space, while $X_k$ is not.
Putting $Z_i = X_i$ for $i\in I\setminus K$, we conclude that no $x\in \prod_{i\in I} Z_i$ can be contained in any one of the sets $Y_j$ (otherwise $x_k \in \pi_k [Y_j]$ for $k = g^{-1}(j)$), a contradiction.
\EP
\begin{theo}\label{Tunion2}
{\rm (GCH)} Let $\kappa$ be a regular limit cardinal or the successor of some regular infinite cardinal, and assume that the class $\T$ is closed under products.
Then a product of topological spaces is a $\T_{\kappa}$-space iff all factors are $\T_{\kappa}$-spaces and for some cardinal $\lambda < \kappa$, fewer than $\lambda$ factors are not $\T$-spaces.
\end{theo}
\BP
Let $(X_i : i \in I)$ be a family of $\T_{\kappa}$-spaces so that \mbox{$J=\{j \in I : X_j\not\in \T\}$} is $\lambda$-small for an infinite cardinal $\lambda < \kappa$,
and each $X_j$ is a union of $\T$-subspaces $Y_{k,j}$ ($k \in \lambda_j, \ \lambda_j < \kappa$).
The supremum of all those $\lambda_j$ is smaller than $\kappa$, by regularity of $\kappa$. Hence, taking $\lambda$ sufficiently large, we may assume $\lambda_j = \lambda$ for all $j \in J$.
For $f \in \lambda^J$ (the set of all functions from $J$ into $\lambda$) put $Z_f = \prod_{i\in I} W_i$,
where $W_j = Y_{f(j),j}$ for $j\in J$ and $W_i = X_i$ otherwise. By product closedness of $\T$, each $Z_f$ is a $\T$-space. We have
$\prod_{i \in I} X_i = \bigcup_{f \in \lambda^J} Z_f$, since for each $x=(x_i\! : i\!\in\! I)\in \prod_{i \in I} X_i$ and $j \in J$, there is an index $f(j)$ such that $x_j \in Y_{f(j),j}$, whence $x \in Z_f$.
It remains to check that $|\lambda^J |< \kappa$ (then $X$ is the union of a $\kappa$-small family of $\T$-subspaces).
Either $\kappa$ is a limit cardinal; in that case, $|\lambda^J| \leq \lambda^\lambda = \lambda^+ < \kappa$ by (GCH);
or, $\kappa = \rho^+$ for a regular cardinal $\rho$ and then $|\lambda^J| \leq \rho^{|J|} = \rho < \kappa$; for $\rho^{|J|} = \rho$, use $|J| < \rho = {\rm cf}(\rho)$ and (GCH) in case $\rho > \omega$.
The converse implication is assured by Proposition \ref{Tunion1}.
\EP
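To illustrate the cardinal arithmetic in the second case (a worked instance under (GCH), not needed for the proof): take $\rho = \omega_1$, hence $\kappa = \omega_2$; then $\lambda \leq \omega_1$, every $\lambda$-small index set $J$ is countable, and
$$|\lambda^J| \,\leq\, \omega_1^{\,\omega} \,\leq\, (2^{\omega})^{\omega} \,=\, 2^{\omega\cdot\omega} \,=\, 2^{\omega} \,=\, \omega_1 \,<\, \omega_2 \,=\, \kappa .$$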
\begin{coro}
\label{Tunion3}
{\rm (GCH)} Let $\kappa$ be the successor of a regular infinite cardinal $\lambda$, and let $\T$ be closed under products.
Then a product of topological spaces is a local (resp.\ basic) $\T_{\kappa}$-space iff all factors are local (resp.\ basic) $\T_{\kappa}$-spaces,
all but finitely many factors are $\T_{\kappa}$-spaces, and fewer than $\lambda$ are not $\T$-spaces.
\end{coro}
\BP
Apply Theorem \ref{localSTspace}, with $\T_{\kappa}$ instead of $\T$ and $\T$ instead of $\S$, to the hypothesis assured by Theorem \ref{Tunion2}.
\EP
\section{$\kappa$-union compactness}
\label{Localcomp}
In the following three sections, we have a look at some global and local compactness properties and their productivity,
in order to demonstrate how Theorems \ref{localSTspace} and \ref{Tunion2} come into play.
Let us call a topological space \textit{$\kappa$-union compact} if it is the union of a $\kappa$-small family of compact subsets. In other words, for the class $\T$ of compact spaces,
the $\T_{\kappa}$-spaces are just the $\kappa$-union compact ones, and in particular, $\omega^+$-union compactness means $\sigma$-compactness.
(We avoid the terminology ``$\kappa$-compact'', which in other contexts is reserved to mean that every open cover contains a $\kappa$-small subcover,
a generalized compactness property we shall not discuss here; for example, ``$\omega^+$-compact'' means ``Lindel\"of''.)
In this specific setting, Lemma \ref{imageTunion}, Proposition \ref{Tunion1}, Theorem \ref{Tunion2} and Corollary \ref{Tunion3} amount to:
\begin{lem}
\label{imagekappa}
Continuous images of $\kappa$-union compact spaces are $\kappa$-union compact.
\end{lem}
\begin{prop}\label{k-compact2}
If a product of topological spaces is $\kappa$-union compact then all factors are $\kappa$-union compact, and for some $\lambda < \kappa$, fewer than $\lambda$ are not compact.
\end{prop}
\begin{theo}\label{kappa-union compact}
{\rm (GCH)} Let $\kappa$ be a regular limit cardinal or the successor of some regular infinite cardinal.
Then a product of topological spaces is $\kappa$-union compact iff all factors are $\kappa$-union compact and for a cardinal $\lambda\! <\! \kappa$, fewer than $\lambda$ factors are not compact.
\end{theo}
\begin{coro}
{\rm (GCH)} Let $\kappa$ be the successor of a regular infinite cardinal $\lambda$.
Then a product of topological spaces is locally $\kappa$-union compact iff all factors are locally $\kappa$-union compact,
all but finitely many factors are $\kappa$-union compact, and fewer than $\lambda$ are not compact.
In particular, a product of topological spaces is (locally) $\sigma$-compact iff all factors are (locally) $\sigma$-compact and all but finitely many factors are compact.
\end{coro}
An inspection of the proof for Theorem \ref{Tunion2} shows that neither (GCH) nor (CH) is needed for the conclusion about the productivity of (local) $\sigma$-compactness,
since the equation $\omega^{|J|} = \omega$ holds for $|J| < \omega$, without assuming (CH).
\vspace{1ex}
Entirely analogous results are obtained for ``connected'' or ``path-connected'' instead of ``compact''. Thus, for example, given a regular infinite cardinal $\lambda$,
{\em a product of topological spaces has at most $\lambda$ (path) components iff all factors have at most $\lambda$ (path) components and all but fewer than $\lambda$ factors are (path) connected.}
\section{$\kappa$-filtered spaces and $\kappa$-sequential compactness}
\label{special}
Recall that a topological space is said to be \textit{sequentially compact}\index{Sequential compactness} if every sequence in the space has a convergent subsequence.
More generally, given an infinite cardinal $\kappa$, we mean by a {\em $\kappa$-sequentially compact space} a topological space
in which every $\kappa$-sequence $(x_{\nu} : \nu \in \kappa)$ has a convergent $\kappa$-subsequence
\mbox{$(x_{\tau(\nu )} : \nu \in \kappa)$} (where $\tau : \kappa \rightarrow \kappa$ is a strictly monotone increasing map).
Sequential compactness is incomparable to compactness: while the ordinal space $\omega^+$ is sequentially compact but not compact,
an uncountable power of the real unit interval is compact but not sequentially compact (cf.\ \cite{counterexamples}).
It is easy to see that the class of $\kappa$-sequentially compact spaces is closed under continuous images.
A preordered set or topological space has been called $\kappa$-filtered if each $\kappa$-small subset has a lower bound. The previous notions are related as follows:
\begin{lem}
\label{kappaconvergence}
$\!$The following conditions on a topological space\,$X$\,are equivalent:
\BIT
\IT[{\rm (a)}] Every $\kappa$-sequence in $X$ converges.
\IT[{\rm (b)}] $X$ is $\kappa$-sequentially compact and $\kappa$-filtered.
\IT[{\rm (c)}] $X$ is $\kappa^+$-filtered.
\EIT
\end{lem}
\BP
(a)$\,\Rightarrow\,$(b)\,and\,(c): Clearly, (a) implies $\kappa$-sequential compactness, so it suffices to verify that (a) yields a lower bound for
any subset $A= \{ a_{\nu} : \nu\in \kappa\}$. By Lemma \ref{infinite}, one has a unique isomorphism $f$ between $\kappa$
and the ordinal sum $\sum_{\lambda < \kappa} \lambda = \bigcup \,\{ \lambda \times \{ \lambda\} : \lambda < \kappa\}$
(ordered by $(\iota,\lambda)\leq (\iota',\lambda')$ iff $\lambda < \lambda'$ or $\lambda = \lambda'$ and $\iota \leq \iota'$).
Define a $\kappa$-sequence $(x_{\mu} : \mu \in \kappa)$ by $x_{\mu} = a_{\pi_1 \circ f(\mu )}$, where $\pi_1$ is the projection onto the first coordinate.
This $\kappa$-sequence converges to a point $x$ in $X$. Hence, for any neighborhood $U$ of $x$,
there is a $\lambda \in \kappa$ such that $U$ contains the set $\{ x_{\mu} : \lambda \leq \mu < \kappa\} = A$.
(To see that each $a_{\nu}$ is an $x_{\mu}$ with $\mu \geq \lambda$, note that $\kappa$ is a limit ordinal, whence there exists some
$\lambda'$ with $\max \{ \pi_2 \circ f(\lambda), \nu \} < \lambda' < \kappa$, for which it follows that $f(\lambda ) < (\nu, \lambda' )$, and as $f$ is monotone, $f(\mu) < (\nu, \lambda')$ for all $\mu < \lambda$.
By contraposition, the unique $\mu$ with $f(\mu )= (\nu, \lambda' )$ satisfies $\lambda \leq \mu$ and $x_{\mu} = a_{\pi_1 \circ f(\mu )} = a_{\nu}$.)
Thus, $x$ is a lower bound of $A$.
(b)$\,\Rightarrow\,$(c): Let $A = \{ a_{\nu} : \nu \in \kappa\}$ be any $\kappa^+$-small subset of $X$.
For each (ordinal!) $\iota\in \kappa$, the set $\{ a_{\nu} : \nu \in \iota\}$ has a lower bound $x_{\iota}$, because $X$ is $\kappa$-filtered and $\iota$ ({\it a fortiori} $|\iota|$) is smaller than $\kappa$.
By the hypothesis that $X$ is $\kappa$-sequentially compact, we find a strictly monotone increasing $\tau : \kappa \rightarrow \kappa$ such that the $\kappa$-subsequence
$(x_{\tau(\lambda)} : \lambda \in \kappa )$ converges to a point $x\in X$. Thus, for each open neighborhood $U$ of $x$, there is a $\mu\in \kappa$ such that
$U$ contains each $x_{\tau (\lambda)}$ with $\mu\leq \lambda < \kappa$. Now, for any $\nu\in \kappa$, we find a $\lambda$ with $\mu \leq \lambda < \kappa$ and $\nu < \tau (\lambda )$.
It follows that $x_{\tau (\lambda )}\in U$ and $x_{\tau (\lambda )}\leq a_{\nu}$, hence $a_{\nu}\in U$ (because $U$ is an upper set). This shows that $x$ is a lower bound of $A$.
(c)$\,\Rightarrow\,$(a): The range of a $\kappa$-sequence $(x_{\mu}\! : \mu\in \kappa)$ in $X$ is a $\kappa^+$-small subset and has therefore a lower bound $x$.
By definition of the specialization order, every neighborhood of $x$ contains the whole sequence, so it converges to $x$.
\EP
For $\kappa = \omega$, the equivalence of (a) and (b) was also observed by Lipparini \cite{Lipparini}.
\begin{prop}
\label{kappasequ}
If a product of topological spaces is $\kappa$-sequentially compact then fewer than $2^{\kappa}$ (with {\rm (GCH):} at most $\kappa$) factors are not $\kappa^+$-filtered.
\end{prop}
\BP
By way of contraposition, let us look at a product $X = \prod_{i\in I} X_i$ of spaces none of which is $\kappa^+$-filtered,
where we may assume that $I$ is the set of all functions from $\kappa$ into $\kappa$ (on account of the equation $\kappa^{\kappa} = 2^{\kappa}$).
Using Lemma \ref{kappaconvergence}, pick for each $i\in I$ a non-convergent $\kappa$-sequence $(x^i_\nu : \nu \in \kappa)$ in $X_i$. Consider the elements
$y_{\nu} \in X$ with $y_{\nu,i} = x^i_{i(\nu)}$ (note $i(\nu )\in \kappa$ for $i\in I$).
Any $\kappa$-subsequence of $y = (y_{\nu}: \nu \in \kappa )$ is of the form $y \circ j = (y_{j(\nu)} : \nu \in \kappa )$ for a strictly monotone increasing $j : \kappa \rightarrow \kappa$.
Since $j$ is injective, there is an $i\in I$ with $i\circ j = id_{\kappa}$. Coordinatewise, this means $y_{j(\nu ), i} = x^i_{i\circ j (\nu )} = x^i_{\nu}$. Thus, no $\kappa$-subsequence of
$y$ can converge, as some of its coordinate sequences do not converge. Hence, $X$ is not $\kappa$-sequentially compact.
\EP
\begin{theo}
\label{sequ}
{\rm (CH)} {\rm (1)} A product of topological spaces is sequentially compact iff all factors are sequentially compact and all but countably many are $\omega^+$-filtered.
{\rm (2)} A product of topological spaces is locally sequentially compact iff all factors are locally sequentially compact,
all but finitely many are sequentially compact, and all but countably many are $\omega^+$-filtered.
\end{theo}
\BP
(1) It is well-known that a product of countably many sequentially compact topological spaces is sequentially compact.
Hence, if all other factors are $\omega^+$-filtered, the whole product is still sequentially compact.
For $\kappa = \omega$, Proposition \ref{kappasequ} requires only (CH) and provides the other implication.
(2) Use (1) and apply Theorem \ref{localSTspace} to the class $\S$ of $\omega^+$-filtered spaces and the class $\T$ of sequentially compact spaces.
\EP
An independent proof of (1) under weaker cardinal assumptions was given recently by Lipparini \cite{LippariniII}.
\vspace{1ex}
It would be interesting to discover how far Theorem \ref{sequ} may be extended to $\kappa$-sequentially compact spaces. One step in that direction is provided by
\begin{coro}
\label{kseqproduct}
If $\kappa$ has cofinality $\omega$\,then a product of $\kappa$-sequentially compact spaces all but countably many of which are $\kappa^+$-filtered is $\kappa$-sequentially compact.
\end{coro}
\BP
By the product-stability of $\kappa^+$-filteredness and Lemma \ref{kappaconvergence}, it suffices to verify that
for any sequence $(X_n : n\in \omega )$ of $\kappa$-sequentially compact spaces, their product $X$ is $\kappa$-sequentially compact, too.
The proof is similar to the classical case of sequential compactness but slightly more involved.
From any $\kappa$-sequence $x = (x_{\iota} : \iota \in \kappa )$ in $X$, one may extract successively $\kappa$-subsequences $x\circ \psi_n = (x_{\psi_n(\iota)} : \iota \in \kappa )$ such that
\vspace{-1ex}
\begin{itemize}
\item[(1)] $\varphi_n : \kappa \rightarrow \kappa$ and $\psi_n = \varphi_0 \circ ... \circ \varphi_n$ are strictly monotone increasing\\[-3.5ex]
\item[(2)] $n\leq m$ implies $\psi_n (\iota ) \leq \psi_m (\iota )$ (since each $\varphi_k$ is extensive)\\[-3.5ex]
\item[(3)] the coordinate $\kappa$-subsequence $(x_{\psi_n(\iota ),n}\! : \iota\! \in\! \kappa)$ converges to some $z_n$\,in $X_n$.\\[-3.5ex]
\end{itemize}
Pick a monotone increasing cofinal subsequence $(\alpha_n : n\in \omega )$ of $\kappa$, put
$n_\iota : = \min \{ n\in \omega : \iota \leq \alpha_n \}$ for $\iota \in \kappa$ and define a $\kappa$-subsequence $x\circ \varrho = (x_{\varrho (\iota )} : \iota \in \kappa )$ by
$\varrho (\iota) =\psi_{n_\iota} (\iota)$. This $\varrho$ is actually strictly monotone, since by (1) and (2),
$\iota < \lambda < \kappa \,\Rightarrow\, n_{\iota} \leq n_{\lambda} \,\Rightarrow\, \varrho (\iota ) = \psi_{n_{\iota}}(\iota) \leq \psi_{n_{\lambda}} (\iota) < \psi_{n_{\lambda}} (\lambda ) = \varrho (\lambda )$.
In order to assure that $x\circ \varrho$ converges to $z = (z_n : n\in \omega )$ in $X$, it suffices to check that each coordinate $\kappa$-sequence $(x\circ \varrho )_n = (x_{\varrho (\iota), n}\! : \iota\!\in\! \kappa )$ converges to $z_n$. To that aim, consider a neighborhood $U$ of $z_n$ and a $\mu \in \kappa$ such that $x_{\psi_n (\iota ) , n} \in U$ for all $\iota\in \kappa$ with $\mu \leq \iota$. We may assume $\mu > \alpha_n$ and consequently $n < n_{\mu}$ (otherwise $\mu \leq \alpha_{n_{\mu}} \leq \alpha_n$).
For fixed $\iota$ and the strictly monotone increasing, hence extensive map $\theta = \varphi_{n+1}\circ ...\circ \varphi_{n_{\iota}}$, we get $\varrho (\iota ) = \psi_n \circ \theta (\iota)$.
Thus, $\iota \leq \theta (\iota )$ and therefore $x_{\varrho (\iota ),n} = x_{\psi_n (\theta (\iota)),n} \in U$ for all $\iota$ with $\mu \leq \iota < \kappa$, as desired.
\EP
\section{Hypercompact and supercompact spaces}
\label{hyper}
For evident reasons, we call a finitely generated upper set ${\uparrow\! F}$ in a preordered set a {\em foot}. Referring to the specialization order,
a subset $H$ of a topological space $X$ is called \textit{hypercompact}\index{Hypercompact} if its saturation is a foot, i.e., ${\uparrow \! H} = {\uparrow \! F}$ for some finite $F$ (contained in $H$);
if $F$ can be chosen to be a singleton, $H$ is called \textit{supercompact}\index{Supercompact} (see \cite{Erne1991, Erne2005, Erne2009}).
As demonstrated in \cite{Erne2009}, these notions admit point-free characterizations.
An element $c$ of a poset $T$ is said to be {\em hypercompact} (resp.\ {\em supercompact} or {\em completely join-prime}) if $T\setminus {\uparrow\!c}$ is a finitely generated lower set (resp.\ a principal ideal).
Clearly, these properties are stronger than order-theoretical compactness of $c$, which means that $T\setminus {\uparrow\!c}$ is closed under directed joins (cf.\ \cite{Erne2009, Gierz, Johnstone}).
Now, it turns out that an open subset of a topological space $X$ is hyper- or super\-compact, respectively,
if and only if it has the synonymous order-theoretical property, regarded as an element of the open set frame $\O (X)$.
More to the point for us, not only hyper- and supercompactness, but also local hyper- and supercompactness admit elegant point-free descriptions -- in contrast to local compactness, which is {\em not} invariant under
lattice isomorphisms between the open set frames (see \cite{Gierz} for a sophisticated counterexample):
the supercontinuous frames (=\,completely distributive lattices) are, up to isomorphism, the open set frames of locally supercompact spaces \cite{Erne1991, Erne2009}, while the
\mbox{hypercontinuous} frames are the open set frames of locally hypercompact spaces (also called {\em finitely bottomed spaces}; cf.\ \cite{Erne2009, Lawson1985}).
There are several equivalent definitions of hypercontinuity. The most topologically inspired one is this (see \cite{Gierz}): denoting by $\upsilon P$ the {\em upper} or {\em weak topology}
generated by the complements of the principal ideals of a lattice or preordered set $P$, \textit{hypercontinuity} of $P$
means that for each $y \in P$, the set $\{x \in P: y \in int_{\upsilon P}\, {\uparrow\! x}\}$ is directed and has the join $y$; in the case of a complete lattice $P$, the directedness condition is automatically fulfilled.
A straightforward verification confirms:
\begin{lem}
\label{hyperbildabg}
The class of hypercompact spaces is closed under the formation of continuous images.
\end{lem}
\begin{prop}
\label{foot}
A product of preordered sets is a foot iff all factors are feet and all but finitely many have least elements.
\end{prop}
\BP
Let $\prod_{i\in I}P_i$ be a foot and find a finite $F\!\subseteq\! \prod_{i\in I}P_i$ with ${\uparrow \! F} = \prod_{i\in I}P_i$. Then each factor $\uparrow \! \! \pi_i(F)=P_i$ is a foot.
Assume that infinitely many factors do not have least elements. Thus, if \mbox{$F=\{f_1, ... , f_n\}$}, there are distinct $i_1, ... , i_n \in I$ such that $P_{i_k}$ has no least element.
This implies that $Y_{i_k} = P_{i_k} \setminus \! \! \uparrow \! \! \pi_{i_k}(f_k)$ is not empty (otherwise $\pi_{i_k}(f_k)$ would be a least element of $P_{i_k}$). Put $Y_i = P_i$ for $i\in I \setminus \{ i_1,...,i_n\}$.
Then no $x\in \prod_{i\in I} Y_i$ can be contained in $\uparrow \! F$, which is a contradiction.
Conversely let $(P_i : i \in I)$ be a family of feet such that $J =\{i\in I : P_i$ has no least element$\}$ is finite.
For each $i\in I$ pick $F_i \subseteq P_i$ with minimal cardinality such that $\uparrow \! F_i = P_i$.
Since almost all $F_i$ are singletons, $|\prod_{i\in I}F_i|=|\prod_{i\in J}F_i|< \infty$, and \mbox{$\uparrow \prod_{i\in I}F_i = \prod_{i\in I} \uparrow \! \! F_i = \prod_{i\in I}P_i$}.
We conclude that $\prod_{i\in I}P_i$ is a foot.
\EP
Translating the last result into the language of topological spaces, we conclude:
\begin{theo}
\label{hyperproduct2}
A product of topological spaces is hypercompact iff all factors are hypercompact and all but finitely many are supercompact.
\end{theo}
Now, a further application of Theorem \ref{localSTspace} yields:
\begin{coro}
\label{hyperproduct3}
A product of topological spaces is locally hypercompact iff all factors are locally hypercompact and all but finitely many are supercompact.
\end{coro}
An analogous conclusion holds for ``supercompact'' instead of ``hypercompact''. Notice that arbitrary products of supercompact spaces are supercompact.
\section{$\kappa$-hypercompact spaces}
\label{kappahyper}
A common generalization of hypercompactness and supercompactness is provided by the following definition (see \cite{Erne2009}).
Let $\kappa$ be a cardinal number. A subset $H$ of a topological space $X$ is called \textit{$\kappa$-hypercompact}\index{kappa-hypercompact@$\kappa$-hypercompact} iff there exists a $\kappa$-small subset $F \subseteq X$ such that ${\uparrow \! F} = {\uparrow\! H}$. In particular, the whole space is $\kappa$-hypercompact iff it is the saturation of a $\kappa$-small subset.
By definition, ``$\omega$-hypercompact'' means ``hypercompact'', and ``$2$-hypercompact'' means ``super\-compact''.
One easily verifies that Lemma \ref{hyperbildabg} extends to $\kappa$-hypercompact spaces:
\begin{lem}
\label{kappahyperbildabg}
The class of $\kappa$-hypercompact spaces is closed under the formation of continuous images and of closed or at least lower subsets.
\end{lem}
For the class $\T$ of supercompact spaces, the $\T_{\kappa}$-spaces are just the $\kappa$-hypercompact ones. Hence, a further application of Proposition \ref{Tunion1} and Theorem \ref{Tunion2} gives:
\begin{prop}
\label{khyper-fast}
If a product of spaces is $\kappa$-hypercompact then so is each factor, and for some cardinal $\lambda\! < \!\kappa$, fewer than $\lambda$ factors are not supercompact.
\end{prop}
\begin{theo}
{\rm (GCH)} Let $\kappa$ be a regular limit cardinal or the successor of a regular infinite cardinal.
Then a product of topological spaces is $\kappa$-hypercompact iff all factors are $\kappa$-hyper\-compact and for some $\lambda < \kappa$, fewer than $\lambda$ factors are not supercompact.
\end{theo}
Finally, Theorem \ref{localSTspace} with $\S$ the class of supercompact spaces and $\T$ the class of $\kappa$-hypercompact spaces amounts to:
\begin{coro}
{\rm (GCH)} Let $\kappa$ be the successor of a regular infinite cardinal $\lambda$. A prod\-uct of spaces is locally $\kappa$-hypercompact iff each factor is locally $\kappa$-hypercompact,
all but finitely many are $\kappa$-hypercompact, and fewer than $\lambda$ are not supercompact.
\end{coro}
\section{Sum decompositions of spaces and product decompositions of frames}
\label{sum}
One useful effect of point-free thinking is the observation that sum decompositions of spaces into open components yield product decompositions of the corresponding frames into indecomposable factors:
$$\textstyle{\O (\sum_{i\in I} X_i ) \simeq \prod_{i\in I} \O (X_i).}$$
The best known class of topological spaces in which all components are (closed and) open is that of locally connected spaces,
which alternatively may be described as basic $\T$-spaces or as local $\T$-spaces for the class $\T$ of connected spaces.
We are now going to single out a few specific classes of such spaces.
\begin{lem}
\label{hyperfinitecomponents}
A hypercompact topological space has only finitely many components, hence a unique finite sum decomposition into indecomposable summands.
\end{lem}
\BP
Since components are closed, two points in different components have disjoint closures, hence no common lower bound in the specialization order.
Thus, the union of an infinite number of components cannot be a foot.
\EP
As in the case of locally compact spaces, but by different arguments, one has:
\begin{prop}
\label{openclosed}
Any intersection of a closed or at least lower set and an open set of a locally {\rm ($\kappa$-)}hypercompact space is locally {\rm ($\kappa$-)}hypercompact.
\end{prop}
\BP
Let $U$ be an open and $A$ a lower set in a locally ($\kappa$-)hypercompact space $X$. For $x\! \in \! U \cap A$ and a neighborhood $V$ of $x$ in $X$, we find a ($\kappa$-)hypercompact neighborhood $H \subseteq V \cap U$ of $x$. Now, $H \cap A$ has the desired properties as it is a ($\kappa$-)hypercompact neighborhood of $x$ in $U \cap A$ contained in $U \cap A \cap V$.
\EP
\begin{theo}
\label{locallyhyperconnected}
Each point of a locally hypercompact topological space has a neighborhood base of connected hypercompact sets; every locally hypercompact space is therefore locally connected.
Hence, every locally hypercompact space has a unique sum decomposition into locally hypercompact and sum-indecomposable subspaces (its components).
\end{theo}
\BP
Let $X$ be a locally hypercompact space and $x \in U\in \O (X)$. Then we find a hypercompact neighborhood $H \subseteq U$ of $x$. By Lemma \ref{hyperfinitecomponents},
it has only finitely many connected components, which are therefore clopen. By $G$ we denote the component of $x$ in $H$.
Since $G$ is a relatively closed subset of a hypercompact set, it is hypercompact. As it is relatively open in $H$, we find a $V \in \O (X)$ with $G=H\cap V$.
Now $H$ is a neighborhood of $x$, so $x$ is in its interior. We deduce that $x \in \mbox{\it int}\,H \cap V \subseteq G \subseteq H \subseteq U$.
Thus, $G$ is a connected hypercompact neighborhood of $x$ contained in $U$.
\EP
As mentioned earlier, the hypercontinuous (resp.\ supercontinuous) frames are isomorphic to locally hypercompact (resp.\ supercompact) topologies.
Hence, Theorem \ref{locallyhyperconnected} immediately yields:
\begin{coro}
\label{producthypercont}
Every hypercontinuous (respectively, supercontinuous) frame has a product decomposition into product-indecomposable hypercontinuous (respectively, supercontinuous) factors.
\end{coro}
Since locally connected spaces also admit a point-free description, one has a similar product decomposition for frames in which every element is a join of connected ones, where an element $c$ of a bounded lattice is
{\em connected} iff $a\vee b = c$ and $a \wedge b = 0$ imply $a=c$ or $b=c$.
\section{Table: Productivity of topological properties}
\label{table}
\setlongtables
\begin{longtable}{| c | c | c | c |}
\hline
\hspace{2ex} {\em stable under products\,?} \hspace{2ex} & \hspace{2ex} {\em finite} \hspace{2ex} & {\em countable} & {\em arbitrary} \\
\hline
\hline
compact & yes & yes & yes\\
\hline
countably compact & no & no & no \\
\hline
paracompact & no & no & no \\
\hline
Lindel\"of & no & no & no \\
\hline
sequentially compact & yes & yes & no \\
\hline
$\sigma$-compact & yes & no & no \\
\hline
supercompact & yes & yes & yes\\
\hline
hypercompact & yes & no & no \\
\hline
connected & yes & yes & yes\\
\hline
path-connected & yes & yes & yes\\
\hline
($\kappa$-)ultraconnected & yes & yes & yes\\
\hline
$T_i$-space ($i=0,1,2,3$) & yes & yes & yes\\
\hline
$T_4$-space (normal) & no & no & no \\
\hline
sober & yes & yes & yes\\
\hline
spectral & yes & yes & yes\\
\hline
stably compact & yes & yes & yes\\
\hline
\end{longtable}
\newpage
\addcontentsline{toc}{section}{References}
\section{Introduction}
\label{Introduction}
Among coma and coma-like states, Locked-in Syndrome (LIS) might be one of the hardest diagnostic challenges medical professionals are facing today. A complete description of LIS, contemporary diagnostic challenges and attempts made to overcome these challenges can be found in Appendix \ref{appendix:bginfo}. Even though recent systematic research regarding the prognosis of LIS patients is very scarce, early research found mortality rates as high as 60\% \cite{patterson1986locked}. This is unsettling since early diagnosis, effective care and appropriate rehabilitation reduce the mortality rate significantly and allow for cognitive and motor function recovery, verbal communication and independent breathing patterns \cite{casanova2003locked}. The disconcerting truth, however, is that LIS is one of the most likely disorders to result in misdiagnosis for multiple reasons. Therefore, identifying objective and specific assessment markers is paramount to enable early diagnosis. This is a pivotal goal throughout psychiatry where a lack of specific and objective assessment markers is prevalent \cite{takizawa2014neuroimaging}.
\par Assessing consciousness is a necessity when differentiating between various disorders of consciousness (DOC), which is imperative given that particular disorders require their own approaches and therapeutic decisions, influencing prognosis \cite{noirhomme2017look}. Suggestively, the current gold standard in assessing consciousness in LIS-patients is based on bedside behavioral examinations (BBEs), despite 40\% of these assessments resulting in misdiagnosis \cite{gosseries2014recent}. Even though LIS is not a DOC, many characteristics are shared among the two (see Appendix \ref{appendix:bginfo}), resulting in regular confusion of the former for the latter by virtue of diagnostic hindrances \cite{bruno2011unresponsive}.
\par The lack of adequate assessment markers emanates from the complex pathogenesis of psychiatric and neurological conditions, inherently linked to human cognition and behavior \cite{lui2016psychoradiology}. Biomarkers have been proposed as a main contender to overcome this issue, given their ability to improve diagnosis, beneficially impact prognosis and tailor individual treatments \cite{singh2009biomarkers}. Functional imaging has proven successful because of its capability to identify specific disorders and to differentiate between similar disorders based on these biomarkers \cite{takizawa2014neuroimaging}. However, imaging tools are often expensive and distressing for patients to use, suffer from methodological constraints and are open to subjective interpretability \cite{takizawa2014neuroimaging,gosseries2014recent, noirhomme2017look}. Because it can be collected non-invasively, is easy to apply and relatively low in cost, EEG data is the most popular imaging tool and used widely for various applications \cite{mcloughlin2014search,thul2016eeg}. Current EEG advancements allow for both high spatial and temporal resolution, increasing its allure for biomarker identification \cite{mcloughlin2014search}. Simultaneously, interpreting EEG signals still poses challenges because of its high inter-rater variability and difficulty of interpretation \cite{acharya2018deep}. A machine learning (ML) approach has been suggested to overcome these issues due to the fast, consistent, accurate and relatively objective diagnoses it can achieve \cite{acharya2018deep}. Event-related potentials (ERPs) in particular have been identified as important clinical and research instruments in various tasks and modalities \cite{kutas1983brain,vecchio2011auditory,steppacher2013n400,balconi2013disorders,beukema2016hierarchy,rohaut2015probing}. ERPs are portions of EEG recordings that directly reflect cortical neuronal activity following particular events \cite{jung1999analyzing} (a more detailed explanation can be found in Section \ref{RelatedWork}).
\section{Related Work}
\label{RelatedWork}
Although EEG was invented in 1924, its signals were not used in an attempt to break down the concept of consciousness until 1961 \cite{henrie1961alteration}. Since then, EEG research has made use of aspects like bandwidth, spectral data and power spectra to make classifications of some kind, but generally lacked clinical relevance because of processing difficulties, sources of errors and a lack of adequate classification techniques \cite{walter1967discriminating,larsen1970automatic,struys1998comparison}. Even though difficulty of interpretation is still a relevant challenge within EEG processing, the emergence of ML has increased its relevance as a clinical tool \cite{acharya2018deep,fergus2016machine}. Advancements in processing techniques have, among other things, led to systems that can automatically remove artifacts and extract features \cite{winkler2011automatic,alomari2013automated}, increasing the number of studies in which ML is used for various diagnostic purposes. EEG in combination with ML has, for example, been widely used to recognize and classify Alzheimer’s disease by means of principal component linear discriminant analysis, bagging, random forests, support vector machines and feed-forward neural networks \cite{lehmann2007application,trambaiolli2011improving,liu2014early}. In these cases, random forests, support vector machines and neural networks achieved the highest sensitivity and specificity, up to 89\% and 87\% respectively, even for mild patients \cite{lehmann2007application}. ML has also been effective in the recognition and classification of different states of consciousness. Techniques like SVM, k-nearest neighbor and linear discriminant analysis were used again to successfully classify patients with DOC \cite{holler2013comparison,engemann2018robust}. The common conclusion in these studies is that ML in combination with EEG markers of consciousness is effective for diagnosis, recognition and discrimination in various clinical contexts because it is reliable, low in cost, automatic and fast.
\par ERPs were first applied to the context of language by
\citeauthor{kutas1983brain} \cite{kutas1983brain}, who studied the effect of grammatical errors and semantic anomalies on elicited ERPs. They found that semantically anomalous words and grammatical errors elicited a distinct N400 ERP effect in numerous scalp regions, most prominently for unexpected words within their context \cite{kutas1983brain}. The N400 effect is described as the negative ERP peak occurring 300-500ms after experimental onset \cite{kutas2011thirty}. Such effects have been found for a range of different stimulus types in various language processing tasks, have been applied to detect many aspects of language, and have been established as a valid indicator for the surprisal value of a word in its presented context \cite{kutas1983brain,kutas2011thirty}. This surprisal value is represented by a Cloze score, where scores are low, medium and high for 0\%-33\%, 34\%-66\% and 67\%-100\%, respectively \cite{bormuth1968cloze}. This effect was experimentally confirmed by \citeauthor{nicenboim2020words} \cite{nicenboim2020words}, both for items with a low Cloze score (constraining) and a high Cloze score (non-constraining). In their study, items with a high Cloze score or in a non-constraining context elicited a more negative ERP peak about 300-500ms after onset compared to items with a low Cloze score or in a constraining context.
\par These ERP effects have moreover been used to detect and measure consciousness in various experimental settings. \citeauthor{steppacher2013n400} \cite{steppacher2013n400} used ERPs elicited by sound (P300) and speech (N400) to measure consciousness by assessing information processing in patients with unresponsive wakefulness syndrome and minimally conscious patients. They identified N400 effects in 32\% of unresponsive wakefulness syndrome patients and 41\% of minimally conscious syndrome patients. Specifically, the N400 effect elicited by non-constraining words was found to predict long term recovery and prognosis in these patients, but was not able to differentiate between the two. Stronger elicited N400 effects predicted a more favourable clinical outcome. \citeauthor{balconi2013disorders} \cite{balconi2013disorders} used the semantic anomalies described by \citeauthor{kutas1983brain} \cite{kutas1983brain} to detect N400 peaks to verify preservation of linguistic processing in DOC and minimal-consciousness states to determine the level of consciousness among them. Even though they concluded that the differences found within their experiment were not enough to differentiate between different DOC, differences in N400 latency were able to differentiate between control subjects and DOC patients. \citeauthor{beukema2016hierarchy} \cite{beukema2016hierarchy} measured N400 effects with regards to auditory stimuli in patients with DOC. Even though they could not significantly differentiate between DOC, they did identify ERPs following auditory stimuli as a possible clinical tool to improve accuracy of diagnosis and prognosis of DOC patients. In their experiment, 44\% of patients showed markers of N400 processing of speech and noise. To transform this process into a valid clinical tool, sensitivity has to be improved first. This conclusion was also made by \citeauthor{rohaut2015probing} \cite{rohaut2015probing}, who probed semantic processing to determine measures of consciousness in non-communicating patients. They used N400 effects and late positive components effectively to differentiate between conscious, minimally conscious and vegetative state individuals, but emphasised that these effects have to be found easily and robustly at an individual level in order to develop a valid clinical diagnostic tool. \citeauthor{cruse2014reliability} \cite{cruse2014reliability} compared the effect of different stimuli and task demands on N400 amplitudes among healthy subjects, and detected 50\% N400 effects among subjects instructed to passively pay attention to normatively associated word-pairs.
\par In summary, ERP and N400 effects in particular have been identified as capable of differentiating healthy subjects from patients with various DOC, unresponsive wakefulness syndrome or minimal consciousness, but generally lack clinical relevance because of low sensitivity or a lack of significant single-subject level differences. Moreover, N400 ERP effects with regards to Cloze scores offer the possibility to establish linguistic functions and as such detect consciousness among potential LIS patients. The fact that words can be presented passively \cite{steppacher2013n400,balconi2013disorders,beukema2016hierarchy,rohaut2015probing,cruse2014reliability} makes data collection convenient and increases the potential of N400 ML classification as the basis of a diagnostic tool. Detection rates found in these studies (32\% and 41\% in \citeauthor{steppacher2013n400} \cite{steppacher2013n400}, 50\% in \citeauthor{cruse2014reliability} \cite{cruse2014reliability}) are expected to be surpassed, while the present study aspires to the sensitivity rate of 89\% found by \citeauthor{lehmann2007application} \cite{lehmann2007application}, so that the basis for an auxiliary LIS diagnostic tool is created.
\section{Objective}
\label{Objective}
Given the scenario presented in Section \ref{RelatedWork} with the additional information in Appendix \ref{appendix:bginfo}, the present study explores a ML-driven approach to classify conscious LIS patients from unconscious individuals based on EEG N400 ERP effects.
\section{Methods}
\label{Methods}
\subsection{Experimental Setup} \label{ExperimentalSetup}
In order to do so, data from \citeauthor{nicenboim2020words} \cite{nicenboim2020words} will be explored, extrapolated and adjusted. This data revolves around predictability effects of sentential context during a sentence comprehension task. Principally unrelated to LIS, the dataset will serve as the basis for the simulation of unconscious patients. Simulating EEG data has been successfully done throughout various ML studies, where different techniques propose particular merits and demerits \cite{haufe2013critical,yao2005evaluation,owen2012performance, delorme2007enhanced,haufe2016simulation,moosmann2008joint}. In some experimental setups simulated data is preferred over real data, because the inherently unsupervised nature of the latter entails an absence of ground truth values \cite{haufe2013critical}. In simulated data, the exact source locations are known, which is a great advantage because it allows for testing performance on challenging source configurations \cite{yao2005evaluation,owen2012performance}, generally problematic in natural EEG data \cite{delorme2007enhanced}. It also allows for objective and convenient testing of different methods, signal-to-noise ratios and dynamic characteristics \cite{haufe2016simulation}. On the other hand, it is important to accentuate that simulated data is essentially unrealistic, because it does not include any electrode measurement errors \cite{yao2005evaluation}, it remains difficult to define valid performance measures \cite{haufe2016simulation} and simulation is often done based on a Gaussian assumption, which generally represents natural data, but does not completely fit all natural scalp distributions \cite{moosmann2008joint}. Generated data should therefore have as much in common with the original data as possible, but should differ solely in the property being scrutinized in the study at hand \cite{haufe2013critical}.
\par The lion’s share of these simulations aims to generate signals from conscious individuals, while much less work is done to simulate signals of unconscious individuals. Given the lack of a theoretical foundation, unconscious patients were simulated based on the context of the original data. Since this data represents individuals being able to sensibly process words and thus representing conscious labels in the scope of this study, unconscious labels were yet to be acquired. Sensibility of conscious participants is reflected by the relationship between their EEG signals and the corresponding Cloze score of their experimental onset. It is expected that for conscious individuals, higher Cloze scores will elicit larger negative N400 peaks given the earlier described negative relationship between the two (see Section \ref{RelatedWork}). This relationship served as the basis for the simulation of unconscious individuals, for which EEG and ERP signals were forged and paired with randomly generated Cloze scores. By doing so, the inverse relationship true for conscious individuals will not be present in simulated individuals, assuming them to be unconscious. Since this simulation operates from the basis of Cloze scores rather than EEG data, methodological constraints commonly faced during simulation were circumvented. It also ensured that the simulated data deviated solely in the features of interest to ensure validity of the model.
\par When simulation was achieved, a support vector machine (SVM) and a random forest classifier (RF) were trained on preprocessed EEG signals. These models were subsequently used to explore to what extent conscious LIS patients can be distinguished from unconscious patients. The goal of doing so is to establish a foundation for the automatic diagnostic tool used to support traditional diagnostic methods in the assessment of LIS patients. Effective algorithms should be capable of measuring faint cortical signals inherent to LIS patients and as such reduce diagnosis time, ameliorate diagnostic procedures and bolster early diagnosis to ensure greater confidence in the assessment and treatment of LIS patients to ultimately reduce mortality rates. ML has been suggested, explored and applied as a solution for numerous diagnostic obstacles faced in functional imaging studies, but more research is necessary to ensure adequate appliance in the diagnostic process of LIS. The present study therefore built on this foundation of work with a model specifically designed to classify conscious LIS patients from unconscious patients based on EEG data.
\subsection{Participants}
\label{Paticipants}
From the 120 original subjects sampled by \citeauthor{nicenboim2020words} \cite{nicenboim2020words}, 110 subjects were available in the database at the moment of retrieval. EEG data from these 110 subjects was retrieved from the OSF database \cite{OSFwebsite}. This data served as the basis for the simulation of unconscious patients. All these participants were between 18 and 35 years old and represented both a university subject-pool and a community-based population. This number of participants is much larger than in typical ERP studies ($N=30$ in \citeauthor{nicenboim2020words} \cite{nicenboim2020words}) as well as typical studies regarding LIS classification ($2 \leq N \leq 23$ in \citeauthor{noirhomme2017look} \cite{noirhomme2017look}) and classification of DOC (for example 25, 23 and 14 in \cite{zheng2017disentangling,owen2012performance,wielek2018sleep}, respectively).
\subsection{Data} \label{Data}
The kick-off of this study was to further transform the preprocessed data from \citeauthor{nicenboim2020words}'s experiment, retrieved from \cite{nicenboim2019eegdata}, in order for them to be suitable for a ML classification algorithm. A description of their dataset and their preprocessing workflow can be found in Appendix \ref{appendix:originaldata}. Their data was used to represent conscious individuals on the assumption that they were capable of processing sentences sensibly, independent of their performance.
\par This dataset, from here on referred to as original data, consisted of individual .RDS files for every subject. Every file included EEG data points collected over time, information about experimental events and summarized information about the collected data. The code used in this study can be consulted as Github repository, in \cite{githubrepo}, and is explained below.
\subsection{Design \& Procedure} \label{DesignProcedure}
The original files were looped through and preprocessed individually. A list of the used software and packages to do so can be found in the footnote\footnote{\label{note1}Complete list of software: R (RStudio Team, 2019), and the R packages \textit{readr} (Version 1.3.1; Hester \& Francois, 2018), \textit{stringr} (Version 1.4.0; Wickham, 2019), \textit{eeguana} (Nicenboim, 2018), \textit{osfr} (Wolen, et al., 2020), \textit{caret} (Kuhn, 2020), \textit{dplyr} (Version 0.8.5; Wickham, François, Henry \& Müller, 2020), \textit{purrr} (Version 0.3.3; Henry \& Wickham, 2019), \textit{tidyr} (Version 1.0.2; Wickham \& Henry, 2020), \textit{tibble} (Version 3.0.1; Müller \& Wickham, 2020), \textit{factoextra} (Version 1.0.7; Kassambara \& Mundt, 2020), Jupyter Notebook (Kluyver, et al., 2016), and the Python packages \textit{NumPy} (Oliphant, 2006), \textit{Pandas} (McKinney, et al., 2010), \textit{Matplotlib} (Hunter, 2007), \textit{Scikit-learn} (Pedregosa, et al., 2011) and \textit{MNE-C} (Gramfort, et al., 2014).}. Below follows the demarcation of the created pipeline, displayed in Appendix \ref{Appendix3}.
\par Many of the features present in the original files were irrelevant for the study at hand, which is why they were discarded. First of all, channels related to the N400 ERP were defined and extracted from all the available recorded EEG channels. The N400 effect has been established as a negativity with a centroparietal distribution in the Cz, CP1, CP2, P3, Pz, P4 and POz channels within a time window of 300-500ms after experimental onset \cite{kutas2011thirty}. Neuronal signals are most anticipated within these particular channels because they are the most susceptible to experimental manipulation in this particular timeframe \cite{delong2014predictability}. Every datafile was further reduced to the seven N400 ERP channels as described above, the Cloze score of the corresponding noun and the feature \textit{constraint}. Next, signals from the ERP channels were downsampled by factor four, from the original 512 Hz to 128 Hz. This factor was chosen because it reduces the data to the greatest extent while still satisfying the Shannon-Nyquist sampling theorem \cite{shannon1949communication}, which states that a band-limited signal can be replaced by a discrete sequence of samples without losing any information if the sampling frequency is at least twice the signal's bandwidth (i.e. the Nyquist rate) \cite{jerri1977shannon}. Next, signals were binned with a small margin around the onset of the N400 ERP, ranging from 250 to 600ms after the onset of the experimental determiner. Data was filtered on items that were constraining, which are items presented in a context with nouns with a low Cloze score. The primary aim is to find whether a ML algorithm can classify conscious LIS patients from unconscious patients based on their EEG data. It was deemed appropriate to focus on constraining items so that data from the most generic sentences would be analyzed. By doing so, the model will be able to make predictions based on EEG data from natural reading tasks, warranting generalizability of the diagnostic tool. Since EEG data is of a rich format, a principal component analysis (PCA) was performed in order to reduce the number of features and to decrease training and prediction time. PCA reduces data dimensionality by projecting the features onto orthogonal components that capture as much of the variation present in the original data as possible. These components have been established as valid input for classification algorithms and were subsequently used in further analysis \cite{wang2010feature}. As such, the PCA was performed on the ERP electrodes so that these were reduced to their principal components. The cross-validation method composed by \cite{wold1978cross} was applied, so that five, two and one principal components were all taken into the hyperparameter tuning process. It was assumed that one or two principal components would be enough to generate high performance metrics, based on the explained variance of the first component for three randomly selected subjects (99.2\%, 99.4\% and 99.2\%, respectively). This is indicated for one of these subjects in Figure \ref{fig:pca}. The original seven electrode ERP channels were nevertheless iteratively replaced with five, two and one components to assure best performance.
\begin{figure}
\centering
\includegraphics[scale=0.70]{scree.pdf}
\caption{Scree plot of randomly picked subject. See \url{https://github.com/DanielvdC/LISclassification/blob/master/Visualizations/Scree\%20plot.pdf} for full size}
\label{fig:pca}
\end{figure}
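To make these preprocessing steps concrete, the following minimal Python sketch mirrors the channel selection, downsampling and PCA described above (the actual pipeline was implemented in R with the \textit{eeguana} package; the column names and data layout used here are illustrative assumptions, not the original API):
\begin{verbatim}
# Illustrative sketch only; "df" is assumed to hold one subject's
# samples with channel columns and a "time" column (in seconds).
from scipy.signal import decimate
from sklearn.decomposition import PCA

N400_CHANNELS = ["Cz", "CP1", "CP2", "P3", "Pz", "P4", "POz"]

def preprocess(df, factor=4):
    # keep the seven centroparietal N400 channels
    eeg = df[N400_CHANNELS].to_numpy()
    # downsample 512 Hz -> 128 Hz with an anti-aliasing filter,
    # safely above the Nyquist rate for the N400 frequency range
    eeg = decimate(eeg, q=factor, axis=0)
    time = df["time"].to_numpy()[::factor]
    # restrict to the 250-600 ms window after determiner onset
    return eeg[(time >= 0.250) & (time <= 0.600)]

def to_components(eeg, n_components=2):
    # replace the seven channels by their principal components
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(eeg)
    print("explained variance:", pca.explained_variance_ratio_)
    return scores
\end{verbatim}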
Subsequently, the .csv file was imported into Python for the final preprocessing steps and to set up the classification model because of the abundance of convenient processing tools. Necessary packages\footnote{See footnote 1.} were imported, the .csv file was loaded in and duplicated in order to generate data for unconscious individuals. The simulation was performed by means of a random uniform distribution between the minimum and maximum Cloze values of the original data. These generated numbers were then shuffled and inserted at the end of the duplicated data frame, upon which it was concatenated with the original one. This complete dataset was transformed so that each subject was represented by one row with all their corresponding principal components and Cloze scores as features. As such, the dataframe now consisted of 220 participants with evenly balanced conscious and unconscious classes, which were finalized by adding consciousness labels (see Appendix \ref{Appendix3} for the complete pipeline).
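A minimal sketch of this simulation and labeling step, assuming a pandas data frame with a \texttt{cloze} column (the variable and column names are illustrative, not those of the repository):
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

def add_unconscious(conscious):
    # duplicate the conscious data to forge unconscious subjects
    fake = conscious.copy()
    # draw Cloze scores uniformly between the observed extremes
    cloze = rng.uniform(conscious["cloze"].min(),
                        conscious["cloze"].max(), size=len(fake))
    rng.shuffle(cloze)            # decouple Cloze from the EEG features
    fake["cloze"] = cloze
    # labels: conscious = 1, unconscious = 0
    conscious = conscious.assign(label=1)
    fake["label"] = 0
    return pd.concat([conscious, fake], ignore_index=True)
\end{verbatim}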
\subsection{Models} \label{models}
The supervised classification was performed on a single-subject level, in which principal components (representing EEG signals) and Cloze scores were used to train and validate two classifiers, an SVM and an RF.
\par Before splitting the data in a train- and test-set, missing values were filled, labels were one-hot encoded where the conscious class was represented by 1 and the unconscious by 0, the train and test size as well as the number of folds for the cross validation were set. Even though decision trees and RFs generally handle missing values within their algorithm, SVMs do not and are sensitive to them \cite{garcia2010pattern}. Therefore, missing values were filled by means of a linear interpolation with the pandas module \cite{mckinney2010data}, so that index was ignored and values were treated as equally spaced. This ensures that missing data was estimated based on a linear distribution, which most closely represents natural EEG data \cite{moosmann2008joint}. Since linear classifiers, like the linear kernel of the SVM, depend on the inner products of feature vectors \cite{sartakhti2012hepatitis}, data was standardized using the scikit-learn StandardScaler module \cite{pedregosa2011scikit}, which standardizes features by removing the mean and scaling to unit variance. The most basic SVM and RF were set up using scikit-learn modules \cite{pedregosa2011scikit}, which were thrown into a grid search to determine optimum parameters. In the case of the SVM this concerned the kernel, C and $\gamma$ parameters, where kernel represents the used kernel, C the tradeoff of correct classification versus maximization of the decision function’s margin and $\gamma$ how far the influence of a single training example reaches. Low values encourage a larger margin and far influence, high values encourage a smaller margin and close influence, respectively \cite{pedregosa2011scikit}. For the RF, this concerned the maximum depth and features, minimum samples required to be at a leaf node and to split a node and the number of estimators to use. An overview of the considered and actually implemented (hyper)parameters of these algorithms can be found in Table \ref{tab:gridsearch}, in which the best (hyper)parameters are presented in bold. Both algorithms were updated with the best parameters provided by the grid search. Next, training and validation sets were derived by means of fivefold cross validation, against which accuracy metrics were checked. For training and validation purposes, evaluation metrics were confined to accuracy, sensitivity and specificity. The test set was also evaluated with the receiver operating characteristic (ROC) curve. Hyperparameter tuning consisted of changing the train-test ratio, the number of principal components and the number of folds to use in the cross-validation.
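The tuning and evaluation loop can be sketched with scikit-learn as follows, using the grids of Table \ref{tab:gridsearch} (a sketch under the assumption that \texttt{X} holds the principal components and Cloze scores and \texttt{y} the labels):
\begin{verbatim}
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# SVMs are sensitive to missing values: interpolate linearly first
X = X.interpolate(method="linear")
# remove the mean and scale to unit variance
X_std = StandardScaler().fit_transform(X)

svm_grid = {"kernel": ["linear", "rbf"], "C": [1, 10, 100],
            "gamma": [0.1, 0.01, 0.001, 0.0001]}
rf_grid = {"max_depth": [80, 90, 100, 110], "max_features": [2, 3],
           "min_samples_leaf": [3, 4, 5],
           "min_samples_split": [8, 10, 12],
           "n_estimators": [100, 200, 300, 1000]}

cv = StratifiedKFold(n_splits=5)
svm = GridSearchCV(SVC(), svm_grid, cv=cv).fit(X_std, y)
rf = GridSearchCV(RandomForestClassifier(), rf_grid, cv=cv).fit(X_std, y)
print(svm.best_params_, rf.best_params_)
\end{verbatim}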
\par An additional analysis was performed to test the model for its robustness. To this end, the original dataset was reduced to just over half its size by filtering on a subset of the regions of the presented sentences. These sentences originally consisted of five equally distributed regions (adjectives, determiners, nouns, pre-critical and post-critical), which were reduced to the noun, pre-critical and post-critical regions for the robustness check. N400 effects are produced in response to the presented adjective in combination with its presented noun. However, the effect itself is only anticipated 250-600ms after this onset, possibly enabling the model to be trained on just the latter three regions without losing any predictive power.
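The reduction itself amounts to a simple filter (a sketch; the region column name is an assumption):
\begin{verbatim}
keep = ["noun", "pre-critical", "post-critical"]
reduced = data[data["region"].isin(keep)].copy()
\end{verbatim}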
\begin{table}
\caption{Grid search of models, considered (hyper)parameters and \textbf{best} (hyper)parameters.}
\label{tab:gridsearch}
\newlength\q
\setlength\q{\dimexpr .3\textwidth -2\tabcolsep}
\noindent\begin{tabular}{p{\q}p{\q}p{\q}}
\toprule
Model & (Hyper)Parameter & Values \\
\midrule
\multirow{4}{4em}{SVM} & Kernel & [\textbf{linear}, rbf] \\
& C & [\textbf{1}, 10, 100] \\
& $\gamma$ & [\textbf{0.1}, 0.01, 0.001, 0.0001] \\
& PCA & [\textbf{1},2,5] \\
\midrule
\multirow{6}{6em}{RF} & Maximum depth & [\textbf{80}, 90, 100, 110] \\
& Maximum features & [\textbf{2}, 3] \\
& Min. samples leaf & [\textbf{3}, 4, 5] \\
& Min. samples split & [\textbf{8}, 10, 12] \\
& No. of estimators & [\textbf{100}, 200, 300, 1000] \\
& PCA & [1, \textbf{2}, 5] \\
\bottomrule
\end{tabular}
\end{table}
\section{Results}
The workflow was implemented on a workstation with a 1.4 GHz Dual-Core Intel Core i5 processor and 8 GB 1600 MHz random-access memory. The grid search with cross validation took about 25 minutes to complete, upon which the SVM and RF were set up and trained in about 15 seconds each. Prediction on the test set took about 2 seconds for each model.
\par Table \ref{tab:results} shows the confusion matrix and performance metrics for both models when predicting on the test set, averaged over all five folds, based on the best (hyper)parameters attained by predicting on the validation sets. The SVM scored 78\%, 80\%, 72\% and 94\% on area under the curve, accuracy, sensitivity and specificity respectively, whereas the RF scored 91\% on all four. The table also shows that the SVM produced more false positives, but one fewer false negative.
\par The robustness check indicates that both models were robust to the reduction of input data. They produced very similar performance metrics, with the SVM producing one more false positive and the RF producing no false negatives at all.
\begin{landscape}
\thispagestyle{empty}
\begin{table}
\caption{Confusion matrix and performance metrics of considered models.}
\label{tab:results}
\setlength\q{\dimexpr .18\textwidth-2\tabcolsep}
\renewcommand{\arraystretch}{1.5}
\noindent\begin{tabular}{p{\q}p{\q}p{\q}p{\q}p{\q}p{\q}p{\q}p{\q}}
\toprule
& \multicolumn{2}{l}{Predicted} & & AUC & Accuracy & Sensitivity & Specificity \\
& & Conscious & Unconscious \\
\midrule
\multirow{1}{*}{SVM} \\
\midrule
\multirow{2}{*}{True} & Conscious & 21 & 1 & \multirow{2}{*}{.78} & \multirow{2}{*}{.80} & \multirow{2}{*}{.72} & \multirow{2}{*}{.94} \\
& Unconscious & 8 & 15 \\
\midrule
\multirow{1}{*}{RF} \\
\midrule
\multirow{2}{*}{True} & Conscious & 20 & 2 & \multirow{2}{*}{.91} & \multirow{2}{*}{.91} & \multirow{2}{*}{.91} & \multirow{2}{*}{.91} \\
& Unconscious & 2 & 21 \\
\bottomrule
\textbf{Robustness check} \\
\toprule
& \multicolumn{2}{l}{Predicted} & & AUC & Accuracy & Sensitivity & Specificity \\
& & Conscious & Unconscious \\
\midrule
\multirow{1}{*}{SVM} \\
\midrule
\multirow{2}{*}{True} & Conscious & 21 & 1 & \multirow{2}{*}{.78} & \multirow{2}{*}{.78} & \multirow{2}{*}{.70} & \multirow{2}{*}{.93} \\
& Unconscious & 9 & 14 \\
\midrule
\multirow{1}{*}{RF} \\
\midrule
\multirow{2}{*}{True} & Conscious & 22 & 0 & \multirow{2}{*}{.96} & \multirow{2}{*}{.96} & \multirow{2}{*}{.92} & \multirow{2}{*}{1} \\
& Unconscious & 2 & 21 \\
\bottomrule
\end{tabular}
\end{table}
\end{landscape}
\section{Discussion}
The present study explored an ML-driven approach to classify consciousness among potential LIS patients in order to differentiate them from unconscious individuals based on EEG N400 ERP effects. Detecting consciousness among these patients is challenging because of similarities with disorders of consciousness, a lack of objective biomarkers and a difficult-to-recognize pathogenesis \cite{bruno2011unresponsive,gosseries2014recent,lui2016psychoradiology}. Within the bounds of this experiment, the performance metrics obtained show that an SVM and an RF are, to different extents, capable of distinguishing conscious from unconscious individuals. The RF in particular achieved high accuracy, sensitivity and specificity when predicting on unseen data, advocating the further development of a model that uses EEG N400 ERP signals. These results were not unexpected given the success SVMs and RFs have achieved with regard to the classification of consciousness in related contexts \cite{holler2013comparison,engemann2018robust} (see Section \ref{RelatedWork}). Moreover, as shown by the robustness check, both models were still capable of differentiating between conscious and unconscious individuals on data from the noun, pre-critical and post-critical sentence regions alone. The RF even produced better results there, possibly because signals acquired during the other regions confounded prediction, as they do not actually convey information about the N400 effect. As such, the baselines described in Section \ref{RelatedWork}, up to 40\% for speech and sound modalities in \citeauthor{steppacher2013n400} (\citeyear{steppacher2013n400}) and 50\% for passively presented auditory cues in \citeauthor{cruse2014reliability} (\citeyear{cruse2014reliability}), were greatly exceeded by both the SVM and the RF, which detected 95\% (95\% in the robustness check) and 91\% (100\% in the robustness check) of conscious patients based on ERP effects, respectively. An explanation for why the present model produced better results is that it was trained on more data of a different stimulus type, used simulated data and used an ML approach to reach classification. These results advocate the development of a new auxiliary diagnostic tool for the classification of LIS patients based on the ML pipeline presented here. Before such a tool can be realized, however, several developments are needed. The code is openly available on GitHub \cite{githubrepo} for further development.
\par For one, the number of false positives (unconscious individuals classified as conscious) and false negatives (conscious individuals classified as unconscious) should be reduced to a minimum. Even though every diagnostic tool should pursue high sensitivity to reduce false negatives and high specificity to reduce false positives, diagnostic tools for LIS especially demand high sensitivity, because misdiagnoses generally emanate from false negative classification \cite{zhu2010sensitivity,gosseries2014recent,bruno2011unresponsive}. Therefore, future endeavours should penalize false negatives more heavily than false positives by training the model on more instances of varying nature.
\par The present study confirms the feasibility of creating a new auxiliary diagnostic tool for the automatic classification of LIS patients. For this possibility to become reality, the presented models should be trained on real data that captures consciousness instead of data that indirectly represents it. Using real data ensures that the performances achieved here extend to the actual relationship between ERP effects and (un)consciousness. If similar results are obtained on such data, the models are capable of generalizing to unseen instances, endorsing the development of the diagnostic tool. Even with a relatively small number of subjects, which is likely given the common sampling obstacles faced in LIS \cite{noirhomme2017look}, high sensitivity in the classification of conscious LIS patients can still be achieved by high sensitivity in the classification of unconscious subjects \cite{holler2013comparison}.
\par Consideration should also be given to formalizing consciousness by training the models with EEG signals recorded in a resting state, so that new data can similarly be collected in a resting state. Given that N400 effects can be found in subjects exposed to normatively associated word pairs \cite{cruse2014reliability}, this should not pose any significant obstacles. Further training should be based on instances where consciousness is formalized using the consciousness indexes described in \cite{sitt2014large}, to ensure validity. This will improve accuracy and generalizability, make new data collection convenient and reinforce overall model robustness. The virtually real-time classification of new instances makes the models presented here promising as an auxiliary diagnostic tool for LIS. If this is done properly, the studies from which the baseline was acquired (see Section \ref{RelatedWork}) can possibly benefit from the presented model as well. The fact that none of these studies used an ML approach makes it interesting to apply the models presented here to those contexts too, possibly extending their use to more differentiated settings. Since the studies conducted by \citeauthor{steppacher2013n400} \cite{steppacher2013n400}, \citeauthor{balconi2013disorders} \cite{balconi2013disorders} and \citeauthor{beukema2016hierarchy} \cite{beukema2016hierarchy} are based on different methods to elicit ERP effects, it would be interesting to see how the present models perform in those contexts.
\section{Conclusion}
Concluding, an SVM and an RF were trained on N400 ERP effects in data collected by \citeauthor{nicenboim2020words} \cite{nicenboim2020words} in a sentence comprehension task. This data represented conscious individuals by means of EEG signals paired with congruent Cloze scores. Unconscious individuals were simulated based on this dataset by pairing EEG signals with randomly generated Cloze scores, so that any relationship between the two ceased to exist. Both the SVM and the RF were able to classify most individuals, with sensitivities of 72\% and 91\% and specificities of 94\% and 91\%, respectively. If the limitations described in the discussion are taken into consideration, the results of this study present the opportunity to create a new auxiliary diagnostic tool for the automatic classification of LIS patients among unconscious individuals, making diagnosis cheap, convenient, fast and reliable, ultimately reducing mortality rates and sustaining an optimistic prognosis. Moreover, the model offers interesting opportunities for application to other contexts related to the detection of consciousness.
\newpage
\printbibliography
\newpage
\newcommand{\mysubsection}[1]{\subsection{#1}}
%
\newcommand{\refsecapp}[1]{Section~\ref{#1}}
\newcommand{\Refsecapp}[1]{Section~\ref{#1}}
\newcommand{\reftwosecapps}[2]{Sections~\ref{#1} and~\ref{#2}}
\newcommand{\Reftwosecapps}[2]{Sections~\ref{#1} and~\ref{#2}}
\newcommand{Section\xspace}{Section\xspace}
\newcommand{section\xspace}{section\xspace}
\newcommand{Sections\xspace}{Sections\xspace}
\newcommand{sections\xspace}{sections\xspace}
}
\ifthenelse{\boolean{conferenceversion}}
{}
{
\newtheoremstyle{metacommenttheoremstyle}%
{3pt}%
{3pt}%
{\sffamily \itshape \scriptsize
%
}%
{}%
{\bfseries \scshape \footnotesize }%
{:}%
{ }%
%
{}%
\theoremstyle{metacommenttheoremstyle}
}
\newtheorem{jncommentcontainer}{Jakob's comment}
\newtheorem{cbcommentcontainer}{Christoph's comment}
\newcounter{rbcounter}
\setlength{\marginparwidth}{0.8in}
\newcommand{\randbem}[3]%
{\stepcounter{rbcounter}%
\parbox[t]{0mm}{$^{\arabic{rbcounter}}$}%
\marginpar%
{\textcolor{#1}%
{\raggedright\footnotesize$\mathbf{#2}^{\arabic{rbcounter}}$: #3}%
}
}
\ifthenelse{\isundefined{\DONOTINSERTCOMMENTS}}
{ %
%
%
\newcommand{\jncomment}[1]%
{\begin{jncommentcontainer} \textcolor{blue}{#1} \end{jncommentcontainer}}
\newcommand{\cbcomment}[1]%
{\begin{cbcommentcontainer} \textcolor{magenta}{#1} \end{cbcommentcontainer}}
\newcommand{\jn}[1]{\randbem{blue}{J}{#1}}
\newcommand{\chr}[1]{\randbem{magenta}{C}{#1}}
}
{ %
\newcommand{\jncomment}[1]{}
\newcommand{\cbcomment}[1]{}
\newcommand{\chr}[1]{}
\newcommand{\jn}[1]{}
}
%
\numberwithin{equation}{section}
\begin{document}
\title{Near-Optimal Lower Bounds on Quantifier Depth and
Weisfeiler--Leman Refinement Steps%
\thanks{This is the full-length version of a paper with the same
title which appeared in \emph{Proceedings of the 31st Annual ACM/IEEE
Symposium on Logic in Computer Science (LICS~'16)}.}}
\author{%
Christoph Berkholz \\
Humboldt-Universität zu Berlin
\and
Jakob Nordström \\
KTH Royal Institute of Technology}
\date{\today}
\maketitle
\thispagestyle{empty}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\fancyhead[CE]{\slshape
NEAR-OPTIMAL LOWER BOUNDS ON QUANTIFIER DEPTH
%
}
\fancyhead[CO]{\slshape \nouppercase{\leftmark}}
\fancyfoot[C]{\thepage}
\setlength{\headheight}{13.6pt}
\makeatletter{}%
\ifthenelse{\boolean{conferenceversion}}
{%
\begin{abstract}
We prove near-optimal trade-offs for quantifier depth versus number
of variables in first-order logic by exhibiting pairs of $n$-element
structures that can be distinguished by a $k$-variable first-order
sentence but where every such sentence requires quantifier depth at
least~$n^{\bigomega{k/\log k}}$. \mbox{Our trade-offs} also apply to
first-order counting logic, and by the known connection to the
$k$-dimensional Weisfeiler--Leman algorithm imply near-optimal lower
bounds on the number of refinement iterations.
A key component in our proof is the hardness condensation technique
recently introduced by [Razborov~'16] in the context of proof
complexity. We apply this method to reduce the domain size of
relational structures while maintaining the quantifier depth
required to distinguish them.
\end{abstract}
}
{%
\begin{abstract}
We prove near-optimal trade-offs for quantifier depth versus number
of variables in first-order logic by exhibiting pairs of $n$-element
structures that can be distinguished by a $k$-variable first-order
sentence but where every such sentence requires quantifier depth at
least~$n^{\bigomega{k/\log k}}$. Our \mbox{trade-offs} also apply to
first-order counting logic, and by the known connection to the
$k$-dimensional Weisfeiler--Leman algorithm imply near-optimal lower
bounds on the number of refinement iterations.
A key component in our proof is the hardness condensation technique
recently introduced by [Razborov~'16] in the context of proof
complexity. We apply this method to reduce the domain size of
relational structures while maintaining the minimal quantifier depth
to distinguish them in finite variable logics.
\end{abstract}
}
%
\makeatletter{}%
\section{Introduction}
\label{sec:intro}
The $\pebblesk$-variable fragment of first-order logic
\ensuremath{\mathsf L\!^{\pebblesk}}{}
consists of
those first-order sentences that use at most $\pebblesk$~different variables.
A simple example is the $\ensuremath{\mathsf L}\!^2$ sentence
\begin{equation}
\label{eq:FOkExample}
\exists x \exists y(Exy \wedge \exists x (Eyx\wedge \exists y
(Exy\wedge \exists x Eyx)))
\end{equation}
stating that there exists a directed path of length~$4$ in a
digraph.
Extending~$\ensuremath{\mathsf L\!^{\pebblesk}}$ with counting quantifiers~$\exists^{\geq i}x$
yields~$\ensuremath{\mathsf C^{\pebblesk}}$, which can be more economical in terms of
variables. As an illustration,
the $\ensuremath{\mathsf L}\!^8$~sentence
\begin{equation}
\exists x\exists y_1 \cdots \exists y_7
\bigl(
\textstyle\bigwedge_{i\neq j} y_i\neq y_j
\land
\textstyle\bigwedge_i Exy_i
\bigr)
\end{equation}
stating the existence of a vertex of degree at least 7 in a graph
can be written more succinctly
as
the \mbox{$\ensuremath{\mathsf C}^2$ sentence}
\ifthenelse{\boolean{conferenceversion}}
{\begin{equation} \exists x \exists^{\geq 7}y Exy \enspace . \end{equation}}
{\begin{equation} \exists x \exists^{\geq 7}y Exy \enspace . \end{equation}}
Bounded-variable fragments of first-order logic have found numerous
applications in finite model theory and related areas (see
\cite{Grohe.1998} for a survey). Their importance stems from the fact
that the model checking problem (given a finite relational structure
$\mathcal A$ and a sentence $\varphi$, does $\mathcal A$ satisfy $\varphi$?)
can be decided in polynomial time \cite{Immerman.1982,Vardi.1995}.
Moreover, the
equivalence problem (given two finite relational
structures $\mathcal A$ and $\mathcal B$, do they satisfy the same
sentences?) for \ensuremath{\mathsf L\!^{\pebblesk}}{} and \ensuremath{\mathsf C^{\pebblesk}}{}
can be decided in
\mbox{time $n^{O(\pebblesk)}$ \cite{Immerman.1990}}, i.e.,\ polynomial for constant~$\pebblesk$.
\mysubsection{Quantifier Depth}
If $\mathcal A$ and $\mathcal B$ are not equivalent in $\ensuremath{\mathsf L\!^{\pebblesk}}$ or $\ensuremath{\mathsf C^{\pebblesk}}$,
then there exists a sentence $\varphi$ that defines a distinguishing
property,
i.e.,\ such that
$\mathcal A\models\varphi$ and
$\mathcal B\not\models\varphi$, which certifies that the structures are
non-isomorphic.
But how complex can such a sentence be?
In particular,
what is the minimal quantifier depth of an \ensuremath{\mathsf L\!^{\pebblesk}}{} or
\mbox{\ensuremath{\mathsf C^{\pebblesk}}{} sentence}
that distinguishes two $n$-element relational structures
$\mathcal A$ and $\mathcal B$?
The best upper bound for the quantifier depth of $\ensuremath{\mathsf L\!^{\pebblesk}}$ and $\ensuremath{\mathsf C^{\pebblesk}}$
is $n^{k-1}$ \cite{Immerman.1990}, while to the best of our knowledge
the strongest lower bounds have been only linear in $n$
\cite{Cai.1992,Grohe.1996,Furer.2001}.
In this paper we present a near-optimal lower bound
of~$n^{\Omega(\pebblesk / \log \pebblesk)}$.
\begin{theorem}
\label{thm:maintheorem}
There
exist
$\varepsilon>0$, $k_0\in\mathbb N$ such that for all
$\pebblesk,n$
with
$k_0\leq \pebblesk \leq n^{1/12}$
there is a pair of $n$-element
\ifthenelse{\boolean{conferenceversion}}
{$(\pebblesk\!-\!1)$-ary}
{$(\pebblesk-1)$-ary}
relational structures
$\mathcal A_n, \mathcal B_n$
that
can be distinguished in
$\pebblesk$-variable first-order logic
but satisfy the same
$\ensuremath{\mathsf L}^\pebblesk$ and $\ensuremath{\mathsf C}^\pebblesk$ sentences up to
quantifier depth
$n^{\varepsilon \pebblesk / \log \pebblesk}$.
\end{theorem}
Note that any two non-isomorphic $n$-element $\sigma$-structures
$\mathcal A$ and~$\mathcal B$
can always be distinguished by a simple $n$\nobreakdash-variable
first-order sentence of quantifier depth~$n$, namely
\ifthenelse{\boolean{conferenceversion}}
{%
\begin{multline}
\label{eq:distinguishing-formula}
\exists x_1
\cdots
\exists x_n
\Biggl(%
\bigwedge_{i\neq j}x_i\neq x_j
\ \wedge
\!\!\!\!\
\bigwedge_{\substack{R\in\sigma, \\ (v_{i_1},\ldots,v_{i_{r}})\in
R^\mathcal A}}
\!\!\!\!
Rx_{i_1},\ldots,x_{i_{r}} \\
\wedge
\!\!\!\!
\bigwedge_{\substack{R\in\sigma, \\ (v_{i_1},\ldots,v_{i_{r}})\notin
R^\mathcal A}}
\!\!\!\!
\neg Rx_{i_1},\ldots,x_{i_{r}}
\Biggr)%
\enspace .
\end{multline}
}
{%
\begin{equation}
\label{eq:distinguishing-formula}
\exists x_1
\cdots
\exists x_n
\Biggl(%
\bigwedge_{i\neq j}x_i\neq x_j
\ \wedge \!\!\!\!
\bigwedge_{\substack{R\in\sigma, \\ (v_{i_1},\ldots,v_{i_{r}})\in
R^\mathcal A}}
\!\!\!\!
Rx_{i_1},\ldots,x_{i_{r}}
\ \wedge \!\!\!\!
\bigwedge_{\substack{R\in\sigma, \\ (v_{i_1},\ldots,v_{i_{r}})\notin
R^\mathcal A}}
\!\!\!\!
\neg Rx_{i_1},\ldots,x_{i_{r}}
\Biggr)%
\enspace .
\end{equation}%
}
Since our $n^{\Omega(\pebblesk / \log \pebblesk)}$ lower
bound for $\pebblesk$-variable logics grows significantly
faster
than this
trivial upper bound~$n$
on the quantifier depth as the number of
variables increases,
\refth{thm:maintheorem}
also describes a trade-off in the super-critical regime
above worst-case investigated by
Razborov~\cite{Razborov16NewKind}:
If one reduces one complexity measure (the number of variables), then
the other complexity parameter (the quantifier depth) increases
sharply even beyond its worst-case upper bound.
The equivalence problem for $\ensuremath{\mathsf C}^{\wldim+1}$ is known to be closely
related to the
\emph{\wldimtext\nobreakdash-dimensional Weisfeiler--Leman
algorithm} (\wldimtext\nobreakdash-WL) for testing non-isomorphism of graphs
and, more generally, relational structures. It was shown by Cai,
Fürer, and Immerman \cite{Cai.1992} that two structures are
distinguished by \wldimtext-WL if and only if there exists a
$\ensuremath{\mathsf C}^{\wldim+1}$~sentence that
differentiates between them.
Moreover, the quantifier depth of such a sentence also relates to the
complexity of the WL~algorithm in that the number of iterations
\wldimtext-WL needs to
tell $\mathcal A$ and~$\mathcal B$ apart
coincides with the
minimal quantifier depth of a distinguishing
$\ensuremath{\mathsf C}^{\wldim+1}$~sentence.
Therefore, \refth{thm:maintheorem} also implies a near-optimal
lower bound on the
number of refinement steps
required in the Weisfeiler--Leman algorithm.
We discuss this next.
\mysubsection{The Weisfeiler--Leman Algorithm}
The Weisfeiler--Leman algorithm, independently introduced by Babai in
1979 and by Immerman and Lander in~\cite{Immerman.1990}
(cf.~\cite{Cai.1992} and~\cite{Babai16GraphIsomorphism} for historic notes), is a
hierarchy of methods for isomorphism testing that iteratively refine
a partition
(or colouring) of the vertex set, ending with a \emph{stable colouring} that
classifies \emph{similar vertices}.
Since no isomorphism can
map non-similar vertices to each
other, this reduces the search space.
Moreover, if two structures end up with different stable colourings,
then we
can immediately deduce
that the structures are non-isomorphic.
The $1$\nobreakdash-dimensional Weisfeiler--Leman algorithm,
better known as \emph{colour refinement},
initially colours the vertices according to their degree (clearly, no
isomorphism identifies vertices of different degree).
The vertex colouring is then refined based on the colour classes of
the neighbours.
For example, two degree-$5$ vertices get different colours in the next
step if they have a different number of degree-$7$ neighbours.
This refinement step is repeated until the colouring stays stable
(i.e.,\ every pair of equally coloured vertices has the same number of
neighbours in every other colour class). This algorithm is already
quite strong and is extensively used in practical graph isomorphism
algorithms.
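For concreteness, one round of colour refinement can be rendered in a few lines (an illustrative Python sketch, not part of the formal development; the adjacency-list representation is our choice):
\begin{verbatim}
from collections import Counter

def colour_refinement(adj):
    # adj: {vertex: set of neighbours}; returns the stable colouring.
    colour = {v: len(nbrs) for v, nbrs in adj.items()}  # degrees
    while True:
        # New signature: own colour plus multiset of neighbour colours.
        sig = {v: (colour[v],
                   tuple(sorted(Counter(colour[u]
                                        for u in adj[v]).items())))
               for v in adj}
        names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: names[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(colour.values())):
            return new  # no class was split: the colouring is stable
        colour = new
\end{verbatim}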
In \wldimtext-dimensional WL this idea is generalized to colourings of
\emph{\wldimtext-tuples} of vertices.
Initially the \wldimtext\nobreakdash-tuples are coloured by their isomorphism
type, i.e.,\ two tuples $\tuple{v}=(v_1,\ldots,v_{\wldim})$ and
$\tuple{w}=(w_1,\ldots,w_{\wldim})$ get different colours if the
mapping $v_i\mapsto w_i$ is not an isomorphism
on the substructures induced on
$\{v_1,\ldots,v_{\wldim}\}$ and
$\{w_1,\ldots,w_{\wldim}\}$.
In the refinement step, we consider for each \wldimtext-tuple
$\tuple{v}=(v_1,\ldots,v_{\wldim})$ and every vertex $v$ the colours
of the tuples
$\tuple{v}_j:=(v_1,\ldots,v_{j-1},v,v_{j+1},\ldots,v_{\wldim})$,
where
$v$ is substituted at the $j$th position in the tuple~$\tuple{v}$.
We refer to the tuple $(c(\tuple{v}_1),\ldots,c(\tuple{v}_\wldim))$
of these $\wldim$ colours as the \emph{colour type}
$\ensuremath{\mathsf t}(\tuple{v},v)$ and let $v$ be a
$\ensuremath{\mathsf t}$\nobreakdash-neighbour of
$\tuple{v}$ if $\ensuremath{\mathsf t}=\ensuremath{\mathsf t}(\tuple{v},v)$.
Now two tuples $\tuple{v}$ and~$\tuple{w}$ get different colours if
they are already coloured differently, or if there exists a colour
type~$\ensuremath{\mathsf t}$ such that $\tuple{v}$ and $\tuple{w}$ have a
different number of $\ensuremath{\mathsf t}$\nobreakdash-neighbours.
The refinement step is repeated until the colouring stays stable.
Since in every round the number of colour classes grows, the
process stops after at most $n^{\wldim}$ steps.
The colour names can be chosen in such a way that the stable colouring
is canonical, which means that two isomorphic structures end up with
the same colouring, and such a
canonical stable colouring
can be computed in time~$n^{O(\wldim)}$.
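The generalization to colourings of \wldimtext-tuples admits an equally direct, if inefficient, rendering (again an illustrative sketch of ours, under the simplifying assumption that the input is an undirected graph, so that atomic types reduce to equality and adjacency patterns):
\begin{verbatim}
from collections import Counter
from itertools import product

def k_wl(V, E, k):
    # V: list of vertices; E: set of edges with both orientations.
    def atomic_type(t):
        return (tuple(t[i] == t[j] for i in range(k) for j in range(k)),
                tuple((t[i], t[j]) in E for i in range(k) for j in range(k)))

    colour = {t: atomic_type(t) for t in product(V, repeat=k)}
    while True:
        sig = {}
        for t in colour:
            # Multiset of colour types of (t, v) over all vertices v.
            types = Counter(tuple(colour[t[:j] + (v,) + t[j + 1:]]
                                  for j in range(k)) for v in V)
            sig[t] = (colour[t], tuple(sorted(types.items())))
        names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {t: names[sig[t]] for t in colour}
        if len(set(new.values())) == len(set(colour.values())):
            return new  # stable colouring of k-tuples
        colour = new
\end{verbatim}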
This simple combinatorial algorithm is surprisingly powerful.
Grohe \cite{Grohe12FixedPointDefinability} showed that for every
nontrivial graph class that excludes some minor (such as planar
graphs or graphs of bounded treewidth) there exists some $\pebblesk$ such
that $\pebblesk$-WL computes a different colouring for all non-isomorphic
graphs, and hence solves graph isomorphism in polynomial time on that
graph class. Weisfeiler--Leman has also been used as a subroutine in
algorithms that solve graph isomorphism on all graphs.
As one part of his very recent graph isomorphism algorithm,
Babai~\cite{Babai16GraphIsomorphism} applies $\pebblesk$-WL for
polylogarithmic~$\pebblesk$
to relational ($\pebblesk$-ary) structures and makes use of
the quasi-polynomial
running time of this algorithm.
Given the importance of
the Weisfeiler--Leman procedure,
it is a natural question to
ask whether the trivial $n^{\wldim}$~upper bound on the number of
refinement steps is tight.
By the correspondence between the number of refinement steps of
\wldimtext-WL and the quantifier depth
of~$\ensuremath{\mathsf C}^{\wldim+1}$~\cite{Cai.1992}, our main result implies a
near-optimal lower bound even up to
polynomial, but still sublinear, values of~$\wldim$
(i.e.,\ $\wldim = n^\delta$ for small enough constant~$\delta$).
\begin{theorem}\label{thm:mainWLtheorem}
There
exist
$\varepsilon>0$, $k_0\in\mathbb N$ such that for all
$\pebblesk,n$ with
$k_0\leq \pebblesk \leq n^{1/12}$
there is an $n$-element $\wldim$-ary relational structure
$\mathcal A_n$
for which
the $\wldim$-dimensional Weisfeiler--Leman algorithm
needs $n^{\varepsilon
\wldim / \log \wldim}$ refinement steps to compute the
stable colouring.
\end{theorem}
%
In addition to the near-optimal lower bounds for a specific dimension
(or number of variables)~$\wldim$, we also obtain the following
trade-off between the dimension and the number of refinement steps:
If we fix two parameters $\ell_1$ and $\ell_2$ (possibly depending
on~$n$) satisfying
$\ell_1\leq\ell_2 \leq n^{1/6}/\ell_1$,
then there are $n$\nobreakdash-element structures such that $\wldim$-WL
needs $n^{\bigomega{\ell_1/\log \ell_2}}$ refinement steps for
all $\ell_1\leq\wldim\leq\ell_2$.
A particularly interesting choice of parameters is $\ell_1 = \log^cn$
for some constant $c>1$
and $\ell_2 = n^{1/7}$.
This implies the following quasi-polynomial lower bound on the number
of refinement steps for Weisfeiler--Leman from polylogarithmic
dimension all the way up to dimension~$n^{1/7}$.
\begin{theorem}\label{thm:mainTheoremWLquasipolyTradeoff}
For every $c>1$
there is a sequence of $n$-element relational structures
$\mathcal A_n$
for which
the $\wldim$-dimensional Weisfeiler--Leman algorithm needs
$n^{\bigomega{\log^{c-1} n}}$ refinement steps to compute the stable
colouring for all $\wldim$ with $\log^c n \leq \wldim \leq n^{1/7}$.
\end{theorem}
\mysubsection{Previous Lower Bounds}
In their seminal work~\cite{Cai.1992}, Cai, Fürer and Immerman
established the existence of non-isomorphic \mbox{$n$-vertex} graphs
that cannot be distinguished by any first-order counting sentence with
$\littleoh{n}$~variables.
Since every pair of non-isomorphic $n$-element structures can be
distinguished by a $\ensuremath{\mathsf C}^n$ (or even~$\ensuremath{\mathsf L}\!^n$) sentence
(as shown in \refeq{eq:distinguishing-formula} above), this result
also implies a linear lower bound on the quantifier depth of $\ensuremath{\mathsf C^{\pebblesk}}$
\mbox{if $k=\Omega(n)$.}
For all constant $k\geq 2$, a linear $\Omega(n)$ lower bound on the
quantifier depth of $\ensuremath{\mathsf C^{\pebblesk}}$ follows implicitly from an
intricate
construction of Grohe \cite{Grohe.1996}, which was used to show that
the equivalence
problems for $\ensuremath{\mathsf L\!^{\pebblesk}}$ and $\ensuremath{\mathsf C^{\pebblesk}}$ are complete for
polynomial time.
An explicit linear lower bound based on a simplified construction was
subsequently
presented
by Fürer~\cite{Furer.2001}.
For the special case of $\ensuremath{\mathsf C}^2$, Krebs and
Verbitsky~\cite{Krebs.2015} recently obtained an improved $(1-o(1))n$
lower bound on the quantifier depth, nearly matching the upper bound~$n$.
In contrast,
Kiefer and Schweitzer~\cite{Kiefer.2016}
showed that if two \mbox{$n$-vertex}
graphs
can be distinguished by
a
$\ensuremath{\mathsf C}^3$~sentence,
then there is always a distinguishing sentence of
quantifier depth $O(n^2/\log n)$. Hence, the trivial \mbox{$n^2$~upper}
bound is not tight in this case.
{As far as we are aware, the current paper presents the first lower
bounds that are super-linear in the domain size~$n$.}
\mysubsection{Discussion of Techniques}
The hard instances we construct are based on
propositional XOR\xspace{} (exclusive or) formulas,
which can alternatively be viewed as systems of linear equations
over~$\gf{2}$.
There is a long history of using XOR\xspace formulas
for proving lower bounds in
different
areas of theoretical
computer science such as, e.g., finite model theory,
proof complexity, and combinatorial optimization/hardness of
approximation.
Our main technical insight is to combine two methods that, to the
best of our knowledge, have not been used together before, namely
Ehrenfeucht-Fra\"iss\'e\xspace games on structures based on XOR\xspace formulas
and hardness amplification by variable substitution.
More than three decades ago, Immerman~\cite{Immerman.1981} presented a way to
encode an XOR\xspace formula into two graphs that are isomorphic if and only if
the formula is satisfiable.
This can then be used to show that the
two graphs cannot be distinguished by a sentence with few variables or
low quantifier depth using Ehrenfeucht-Fra\"iss\'e\xspace games.
Arguably the most important application of this method is the result
in~\cite{Cai.1992} establishing
that a linear number of variables is
needed to distinguish two
graphs in first-order counting logic.
Graph constructions based on XOR\xspace formulas have also been used to
prove lower bounds on the quantifier depth of
$\ensuremath{\mathsf C^{\pebblesk}}$ \cite{Immerman.1981,Furer.2001}.
We remark that for our result
we have to use a slightly different encoding of
\ifthenelse{\boolean{conferenceversion}}
{formulas}
{XOR\xspace formulas}
into relational structures rather than graphs.
In proof complexity, various flavours of XOR\xspace formulas
(usually called \introduceterm{Tseitin formulas} when used to encode the
\introduceterm{handshaking lemma}
saying that the sum of all vertex degrees in an
undirected graph has to be an even number)
have been
employed
to obtain lower bounds for
proof systems such as
resolution~\cite{Urquhart87HardExamples},
polynomial calculus~\cite{BGIP01LinearGaps},
and
bounded-depth Frege~\cite{Ben-Sasson02HardExamples}.
Such formulas have also played an important role in many lower bounds for
the Positivstellensatz/sums-of-squares proof system
\cite{Grigoriev01LinearLowerBound,KI06LowerBounds,Schoenebeck08LinearLevel}
corresponding to the Lasserre semidefinite programming
hierarchy, which has been the focus of much recent interest in the
context of combinatorial optimization.%
\footnote{%
No proof complexity is needed in this paper,
and so readers unfamiliar with these proof systems need not
worry---this is just
an informal overview.}
Another use of XOR\xspace in proof complexity has been for hardness amplification,
where one takes a (typically non-XOR\xspace) formula that is moderately hard with
respect to some complexity measure, substitutes all variables by
exclusive ors
over pairwise distinct sets of variables,
and then shows that the new
\introduceterm{XORified\xspace{}} formula must be very hard with respect to\xspace some
other (more important) complexity measure.
This technique was perhaps first made explicit
in~\cite{Ben-Sasson02SizeSpaceTradeoffsJOURNALREF}
(attributed there to personal communication with
Michael~Alekhnovich and Alexander~Razborov, with a note that it is also
very similar in spirit to an approach used in \cite{BW01ShortProofs})
and has later appeared in, e.g.,
\cite{BP07Complexity,BN08ShortProofs,BN11UnderstandingSpace,BNT13SomeTradeoffs,FLMNV13TowardsUnderstandingPC}.
An even more crucial role in proof complexity is played by well-connected
graphs, so-called \introduceterm{expander graphs}.
For instance, given a
formula in conjunctive normal form (CNF)
one can look at its bipartite clause-variable incidence graph (CVIG),
or some variant of the CVIG derived from the combinatorial structure
of the formula, and prove that if this graph is an expander, then this
implies that the formula must be hard for proof systems such as
resolution~\cite{BW01ShortProofs} and polynomial
calculus~\cite{AR03LowerBounds,MN15GeneralizedMethodDegree}.
In a striking recent paper~\cite{Razborov16NewKind}, Razborov combines
\XORification
and expansion in a simple (with hindsight)
but amazingly powerful way. Namely, instead of replacing every
variable by an XOR over new, fresh variables,
he recycles variables from a much smaller pool, thus decreasing the
total number of variables.
This means that the hardness amplification proofs no longer work,
since they crucially use that all new substitution variables
are distinct.
But here expansion comes into play. If the pattern of variable
substitutions is described by a strong enough bipartite expander, it
turns out that locally there is enough ``freshness'' even among the
recycled variables to make the hardness amplification go through over
a fairly wide range of the parameter space.
And since the formula has not only become harder but has also had the
number of variables decreased,
this can be viewed as a kind of
\introduceterm{hardness compression}
or
\introduceterm{hardness condensation}.
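As a small illustration in our own notation (the formal construction appears in \refsec{sec:hardness-condensation}): if the substitution pattern is described by a bipartite graph $G$ with the original variables $x_1,\ldots,x_m$ on one side and a smaller pool $y_1,\ldots,y_n$ on the other, then every original variable is replaced according to
\[
x_i \;\mapsto\; \bigoplus_{j\in N_G(x_i)} y_j
\enspace ,
\]
so that an XOR\xspace clause such as $(x_1,x_2,a)$ turns into the constraint $\sum_{j\in N_G(x_1)} y_j + \sum_{j\in N_G(x_2)} y_j \equiv a \pmod{2}$. If $G$ is a good enough expander, then any small set of original variables has many unique neighbours in the pool, which is precisely the ``local freshness'' among recycled variables alluded to above.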
What we do in this paper is to first revisit
Immerman's old quantifier depth lower bound for first-order counting
logic~\cite{Immerman.1981} and observe that the construction can be
used to obtain an improved scalable lower bound for the
\mbox{$\pebblesk$-variable} fragment. We then translate Razborov's hardness
condensation technique~\cite{Razborov16NewKind} into the language of
finite variable logics and use it---perhaps somewhat amusingly applied
to \XORification of XOR\xspace formulas, which is usually not the case in
proof complexity---to reduce the domain size of relational structures
while maintaining the minimal quantifier depth required to distinguish
them.
\mysubsection{Outline of This Paper}
The rest of this paper is organized as follows.
In \refsec{sec:prelims} we describe how to translate XOR\xspace formulas to
relational structures and play combinatorial games on these
structures.
This then allows us to state our main technical lemmas
in \refsec{sec:lower-bound-proof} and show how these lemmas yield our
results.
Turning to the proofs of these technical lemmas,
in \refsec{sec:pyramids}
we present a version of Immerman's quantifier depth lower bound for
XOR\xspace formulas, and in
\refsec{sec:hardness-condensation}
we apply Razborov's hardness condensation technique to these formulas.
Finally, in
\refsec{sec:conclusion}
we make some concluding remarks and discuss possible directions for
future research.
\ifthenelse{\boolean{conferenceversion}}
{Due to space constraints, we omit some of the more standard technical
proofs in this conference version, referring the reader to the
upcoming full-length version for the missing details.}
{Some proofs of technical results needed in the paper are deferred to
\refapp{app:existence-expander}.}
%
\makeatletter{}%
\section{From XOR\xspace Formulas to Relational Structures}
\label{sec:prelims}
In this paper all structures are finite and defined over a relational
signature $\sigma$.
We use the letters $X$,~$E$, and $R$ for unary, binary, and \mbox{$r$-ary}
relation symbols, respectively, and let
$X^\mathcal A$, $E^\mathcal A$, and~$R^\mathcal A$ be their interpretation in a
structure $\mathcal A$.
We write~$V(\mathcal A)$ to denote
the domain of the structure~$\mathcal A$.
The \emph{$\pebblesk$\nobreakdash-variable
fragment of first-order logic}~$\ensuremath{\mathsf L\!^{\pebblesk}}$ consists
of all first-order formulas that use at most $\pebblesk$~different variables
(possibly re-quantifying them as in Equation~\eqref{eq:FOkExample}).
We also consider \emph{$\pebblesk$-variable first-order counting logic}
$\ensuremath{\mathsf C^{\pebblesk}}$, which is the extension of $\ensuremath{\mathsf L\!^{\pebblesk}}$ by counting quantifiers
$\exists^{\geq i}x \varphi(x)$, stating that there exist at least $i$
elements $u\in V(\mathcal A)$ such that
$(\mathcal A,u)\models \varphi(x)$.
For a survey of finite variable logics and their applications we refer
the reader to, e.g.,~\cite{Grohe.1998}.
An \introduceterm{\ensuremath{\ell}-XOR\xspace{} clause} is a tuple
$(x_1,\ldots,x_\ensuremath{\ell},a)$ consisting of $\ensuremath{\ell}$~Boolean
variables and a Boolean value $a\in\{0,1\}$.
We refer to $\ensuremath{\ell}$ as the \introduceterm{width} of the clause.
An assignment~$\alpha$ \emph{satisfies}
$(x_1,\ldots,x_\ensuremath{\ell},a)$
if $\alpha(x_1) + \cdots + \alpha(x_\ensuremath{\ell}) \equiv a \pmod{2}$.
An $\ensuremath{\ell}$-XOR\xspace{} formula~$\ensuremath{F}$ is a conjunction of XOR\xspace~clauses of
width at most~$\ensuremath{\ell}$ and is satisfied by
an assignment~$\alpha$ if $\alpha$ satisfies all
clauses in~$\ensuremath{F}$.
For every $\ensuremath{\ell}$-XOR\xspace{} formula $\ensuremath{F}$ on $n$~variables we can
define a pair of $2n$-element structures \mbox{$\mathcal A=\mathcal A(\ensuremath{F})$}
and $\mathcal B=\mathcal B(\ensuremath{F})$ that are isomorphic if and only if\xspace $\ensuremath{F}$ is
satisfiable.
The domain of the structures contains two elements $x_i^0$ and $x^1_i$
for each Boolean variable $x_i$.
There is one unary
predicate~$X_i$ for every variable~$x_i$ identifying
the corresponding two elements $x_i^0$ and~$x^1_i$.
Hence these unary relations partition the domain of the structures into
two-element sets, i.e.,\ $X_i^\mathcal A = X_i^\mathcal B =
\{x_i^0,x_i^1\}$.
To encode the XOR\xspace clauses, we introduce one $m$-ary
relation $R_m$ for every $1\leq m \leq
\ensuremath{\ell}$ and set
\begin{subequations}
\begin{align}
\label{eq:structure-A}
\ifthenelse{\boolean{conferenceversion}}
{\!R_m^\mathcal A \!&=\!}
{R_m^\mathcal A &=}
\Setdescr{\bigl(x^{a_1}_{i_1},\ldots,x^{a_m}_{i_m}\bigr)}
{(x_{i_1},\ldots,x_{i_m},a)
\ifthenelse{\boolean{conferenceversion}}
{\!\in\! \ensuremath{F}, \textstyle\sum_i\! a_i \!\equiv\! 0 }
{\in \ensuremath{F},\;\textstyle\sum_i a_i \equiv 0 \pmod{2}}}
\\
\shortintertext{and}
\label{eq:structure-B}
\ifthenelse{\boolean{conferenceversion}}
{\!R_m^\mathcal B \!&=\!}
{R_m^\mathcal B &=}
\Setdescr{\bigl(x^{a_1}_{i_1},\ldots,x^{a_m}_{i_m}\bigr)}
{(x_{i_1},\ldots,x_{i_m},a)
\ifthenelse{\boolean{conferenceversion}}
{ \!\in\! \ensuremath{F}, \textstyle\sum_i\! a_i \!\equiv\! a }
{ \in \ensuremath{F},\;\textstyle\sum_i a_i \equiv a \pmod{2}}}
\ifthenelse{\boolean{conferenceversion}}
{}
{\enspace .}
\end{align}
\end{subequations}
\ifthenelse{\boolean{conferenceversion}}
{(where the sums are taken $\bmod{}\ 2$). Every}
{Every}
bijection $\beta$ between the domains of $\mathcal A(\ensuremath{F})$ and
$\mathcal B(\ensuremath{F})$ that preserves the unary relations~$X_i$
can be translated
to an assignment $\alpha$ for the XOR\xspace{} formula via the correspondence
\ifthenelse{\boolean{conferenceversion}}
{$\alpha(x_i) = 0 \Leftrightarrow \beta(x_i^0) = x_i^0 \Leftrightarrow
\beta(x_i^1) = x_i^1$ and $\alpha(x_i) = 1 \Leftrightarrow \beta(x_i^0)
= x_i^1 \Leftrightarrow \beta(x_i^1) = x_i^0$.}
{\begin{subequations}
\begin{gather}
\label{eq:correspondence-1}
\alpha(x_i) = 0 \Leftrightarrow \beta(x_i^0) = x_i^0 \Leftrightarrow
\beta(x_i^1) = x_i^1
\\
\shortintertext{and}
\label{eq:correspondence-2}
\alpha(x_i) = 1 \Leftrightarrow \beta(x_i^0)
= x_i^1 \Leftrightarrow \beta(x_i^1) = x_i^0
\enspace .
\end{gather}
\end{subequations}
}
\ifthenelse{\boolean{conferenceversion}}
{It is}
{Moreover, it is}
not hard to show that such a bijection defines an
isomorphism between
$\mathcal A(\ensuremath{F})$ and $\mathcal B(\ensuremath{F})$ if and only if the
corresponding assignment satisfies $\ensuremath{F}$.
\ifthenelse{\boolean{conferenceversion}}
{}
{See \reffig{fig:xor_encoding_example} for a small example
illustrating the construction.
%
\makeatletter{}%
\begin{figure}[t]
\centering
\begin{tikzpicture}[vertex/.style={circle,draw=black,fill=black,inner sep=0pt,minimum size=4pt},bluevertex/.style={rectangle,draw=black,fill=blue,inner sep=0pt,minimum size=4pt},redvertex/.style={circle,draw=black,fill=red,inner sep=0pt,minimum size=4pt},
diredge/.style={->,shorten <=.5pt, shorten >=.5pt}] %
\node[bluevertex,label=above:{\small $x_7^0$}] (x0a) at (0,2) {};
\node[bluevertex,label=above:{\small $x_7^1$}] (x1a) at (.5,2) {};
\node[redvertex,label=below:{\small $x_8^0$}] (y0a) at (0,1.25) {};
\node[redvertex,label=below:{\small $x_8^1$}] (y1a) at (.5,1.25) {};
\node at (.25,0.25) {$\mathcal A$};
\node[bluevertex,label=above:{\small $x_7^0$}] (x0b) at (0+2,2) {};
\node[bluevertex,label=above:{\small $x_7^1$}] (x1b) at (.5+2,2) {};
\node[redvertex,label=below:{\small $x_8^0$}] (y0b) at (0+2,1.25) {};
\node[redvertex,label=below:{\small $x_8^1$}] (y1b) at (.5+2,1.25) {};
\node at (2.25,0.25) {$\mathcal B$};
\draw[diredge] (x0a) -- (y0a);
\draw[diredge] (x1a) -- (y1a);
\draw[diredge] (x0b) -- (y1b);
\draw[diredge] (x1b) -- (y0b);
%
\end{tikzpicture}
\caption{Structure encoding of $\ensuremath{F}=\{(x_7,x_8,1)\}$.}
\label{fig:xor_encoding_example}
\end{figure}
%
}
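An illustrative rendering of the construction (our Python sketch; a clause is a tuple $(i_1,\ldots,i_m,a)$ of variable indices followed by a parity bit, and the domain element $x_i^b$ is represented by the pair $(i,b)$):
\begin{verbatim}
from itertools import product

def encode(F):
    # Returns the relations R_m of the structures A(F) and B(F).
    RA, RB = {}, {}
    for clause in F:
        *idx, a = clause
        m = len(idx)
        for bits in product((0, 1), repeat=m):
            tup = tuple(zip(idx, bits))        # (x_{i_1}^{a_1}, ...)
            if sum(bits) % 2 == 0:             # even parity -> R_m^A
                RA.setdefault(m, set()).add(tup)
            if sum(bits) % 2 == a % 2:         # parity a -> R_m^B
                RB.setdefault(m, set()).add(tup)
    return RA, RB
\end{verbatim}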
This kind of encoding of XOR\xspace formulas into relational structures
has been very useful for proving lower bounds for finite variable
logics in the past.
Our transformation of
XOR\xspace clauses of width~$\ensuremath{\ell}$ into
$\ensuremath{\ell}$\nobreakdash-ary relational structures resembles the way
Gurevich and Shelah~\cite{Gurevich.1996} encode XOR\xspace formulas as hypergraphs.
It is also closely related to the way Cai, Fürer, and
Immerman~\cite{Cai.1992} obtain two non-isomorphic graphs $\mathcal G$
and~$\mathcal H$
from
an unsatisfiable \mbox{$3$-XOR\xspace} formula $\ensuremath{F}$
in the sense that $\mathcal G$ and $\mathcal H$ can be seen to be the
incidence graphs of our structures $\mathcal A(\ensuremath{F})$
and~$\mathcal B(\ensuremath{F})$.
In order to prove our main result, we make use of the combinatorial
characterization of quantifier depth of finite-variable logics in
terms of pebble games for $\ensuremath{\mathsf L\!^{\pebblesk}}$ and $\ensuremath{\mathsf C^{\pebblesk}}$,
which are played on two given relational structures.
Since in our case the structures are based on XOR\xspace{} formulas,
\ifthenelse{\boolean{conferenceversion}}
{for convenience we consider a simplified combinatorial game that is
played directly on the formulas}
{for convenience we will consider a simplified combinatorial game that is
played directly on the XOR\xspace{} formulas}
rather than on their structure encodings. We first describe this game and then
show
in \reflem{lem:EquivalentCharacterisations} that this
yields an equivalent characterization.
The
\introduceterm{$\roundstd$-round $\pebblesk$-pebble game} is played on an XOR\xspace
formula $\ensuremath{F}$ by two players,
whom we will refer to as
\firstplayer and \secondplayer.
A~position in the game is a partial assignment~$\alpha$ of at most
$\pebblesk$~variables of~$\ensuremath{F}$ and the game starts with the empty
assignment. In each round, \firstplayer can delete some variable
assignments from the current position (he chooses some
$\alpha'\subseteq\alpha$). If the current position assigns values to
exactly $\pebblesk$~variables, then \firstplayer has to delete at
least one variable assignment. Afterwards, \firstplayer chooses
some currently unassigned variable~$x$ and
asks for its value.
\secondplayer answers by either $0$ or~$1$
(independently of any previous answers to the same question)
and adds this
\ifthenelse{\boolean{conferenceversion}}
{assignment}
{variable assignment}
to the current position.
A winning position for \firstplayer is an assignment falsifying some
clause from $\ensuremath{F}$.
\firstplayer wins the $\roundstd$-round $\pebblesk$-pebble game if
he has a strategy to win every play of the $\pebblesk$-pebble game
within at most $\roundstd$~rounds.
Otherwise, we say that \secondplayer wins (or survives) the
$\roundstd$-round $\pebblesk$-pebble game.
\firstplayer \emph{wins the $\pebblesk$-pebble game} if he wins the
$\roundstd$-round $\pebblesk$-pebble game within a finite number of
rounds~$\roundstd$.
Note that if \firstplayer wins the $\pebblesk$-pebble game, then he
can always win the $\pebblesk$-pebble game
within $2^\pebblesk n^{\pebblesk+1}$~rounds,
because there are
at most
$
\sum_{i=0}^{\pebblesk} 2^i \binom{n}{i} \leq
2^\pebblesk n^{\pebblesk+1}
$
different positions with at most $\pebblesk$~pebbles on
$n$\nobreakdash-variable XOR\xspace formulas.
We say that \firstplayer \emph{can reach a position} $\beta$ from a
position $\alpha$ within $\roundstd$ rounds if he has a strategy such
that in every play of the $\roundstd$\nobreakdash-round
$\pebblesk$\nobreakdash-pebble game starting from position $\alpha$ he
either wins or ends up with position~$\beta$.
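The winning condition and the round mechanics can be summarized by the following sketch (illustrative only; a position is a dictionary from variable indices to truth values):
\begin{verbatim}
def falsifies(alpha, F):
    # A position is winning for Player 1 iff some clause of F is
    # fully assigned by alpha and has the wrong parity.
    for clause in F:
        *idx, a = clause
        if (all(i in alpha for i in idx)
                and sum(alpha[i] for i in idx) % 2 != a):
            return True
    return False

def play_round(alpha, k, deleted, query, answer):
    # Player 1 deletes some assignments and asks for `query`;
    # Player 2 answers 0 or 1.  At most k variables may be assigned,
    # so there must be room for the new assignment after deletion.
    alpha = {x: v for x, v in alpha.items() if x not in deleted}
    assert len(alpha) < k and query not in alpha
    return {**alpha, query: answer}
\end{verbatim}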
\ifthenelse{\boolean{conferenceversion}}
{}
{%
As a side remark, we
note that if we expand the XOR\xspace formula
to
CNF\xspace,
%
then our pebble game is the same as the
so-called
\emph{Boolean existential pebble game} played on
%
this
CNF\xspace encoding and
therefore also characterizes the resolution width
%
%
required for
the corresponding
CNF\xspace formula as shown in~\cite{AD08CombinatoricalCharacterization}.
Intuitively, it is this correspondence that enables us to apply the
proof complexity techniques from~\cite{Razborov16NewKind} in our
setting. We will not need to use any concepts from proof complexity
in this paper, however, but will present a self-contained proof, and
so we do not elaborate further on this connection.
} %
Let us now show that the game described above is equivalent to the
pebble game for $\ensuremath{\mathsf L}^{\pebblesk}$ and to the bijective pebble game for
$\ensuremath{\mathsf C}^{\pebblesk}$ played on the structures $\mathcal A(\ensuremath{F})$ and
$\mathcal B(\ensuremath{F})$.
\begin{lemma}\label{lem:EquivalentCharacterisations}
Let $k, p, r$ be integers such that
$r>0$ and $k\geq p$
and let $\ensuremath{F}$ be
a $p$-XOR\xspace formula
giving rise to structures
$\mathcal A=\mathcal A(\ensuremath{F})$
and
$\mathcal B=\mathcal B(\ensuremath{F})$
as described in the paragraph preceding
\refeq{eq:structure-A}--\refeq{eq:structure-B}.
Then the following statements are equivalent:
\begin{enumerate}[label=(\alph*)]
\item
\label{item:equiv-char-1}
Player 1 wins the $\roundstd$-round $\pebblesk$-pebble game on
$\ensuremath{F}$.
\item
\label{item:equiv-char-2}
There is a $\pebblesk$-variable first-order sentence $\varphi\in
\ensuremath{\mathsf L}^{\pebblesk}$ of quantifier depth $\roundstd$ such that
$\mathcal A(\ensuremath{F})\models \varphi$ and $\mathcal B(\ensuremath{F})\not\models
\varphi$.
\item
\label{item:equiv-char-3}
There is a $\pebblesk$-variable sentence in first-order counting
logic $\varphi\in \ensuremath{\mathsf C}^{\pebblesk}$ of quantifier depth $\roundstd$ such
that $\mathcal A(\ensuremath{F})\models \varphi$ and
$\mathcal B(\ensuremath{F})\not\models \varphi$.
\item
\label{item:equiv-char-4}
The $(\pebblesk-1)$-dimensional Weisfeiler--Leman procedure
can
distinguish between $\mathcal A(\ensuremath{F})$
and~$\mathcal B(\ensuremath{F})$ within $\roundstd$ refinement steps.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof sketch]
Let us start by briefly recalling known characterizations in terms
of Ehrenfeucht-Fra\"iss\'e\xspace games of
$\ensuremath{\mathsf L\!^{\pebblesk}}$~\cite{Barwise.1977,Immerman.1982}
and~$\ensuremath{\mathsf C^{\pebblesk}}$~\cite{Cai.1992,Hella.1996}.
In both cases the game is played by two players,
\ifthenelse{\boolean{conferenceversion}}
{referred to as}
{called}
Spoiler\xspace and
Duplicator\xspace, on the two structures $\mathcal A$ and~$\mathcal B$.
Positions in the games are partial mappings
$p =
\Set{(u_1,v_1), \ldots,
(u_i,v_i)}$
from $V(\mathcal A)$ to $V(\mathcal B)$ of size at most~$\pebblesk$.
The games start from the empty position and proceed in rounds.
At the beginning of each round in both games, Spoiler\xspace chooses
$p'\subseteq p$ with
$\setsize{p'}<\pebblesk$.
\begin{itemize}
\item
In the $\ensuremath{\mathsf L\!^{\pebblesk}}$-game, Spoiler\xspace then
selects
either some
$u\in V(\mathcal A)$ or some $v\in
V(\mathcal B)$ and Duplicator\xspace responds by choosing an element
$v\in V(\mathcal B)$ or $u\in V(\mathcal A)$
in the other structure.
\item
In the $\ensuremath{\mathsf C^{\pebblesk}}$-game, Duplicator\xspace first
selects
a global
bijection \mbox{$f:V(\mathcal A)\to V(\mathcal B)$} and Spoiler\xspace
chooses some pair $(u, v)\in
f$.
(If $\setsize{V(\mathcal A)} \neq \setsize{V(\mathcal B)} $,
Spoiler\xspace
wins the $\ensuremath{\mathsf C^{\pebblesk}}$-game immediately.)
\end{itemize}
The new position is
$p'\cup\{(u, v)\}$.
Spoiler wins the $\roundstd$-round $\ensuremath{\mathsf L\!^{\pebblesk}}\,/\,\ensuremath{\mathsf C^{\pebblesk}}$ game if he has
a strategy to reach within $\roundstd$~rounds
\ifthenelse{\boolean{conferenceversion}}
{a position}
{a position~$p$}
that does not define an isomorphism on the induced substructures.
Both games characterize equivalence in the corresponding logics:
Spoiler wins the $\roundstd$-round $\ensuremath{\mathsf L\!^{\pebblesk}}\,/\,\ensuremath{\mathsf C^{\pebblesk}}$ game if and only
if there is a
\ifthenelse{\boolean{conferenceversion}}
{\mbox{depth-$\roundstd$} sentence $\varphi\in \ensuremath{\mathsf L\!^{\pebblesk}}\,/\,\ensuremath{\mathsf C^{\pebblesk}}$}
{sentence $\varphi\in \ensuremath{\mathsf L\!^{\pebblesk}}\,/\,\ensuremath{\mathsf C^{\pebblesk}}$ of quantifier
depth~$\roundstd$}
such that $\mathcal A\models \varphi$ and
$\mathcal B\not\models \varphi$.
When these games are played on the two structures $\mathcal A(\ensuremath{F})$
and~$\mathcal B(\ensuremath{F})$ obtained from an XOR\xspace formula~$\ensuremath{F}$, it is
not hard to
verify
that both games are
equivalent to the $\pebblesk$-pebble game on $\ensuremath{F}$.
To see this,
we identify Spoiler\xspace with \firstplayer, Duplicator\xspace with
\secondplayer, and partial mappings
\mbox{$p=\setdescr{(x^{a_i}_i,x^{b_i}_i) }{ i\leq \ell}$}
with partial assignments
$\alpha = \setdescr{x_i\mapsto a_i\oplus b_i}{i\leq\ell}$.
Because of the $X_i$-relations, we can assume that partial assignments
of any other form will not occur as they are losing positions for
Duplicator\xspace.
If Spoiler\xspace asks for some $x_i^0$ or $x_i^1$ in the
\mbox{$\ensuremath{\mathsf L\!^{\pebblesk}}$-game}, which corresponds
to a choice by \firstplayer of
$x_i\in\operatorname{Vars}(\ensuremath{F})$, the only meaningful
action
for Duplicator\xspace is to choose either $x_i^0$ or $x_i^1$ in the other
structure, corresponding to an assignment to~$x_i$ by \secondplayer.
With any other choice Duplicator\xspace would lose immediately because of the unary
relations~$X_i$.
Thus, there is a natural correspondence between strategies in the
\mbox{$\ensuremath{\mathsf L\!^{\pebblesk}}$-game} and the $\pebblesk$\nobreakdash-pebble game.
The players in the $\pebblesk$\nobreakdash-pebble game can be
assumed to have perfect knowledge of the strategy of the other
player. This means that at any given position in the game, without
loss of generality we can think of \firstplayer as being given a
complete truth value assignment to the remaining variables, out of
which he can pick one variable assignment. By the correspondence
\ifthenelse{\boolean{conferenceversion}}
{discussed above}
{in \refeq{eq:correspondence-1}--\refeq{eq:correspondence-2}}
we see that this can be translated to a bijection $f$
chosen by Duplicator\xspace in the $\ensuremath{\mathsf C^{\pebblesk}}$-game (which has to preserve the
$X_i$ relations). Therefore, Spoiler\xspace picking some pair of the form
$(x_i^a,x_i^b)$ from $f$ can be viewed as \firstplayer asking
about the assignment to~$x_i$ and getting a response from
\secondplayer in the game on~$\ensuremath{F}$
(again using the above-mentioned correspondence between partial mappings
$p$ and partial assignments $\alpha$).
Finally, we observe that by design a partial mapping that preserves
the $X_i$-relations defines a local isomorphism if and only if the corresponding
$\alpha$ does not falsify any XOR\xspace clause.
Formalizing the proof sketch above, it is not hard to show that
\ifthenelse{\boolean{conferenceversion}}
{statements \ref{item:equiv-char-1}--\ref{item:equiv-char-3}}
{statements \ref{item:equiv-char-1}--\ref{item:equiv-char-3}
%
in
the lemma}
are all equivalent. The equivalence between
\ref{item:equiv-char-3} and \ref{item:equiv-char-4}
was proven in~\cite{Cai.1992}.
The lemma follows.
\end{proof}
%
\makeatletter{}%
\ifthenelse{\boolean{conferenceversion}}
{\section{Proofs of Main Theorems}}
{\section{Technical Lemmas and Proofs of Main Theorems}}
\label{sec:lower-bound-proof}
To prove our lower bounds of the quantifier depth of finite variable
logics in \refth{thm:maintheorem} and the number of refinement steps
of the Weisfeiler--Leman algorithm
in \reftwoths{thm:mainWLtheorem}{thm:mainTheoremWLquasipolyTradeoff},
we utilize the characterization in
\reflem{lem:EquivalentCharacterisations} and show that there are
$n$\nobreakdash-variable XOR\xspace
formulas on which \firstplayer is able to win the $\pebblesk$-pebble
game but cannot do so in significantly less than
$n^{\pebblesk/\log \pebblesk}$~rounds.
The next lemma states this formally and also provides a trade-off as
the number of pebbles increases.
\begin{lemma}[Main technical lemma]
\label{lem:MainTheoremXor}
There is an absolute constant
$k_0 \in \Nplus$ such that
\ifthenelse{\boolean{conferenceversion}}
{for}
{for integers}
$\pebblesk_{\parammin}$,
$\pebblesk_{\parammax}$,
and $n$
satisfying
%
%
$k_0 \leq
%
\pebblesk_{\parammin}
\leq \pebblesk_{\parammax}
\leq n^{1/6}/\pebblesk_{\parammin}$
there is an XOR\xspace formula
$\ensuremath{F}$ with $n$~variables such that \firstplayer{} wins the
$\pebblesk_{\parammin}$-pebble game on~$\ensuremath{F}$, but does not
win the $\pebblesk_{\parammax}$-pebble game
within
\mbox{$n^{\pebblesk_{\parammin}/(10\log \pebblesk_{\parammax})-1/5}$ rounds}.
\end{lemma}
Note that there is a limit to how far
$\pebblesk_{\parammin}$
and
$\pebblesk_{\parammax}$
can be from each other for the lemma to make sense---the statement
becomes vacuous if
$\pebblesk_{\parammin} \leq 2 \log \pebblesk_{\parammax}$.
Let us see how this lemma yields the theorems in \refsec{sec:intro}.
\begin{proof}[Proof of \refth{thm:maintheorem}]
This theorem
\ifthenelse{\boolean{conferenceversion}}
{can be seen to follow immediately from}
{follows immediately from}
\reftwolems
{lem:EquivalentCharacterisations}
{lem:MainTheoremXor},
but let us write out the details for clarity.
%
By setting
$\pebblesk_{\parammin} = \pebblesk_{\parammax} = \pebblesk$
in \reflem{lem:MainTheoremXor},
we can find XOR\xspace formulas with
$n$~variables such that \firstplayer wins the
$\pebblesk$-pebble game on $\ensuremath{F}_n$ but needs more than
\mbox{$n^{\varepsilon \pebblesk/\log \pebblesk}$
rounds}
in order to do so
(provided we choose $\varepsilon<1/10$ and $k_0$ large enough).
%
We can then plug these XOR\xspace formulas into
\reflem{lem:EquivalentCharacterisations}
to obtain $n$-element structures
$\mathcal A_n = \mathcal A(\ensuremath{F}_n)$
and
$\mathcal B_n = \mathcal B(\ensuremath{F}_n)$
that can be distinguished in the
$\pebblesk$-variable fragments of first-order logic~$\ensuremath{\mathsf L}^{\pebblesk}$
and first-order counting logic~$\ensuremath{\mathsf C}^{\pebblesk}$,
but where this requires sentences of quantifier depth at
\mbox{least $n^{\varepsilon \pebblesk / \log \pebblesk}$}.
\end{proof}
\begin{proof}[Proof of \refth{thm:mainWLtheorem}]
If we let $\ensuremath{F}_n$ be the XOR\xspace formula from
\reflem{lem:MainTheoremXor} for
$\pebblesk_{\parammin} = \pebblesk_{\parammax} = \pebblesk + 1$,
then by
\reflem{lem:EquivalentCharacterisations} it holds that
the structures $\mathcal A(\ensuremath{F}_n)$ and $\mathcal B(\ensuremath{F}_n)$
will be distinguished
by the $\pebblesk$-dimensional Weisfeiler--Leman algorithm, but only
after $n^{\varepsilon(\pebblesk+1) / \log
(\pebblesk+1)}\geq n^{\varepsilon\pebblesk / \log\pebblesk}$
refinement steps.
Hence, computing the stable colouring of either of these structures
requires at least
$n^{\varepsilon \pebblesk / \log \pebblesk}$
refinement steps (since they would be distinguished earlier if at
least one of the computations terminated earlier).
\end{proof}
\begin{proof}[Proof of \refth{thm:mainTheoremWLquasipolyTradeoff}]
This is similar to the proof of \refth{thm:mainWLtheorem}, but setting
$\pebblesk_{\parammin} = \lfloor\log n^c\rfloor + 1$
and
\mbox{$\pebblesk_{\parammax} = \Ceiling{n^{1/7}} + 1$}
in \reflem{lem:MainTheoremXor}.
\end{proof}
\ifthenelse{\boolean{conferenceversion}}
{The proof of \reflem{lem:MainTheoremXor} splits into two steps.}
{The proof of the trade-off between the number of pebbles versus number
of rounds in \reflem{lem:MainTheoremXor} splits into two steps.}
We first establish a rather weak lower bound on the number of rounds in
the pebble game played on suitably chosen
$m$\nobreakdash-variable XOR\xspace formulas
\mbox{for $m \gg n$.}
We then transform this into a much
stronger lower bound for formulas over $n$~variables using
hardness condensation.
To help the reader keep track of
which
results are proven in which
setting,
in what follows we will write $\pebblesl_{\parammin}$
and~$\pebblesl_{\parammax}$ to denote parameters depending on~$m$
and $\pebblesk_{\parammin}$ and~$\pebblesk_{\parammax}$ to denote parameters
depending on~$n$.
To implement the first step in our proof plan, we use tools developed by
Immerman~\cite{Immerman.1981} to establish a lower bound as stated in
the next lemma.
\begin{lemma}
\label{lem:pyramids}
%
%
%
For all $\pebblesl_{\parammax}, m \geq 3$
there is an
$m$-variable 3-XOR\xspace{} formula
$\ensuremath{F}^{\pebblesl_{\parammax}}_{m}$
on which
%
%
%
%
%
%
%
%
%
%
\firstplayer
\begin{enumerate}[label=(\alph*)]
\item
\label{item:immerman-a}
wins the \mbox{$3$-pebble} game, but
\item
\label{item:immerman-b}
\ifthenelse{\boolean{conferenceversion}}{
does not win the $\bigl(\tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
m^{1/(1+\ceiling{\log \pebblesl_{\parammax}})}\! -\! 2\bigr)$-round \mbox{$\pebblesl_{\parammax}$-pebble} game.
}{
does not win the $\pebblesl_{\parammax}$-pebble game
within
$\max\bigl(3, \tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
m^{1/(1+\ceiling{\log \pebblesl_{\parammax}})} - 2\bigr)$
%
%
rounds.
}
\end{enumerate}
\end{lemma}
We defer the proof of \reflem{lem:pyramids} to \refsec{sec:pyramids}, but
at this point an expert reader might wonder why we would need to prove
this lower bound at all, since a much stronger $\bigomega{m}$
bound on the number of rounds in the
pebble game
on \mbox{$4$-XOR\xspace} formulas was
already obtained by Fürer~\cite{Furer.2001}. The reason is that in Fürer's
construction
\firstplayer cannot
win the game with
few pebbles.
However, it is crucial for the second step of our proof,
where we boost the lower bound but also significantly increase
the number of pebbles that are needed to win the game,
that \firstplayer is able to win the original game with
very few pebbles.
The second step in the proof of our main technical lemma is carried
out by using the techniques developed by
Razborov~\cite{Razborov16NewKind} and applying them to the XOR\xspace
formulas in \reflem{lem:pyramids}.
Roughly speaking,
if we set
$\pebblesk_{\parammin} = \pebblesk_{\parammax} = \pebblesk$
for simplicity, then the number of variables decreases from~$m$
to \mbox{$n \approx m^{1/\pebblesk}$}, whereas the
\mbox{$m^{1/\log \pebblesk}$ round} lower bound for the
$\pebblesk$-pebble game stays essentially the same and hence becomes
$n^{\pebblesk/\log \pebblesk}$ in terms of the new number
of variables~$n$.
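To spell out the arithmetic in this simplified setting: if the condensed formula has
$n \approx m^{1/\pebblesk}$ variables, then a lower bound of $m^{1/\log \pebblesk}$ rounds
for the original formula translates into
\begin{equation*}
m^{1/\log \pebblesk}
\approx
\bigl( n^{\pebblesk} \bigr)^{1/\log \pebblesk}
=
n^{\pebblesk/\log \pebblesk}
\end{equation*}
rounds for the condensed formula, which is the dependence on~$n$ that we are after.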
The properties of hardness condensation are
summarized in the next lemma,
which we prove in \refsec{sec:hardness-condensation}.
To demonstrate the flexibility of this tool we state the lemma in its
most general form---readers who want to see
an example of
how to apply it to the
XOR\xspace formulas in \reflem{lem:pyramids} can mentally fix
$p = 3$,
$\pebblesl_{\parammin} = 3$,
$\pebblesl_{\parammax} = \pebblesk_{\parammax}$,
$r \approx m^{1/\log \pebblesk_{\parammax}}$,
and
$\Delta \approx \pebblesk_{\parammax} / 3$
when reading the statement of the lemma below.
\begin{lemma}[Hardness condensation lemma]
\label{lem:hardnessCondensationXOR}
There
exists
an absolute constant $\expanderdegree_0 \in \Nplus$ such that the following holds.
Let $\ensuremath{F}$ be an
\mbox{$m$-variable} \mbox{$p$-XOR\xspace{}} formula
and suppose that we can choose parameters
%
%
\mbox{$\pebblesl_{\parammin} > 0$},
\mbox{$\pebblesl_{\parammax}\geq \expanderdegree_0\pebblesl_{\parammin}$}
and
%
%
\mbox{$r$}
such that \firstplayer
\begin{enumerate}[label=(\alph*)]
\item \label{item:condensationlhs-a}
has a winning strategy for the
$\pebblesl_{\parammin}$-pebble game on~$\ensuremath{F}$, but
\item \label{item:condensationlhs-b}
does not win the $\pebblesl_{\parammax}$-pebble game on~$\ensuremath{F}$
within $r$ rounds.
\end{enumerate}
Then for
any %
$\Delta$ satisfying
$\expanderdegree_0\leq \Delta \leq \pebblesl_{\parammax}/\pebblesl_{\parammin}$ and
\mbox{$(2\pebblesl_{\parammax}
\Delta)^{2\Delta} \leq m$}
%
there is a $(\Delta p)$\nobreakdash-XOR\xspace{} formula
$\ensuremath{H}$
with
$\Ceiling{m^{3/\Delta}}$ variables such that
\firstplayer
\begin{enumerate}[label=(\alph*')]
\item \label{item:condensationrhs-a}
has a winning strategy for the
$(\Delta\pebblesl_{\parammin})$-pebble game
on~$\ensuremath{H}$, but
\item \label{item:condensationrhs-b}
does not win the $\pebblesl_{\parammax}$-pebble game on~$\ensuremath{H}$
within
${r}/{(2\pebblesl_{\parammax})}$ rounds.
\end{enumerate}
\end{lemma}
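For orientation, let us do a back-of-the-envelope check with the parameters suggested above:
the condensed formula~$\ensuremath{H}$ is a $(3\Delta)$-XOR\xspace formula, i.e., roughly a
$\pebblesk_{\parammax}$-XOR\xspace formula, over
$n = \Ceiling{m^{3/\Delta}} \approx m^{9/\pebblesk_{\parammax}}$ variables;
by~\ref{item:condensationrhs-a} \firstplayer wins with
$3\Delta \approx \pebblesk_{\parammax}$ pebbles, but
by~\ref{item:condensationrhs-b} winning the $\pebblesk_{\parammax}$-pebble game requires roughly
\begin{equation*}
\frac{r}{2\pebblesk_{\parammax}}
\approx
\frac{m^{1/\log \pebblesk_{\parammax}}}{2\pebblesk_{\parammax}}
=
\frac{n^{\pebblesk_{\parammax}/(9 \log \pebblesk_{\parammax})}}{2\pebblesk_{\parammax}}
\end{equation*}
rounds, which is exactly the $n^{\bigomega{\pebblesk/\log \pebblesk}}$ behaviour we are aiming for.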
Taking \reftwolems{lem:pyramids}{lem:hardnessCondensationXOR}
on faith for now, we are ready to prove our main technical lemma
yielding an $n^{\bigomega{\pebblesk/\log \pebblesk}}$ lower bound on
the number of rounds in the $\pebblesk$-pebble game.
\begin{proof}[Proof of \reflem{lem:MainTheoremXor}]
Let $\expanderdegree_0$ be the constant in
\reflem{lem:hardnessCondensationXOR}.
We let
\begin{equation}
\label{eq:K0-first}
k_0 \geq 3\expanderdegree_0 +9
\end{equation}
be an absolute constant, the
precise value of which will be determined by calculations later in
the proof.
We are given $\pebblesk_{\parammax}$, $\pebblesk_{\parammin}$, and $n$
satisfying the conditions
\begin{equation}
\label{eq:main-lemma-inequality}
k_0
\leq
\pebblesk_{\parammin}
\leq
\pebblesk_{\parammax}
\leq
n^{1/6} / \pebblesk_{\parammin}
\end{equation}
in \reflem{lem:MainTheoremXor}.
%
%
%
%
%
%
Let us set
\begin{subequations}
\begin{align}
\label{eq:l-high-value}
\pebblesl_{\parammax} & := \pebblesk_{\parammax}
\\
\shortintertext{and}
\label{eq:m-value}
m &:=
n^{\lfloor\pebblesk_{\parammin}/9\rfloor}
\end{align}
and apply
\reflem{lem:pyramids}
(which is in order since
$\pebblesl_{\parammax} \geq 3$
and
$m \geq 3$
by
\refeq{eq:K0-first}
and~\refeq{eq:main-lemma-inequality}).
This yields an $m$\nobreakdash-variable \mbox{$3$-XOR}
formula on which
\firstplayer wins the \mbox{$3$-pebble} game but cannot win the
$\pebblesl_{\parammax}$-pebble game within
\begin{equation}
\label{eq:r-value}
r
:=
\tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
m^{1/(1 + \ceiling{\log \pebblesl_{\parammax}})} - 2
\end{equation}
\end{subequations}
rounds.
As a side remark we note that this lower bound term might vanish if $\pebblesk_{\parammin}$ and $\pebblesk_{\parammax}$
were too far apart from each other ($\pebblesk_{\parammin} \leq 2 \log
\pebblesk_{\parammax}$), but recall that in this case the statement of
Lemma~\ref{lem:MainTheoremXor} becomes vacuous anyway.
%
%
%
%
%
%
%
Now we can apply hardness condensation as in
\reflem{lem:hardnessCondensationXOR}
to the
formula provided by \reflem{lem:pyramids},
where we fix parameters
\begin{subequations}
\begin{align}
\label{eq:p-value}
p &:= 3 \enspace ,
\\
\label{eq:l-low-value}
\pebblesl_{\parammin} &:= 3 \enspace ,
\\
\shortintertext{and}
\label{eq:delta-value}
\Delta &:= 3\floor{\pebblesk_{\parammin}/9}
\enspace .
\end{align}
\end{subequations}
To verify that our choice of parameters is legal, note that in
addition to
$r \geq 1$
we also have
$\pebblesl_{\parammin} > 0$
and
\begin{equation}
\label{eq:hardnesscondensation-condition-1}
\pebblesl_{\parammax}
=
\pebblesk_{\parammax}
\geq
k_0
>
3\expanderdegree_0
=
\expanderdegree_0\pebblesl_{\parammin}
\enspace .
\end{equation}
Thus, the assumptions needed for
\ref{item:condensationlhs-a} and~\ref{item:condensationlhs-b}
are satisfied by the XOR\xspace formula obtained from
\reflem{lem:pyramids}.
To confirm that
$\Delta$
chosen as in~\refeq{eq:delta-value}
satisfies the conditions
in \reflem{lem:hardnessCondensationXOR},
observe that
\begin{equation}
\label{eq:delta-bound-1}
\expanderdegree_0
\leq
3 \lfloor k_0/9 \rfloor
\leq
3\lfloor\pebblesk_{\parammin}/9\rfloor
=
\Delta
\leq
\pebblesk_{\parammin}/3
\leq
\pebblesk_{\parammax}/3
=
\pebblesl_{\parammax}/\pebblesl_{\parammin}
\enspace .
\end{equation}
Furthermore, since
$\Delta\leq \pebblesk_{\parammin}/3$
and
$\pebblesl_{\parammax}=\pebblesk_{\parammax} \leq n^{1/6}/\pebblesk_{\parammin}$
we get
\begin{equation}
\label{eq:delta-bound-2}
(2\pebblesl_{\parammax}\Delta)^{2\Delta}
\leq
\left( \frac23 n^{1/6} \right)^{2\Delta}
\leq
n^{\Delta/3}
= m
\enspace .
\end{equation}
Note, finally, that
$n = n^{\lfloor\pebblesk_{\parammin}/9\rfloor 3/\Delta}
= m^{3/\Delta}$.
%
Now \reflem{lem:hardnessCondensationXOR} provides us with an
$n$\nobreakdash-variable $\pebblesk_{\parammin}$-XOR formula on which,
according to~\ref{item:condensationrhs-a}, \firstplayer has a winning
strategy for the \mbox{$(3\Delta)$-pebble} game and hence also for
the game with $\pebblesk_{\parammin}\geq 9\lfloor\pebblesk_{\parammin}/9\rfloor =
3\Delta$ pebbles.
Moreover, by~\ref{item:condensationrhs-b} it holds that
\firstplayer needs
%
more than
\mbox{$r/(2\pebblesk_{\parammax})$ rounds} to win
the \mbox{$\pebblesk_{\parammax}$-pebble} game.
To complete the proof, we observe that if we choose
$k_0$ large enough, then for
$
n > \pebblesk_{\parammax} \geq \pebblesk_{\parammin} \geq k_0
$
it holds that
\begin{align}
\nonumber
\frac{r}{2\pebblesk_{\parammax}}
&=
\frac{1}{2\pebblesk_{\parammax} \ceiling{\log \pebblesk_{\parammax}}}
n^{\lfloor \pebblesk_{\parammin}/9 \rfloor /
(1 + \ceiling{\log \pebblesk_{\parammax}})} - \frac{1}{\pebblesk_{\parammax}}
&&
\bigl[\text{by \refeq{eq:m-value} and \refeq{eq:r-value}}\bigr]
\\
&\geq
\frac{6 n^{1/5}}{n^{1/6} \log n}
n^{\lfloor\pebblesk_{\parammin}/9\rfloor/(1+\ceiling{\log
\pebblesk_{\parammax}})-1/5} - \frac{1}{\pebblesk_{\parammax}}
&&
\bigl[\text{since
$\pebblesk_{\parammax} \leq n^{1/6}/\pebblesk_{\parammin}$ and $\pebblesk_{\parammin} \geq k_0$}\bigr]
\\
\nonumber
&\geq
n^{\pebblesk_{\parammin}/(10\log \pebblesk_{\parammax})-1/5}
&&
\bigl[\text{for large enough
$n$, $\pebblesk_{\parammax}$, and $\pebblesk_{\parammin}$.}\bigr]
\end{align}
We now choose the constant $k_0$ large so that all conditions
encountered in the calculations above are valid. This establishes the
lemma.
\end{proof}
%
\makeatletter{}%
\ifthenelse{\boolean{conferenceversion}}
{\section{XOR\xspace Formulas over Pyramids}}
{\section{XOR\xspace Formulas over High-Dimensional Pyramids}}
\label{sec:pyramids}
We now proceed to establish the $k$-pebble game lower bound
stated in \reflem{lem:pyramids}.
Our XOR\xspace formulas will be constructed over directed acyclic graphs
(DAGs) as described in the following definition.
\begin{definition}
\label{def:xor-formula}
Let $\mathcal G$ be
a DAG
with
sources~$S$ and a unique sink~$z$.
The XOR\xspace formula $\digraphtoxor(\mathcal G)$ contains one
variable~$v$ for
every vertex $v\in V(\mathcal G)$ and consists of the following clauses:
\begin{enumerate}[label=(\alph*)]
\item
\label{xor-clause-source}
$(s,0)$ for every source $s\in S$,
\item
\label{xor-clause-non-source}
$(v,w_1,\ldots,w_\ell,0)$ for all
non-sources
$v\in V(\mathcal G)\setminus S$ with in-neighbours
$N^-(v)=\{w_1,\ldots,w_\ell\}$,
\item
\label{xor-clause-sink}
$(z,1)$ for the unique sink $z$.
\end{enumerate}
\end{definition}
Note that the formula
$\digraphtoxor(\mathcal G)$
is always unsatisfiable, since all
\ifthenelse{\boolean{conferenceversion}}
{sources}
{source vertices}
are forced to~$0$ by~\ref{xor-clause-source},
which forces all other vertices to~$0$ in topological order
by~\ref{xor-clause-non-source},
contradicting~\ref{xor-clause-sink}
for the sink.
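As a concrete illustration of \refdef{def:xor-formula} and of this propagation argument, the
following small Python sketch (our own, purely for illustration) builds the clause list and the
forced labelling; here \texttt{preds} is assumed to map every vertex to the list of its
in-neighbours (the empty list for sources), and \texttt{topo} to list the vertices so that
in-neighbours come first:
\begin{verbatim}
def xor_clauses(preds, sources, sink):
    # Clauses are tuples (v_1, ..., v_k, b) encoding the XOR
    # constraint v_1 + ... + v_k = b (mod 2).
    clauses = [(s, 0) for s in sources]                  # type (a)
    clauses += [(v,) + tuple(ws) + (0,)                  # type (b)
                for v, ws in preds.items() if ws]
    clauses.append((sink, 1))                            # type (c)
    return clauses

def forced_labelling(preds, sources, topo):
    # Sources are forced to 0 by the type (a) clauses, and every
    # non-source is forced to the parity of its in-neighbours by
    # the type (b) clauses.  Hence all vertices are forced to 0,
    # so the sink clause (sink, 1) of type (c) is falsified.
    val = {s: 0 for s in sources}
    for v in topo:
        if v not in val:
            val[v] = sum(val[w] for w in preds[v]) % 2
    return val
\end{verbatim}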
Incidentally, these formulas are somewhat similar to the
\emph{pebbling formulas} defined in~\cite{BW01ShortProofs}, which have
been very useful in proof complexity (see the
survey~\cite{Nordstrom13SurveyLMCS} for more details).
The difference is that pebbling formulas state that a vertex $v$
is true if and only if all of its in-neighbours are true,
whereas $\digraphtoxor(\mathcal G)$ states that $v$ is true if and only if the
parity of the number of true in-neighbours is odd.
It is clear that one winning strategy for \firstplayer is to ask first
about the sink~$z$, for which \secondplayer has to answer~$1$ (or lose
immediately) and then about all the in-neighbours of the sink until the
answer for one vertex~$v$ is~$1$ (if there is no such vertex,
\secondplayer again loses immediately). At this point \firstplayer can
forget all other vertices and then ask about the
in-neighbours of~$v$ until a \mbox{$1$-labelled} vertex~$w$ is found,
and then continue in this way to trace a path of \mbox{$1$-labelled} vertices
backwards through the DAG until some source~$s$ is reached,
which contradicts the requirement that $s$ should be
labelled~$0$. Formalizing this as an induction proof on the depth
of~$\mathcal G$ shows that if the in-degree is bounded, then
\firstplayer can win the pebble game on $\digraphtoxor(\mathcal G)$ with few pebbles
\ifthenelse{\boolean{conferenceversion}}
{as stated next.}
{as stated in the next lemma.}
\begin{lemma}
\label{lem:UpperBoundPebblingDAG}
Let $\mathcal G$ be a DAG with a unique sink and maximal
in-degree~$d$.
Then
\firstplayer wins the $(d+1)$-pebble game
on~$\digraphtoxor(\mathcal G)$.
\end{lemma}
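To make the strategy sketched above fully explicit, here is a Python rendering (again our own
sketch, not part of the formal development); \texttt{answer} models \secondplayer's replies, and
\texttt{preds} is as in the previous sketch. Note that only the current vertex and its
in-neighbours carry pebbles at any time, so at most $d+1$ pebbles are in play:
\begin{verbatim}
def spoiler_trace(preds, sink, answer):
    # Trace a path of 1-labelled vertices from the sink down to
    # a source; some clause of Xor(G) is falsified on the way.
    v = sink
    if answer(v) != 1:
        return ('sink clause falsified at', v)
    while preds[v]:                      # v is not a source
        ws = preds[v]
        vals = [answer(w) for w in ws]   # pebble the in-neighbours
        if sum(vals) % 2 != answer(v):
            return ('clause falsified at', v)
        v = ws[vals.index(1)]            # odd parity: some w is 1
        # all pebbles except the one on v are picked up again
    return ('source clause falsified at', v)  # source labelled 1
\end{verbatim}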
As a warm-up for the proof of \reflem{lem:pyramids},
let us describe a very weak lower bound from \cite{Immerman.1981}
for the complete binary tree of height~$h$ (with
edges directed from the leaves to the root), which we will
denote~$\mathcal T_h$.
By the lemma above, \firstplayer wins the \mbox{$3$-pebble} game on
$\formulaformat{xor}(\mathcal T_h)$ in $O(h)$~steps by
propagating~$1$ from the root down to some leaf. On the other hand,
\secondplayer has the freedom to decide on which path she answers~$1$.
Hence, she can safely
respond~$0$
for a vertex~$v$ %
as long as there is some leaf with a pebble-free path leading to the
lowest pebble labelled~$1$
without passing~$v$. %
In particular,
if \secondplayer is asked about vertices at least $\ell$~layers below
the lowest pebbled vertex for which the answer~$1$ was given,
then she can answer~$0$ for $2^\ell-1$~queries.
It follows that the height~$h$ provides a
lower bound on the number of rounds \firstplayer needs to win the
game, even if he has an infinite amount of pebbles.
We remark that this proof in terms of pebble-free paths is somewhat reminiscent
of an argument by Cook~\cite{Cook74ObservationTimeStorageTradeOff} for
the so-called black pebble game corresponding to the pebbling formulas
in~\cite{BW01ShortProofs} briefly discussed above.
The downside of this lower bound is that the height is only
logarithmic in the number of vertices and thus too weak for us as we are
shooting for a lower bound of the order of~$n^{1/\log k}$.
To get a better bound for the black pebble game Cook
instead considered so-called pyramid graphs as
in~\reffig{fig:2figsA}. These will not be sufficient to obtain strong
enough lower bounds for our pebble game,
\ifthenelse{\boolean{conferenceversion}}
{however.}
{however.%
\footnote{For readers knowledgeable in pebbling, we comment
that the problem is that the open-path argument
in~\cite{Cook74ObservationTimeStorageTradeOff}
does not work in a DAG-like setting for the XOR pebble game.
To see this, consider a pyramid with a vertex row
$u,v,w$
and a second row
$p,q,r,s$
immediately below such that the edges are
$(p,u),(q,u),(q,v),(r,v),(r,w),(s,w)$.
Then if the values of $u,w$ on the upper row
and
$p,s$ on the lower row are known,
there is still an open path via $(q,v)$ or $(r,v)$, which is enough
for the black pebbling lower bound for pyramids in
\cite{Cook74ObservationTimeStorageTradeOff}.
But in the XOR\xspace pebble game this means that $r$ and~$q$ are already
fixed because of the XOR\xspace constraints, and so there is no ``open
path'' with unconstrained vertices.}}
Instead, following Immerman we consider a kind of high-dimensional
generalization of these graphs, for which the lower bound on the
number of rounds in the $k$-pebble game is still linear in the
height~$h$ while the number of vertices is roughly~$h^{\log k}$.
\begin{definition}[\cite{Immerman.1981}]
\label{def:high-dim-pyramid}
For $d\geq 1$ we define the
\introduceterm{$(d+1)$-dimensional pyramid of height $h$}, denoted
by~$\mathcal P^d_h$, to be the following layered DAG.
We let
$\layernumber$, $0\leq \layernumber\leq h$
be the \emph{layer number} and set
$\pyrquot{d}{\layernumber}
:= \lfloor \layernumber/d\rfloor$ and
$\pyrrem{d}{\layernumber}
:= \layernumber{}\pmod d$. Hence,
for any $\layernumber$ we have
$\layernumber=
\pyrquot{d}{\layernumber} \cdot d +
\pyrrem{d}{\layernumber}$.
For integers $x_i \geq 0 $
the vertex set is
\begin{subequations}
\begin{equation}
\ifthenelse{\boolean{conferenceversion}}
{%
\begin{split}
%
%
%
%
%
%
%
&V \bigl( \mathcal P^d_h \bigr) =
\big\{ (x_0,\ldots, x_{d-1},\layernumber) \mid
\layernumber \leq h ; \\
&\quad x_i \leq \pyrquot{d}{\layernumber} + 1
\text{ if $i \!<\! \pyrrem{d}{\layernumber}$} ; %
x_i \leq \pyrquot{d}{\layernumber}
\text{ if $i \!\geq\! \pyrrem{d}{\layernumber}$}\big\}
\enspace ,
\end{split}
}{
%
V \bigl( \mathcal P^d_h \bigr) =
\Setdescr
{(x_0,\ldots, x_{d-1},\layernumber) \, }
{\,
\layernumber \leq h ; \,
x_i \leq \pyrquot{d}{\layernumber} + 1
\text{ if $i \!<\! \pyrrem{d}{\layernumber}$} ; \,
x_i \leq \pyrquot{d}{\layernumber}
\text{ if $i \!\geq\! \pyrrem{d}{\layernumber}$} }
\enspace ,
}
%
%
%
%
%
%
%
\end{equation}
where we say that $\layernumber$ is the \introduceterm{layer} of the
vertex~$(x_0,\ldots, x_{d-1},\layernumber)$.
The edge set $E \bigl( \mathcal P^d_h \bigr)$ consists of
the pairs of edges
\begin{equation}
\begin{split}
\bigl(
(x_0, \ldots,
x_{\pyrrem{d}{\layernumber}},
\ldots, x_{d-1},
\layernumber+1) ,
(x_0,\ldots,
x_{\pyrrem{d}{\layernumber}},
\ldots, x_{d-1},
\layernumber)
\bigr)
\enspace ,
\\
\bigl(
(x_0,\ldots,
x_{\pyrrem{d}{\layernumber} }+1,
\ldots,
x_{d-1},
\layernumber+1) ,
(x_0,\ldots,
x_{\pyrrem{d}{\layernumber}},
\ldots,
x_{d-1},\layernumber)
\bigr)
\end{split}
\end{equation}
\end{subequations}
for all vertices
$(x_0,\ldots, x_{d-1},\layernumber{})\in V(\mathcal P^d_h)$ and
layers~%
$\layernumber<h$,
so that every vertex in layer~$\layernumber$ has exactly two in-neighbours
from \mbox{layer $\layernumber+1$}.
\end{definition}
\ifthenelse{\boolean{conferenceversion}}
{We refer the reader to
\reftwofigs{fig:2figsA}{fig:2figsB}
for illustrations of
$2$\nobreakdash-dimensional and
$3$\nobreakdash-dimensional pyramids
(where all the edges in the figures are assumed to be directed upwards).
The vertex~$(0,\ldots,0)$ at the top of the pyramid is the unique sink and
all vertices at the bottom layer~$h$
are sources.}
{It might be easier to parse
\refdef{def:high-dim-pyramid}
by noting that the $(k d)$th layer of~$\mathcal P^d_h$
is a $d$\nobreakdash-dimensional cube of side length~$k$.
Intuitively, we then want to have incoming edges to each vertex~$u$ at the
$(k d)$th layer from all vertices~$v$ in the
$d$\nobreakdash-dimensional cube of side length~$k+1$
such that all coordinates
of~$v$ are at distance $0$ or~$+1$ from the coordinates of~$u$.
This would give a fan-in larger than~$2$, however, and to avoid this
we expand in one dimension at a time to obtain a sequence of
multidimensional cuboids where in each consecutive cuboid the side
length increases by one in one dimension,
until $d$~layers later we have a complete cube with side length~$k+1$.
We refer the reader to
\reffig{fig:2figsA}
for an illustration of a
$2$\nobreakdash-dimensional pyramid generated by stacking
$1$\nobreakdash-dimensional cubes on top of one another and to
\reffig{fig:2figsB}
for a
$3$\nobreakdash-dimensional pyramid generated from
$2$\nobreakdash-dimensional cuboids
(where all the edges in the figures are assumed to be directed upwards).
The vertex~$(0,\ldots,0)$ at the top of the pyramid is the unique sink and
all vertices at the bottom layer~$h$
are sources. Observe that it follows from the definition that
$\Setsize{V \bigl( \mathcal P^d_h \bigr) }\leq (h+1)^{d+1}$.}
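For readers who want to experiment with these graphs, the following Python sketch (ours, purely
illustrative) generates $\mathcal P^d_h$ directly from \refdef{def:high-dim-pyramid}:
\begin{verbatim}
from itertools import product

def pyramid(d, h):
    # Vertices (x_0, ..., x_{d-1}, l) and edges of P^d_h; edges
    # are directed from layer l+1 up to layer l as in the text.
    def bound(i, l):             # maximal value of x_i in layer l
        q, r = divmod(l, d)
        return q + 1 if i < r else q
    V = [xs + (l,)
         for l in range(h + 1)
         for xs in product(*[range(bound(i, l) + 1)
                             for i in range(d)])]
    E = []
    for v in V:
        *xs, l = v
        if l == h:               # bottom layer: the sources
            continue
        j = l % d                # the expanding coordinate
        for delta in (0, 1):     # exactly two in-neighbours
            ys = list(xs)
            ys[j] += delta
            E.append((tuple(ys) + (l + 1,), v))
    assert len(V) <= (h + 1) ** (d + 1)
    return V, E
\end{verbatim}
For instance, \texttt{pyramid(1, h)} yields the ordinary two-dimensional pyramid of
\reffig{fig:2figsA}.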
\ifthenelse{\boolean{conferenceversion}}
{%
\makeatletter{}%
\begin{figure}%
\centering
\parbox{1.2in}{ %
%
\resizebox{!}{4cm}{
\begin{tikzpicture}
[vertex/.style={circle,draw=black,fill=black,inner sep=0pt,minimum size=3pt},
diredge/.style={-}] %
%
\foreach \height/\xdist/\ydist in {10/.5/.5} {
%
\foreach \y in {0,...,\height} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
%
%
%
\node[vertex] (v_\x_\y) at (\x*\xdist-\diff*\xdist/2,\y*\ydist) {};
}
}
%
\foreach \y in {1,...,\height} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
\pgfmathtruncatemacro{\nextx}{1+\x};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy) -- (v_\x_\y);
\draw[diredge] (v_\nextx_\prevy) -- (v_\x_\y);
}
}
}%
\end{tikzpicture}
}%
\caption{2D pyramid}%
\label{fig:2figsA}}%
\hspace{1.4cm}
\begin{minipage}{1.2in}%
\resizebox{!}{4cm}{
\begin{tikzpicture}
[vertex/.style={circle,draw=black,fill=black,inner sep=0pt,minimum size=4pt},
%
diredge/.style={-,shorten <=.5pt, shorten >=.5pt}]
%
%
%
\foreach \height/\xdist/\ydist/\zdist in {8/1.8/1.2/.7} {
%
\foreach \y in {0,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
%
%
%
\node[vertex] (v_\x_\y_\z) at (\x*\xdist-\xmax*\xdist/2,\y*\ydist,\z*\zdist-\zmax*\zdist/2) {};
}
}
}
%
\foreach \y in {1,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
\pgfmathtruncatemacro{\Expanded}{mod(\height-\y,2)};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy_\z) -- (v_\x_\y_\z); %
\pgfmathtruncatemacro{\nextx}{\x+1-\Expanded};
\pgfmathtruncatemacro{\nextz}{\z+\Expanded};
\draw[diredge] (v_\nextx_\prevy_\nextz) -- (v_\x_\y_\z); %
}
}
}
}%
\end{tikzpicture}
}%
\caption{3D pyramid}%
\label{fig:2figsB}%
\end{minipage}%
\end{figure}%
%
}
{%
\makeatletter{}%
\begin{figure}[tp]%
\centering
\subfigure[2D pyramid]%
{
\centering
\label{fig:2figsA}%
\begin{minipage}{7.5cm}
\resizebox{!}{7.2cm}{
\begin{tikzpicture}%
[vertex/.style={circle,draw=black,fill=black,%
inner sep=0pt,minimum size=3pt},%
diredge/.style={-}] %
%
\foreach \height/\xdist/\ydist in {10/.5/.5} {
%
\foreach \y in {0,...,\height} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
%
%
%
\node[vertex] (v_\x_\y) at (\x*\xdist-\diff*\xdist/2,\y*\ydist) {};
}
}
%
\foreach \y in {1,...,\height} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
\pgfmathtruncatemacro{\nextx}{1+\x};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy) -- (v_\x_\y);
\draw[diredge] (v_\nextx_\prevy) -- (v_\x_\y);
}
}
}%
\end{tikzpicture}
}%
%
\vspace{3mm}
\end{minipage}
}
\hfill
\subfigure[3D pyramid]
{
\centering
\label{fig:2figsB}%
\begin{minipage}{7.5cm}
\hspace{.8cm}
\resizebox{!}{7.2cm}{
\begin{tikzpicture}
[vertex/.style={circle,draw=black,fill=black,%
inner sep=0pt,minimum size=4pt},%
diredge/.style={-,shorten <=.5pt, shorten >=.5pt}]
%
\foreach \height/\xdist/\ydist/\zdist in {8/1.8/1.2/.7} {
%
\foreach \y in {0,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
%
%
%
\node[vertex] (v_\x_\y_\z) at (\x*\xdist-\xmax*\xdist/2,\y*\ydist,\z*\zdist-\zmax*\zdist/2) {};
}
}
}
%
\foreach \y in {1,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
\pgfmathtruncatemacro{\Expanded}{mod(\height-\y,2)};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy_\z) -- (v_\x_\y_\z); %
\pgfmathtruncatemacro{\nextx}{\x+1-\Expanded};
\pgfmathtruncatemacro{\nextz}{\z+\Expanded};
\draw[diredge] (v_\nextx_\prevy_\nextz) -- (v_\x_\y_\z); %
}
}
}
}%
\end{tikzpicture}
}%
\vspace{3mm}
\end{minipage}%
}
\caption{Examples of high-dimensional pyramids.}
\label{fig:high-dim-pyramids}
\end{figure}%
%
}
As high-dimensional pyramids have in-degree~$2$,
\reflem{lem:UpperBoundPebblingDAG} implies that \firstplayer wins the
\mbox{$3$-pebble} game on~$\mathcal P^d_h$.
Recall that, as discussed in the proof sketch of the lemma,
\firstplayer starts his winning strategy in the \mbox{$3$-pebble} game
by pebbling the sink of the pyramid and its two in-neighbours.
One of them has to be labelled~$1$. Then he picks up the two other
pebbles and pebbles the two in-neighbours of the vertex marked
with~$1$ and so on. Continuing this strategy, he is able to ``move''
the~$1$ all the way to the bottom, reaching a contradiction, in a
number of rounds that is linear in the height of the pyramid. This
strategy turns out to be nearly optimal in the following sense: in order to
move a $1$ from the top to the bottom of $\mathcal P^d_h$,
as long as the total number of available pebbles is at
most~$2^d$,
it never makes sense for \firstplayer at any point in the game to pebble
a vertex that is $d$ or more levels below the lowest
level containing a pebble.
The next lemma states a key property of pyramids in this regard.
In order to state it, we need to make a definition.
\begin{definition}
\label{def:consistent-labelling}
We refer to a partial assignment $\mathcal{M}$ of Boolean values to
the vertices of a DAG~$\mathcal G$ as a \introduceterm{labelling} or
\introduceterm{marking} of~$\mathcal G$.
We say that $\mathcal{M}$ is
\introduceterm{consistent} if no clause of
type~\ref{xor-clause-non-source}
or~\ref{xor-clause-sink}
in the XOR formula~$\digraphxorarg{\mathcal G}$ in
\refdef{def:xor-formula}
is falsified by~$\mathcal{M}$.
\ifthenelse{\boolean{conferenceversion}}
{}
{We also say that $\mathcal{M}'$ is
\introduceterm{consistent with~$\mathcal{M}$}
if
$\mathcal{M} \cup \mathcal{M}'$
is a consistent labelling of~$\mathcal G$.}
\end{definition}
That is, a consistent labelling does not violate any constraint on any
non-source vertex, but
\ifthenelse{\boolean{conferenceversion}}
{source}
{source vertex}
constraints~\ref{xor-clause-source}
may be falsified.
Such labellings are easy to find for high-dimensional pyramids.
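Checking consistency is a purely mechanical matter, as the following Python sketch shows (our
own illustration; \texttt{labelling} is a partial map from vertices to $\{0,1\}$ and
\texttt{preds} is as in the earlier sketches):
\begin{verbatim}
def is_consistent(labelling, preds, sink):
    # A partial assignment falsifies an XOR clause only if all of
    # the clause's variables are assigned and the parity is wrong;
    # source clauses of type (a) are deliberately not checked.
    if labelling.get(sink) == 0:            # clause of type (c)
        return False
    for v, ws in preds.items():             # clauses of type (b)
        if ws and v in labelling and all(w in labelling for w in ws):
            if labelling[v] != sum(labelling[w] for w in ws) % 2:
                return False
    return True
\end{verbatim}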
\begin{lemma}%
[\cite{Immerman.1981}]
\label{lem:ImmermansMainTechnicalLemma}
Let $\mathcal{M}$ be any consistent labelling of all vertices in a
pyramid~$\mathcal P^d_h$ from layer~$0$ to
layer~$\layernumber$.
Then for every set $S$ of $2^d{}-1$ vertices on or below
layer $\layernumber+d$ there is a consistent labelling of
%
the
entire pyramid that extends $\mathcal{M}$ and labels all vertices in
$S$ with~$0$.
\end{lemma}
To get some intuition for why \reflem{lem:ImmermansMainTechnicalLemma}
holds, note that
the $d$-dimensional pyramids are constructed in such a way
that they locally look like binary trees.
In particular, every vertex $v\in V(\mathcal P^d_h)$ together
with all its predecessors at distance at most~$d$ form a
complete binary tree.
By the same argument as for the
binary trees above, it follows that
if $v$ is labelled with~$1$, \secondplayer can safely answer~$0$
up to $2^d-1$ times when asked about vertices
$d$~layers below~$v$.
\ifthenelse{\boolean{conferenceversion}}
{However, the full proof
of \reflem{lem:ImmermansMainTechnicalLemma}
is more challenging and requires some quite
subtle reasoning. We refer the reader to~\cite{Immerman.1981}
(or the upcoming full-length version of this paper) for the details.}
{However, the full proof
of \reflem{lem:ImmermansMainTechnicalLemma}
is more challenging and requires some quite
subtle reasoning. For the convenience of the reader we now present a
slightly modified version of the proof in~\cite{Immerman.1981}
with notation and terminology adapted to this paper.
%
%
%
%
\makeatletter{}%
\newcommand{\veczerodminusone}[1]{\vec{#1}}
\newcommand{\dimValLayerWedge}[3]{\ensuremath{\mathcal W({#1},{#2},{#3})}}
\newcommand{\dimValLayerLeftSlice}[3]%
{\ensuremath{\mathcal{S}_{\mathsf L}({#1},{#2},{#3})}}
\newcommand{\dimValLayerRightSlice}[3]%
{\ensuremath{\mathcal{S}_{\mathsf R}({#1},{#2},{#3})}}
\newcommand{\dimindex}{j}
\newcommand{\vala}{a}
\newcommand{\bluecross}{\tikz{\node[cross out,draw=blue,ultra thick,%
inner sep=0pt,minimum size=5 pt] at (0,0){ };}}
\DeclareRobustCommand{\bluecrossrobust}{\tikz{\node[cross out,draw=blue,ultra thick,%
inner sep=0pt,minimum size=5 pt] at (0,0){ };}}
\DeclareRobustCommand{\whitecirclerobust}{
\mbox{\tikz{\node[circle,draw=black,fill=white,inner
sep=0pt,minimum size=5pt] at (0,0) {};}}}
To formalize the intuitive argument above, we need some additional
notation and technical definitions.
Let us use the shorthand
$
\veczerodminusone{x}
= (x_0, \ldots, x_{d-1})
$.
For a pyramid~$\mathcal P^d_h$, a coordinate
\mbox{$j\in\set{0,\ldots,d-1}$},
and a layer~$L$,
we let
\begin{equation}
\label{eq:maxval-1}
\mathit{slen}(j,L)
:=
\max
\Setdescr{x_j}{(\veczerodminusone{x}, L)
\in
V\bigl( \mathcal P^d_h \bigr)}
\end{equation}
be the side length in the $j$th dimension of the cuboid in
layer~$L$, i.e.,
the maximal value that can be achieved in the
$j$th~coordinate in layer~$L$,
and for $\layernumber' \geq L$ we write
\begin{equation}
\label{eq:maxval-2}
\Delta_\mathit{slen}(j,L,\layernumber')
:=
\mathit{slen}(j,\layernumber') -
\mathit{slen}(j,L)
\end{equation}
to denote how much the cuboids in
$\mathcal P^d_h$
grow in the \mbox{$j$th dimension}
between layers~$\layernumber$ and~$\layernumber'$.
We define
the \introduceterm{frustum\xspace}~$\mathcal P^d_{L,h}$
to be the subgraph of~$\mathcal P^d_h$
induced on the set
\mbox{$
\Setdescr
{\bigl( \veczerodminusone{x},\layernumber' \bigr)}
{\layernumber' \geq L}
$}
of all vertices on layer~$L$ and below.
We say that
the \introduceterm{wedge}
\introduceterm{\dimValLayerWedge{\dimindex}{\vala}{\layernumber}} is the
subgraph of~$\mathcal P^d_h$ induced on the vertices
$(x_0, \dots, x_{\dimindex-1}, \vala, x_{\dimindex+1}, \ldots,
x_{d-1}, \layernumber)$
with fixed $\dimindex$th coordinate $x_{\dimindex} = \vala$
together with all predecessors of these vertices.
That is, the vertex set of
$\dimValLayerWedge{\dimindex}{\vala}{\layernumber}$
is
\begin{equation}
\label{eq:wedge-vertex-set}
V \bigl( \dimValLayerWedge{\dimindex}{\vala}{\layernumber} \bigr) =
\Setdescr{
\bigl(
\veczerodminusone{x},
\layernumber' \bigr)
\in V\bigl( \mathcal P^d_h \bigr)
}
{
\layernumber'\geq \layernumber, \vala
\leq x_{\dimindex} \leq \vala +
\Delta_\mathit{slen}(\dimindex,\layernumber,\layernumber')
}
\enspace .
\end{equation}
An important part in our proof will be played by subgraphs obtained by
deleting wedges from frustums\xspace. We define these subgraphs next.
Fix a frustum\xspace $\mathcal P^d_{L,h}$, two
disjoint subsets of coordinates $I_{\text{\upshape lo}},I_{\text{\upshape hi}} \subseteq
\{0,\ldots,d-1\}$, and a mapping
\mbox{$\alpha \colon I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}\to \Nzero$}
such that
$\alpha(j)\leq
\mathit{slen}(j,L)$.
We let the \introduceterm{restricted frustum\xspace}
$\mathcal P^d_{L,h}%
[I_{\text{\upshape lo}},I_{\text{\upshape hi}},\alpha]$
be the subgraph of the frustum\xspace~$\mathcal P^d_{L,h}$
induced on the vertex set
\begin{equation}
\label{eq:restricted-frustum-vertex-set}
\Setdescr{
\bigl( \veczerodminusone{x},\layernumber' \bigr)
\in
V(\mathcal P^d_{L,h})
}
{\,
\forall
j\!\in\!I_{\text{\upshape lo}}: x_j>
\alpha(j) +
\Delta_\mathit{slen}(j,L,\layernumber')
;
\,
\forall j\!\in\!I_{\text{\upshape hi}}: x_j <
\alpha(j)}
\end{equation}
where no coordinates in
$I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$ are
expanded, i.e.,
the cuboids will not grow in size in dimensions
$I_{\text{\upshape lo}} \cup I_{\text{\upshape hi}}$ as we move down the layers.
For dimensions in $I_{\text{\upshape hi}}$ the coordinate set stays the same, and
for dimensions in $I_{\text{\upshape lo}}$ the coordinate set shifts by an
additive~$+1$ every time the pyramid graph grows in this direction.
We say that a layered directed graph is a
\introduceterm{$(d,q)$-frustum\xspace{}}
if it is a
restricted frustum\xspace $\mathcal P^d_{L,h}%
[I_{\text{\upshape lo}},I_{\text{\upshape hi}},\alpha]$ where
$q$ coordinates are restricted, i.e.,
$\setsize{I_{\text{\upshape lo}}\overset{.}{\cup}I_{\text{\upshape hi}}} = q$.
To see how restricted frustums\xspace are obtained by deleting wedges from
frustums\xspace, note that after removing the wedge
$\dimValLayerWedge{j}{\vala}{L}$
from the frustum\xspace
$\mathcal P^d_{L,h}$,
the remaining graph is the disjoint union of
the restricted frustums\xspace
$\mathcal P^d_{L,h}%
[\set{j},\emptyset,\set{j\mapsto\vala}]$
and
$\mathcal P^d_{L,h}%
[\emptyset,\set{j},\set{j\mapsto\vala}]$.
\reffig{fig:pyramidWedgeFrustum}
shows a 3D pyramid with a
\mbox{wedge \tikz{\node[circle,draw=black,fill=white,inner
sep=0pt,minimum size=5pt] at (0,0) {};}} and a restricted
frustum~\bluecross.
\makeatletter{}%
\begin{figure}[t]%
\hspace{-1.5cm}
\resizebox{14cm}{!}{\centering%
\makeatletter{}%
\newcommand{\drawthreedimpyramid}[5]{
\tdplotsetmaincoords{180}{180}
\tdplotsetrotatedcoords{0}{20}{20}
\begin{tikzpicture}[rotate=20,tdplot_main_coords,
layerlabel/.style={tdplot_rotated_coords},
vertex/.style={tdplot_rotated_coords,circle,draw=black,fill=black,inner sep=0pt,minimum size=#5 pt},
wedgevertex/.style={tdplot_rotated_coords,circle,draw=black,fill=white,inner sep=0pt,minimum size=#5 pt},
slicevertex/.style={tdplot_rotated_coords,cross out,draw=blue,ultra thick,inner sep=0pt,minimum size=#5 pt},
frustumvertex/.style={tdplot_rotated_coords,cross out,draw=blue,ultra thick,inner sep=0pt,minimum size=#5 pt},
%
diredge/.style={-,shorten <=.5pt, shorten >=.5pt,thin}]
%
%
%
\foreach \height/\xdist/\ydist/\zdist in {#1/#2/#3/#4} {
\node[layerlabel] at (-6.5,0.2+11*\ydist,0) {\Large layer};
\node[layerlabel] at (-6.5,0.2+10*\ydist,0) {\Large $0$};
\node[layerlabel] at (-6.5,0.2+9*\ydist,0) {\Large $1$};
\node[layerlabel] at (-6.5,0.2+7.5*\ydist,0) {\Large $\cdots$};
\node[layerlabel] at (-6.5,0.2+6*\ydist,0) {\Large $L$};
\node[layerlabel] at (-6.5,0.2+5*\ydist,0) {\Large $L+1$};
%
\node[layerlabel] at (-6.5,0.2+4*\ydist,0) {\Large $L+d$};
\node[layerlabel] at (-6.5,0.2+3*\ydist,0) {\Large $L+d+1$};
\node[layerlabel] at (-6.5,0.2+1.5*\ydist,0) {\Large $\cdots$};
\node[layerlabel] at (-6.5,0.2+0,0) {\Large $h$};
%
\newboolean{wedge}
\newboolean{slice}
\newboolean{frustum}
\foreach \y in {0,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
%
%
%
\setboolean{wedge}{false}
\ifthenelse{\y=5 \and \x=1}{\setboolean{wedge}{true}}{}
\ifthenelse{\y=4 \and \x=1}{\setboolean{wedge}{true}}{}
\ifthenelse{\y=3 \and \x>0 \and \x<3}{\setboolean{wedge}{true}}{}
\ifthenelse{\y=2 \and \x>0 \and \x<3}{\setboolean{wedge}{true}}{}
\ifthenelse{\y=1 \and \x>0 \and \x<4}{\setboolean{wedge}{true}}{}
\ifthenelse{\y=0 \and \x>0 \and \x<4}{\setboolean{wedge}{true}}{}
\setboolean{slice}{false}
\ifthenelse{\y=5 \and \x=2}{\setboolean{slice}{true}}{}
\ifthenelse{\y=4 \and \x=2}{\setboolean{slice}{true}}{}
\ifthenelse{\y=3 \and \x=3}{\setboolean{slice}{true}}{}
\ifthenelse{\y=2 \and \x=3}{\setboolean{slice}{true}}{}
\ifthenelse{\y=1 \and \x=4}{\setboolean{slice}{true}}{}
\ifthenelse{\y=0 \and \x=4}{\setboolean{slice}{true}}{}
\setboolean{frustum}{false}
\ifthenelse{\y=5 \and \x=2}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=4 \and \x=2}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=3 \and \x=3}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=2 \and \x=3}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=1 \and \x=4}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=0 \and \x=4}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=5 \and \x=3}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=4 \and \x=3}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=3 \and \x=4}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=2 \and \x=4}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=1 \and \x=5}{\setboolean{frustum}{true}}{}
\ifthenelse{\y=0 \and \x=5}{\setboolean{frustum}{true}}{}
\ifthenelse{\boolean{wedge}}
{
\node[wedgevertex] (v_\x_\y_\z) at (\x*\xdist-\xmax*\xdist/2,\y*\ydist,\z*\zdist-\zmax*\zdist/2) { };
}
{
\ifthenelse{\boolean{frustum}}{
\node[frustumvertex] (v_\x_\y_\z) at (\x*\xdist-\xmax*\xdist/2,\y*\ydist,\z*\zdist-\zmax*\zdist/2) { };
}
{
\node[vertex] (v_\x_\y_\z) at (\x*\xdist-\xmax*\xdist/2,\y*\ydist,\z*\zdist-\zmax*\zdist/2) { };
%
}
}
}
}
}
%
\foreach \y in {1,...,\height} {
\pgfmathtruncatemacro{\xmax}{(1+\height-\y)/2};
\pgfmathtruncatemacro{\zmax}{(\height-\y)/2};
\foreach \x in {0,...,\xmax} {
\foreach \z in {0,...,\zmax} {
%
%
\pgfmathtruncatemacro{\Expanded}{mod(\height-\y,2)};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy_\z) -- (v_\x_\y_\z); %
\pgfmathtruncatemacro{\nextx}{\x+1-\Expanded};
\pgfmathtruncatemacro{\nextz}{\z+\Expanded};
\draw[diredge] (v_\nextx_\prevy_\nextz) -- (v_\x_\y_\z); %
}
}
}
}%
\end{tikzpicture}
}
\drawthreedimpyramid{10}{1.8}{1.4}{0.7}{5} %
}
%
%
%
%
%
%
\caption{Pyramid with
wedge \dimValLayerWedge{0}{2}{L+1}
\whitecirclerobust{}
and
restricted frustum
$\mathcal P^2_{L+1,10}[\emptyset,\set{0},\set{0\mapsto 2}]$
\bluecrossrobust{}.
}%
\label{fig:pyramidWedgeFrustum}
\end{figure}
%
We prove \reflem{lem:ImmermansMainTechnicalLemma} by inductively
cutting the pyramid into a wedge and restricted frustums\xspace to the left and
right of this wedge. It will be convenient to focus on
$(d,q)$-frustums\xspace
which grow in the dimensions corresponding to the topmost
\mbox{$d - q$ layers}
(as the one in
\reffig{fig:pyramidWedgeFrustum}).
More formally, we say that a
$(d,q)$-frustum\xspace
$\mathcal P^d_{L,h}%
[I_{\text{\upshape lo}},I_{\text{\upshape hi}},\alpha]$
is \introduceterm{top-expanding\xspace} if
\begin{equation}
\label{eq:top-expanding-condition}
\setdescr{(L + j) \bmod d}{0\leq j
\leq d - 1 - q}\cap
(I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}})
=
\emptyset
\enspace .
\end{equation}
As we have done for digraphs with unique sinks in
\refdef{def:xor-formula}, we identify with each (restricted) frustum
$\mathcal P$ the corresponding XOR formula $\digraphxorarg{\mathcal P}$
containing all clauses given in
\refdef{def:xor-formula}\ref{xor-clause-source}
and~\ref{xor-clause-non-source}.
We do not include hard-coded labels on the sinks at the top layer
as in~\ref{xor-clause-sink},
but instead we will always provide a labelling of that layer.%
\footnote{For readers more familiar with proof complexity language,
our subgraphs correspond to formulas obtained by applying
restrictions to
$\Digraphxorarg{\mathcal P^d_{h}}$.}
We now state our inductive claim.
\Reflem{lem:ImmermansMainTechnicalLemma} follows immediately once this
claim has been established, as the subgraph of a pyramid
$\mathcal P^d_{h}$ on or below layer~$L$ is
an (unrestricted) top-expanding\xspace
\mbox{$(d,0)$-frustum\xspace $\mathcal P^d_{L,h}$}
and, in particular,
$\Digraphxorarg{\mathcal P^d_{h}}$ with all vertices on
or above layer~$L$ consistently labelled is equivalent to
$\Digraphxorarg{\mathcal P^d_{L,h}}$ with
the same labelling of layer~$L$.
\begin{claim}
\label{claim:inductiveclaimpyramids}
Let $\mathcal P$ be a top-expanding\xspace
$(d,q)$-frustum\xspace and
$\mathcal{M}_L$
be a labelling of its top layer $L$.
Then for every set $S$ of
$2^{d-q}-1$ vertices on or below
layer $L+d-q$ there is a
consistent labelling
of all vertices in~$\mathcal P$
that extends
$\mathcal{M}_L$ and labels
every vertex in
$S$ with~$0$.
\end{claim}
The following proposition summarizes the core properties of frustums
that we will use when establishing
Claim~\ref{claim:inductiveclaimpyramids}.
\begin{proposition}
\label{propo:frustums}
Let
$\mathcal P=\mathcal P^d_{L,h}%
[I_{\text{\upshape lo}},I_{\text{\upshape hi}},\alpha]$
be a restricted frustum\xspace
with a labelling~$\mathcal{M}_L$ of all vertices in the
top layer~$L$.
Then the following holds:
\begin{enumerate}[label=(\alph*)]
\item
\label{item:propo_frustums-a}
There is a labelling~$\mathcal{M}_{L+1}$
of all vertices in layer $L+1$
of~$\mathcal P$
that is consistent
with~$\mathcal{M}_{L}$.
\item
\label{item:propo_frustums-b}
%
%
%
%
%
%
%
%
Let $\mathcal{M}_{L+1}$ be any labelling of
layer~$L+1$
in~$\mathcal P$
that is consistent with
$\mathcal{M}_{L}$ and suppose that $\mathcal P$ expands
coordinate
$j
$
from layer $L$
to layer $L+1$, i.e.,
$j = \pyrrem{d}{L}$
and
$j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$.
Then
for any vertex
$(\veczerodminusone{y}, L + 1)$ in~$\mathcal P$
it holds that
the labelling
$\mathcal{M}^{\veczerodminusone{y}}_{L+1}$
defined by
\begin{equation*}
\mathcal{M}^{\veczerodminusone{y}}_{L+1}%
(\veczerodminusone{x},L+1)
:=
\begin{cases}
1 - \mathcal{M}_{L+1}(\veczerodminusone{x},L+1)
&
\text{if $x_i = y_i$ for all $i \neq j$,}
\\
\mathcal{M}_{L+1}(\veczerodminusone{x}, L+1)
&
\text{otherwise}
\end{cases}
\end{equation*}
is also consistent with~$\mathcal{M}_{L}$.
\end{enumerate}
\end{proposition}
\begin{proof}
We first note that the set of
XOR\xspace
constraints between two layers
$L$ and $L+1$
can be partitioned into
several connected
components.
Each of the components forms a ``one-dimensional line'' in the
direction of the expanding coordinate
$j = \pyrrem{d}{L}$.
More formally, two vertices from layers $L$
and~$L+1$
are on the same line if and only if they agree on
the coordinates $x_i$ for all $i\neqj$.
These lines form the connected components of the graph induced on
layers $L$ and~$L+1$.
All such lines between layers $L$ and~$L+1$ are
isomorphic, and their shape depends on whether
$j\in I_{\text{\upshape lo}}$, $j\in I_{\text{\upshape hi}}$, or
$j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$.
See \reffig{fig:between_layers} for an illustration.
We remark that in
\reftwofigs{fig:between_layersB}{fig:between_layersC} the vertex
with in-degree $1$ and its predecessor form a binary XOR clause in
$\digraphxorarg{\mathcal P}$.
%
\makeatletter{}%
\begin{figure}
\centering
\subfigure[$j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$]%
{
\label{fig:between_layersA}
%
{%
\begin{tikzpicture}%
[vertex/.style={circle,draw=black,fill=black,%
inner sep=0pt,minimum size=3pt},%
%
diredge/.style={->,shorten <=.5pt, shorten >=.5pt}]
%
%
\foreach \height/\xdist/\ydist in {8/.5/.5} {
%
\foreach \y in {0,1} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
%
%
%
\node[vertex] (v_\x_\y) at (\x*\xdist-\diff*\xdist/2,\y*\ydist) {};
}
}
%
\foreach \y in {1} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\diff} {
\pgfmathtruncatemacro{\nextx}{1+\x};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy) -- (v_\x_\y);
\draw[diredge] (v_\nextx_\prevy) -- (v_\x_\y);
}
}
}%
\end{tikzpicture}
}%
%
\vspace{1mm}
}
%
\hfill
\subfigure[$j\in I_{\text{\upshape hi}}$]%
{
\label{fig:between_layersB}
%
{%
\begin{tikzpicture}%
[vertex/.style={circle,draw=black,fill=black,%
inner sep=0pt,minimum size=3pt},%
%
diredge/.style={->,shorten <=.5pt, shorten >=.5pt}]
%
%
\foreach \height/\xdist/\ydist in {8/.5/.5} {
\pgfmathtruncatemacro{\heightminusone}{\height-1};
\pgfmathtruncatemacro{\heightminustwo}{\height-2};
%
\foreach \y in {0,1} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\heightminusone} {
%
%
%
\node[vertex] (v_\x_\y) at (\x*\xdist-\diff*\xdist/2,\y*\ydist) {};
}
}
%
\foreach \y in {1} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {0,...,\heightminustwo} {
\pgfmathtruncatemacro{\nextx}{1+\x};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy) -- (v_\x_\y);
\draw[diredge] (v_\nextx_\prevy) -- (v_\x_\y);
}
\draw[diredge] (v_\heightminusone_0) -- (v_\heightminusone_1);
}
}%
\end{tikzpicture}
}%
%
}
%
\hfill
\subfigure[$j\in I_{\text{\upshape lo}}$]%
{
\label{fig:between_layersC}
%
{%
\begin{tikzpicture}%
[vertex/.style={circle,draw=black,fill=black,%
inner sep=0pt,minimum size=3pt},%
novertex/.style={circle,draw=white,fill=white,%
inner sep=0pt,minimum size=3pt},%
%
diredge/.style={->,shorten <=.5pt, shorten >=.5pt}]
%
%
\foreach \height/\xdist/\ydist in {8/.5/.5} {
\pgfmathtruncatemacro{\heightminusone}{\height-1};
\pgfmathtruncatemacro{\heightminustwo}{\height-2};
%
\foreach \y in {0,1} {
\pgfmathsetmacro{\diff}{\height-\y};
\pgfmathtruncatemacro{\start}{1-\y};
\foreach \x in {\start,...,\diff} {
%
%
%
\node[vertex] (v_\x_\y) at (\x*\xdist-\diff*\xdist/2,\y*\ydist) {};
}
}
%
\foreach \y in {1} {
\pgfmathsetmacro{\diff}{\height-\y};
\foreach \x in {1,...,\diff} {
\pgfmathtruncatemacro{\nextx}{1+\x};
\pgfmathtruncatemacro{\prevy}{\y-1};
\draw[diredge] (v_\x_\prevy) -- (v_\x_\y);
\draw[diredge] (v_\nextx_\prevy) -- (v_\x_\y);
}
\draw[diredge] (v_1_0) -- (v_0_1);
}
%
\node[novertex] (v_0_0) at (0*\xdist-\height*\xdist/2,0*\ydist) {};
}%
\end{tikzpicture}
}%
%
}
\caption{%
%
Shapes of the
connected components between layers $L$ and
$L+1$,
depending on the status of dimension~$j$.
}
\label{fig:between_layers}
\end{figure}
%
If the layer is not expanding (as depicted in
\reftwofigs{fig:between_layersB}{fig:between_layersC})
and the upper layer~$L$
is entirely labelled, then it is not hard to see that there is a
unique consistent labelling of the lower-level vertices of each line
(determined by propagating values from right to left in
\reffig{fig:between_layersB}
and from left to right in
\reffig{fig:between_layersC}).
As all lines are disjoint this gives a unique labelling of the
entire layer $L+1$ that is consistent with the labelling
of layer~$L$.
If the layer expands (i.e., if
$j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$, as illustrated in
\reffig{fig:between_layersA}), then we have more freedom.
Indeed, if we label either the rightmost or the leftmost vertex at
the bottom layer with 0, then we have the same situation as in
\reftwofigs{fig:between_layersB}{fig:between_layersC},
respectively.
This concludes the proof of item~\ref{item:propo_frustums-a}
in the proposition.
For item~\ref{item:propo_frustums-b},
first observe that the condition
$j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$
means that we are in the case depicted in
\reffig{fig:between_layersA}.
This means that if
we have a consistent labelling of
the upper and lower part of a line,
then flipping all values
at the lower level
yields another consistent labelling.
This is
so since
every XOR clause contains exactly two
vertices from the
lower part.
Hence, flipping both of these vertices does not change the parity of the
variables in the XOR clause and thus leaves the clause satisfied.
As all
lines between layers~$L$ \mbox{and~$L + 1$} are
disconnected from each other,
flipping all values in one line gives another consistent labelling
for the whole layer $L+1$, which is precisely
what is claimed in item~\ref{item:propo_frustums-b}. The proposition
follows.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim:inductiveclaimpyramids}]
The proof is by induction over
decreasing values of $q$,
the base case being
$q = d$.
As
$\setsize{S} = 2^{d - q} - 1 = 0$
if
$q = d$,
in this case we only have to ensure that there is a
consistent labelling of the entire frustum\xspace that is consistent with
the labelling of the top layer.
This follows from inductively applying
\refpr{propo:frustums}\ref{item:propo_frustums-a}
layer by layer.
For the
inductive step,
assume that the claim
holds
for all
top-expanding\xspace $(d,q+1)$-frustums\xspace.
We want to prove it for a top-expanding\xspace
$(d,q)$-frustum\xspace
$\mathcal P=\mathcal P^d_{L,h}%
[I_{\text{\upshape lo}},I_{\text{\upshape hi}},\alpha]$.
Let
$j
= \pyrrem{d}{L}
= L
\bmod %
d$
be the
dimension
that
expands from layer $L$ to $L+1$.
As $\mathcal P$ is top-expanding\xspace and $q<d$
we have $j\notin I_{\text{\upshape lo}}\cup I_{\text{\upshape hi}}$.
For some well-chosen
$\vala \in [0, \mathit{slen}(j,L+1)]$
to be specified shortly, we
partition
$\mathcal P$ on and below layer $L+1$ into the
wedge
\begin{subequations}
\begin{equation}
\label{eq:split-wedge}
\dimValLayerWedge{j}{\vala}{L+1}
\end{equation}
and two disjoint $(d,q+1)$-frustums\xspace: the ``right'' frustum
\begin{equation}
\label{eq:split-frustum-1}
\mathcal P^d_{L+1,h}
[I_{\text{\upshape lo}}\cup\set{j},I_{\text{\upshape hi}},
\alpha\cup\set{j \mapsto\vala}]
\end{equation}
and the ``left'' frustum (depicted by \bluecross{} in Figure~\ref{fig:pyramidWedgeFrustum})
\begin{equation}
\label{eq:split-frustum-2}
\mathcal P^d_{L+1,h}
[I_{\text{\upshape lo}},I_{\text{\upshape hi}}\cup\set{j},
\alpha\cup\set{j\mapsto \vala}]
\enspace .
\end{equation}
\end{subequations}
We choose the position $\vala$ of the wedge so that both
$(d,q+1)$-frustums\xspace
in~\refeq{eq:split-frustum-1}
and~\refeq{eq:split-frustum-2}
contain at most
$(\setsize{S}-1) / 2 \leq 2^{d-(q+1)}-1$
vertices from
$S$
%
%
(which implies
that the wedge~\refeq{eq:split-wedge}
contains at least one vertex from $S$).
%
To be more specific,
we choose the largest $\vala\geq 0$ such that
the
left frustum~\refeq{eq:split-frustum-2} contains a set
$S_\vala \subseteq S$ of at most
$(\setsize{S}-1) / 2$ vertices from~$S$.
Such an $\vala$ exists as $S_0=\emptyset$.
If $\vala$ reaches the maximum $\mathit{slen}(j,L+1)$,
then the right frustum~\refeq{eq:split-frustum-1} is empty and clearly contains
no vertices from $S$.
Otherwise, let $S_{\vala+1}$ be the set of vertices from $S$
left of the wedge at position $\vala+1$.
By the choice of $\vala$ we have
$\setsize{S_{\vala+1}}>(\setsize{S}-1) / 2$.
Furthermore, because all vertices in $S$ are on or below layer
$L+1$, it follows that $S_{\vala+1}\setminus
S_\vala$ is contained in the wedge~\refeq{eq:split-wedge} at
position $\vala$.
Hence the right frustum~\refeq{eq:split-frustum-1} contains at most
$\setsize{S\setminus S_{\vala+1}} \leq (\setsize{S}-1) /
2$ vertices from~$S$.
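The choice of $\vala$ is a simple one-dimensional sweep, which the following Python fragment
makes explicit (our own illustration; it abstracts the vertices of $S$ to their
$x_j$-coordinates and ignores the layer-dependent shifts of the actual frustums\xspace):
\begin{verbatim}
def choose_wedge_position(S_coords, max_pos):
    # Pick the largest position a such that at most (|S|-1)/2
    # elements of S lie strictly to the left of the wedge at a.
    budget = (len(S_coords) - 1) // 2
    best = 0                   # a = 0 works: nothing to the left
    for a in range(max_pos + 1):
        if sum(1 for x in S_coords if x < a) <= budget:
            best = a
    return best
\end{verbatim}
By maximality, moving the wedge to position $\vala+1$ would put more than
$(\setsize{S}-1)/2$ vertices to its left, which is exactly what the counting argument above
exploits.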
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
Now we proceed as follows.
First we use
\refpr{propo:frustums}\ref{item:propo_frustums-a}
to obtain any consistent labelling of all vertices in
layer~$L+1$.
Consider the set of all vertices
$v = (\veczerodminusone{x},L+1)$ in layer
$L+1$ with $x_j=\vala$,
i.e., the topmost vertices in the
wedge~\refeq{eq:split-wedge}.
Note that this set of vertices forms a hyperplane through, and
perpendicular to, the disconnected parallel lines discussed in
\refpr{propo:frustums}.
We go over these vertices~$v$ one by one and
flip every \mbox{$1$-labelled}~$v$ to~$0$.
As $j$ is the expanding coordinate from layer $L$
\mbox{to $L+1$},
after every such flip we can apply
\refpr{propo:frustums}\ref{item:propo_frustums-b}
to relabel the rest of the line through~$v$ as needed.
In this way, all vertices in the top layer of the
wedge~\refeq{eq:split-wedge} get labelled~$0$,
and we also label all other vertices in the wedge
with~$0$.
It follows from repeated application of
\refpr{propo:frustums}\ref{item:propo_frustums-b}
that the end result is a labelling of \mbox{layer~$L+1$} that is
consistent with~$\mathcal{M}_L$.
Note that this labels every vertex from $S$ within the wedge
with $0$ and moreover, layer
$L+1$ contains no vertices from $S$ outside of the
wedge.
This is because if some vertex from $S$ is on layer
$L+1$, then we have by the assumption in
Claim~\ref{claim:inductiveclaimpyramids} that $\setsize{S}=1$,
and hence this single vertex is guaranteed to be in the wedge
by the choice of $\vala$.
In this way we obtain a
labelling for the top layer of both
$(d,q+1)$-frustums\xspace,
and
we then apply induction to consistently label all vertices in both
frustums in such a way that
every vertex in $S$ is set to $0$.
Now we argue that the disjoint union of the all-zero
labelling of the wedge and the consistent labellings
%
%
of both
frustums is a consistent labelling of $\mathcal P$.
Clearly, every (non-source) clause in $\digraphxorarg{\mathcal P}$
that is entirely contained in the wedge is satisfied by the
all-zero labelling of the wedge.
In the same way, every clause that is entirely contained in one of
the two
frustums is satisfied by their consistent labellings.
It remains to consider clauses that contain variables from the wedge
as well as from one of the frustums.
By construction, those clauses have the form $(v,w_1,w_2,0)$ for a
vertex $v$ with in-neighbours $w_1$, $w_2$, where the vertex $v$ and
one of its in-neighbours (say $w_1$) are within the frustum and the
other in-neighbour is inside the wedge.
As the edge $(w_1,v)$ inside the frustum forms the binary clause
$(v,w_1,0)$ it follows that the consistent labelling of the frustum
guarantees that the parity of $v$ and $w_1$ is even.
Because $w_2$ is labelled $0$ in the wedge, the merged labelling
satisfies $(v,w_1,w_2,0)$.
The claim follows.
\end{proof}
As noted above, our proof of
Claim~\ref{claim:inductiveclaimpyramids}
also establishes
\reflem{lem:ImmermansMainTechnicalLemma}.
%
}
In~\cite{Immerman.1981} $\log n$-dimensional pyramids (where $n$ is
the number of vertices) are used to prove a
$\Bigomega{2^{\sqrt{\log n}}}$ lower bound on the quantifier depth of
full first-order counting logic. The next lemma
shows
that if we instead choose the dimension to be logarithmic in the
number of variables (i.e., pebbles)
in the game,
we get an improved quantifier depth
lower bound for the $k$-variable fragment.
\begin{lemma}\label{lem:pyramidLowerBound}
For every $d\geq 2$ and height~$h$, \firstplayer does not
win the $2^d$-pebble game on
$\Digraphxorarg{\mathcal P^d_{h}}$ within
%
%
$h/(d-1)- 1$
rounds.
\end{lemma}
\begin{proof}
We show that \secondplayer has a counter-strategy to answer
consistently for at least $\lfloor h/(d-1)\rfloor-1$ rounds and
therefore \firstplayer needs at least $\lfloor h/(d-1)\rfloor >
h/(d- 1)-1$ rounds to win.
Starting at the top layer~$\layernumber_1=0$,
she maintains the invariant that at the start of round~$r$
she has a consistent labelling of all vertices from layer~$0$
to layer~$\layernumber_{r}$ with the property that
there is no pebble on layers~$\layernumber_{r}+1$
to~$\layernumber_{r} + d - 1$.
Whenever \firstplayer places a pebble on or above layer~$\layernumber_r$,
\secondplayer responds
according to the consistent labelling and whenever \firstplayer
puts a pebble on or below layer $\layernumber_r + d$, she
answers~$0$,
and in both cases sets $\layernumber_{r+1} = \layernumber_r$.
Note that as long as \firstplayer places pebbles in this way, the
game can go on forever.
%
%
%
%
Since there are never more than
$2^d - 1$
pebbles left on vertices on or below layer $\layernumber_r + d$
(when \firstplayer runs out of pebbles the next move must be a removal),
the conditions needed for
\reflem{lem:ImmermansMainTechnicalLemma}
to apply are never violated.
Thus, the interesting case is when \firstplayer places a pebble
between layer $\layernumber_r + 1$ and
$\layernumber_r + d-1$.
Then \secondplayer uses
\reflem{lem:ImmermansMainTechnicalLemma} to extend her labelling to
the first layer $\layernumber_{r+1} > \layernumber_r$ such
that there is no pebble on layers $\layernumber_{r+1} + 1$ to
$\layernumber_{r+1} + d -1$, after which she answers the query
according to the new labelling.
\ifthenelse{\boolean{conferenceversion}}
{Note that}
{It is worth noting that}
when \secondplayer skips downward from
layer~$\layernumber_{r}$ to
layer~$\layernumber_{r+1}$ she might jump over a lot of layers in one
go, but if so there is at least one pebble for every
\mbox{$(d-1)$th layer} forcing such a big jump.
We see that following this strategy \secondplayer survives for at
least $\lfloor h/(d-1)\rfloor-1$ rounds, and this establishes
the lemma.
\end{proof}
Putting the pieces together, we can now present the lower bound for the
$k$-pebble game in \reflem{lem:pyramids}.
\begin{proof}[Proof of \reflem{lem:pyramids}]
Recall that we want to prove that
for all \mbox{$\pebblesl_{\parammax}\geq 3$} and
\mbox{$m \geq 3$} there is an
$m$-variable \mbox{$3$-XOR\xspace{}} formula
$\ensuremath{F}$
on which
\firstplayer
wins the \mbox{$3$-pebble} game but
cannot win the $\pebblesl_{\parammax}$-pebble game
within
$\tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
m^{1/(1 +
\ceiling{\log \pebblesl_{\parammax}})} - 2$
rounds.
If $m < (5\ceiling{ \log \pebblesl_{\parammax} })^{(\ceiling{ \log \pebblesl_{\parammax} } + 1)}$, then the round
lower bound is trivial and we let $\ensuremath{F}$ be, \eg, the 3-variable
formula $\Digraphxorarg{\mathcal P^1_1}$ plus $m-3$ auxiliary
variables on which \firstplayer{} needs 3 rounds to win.
Otherwise, we choose the formula to be
$\ensuremath{F} = \Digraphxorarg{\mathcal P^d_h}$
for parameters
$d
= %
\ceiling{ \log \pebblesl_{\parammax} }
$
and
$h
= %
\Floor{m^{1/(d+1)}}-1$.
%
Note that $\mathcal P^d_h$ contains fewer than $(h+1)^{d+1}\leq m$
vertices and we can add dummy variables to reach exactly $m$.
Since the graph~$\mathcal P^d_h$ has in-degree~$2$,
\reflem{lem:UpperBoundPebblingDAG} says that \firstplayer wins the
\mbox{$3$-pebble} game
as claimed in
\reflem{lem:pyramids}\ref{item:immerman-a}.
The lower bound for the \mbox{$\pebblesl_{\parammax}$-pebble} game
in \reflem{lem:pyramids}\ref{item:immerman-b}
follows from \reflem{lem:pyramidLowerBound} and the observation
that because $h\geq 5d-1$ we have $h/(d-1)\geq
(h+1)/d$ and hence
\begin{align}
\label{eq:1}
h/(d-1)- 1 \geq (h+1)/d- 1
&= \tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
\FLOOR{m^{1/(1 +
\ceiling{\log \pebblesl_{\parammax}})}} - 1 \\
&\geq \tfrac{1}{\ceiling{\log \pebblesl_{\parammax}}}
m^{1/(1 +
\ceiling{\log \pebblesl_{\parammax}})} - 2
\enspace .
\end{align}
The lemma follows.
\end{proof}
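As a quick numerical sanity check of the parameter choices above (not
part of the formal argument), the chain~(\ref{eq:1}) can be tested
mechanically; the Python snippet below, with $\log$ read as base~$2$
(our assumption), does so for a few sample values of
$\pebblesl_{\parammax}$ and~$m$.
\begin{verbatim}
import math

# sanity check of the inequality chain; log is read as base 2 here
for l in (4, 8, 16, 64):                       # l = number of pebbles
    d = math.ceil(math.log2(l))
    for m in ((5 * d) ** (d + 1), 10 * (5 * d) ** (d + 1)):
        h = math.floor(m ** (1 / (d + 1))) - 1
        assert h / (d - 1) - 1 >= m ** (1 / (d + 1)) / d - 2, (l, m)
\end{verbatim}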
%
\makeatletter{}%
\newcommand{\boundaryexp}[3]%
{$({#2},{#1},{#3})$\nobreakdash-boundary expander\xspace}
\newcommand{\nmboundaryexp}[5]%
{${#1}\times{#2}$ \boundaryexp{#3}{#4}{#5}}
\newcommand{\boundaryexpnodeg}[2]%
{$({#1},{#2})$\nobreakdash-boundary expander\xspace}
\newcommand{\nmboundaryexpnodeg}[4]{${#1}\times{#2}$ \boundaryexpnodeg{#3}{#4}}
\newcommand{\expansionguarantee}{s}
\newcommand{\expansionfactor}{c}
\newcommand{\expanderdegree}{\Delta}
\newcommand{\leftsize}{n}
\newcommand{\rightsize}{m}
\newcommand{\boundaryexpnodegstd}%
{\boundaryexpnodeg{\expansionguarantee}{\expansionfactor}}
\newcommand{\boundaryexpstd}%
{\boundaryexp{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\nmboundaryexpstd}%
{\nmboundaryexp{\leftsize}{\rightsize}{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\nmboundaryexpnodegstd}%
{\nmboundaryexpnodeg{\leftsize}{\rightsize}{\expansionguarantee}{\expansionfactor}}
\newcommand{\aboundaryexpnodegstd}%
{an \boundaryexpnodeg{\expansionguarantee}{\expansionfactor}}
\newcommand{\aboundaryexpstd}%
{an \boundaryexp{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\annmboundaryexpstd}%
{an \nmboundaryexp{\leftsize}{\rightsize}{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\annmboundaryexpnodegstd}%
{an \nmboundaryexpnodeg{\leftsize}{\rightsize}{\expansionguarantee}{\expansionfactor}}
\newcommand{\Aboundaryexpnodegstd}%
{An \boundaryexpnodeg{\expansionguarantee}{\expansionfactor}}
\newcommand{\Aboundaryexpstd}%
{An \boundaryexp{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\Annmboundaryexpstd}%
{An \nmboundaryexp{\leftsize}{\rightsize}{\expanderdegree}{\expansionguarantee}{\expansionfactor}}
\newcommand{\Annmboundaryexpnodegstd}%
{An \nmboundaryexpnodeg{\leftsize}{\rightsize}{\expansionguarantee}{\expansionfactor}}
\section{Hardness Condensation}
\label{sec:hardness-condensation}
In this section we establish \reflem{lem:hardnessCondensationXOR},
which shows how to convert an XOR\xspace formula into a harder formula over
fewer variables. As discussed in the introduction,
this part of our construction relies heavily on Razborov's recent
paper~\cite{Razborov16NewKind}. We follow his line of
reasoning closely below, but translate it from proof complexity to
a pebble game argument for bounded variable logics.
A key technical concept in the proof is graph expansion. Let us define
the particular type of expander graphs we need and then discuss some
crucial properties of these graphs. We use standard
notation, letting
$\mathcal G =
(U \overset{.}{\cup} V,E)$
denote a bipartite graph with left vertex set~$U$
and
right vertex set~$V$.
We let
$N^\mathcal G \bigl( U' \bigr)
=
\Setdescr{v}{\{u,v\}\in E(\mathcal G),u\in U'}$
\ifthenelse{\boolean{conferenceversion}}
{denote the right neighbours}
{denote the set of neighbour vertices on the right}
of a left vertex subset~$U' \subseteq U$
(and vice versa for right vertex subsets).
\begin{definition}[Boundary expander]
A bipartite graph
$\mathcal G = (U \overset{.}{\cup}
V,E)$
is an
\introduceterm{\nmboundaryexpnodegstd{} graph}
if
\mbox{$\setsize{U}=\leftsize$},
\mbox{$\setsize{V}=m$},
and for every set
$U'\subseteq U$,
$\setsize{U'}\leq s$,
it holds that
$\Setsize{\boundary^{\mathcal G}(U')} \geq
c\setsize{U'}$,
where the \emph{boundary}
$\boundary^{\mathcal G}(U')$ is the set of all
$v \in N^{\mathcal G}(U')$
having a unique neighbour in~$U'$, meaning that
\mbox{$\Setsize{N^{\mathcal G}(v)\cap U'}=1$}.
\Aboundaryexpstd is \aboundaryexpnodegstd where additionally
$\Setsize{N^\mathcal G(u)} \leq \Delta$
for all
$u\in U$,
i.e., the graph has left degree bounded by~$\Delta$.
\end{definition}
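For small instances the boundary-expansion condition can be verified
by brute force. The following Python sketch (ours, exponential
in~$s$ and intended only for experimentation) represents a bipartite
graph as a dictionary mapping each left vertex to its set of right
neighbours.
\begin{verbatim}
from itertools import combinations

def boundary(G, Us):
    """Right vertices with exactly one neighbour in the left set Us."""
    count = {}
    for u in Us:
        for v in G[u]:
            count[v] = count.get(v, 0) + 1
    return {v for v, c in count.items() if c == 1}

def is_boundary_expander(G, s, c):
    """Check |boundary(U')| >= c*|U'| for all U' with 1 <= |U'| <= s."""
    for t in range(1, s + 1):
        for Us in combinations(G, t):
            if len(boundary(G, Us)) < c * t:
                return False
    return True
\end{verbatim}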
In what follows, we will omit~$\mathcal G$ from the notation when
the graph is clear from context.
In any \boundaryexpnodegstd
\ifthenelse{\boolean{conferenceversion}}
{with $c > 0$}
{with expansion $c > 0$}
it holds that any left vertex subset
$U' \subseteq U$
of size
$\setsize{U'} \leq s$
has a partial matching into~$V$
\ifthenelse{\boolean{conferenceversion}}
{where the}
{where in addition the}
vertices in~$U'$ can be ordered
in such a way that every vertex $u_i \in U'$ is
matched to a vertex outside of the neighbourhood of
the preceding vertices $u_1, \ldots, u_{i-1}$.
The proof of this fact is sometimes
referred to as a \introduceterm{peeling argument}.
\begin{lemma}[Peeling lemma]
\label{lem:peeling_lemma}
Let
$\mathcal G = (U \overset{.}{\cup} V,E)$
be \aboundaryexpnodegstd with
$s\geq1$
and
$c > 0$.
Then
for every set $U' \subseteq U$,
$|U'|=t\leq s$ there is an
ordering $u_1,\ldots,u_t$ of its vertices and a sequence of
vertices $v_1,\ldots,v_t \in V$ such that $v_i\in
N(u_i)\setminus N(\{u_1,\ldots,u_{i-1}\})$.
\end{lemma}
\ifthenelse{\boolean{conferenceversion}}
{%
\begin{proof}[Proof sketch]
Fix any \mbox{$v_t \in \boundary(U')$}
and let
$u_t \in U'$
be the unique vertex
such that
$N(v_t)\cap U'=\{u_t\}$.
Then it holds that
$v_t \in
N(u_t) \setminus
N \bigl(U'\setminus\{u_t\}\bigr)$.
By induction we can now find sequences
$u_1,\ldots,u_{t-1}$ and $v_1,\ldots,v_{t-1}$
for~$U'\setminus\{u_t\}$
such that $v_i\in
N(u_i)\setminus N(\{u_1,\ldots,u_{i-1}\})$,
to which we can append
$u_t$ and~$v_t$ at the end.
The lemma follows.
\end{proof}
}
{%
\begin{proof}
The proof is by induction on $t$.
The base case $t=1$
is immediate
since
$s\geq1$
and
$c > 0$
implies that no left vertex can be isolated.
For the
inductive step,
suppose the lemma holds
for~$t - 1$.
To
construct
the sequence $v_1,\ldots,v_t$ we first fix
$v_t$ to
be any vertex in $\boundary(U')$, which
has to exist since
$\Setsize{\boundary(U')}
\geq \mbox{$c \setsize{U'} > 0$}$.
The fact that $v_t$ is in the boundary of~$U'$
means that there is a unique~$u_t \in
U'$
such that
$N(v_t)\cap U'=\{u_t\}$.
Thus, for this pair $(u_t, v_t)$ it holds that
$v_t \in
N(u_t) \setminus
N \bigl(U'\setminus\{u_t\}\bigr)$.
By the induction hypothesis we can now find sequences
$u_1,\ldots,u_{t-1}$ and $v_1,\ldots,v_{t-1}$
for~$U'\setminus\{u_t\}$
such that $v_i\in
N(u_i)\setminus N(\{u_1,\ldots,u_{i-1}\})$,
to which we can append
$u_t$ and~$v_t$ at the end. The lemma follows.
\end{proof}
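The inductive proof is constructive and translates directly into the
following procedure, sketched here in Python in the same graph
representation as above: it repeatedly peels the pair $(u_t,v_t)$ off
the back and reverses the order at the end.
\begin{verbatim}
def peel(G, Us):
    """Return orderings u_1..u_t and v_1..v_t such that v_i is in
    N(u_i) but not in N({u_1,...,u_{i-1}})."""
    remaining, us, vs = set(Us), [], []
    while remaining:
        count = {}
        for u in remaining:
            for v in G[u]:
                count[v] = count.get(v, 0) + 1
        v_t = next(v for v, c in count.items() if c == 1)  # boundary vertex;
        u_t = next(u for u in remaining if v_t in G[u])    # exists by expansion
        us.append(u_t); vs.append(v_t)
        remaining.remove(u_t)
    us.reverse(); vs.reverse()
    return us, vs
\end{verbatim}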
}
For a right vertex subset
$V' \subseteq V$
in
$\mathcal G = (U \overset{.}{\cup} V,E)$
we define the \introduceterm{kernel}
$\operatorname{Ker} \bigl( V' \bigr) \subseteq U$
to be the set of all left vertices whose entire neighbourhood is
contained in
$V'$, i.e.,
\begin{equation}
\label{eq:kernel}
\operatorname{Ker} \bigl( V' \bigr) =
\Setdescr{\mbox{$u\in U$}}{N(u)\subseteq V'}
\enspace .
\end{equation}
We let
$\expandersubgraph{\mathcal G}{V'}$
denote the subgraph of $\mathcal G$ induced on
$
\bigl(
U\setminus\operatorname{Ker}(V')
\bigr)
\overset{.}{\cup}
\bigl(
V\setminus V'
\bigr)
$.
In other words,
we obtain
$\expandersubgraph{\mathcal G}{V'}$
from~$\mathcal G$ by first deleting $V'$ and
afterwards all isolated vertices from $U$
(assuming that there were no isolated left vertices before, which is
true if $\mathcal G$ is expanding).
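In the same dictionary representation as before, the kernel
from~(\ref{eq:kernel}) and the induced subgraph
$\expandersubgraph{\mathcal G}{V'}$ read as follows (a sketch of
ours, for concreteness).
\begin{verbatim}
def kernel(G, Vs):
    """Left vertices whose whole neighbourhood lies inside Vs."""
    Vs = set(Vs)
    return {u for u in G if G[u] <= Vs}

def minus(G, Vs):
    """Induced subgraph on U minus kernel(G, Vs) and V minus Vs."""
    Vs = set(Vs)
    return {u: G[u] - Vs for u in G if not G[u] <= Vs}
\end{verbatim}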
\ifthenelse{\boolean{conferenceversion}}
{The next lemma states that for any small enough right vertex
set~$V'$ in an expander~$\mathcal G$ we can}
{The next lemma states that if $\mathcal G$ is an expander graph,
then for any small enough right vertex set~$V'$ we
can always}
find a \introduceterm{closure}
$\gamma\bigl(V'\bigr) \supseteq V'$
with a small kernel such that
\ifthenelse{\boolean{conferenceversion}}
{$\expandersubgraph{\mathcal G}{\gamma(V')}$
has good expansion.}
{the subgraph
$\expandersubgraph{\mathcal G}{\gamma(V')}$
has good boundary expansion.}
\ifthenelse{\boolean{conferenceversion}}
{The proof of this lemma (albeit with slightly different parameters)
can be found in~\cite{Razborov16NewKind} and is also provided in
the full-length version of this paper.}
{The proof of this lemma (albeit with slightly different parameters)
can be found in~\cite{Razborov16NewKind}, but we also include it in
\refapp{app:existence-expander}
for completeness.}
\begin{lemma}[\cite{Razborov16NewKind}]
\label{lem:ClosedSet}
Let $\mathcal G$ be an
\boundaryexpnodeg{\expansionguarantee}{2}.
Then for every $V' \subseteq V$ with
$\setsize{V'}\leq s/2$
there exists a subset
$\gamma\bigl(V'\bigr) \subseteq V$
with
$\gamma(V') \supseteq V'$
such that
$\Setsize{\operatorname{Ker} \bigl( \gamma \bigl( V' \bigr)\bigr)}
\leq \Setsize{V'}$
and the induced subgraph
$\expandersubgraph{\mathcal G}{\gamma(V')}$ is
an
\boundaryexpnodeg{\expansionguarantee/2}{1}.
\end{lemma}
\ifthenelse{\boolean{conferenceversion}}
{In order for \reftwolems{lem:peeling_lemma}{lem:ClosedSet} to be
useful, we need to know that there exist good expanders.
This can be established by a standard probabilistic argument.
A proof of the next lemma is given in~\cite{Razborov16NewKind} and can
also be found in the full version of this paper.}
{Note that
\reflem{lem:ClosedSet}
is a purely existential result. We do not know how the closure is
constructed and, in particular,
if we want to choose closures of minimal size, then
$V_1 \subseteq V_2$
does not necessarily imply
$\gamma(V_1) \subseteq \gamma(V_2)$.
In order for \reftwolems{lem:peeling_lemma}{lem:ClosedSet} to be
useful, we need to know that there exist good enough boundary expanders.
To prove this, one can just fix a left vertex set~$U$ of
size~$\leftsize$ and a right vertex set~$V$ of
size~$m$ and then for every $u\in U$ choose
$\Delta$~neighbours from~$V$ uniformly and
independently at random.
\mbox{A standard} probabilistic argument shows that
%
%
%
with high probability
this random graph is an
\nmboundaryexp{\leftsize}{m}{\expanderdegree}{\expansionguarantee}{2}
for appropriately chosen parameters.
We state this formally as a lemma below. A similar lemma is proven
in~\cite{Razborov16NewKind} but we also
provide a proof in \refapp{app:existence-expander}
for the convenience of the reader.}
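The probabilistic construction just described is also easy to
experiment with: sample a random graph and test it with the
brute-force checker sketched earlier. A sketch of ours (the concrete
parameter values are only illustrative, and the check is feasible
only for very small~$s$):
\begin{verbatim}
import random

def sample_graph(n_left, m_right, Delta):
    """Delta right neighbours per left vertex, chosen uniformly and
    independently (with replacement), as in the probabilistic argument."""
    return {u: {random.randrange(m_right) for _ in range(Delta)}
            for u in range(n_left)}

hits = sum(is_boundary_expander(sample_graph(64, 16, 6), s=3, c=2)
           for _ in range(100))   # count of samples that expand
\end{verbatim}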
\begin{lemma}%
\label{lem:expanderexistnew}
There is
an absolute constant $\expanderdegree_0 \in \Nplus$
such that for all
integers
$\Delta$,
$s$,
and~$\leftsize$
satisfying
$
\Delta \geq \expanderdegree_0
$
and
\mbox{$(s\Delta)^{2\Delta} \leq \leftsize$}
there exist
\nmboundaryexp{\leftsize}{\Ceiling{{\leftsize}^{3/\Delta}}}
{\expanderdegree}{\expansionguarantee}{2}{}s.
\end{lemma}
\ifthenelse{\boolean{conferenceversion}}
{}
{%
For readers familiar with expander graphs from other contexts,
it might be worth pointing out that the parameters
%
above
are different from what tends to be the standard expander graph
settings of
$s = \bigomega{\leftsize}$
and
$\Delta = \bigoh{1}$.
Instead, in \reflem{lem:expanderexistnew} we have
$s$ growing sublinearly in~$\leftsize$
and
$\Delta$
need not be constant
(although we still need
$\Delta \lessapprox \log \leftsize / \log \log \leftsize$
in order to satisfy the conditions of the lemma).
} %
In what follows, unless otherwise stated
$\mathcal G = (U \overset{.}{\cup} V,E)$
will be an
\boundaryexpnodeg{\expansionguarantee}{2} for
$\expansionguarantee = 2\pebblesk$.
We will use such expanders
when we do XOR\xspace substitution in our formulas as described formally in the next
definition. In words, variables in the XOR\xspace formula are identified
with left vertices~$U$ in~$\mathcal G$, the pool of
new variables is the right vertex set~$V$, and every
variable $u \in U$ in an XOR\xspace clause is replaced by
an exclusive or
$\bigoplus_{v \in N(u)} v$
over its neighbours
$v \in N(u)$.
\ifthenelse{\boolean{conferenceversion}}
{}
{We emphasize that in ``standard'' \XORification as found in the proof
complexity literature all new substituted variables would be
distinct, i.e.,
$N(u_1) \cap N(u_2) = \emptyset$
for $u_1 \neq u_2$. While this often makes formulas harder, it also
increases the number of variables. Here,
we use the approach in~\cite{Razborov16NewKind} to instead recycle
variables from a much smaller set~$V$ in the substitutions, thus decreasing
the total number of variables.}
\begin{definition}[XOR\xspace substitution with recycling]
\label{def:xor-substitution}
Let $\ensuremath{F}$ be an XOR\xspace formula with
$\operatorname{Vars}(\ensuremath{F}) = U$
and let
$\mathcal G = (U \overset{.}{\cup} V,E)$
be a bipartite graph.
For every clause $c = (u_1,\ldots,u_t,a)$ in
$\ensuremath{F}$
we let $\constraint\substituted$ be the clause
$(v_1^1,\ldots,v_1^{z_1},\ldots,
v_t^1,
\ldots,
v_t^{z_t},a)$,
where
$N(u_i)=\{v^1_i,\ldots,v^{z_i}_i\}$
for all $1\leq
i\leq t$.
Taking unions, we let
$\xformf\substituted$ be the XOR\xspace formula
$\xformf\substituted =
\setdescr{\constraint\substituted}{c\in\ensuremath{F}}$.
\end{definition}
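Operationally, the substitution is a single pass over the clauses. In
the Python sketch below (ours), an XOR\xspace clause
$(u_1,\ldots,u_t,a)$ is represented as a pair of a frozen set of
variables and the parity bit~$a$; right variables that occur an even
number of times cancel modulo~$2$, which the symmetric difference
takes care of.
\begin{verbatim}
def substitute(F, G):
    """XOR substitution with recycling: replace each u by the
    exclusive or of its neighbours N(u) in the bipartite graph G."""
    H = set()
    for clause_vars, a in F:          # clause = (frozenset over U, parity a)
        new_vars = frozenset()
        for u in clause_vars:
            new_vars ^= frozenset(G[u])   # duplicates cancel mod 2
        H.add((new_vars, a))
    return H
\end{verbatim}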
When using an
\nmboundaryexp{\leftsize}{{\leftsize}^{3/\Delta}}
{\expanderdegree}{\expansionguarantee}{2}
as in \reflem{lem:expanderexistnew}
for substitution in an $\leftsize$\nobreakdash-variable XOR\xspace
formula~$\xformf$ as described in \refdef{def:xor-substitution},
we obtain a new XOR\xspace formula $\xformf\substituted$
where the number of variables has decreased significantly
to~$\leftsize^{3/\Delta}$.
The next lemma, which is at the heart of our logic-flavoured version
of hardness condensation, states that a round lower bound for the
\mbox{$\pebblesk$-pebble} game on~$\xformf$ implies a round lower
bound for the \mbox{$\pebblesk$-pebble} game on~$\xformf\substituted$.
\begin{lemma}
\label{lem:HardnessCondensationGameStyle}
Let
$\pebblesk$
be a positive integer and let
$\mathcal G$
be an \nmboundaryexpnodeg{\leftsize}{\rightsize}{2 \pebblesk}{2}.
Then if $\xformf$ is an XOR\xspace formula
over $\leftsize$~variables such that
\secondplayer wins the \mbox{$\roundstd$-round}
\mbox{$\pebblesk$-pebble} game on~$\xformf$,
she also wins the \mbox{$\roundstd/(2\pebblesk)$-round}
\mbox{$\pebblesk$-pebble} game on~$\xformf\substituted$.
\end{lemma}
\ifthenelse{\boolean{conferenceversion}}{}
{By way of comparison with~\cite{Razborov16NewKind},
we remark that a straightforward translation of Razborov's technique
would start with formulas on which \firstplayer can win with few
pebbles, but needs an almost linear number of rounds to win the
game, even if he has an unbounded number of
pebbles.%
\footnote{In terms of resolution, this corresponds to formulas that are
refutable in small width, but where every resolution refutation
has almost linear depth.}
Applying this without modification to Immerman's construction, we would
obtain very weak bounds (and, in particular, nothing interesting for
constant~$\pebblesk$). Instead, as input to our hardness condensation
lemma we use a construction that has a round lower bound
of~$n^{1/\log \pebblesk}$, and show that for hardness
condensation it is not necessary that the original formula is hard
over the full range.}
Before embarking on a formal proof of
\reflem{lem:HardnessCondensationGameStyle},
which is rather technical and will take the rest of this section, let
us discuss the intuition behind it. The main idea to obtain a good
strategy for \secondplayer on the substituted formula~$\xformf\substituted$ is
to think of the game as being played on~$\xformf$ and simulate the
survival strategy there for as long as possible
(which is where
\ifthenelse{\boolean{conferenceversion}}
{expansion}
{boundary expansion}
comes into play).
Let
$\mathcal G = (U \overset{.}{\cup} V,E)$
be an
\boundaryexpnodeg{2\pebblesk}{2}
as stated in the lemma.
We have
$\operatorname{Vars}(\xformf) = U$ and $\operatorname{Vars}(\xformf\substituted)
= V$.
Given a strategy for \secondplayer in the $\roundstd$-round
$\pebblesk$-pebble game on $\xformf$, we want to
convert this into a winning strategy for \secondplayer for
the $\roundstd/(2\pebblesk)$-round $\pebblesk$-pebble
game on $\xformf\substituted$.
A first approach
(which will not quite work) is the following.
While playing on the substituted formula $\xformf\substituted$, \secondplayer
simulates the game on~$\xformf$.
For every position $\beta$ in the game on $\xformf\substituted$, she
maintains a corresponding position $\alpha$ on $\ensuremath{F}$, which is
defined on all variables whose entire neighbourhood in the expander is
contained in the domain of $\beta$, i.e.,
$\operatorname{Vars}(\alpha)=\operatorname{Ker}(\operatorname{Vars}(\beta))$.
The assignments of $\alpha$ should be
defined in such a way that they are
\emph{consistent with} $\beta$, i.e., so that
$\alpha(u)
= %
\bigoplus_{v\in N(u)}\beta(v)$.
It then follows from
\ifthenelse{\boolean{conferenceversion}}
{the definition of \XORification}
{the description of \XORification in \refdef{def:xor-substitution}}
that $\alpha$ falsifies an XOR\xspace
clause of $\xformf$ if and only if $\beta$ falsifies an XOR\xspace
clause of $\xformf\substituted$.
Now \secondplayer wants to play
in such a way that if $\beta$ changes to
$\beta'$ in one round of the game on~$\xformf\substituted$, then the
corresponding position $\alpha$ also changes to $\alpha'$ in one
round of the game on~$\xformf$.
Intuitively, this should
be done as follows.
Suppose that starting from a position $\beta$, \firstplayer asks
for a variable $v\in V$.
If $v$ is not the last
unassigned
vertex in a neighbourhood of some
$u\in U$, i.e.,
$\operatorname{Ker}(\operatorname{Vars}(\beta))=\operatorname{Ker}(\operatorname{Vars}(\beta)\cup\{v\})$,
then \secondplayer can make an arbitrary choice as $\alpha =
\alpha'$ is consistent with both choices.
If $v$ is the last free vertex in the neighbourhood of exactly one
vertex $u$, i.e., $\{u\}=\operatorname{Ker}(\operatorname{Vars}(\beta)\cup\{v\})\setminus
\operatorname{Ker}(\operatorname{Vars}(\beta))$, then \secondplayer assumes that she was
asked for $u$ in the simulated game on $\xformf$.
If
in her strategy for the $\roundstd$-round $\pebblesk$-pebble game on
$\xformf$ she
would answer
with an assignment $a\in\{0,1\}$
which would yield the new position
$\alpha'=\alpha\cup\{u\mapsto a\}$,
then in the game on $\xformf\substituted$ she now sets $v$ to the right value
$b\in\{0,1\}$
so
that the new position
$\beta'=\beta\cup\{v\mapsto b\}$ satisfies the consistency
property $\alpha'(u)=\bigoplus_{v\in N(u)}\beta'(v)$.
If \secondplayer could follow this strategy, then the number of rounds
she would survive the game on $\xformf\substituted$ would be lower-bounded by
the number of rounds she survives in the game on $\xformf$.
There is a gap in this intuitive argument, however, namely
how to handle the case when the
queried variable $v$ completes the neighbourhood of two (or more)
vertices
$u_1$, $u_2$ at the same time.
If it holds that
$\set{u_1,u_2} \subseteq \operatorname{Ker}(\operatorname{Vars}(\beta)\cup\{v\})\setminus
\operatorname{Ker}(\operatorname{Vars}(\beta))$,
then we have serious
\ifthenelse{\boolean{conferenceversion}}
{problems, as $u_1$ and $u_2$ could suggest two different ways of
assigning~$v$, implying}
{problems. Following the strategy above for $u_1$ and $u_2$ separately can
yield two different and conflicting ways of assigning~$v$, meaning}
that for the new position~$\beta'$ there will
be no consistent assignment $\alpha'$ of $\operatorname{Ker}(\operatorname{Vars}(\beta'))$.
To circumvent this problem and implement the proof idea above,
we will use the boundary expansion of~$\mathcal G$
to ensure that this problematic case does not occur.
For instance, suppose that the graph
$\expandergraph' =
\expandersubgraph{\mathcal G}{\operatorname{Vars}(\beta)}$,
which is the induced subgraph of~$\mathcal G$ on
$U\setminus\operatorname{Vars}(\alpha)$ and
$V\setminus\operatorname{Vars}(\beta)$,
has boundary expansion at least~$1$.
Then the bad situation described above with two variables $u_1,u_2$
having neighbourhood
$N^{\expandergraph'}(u_1)=N^{\expandergraph'}(u_2)=\{v\}$ in
$\expandergraph'$
cannot arise, since this would imply
\mbox{$\boundary^{\expandergraph'}(\{u_1,u_2\}) = \emptyset$},
contradicting the expansion properties of~$\expandergraph'$.
Unfortunately, we cannot ensure boundary expansion of
$\expandersubgraph{\mathcal G}{\operatorname{Vars}(\beta)}$ for every
position $\beta$, but we can apply \reflem{lem:ClosedSet} and
extend the current position to a larger one that is defined on
$\gamma(\operatorname{Vars}(\beta))$ and has the desired expansion
property.
Since \reflem{lem:ClosedSet} ensures that the domain
$\operatorname{Ker}(\gamma(\operatorname{Vars}(\beta)))$ of our assignment~$\alpha$
under construction is bounded by
$\setsize{\alpha} \leq \setsize{\beta} \leq \pebblesk$,
such an extension will still be good enough.
We now proceed to present a formal proof. When doing so, it turns out
to be convenient for us to prove the contrapositive of the statement
discussed above. That is, instead of transforming a strategy for
\secondplayer in the $\roundstd$-round $\pebblesk$-pebble game
on~$\xformf$ to a strategy for the $\roundstd/(2\pebblesk)$-round
$\pebblesk$-pebble game on~$\xformf\substituted$
for an \boundaryexpnodeg{2\pebblesk}{2}~$\mathcal G$,
we will show that a winning strategy for \firstplayer in the game
on the substituted formula~$\xformf\substituted$ can be used to obtain a
winning strategy for \firstplayer in the game on the original
formula~$\xformf$.
Suppose that $\beta$ is any position in the $\pebblesk$-pebble game
on~$\xformf\substituted$, i.e., a partial assignment of variables
in~$V$.
Since
$\mathcal G$ is a \boundaryexpnodeg{2\pebblesk}{2} and
$\setsize{\beta} \leq \pebblesk$,
we can apply
\reflem{lem:ClosedSet} to obtain a superset
$\gamma(\operatorname{Vars}(\beta))\supseteq \operatorname{Vars}(\beta)$
having the properties that
$\setsize{\operatorname{Ker} ( \gamma ( \operatorname{Vars}(\beta) ) )}
\leq \setsize{\operatorname{Vars}(\beta)}$
and
the induced subgraph
$\expandersubgraph{\mathcal G}{\gamma(\operatorname{Vars}(\beta))}$ is
a \boundaryexpnodeg{\pebblesk}{1}.
For the rest of this section, fix a minimal such set
$\gamma(V')$ for
every \mbox{$V' = \operatorname{Vars}(\beta)$} corresponding to a
position~$\beta$ in the $\pebblesk$-pebble game. This will allow
us to define formally what we mean by \introduceterm{consistent}
positions in the two games on~$\xformf$ and~$\xformf\substituted$ as
described next.
\begin{definition}[Consistent positions]
\label{def:consistent-positions}
Let $\alpha$ be a
position in the pebble game on~$\xformf$, i.e., a
partial assignment of variables in
$U$, and let $\beta$ be a partial assignment of
variables in $V$
corresponding to a position in the pebble game on~$\xformf\substituted$.
We say that $\alpha$ \emph{is consistent with} $\beta$
if there exists an extension $\possubst_{\mathrm{ext}}\supseteq\beta$ with
$\operatorname{Vars}(\possubst_{\mathrm{ext}}) =
N \bigl( \operatorname{Vars}(\alpha) \bigr) \cup \operatorname{Vars}(\beta)$
such that
for all
$u\in\operatorname{Vars}(\alpha)$
it holds that
\mbox{$\alpha(u)=\bigoplus_{v\in N(u)}\possubst_{\mathrm{ext}}(v)$}.
Let $\beta$ be a position in the $\pebblesk$-pebble game on the
XOR-substituted formula~$\xformf\substituted$ and let
$\gamma(V')$ be the fixed, minimal closure of
$\beta$ chosen above.
Then we let
$\mathit{Cons}(\beta)$
denote the set of all positions~$\alpha$ with
$\operatorname{Vars}(\alpha)=\operatorname{Ker}(\gamma(\operatorname{Vars}(\beta)))$
in the pebble game on~$\xformf$
that are
consistent with~$\beta$.
\end{definition}
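Consistency in the sense of \refdef{def:consistent-positions} amounts
to the solvability of a small linear system over $\mathrm{GF}(2)$.
For intuition, here is a brute-force Python sketch of ours
(exponential in the number of unassigned neighbour variables, so
purely illustrative).
\begin{verbatim}
from itertools import product

def consistent(alpha, beta, G):
    """alpha: dict u -> 0/1;  beta: dict v -> 0/1;  G: dict u -> N(u).
    Does some extension of beta to N(Vars(alpha)) | Vars(beta) give
    alpha(u) = parity of the extension over N(u), for every u?"""
    free = sorted(set().union(*[set(G[u]) for u in alpha]) - set(beta))
    for bits in product((0, 1), repeat=len(free)):
        ext = dict(beta)
        ext.update(zip(free, bits))
        if all(sum(ext[v] for v in G[u]) % 2 == alpha[u] for u in alpha):
            return True
    return False
\end{verbatim}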
Observe that for
$\alpha_1 \subseteq \alpha_2$
and
$\beta_1 \subseteq \beta_2$
it holds that if
$\alpha_2$ is consistent with~$\beta_1$ then so is~$\alpha_1$,
and if
$\alpha_1$ is consistent with~$\beta_2$ then
$\alpha_1$ is consistent also with~$\beta_1$.
Furthermore, by \reflem{lem:ClosedSet} we have
$\setsize{\alpha} \leq \setsize{\beta}$
for all
\mbox{$\alpha \in \mathit{Cons}(\beta)$}.
The next claim states the core inductive argument.
\begin{claim}
\label{claim:inductionGameStyle}
Let $\beta$ be a position on $\xformf\substituted$
for an \boundaryexpnodeg{2\pebblesk}{2}~$\mathcal G$
and suppose that
\firstplayer wins the $i$-round $\pebblesk$-pebble game
on~$\xformf\substituted$ from position~$\beta$.
Then \firstplayer has a strategy to win the $\pebblesk$-pebble
game on~$\xformf$ within $2\pebblesk i$ rounds from every
position $\alpha\in\mathit{Cons}(\beta)$.
\end{claim}
\ifthenelse{\boolean{conferenceversion}}
{We note that this claim
is just a stronger (contrapositive) version of
\reflem{lem:HardnessCondensationGameStyle}.}
{We note that this claim
is just a stronger version of (the
contrapositive of) \reflem{lem:HardnessCondensationGameStyle}.}
\begin{proof}[Proof of \reflem{lem:HardnessCondensationGameStyle}
assuming Claim~\ref{claim:inductionGameStyle}]
Note that if $\roundstd/(2\pebblesk)<1$, then the lemma is
trivially true,
as \firstplayer always needs at least one round to win the
pebble game from the empty position.
Otherwise, we apply Claim~\ref{claim:inductionGameStyle}
with parameters
$\beta=\emptyset$ and
$i=\roundstd/(2\pebblesk)$.
Since
\mbox{$\mathit{Cons}(\emptyset)=\{\emptyset\}$},
we directly get the contrapositive statement of
\reflem{lem:HardnessCondensationGameStyle}
that if \firstplayer{} wins the $\roundstd/(2\pebblesk)$-round
$\pebblesk$-pebble game on $\xformf\substituted$, then he wins the
$\roundstd$-round $\pebblesk$-pebble game on $\xformf$.
\end{proof}
All that remains for us to do now is to establish
Claim~\ref{claim:inductionGameStyle}, after which the hardness
condensation lemma will follow easily.
\begin{proof}[Proof of Claim~\ref{claim:inductionGameStyle}]
The proof is by induction on $i$.
For the base case $i=0$ we have to show that if $\beta$
falsifies an XOR\xspace clause in~$\xformf\substituted$, then every assignment
$\alpha\in\mathit{Cons}(\beta)$ falsifies an XOR\xspace clause
in~$\xformf$.
But if
$\beta$ falsifies a clause of $\xformf\substituted$,
which by construction has the form $\constraint\substituted$ for some
clause $c$ from~$\xformf$,
then by
\reftwodefs{def:xor-substitution}{def:consistent-positions}
it holds that every $\alpha\in\mathit{Cons}(\beta)$ falsifies~$c$.
For the induction step, suppose that the statement holds for $i-1$
and assume that \firstplayer{} wins the $i$-round
$\pebblesk$-pebble game on $\xformf\substituted$ from position $\beta$.
The $i$th~round
consists of two steps:
\begin{enumerate}
\item
\firstplayer first chooses a subassignment
$\possubst'\subseteq\beta$.
\item
He then asks for the value of one variable
$v\in V \setminus \Vars{\possubst'}$,
to which \secondplayer{}
chooses
an assignment
$b \in \set{0,1}$ yielding the new position
$\possubst' \cup \set{v \mapsto b}$.
\end{enumerate}
As \firstplayer{} has a strategy to win from $\beta$ within
$i$ rounds, it follows that he can win from both
$\possubst' \cup \set{v\mapsto 0}$ and
$\possubst' \cup \set{v\mapsto 1}$
within $i-1$ rounds.
By the inductive assumption we then
\ifthenelse{\boolean{conferenceversion}}
{immediately obtain the following
statement for the set of assignments
\begin{equation}
\label{eq:beta-star-v}
\mathit{Cons} \bigl( \possubst'\ast v \bigr) :=
\textstyle \bigcup_{a \in \set{0,1}}
\mathit{Cons} \bigl( \possubst' \cup \set{v\mapsto a} \bigr)
\end{equation}}
{deduce for the set of assignments
\begin{equation}
\label{eq:beta-star-v}
\mathit{Cons} \bigl( \possubst'\ast v \bigr) :=
\mathit{Cons} \bigl( \possubst' \cup \set{v\mapsto 0} \bigr) \cup
\mathit{Cons} \bigl( \possubst' \cup \set{v\mapsto 1 } \bigr)
\end{equation}}
%
consistent with either
$\possubst' \cup \set{v\mapsto 0}$
\ifthenelse{\boolean{conferenceversion}}
{or $\possubst' \cup \set{v\mapsto 1}$.}
{or $\possubst' \cup \set{v\mapsto 1}$
that the following statement holds.}
\begin{subclaim}
\label{subclaim:ihyp}
\firstplayer{} can win the $\pebblesk$-pebble game on $\xformf$
within
$2\pebblesk(i-1)$ rounds
from all positions in~%
$\mathit{Cons} \bigl( \possubst'\ast v \bigr)
$.
\end{subclaim}
Note that a position is in
$\mathit{Cons} \bigl( \possubst'\ast v \bigr)$
if it is consistent with either
$\possubst'\cup\{v\mapsto 0\}$
or
$\possubst'\cup\{v\mapsto 1\}$.
Therefore,
$\mathit{Cons} \bigl( \possubst'\ast v \bigr)$
is the set of all positions over
$\operatorname{Ker} \bigl( \gamma \bigl( \operatorname{Vars} \bigl( \possubst' \bigr) \cup \set{v} \bigr) \bigr)$
that are consistent with~$\possubst'$.
What remains to show is that from every position
$\alpha\in\mathit{Cons}(\beta)$ \firstplayer{} can reach some
position in
$\mathit{Cons} \bigl( \possubst'\ast v \bigr)$ within $2\pebblesk$~rounds.
We split the proof into two steps, corresponding to the two steps in
the move of \firstplayer from position~$\beta$.
\begin{subclaim}
\label{subclaim:statement-a}
From every position
$\alpha \in \mathit{Cons}(\beta)$ \firstplayer{} can reach some
position in $\mathit{Cons} \bigl( \possubst' \bigr)$
for
$\possubst' \subseteq \beta$
within $\pebblesk$~rounds.
\end{subclaim}
\begin{subclaim}
\label{subclaim:statement-b}
From every position
$\alpha\in\mathit{Cons} \bigl( \possubst' \bigr)$
\firstplayer{} can reach some
position in $\mathit{Cons} \bigl( \possubst'\ast v \bigr)$
within $\pebblesk$~rounds.
\end{subclaim}
\ifthenelse{\boolean{conferenceversion}}
{%
We now establish Subclaim~\ref{subclaim:statement-b}.
The proof of Subclaim~\ref{subclaim:statement-a} is similar and
deferred to the full-length version of the paper.}
{%
Let us establish Subclaims~\ref{subclaim:statement-a}
and~\ref{subclaim:statement-b} in reverse order.}
\begin{subproof}%
[Proof of Subclaim~\ref{subclaim:statement-b}]
\firstplayer starts with
\ifthenelse{\boolean{conferenceversion}}
{some assignment
$\posorig_{\startpossuffix}\in\mathit{Cons}\bigl( \possubst' \bigr)$ defined over}
{an assignment
$\posorig_{\startpossuffix}\in\mathit{Cons}\bigl( \possubst' \bigr)$,
which is defined over the variables}
$\leftvertexset_{\startpossuffix} =
\operatorname{Ker}\bigl(
\gamma\bigl( \operatorname{Vars} \bigl( \possubst' \bigr) \bigr)
\bigr)$,
and wants to reach some assignment
\mbox{$\posorig_{\stoppossuffix} \in \mathit{Cons}\bigl( \possubst'\ast v \bigr)$}
defined over the variables
$\leftvertexset_{\stoppossuffix} =
\operatorname{Ker}\bigl(
\gamma \bigl( \operatorname{Vars} \bigl( \possubst' \bigr)
\cup \set{v} \bigr)
\bigr)$.
If
$
\operatorname{Ker}\bigl(
\gamma\bigl( \operatorname{Vars} \bigl( \possubst' \bigr) \bigr)
\bigr)
=
\operatorname{Ker}\bigl(
\gamma \bigl( \operatorname{Vars} \bigl( \possubst' \bigr)
\cup \set{v} \bigr)
\bigr)$,
then \firstplayer can choose
$\posorig_{\stoppossuffix} = \posorig_{\startpossuffix}$.
To see this, note that if $\posorig_{\startpossuffix}$ assigns a value to some
$u \in N(v)$, then since
$\posorig_{\startpossuffix}\in\mathit{Cons}\bigl( \possubst' \bigr)$
it holds by
\refdef{def:consistent-positions}
that
$
N(u) \subseteq
\gamma\bigl( \operatorname{Vars} \bigl( \possubst' \bigr) \bigr)
$,
and thus
$\posorig_{\startpossuffix}$
is already consistent with
$\possubst'\cup\set{v\mapsto b}$
for some
$b \in \set{0,1}$.
Hence, \firstplayer need not ask any question in this case, but
the induction hypothesis immediately yields the desired
conclusion.
The more interesting case is when
$
\operatorname{Ker}\bigl(
\gamma\bigl( \operatorname{Vars} \bigl( \possubst' \bigr) \bigr)
\bigr)
\neq
\operatorname{Ker}\bigl(
\gamma \bigl( \operatorname{Vars} \bigl( \possubst' \bigr)
\cup \set{v} \bigr)
\bigr)$.
Now \firstplayer first deletes all assignments of variables in
$\leftvertexset_{\startpossuffix} \setminus \leftvertexset_{\stoppossuffix}$ from~$\posorig_{\startpossuffix}$
to get~$\alpha_0$.
Since
$\alpha_0 \subseteq \posorig_{\startpossuffix}$
and $\posorig_{\startpossuffix}$ is consistent with~$\possubst'$ by assumption,
$\alpha_0$ is also consistent with~$\possubst'$.
Afterwards, he asks for all variables in
$U' = \leftvertexset_{\stoppossuffix} \setminus \leftvertexset_{\startpossuffix}$.
We need to argue that regardless of how \secondplayer answers, it
holds that \firstplayer reaches a position that is consistent
with~$\possubst'$. This is where the peeling argument in
\reflem{lem:peeling_lemma}
is needed.
As discussed above, by our choice of the closure
$\gamma\bigl( \operatorname{Vars}\bigl( \possubst' \bigr) \bigr)$
(obtained using \reflem{lem:ClosedSet})
we know that the bipartite graph
$\expandergraph'
=
\expandersubgraph{\mathcal G}
{\gamma\bigl( \operatorname{Vars}\bigl( \possubst' \bigr) \bigr)}$
is
a \boundaryexpnodeg{\pebblesk}{1}
and furthermore that for
$U' = \leftvertexset_{\stoppossuffix} \setminus \leftvertexset_{\startpossuffix}$
it holds that
$\setsize{U'}
\leq
\setsize{\leftvertexset_{\stoppossuffix}}
\leq
\Setsize{ \operatorname{Vars}\bigl( \possubst' \bigr)\cup \set{v}}
\leq
\pebblesk
$,
as observed right after
\refdef{def:consistent-positions}.
Hence, we can apply
\reflem{lem:peeling_lemma} to~$\expandergraph'$
and~$U'$ to get an ordered sequence
$u_1,\ldots,u_t$ satisfying
\mbox{$N^{\expandergraph'}(u_i)\setminus
N^{\expandergraph'}(\{u_1,\ldots,u_{i-1}\})\neq \emptyset$}.
We will think of \firstplayer as querying the (at
most~$\pebblesk$) vertices in~$U'$ in this order,
after which he ends up with a position~$\posorig_{\stoppossuffix}$ defined on
the variables~$\leftvertexset_{\stoppossuffix}$.
To argue that the position~$\posorig_{\stoppossuffix}$ obtained in this way is
consistent with~$\possubst'$ independently of how
\secondplayer answers, and is hence contained in $\mathit{Cons}\bigl(
\possubst'\ast v \bigr)$, we show inductively that all positions
encountered during the transition from $\posorig_{\startpossuffix}$
to~$\posorig_{\stoppossuffix}$ are consistent with $\possubst'$.
As already noted, this holds for the position~$\alpha_0$
obtained from~$\posorig_{\startpossuffix}$ by deleting all assignments of
variables in $\leftvertexset_{\startpossuffix} \setminus \leftvertexset_{\stoppossuffix}$.
For the induction step,
let $i\geq 0$ and
assume inductively
that the current position~$\alpha_i$ over
\begin{equation}
\label{eq:transitional-position-domain}
U_i
:=
(\leftvertexset_{\startpossuffix} \cap \leftvertexset_{\stoppossuffix}) \cup
\setdescr{u_j}{1\leq j \leq i}
\end{equation}
is consistent with $\possubst'$.
Now \firstplayer asks about the variable~$u_{i+1}$ and \secondplayer
answers with a value~$a_{i+1}$.
Since $\alpha_i$ is consistent with $\possubst'$, there is an
assignment $\possubst_{\mathrm{ext}}\supseteq \possubst'$ that sets the
variables $v\in N(\operatorname{Vars}(\alpha_i))$ to the right values such
that
$\alpha_i(u) = \bigoplus_{v\in N(u)}\possubst_{\mathrm{ext}}(v)$
for all
$u \in \operatorname{Vars}(\alpha_i)$.
By our ordering of
%
\ifthenelse{\boolean{conferenceversion}}
{$U' = \set{u_1,\ldots,u_t}$}
{$U' = \Set{u_1,\ldots,u_t}$}
chosen above
we know that $u_{i+1}$ has at least one
neighbour
on the right-hand side~$V$
that is neither contained in
\mbox{$N^{\mathcal G}(U_i)
=
N^{\mathcal G}(\operatorname{Vars}(\alpha_i))$}
nor in the domain of~$\possubst'$.
Hence, regardless of which value~$a_{i+1}$ \secondplayer chooses
for her answer
we can extend the assignment~$\possubst_{\mathrm{ext}}$ to the
variables
$N^{\mathcal G}(u_{i+1})\setminus
\bigl( N^{\mathcal G}\bigl( \operatorname{Vars}(\alpha_i) \bigr)\cup
\operatorname{Vars}\bigl( \possubst' \bigr) \bigr)$
in such a way that
$\bigoplus_{v\in
N(u_{i+1})}\possubst_{\mathrm{ext}}(v)=a_{i+1}$.
This shows that $\alpha_{i+1}$ defined over
$U_{i+1}
=
(\leftvertexset_{\startpossuffix} \cap \leftvertexset_{\stoppossuffix}) \cup
\setdescr{u_j}{1\leq j \leq i+1}
$
is consistent with~$\possubst'$.
Subclaim~\ref{subclaim:statement-b} now follows by the induction
principle.
\end{subproof}
\ifthenelse{\boolean{conferenceversion}}{}
{Before proving Subclaim~\ref{subclaim:statement-a},
we should perhaps point out why this claim is not vacuous.
Recalling the discussion just below
\reflem{lem:ClosedSet}, this is because the condition
$V_1 \subseteq V_2$
does not allow us to conclude that
$\gamma(V_1) \subseteq \gamma(V_2)$.
\begin{subproof}%
[Proof of Subclaim~\ref{subclaim:statement-a}]
The proof is similar to that of
Subclaim~\ref{subclaim:statement-b} above.
\firstplayer{} starts with an
assignment~$\posorig_{\startpossuffix}\in\mathit{Cons}(\beta)$ and wants to reach
some
assignment in~$\mathit{Cons}(\possubst')$
for $\possubst' \subseteq \beta$
within $\pebblesk$~rounds.
By assumption,
$\posorig_{\startpossuffix}$ is consistent with~$\beta$ and
therefore (since $\possubst'\subseteq \beta$) is also
consistent with~$\possubst'$.
\firstplayer deletes all assignments from the domain
$\leftvertexset_{\startpossuffix}=\operatorname{Ker}(\gamma(\operatorname{Vars}(\beta)))$ of $\posorig_{\startpossuffix}$
that do not occur in the domain
$\leftvertexset_{\stoppossuffix}=\operatorname{Ker}(\gamma(\operatorname{Vars}(\possubst')))$ of positions in
$\mathit{Cons}(\possubst')$,
resulting in the position
$\alpha_0 \subseteq \posorig_{\startpossuffix}$ that is consistent
with~$\possubst'$.
Next, he applies
\reflem{lem:peeling_lemma} to
$\expandergraph'
=
\expandersubgraph{\mathcal G}{\gamma(\operatorname{Vars}(\beta))}$
to obtain an ordering of the remaining variables
$\leftvertexset_{\stoppossuffix}\setminus\leftvertexset_{\startpossuffix}$.
In the same way as above he can query the variables in this order
while maintaining the invariant that the current position is
consistent with~$\possubst'$.
\end{subproof}
}%
Combining
Subclaims~\ref{subclaim:ihyp}, \ref{subclaim:statement-a}
and~\ref{subclaim:statement-b},
we conclude that
\firstplayer wins from every position
\mbox{$\alpha\in\mathit{Cons}(\beta)$}
within $2\pebblesk i$~rounds.
This concludes the proof of Claim~\ref{claim:inductionGameStyle}.
\end{proof}
We are finally in a position to give a formal proof of
\reflem{lem:hardnessCondensationXOR}.
\begin{proof}[Proof of Lemma~\ref{lem:hardnessCondensationXOR}]
Let $\expanderdegree_0 \in \Nplus$ be the constant in
\reflem{lem:expanderexistnew}.
Suppose we are given an $m$-variable \mbox{$p$-XOR}
formula~$\ensuremath{F}$ and parameters $\pebblesl_{\parammin}$,
$\pebblesl_{\parammax}$, $r$, $\Delta$
satisfying the conditions in
\reflem{lem:hardnessCondensationXOR}
that
$
\pebblesl_{\parammax}/\pebblesl_{\parammin}
\geq
\Delta
\geq
\expanderdegree_0
$
and
$(2\pebblesl_{\parammax}
\Delta)^{2\Delta} \leq m$.
Fix
$\pebblesk := \pebblesl_{\parammax}$
and
$s := 2\pebblesl_{\parammax}$.
Since
$(s\Delta)^{2\Delta} \leq
m$ and $\Delta\geq\expanderdegree_0$,
we appeal to \reflem{lem:expanderexistnew} to obtain an
\nmboundaryexp{\leftsize}{\ceiling{{\leftsize}^{3/\Delta}}}
{\expanderdegree}{\expansionguarantee}{2}{}
$\mathcal G = (U \overset{.}{\cup}
V,E)$,
and applying \XORification with respect to~$\mathcal G$
we construct the formula
$\ensuremath{H} := \ensuremath{F}[\expandergraph]$.
Clearly,
$\ensuremath{H}$
is an $(\expanderdegree p)$\nobreakdash-XOR\xspace{} formula
with
$\Ceiling{m^{3/\Delta}}$ variables.
We want to prove that \firstplayer has a winning strategy for the
$(\Delta\pebblesl_{\parammin})$-pebble game
on~$\ensuremath{H}$
as guaranteed by
\reflem{lem:hardnessCondensationXOR}\ref{item:condensationrhs-a},
but that he does not win the
\mbox{$\pebblesl_{\parammax}$-pebble} game on~$\ensuremath{H}$
within
${r}/{(2\pebblesl_{\parammax})}$~rounds
as stated in
\reflem{lem:hardnessCondensationXOR}\ref{item:condensationrhs-b}.
For the upper bound
in \reflem{lem:hardnessCondensationXOR}\ref{item:condensationrhs-a},
we recall that
\firstplayer{} has a winning strategy in the
\mbox{$\pebblesl_{\parammin}$-pebble} game on~$\ensuremath{F}$
by assumption~\ref{item:condensationlhs-a} in the lemma.
He can use this strategy to win the
$(\Delta\pebblesl_{\parammin})$-pebble game on~$\ensuremath{H}$ as
follows.
Whenever his strategy tells him to ask for a variable $u\in
U = \operatorname{Vars}(\ensuremath{F})$, he instead asks for the at
most $\Delta$ variables in
$N(u)\subseteq V = \operatorname{Vars}(\ensuremath{H})$
and assigns to $u$ the value
that corresponds to the parity of the answers \secondplayer{}
gives for~%
$N(u)$.
In this way, he can simulate his strategy on $\ensuremath{F}$
until he reaches
an assignment that contradicts an XOR\xspace{} clause
$c$ from $\ensuremath{F}$.
As the corresponding assignment of the variables $\setdescr{v}{v\in
N(u),u\in \operatorname{Vars}(c)}$ falsifies the constraint
$\subst{c}{\mathcal G}\in \ensuremath{H}$,
at this point \firstplayer wins the
$(\Delta\pebblesl_{\parammin})$-pebble game on~$\ensuremath{H}$.
The lower bound
in \reflem{lem:hardnessCondensationXOR}\ref{item:condensationrhs-b}
follows immediately
from \reflem{lem:HardnessCondensationGameStyle}.
By assumption~\ref{item:condensationlhs-b}
in \reflem{lem:hardnessCondensationXOR},
\firstplayer does not win the
\mbox{$\pebblesl_{\parammax}$-pebble} game on~$\ensuremath{F}$
within $r$~rounds.
Since
$\mathcal G$
is an \nmboundaryexpnodeg{\leftsize}{\rightsize}{2 \pebblesk}{2},
\reflem{lem:HardnessCondensationGameStyle}
says that he does not win the
$\pebblesl_{\parammax}$-pebble game on
$\ensuremath{H} = \ensuremath{F}[\expandergraph]$
within
${r}/{(2\pebblesl_{\parammax})}$~rounds either.
This concludes the proof of
\reflem{lem:hardnessCondensationXOR}.
\end{proof}
%
\makeatletter{}%
\section{Concluding Remarks}
\label{sec:conclusion}
In this paper we prove an
$n^{\bigomega{k/\log k}}$ lower bound
on the minimal quantifier depth of $\ensuremath{\mathsf L\!^{\pebblesk}}$ and $\ensuremath{\mathsf C^{\pebblesk}}$ sentences that
distinguish two finite $n$-element relational structures, nearly
matching the trivial $n^{k-1}$ upper bound.
By the known connection to the $\wldim$-dimensional Weisfeiler--Leman
algorithm,
this result implies
near-optimal
$n^{\bigomega{\wldim/\log \wldim}}$ lower bounds
also on the number of refinement steps of this algorithm.
The key technical ingredient in our proof is the hardness condensation
technique recently introduced by Razborov~\cite{Razborov16NewKind} in
the context of proof complexity, which we translate into the language
of finite variable logics and use to reduce the domain size of
relational structures while maintaining the minimal quantifier depth
required to distinguish them.
An obvious open problem is to improve
our
lower bound. One way to
achieve this would be to strengthen the lower bound on the number of
rounds in the $k$-pebble game on \mbox{$3$-XOR\xspace{}} formulas
in \reflem{lem:pyramids} from
$n^{ 1 / \log k}$
to $n^\delta$ for some
$\delta \gg 1 / \log k$.
By the hardness condensation lemma this would directly improve our
lower bound from $n^{\bigomega{\wldim/\log\wldim}}$ to
$n^{\bigomega{\delta\wldim}}$.
The structures on which our lower bounds hold are $n$-element
relational structures of arity~$\bigtheta{\wldim}$ and
size~$n^{\bigtheta{\wldim}}$. We would have liked to have these
results also for structures of bounded arity, such as graphs.
However, the increase of the arity is inherent in the method of
amplifying hardness by making XOR\xspace substitutions.
An optimal lower bound of $n^{\bigomega{k}}$ on the
quantifier depth required to distinguish two \mbox{$n$-vertex} graphs has
been obtained by the first author\xspace in an earlier work~\cite{Berkholz.2014}
for the \emph{existential-positive fragment} of~$\ensuremath{\mathsf L\!^{\pebblesk}}$. Determining
the quantifier rank of full $\ensuremath{\mathsf L\!^{\pebblesk}}$ and $\ensuremath{\mathsf C^{\pebblesk}}$ on \mbox{$n$-vertex}
graphs remains an open problem.
\ifthenelse{\boolean{conferenceversion}}
{Another open question}
{Another open question related to our results}
concerns the complexity of finite variable equivalence for
non-constant $k$. What is the complexity of deciding, given
two structures and a parameter~$k$, whether
the structures
are equivalent in $\ensuremath{\mathsf L\!^{\pebblesk}}$ or~$\ensuremath{\mathsf C^{\pebblesk}}$? As this problem can be solved
in time $(\|\mathcal A\|+\|\mathcal B\|)^{O(k)}$, it is in \complclassformat{EXPTIME}
if $k$ is part of the input.
It has been conjectured
that this problem is
\mbox{\complclassformat{EXPTIME}}-complete~\cite{GraedelKolaitisLibkinMarxSpencerVardiWeinstein.2007},
but it is not even known whether it is \mbox{\complclassformat{NP}-hard.}
Note that the quantifier depth is connected to the computational
complexity of the equivalence problem by the fact that an upper bound
of the form $n^{\bigoh{1}}$ on the quantifier depth for $n$-element structures
would have implied that testing equivalence is in \complclassformat{PSPACE}{}.
Hence, our lower bounds on the quantifier depth
can be seen as
a necessary requirement for establishing \complclassformat{EXPTIME}-hardness of the
equivalence problem.
%
\makeatletter{}%
\ifthenelse{\boolean{conferenceversion}}
{\acks}
{\section*{Acknowledgements}}
We are very grateful to Alexander~Razborov for patiently explaining
\ifthenelse{\boolean{conferenceversion}}
{his hardness condensation technique}
{the hardness condensation technique in~\cite{Razborov16NewKind}}
during numerous and detailed discussions.
We also want to thank Neil~Immerman for
helping us find relevant references to previous works and explaining
some of the finer details in~\cite{Immerman.1981}.
The first author\xspace wishes to acknowledge useful feedback from the
participants of Dagstuhl Seminar~15511
\emph{The~Graph Isomorphism Problem}.
Finally, we are most indebted to the anonymous \emph{LICS} reviewers
for very detailed comments that helped improve the manuscript
considerably.
Part of the work of the first author\xspace was performed while at KTH Royal
Institute of Technology supported by a fellowship within the
Postdoc-Programme of the German Academic Exchange Service (DAAD).
The research of the second author\xspace was supported by the
European Research Council under the European Union's Seventh Framework
Programme \mbox{(FP7/2007--2013) /} ERC grant agreement no.~279611
and by
Swedish Research Council grants
\mbox{621-2010-4797}
and
\mbox{621-2012-5645}.
%
\label{sec-intro}
It is known that investigations of long-range rapidity correlations give information
about the initial stage of high-energy hadronic interactions \cite{Dumitru08}.
To find a signature of the string fusion and percolation phenomenon \cite{Biro84,Bialas86,BP92,BP93}
in ultrarelativistic heavy-ion collisions,
the study of the correlations between multiplicities
in two separated rapidity intervals
(known as forward-backward multiplicity correlations) was proposed \cite{PRL94}.
that the investigations of the forward-backward correlations involving intensive observables
in forward and backward observation windows as, {\it e.g.}, the event-mean transverse momentum,
enable to suppress the contribution of trivial, so-called, "volume" fluctuations
originating from fluctuations in the number of initial sources (strings)
and to obtain more clear signal on the process of string fusion,
compared to usual forward-backward multiplicity correlations.
In the present work, we explore another way to suppress the contribution of ``volume'' fluctuations,
turning to a more sophisticated correlation observable.
Based on the multiplicities ${n_F^{}}$ and ${n_B^{}}$ in two separated rapidity windows
we study the properties of the so-called strongly intensive observable
\begin{equation}
\label{SigmaFB}
\Sigma({n_F^{}},{n_B^{}})\equiv\frac{\av{n_F^{}}\,\omega_{n_B^{}}+\av{n_B^{}}\,\omega_{n_F^{}}-2\, {\textrm{cov}}({n_F^{}},{n_B^{}})}{\av{n_F^{}}+\av{n_B^{}}} \ ,
\end{equation}
introduced in \cite{GorGaz11}, where
\begin{equation}
\label{cov}
{\textrm{cov}}({n_F^{}},{n_B^{}})\equiv\av{{n_F^{}}{n_B^{}}}-\av{n_F^{}} \av{n_B^{}} \ ,
\end{equation}
and $\omega_{n_F^{}}$ and $\omega_{n_B^{}}$ are the corresponding scaled variances of the multiplicities:
\begin{equation}
\label{om}
\omega_n\equiv \frac{D_n}{\av n} = \frac{\av {n^2}-\av n^2}{\av n} \ .
\end{equation}
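In practice the quantity (\ref{SigmaFB}) is estimated from
event-by-event multiplicities. A minimal Python/numpy sketch of such
an estimator (the notation is ours; population variances are used):
\begin{verbatim}
import numpy as np

def sigma_fb(nF, nB):
    """Estimate the strongly intensive Sigma(nF, nB) defined above
    from arrays of per-event multiplicities."""
    nF, nB = np.asarray(nF, float), np.asarray(nB, float)
    wF, wB = nF.var() / nF.mean(), nB.var() / nB.mean()
    cov = (nF * nB).mean() - nF.mean() * nB.mean()
    return (nF.mean() * wB + nB.mean() * wF - 2 * cov) / (
        nF.mean() + nB.mean())
\end{verbatim}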
In the framework of the model with
color strings as particle emitting sources
we calculate the dependence of the observable (\ref{SigmaFB})
on the string two-particle correlation function,
the width of observation windows and
the rapidity gap between them.
We show that in the case of independent identical strings
the strongly intensive character of this observable is confirmed:
it depends only on the characteristics of an individual string and
is independent of both
the mean number of strings and its fluctuations.
We also analyze the peculiarities of the behaviour of
the strongly intensive observables between multiplicities
of particles with different electric charges,
as well as between multiplicities in two windows separated in rapidity and azimuth.
In the case when the string fusion processes
are taken into account and strings of a few different types are formed,
we find that this observable is equal to a weighted
average of its values for the different string types.
Unfortunately, in this case
the observable becomes dependent, through the weight factors,
on the collision conditions
and, strictly speaking, can no longer be considered
a strongly intensive variable.
For comparison, we also present the results of the calculation
of the considered observable with the PYTHIA event generator.
The paper is organized as follows.
In Section 2 we consider the most simple case
with independent identical strings and symmetric
$2\pi$-azimuth observation windows separated by
a rapidity gap.
In Section 3 we generalize the obtained results for the case
of two acceptance windows separated in rapidity and azimuth.
Section 4 is devoted to the calculation of
the strongly intensive observables between multiplicities
of particles with different electric charges.
In Section 5 the influence of the string fusion processes
is analysed.
Section 6 is devoted to the results obtained
with the PYTHIA event generator.
\section{$\Sigma$ in the model with independent identical strings}
\label{sec-model}
We start our consideration with the simple case of
symmetric $2\pi$-azimuth observation windows $\delta\eta_{\!F}=\delta\eta_{\!B}\equiv\delta\eta$ separated by
a rapidity gap $\eta_{gap}$, which corresponds to the distance $\Delta\eta=\eta_{gap}+\delta\eta$ between their centers.
It is clear that for a symmetric reaction we have
\begin{equation}
\label{n}
\av{n_F^{}}=\av{n_B^{}}\equiv \av n \ , \hs1
\omega_{n_F^{}}=\omega_{n_B^{}}\equiv \omega_n \end{equation}
and the expression (\ref{SigmaFB}) can be simplified to
\begin{eqnarray}
&&{\Sigma(\nF,\nB)}={\omega_n} - {\textrm{cov}}({n_F^{}},{n_B^{}})/\av n=\nonumber\\
&&=\frac{D_n-{\textrm{cov}}({n_F^{}},{n_B^{}})}{\av n}= \frac{\av {n^2}-\av{{n_F^{}}{n_B^{}}}}{\av n} \label{Sigma-s}\ .
\end{eqnarray}
In the framework of the model with independent identical strings \cite{PLB00}
we suppose that the number of strings, $N$, fluctuates event by event
around some mean value, $\av N$, with some scaled variance, ${\omega_N}={D_N}/{\av{N}}$.
We expect that intensive observables should not depend on $\av N$, and
that strongly intensive observables should depend neither on $\av N$ nor on ${\omega_N}$.
The fragmentation of each string contributes $\mu_F$ and $\mu_B$ charged particles
to the forward and backward observation rapidity windows,
$\delta\eta_F$ and $\delta\eta_B$, respectively;
these numbers fluctuate around some mean values, $\av {\mu_F}$ and $\av {\mu_B}$,
with some scaled variances, $\omega_{\mu_F}={D_{\mu_F}}/{\av{\mu_F}}$
and $\omega_{\mu_B}={D_{\mu_B}}/{\av{\mu_B}}$.
Similarly to (\ref{SigmaFB}), we can also formally introduce
${\Sigma (\muF, \muB)}$, the strongly intensive observable for the multiplicities
produced by the decay of a single string:
\begin{equation}
\label{SigmuFB}
\Sigma({\mu_F},{\mu_B})\equiv\frac{\av{\mu_F}\,\omega_{\mu_B}
+\av{\mu_B}\,\omega_{\mu_F}-2\, {\textrm{cov}}({\mu_F},{\mu_B})}{\av{\mu_F}+\av{\mu_B}} \ .
\end{equation}
For a symmetric reaction and symmetric observation windows,
\begin{equation}
\label{avmu}
\av {\mu_F}=\av {\mu_B}\equiv\av {\mu}, \hs1 \omega_{\mu_F}=\omega_{\mu_B}\equiv\omega_\mu \ , \end{equation}
it also can be simplified to
\begin{eqnarray}
&&{\Sigma (\muF, \muB)}={\omega_\mu} - {\textrm{cov}}({\mu_F},{\mu_B})/\av \mu= \nonumber\\
&&=\frac{D_\mu-{\textrm{cov}}({\mu_F},{\mu_B})}{\av \mu}=
\frac{\av {\mu^2}-\av{{\mu_F}{\mu_B}}}{\av \mu} \label{Sigmu-s}\ .
\end{eqnarray}
It is clear that in this model
\begin{equation}
\label{avn}
\av n=\av N \av{\mu}=\av N \mu_0 \, \delta\eta \ , \hs1 \omega_n=\omega_{\mu}+{\omega_N} \av{\mu} \ ,
\end{equation}
so we see that $\omega_n$ is an intensive, but not a strongly intensive, observable.
We have also assumed translation invariance in rapidity;
here $\mu_0$ is the rapidity density of particles produced from a single string.
For us it is important to remember that the scaled variance, $\omega_n$,
of the number of particles produced in the rapidity interval $\delta\eta$
is determined by the two-particle correlation function $C_2(\eta_1\!-\!\eta_2)$
\cite{CapKrz78,NPA15}:
\begin{equation}
\label{omn}
\omega_n=1+ \av n \, I_{FF} \ ,
\end{equation}
where
\begin{equation}
\label{IFF}
I_{FF}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_F} \!\!\!d\eta_2\
C_2(\eta_1\!-\!\eta_2) \ .
\end{equation}
Here the two-particle correlation function is defined by the standard way (see, {\it e.g.}, \cite{Voloshin02}):
\begin{equation}
\label{C2def}
C_2(\eta_1,\eta_2)\equiv\frac{\rho_2 (\eta_1,\eta_2)}{\rho(\eta_1) \rho(\eta_2)}-1 \ ,
\end{equation}
and
\begin{equation}
\label{rho12}
\rho (\eta)\equiv\frac{dN_{ch}}{d\eta}
\ , \hs1
\rho_2 (\eta_1, \eta_2)\equiv\frac{d^2N_{ch}}{d\eta_1\,d\eta_2} \ .
\end{equation}
In the case of translation invariance in rapidity,
we have the uniform distribution
\begin{equation}
\label{rho1}
\rho (\eta)=\rho_0 \ , \hs1 \rho_2 (\eta_1, \eta_2)=\rho_2 (\eta_1\!-\! \eta_2)
\end{equation}
and the correlation function $C_2(\eta_1\!-\! \eta_2)$ depending only on a difference of rapidities.
Similarly we can also introduce
the two-particle correlation function of a single string
\begin{equation}
\label{Lambda}
\Lambda(\eta_1,\eta_2)\equiv\frac{\lambda_2 (\eta_1,\eta_2)}{\lambda(\eta_1) \lambda(\eta_2)}-1
\end{equation}
to describe the correlation between particles produced from the same string,
where $\lambda(\eta)$ and $\lambda_2 (\eta_1,\eta_2)$ are the corresponding single and
double distributions.
For the translation invariant case
\begin{equation}
\label{rho3}
\lambda (\eta)=\mu_0 \ , \hs1 \lambda_2 (\eta_1, \eta_2)=\lambda_2 (\eta_1\!-\! \eta_2)
\end{equation}
and similarly we have
\begin{equation}
\label{omn1}
\omega_\mu=1+ \av\mu\, J_{FF} \ ,
\end{equation}
where
\begin{equation}
\label{JFF}
J_{FF}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_F} \!\!\!d\eta_2\
\Lambda(\eta_1\!-\!\eta_2) \ ,
\end{equation}
and the string two-particle correlation function $\Lambda (\eta_1\!-\! \eta_2)$ depends
only on a difference of rapidities.
Note that from formula (\ref{omn1}) we see that the so-called robust variance \cite{Voloshin02},
\begin{equation}
\label{Rob}
R_\mu\equiv\frac{\omega_\mu-1}{\av\mu}= J_{FF} \ ,
\end{equation}
depends only on the string correlation function $\Lambda(\eta_1\!-\!\eta_2)$.
Similar formulae are valid for the corresponding covariances \cite{NPA15}:
\begin{equation}
\label{nFnB}
\frac{{\textrm{cov}}({n_F^{}},{n_B^{}})}{\av{n_F^{}}\av{n_B^{}}} = I_{FB} \ ,
\end{equation}
where
\begin{equation}
\label{IFB}
I_{FB}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_B} \!\!\!d\eta_2\
C_2(\eta_1\!-\!\eta_2) \ ,
\end{equation}
where the integrations over $\eta_1$ and $\eta_2$ now run over the different
rapidity intervals $\delta\eta_F$ and $\delta\eta_B$, respectively. For one string we also have
\begin{equation}
\label{muFmuB}
\frac{{\textrm{cov}}({\mu_F},{\mu_B})}{\av{\mu_F}\av{\mu_B}} = J_{FB} \ ,
\end{equation}
where
\begin{equation}
\label{JFB}
J_{FB}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_B} \!\!\!d\eta_2\
\Lambda(\eta_1\!-\!\eta_2) \ .
\end{equation}
In the considered model with independent identical strings
we can express the observed correlation function $C_2(\eta_1,\eta_2)$
through the correlation function of a single string $\Lambda(\eta_1,\eta_2)$
\cite{NPA15}:
\begin{equation}
\label{C2Lam}
C_2(\eta_1,\eta_2)=\frac{\Lambda(\eta_1,\eta_2)+{\omega_N}}{\av N} \ .
\end{equation}
This immediately leads to
\begin{equation}\label{integrals_homo_nf}
I_{FB}=\frac{J_{FB}+{\omega_N}}{\av N} \ ,
\hs 1
I_{FF}=\frac{J_{FF}+{\omega_N}}{\av N}
\end{equation}
and for symmetric forward-backward windows
\begin{equation}\label{omn3}
\omega_n=D_n/\av n= 1+\av\mu\, [J_{FF}+{\omega_N}] \ ,
\end{equation}
\begin{equation}\label{nFnB3}
{\textrm{cov}}({n_F^{}},{n_B^{}})/\av n= \av\mu\, [J_{FB}+{\omega_N}] \ .
\end{equation}
We have also taken into account that $\rho_0=\av N \mu_0$.
Then by (\ref{Sigma-s}), (\ref{omn}) and (\ref{nFnB}) we obtain
\begin{eqnarray}
&&{\Sigma(\nF,\nB)}=1+\av n\, [I_{FF}-I_{FB}]=\nonumber\\
&&=1+\av\mu\, [J_{FF}-J_{FB}]={\Sigma (\muF, \muB)} \label{SigmaJnfu} \ ,
\end{eqnarray}
where the ${\Sigma (\muF, \muB)}$ is the strongly intensive observable between multiplicities,
produced from decay of a single string, defined by (\ref{SigmuFB}) and (\ref{Sigmu-s}).
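The cancellation of the string-number fluctuations behind this result can be displayed explicitly: by (\ref{Sigma-s}), (\ref{omn3}) and (\ref{nFnB3}),
\begin{equation}
{\Sigma(\nF,\nB)}=\omega_n-\frac{{\textrm{cov}}({n_F^{}},{n_B^{}})}{\av n}
=1+\av\mu\,[J_{FF}+{\omega_N}]-\av\mu\,[J_{FB}+{\omega_N}]
=1+\av\mu\,[J_{FF}-J_{FB}] \ ,
\end{equation}
so that ${\omega_N}$ drops out identically.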
From (\ref{SigmaJnfu}) we indeed see that in the framework of this model
the observable ${\Sigma(\nF,\nB)}$ is strongly intensive:
it is independent of both
the mean number of strings $\av N$ and their fluctuation ${\omega_N}$.
It depends only on the string parameters $\mu_0$ and $\Lambda(\eta_1\!-\!\eta_2)$
and on the width of the observation windows, $\av\mu=\mu_0\delta\eta$.
In contrast, the scaled variance ${\omega_n}$ is an intensive, but not a strongly intensive, observable,
because it is independent of
the mean number of strings $\av N$ but depends on their fluctuation through ${\omega_N}$.
We should note that other quantities that characterize correlations between multiplicities
in two windows such as a correlation coefficient \cite{Uhlig78,Derrick86}
or a variance of asymmetry \cite{Back06} are also not strongly intensive and,
therefore, are more sensitive to experimental event selection procedures.
For small observation windows of width $\delta\eta\ll\eta^{}_{corr}$,
where $\eta^{}_{corr}$ is the characteristic correlation length for particles produced from the same string,
the formulae (\ref{omn3})-(\ref{SigmaJnfu}) take an especially
simple form:
\begin{equation}\label{omn_small_nfu}
\omega_n=D_n/\av n= 1+\mu_0\delta\eta\, [\Lambda(0)+{\omega_N}] \ , \end{equation}
\begin{equation}\label{nFnB_small_nfu}
{\textrm{cov}}({n_F^{}},{n_B^{}})/\av n= \mu_0\delta\eta\, [\Lambda(\Delta\eta)+{\omega_N}] \ ,
\end{equation}
\begin{equation}\label{Sigma_small_nfu}
{\Sigma(\nF,\nB)}= 1+\mu_0\delta\eta\, [\Lambda(0)-\Lambda(\Delta\eta)]={\Sigma (\muF, \muB)} \ ,
\end{equation}
where $\Delta\eta=\eta_F\!-\!\eta_B$ is the distance between the centers of the forward
and backward observation windows.
The last formula displays the main properties of ${\Sigma(\nF,\nB)}$ expected
in this model. Starting from the value 1,
it increases with the distance $\Delta\eta$ between the centers of the observation windows,
since the two-particle correlation function of a string, $\Lambda(\Delta\eta)$,
decreases with $\Delta\eta$. The magnitude of the increase of $\Sigma(\Delta\eta)$
with $\Delta\eta$ is proportional to the width
of the observation windows, $\delta\eta$.
A more detailed description of ${\Sigma(\nF,\nB)}$ requires knowledge of
the two-particle correlation function of a string, $\Lambda(\Delta\eta)$.
In \cite{NPA15}, in the framework of the model with independent identical strings,
this function was fitted to the experimental pp ALICE data
on forward-backward correlations between multiplicities
in windows separated in rapidity and azimuth
at three initial energies, together with the value of the scaled variance of the number of strings, ${\omega_N}$
(see Table \ref{param}):
\begin{eqnarray}
&&\Lam{\Delta\eta,\Delta\phi}=\Lambda_1 e^{-\frac{|\Delta\eta|}{\eta_1}} e^{-\frac{\Delta\phi^2}{\varphi^2_1}} +\nonumber\\
&&+\Lambda_2 \left(e^{-\frac{|\Delta\eta-\eta_0|}{\eta_2}}
+ e^{-\frac{|\Delta\eta+\eta_0|}{\eta_2}}\right) e^{-\frac{(|\Delta\phi|-\pi)^2}{\varphi^2_2}} \label{Lam_fit}\ ,
\end{eqnarray}
where it was implied that
\begin{equation} \label{f_obl}
|\Delta\phi|\leq\pi \ .
\end{equation}
For $|\Delta\phi| > \pi$ one must periodically continue $\Lam{\Delta\eta,\Delta\phi}$ via $\Delta\phi\to\Delta\phi+2\pi k$.
With this continuation, $\Lam{\Delta\eta,\Delta\phi}$ satisfies the following requirements:
\begin{eqnarray}
&&\Lam{-\Delta\eta,\Delta\phi}=\Lam{\Delta\eta ,\Delta\phi} \ ,\label{Lam_sym1}\\
&&\Lam{\Delta\eta ,-\Delta\phi}=\Lam{\Delta\eta ,\Delta\phi} \ , \label{Lam_sym2}\\
&&\Lam{\Delta\eta ,\Delta\phi+2\pi k}=\Lam{\Delta\eta ,\Delta\phi} \ . \label{Lam_sym3}
\end{eqnarray}
\begin{table}[!tb]
\caption[dummy]{\label{param}
The values of the parameters in formula (\ref{Lam_fit}) \cite{NPA15}
for the two-particle correlation function of a string, $\Lambda(\Delta\eta, \Delta\phi)$,
fitted to the experimental pp ALICE data
on forward-backward correlations between multiplicities
in windows separated in rapidity and azimuth
at three initial energies \cite{ALICE15},
together with the value of the scaled variance of the number of strings, ${\omega_N}$.
}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{$\sqrt{s}$,\ TeV}&0.9&2.76&7.0 \\
\hline \hline
LRC &$\mu_0\omega_N$&0.7 & 1.4 &2.1\\
\hline \hline
&$\mu_0\Lambda_1$&1.5 & 1.9 & 2.3 \\
&$\eta_1$&0.75 &0.75&0.75 \\
&$\varphi_1$&1.2 &1.15&1.1 \\
\cline{2-5}
SRC&$\mu_0\Lambda_2$&0.4 &0.4&0.4 \\
&$\eta_2$&2.0 &2.0&2.0 \\
&$\varphi_2$&1.7 &1.7&1.7 \\
\cline{2-5}
&$\eta_0$&0.9 &0.9&0.9 \\
\hline
\end{tabular}
\end{table}
Recall that the comparison of the model with experimental data in \cite{NPA15} makes it possible
to fix only the products of the parameters, $\mu_0\Lambda_1$, $\mu_0\Lambda_2$ and $\mu_0{\omega_N}$.
Our two-particle correlation functions (\ref{C2def}) and (\ref{Lambda}),
defined for $2\pi$-azimuth observation windows,
can be obtained by a simple integration over azimuth:
\begin{eqnarray}
&&C_2(\Delta\eta) =
\frac{1}{\pi}\int_{0}^{\pi} \!\! C_2(\Delta\eta,\Delta\phi) \, d\Delta\phi \ , \label{int_azim1}\\
&&\Lambda(\Delta\eta) =
\frac{1}{\pi}\int_{0}^{\pi} \!\! \Lambda(\Delta\eta,\Delta\varphi) \, d\Delta\varphi \ .\label{int_azim2}
\end{eqnarray}
So, by integrating the fit (\ref{Lam_fit}), we find the $\Lambda(\Delta\eta)$ presented in Fig.~\ref{Lam-exp}.
The dependencies obtained in this figure for the three initial energies are well
approximated by the exponential
\begin{equation} \label{Lam_exp}
\Lambda(\Delta\eta)=\Lambda_0 \exp(-{|\Delta\eta|}/{\eta^{}_{corr}}) \ ,
\end{equation}
with the parameters presented in table~\ref{exp-par}.
We see that the correlation length, $\eta^{}_{corr}$, decreases with increasing collision energy.
This can be interpreted as a signal of an increase with energy of the admixture
of strings of a new type, fused strings, in pp collisions (see below).
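As an illustrative estimate (the numbers are ours, rounded), inserting the 7 TeV parameters of Table~\ref{exp-par} into (\ref{Sigma_small_nfu}) with the exponential fit (\ref{Lam_exp}), for $\delta\eta=0.2$ one finds
\begin{equation}
{\Sigma(\nF,\nB)}\approx 1+0.2\cdot 0.93\,\big(1-e^{-\Delta\eta/1.33}\big)
\approx 1.10 \hs{0.5} \textrm{at} \hs{0.5} \Delta\eta=1 \ ,
\end{equation}
growing towards the saturation value $1+0.2\cdot 0.93\approx 1.19$ at large $\Delta\eta$.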
\begin{figure}[!tb]
\centering
\includegraphics[width=0.5\textwidth,angle=-90]{lam-exp1.pdf}
\caption{\label{Lam-exp}
The two-particle correlation function of a string $\Lambda(\Delta\eta)$ (integrated over azimuth)
obtained by fitting \cite{NPA15} the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities at three initial energies: 0.9, 2.76 and 7 TeV
(the dashed lines) and the corresponding exponential fits (\ref{Lam_exp}) (solid lines) with the parameters
presented in table \ref{exp-par}.
}
\end{figure}
\begin{table}[!tb]
\caption[dummy]{\label{exp-par}
The value of the parameters in formula (\ref{Lam_exp})
for the two-particle correlation function of a string $\Lambda(\Delta\eta)$
obtained by fitting \cite{NPA15} the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities at three initial energies
(see Fig.~\ref{Lam-exp}).
}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\sqrt{s}$,\ TeV&0.9&2.76&7.0 \\
\hline \hline
$\mu_0\Lambda_0$&0.73 & 0.83 &0.93\\
$\eta^{}_{corr}$ &1.52 & 1.43 &1.33\\
\hline
\end{tabular}
\end{table}
\begin{figure}[!tb]
\centering{
\includegraphics[width=0.5\textwidth,angle=0]{Sigma-l.pdf}
}
\caption{\label{Sigma-pp}
The strongly intensive observable, ${\Sigma(\nF,\nB)}$, between multiplicities
in two small pseudorapidity windows (of the width $\delta\eta=$ 0.2 and 0.4)
as a function of the distance between window centers, $\Delta\eta$, calculated
in the model with independent identical strings using
the two-particle correlation function of a string $\Lambda(\Delta\eta)$ (see Fig.\ref{Lam-exp})
obtained by fitting \cite{NPA15} the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities at three initial energies: 0.9, 2.76 and 7 TeV.
}
\end{figure}
The results of the calculation of the strongly intensive observable ${\Sigma(\nF,\nB)}$
by formula (\ref{SigmaJnfu}) with this two-particle correlation function,
for two widths of the observation windows, $\delta\eta=0.2$ and $0.4$,
are presented in Fig.~\ref{Sigma-pp} for the three initial energies: 0.9, 2.76 and 7 TeV.
By formulae (\ref{Sigma-s}) and (\ref{omn_small_nfu}-\ref{Sigma_small_nfu})
we can understand the behaviour of ${\Sigma(\nF,\nB)}$ in this figure as follows.
Formula (\ref{Sigma-s}) shows that,
for a symmetric reaction and symmetric observation windows,
${\Sigma(\nF,\nB)}$ is proportional to the difference
between the variance $D_{n_F}$ and the covariance ${\textrm{cov}}({n_F^{}},{n_B^{}})$.
It is important to remember that both of them
are determined by the string two-particle correlation function $\Lambda$
and the scaled variance of the number of strings, ${\omega_N}$
(see formulae (\ref{omn_small_nfu}) and (\ref{nFnB_small_nfu})).
In particular, in the absence of correlations between particles
produced from a given source,
the multiplicity distribution
from such a source will be Poissonian (${\omega_\mu}=1$, see formula (\ref{omn1})).
First of all, we see that in ${\Sigma(\nF,\nB)}$,
which by (\ref{Sigma-s}) is the difference between $D_n/\av n$ and ${\textrm{cov}}({n_F^{}},{n_B^{}})/\av n$,
the contributions from the variance of the number of strings, ${\omega_N}$,
cancel each other (see formulae (\ref{omn3}), (\ref{nFnB3})
or (\ref{omn_small_nfu}), (\ref{nFnB_small_nfu})),
which reflects the strongly intensive character of this quantity.
Moreover, by (\ref{omn_small_nfu}) we see that
at small distances between the observation windows, $\Delta\eta\ll\eta^{}_{corr}$,
the contribution, $\mu_0\delta\eta\Lambda(0)$, of the two-particle correlations to ${\omega_n}$
is compensated by their contribution, $\mu_0\delta\eta\Lambda(\Delta\eta)$, to ${\textrm{cov}}({n_F^{}},{n_B^{}})/\av n$,
and ${\Sigma(\nF,\nB)}$ is equal to 1.
At large distances between the observation windows, $\Delta\eta\gg\eta^{}_{corr}$,
by formula (\ref{Lam_exp}) the two-particle correlation function of a string, $\Lambda(\Delta\eta)$,
goes to zero, and
${\Sigma(\nF,\nB)}$ saturates at ${\omega_\mu}=1+\mu_0\delta\eta\Lambda(0)$. So we have
\begin{eqnarray}
&&{\Sigma(\nF,\nB)}\to1 \hs{0.5} \textrm{at} \hs{0.5} \Delta\eta\ll\eta^{}_{corr} \ , \label{limit1-1}\\
&&{\Sigma(\nF,\nB)}\to{\omega_\mu} \hs{0.5} \textrm{at} \hs{0.5} \Delta\eta\gg\eta^{}_{corr} \label{limit1-2}\ .
\end{eqnarray}
Note that ${\omega_\mu}$ increases with the width, $\delta\eta$, of the observation windows; see (\ref{omn_small_nfu}).
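Numerically (our estimate from the fitted values of Table~\ref{exp-par}), the saturation level (\ref{limit1-2}) at 7 TeV is
\begin{equation}
{\omega_\mu}\approx 1+0.2\cdot0.93\approx1.19 \ \ (\delta\eta=0.2) \ , \hs{0.5}
{\omega_\mu}\approx 1+0.4\cdot0.93\approx1.37 \ \ (\delta\eta=0.4) \ ,
\end{equation}
while at 0.9 TeV ($\mu_0\Lambda_0=0.73$) the corresponding levels are about 1.15 and 1.29.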
\begin{figure}[!tb]
\centering{
\includegraphics[width=70mm,angle=0]{sigma900_1.pdf}
}
\centering{
\includegraphics[width=70mm,angle=0]{sigma2760_1.pdf}
}
\centering{
\includegraphics[width=70mm,angle=0]{sigma7000_1.pdf}
}
\caption{\label{Sigma-dd1}
The strongly intensive observable, ${\Sigma(\nF,\nB)}$, between multiplicities
in two windows of the width $\delta\eta=$ 0.2 and $\delta\phi=\pi/4$
as a function of the distance between window centers $\Delta\eta$ in rapidity and $\Delta\phi$ in azimuth,
calculated
in the model with independent identical strings using
the two-particle correlation function of a string $\Lambda(\Delta\eta,\Delta\phi)$ (\ref{Lam_fit})
obtained by a fitting \cite{NPA15} of the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities at three initial energies: 0.9, 2.76 and 7 TeV
in ALICE TPC pseudorapidity acceptance.
}
\end{figure}
In Fig.~\ref{Sigma-pp} we also see
a general increase of ${\Sigma(\nF,\nB)}$ with initial energy;
below, in Section \ref{sec-fusion}, we will show that
in the framework of the string model this can be interpreted
as a signal of an increase with energy of the admixture
of strings of a new type, fused strings, in pp collisions.
\section{$\Sigma$ for windows separated in rapidity and azimuth}
\label{sec-azimuth}
\begin{figure}[!tb]
\centering{
\includegraphics[width=70mm,angle=0]{sigma900_large.pdf}
}
\centering{
\includegraphics[width=70mm,angle=0]{sigma2760_large.pdf}
}
\centering{
\includegraphics[width=70mm,angle=0]{sigma7000_large.pdf}
}
\caption{\label{Sigma-dd2}
The same as in Fig.\ref{Sigma-dd1}, but with a $\pi/8$ azimuthal window width and
extrapolated to a wider interval of the separation between the windows in rapidity:
$\Delta\eta$ up to 4 rapidity units.
}
\end{figure}
All results obtained in Section \ref{sec-model}
can be easily extended to the case of
the strongly intensive observable ${\Sigma(\nF,\nB)}$ between multiplicities
in two acceptance windows separated both in rapidity and azimuth.
In particular, for a symmetric reaction and symmetric small observation windows
of width $\delta\eta$ in rapidity and $\delta\phi$ in azimuth,
with the separation between their centers $\eta_{sep}=\Delta\eta$ and $\phi_{sep}=\Delta\phi$
we find in the model with independent identical strings:
\begin{equation}\label{Sigmadd}
{\Sigma(\nF,\nB)}
=1+ \frac{\delta\eta\,\delta\phi}{2\pi} \mu_0\ [\Lambda(0,0)-\Lambda(\Delta\eta,\Delta\phi)] \ ,
\end{equation}
which is a generalization of the formula (\ref{Sigma_small_nfu}).
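As a rough orientation (our estimate, neglecting the small away-side contribution to $\Lambda(0,0)$ and the residual correlations at large separations), for $\delta\eta=0.2$ and $\delta\phi=\pi/4$ the prefactor in (\ref{Sigmadd}) is $\delta\eta\,\delta\phi/(2\pi)=0.025$, so with the 7 TeV parameters of Table~\ref{param}, for which $\mu_0\Lambda(0,0)\approx \mu_0\Lambda_1 = 2.3$, the plateau reached when $\Lambda(\Delta\eta,\Delta\phi)$ becomes negligible is
\begin{equation}
{\Sigma(\nF,\nB)}\approx 1+0.025\cdot 2.3\approx 1.06 \ .
\end{equation}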
If we now again use for
the two-particle correlation function of a string, $\Lambda(\Delta\eta,\Delta\phi)$, the approximation
(\ref{Lam_fit}) suggested in \cite{NPA15},
with the parameters (see Table \ref{param}) fitted to the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities
in windows separated in rapidity and azimuth
at three initial energies, then we find the behaviour of ${\Sigma(\nF,\nB)}$
presented in Fig.\ref{Sigma-dd1} for the ALICE TPC acceptance and
in Fig.\ref{Sigma-dd2} extrapolated to a wider rapidity interval.
The explanation of this behaviour of ${\Sigma(\nF,\nB)}$
on the basis of formula (\ref{Sigmadd}) is
exactly the same as at the end of Section \ref{sec-model}.
\section{$\Sigma$ with charges}
\label{sec-charge}
In Section \ref{sec-model} we introduced the strongly intensive observable $\Sigma$ based on
the multiplicities of all charged hadrons measured in two pseudorapidity intervals.
Now we consider different combinations of electric charges in these windows
and similarly to formula (\ref{SigmaFB})
we define $\Sigma(n_{F}^{+},n_{B}^{+})$, $\Sigma(n_{F}^{-},n_{B}^{-})$, $\Sigma(n_{F}^{+},n_{B}^{-})$
and $\Sigma(n_{F}^{-},n_{B}^{+})$.
For a symmetric reaction and symmetric observation windows we have
\begin{equation}
\label{sym}
\av{n_F^{+}}=\av{n_B^{+}}\equiv\av {n^+} \ , \hs{0.5}
\omega_{{n_F^{+}}}=\omega_{{n_B^{+}}}\equiv \omega_{n^+}
\end{equation}
and the same for $n^-$. In this case we have also
\begin{eqnarray}
&&{\textrm{cov}}({n_F^{+}},{n_F^{-}})={\textrm{cov}}({n_B^{+}},{n_B^{-}}) \ , \label{symcov-1}\\
&&{\textrm{cov}}({n_F^{+}},{n_B^{-}})={\textrm{cov}}({n_F^{-}},{n_B^{+}}) \ , \label{symcov-2}
\end{eqnarray}
and the definitions can be reduced to
\begin{equation}\label{Sigma-plusplus-def}
\Sigma(n_{F}^{+},n_{B}^{+})=\omega_{n^{+}}-\frac{{\textrm{cov}}(n_{F}^{+},n_{B}^{+})}{\langle n^{+}\rangle} \ ,
\end{equation}
\begin{equation}\label{Sigma-minusminus-def}
\Sigma(n_{F}^{-},n_{B}^{-})=\omega_{n^{-}}-\frac{{\textrm{cov}}(n_{F}^{-},n_{B}^{-})}{\langle n^{-}\rangle} \ ,
\end{equation}
\begin{eqnarray}
&&\Sigma(n_{F}^{+},n_{B}^{-})=\Sigma(n_{F}^{-},n_{B}^{+})=\nonumber\\
&&=\frac{\av {n^+} \omega_{n^-} + \av {n^-} \omega_{n^+}-2\,{\textrm{cov}}({n_F^{+}},{n_B^{-}})}
{\langle n \rangle} \ .\label{charges}
\end{eqnarray}
We can also introduce an additional strongly intensive observable
that measures the correlation between the multiplicities
of different charges in the same window \cite{Andronov16}:
\begin{eqnarray}
&&\Sigma(n_{F}^{+},n_{F}^{-})=\Sigma(n_{B}^{+},n_{B}^{-})=\nonumber\\
&&=\frac{\av {n^+} \omega_{n^-} + \av {n^-} \omega_{n^+}-2\,{\textrm{cov}}({n_F^{+}},{n_F^{-}})}
{\langle n \rangle} \ .\label{charges-same}
\end{eqnarray}
By expanding $n_F=n_{F}^{+}+n_{F}^{-}$ and $n_B=n_{B}^{+}+n_{B}^{-}$ in (\ref{Sigma-s})
and taking into account (\ref{Sigma-plusplus-def})-(\ref{charges-same}) we find the following elegant relation:
\begin{eqnarray}
&&{\Sigma(\nF,\nB)} =
\frac{\av{n^+}}{\av n} \Sigma(n_{F}^{+},n_{B}^{+})
+\frac{\av{n^-}}{\av n} \Sigma(n_{F}^{-},n_{B}^{-})+\nonumber\\
&& +\Sigma(n_{F}^{+},n_{B}^{-})
- \Sigma(n_{F}^{+},n_{F}^{-})\ .\label{Sigma-relation}
\end{eqnarray}
This relation can be further simplified in the case of
charge symmetry, when
\begin{equation}
\label{sym-ch}
\av{n^+}=\av{n^-}=\av{n}/2 \ ,
\end{equation}
and
$$
\omega_{n^+}=\omega_{n^-} \ , \hs{1}
{\textrm{cov}}({n_F^{+}},{n_B^{+}})={\textrm{cov}}({n_F^{-}},{n_B^{-}}) \ ,
$$
which is a very good approximation for the mid-rapidity region at LHC collision energies.
In this case we have
\begin{equation}\label{Sigma-final-relation}
{\Sigma(\nF,\nB)} =\Sigma(n_{F}^{+},n_{B}^{+}) +\Sigma(n_{F}^{+},n_{B}^{-}) - \Sigma(n_{F}^{+},n_{F}^{-})\ .
\end{equation}
In order to calculate the charge-dependent strongly intensive observables in the model
of independent strings, we have to define the corresponding one- and two-particle distributions
describing the decay properties of a source. For the charge-symmetric case we have:
\begin{eqnarray}
&&\lambda^{+}(\eta)=\lambda^{-}(\eta)=\frac{1}{2}\lambda(\eta) \ , \label{l1-charge}\\
&&\Lambda^{++}(\eta_{1},\eta_{2})=\Lambda^{--}(\eta_{1},\eta_{2}) \ , \label{l2-charge-1}\\
&&\Lambda^{+-}(\eta_{1},\eta_{2})=\Lambda^{-+}(\eta_{1},\eta_{2}) \ , \label{l2-charge-2}
\end{eqnarray}
\begin{equation} \label{L-charge}
\Lambda(\eta_{1},\eta_{2})=\frac{1}{2}\left(\Lambda^{++}(\eta_{1},\eta_{2})+
\Lambda^{+-}(\eta_{1},\eta_{2})\right) \ .
\end{equation}
Using translation invariance in rapidity, it is again convenient to define the following quantities:
\begin{eqnarray}
&&J_{FF}^{++}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_F} \!\!\!d\eta_2\
\Lambda^{++}(\eta_1\!-\!\eta_2) \ , \label{jffplpl}\\
&&J_{FF}^{+-}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_F} \!\!\!d\eta_2\
\Lambda^{+-}(\eta_1\!-\!\eta_2) \ , \\
&& J_{FB}^{++}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_B} \!\!\!d\eta_2\
\Lambda^{++}(\eta_1\!-\!\eta_2) \ , \\
&& J_{FB}^{+-}\equiv\frac{1}{\delta\eta^2} \int_{\delta\eta_F}\!\!\!d\eta_1 \int_{\delta\eta_B} \!\!\!d\eta_2\
\Lambda^{+-}(\eta_1\!-\!\eta_2) \ . \label{jfbplpl}
\end{eqnarray}
Then by (\ref{JFF}), (\ref{JFB}), (\ref{l1-charge}), (\ref{L-charge}) and (\ref{jffplpl}-\ref{jfbplpl}) we have
\begin{eqnarray}
&&\langle \mu^{+}\rangle = \langle \mu^{-} \rangle = \frac{1}{2} \langle \mu \rangle
=\frac{1}{2}\mu_{0}\delta\eta \ , \label{mu-new}\\
&&J_{FF}=\frac{1}{2}\left(J_{FF}^{++}+J_{FF}^{+-}\right) \ , \label{jff-new}\\
&& J_{FB}=\frac{1}{2}\left(J_{FB}^{++}+J_{FB}^{+-}\right) \ .
\end{eqnarray}
This leads to the following relations:
\begin{equation} \label{sigma-plusplus}
\Sigma(n^{+}_{F},n^{+}_{B})=1+\langle\mu^{+}\rangle(J_{FF}^{++}-J_{FB}^{++}) \ ,
\end{equation}
\begin{equation} \label{sigma-plusminus}
\Sigma(n^{+}_{F},n^{-}_{B})=1+\langle\mu^{+}\rangle(J_{FF}^{++}-J_{FB}^{+-}) \ ,
\end{equation}
\begin{equation} \label{sigma-plusminus-null}
\Sigma(n^{+}_{F},n^{-}_{F})=1+\langle\mu^{+}\rangle(J_{FF}^{++}-J_{FF}^{+-}) \ ,
\end{equation}
or in the case of small windows:
\begin{equation} \label{sigma-plusplus-small}
\Sigma(n^{+}_{F},n^{+}_{B})=1+\frac{1}{2}\mu_{0}\delta\eta[\Lambda^{++}(0)-\Lambda^{++}(\Delta\eta)] \ ,
\end{equation}
\begin{equation} \label{sigma-plusminus-small}
\Sigma(n^{+}_{F},n^{-}_{B})=1+\frac{1}{2}\mu_{0}\delta\eta[\Lambda^{++}(0)-\Lambda^{+-}(\Delta\eta)] \ ,
\end{equation}
\begin{equation} \label{sigma-plusminus-null-small}
\Sigma(n^{+}_{F},n^{-}_{F})=1+\frac{1}{2}\mu_{0}\delta\eta[\Lambda^{++}(0)-\Lambda^{+-}(0)] \ .
\end{equation}
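As a consistency check, substituting (\ref{sigma-plusplus})-(\ref{sigma-plusminus-null}) into the relation (\ref{Sigma-final-relation}) and using (\ref{mu-new})-(\ref{jff-new}) reproduces (\ref{SigmaJnfu}):
\begin{equation}
{\Sigma(\nF,\nB)}=1+\langle\mu^{+}\rangle\big(J_{FF}^{++}+J_{FF}^{+-}-J_{FB}^{++}-J_{FB}^{+-}\big)
=1+\av\mu\,(J_{FF}-J_{FB}) \ .
\end{equation}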
We see that, as expected, $\Sigma(n^{+}_{F},n^{+}_{B})\to1$ at $\Delta\eta\to0$; however,
$\Sigma(n^{+}_{F},n^{-}_{B})$ tends to
$\Sigma(n^{+}_{F},n^{-}_{F})$ in this limit, which is not necessarily equal to $1$.
To deduce how $\Sigma(n^{+}_{F},n^{-}_{B})$ behaves at small $\Delta\eta$ one needs additional input from experiments.
Looking at relations (\ref{sigma-plusplus-small})-(\ref{sigma-plusminus-null-small}), one can immediately
notice certain similarities with the charge-dependent correlations measured via
the so-called balance function $B\left(\Delta \eta\right)$~\cite{ALICE16}.
In that paper the balance function is defined to be proportional to the difference between the unlike-sign
and like-sign two-particle correlation functions:
\begin{equation} \label{balance-def}
B(\Delta\eta,\Delta\phi)\equiv\frac{1}{2}[C_{+-}+C_{-+}-C_{++}-C_{--}],
\end{equation}
or simply
\begin{equation} \label{balance-def-1}
B(\Delta\eta,\Delta\phi)=C_{+-}-C_{++} \ .
\end{equation}
The last equation exploits the charge symmetry at mid-rapidity at LHC energies.
From (\ref{C2Lam}) we expect that $B(\Delta\eta)$ is proportional to
$\Lambda^{+-}(\Delta\eta)\!-\!\Lambda^{++}(\Delta\eta)$,
i.e. to $\Sigma(n^{+}_{F},n^{-}_{B})\!-\!\Sigma(n^{+}_{F},n^{+}_{B})$.
Indeed, taking into account the normalization of the two-particle correlation functions
used in~\cite{ALICE16}, we find
\begin{equation} \label{balance-proj}
B^{proj}(\Delta\eta)=\frac{1}{4}\mu_0[\Lambda^{+-}(\Delta\eta)-\Lambda^{++}(\Delta\eta)] \ .
\end{equation}
Here following \cite{ALICE16} the pseudorapidity dependence of balance function is defined as a projection
of two-dimensional $B\left(\Delta\eta,\Delta\phi\right)$:
\begin{equation} \label{balance-proj-def}
B^{proj}\left(\Delta\eta\right) \equiv \int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}} B(\Delta\eta,\Delta\phi) \, d\Delta\phi \ .
\end{equation}
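Indeed, assuming for each charge combination the analogue of (\ref{C2Lam}), $C_{ab}=(\Lambda^{ab}+{\omega_N})/\av N$, the string-number fluctuations cancel in the difference:
\begin{equation}
C_{+-}-C_{++}=\frac{\Lambda^{+-}(\Delta\eta)-\Lambda^{++}(\Delta\eta)}{\av N} \ ,
\end{equation}
so the balance function, unlike the individual correlation functions, is insensitive to ${\omega_N}$.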
Note that, as pointed out just before subsection 6.1.1 of \cite{ALICE16},
the results for $B^{proj}(\Delta\eta)$ would be two times larger if, in calculating the two-particle correlation functions
entering the definition of the balance function (\ref{balance-def-1}),
one did not impose the requirement that the transverse momentum of the
``trigger'' particle be higher than that of the ``associated'' one;
{\it i.e.}, for a balance function normalized in that way we would have a coefficient 1/2 in (\ref{balance-proj})
instead of 1/4.
So, by fitting the experimental pp ALICE data on balance functions~\cite{ALICE16} and keeping in mind
relation (\ref{L-charge}), one can extract the parameters of the unlike-sign and like-sign correlation functions
$\Lambda^{+-}\left(\Delta\eta\right)$ and $\Lambda^{++}\left(\Delta\eta\right)$ and, in turn,
predict the $\Delta\eta$ dependencies of the variables
$\Sigma(n^{+}_{F},n^{+}_{B})$ and $\Sigma(n^{+}_{F},n^{-}_{B})$.
For a better fit to the data we have to take into account the HBT correlations
in the like-sign two-particle correlation function and
extend the parametrization (\ref{Lam_exp}) accordingly:
\begin{eqnarray}
&&\Lambda^{++}(\Delta\eta)=\Lambda_0^{++} \exp\left(-{|\Delta\eta|}/{{\eta}^{++}}\right) +\nonumber\\
&&+ \Lambda_0^{HBT} \exp\left[-{\left({\Delta\eta}/{{\eta}^{HBT}}\right)}^{2}\right]\ . \label{Lam_exp_pp}
\end{eqnarray}
We keep it unmodified for the unlike-sign correlations\footnote{In order to take into account decays of neutral resonances one needs to add a characteristic contribution to the unlike-sign correlation function. We postpone this modification to future research.}:
\begin{equation} \label{Lam_exp_pm}
\Lambda^{+-}(\Delta\eta)=\Lambda_0^{+-} \exp\left(-{|\Delta\eta|}/{{\eta}^{+-}}\right) \ .
\end{equation}
Here we have also assumed that HBT correlations appear only for pairs of particles originating from the same string.
We perform simultaneous fitting of the experimental data on pp collisions at
7 TeV for balance functions~\cite{ALICE16}
and of $\mu_{0}\Lambda\left(\Delta\eta\right)$ extracted from forward-backward
correlations~\cite{ALICE15}
(see Fig.~\ref{Lam-exp}). As the forward-backward correlations were measured experimentally
for minimum bias pp events, the balance function results for the 70-80$\%$ pp centrality class
were selected, assuming that the minimum bias sample is dominated by `peripheral' collisions.
Moreover, in order to compensate for the narrower transverse momentum interval used
in the F-B correlation measurements compared to the balance function studies
($(0.3;1.5)$ GeV/c in \cite{ALICE15} and $(0.2;2)$ GeV/c in \cite{ALICE16}),
we multiply the extracted $\mu_{0}\Lambda\left(\Delta\eta\right)$
by a correction factor $c_{cor}$=1.28, estimated in the PYTHIA model
\cite{Sjostrand15,Sjostrand06} as the ratio of mean multiplicities at mid-rapidity
for the corresponding $p_{T}$ intervals.
Figure~\ref{Balance-pp} shows a comparison of the experimental data for the 70-80$\%$ centrality class
in pp collisions at 7 TeV with the suggested fit, with the parameters listed in Table~\ref{exp-par-bal}.
\begin{figure}[!tb]
\centering{
\includegraphics[width=\textwidth,angle=0, trim={0 0.4cm 0 0.25cm},clip]{fit_balance.pdf}
}
\caption{\label{Balance-pp}
Left: The projection of balance function, $B^{proj}\left(\Delta\eta\right)$,
as a function of the distance between two particles $\Delta\eta$, measured
by the ALICE experiment \cite{ALICE16}
for 70-80$\%$ centrality class in pp collisions at 7 TeV,
together with the fit, (\ref{balance-proj}),
obtained
using the difference between
the unlike-sign and like-sign two-particle correlation functions of a string,
${\Lambda}^{+-}(\Delta\eta)\!-\!{\Lambda}^{++}(\Delta\eta)$.
Right: The two-particle correlation function of a string, Fig.\ref{Lam-exp},
corrected for $p_{T}$ acceptance (see text)
for all charged particles,
extracted \cite{NPA15}
from the experimental pp ALICE data on forward-backward correlations between multiplicities
at 7 TeV \cite{ALICE15},
together with the fit, (\ref{L-charge}),
obtained
using the sum of the unlike-sign and like-sign two-particle correlation functions of a string
${\Lambda}^{+-}(\Delta\eta)\!+\!{\Lambda}^{++}(\Delta\eta)$.
}
\end{figure}
\begin{table}[!tb]
\caption[dummy]{\label{exp-par-bal}
The value of the parameters in formulae (\ref{Lam_exp_pp},\ref{Lam_exp_pm})
for the two-particle correlation functions of a string ${\Lambda}^{+-}(\Delta\eta)$ and ${\Lambda}^{++}(\Delta\eta)$,
obtained by a simultaneous fitting of the experimental ALICE data on balance function (BF) \cite{ALICE16} and
on forward-backward correlations (FBC) between multiplicities \cite{ALICE15} for pp collisions at 7 TeV
(see Fig. \ref{Balance-pp}).
}
\centering
\begin{tabular}{|c|c|}
\hline
$\sqrt{s}$,\ TeV&7.0 \\
\hline \hline
$\mu_{0}\Lambda_{0}^{+-}$ &1.42\\
$\mu_{0}\Lambda_{0}^{++}$ &0.76\\
$\eta^{+-}$ &1.34\\
$\eta^{++}$ &1.67\\
$\mu_{0}\Lambda_{0}^{HBT}$ &0.25\\
$\eta^{HBT}$ &0.33\\
\hline
\end{tabular}
\end{table}
Figure~\ref{Sigma-pp-result} shows the $\Sigma(n^{+}_{F},n^{-}_{B})$, $\Sigma(n^{+}_{F},n^{+}_{B})$
and $\Sigma(n_{F},n_{B})$ dependencies on $\Delta\eta$ obtained with the parameters listed in
Table~\ref{exp-par-bal}. All functions grow with $\Delta\eta$, with a decreasing
difference between the unlike-sign and like-sign strongly intensive observables.
This is a consequence of $B\left(\Delta\eta\right)\rightarrow 0$ at large $\Delta\eta$.
The unlike-sign $\Sigma$ is smaller than 1 at small $\Delta\eta$,
becoming greater than 1 at larger $\Delta\eta$. The like-sign $\Sigma$
shows behaviour similar to the all-charge case (see Fig.~\ref{Sigma-pp})
but suppressed in absolute value. Note that $\Sigma(n_{F},n_{B})$ was calculated here
by (\ref{Sigma-final-relation}).
It rises slightly faster than in Figure \ref{Sigma-pp} because the $p_{T}$ interval was rescaled.
In Table~\ref{exp-par-bal} we also see that, as one can expect from local charge conservation
in the string fragmentation process \cite{Wong15}, the correlation length, $\eta^{++}$,
for particles of the same charge is larger than the one, $\eta^{+-}$, for opposite charges.
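As a numerical illustration (our estimate, rounded), with the parameters of Table~\ref{exp-par-bal} and $\delta\eta=0.2$, formula (\ref{sigma-plusminus-null-small}) gives
\begin{equation}
\Sigma(n^{+}_{F},n^{-}_{F})\approx 1+0.1\,\big[(0.76+0.25)-1.42\big]\approx 0.96 \ ,
\end{equation}
which by (\ref{sigma-plusminus-small}) is also the small-$\Delta\eta$ limit of $\Sigma(n^{+}_{F},n^{-}_{B})$, in accordance with the values slightly below unity seen in Fig.~\ref{Sigma-pp-result}.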
\begin{figure}[!tb]
\centering{
\includegraphics[width=115mm,angle=0, trim={0 0.4cm 0 0.25cm},clip]{sigma_comparison.pdf}
}
\caption{\label{Sigma-pp-result}
The strongly intensive observables $\Sigma(n^{+}_{F},n^{-}_{B})$ (dashed),
$\Sigma(n^{+}_{F},n^{+}_{B})$ (dashdotted) and $\Sigma(n_{F},n_{B})$ (solid),
as a function of the distance $\Delta\eta$ between two intervals of width $\delta\eta=0.2$,
calculated in the model with independent identical strings, using
the unlike- and like-sign two-particle correlation functions of a string, ${\Lambda}^{+-}(\Delta\eta)$
and ${\Lambda}^{++}(\Delta\eta)$.
}
\end{figure}
\section{$\Sigma$ with string fusion}
\label{sec-fusion}
In this section we consider the influence of string interaction processes
on the strongly intensive observable ${\Sigma(\nF,\nB)}$.
This influence increases with initial energy
and in going from pp to heavy ion collisions.
One possible way to take these processes into account
is to pass
from the model with independent identical strings
to the model with string fusion and percolation \cite{Biro84,Bialas86,BP92,BP93}.
Technically, to simplify the account of the string fusion processes,
one can introduce a finite lattice (a grid) in the impact parameter plane.
This approach was suggested in
\cite{Vest2} and was then successfully exploited for the description of various phenomena
(correlations, anisotropic azimuthal flows, the ridge)
in ultrarelativistic nuclear collisions
\cite{EPJC04,PPR,YF07-1,YF07-2,TMF15,TMF17,BP11,BPV13,EPJA15,KV-EPJWOC14,KovYF13,KV-Dub12}.
In this approach one splits the impact parameter plane into cells whose area is equal
to the transverse area of a single string, and assumes the fusion of all strings
with centers in a given cell.
This leads to
a splitting of the transverse area into domains
with different, fluctuating values of the color field
within them, which is similar to attempts to take into account
the density variation in the transverse plane
in models based on the BFKL evolution \cite{LR11} and on the CGC approach \cite{KL11}.
In this model a definite set of strings of different types corresponds to a given event.
Each such string, originating from the fusion of $k$ primary strings, is characterized by its own
parameters:
the mean multiplicity per unit of rapidity, ${\mu_{0}^{(k)}}$, and the string correlation function, ${\Lambda_{k}}(\Delta\eta)$.
By (\ref{SigmaJnfu}) these parameters uniquely determine the strongly intensive observable for the multiplicities
produced by the decay of a string of the given type, ${\Sigma_{k} (\muF, \muB)}$, defined by (\ref{SigmuFB}) and (\ref{Sigmu-s}).
For example, for two small observation windows, $\delta\eta\ll{\eta^{(k)}_{corr}}$, separated by the rapidity distance $\Delta\eta$,
similarly to (\ref{Sigma_small_nfu}), we have
\begin{equation}\label{Sigma_small_k}
{\Sigma_{k} (\muF, \muB)}=1+{\mu_{0}^{(k)}}\delta\eta\, [{\Lambda_{k}}(0)-{\Lambda_{k}}(\Delta\eta)] \ .
\end{equation}
In this model with several string types,
a direct calculation
gives for the strongly intensive observable ${\Sigma(\nF,\nB)}$
(for symmetric reaction and observation windows, $\delta\eta_{\!F}=\delta\eta_{\!B}\equiv\delta\eta$, $\av{n_F^{}}=\av{n_B^{}}\equiv \av n $):
\begin{equation}\label{Sigma-w}
{\Sigma(\nF,\nB)}
= \sum_{k=1}^{\infty} \alpha_k\, {\Sigma_{k} (\muF, \muB)} \ , \hs{1} \alpha_k=\frac{\av{n^{(k)}}}{\av n} \ ,
\end{equation}
where
$\av{n^{(k)}}$ is the mean number of particles produced in the observation window $\delta\eta$
from all cells with $k$ fused strings.
Note that the same result was obtained in the model with two types of strings in \cite{Andronov15}
for the long-range part of ${\Sigma(\nF,\nB)}$,
when at $\Delta\eta\!\gg\!\eta^{}_{corr}$, by (\ref{JFF}) and (\ref{JFB}), we have $J_{FF}\gg J_{FB}$
and, by (\ref{omn1}) and (\ref{SigmaJnfu}),
${\Sigma_{k} (\muF, \muB)}=\omega_\mu^{(k)}$ with $k=$1,2.
This led to
\begin{equation}\label{SigmaLR}
\left.{\Sigma(\nF,\nB)}\right |_{\Delta\eta\gg\eta^{}_{corr}} = \frac{\avr n 1 \omega_\mu^{(1)}+\avr n 2 \omega_\mu^{(2)}}{\av n} \ .
\end{equation}
One can compare this limit with the one given by formula (\ref{limit1-2}) for the case of identical strings.
For an arbitrary rapidity distance $\Delta\eta$
between the forward and backward observation windows of width $\delta\eta$,
we have in the model with several string types
\begin{equation}\label{SigmaDeta}
{\Sigma(\nF,\nB)} = 1+ \delta\eta\sum_{k=1}^{\infty} \alpha_k\,
{\mu_{0}^{(k)}}\, [J^{(k)}_{FF}-J^{(k)}_{FB}] \ ,
\end{equation}
where we have introduced $J^{(k)}_{FF}$ and $J^{(k)}_{FB}$
for the two-particle correlation function ${\Lambda_{k}}(\eta_1-\eta_2)$
similarly to (\ref{JFF}) and (\ref{JFB}).
For narrow observation windows, $\delta\eta\ll{\eta^{(k)}_{corr}}$, by (\ref{Sigma_small_k})
it simplifies to
\begin{equation}\label{Sigma_smk}
{\Sigma(\nF,\nB)} = 1+ \delta\eta\sum_{k=1}^{\infty} \alpha_k\,
{\mu_{0}^{(k)}}\, [{\Lambda_{k}}(0)-{\Lambda_{k}}(\Delta\eta)] \ .
\end{equation}
If we also use a simple exponential parametrization for
${\Lambda_{k}}(\Delta\eta)$,
similar to (\ref{Lam_exp}):
\begin{equation} \label{Lam_expk}
{\Lambda_{k}}(\Delta\eta)={\Lambda_{0}^{(k)}} \exp{(-{|\Delta\eta|}/{{\eta^{(k)}_{corr}}})} \ ,
\end{equation}
then we can rewrite (\ref{Sigma_smk}) as
\begin{equation}\label{Sigma_expk}
{\Sigma(\nF,\nB)} = 1+ \delta\eta\sum_{k=1}^{\infty} \alpha_k\,
{\mu_{0}^{(k)}}{\Lambda_{0}^{(k)}} [1-\exp{(-{|\Delta\eta|}/{{\eta^{(k)}_{corr}}})}] \ .
\end{equation}
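In the limit of a large rapidity gap, $\Delta\eta\gg{\eta^{(k)}_{corr}}$ for all $k$, the exponentials in (\ref{Sigma_expk}) vanish and, since $\sum_k\alpha_k=1$,
\begin{equation}
{\Sigma(\nF,\nB)}\to 1+\delta\eta\sum_{k=1}^{\infty}\alpha_k\,{\mu_{0}^{(k)}}{\Lambda_{0}^{(k)}}
=\sum_{k=1}^{\infty}\alpha_k\,\omega_\mu^{(k)} \ ,
\end{equation}
the weighted generalization of the saturation level (\ref{limit1-2}) and of (\ref{SigmaLR}).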
We see that in this case each string of the type $k$ is characterized by two parameters:
the product ${\mu_{0}^{(k)}}{\Lambda_{0}^{(k)}}$, where the ${\mu_{0}^{(k)}}$ is the mean multiplicity per unit of rapidity
from a decay of such string, and
its two-particle correlation length ${\eta^{(k)}_{corr}}$,
which determines the correlations between particles,
produced from a decay of the string.
In the framework of the string fusion model \cite{Biro84,Bialas86,BP92,BP93}
one usually supposes that the mean multiplicity per unit of rapidity
of a fused string, ${\mu_{0}^{(k)}}$, increases as $\sqrt{k}$ with $k$.
The dependence of the correlation length ${\eta^{(k)}_{corr}}$ on $k$ is not so obvious.
Based on
a simple geometrical
picture of string fragmentation (see, {\it e.g.}, \cite{Artru,VENUS,Dub08,DIPSY-Tar}),
one can expect the correlation length, ${\eta^{(k)}_{corr}}$, to decrease with increasing $k$.
In this picture, with a growth of the string tension
the fragmentation process terminates at smaller string segments in rapidity.
Correlations take place only between particles
originating from the fragmentation of neighbouring string segments,
and hence the correlation length ${\eta^{(k)}_{corr}}$ will decrease with $k$ for fused strings.
Indirectly, this is confirmed by the analysis \cite{Titov13}
of the experimental STAR \cite{STAR-nc} and ALICE \cite{ALICE-nc} data on net-charge fluctuations in pp and AA collisions.
The dependence of the net-charge fluctuations on the rapidity width of the observation window
can be well described in a string model if one supposes a decrease of the correlation length
in the transition to collisions of heavier nuclei and to higher energies, {\it i.e.} to collisions
in which the proportion of fused strings increases.
By (\ref{Sigma_expk}), both of these factors,
the increase of ${\mu_{0}^{(k)}}$ and the decrease of ${\eta^{(k)}_{corr}}$ for a fused string,
lead to a steeper increase of ${\Sigma_{k} (\muF, \muB)}$, (\ref{Sigma_small_k}), with $\Delta\eta$
and to its saturation at a higher level, $\omega_\mu^{(k)}=1+\delta\eta\, {\mu_{0}^{(k)}}{\Lambda_{k}}(0)$.
Due to (\ref{Sigma-w}) this behaviour is transmitted to the observable ${\Sigma(\nF,\nB)}$,
as the latter is a weighted average of ${\Sigma_{k} (\muF, \muB)}$
with the weights
$\alpha_k={\av{n^{(k)}}}/{\av n}$, which are the mean fractions of the particles
produced from a given type of strings.
In a real experiment we always have a mixture of fused and single strings.
So, with the transition to pp collisions at higher energy and/or to collisions of nuclei,
the proportion of fused strings will increase, and we will observe
a steeper increase of ${\Sigma(\nF,\nB)}$ with $\Delta\eta$ and its saturation at a higher level.
Indeed, in Fig.\ref{Sigma-pp} we see such behaviour of ${\Sigma(\nF,\nB)}$
when we compare ${\Sigma(\nF,\nB)}$ for pp collisions at the three initial energies, 0.9, 2.76 and 7 TeV,
obtained through fitting \cite{NPA15} of the experimental pp ALICE data \cite{ALICE15}
on forward-backward correlations between multiplicities.
Table \ref{exp-par} illustrates the increase of $\mu_0\Lambda_0$ and
the decrease of the correlation length $\eta^{}_{corr}$
with energy for these data. Note that these values are only effective ones,
because at each energy we assumed that all strings are identical.
Thus they only indirectly reflect the influence of the increase of the proportion of fused strings
with energy in pp collisions.
For studies of the dependence of ${\Sigma(\nF,\nB)}$ on multiplicity classes
we predict behaviour similar to that in Fig.\ref{Sigma-pp}:
for more central pp collisions,
due to the increased proportion of fused strings
in such collisions,
one should also observe
a steeper increase of ${\Sigma(\nF,\nB)}$ with $\Delta\eta$ and its saturation at a higher level.
Note that, from a general point of view,
this simultaneously means
that the observable ${\Sigma(\nF,\nB)}$,
strictly speaking, can no longer be considered
strongly intensive.
Through the weight factors
$\alpha_k={\av{n^{(k)}}}/{\av n}$ entering formula
(\ref{Sigma-w}), which
are the mean fractions of the particles
produced from a given type of strings,
the observable ${\Sigma(\nF,\nB)}$
becomes dependent on the collision conditions
({\it e.g.}, on the collision centrality).
\section{$\Sigma$ with PYTHIA}
\label{sec-MC}
In this section, for comparison, we present
the results for the strongly intensive observables under consideration
obtained using the PYTHIA event generator.
The PYTHIA event generator \cite{Sjostrand15,Sjostrand06}, with the Lund string fragmentation
model \cite{Andersson83} at its core, is very successful in describing LHC data on pp collisions.
So one can expect the results to correspond to those obtained above
in the simple string model.
All the results shown below are obtained with the PYTHIA8.223 version
using its default Monash 2013 tune \cite{Skands14} from a generation of 12 $\times$ $10^6$ events
for all inelastic proton-proton collisions (SoftQCD:inelastic=on). Statistical uncertainties were estimated
using the sub-sample method \cite{Efron81}, with the number of sub-samples equal to 30.
The statistical uncertainties are smaller than the line width and are not visible in the plots presented below.
In the analysis we considered only charged particles with $0.3<p_{T}<1.5$ GeV/$c$
to be consistent with the ALICE forward-backward correlation measurements~\cite{ALICE15}.
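For reference, a minimal sketch of the event-by-event estimator of $\Sigma({n_F^{}},{n_B^{}})$ applied to the generated events, implementing the defining formula (\ref{SigmaFB}) (here in Python with NumPy; the function name and array interface are purely illustrative):
\begin{verbatim}
import numpy as np

def sigma_fb(n_f, n_b):
    # n_f, n_b: arrays of per-event multiplicities in the
    # forward and backward observation windows
    mf, mb = n_f.mean(), n_b.mean()
    wf = n_f.var(ddof=1) / mf      # scaled variance omega_F
    wb = n_b.var(ddof=1) / mb      # scaled variance omega_B
    cov = np.cov(n_f, n_b)[0, 1]   # cov(n_F, n_B)
    return (mf * wb + mb * wf - 2.0 * cov) / (mf + mb)
\end{verbatim}
The sub-sample method then amounts to evaluating this estimator on the 30 disjoint sub-samples and taking the spread of the results as the statistical uncertainty.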
Figure~\ref{Sigma-pp-pythia} shows the PYTHIA8 predictions for the dependence of ${\Sigma(\nF,\nB)}$ on the distance
between the two windows for three collision energies.
We see that ${\Sigma(\nF,\nB)}$ grows with the separation of the windows up to a certain saturation point,
with a subsequent decrease. The point of saturation increases with collision energy.
Moreover, for $\Delta\eta<2$ the growth is steeper than for large gaps.
The growing phase is reminiscent of the independent string model predictions (see Fig.~\ref{Sigma-pp}).
The observed collision energy dependence is also consistent with the predictions from Sect.~\ref{sec-model}.
The change of trend at large separations between the windows can be understood
as a consequence of a significant decrease of the mean multiplicities
$\langle n_F\rangle$ and $\langle n_B\rangle$ due to the reduction of the number of strings
contributing to both observation windows. Such a decrease leads
to almost Poissonian fluctuations of $n_F$ and $n_B$ and, consequently, ${\Sigma(\nF,\nB)}\to 1$.
This effect is not taken into account in the independent string model,
where at mid-rapidity we assume that $\langle n_F\rangle$ and $\langle n_B\rangle$
are independent of the window positions, due to translation invariance in rapidity.
We have also supposed that every string can contribute to both observation windows.
\begin{figure}[!tb]
\centering{
\includegraphics[width=115mm,angle=0]{sigma_nn_pythia_900gev_long.pdf}
}
\caption{\label{Sigma-pp-pythia}
The strongly intensive observable, ${\Sigma(\nF,\nB)}$, between multiplicities of charged particles with
$0.3<p_{T}<1.5$ GeV/c
in two small pseudorapidity windows (of the width $\delta\eta=$ 0.2)
as a function of the distance between window centers, $\Delta\eta$, calculated
with the Monash 2013 tune of the PYTHIA8.223 model for three collision energies: 0.9, 2.76 and 7 TeV.
}
\end{figure}
Figure~\ref{Sigma-pp-pythia-charge} shows the results for different combinations of electric charges.
An additional point at $\Delta\eta=0$ is calculated
for $\Sigma(n^{+}_{F},n^{-}_{B})$ in the same window ($F=B$);
see formulae (\ref{sigma-plusminus-small}) and (\ref{sigma-plusminus-null-small}).
The behavior of $\Sigma(n^{+}_{F},n^{-}_{B})$, $\Sigma(n^{+}_{F},n^{+}_{B})$
and $\Sigma(n_{F},n_{B})$ corresponds to the independent string model predictions
(see Figure~\ref{Sigma-pp-result}). The unlike-sign $\Sigma$, also starting from a value of around 0.95 at small $\Delta\eta$,
becomes greater than 1 at large $\Delta\eta$. The like-sign $\Sigma$
shows behaviour similar to the all-charge case ${\Sigma(\nF,\nB)}$, but suppressed in absolute value.
Full circles in Figure~\ref{Sigma-pp-pythia-charge} represent the results obtained in PYTHIA
using the relation (\ref{Sigma-final-relation}).
Recall that this relation was obtained under the assumption of charge symmetry, (\ref{sym-ch}).
Indeed, one can see that this relation nicely reproduces ${\Sigma(\nF,\nB)}$
for all charged particles (solid line) in the central rapidity region,
where charge symmetry takes place at LHC energies,
while going to the fragmentation region
it starts to fail and the two curves begin to deviate.
\begin{figure}[!tb]
\centering{
\includegraphics[width=115mm,angle=0]{sigma_nn_pythia_900gev_chargelong.pdf}
}
\caption{\label{Sigma-pp-pythia-charge}
The strongly intensive observable, ${\Sigma(\nF,\nB)}$, between multiplicities of charged particles with $0.3<p_{T}<1.5$ GeV/c
in two small pseudorapidity windows (of the width $\delta\eta=$ 0.2)
as a function of the distance between window centers, $\Delta\eta$, for 0.9 TeV collisions calculated
with the Monash 2013 tune of the PYTHIA8.223 model for different charge combinations.
For comparison the results, obtained by the formula (\ref{Sigma-final-relation}), are shown in full circles.
}
\end{figure}
The results obtained with the PYTHIA8 event generator for ${\Sigma(\nF,\nB)}$ for windows separated both
in pseudorapidity and azimuthal angle are presented in Fig.~\ref{Sigma-pp-pythia-phi}.
The shape of the obtained functions, a dip at small values of $\Delta\eta$ and $\Delta\phi$ and
a plateau at larger values, is again in qualitative
agreement with the independent string model
predictions (see Fig.~\ref{Sigma-dd1}).
It is important to note
that in the framework of the simple string model,
in contrast with the PYTHIA calculations,
we can clearly see
the physical reasons for such behavior.
\begin{figure}[!tb]
\centering
\includegraphics[width=70mm,,angle=0]{900gev2d.pdf}
\includegraphics[width=70mm,angle=0,clip]{2760gev2d.pdf}\\
\includegraphics[width=70mm,angle=0,clip]{7000gev2d.pdf}
\caption{\label{Sigma-pp-pythia-phi}
The strongly intensive observable, ${\Sigma(\nF,\nB)}$, between multiplicities
in two small pseudorapidity - azimuthal angle windows (of the width $\delta\eta=$ 0.2, $\delta\varphi=\pi/4$)
as a function of the distance between window centers, $\Delta\eta$, $\Delta\varphi$, calculated
with the Monash 2013 tune of the PYTHIA8.223 model for three collision energies: 0.9, 2.76 and 7 TeV.
}
\end{figure}
\section{Summary and conclusions}
\label{Concl}
The use of strongly intensive observables is considered
a way to suppress the contribution of trivial ``volume'' fluctuations
in experimental studies of correlation and fluctuation phenomena (see, {\it e.g.}, \cite{GorGaz11}).
In the present paper we have studied
the properties of the strongly intensive observable for multiplicities
in two acceptance windows separated in rapidity and azimuth, ${\Sigma(\nF,\nB)}$,
in the model with quark-gluon strings (color flux tubes) as sources.
We show that in the case of independent identical strings
the strongly intensive character of this observable is confirmed:
it depends only on the individual characteristics of a string and
is independent of both
the mean number of strings and their fluctuation.
These individual characteristics of a string are
the mean number of particles per unit of rapidity, $\mu_0$,
produced from string fragmentation,
and the two-particle correlation function, $\Lambda(\Delta\eta,\Delta\phi )$,
characterizing the correlations between particles produced from
the same string.
The ALICE experimental data \cite{ALICE15} on forward-backward correlations (FBC)
in small windows separated in rapidity and azimuth make it possible to obtain
information on this string correlation function, $\Lambda(\Delta\eta,\Delta\phi )$ \cite{NPA15}.
Using it, we calculate the dependence of the strongly intensive observable, ${\Sigma(\nF,\nB)}$,
on the acceptance of the observation windows and the gaps between them.
We have also studied
the strongly intensive observables between multiplicities
taking into account the sign of the particle charge:
$\Sigma({n_F^{+}}, {n_B^{+}})$, $\Sigma({n_F^{+}}, {n_B^{-}})$ and $\Sigma({n_F^{+}}, {n_F^{-}})$.
We express them through the string correlation functions
between like and unlike charged particles, $\Lambda^{++}(\Delta\eta)$ and ${\Lambda}^{+-}(\Delta\eta)$.
To calculate these quantities we need more information
on the string decay process, because the FBC data \cite{ALICE15} contain
information only on the sum of these correlation functions,
$\Lambda(\Delta\eta)=[\Lambda^{++}(\Delta\eta)+{\Lambda}^{+-}(\Delta\eta)]/2$.
We show that the so-called balance function (BF)
can be expressed through the difference of these string correlation functions.
Using the ALICE experimental data \cite{ALICE16} on BF we
extract both correlation functions between like and unlike charged particles
produced from a fragmentation of a single string.
With these correlation functions we calculate the properties of
the strongly intensive observables,
$\Sigma({n_F^{+}}, {n_B^{+}})$, $\Sigma({n_F^{+}}, {n_B^{-}})$ and $\Sigma({n_F^{+}}, {n_F^{-}})$.
In particular, we found that, as one can expect from local charge conservation
in the string fragmentation process \cite{Wong15}, the correlation length
for particles of the same charge is larger than the one for opposite charges.
In the case when the string fusion processes
are taken into account and strings
of a few different types are formed in a collision,
we show that the observable ${\Sigma(\nF,\nB)}$
proves to be equal to a weighted
average of its values for the different string types.
Unfortunately, in this case,
through the weight factors,
this observable becomes dependent on the collision conditions
and, strictly speaking, can no longer be considered
a strongly intensive variable.
This complicates quantitative predictions for ${\Sigma(\nF,\nB)}$,
as it starts to depend on the experimental conditions through
these weight factors.
Nevertheless, we argue that string fusion leads to the
following changes of the individual string characteristics:
the multiplicity density per unit of rapidity
from the fragmentation of a fused string turns out to be higher,
and the correlation length between particles produced from
the fused string becomes smaller.
Both of these factors lead to a steeper increase of ${\Sigma(\nF,\nB)}$
with the rapidity gap between the windows, $\Delta\eta$,
and to its saturation at a higher level,
which is qualitatively consistent with the available experimental data
on pp collisions at different LHC energies.
We also compare our results with the PYTHIA8 event generator predictions.
We find a similar picture for the dependence
of $\Sigma({n_F^{+}}, {n_B^{+}})$, $\Sigma({n_F^{+}}, {n_B^{-}})$ and $\Sigma({n_F^{}}, {n_B^{}})$
on the rapidity gap between the windows, $\Delta\eta$, at mid-rapidity,
where the boost-invariant version of the string model exploited here is applicable.
\section*{Acknowledgements}
\label{Ackn}
The research was funded by the grant of the Russian Science Foundation (project 16-12-10176).
\section{Introduction}
In current star formation theories, stars initially form in clusters or groups. The same initial mass function (IMF) for field
stars and young embedded clusters provides direct evidence of this \citep{Lada03}. More than 70\% of stars originate from clusters
or groups according to a survey of embedded clusters \citep{Lada03,Lada10}. By reviewing solar system properties, \citet{Adams10}
concluded that our Sun most likely formed in an environment with thousands of stars.
Stars in clusters have basically homogeneous parameters (i.e., ages, [Fe/H], etc.); thus searching for planets in clusters,
especially in young open clusters (hereafter YOCs), is very important for understanding the formation and evolution of planetary systems.
However, nearly all detected planets are around field stars, while only four planetary systems have been found in open clusters
(hereafter OCs) with radial velocity measurements. Although many groups attempted to find planets by transits, most of them
had no results (see \citet{Zhou12} for a review and references therein). The four known planets in OCs are: a gas giant planet
around a red giant (TYC 5409-2156-1) in NGC 2423 \citep{Lovis07}, a gas giant planet around a giant star ($\epsilon $ Tauri) in the
Hyades \citep{Sato07}, and two hot Jupiters, Pr0201b and Pr0211b, in Praesepe \citep{Quinn12}. The last two are the first hot Jupiters known
in OCs. On the other hand, compared to bound planets, several more free-floating planets (hereafter FFPs) have been found in OCs.
\citet{Lucas00} detected a population ($\sim 13$) of FFPs in Orion. \citet{Bihain09} found three additional FFPs in $\sigma$~Orionis,
which is a very young OC (VYOC; $\sim 3$ Myr).
Both the detection and non-detection of planets in OCs help us to calculate the occurrence rate of planets in clusters, which involves
the formation and stability of planetary systems. The formation of a planetary system is indicated by the IR observation of a
circumstellar disk. In theory, \citet{Adams06} also showed that photoevaporation of protoplanetary disks is only important beyond
30 AU, due to the median FUV flux of other stars. After planetary systems are formed, stellar interactions (mergers, flybys, etc.),
galactic tides, stellar evolution, inter-planetary interactions, etc., will influence the final orbital architectures of these
planetary systems.
Several previous works investigated the stability of planetary systems in clusters. Solving restricted problems,
\citet{Mal11} and \citet{SB01} (and references therein) simulated the influence of assumed flybys on planetary systems, and concluded
that stars passing by with perihelion $\ge 1000$ AU may be negligible, while closer flybys may excite the eccentricities of
planets. \citet{LA98} and \citet{DS01} studied planets in binary systems that encountered stars or other binaries. This revealed
that, after an encounter with a binary system, planetary systems around both single stars and binaries are more easily disrupted
than after an encounter with a single star. To model the real dynamical environments of clusters, \citet{Spurzem09} used
hybrid Monte Carlo and $N$-body methods to study the evolution of single planetary systems under more realistic flybys in cluster
environments. They found that the liberation rate of planets per crossing time is constant. In their cluster models, a uniform
stellar mass was assumed with no binaries included. \citet{Parker12} developed a sub-structured cluster model and simulated the
orbital distributions of single planetary systems in the cluster after 10 Myr. They concluded that, during the dynamical evolution of
YOCs, the planetary systems experience a relatively violent evolution during the first few megayears, and the fates of these planets depend
strongly on their initial locations, i.e., planets far from the host star can be disrupted easily. Considering the variation of
inclinations in binary systems, \citet{PG09} indicated that about 10\% of the planets in clusters may be affected by the Kozai mechanism.
As mentioned above, most previous works used restricted problems to study the influence of a single flyby event on planetary
architectures. However, in a real cluster environment, flybys may continuously influence a planetary system. Also, the influence of
planetary interactions was ignored, owing to the single-planet models used. As we know, the pumping of eccentricities in closely packed
multi-planetary systems usually leads to dynamical instabilities \citep{TP02,Zhou07}. Planet-planet scattering as well as secular
resonances also influence the orbits of the planets \citep{Nag11,Wu11}. The Kozai mechanism can pump the eccentricities of
planets in binary systems, and planetary systems may therefore become unstable due to strong planet-planet interactions
\citep{Mal07a}.
In this paper, we adopt multi-planetary system models in very young OCs to investigate their different fates as well as the final
orbital architectures of bound planets. Differently from previous works, we use strict $N$-body simulations to include both
the dynamical evolution of clusters and mutual planetary interactions. We focus on very young OCs with ages less than 10 Myr, so that the dynamical evolution in the cluster is more important than galactic tides or stellar evolution (see also Section 2.1). Multiple flybys are considered in our simulations. We use a stricter model of OCs here than previous works, which
contains a mass spectrum and a fraction of binary stars, and we place planets around each star to reveal their stability and
orbital architectures both in binary systems and around single stars. Note that here we consider, for the first time, multi-planetary systems in
reasonable cluster environments. Using this model, we intend to investigate how the OC environments mainly influence the
architecture of bound planets at different locations in the clusters. We will also obtain the fraction of FFPs and their
spatial distribution in OCs.
The structure of this paper is as follows: we introduce our cluster and planetary models in Section 2. In Section 3, we first present the
dynamical evolution of clusters. After that, Section 4 shows the statistical results of both stars and planets in clusters. We
study the fates of planets in binaries (Section 5). FFPs in the host cluster and ejected objects are discussed in Section 6. Finally, we summarize the
main results in Section 7 and discuss some of the assumptions adopted here.
\section{The Cluster Model and The Initial Setup}
In this section, we present our VYOC model and the initial setup of planetary systems in clusters.
\subsection{Very Young Open Cluster Model}
The evolution of a general cluster is not influenced by its internal gravity alone; galactic tides and stellar evolution after the main-sequence phase are also important. The galactic tidal disruption timescale can be evaluated as
$\tau_{\rm tide}=0.077 N^{0.65}/\omega$ \citep{GB08}, where $N$ is the number of stars in the cluster and $\omega$ is the angular velocity
around the galactic center. For a typical cluster with $N\sim 1000$ near our solar system, $\tau_{\rm tide}\approx0.3$ Gyr.
Therefore, galactic tides can be ignored over the much shorter timescale of $\sim10$ Myr considered for VYOCs in our model.
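As a quick sanity check, this timescale can be evaluated numerically; a minimal sketch in Python, assuming a solar-neighborhood angular velocity $\omega \approx 220\ {\rm km\ s^{-1}} / 8\ {\rm kpc}$ (our assumption, not a value quoted in this paper):
\begin{verbatim}
KM_S_TO_PC_MYR = 1.0227            # 1 km/s in pc/Myr

N = 1000                           # number of cluster stars
omega = 220.0 * KM_S_TO_PC_MYR / 8000.0   # angular velocity [1/Myr]
tau_tide = 0.077 * N**0.65 / omega        # tidal timescale [Myr]
print(f"tau_tide ~ {tau_tide:.0f} Myr")   # ~240 Myr, i.e. ~0.25 Gyr >> 10 Myr
\end{verbatim}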
The stellar evolution after the main-sequence phase is also important for the orbital evolution of planetary systems, e.g., in the red giant phase
\citep{VL07, VL09}. The main-sequence lifetime of a star scales with its mass as a power law: $\tau_{\rm
MS}\approx10^{10}\,{\rm yr}\,(\frac{M}{M_\odot})^{-2.5}$ \citep{Bressan93}, so stars more massive than $16 M_\odot$ have lifetimes of less than
10 Myr. In the IMF of our cluster model presented next, there are fewer than 4 stars with masses larger
than $16 M_\odot$. Due to their large masses, their gravity is important for the cluster, but the evolution of these stars is
omitted. We do not consider the residual gas in the cluster due to its limited mass and unknown spatial distribution. The IMF of our
cluster model is taken as two parts \citep{Kroupa02}: \begin{equation} N(M)\propto\left\{
\begin{array}{ll}
(M/M_{\odot })^{-1.3}, & 0.1<M/M_{\odot }<0.5, \\
(M/M_{\odot })^{-2.3}, & 0.5<M/M_{\odot }<50.
\end{array}\right.
\label{fmass} \end{equation}
We truncate the stellar mass distribution at $50M_\odot$ due to the rarity of more massive stars in such clusters. For example, the most massive star in
Orion is $< 50M_\odot$ ($\theta$ Ori C; \citealp{Krause07,Krause09}). Stars less massive than $0.1M_\odot$ are also ignored in our
model due to their limited gravity and the very low occurrence of planets around them. The IMF of our cluster model is shown
in Figure 1(a), with a total mass of 800-900$M_\odot$.
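For illustration, stellar masses can be drawn from this two-part power law by inverse-transform sampling within each segment. The sketch below is our own and the helper name is hypothetical; note that 1000 single stars drawn this way total $\sim600 M_\odot$, so the quoted 800-900$M_\odot$ presumably includes the binary companions introduced below.
\begin{verbatim}
import numpy as np

def sample_kroupa_imf(n, m_min=0.1, m_break=0.5, m_max=50.0,
                      a1=-1.3, a2=-2.3, rng=None):
    """Draw n stellar masses [M_sun] from the two-part power law
    N(M) ~ M^a1 below m_break and ~ M^a2 above, by inverse-CDF
    sampling within each segment."""
    rng = rng if rng is not None else np.random.default_rng()

    def seg(lo, hi, a):                      # integral of M^a over [lo, hi]
        return (hi**(a + 1) - lo**(a + 1)) / (a + 1)

    w1 = seg(m_min, m_break, a1)
    w2 = seg(m_break, m_max, a2) * m_break**(a1 - a2)  # continuity at break
    low = rng.random(n) < w1 / (w1 + w2)

    m = np.empty(n)
    t = rng.random(int(low.sum()))
    m[low] = (m_min**(a1 + 1)
              + t * (m_break**(a1 + 1) - m_min**(a1 + 1)))**(1.0 / (a1 + 1))
    t = rng.random(int((~low).sum()))
    m[~low] = (m_break**(a2 + 1)
               + t * (m_max**(a2 + 1) - m_break**(a2 + 1)))**(1.0 / (a2 + 1))
    return m

masses = sample_kroupa_imf(1000)
print(f"total mass ~ {masses.sum():.0f} M_sun")  # ~600 for single stars
\end{verbatim}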
To make our cluster model more realistic, we take a fraction of binary systems into account. The fraction of binary systems $f_b$
depends on the mass of the primary star. Here we adopt four different binary fractions according to different ranges
of the primary stellar mass\footnote{The references in Equation~(\ref{Mfb}) are the following: \cite{FM92}, \cite{Mayor92}, \cite{DM91}, and \cite{Mason09}, respectively.}:
\begin{equation} f_b=\left\{
\begin{array}{lll}
0.42, & 0.08<M/M_{\odot }\leq0.47, & (\textit{\rm FM92}) \\
0.45, & 0.47<M/M_{\odot }\leq0.84, & (\textit{\rm Mayor92})\\
0.57, & 0.84<M/M_{\odot }\leq2.50, & (\textit{\rm DM91})\\
1.00, & 2.50<M/M_{\odot } , & (\textit{\rm Mason09})\\
\end{array}\right.
\label{Mfb} \end{equation}
The separations and eccentricities of the binary systems are set as follows. According to \citet{DM91} and \citet{Rag10}, the periods $P$ (in
days) of the binaries follow a log-normal distribution,
\begin{equation} f(\log_{\rm 10}P)\propto\exp\left[\frac{-(\log_{\rm 10}P-\mu)^2}{2\sigma^2}\right], \end{equation}
where $\mu=4.8$ and $\sigma=2.3$. The eccentricities obey a thermal distribution: $f(e)=2e$ \citep{Kroupa08}. Here we only consider ``S''-type
planets (planets around each star) in binaries. We constrain the periastrons of binary orbits to be $\ge 30$ AU, because in such binary
systems, planets in orbits $\le 10$ AU are stable in the restricted three-body problem \citep{MP99}. Meanwhile, 1000 AU is adopted
as the upper limit of the binary semi-major axes. Inclinations are set to 0 for all binaries so that their initial orbital planes are all
parallel, while the three other orbital elements are chosen randomly. The mass ratio of binary stars is drawn from a uniform distribution
according to \citet{DM91}.
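A minimal sketch of how such binary orbits can be drawn, assuming Kepler's third law to convert periods into semi-major axes and rejection sampling for the periastron and separation cuts (the function name is hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_binary_orbit(m1, m2, mu=4.8, sigma=2.3):
    """Draw one binary orbit: log10(P/days) is Gaussian, f(e) = 2e,
    with periastron >= 30 AU and a <= 1000 AU enforced by rejection."""
    while True:
        P_yr = 10.0**rng.normal(mu, sigma) / 365.25
        a = (P_yr**2 * (m1 + m2))**(1.0 / 3.0)   # Kepler III: a in AU
        e = np.sqrt(rng.random())                # thermal: e = sqrt(u)
        if a * (1.0 - e) >= 30.0 and a <= 1000.0:
            return a, e

a_b, e_b = sample_binary_orbit(1.0, 0.5)
print(f"a = {a_b:.0f} AU, e = {e_b:.2f}")
\end{verbatim}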
In our non-rotating cluster model, each cluster contains 1000 stars in total, located initially within 1 pc$^3$. According to the density
profiles of some YOCs (e.g., NGC 2244 and 2239, \citealp{Bonatto09}; NGC 6611, \citealp{Bonatto06}), the locations of these stars can be
described by the two-parameter King model \citep{King66} of the form $\sigma(r)=\sigma_{\rm bg}+\frac{\sigma_{\rm
0}}{1+(r/r_c)^2}$, where $\sigma_{\rm bg}$ and $\sigma_{\rm 0}$ represent the stellar surface density of the background and of the cluster
center, respectively, $r_c$ is the core radius of the cluster, and $r$ is the distance from the cluster center. However, the
King model is a projected two-dimensional density profile. To obtain three-dimensional, spherically symmetric spatial locations for each star, we use a modified Plummer
model here, \begin{equation} \rho(r)=\frac{\rho_{\rm 0}}{(1+(r/r_c)^2)^{3/2}}, \label{King} \end{equation} where $r_c=0.38$ pc in this paper. This profile is
consistent with the King model after integration along the line of sight. The velocities of these stars are drawn from a
Gaussian distribution with a mean value of $v=1$ km s$^{-1}$ and a dispersion of $\sigma=1$ km s$^{-1}$. The directions of their velocities are
isotropic; we truncate the distribution at $v<0$, as seen in Figure 1(b). The distributions of stellar locations and
velocities, as well as the IMF of stars in our model, correspond to the usual assumption that the clusters in our model are in
virial equilibrium, so the virial parameter $Q=K/|P|=0.5$, i.e., the absolute value of the potential energy $P$ is twice the total
kinetic energy $K$. We set $Q\simeq0.5$ initially by redistributing the velocities of the stars.
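Radii following this modified Plummer profile can be sampled by inverting the enclosed-mass profile. Since the total mass of this profile diverges logarithmically, a truncation radius is required; the sketch below assumes a cut of order 1 pc, matching the initial 1 pc$^3$ volume quoted above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def sample_modified_plummer(n, rc=0.38, r_max=1.0):
    """Sample radii from rho(r) ~ (1+(r/rc)^2)^(-3/2) by inverting the
    enclosed-mass profile m(x) = asinh(x) - x/sqrt(1+x^2), x = r/rc.
    The total mass diverges logarithmically, so a cut at r_max is needed."""
    x = np.linspace(0.0, r_max / rc, 4096)
    m = np.arcsinh(x) - x / np.sqrt(1.0 + x * x)
    return rc * np.interp(rng.random(n), m / m[-1], x)

r = sample_modified_plummer(1000)
print(f"half-mass radius ~ {np.median(r):.2f} pc")   # ~0.6 pc for this cut
\end{verbatim}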
\subsection{Setups of Planetary Systems}
As mentioned in Section 1, here we focus on OCs with ages less than 10 Myr. These VYOCs give us insight into the
properties of planetary systems around young stars. A large circumstellar disk fraction is found in VYOCs by $K$-excess observations:
e.g., $30\%-35\%$ of the T-Tauri stars have a disk in the $\sigma$ Ori cluster with an age of $\sim 3$ Myr \citep{Hernandez07}. Using the {\it Chandra
X-Ray Observatory}, \citet{Wang11} found a $K$-excess disk frequency of $3.8\%\pm0.7\%$ in the 5-10 Myr old cluster Trumpler 15. The
fraction of circumstellar disks limits the formation rate of planets. Combined with the survival rate of planetary systems during
subsequent evolution, we can evaluate the planetary system occurrence in these VYOCs.
Due to the large fraction of disks in VYOCs, here we consider planetary systems with two planets around each cluster member.
Under perturbations from other planets or flyby stars, their orbital parameters can change significantly. We use the usual
assumption that all the planets initially formed in circumstellar disks, so their angular momenta are approximately aligned.
Here we set the inclination to 1$^\circ$ and the eccentricity to 0 for all the planets. The other three orbital angles
are set randomly. We calculate four different initial masses and locations of planets to model different configurations of
planetary systems, as shown in Table \ref{tbl-1}. Planetary systems with two Jupiters represent those with two gas giants in clusters,
called the 2J model. The one-Jupiter and one-Earth models with different locations represent more general systems. Note that all
initial planetary systems are stable if they do not experience any close encounter with another star. Hereafter, a planetary system is
called unstable when it loses at least one planet because of close encounters in clusters.
We adopt the MERCURY package for $N$-body simulations \citep{Cham99}. We include the gravitational interactions of all stars and planets during the
integrations and let the clusters evolve for 10 Myr. During our simulations, we truncate the clusters at 10 pc, i.e., stars and planets
$>10$ pc away from the cluster center are removed as ejected objects. A binary system is considered disrupted into two single
stars when its semi-major axis exceeds 1000 AU. To judge whether planets are ejected from their host planetary systems, we use
a critical semi-major axis of 100 AU, but we do not remove the ejected planets. These planets can become FFPs and cruise in the clusters
unless they leave the clusters.
\section{Dynamical Evolution of Clusters}
Before investigating the architectures of planetary systems in the VYOCs, we study the evolution of the clusters in this section. Figure 2
shows the variations of the density profile $\rho$, half-mass radius $r_h$, virial parameter $Q$, and the percentage of the star
number $N_{\rm S}/N_{\rm Stot}$ with time. Panel (a) gives the densities of stars at 0, 1, 3, 5, and 10 Myr in the 2J model. Due to the
expansion of the clusters, the density decays with time, which significantly influences the close encounter rate between planetary
systems and thus results in different dissolution timescales of planetary systems, as shown in Section 3.2. In panel (b), the half-mass radius
$r_h$ (dashed line) increases to 1.5 pc, about three times its initial value. Although the cluster extends to a much larger region in
space, the virial parameter $Q$ (solid line) shows that it is still in virial equilibrium ($Q \sim0.5$) at the end of our simulation.
Panel (c) shows that about $3\%$ of the cluster members are ejected out of the cluster in the first 3 Myr in all models. In our models, the
velocity dispersion is $\sigma=1\ \rm{km\ s^{-1}}$ $(1\ {\rm km\ s^{-1}}\approx1$ pc Myr$^{-1})$, and about $97\%$ of the stars have velocities within $2\sigma=2\ {\rm km\ s^{-1}}$ of the mean velocity of $1\ {\rm km\ s^{-1}}$; hence it takes at least
${\rm \sim 3~Myr}$ for a star at 1 pc to reach the boundary of the cluster (10 pc, as set in Section 2). Between 3 and 10 Myr, because of
the dissolution of the cluster, $12\%$-$17\%$ of the stars escape from the host cluster, and only $80\%$-$85\%$ of the stars remain in
these clusters after 10 Myr.
To obtain an analytic dissolution timescale, we use the half-mass relaxation timescale (\citealp{Spitzer87}, p. 40): \begin{equation} \tau_{\rm
rh}=0.138(\frac{N}{\ln{N}})(\frac{GM_{\rm cl}}{r_{\rm h}^3})^{-1/2}. \label{trh} \end{equation}
Using the following typical values, $N=1000$, $M_{\rm cl}\sim800M_{\odot }$, and $r_{\rm h}=0.9$ pc at $t=3$ Myr in Figure 2(b), we obtain $\tau_{\rm
rh}\approx12$ Myr. For the isolated clusters here (without the galactic tide), the escape velocity of the cluster is $v_{\rm
esc}=2\langle \sigma^2 \rangle{}^{1/2}=2$ km s$^{-1}$; in the initial Gaussian distribution of velocities,
$\sim18.86\%$
of the stars have initial velocities $v\geq v_{\rm esc}$; therefore the dissolution timescale
of the cluster is \begin{equation} \tau_{\rm diss}\approx \tau_{\rm rh}/0.1886 \approx 64\ {\rm Myr}. \label{tdiss} \end{equation}
According to this analytic
dissolution timescale, we can estimate that the cluster will lose $7\,{\rm Myr}/ \tau_{\rm diss} \sim11\%$ of the stars between 3 and 10 Myr, which is consistent
with our simulation results.
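These estimates are easy to reproduce numerically. The sketch below recovers the 18.86\% escape fraction exactly from the truncated Gaussian; the timescales come out at the same order of magnitude as quoted above (the precise values depend on the adopted constants and mean quantities):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

G = 4.498e-3                       # pc^3 M_sun^-1 Myr^-2
N, M_cl, r_h = 1000, 800.0, 0.9    # stars, M_sun, pc

# Half-mass relaxation time (Spitzer 1987).
tau_rh = 0.138 * (N / np.log(N)) / np.sqrt(G * M_cl / r_h**3)

# Fraction of stars with v >= v_esc = 2 km/s in the v > 0 truncated
# Gaussian with mean 1 km/s and dispersion 1 km/s.
f_esc = (1.0 - norm.cdf(2.0, loc=1.0, scale=1.0)) / \
        (1.0 - norm.cdf(0.0, loc=1.0, scale=1.0))

tau_diss = tau_rh / f_esc
print(f"tau_rh ~ {tau_rh:.0f} Myr, f_esc = {f_esc:.4f}, "
      f"tau_diss ~ {tau_diss:.0f} Myr")
\end{verbatim}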
The mass segregation timescale for a star with mass $M$ is also important for the following discussion; here we use a typical expression from Spitzer (1987, p. 74),
\begin{equation} \tau_{\rm seg}=(\frac{\bar{M}_{\rm S}}{M})(\frac{N}{5\ln N})(\frac{r_{\rm h}}{\sigma}), \label{tseg} \end{equation} where $\bar{M}_{\rm S}$
is the mean mass of stars in the cluster. For the typical values used above, we obtain $\tau_{\rm seg}\leq 10$ Myr for stars with $M\geq2 M_\odot$ due to energy equipartition in our models.
\section{Planetary Systems around Cluster Members}
\subsection{General Statistical Results}
The general results of the four models are analyzed in detail in this section. Table \ref{tbl-2} presents the numbers of surviving
objects in the clusters in the different models. Although 80\%-85\% of the stars are still in the clusters, only 66\%-74\% of the planets stay in the
clusters. At least 10\% more planets were stripped from their host stars and obtained large velocities; these planets then
escape from the cluster more easily than the more massive stars. In these YOCs, we divide the surviving planetary systems
into the following three classes.
\begin{itemize}
\item {\it 2pisi systems}. Stable planetary systems maintaining both original planets. 55\%-68\% of the initial systems are in this class.
\item {\it 1pisi systems}. Planetary systems that lost one planet and have only one original planet surviving. Only 6\%-18\% of the stars ejected one planet.
\item {\it pisj systems}. Recaptured planetary systems, which are very rare. In our model, they account for less than $1\%$.
\end{itemize}
Hereafter these systems are referred to as 2pisi, 1pisi, and pisj systems. Besides the retention of planetary systems, a fraction of
stars ($\sim$6\%-13\%) lost both planets ($N_{\rm 0ps}$), primarily because of close flybys nearer than 100 AU. Due to the ejection of planets, only 2.7\%-5\% of the planets are still cruising in the cluster and become FFPs
($N_{\rm FFPi}$), while at least 12\%-21\% of the planets become FFPs outside the host cluster ($N_{\rm FFPo}$). In our model,
$\sim47\%$ of the stars are initially in binaries, and after a 10 Myr evolution in the cluster $\sim64\%$ of the binary systems are preserved. After the evolution of the cluster, we also
find a few ``naked'' stars, $N_{\rm ss}$, without any planetary or stellar companions.
In Table \ref{tbl-2} we see a rough correlation with the binding energy of planets ($E_{\rm b}$). $N_{\rm P}$, $N_{\rm FFPi}$, $N_{\rm
FFPo}$ and $N_{ss}$ are negatively correlated with $E_{\rm b}$, while $N_{\rm 2pisi}$ shows the opposite trend. These results are reasonable because
more energy is needed to disrupt planetary systems with higher $E_{\rm b}$. The J5E2 model, with a larger $E_{\rm b}$, has 10\%
more surviving planets than model J10E4 (also see Figure 6(a)). As \citet{Spurzem09} detailed the influences on single planetary
systems with different semi-major axes, we do not survey the influence of $E_{\rm b}$ in detail here, given our limited set of four
models. More details about binary systems and FFPs will be discussed in Sections 5.2 and 5.3. Here we focus on the architectures and other properties of
surviving bound planetary systems.
\subsection{Architectures of Planetary Systems}
Figure 3 shows the orbital architectures of planetary systems in the different models. In each $a$-$e$ plane, planetary systems are divided into
three classes: 2pisi (green triangles), 1pisi (black circles) and pisj (red squares) systems. The filled symbols represent Earth-like
planets while the open symbols represent Jupiter-like planets. Most planets still stay near their initial locations. The outer Jupiter-like
planets can change their angular momenta during flybys more easily than the inner planets, and are therefore more likely to change their locations or
eccentricities. In panels (b)-(d), Earth-like planets are difficult to eject; therefore most 1pisi systems retain the Earth-like planet
(filled circles).
The distributions of eccentricities and inclinations of these three classes of planetary systems are shown in Figure 4. Evidently, only a
negligible fraction of the 2pisi systems change their initial eccentricities ($\sim$3\% with $e>0.1$) or
inclinations ($\sim$6\% with $i>10^\circ$). Meanwhile a large part of the 1pisi systems changed their eccentricities by more than 0.1
($\sim50\%$), or their inclinations by more than $10^\circ$ ($\sim30\%$). Ejected planets become FFPs, which can be recaptured by other
stars randomly. Hence, the pisj systems tend to have much wider and flatter distributions of both inclinations and eccentricities.
We note that in these 1pisi systems, 282 inner planets survived compared with 223 outer ones, a ratio of about 1.3:1.
Furthermore, to reveal the different properties of the systems retaining inner or outer planets, we plot Figure 5, adding pisj systems
as blue triangles. Black squares denote 1pisi systems whose outer planets survived, while red circles represent systems where the inner planets
survived. In Figure 5(a), we give the final locations of all the systems in the inclination-eccentricity plane, which is divided into four
regions by two lines: eccentricity $=0.1$ and inclination $=10^\circ$. The red line is inclination $=90^\circ$; planets above this
red line move in retrograde orbits. Figure 5(b) gives the fraction of systems in the four regions. Nearly all the pisj systems have an
eccentricity $>0.1$, and none of them have both a small eccentricity ($<0.1$) and a small inclination ($<10^\circ$). In the 1pisi systems where the
outer planet is ejected, about 45\% of the surviving inner planets have small eccentricities and inclinations. However, if the inner one is
ejected, only about 25\% of the surviving outer planets have small eccentricities and inclinations, while more than 70\% of the planetary
systems have clearly changed their eccentricities ($>0.1$) or inclinations ($>10^\circ$).
The spin-orbit misalignment can be estimated via the Rossiter-McLaughlin effect (see \citealp{Winn10}); thus the inclination statistics in our
simulations are also interesting. Assuming for simplicity that all stars spin in a fixed direction, we find many planets in
misaligned orbits ($>10^\circ$) in clusters. More than $25\%$ of the 1pisi systems whose outer planets were ejected
have the surviving planets in misaligned orbits. The same misaligned fraction is also obtained in 1pisi systems whose inner planets were ejected.
We also find 21 planets ($\sim4\%$) in
retrograde orbits among the surviving single-planet systems. These fractions are quite low except in pisj systems, as shown in the smaller panel in
Figure 5(b). However, they are still higher than the occurrence in 2pisi systems, which contain only 14 ($<0.6\%$)
planets in retrograde orbits. Based on these results and considering all the planetary systems in the clusters, we estimate a lower
limit of $\sim 6\%$ for the fraction of misaligned planetary systems in VYOCs. Only 1\% have planets in retrograde orbits.
\subsection{$r$-correlations and Mass-correlations}
As pointed out by \citet{Binn87}, the frequency of close encounters is sensitive to the stellar density, which decreases with both
evolution time and the distance from the cluster center $r$ (the same hereafter). As shown in Figure 6(a), the fractions of surviving
planets are very different from those of stars (Figure 2(c)) due to the fast decay of the stellar density $\rho$ in the centers of the clusters.
In all four models, the fraction of surviving planets decreases in the first 1 Myr. After that, $\rho$ decays quickly and the
decrease rate becomes smaller and smaller.
Similar to this time dependence, the stability of planets changes with location in the OCs. In the centers of the clusters, the
density can be much larger than in the outer regions, which leads to a higher frequency of close encounters (see Equation (\ref{enc})).
Planetary systems near the centers of OCs can thus be disrupted quickly, as is evident in Figure 6(b). The
number of surviving planets is denoted by $N_{\rm P}$. The distribution of unstable planetary systems with $N_{\rm P}=0$ peaks sharply at 0.958 pc, while a
broader distribution of systems with $N_{\rm P}=1$ peaks around 1 pc. The peak for stable systems with $N_{\rm P}=2$ is located at 1.29
pc, i.e., these stable systems stay in the outer region compared with the other two unstable classes. This is consistent with the fact that
planetary systems in the inner regions of OCs are probably unstable. In the inner 1 pc$^3$, about 40\%, 30\% and 20\% of the planetary systems
have $N_{\rm P}=0, 1$ and 2, respectively. About 80\% (the horizontal dotted line) of the systems with $N_{\rm P}=0, 1, 2$ are concentrated within
approximately 2, 3, and 4 pc, respectively. We call this correlation the {\em r-correlation}.
The variations of angular momentum $\Delta L$ for planets in these three classes of systems are also plotted in Figure 7. We find an obvious
correlation between the maximum $\Delta L$ (in units of $M_{\rm J}\cdot\sqrt{{\rm GM}_\odot\cdot {\rm AU}}$) and $r$: in the
centers of the clusters, more close encounters carry more angular momentum away. In Figure 7, we show linear estimations of the upper limit,
\begin{equation}|\Delta L|=a(10-r)+b, \end{equation}
where the constants $a,b$ for the different classes $N_{\rm P}=2, 1, 0$ are listed in Figure 7. In our VYOC models, the $\Delta L$ of planetary
systems in these three classes at different locations must be less than this limit.
As shown in Equation (\ref{tseg}), the mass segregation timescale can be less than 10 Myr for stars $>2M_\odot$. Mass segregation leads
to a spatial distribution of stars that correlates with stellar mass: massive stars sink into the inner region while low-mass stars
cruise in the outer region. This is similar to the $r$-correlation. However, another influence of stellar mass on planetary stability has the opposite effect:
massive stars bind their planets more tightly, so more energy is needed to release these
planets. The stellar mass correlation (called the {\em mass-correlation}) combines these two competing effects.
In Figure 8(a) we give the fraction distribution of planetary systems ($f$) for $N_{\rm P}=0, 1, 2$. No planets survived around stars more
massive than $16M_\odot$, because these stars have sunk deep into the center of the cluster on a very short timescale, $\tau_{\rm
seg}<1.2$ Myr, and planetary systems in the center of the cluster are much less stable, as pointed out above. The black dashed-line columns show
the observational data (with the same labels as in Figure 6(b)). In order to compare with the
initial IMF $f_0$ of the host stars, we present a normalized fraction (divided by $f_0$) in Figure 8(b) to highlight the fraction
variation. Systems with $N_{\rm P}=2$ remain stable for stars with $M<2.5M_\odot$. A sharp decrease in the fraction for $N_{\rm P}=2$
and large enhancements for $N_{\rm P}=1,0$ occur at $M=2.5M_\odot$. This critical mass of about 2.5$M_\odot$ marks the boundary between the
two competing effects. For more massive stars, most planets around them can still be disrupted due to the high stellar density ($\rho$)
in the inner region of the cluster, although they are bound more tightly. Less massive stars cruise in an
environment with a much lower $\rho$ and hardly lose any planets. From an observational perspective, three of the four planetary systems found in OCs
orbit stars with masses $< 2.5M_\odot$; the remaining one, $\epsilon$ Tauri, has a stellar mass of $2.7M_\odot$ \citep{Lovis07}. We also predict that most planets ($>80\%$ in our results) will exist around less
massive stars (0.1-1 $M_\odot$) in these VYOCs.
Although the observational data on planetary systems in OCs are limited, the $r$-correlation and mass-correlation obtained here are
still consistent with observations. In the future, more planetary systems detected in OCs can verify and refine these correlations.
\section{Planets in Binaries}
In our model, $\sim47\%$ of the stars in an OC are initially in binaries, and the binary fraction decreases with time due to stellar flybys.
Investigating the final fates of planetary systems in these binaries is very helpful for studying the stability of planetary systems in the
cluster. In this section, we focus on the orbital variations of binaries (Section 5.1) and especially the planetary systems in binaries
(Section 5.2).
\subsection{Binary Systems}
According to our simulations, the final binary fraction $f_b$ ($\sim36\%$) is reduced compared with the initial value ($\sim47\%$);
nearly 64\% of the binary systems in clusters remain after 10 Myr. Besides the
binaries ejected from the clusters (about 15\%-20\%, as shown in Section 3), $\sim$16\%-21\% of the binaries were disrupted in the clusters due to
close encounters. We estimate the final number of binary systems $N_{\rm b}$ from two factors, a dissolution factor $\exp(-{\rm
Age}/\tau_{\rm diss})$ and a disruption factor $\exp(- {\rm Age}/\tau_{\rm disrupt})$: \begin{equation} N_{\rm b}=N_{\rm b0}\times\exp(-{\rm
Age}/\tau_{\rm Nb}), \label{Nb} \end{equation} where Age is the age of the cluster. The timescale for the decay of the binary number can be
calculated as
\begin{equation} \tau_{\rm Nb}=\frac{\tau_{\rm diss}\times\tau_{\rm disrupt}}{\tau_{\rm diss}+\tau_{\rm disrupt}}. \end{equation}
$\tau_{\rm diss}$ is given in Equation (\ref{tdiss}). To estimate $\tau_{\rm disrupt}$, we use the encounter timescale obtained by
\citet{Binn87}: \begin{equation} \tau_{\rm enc}\simeq 33{\rm Myr}\times(\frac{100{\rm pc}^{-3}}{\rho})(\frac{v}{\rm 1\ km\ s^{-1}})(\frac{10^3 {\rm
AU}}{r_{\rm peri}})(\frac{M_\odot}{m_{\rm t}}). \label{enc} \end{equation} If the average separation of binary stars is $\bar{a}_{\rm b}$,
and assuming the closest encounter distance is $r_{\rm peri}\simeq 2\bar{a}_{\rm b}$, the perturbation from encounters will be of the same order
as that from the binary companion. Such encounters will probably disrupt the binaries. Considering that a binary can encounter another single star
or binary, we use 3.5 times the mean stellar mass $\bar{M}_{\rm S}$ as the total mass of stars during the encounter, $m_{\rm
t}=3.5\bar{M}_{\rm S}$. Taking a typical stellar velocity $v=1\ {\rm km\ s^{-1}}$, we finally obtain the binary disruption timescale from
Equation (\ref{enc}): \begin{equation} \tau_{\rm disrupt}\simeq 33 {\rm Myr}\times(\frac{100{\rm pc}^{-3}}{\rho})(\frac{500 {\rm AU}}{\bar{a}_{\rm
b}})(\frac{M_\odot}{3.5\bar{M}_{\rm S}}). \label{disrupt} \end{equation} In our model $\bar{a}_{\rm b}=185$ AU and $\bar{M}_{\rm S}=0.8M_\odot$. The
stellar density decays so quickly that here we choose $\rho\sim60$ pc$^{-3}$ (the value at 1 pc in the cluster at a moderate age of 3
Myr). Substituting these typical values into Equation (\ref{disrupt}), $\tau_{\rm disrupt}\approx53$ Myr. Thus about 17\% of the binaries are
disrupted after 10 Myr, which is consistent with the fraction of disrupted binaries in our simulations.
Based on observations of YOCs, the binary fraction $f_{\rm b}$ can be estimated by: \begin{equation} f_{\rm b}=f_{\rm b0}\times\exp(\rm
-Age/\tau_{\rm disrupt}). \label{fb} \end{equation} Taking $\tau_{\rm disrupt}=53$ Myr and the initial binary fraction $f_{\rm b0}=47\%$ in our
model, we calculate a binary fraction $f_{\rm b}=39\%$ after 10 Myr according to Equation (\ref{fb}), which is close to the $f_{\rm
b}=36\%$ found in our simulations.
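Substituting the quoted values into Equations~(\ref{disrupt}) and~(\ref{fb}) reproduces these numbers, e.g. with the short sketch below (the velocity factor $v/1\ {\rm km\ s^{-1}}=1$ is dropped):
\begin{verbatim}
import numpy as np

# Binary disruption timescale with rho ~ 60 pc^-3, a_b = 185 AU,
# and mean stellar mass 0.8 M_sun.
rho, a_b, M_mean = 60.0, 185.0, 0.8
tau_disrupt = 33.0 * (100.0 / rho) * (500.0 / a_b) / (3.5 * M_mean)

# Surviving binary fraction after 10 Myr, starting from 47%.
f_b = 0.47 * np.exp(-10.0 / tau_disrupt)
print(f"tau_disrupt ~ {tau_disrupt:.0f} Myr, f_b(10 Myr) ~ {f_b:.0%}")
\end{verbatim}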
The distributions of semi-major axes $a_{\rm b}$ and eccentricities $e_{\rm b}$ of the surviving binary systems are shown in Figure 9. The red
bars show the initial distribution ($f_{\rm 0}$), while the green bars show the distribution after 10 Myr ($f_{\rm 10}$).
The upper panel gives the relative fraction ($f_{\rm 10}/f_{\rm 0}$). The dynamical evolution in clusters disrupts wider binary
systems more easily due to their lower binding energy, and the eccentricities of binary systems can also be pumped. Therefore, compared
with the initial distribution, the fraction of binary systems with small $e_{\rm b}$ ($<0.4$) or large $a_{\rm b}$ ($>200$ AU) decreases
after 10 Myr. Meanwhile, more binary systems with $a_{\rm b}<100$ AU ($>45\%$) or moderate $e_{\rm b}$ (0.4-0.8) ($>55\%$) are left.
Very few new binary systems formed during the evolution of the clusters. In our four models, only 15 new binaries formed in total, with
a mean eccentricity of 0.68 and a mean inclination of 1.66 rad. Their $a_{\rm b}$ show no regular pattern, ranging from $60$ to $1000$ AU. The contribution of
these binaries to $f_b$ is quite small and can be neglected.
\subsection{Planets around Binary Stars}
During the disruption of binary systems, the planets around each star have different fates. Some collide with their host stars, some
are ejected out of their systems, and some still orbit their host stars. In our models, there are a total of 304 disrupted binary
stars that stay in the clusters after 10 Myr, while the others are ejected out of the cluster. Sixty-four of them lose all their planets, 59 stars have
only one planet, and the other 181 stars have two bound planets. We are also concerned with the inclinations of these
planets. For the systems with one planet, 5 of the 59 planets are in retrograde orbits. For those with two planets, five systems have at
least one planet in a retrograde orbit. In systems containing two planets, planets with mutual inclination $>i_{\rm Kozai}\sim42^\circ$ experience the Kozai effect \citep{Koz62}. There are only six such systems around disrupted stars. Around the original single stars,
only five systems have a mutual inclination of $>i_{\rm Kozai}$. We conclude that, due to the perturbations during the disruption of binary
systems, the mutual inclinations of planets around these disrupted single stars are large, and such systems have a greater chance of experiencing the Kozai
effect during their subsequent long-term evolution.
Since many of the binary systems survived, we next study the planetary systems in these binaries. As we set two planets
around each star, there are a total of four planets in each binary system initially. Figure 10 shows the remaining number of planets in binaries,
$N_{\rm P}$, in the $a_{\rm b}$-$r$ and $e_{\rm b}$-$a_{\rm b}$ planes, where $r$ is the distance from the barycenter to the center of the
cluster.
As shown in panels (a) and (b), binary systems with larger angular momentum (larger $a_{\rm b}$ and smaller $e_{\rm b}$) and
smaller stellar density (larger $r$) tend to have more surviving planets.
Figure 11 shows $N_{\rm P}$ in different $X$-$Y$ planes. We obtain a rough criterion: binary systems with $a_{\rm b}(1-e_{\rm b}^2)>100$ AU
tend to retain all four planets, while the others more or less lose planets. There is still a small fraction of
close binaries that have small $e_{\rm b}$ and can preserve all their planets very well. As seen in Figure 10(a), the distance from the
cluster center $r$ has less influence than it does for single stars.
Statistically, only 1411 planets (58\% of the initial planets) stay in 604 binary systems after 10 Myr, i.e., each binary star retains 1.17
planets on average. In the 15 newly formed binaries, a total of 22 planets survived; therefore we obtain a much smaller mean planet
number of 0.7 around each star in these newly formed systems. Compared with single stars, which have $\sim1.8$ planets around each
star on average, multi-planetary systems in binaries are much less stable than those around single stars.
\section{FFPs and Ejected Objects}
Besides planets bound to stars, there are abundant FFPs in our universe \citep{Sumi11}. In our model, only a few FFPs remain
in the cluster, while there is an abundance of FFPs ejected outside the cluster. This is due to the much larger mean velocities of FFPs compared
with the velocities of stars in the cluster; therefore these FFPs are much more easily ejected. In this section, we will reveal the
distribution of FFPs and other properties of the ejected objects.
As the outer Jupiters are more easily ejected from planetary systems, as seen in Figure 3, there are many more Jupiter-like FFPs
(about 2.5 times) than Earth-like FFPs. However, their spatial distributions are similar, as shown in Figure 12, i.e., the CDFs of these
planets with different sizes are nearly the same. Nearly half of the FFPs are concentrated within 2 pc, while 80\% of the FFPs are concentrated within 4 pc. As
shown in Figure 13(a), the fraction of FFPs depends on their location $r$; we fit the distribution with the curve \begin{equation}
f_{\rm FFPs}=0.003+\frac{0.07}{(r-1.14)^2+0.97}. \label{FFP} \end{equation} The maximum fraction peaks at 1.14 pc.
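For reference, a minimal evaluation of Equation~(\ref{FFP}) recovers a peak fraction of $\sim0.075$ at 1.14 pc:
\begin{verbatim}
def f_ffps(r):
    """Fitted radial fraction profile of FFPs; r in pc."""
    return 0.003 + 0.07 / ((r - 1.14)**2 + 0.97)

print(f"peak: f(1.14 pc) = {f_ffps(1.14):.3f}")   # ~0.075
print(f"      f(4.00 pc) = {f_ffps(4.0):.3f}")    # ~0.011
\end{verbatim}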
Besides these FFPs cruising in the clusters, a large number of objects ($N$=2868 in total) were ejected out of
the clusters. The fractions of the components of the ejected objects are shown in Figure 13(b): 48.2\% of these objects are FFPs ($N$=1381), while
two-planet systems also have a quite large fraction of 40.6\% ($N$=1164). The very rare one-planet systems make up only 2\% ($N$=58). The remaining
9.2\% ($N$=265) are single stars with no planetary or stellar companions around them. Most FFPs are ejected out of their host clusters, as
seen in Table \ref{tbl-2}. $N_{\rm FFPo}$ is negatively correlated with the binding energy. Of all these FFPs, $\sim70\%$ are
contributed by planets in binaries, i.e., planets in binary systems appear to be much less stable.
Based on the above results, FFPs are most likely to be found in the inner regions of YOCs, and Jupiter-like FFPs are common. Most FFPs
are ejected from their host clusters and cruise in deep space. Of the planetary systems ejected out of host clusters,
more than 80\% keep their initial number of planets, and few planets ($<10\%$) in these systems have their orbits changed much.
Based on this conclusion, our solar system, which is thought to have formed in a cluster environment, most likely already had a configuration
similar to its current one before the ejection from its host cluster.
\section{Discussions and Conclusions}
In order to neglect the galactic tidal force and stellar evolution in VYOCs, we chose an isolated, isotropic, and non-rotating cluster model in
this paper. We investigated the configurations of both multiple and single planetary systems as well as FFPs in VYOCs
without residual gas. In these clusters, a modified King model is adopted to produce the spatial distribution of stars, and virial
equilibrium is assumed. Different from previous works, we add a large binary fraction as well as an IMF of
stars to the cluster model, which is much more realistic than previously adopted models.
Our major conclusions in this paper are listed as follows:
\begin{itemize}
\item After dynamical evolution for 10 Myr, the clusters have expanded but are still in virial equilibrium. The general statistical
results of the four models (see Table \ref{tbl-1}) are presented in Table \ref{tbl-2}. More than half of the planetary systems still
retain their original number of planets. A cluster can lose about 26\%-34\% of its originally formed planets. The numbers of surviving planets
($N_{\rm P}$), FFPs inside the cluster ($N_{\rm FFPi}$), FFPs outside ($r>10$ pc) the cluster ($N_{\rm FFPo}$), and single stars without any
companions ($N_{\rm ss}$) depend on the binding energy of the planets, $E_{\rm b}$.
\item More than 90\% of the 2pisi systems change their eccentricities by less than 0.1 and their inclinations by less than 10$^\circ$, while most 1pisi systems have planets whose eccentricities or inclinations changed markedly. Planets in pisj systems have wide, flat distributions of eccentricities and inclinations. In 1pisi systems, inner planets are
preserved preferentially. If an inner planet was ejected, the remaining planet is more likely to have changed its eccentricity or
inclination (Section 4.2).
\item Under the assumption that all the stars spin in a fixed direction in our cluster models, at least 6\% of the stars have misaligned planetary systems and 1\%
have retrograde planets. These spin-orbit misaligned systems are likely to be generated in unstable systems (1pisi or pisj).
\item Unstable planetary systems are concentrated in the inner regions of the clusters, while the stable systems follow a broader
distribution in the outer regions, as shown in Figure \ref{fig6}(b). With a sharp peak at $\sim$1 pc, the fraction of planetary systems with
$N_{\rm P}=0$ in the inner 2 pc is about 80\%. The fractions of systems with $N_{\rm P}=1,2$ within 2 pc are about 60\% and 50\%,
respectively. Our results are consistent with observations: two of the four known planetary systems are within 2 pc of the centers of their OCs.
\item The stellar mass is also a key factor for the stability of bound planets. We obtained a critical mass of $\sim2.5M_\odot$ in Figure 8,
above which the planetary systems are probably unstable and most of them lose at least one planet. Planetary systems around stars with masses
$<2.5M_\odot$ are likely to keep all their original planets. According to our results, a large fraction ($>80\% $) of the bound
planets can be found around stars with $M=0.1$-$1 M_\odot$ in VYOCs. Massive stars ($>16M_\odot$) tend to lose all the planets around them.
We also compared our results to observations.
\item In YOCs, binary systems can be ejected or disrupted on the timescales $\tau_{\rm diss}$ and $\tau_{\rm disrupt}$, and the binary fraction can be estimated by Equation (\ref{fb}) in Section 5.1.
In our model, nearly 64\% of the binaries still exist in the cluster after 10 Myr; however, their orbits have been changed by the evolution of the cluster.
The number of binary systems with $a_{\rm b}>200$ AU decreases by disruption, and binary systems with $e_{\rm b}<0.4$ are likely to have their eccentricities pumped
to moderate values. At the same time, a small number of new binary systems (15 in total) have formed.
\item The planets around disrupted binary stars are more unstable than those around single stars. After the violent perturbations during binary disruptions, more than 30\% of the planetary systems lost at least one planet. However, planetary systems around these disrupted binary stars contribute many of the retrograde planets, as pointed out in Section 5.2.
\item The stability of planetary systems in binaries depends on $a_{\rm b}$ and $e_{\rm b}$, as shown in Section 5.2. We give a rough criterion:
planets in binary systems with $a_{\rm b}(1-e_{\rm b}^2)>100$ AU are hardly ever disrupted during the cluster evolution. The influence of
$r$ on binary systems is less obvious than for single stars.
\item 15\%-25\% of the planets are released as FFPs, and only 1/4 of them are still cruising in the OCs after 10 Myr. The spatial
distributions of Jupiter-like and Earth-like FFPs are similar, so their CDFs correspond to each other. These FFPs mainly stay in the inner
region of the cluster: more than 80\% of the FFPs are concentrated within 4 pc, and the
maximum fraction peaks around 1.14 pc, as shown in Figure 13(a) in Section 6.
\item The ejected objects contain $\sim48\%$ FFPs (see Figure 13(b)). More than 80\% of the ejected stars still have their initial number of bound planets. This indicates that our solar system, which is thought to have formed in a cluster environment, most likely already had a configuration similar to its current state.
\end{itemize}
However, some assumptions in our cluster models are still too simple. First, the residual gas in the cluster is not included. In
clusters, gas of limited mass can hardly influence the dynamical evolution of the cluster. However, the gas disks around stars play
crucial roles in the formation and evolution of planets \citep{Liu11}. The gravity of an outer gas disk can also lead to secular
effects on multi-planet systems and, under some conditions with small mutual inclination, to the onset of the Kozai effect
\citep{Chen12}. In clusters, flybys also influence the structure of the gas disk and consequently change the occurrence of planets
\citep{FR09}.
Second, the isotropic assumption and virial equilibrium in our cluster model have also been questioned by some authors. \citet{SA09} and
\citet{Schmeja11} found substructure in young star-forming regions. As mentioned by \citet{Parker12}, the number of surviving
planets also depends on the initial virial parameter $Q$ of the cluster.
We only chose four models with different planetary systems; therefore it is hard to discuss the influence of the binding energy $E_{\rm
b}$ in detail. In further work, additional planetary systems are needed to study the correlation between planetary stability
and $E_{\rm b}$. We only set two planets around each star as a first step toward considering multi-planetary systems. The stability
of a system with more planets might be much more sensitive to close encounters. Different properties of the clusters, i.e., the total
mass, core radius, number of stars, etc., lead to different timescales of cluster evolution \citep{Mal07b}. Therefore, the fractions of
preserved or ejected planets depend on these parameters too.
The rotation of a cluster will directly influence its dynamical evolution. Observations of the nuclear star cluster of NGC 4244 show obvious
rotation \citep{Seth08}. Since we adopted a non-rotating cluster, we can only study a one-dimensional $r$-correlation in Section 4.3.
When a rotation rate is added to the clusters, the stability of planetary systems at different latitudes at the same distance varies due to
the different mean velocities $v$, which enter the expression for $\tau_{\rm enc}$ (Equation (\ref{enc})).
As we only study VYOCs here, the galactic tidal effect and stellar evolution are not important, as pointed out in Section 2.1. However, when studying
the longer evolution of clusters with ages $>10$ Myr, the galactic tide and stellar evolution timescales need to be estimated
again. Galactic tides tend to evaporate cluster members and change the properties of clusters \citep{Baum03}. During stellar evolution,
a star will pass through the red giant branch, horizontal branch, asymptotic giant branch, etc., in the H-R diagram. The stability of planets around it must be checked carefully
during all these phases \citep{VL07}.
{\bf Acknowledgements} This work is supported by the National Basic Research Program of China (2013CB834900), National Natural Science
Foundations of China (Nos. 10833001, 10925313, and 11078001), National Natural Science Funds for Young Scholar (No. 11003010), Fundamental
Research Funds for the Central Universities (No. 1112020102), and the Research Fund for the Doctoral Program of Higher Education of
China (Nos. 20090091110002 and 20090091120025).
\label{sec:intro}
Image-based salient object detection (SOD) aims to detect and segment objects that capture human visual attention, which is a preliminary step for subsequent vision tasks such as object recognition and tracking.
Over the past decades, many benchmark datasets~\cite{yan2013hierarchical, yang2013saliency, wang2017stagewise} have been constructed for the study of SOD. These datasets include natural images with simple and complex scenes, which usually do not focus on specific tasks. Such datasets can be referred to as general SOD datasets (or conventional SOD datasets). Based on these datasets, many SOD methods~\cite{zhang2017amulet,liu2018picanet,wang2018detect,su2019selectivity} have been proposed and achieve great performance. Correspondingly, this kind of task is called general SOD (or conventional SOD). However, in the real world, what we usually need is to deal with specific application scenarios (\emph{e.g.}, automatic driving and medical image processing). For these scenarios, the analysis of salient objects plays an important role in assisting subsequent high-level visual tasks and understanding these scenes. As a distinction, this kind of task can be named task-aware SOD.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,height=5cm]{figures/dataset_motivation_2.png}
\vspace{-7mm}
\caption{Comparisons of the conventional and task-specific SOD dataset. Images and ground-truth masks of the (a) conventional dataset and (b) task-specific dataset are from DUT-OMRON~\cite{yang2013saliency} and CitySaliency, respectively.}
\label{fig:dataset_motivation}
\end{figure}
Recently some task-specific datasets have been collected. These datasets contain images of specific tasks, such as driving, webpage-related behavior and gaming. For example, Alletto~{\em et al.}~\cite{alletto2016dr} proposed a publicly available actual driving dataset DR(eye)VE with task-specific fixation maps. Based on these datasets, task-aware fixation prediction (FP) has made progress. However, there is still a lack of task-specific datasets for SOD, which hinders the development of task-aware SOD.
Toward this end, we propose a new driving task-oriented dataset (denoted as \textbf{CitySaliency}), which contains 3,475 images with pixel-level annotation based on the common driving task. Some representative examples are shown in the task-specific dataset of Fig.~\ref{fig:dataset_motivation}. In constructing this dataset, we first collect these images from the driving scene. Given these images, we ask 25 volunteers to view them while simulating the driving task, in order to collect eye-tracking data and human fixations.
Based on these data, we compute the saliency value of each object with the help of semantic annotations to determine ground-truth masks of salient objects. Finally, the images and masks together form the task-specific dataset. Comparing CitySaliency with conventional datasets, we find two main challenges: 1) cross-domain knowledge difference. As shown in Fig.~\ref{fig:dataset_motivation}, cars and persons are the focus in conventional datasets, while in CitySaliency more attention is paid to drivable roads, ignoring the vehicles (first two rows) and pedestrians (last two rows of Fig.~\ref{fig:dataset_motivation}) on both sides of the roads. This knowledge difference about task-related targets poses a challenge for SOD. 2) task-specific scene gap. Based on our statistical analysis, we find that CitySaliency has three main characteristics: a more discrete spatial distribution, more salient objects, and more area occupation, which reflects the complexity of the scenes in CitySaliency. These two problems make task-aware SOD difficult.
To address these problems, we propose a baseline model for driving task-aware SOD via a knowledge transfer convolutional neural network. In the network, we put forward an attention-based knowledge transfer module, which transfers knowledge from the general domain to the task domain to make up the knowledge difference, assisting the network in finding the salient objects. In addition, we introduce an efficient boundary-aware feature decoding module to finely decode features for the many salient objects and their structures in complex task-specific scenes. The two modules are integrated in a progressive manner to deal with task-aware SOD. Experiments show the proposed method outperforms 12 state-of-the-art methods on CitySaliency, which demonstrates the effectiveness of the proposed knowledge transfer network.
The main contributions of this paper include: 1) we propose a new task-specific SOD dataset, which can be used to boost the development of task-aware SOD; 2) we propose a baseline model to deal with the difficulties in the proposed dataset, which consistently outperforms 12 state-of-the-art algorithms; and 3) we provide a comprehensive benchmark of state-of-the-art methods and the proposed method on CitySaliency, which reveals the key challenges of the proposed dataset and validates the usefulness of the proposed method.
\section{A New Dataset for Task-aware SOD}
We propose a new driving task-oriented dataset (named as \textbf{CitySaliency}) for task-aware SOD, which focuses on the common urban driving task. In this section, we will introduce the details in constructing the dataset.
\subsection{Data Collection}
The proposed dataset is built on Cityscapes~\cite{cordts2016cityscapes}, which is a complex urban driving dataset for instance-level semantic labeling. Cityscapes consists of 5,000 images with pixel-level semantic annotation (as shown in Fig.~\ref{fig:ddataset_annotation}(a)(b)), which are split into 2,975 images for training, 500 images for validation and 1,525 for testing (note that the annotation of the testing set is withheld for official benchmarking purposes). In constructing CitySaliency, we combine the training and validation sets (3,475 images) as the raw data source.
\subsection{Psychophysical Experiments}
To annotate salient objects in complex urban driving scenes, there exists a tough problem: there may be several candidate objects, while different annotators may have different biases in determining which objects are salient. To alleviate this influence, we conduct psychophysical experiments to collect human fixations as auxiliary information, as done in~\cite{li2017benchmark}.
In the psychophysical experiments, 25 subjects with normal vision who had never seen the collected images participate. During the experiment, we show the images on a 22-inch display with a resolution of 1680$\times$1050 and use an eye-tracking apparatus (SMI RED 500) with a sampling rate of 500 Hz to record various eye movements including fixations, saccades and blinks.
After the data collection, various information about human eye movements is recorded.
Based on the fixation data, we can compute the fixation density map for each image to annotate the ground-truth salient regions as in~\cite{li2017benchmark}.
The generated fixation ground truth is displayed in Fig.~\ref{fig:ddataset_annotation}(c).
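Fixation density maps of this kind are commonly obtained by accumulating the recorded fixation points and smoothing them with a Gaussian kernel approximating the foveal extent; a minimal sketch follows, where the kernel width sigma_px is our assumption rather than a value specified here:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fix_xy, h, w, sigma_px=30.0):
    """Accumulate fixation points (x, y) into an (h, w) map and smooth
    with a Gaussian; sigma_px approximating one degree of visual angle
    is an assumption, not a value specified in this paper."""
    m = np.zeros((h, w), dtype=np.float64)
    for x, y in fix_xy:
        m[min(int(y), h - 1), min(int(x), w - 1)] += 1.0
    m = gaussian_filter(m, sigma_px)
    return m / (m.max() + 1e-12)     # normalize to [0, 1]
\end{verbatim}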
\begin{figure}[t]
\centering
\includegraphics[width=1.00\columnwidth,height=4.5cm]{figures/dataset_annotation_2.png}
\vspace{-7mm}
\caption{Process of annotating CitySaliency. For (a) an image, we get (c) the human fixations by psychophysical experiments. With (b) semantic annotation from Cityscapes~\cite{cordts2016cityscapes} and (c), we generate (d) the ground truth of salient objects.}
\label{fig:ddataset_annotation}
\end{figure}
\begin{figure}[t]
\centering
\vspace{-2mm}
\includegraphics[width=1.01\columnwidth,height=2.3cm]{figures/dataset_aam.png}
\vspace{-7mm}
\caption{AAMs of two conventional datasets and CitySaliency.}
\label{fig:dataset_aam}
\end{figure}
\subsection{Generation of Salient Object Ground Truth}
Since Cityscapes provides pixel-level and instance-level semantic labeling, the locations of objects in CitySaliency are known. We represent the fixation density map of an image $\mathcal{I}_t$ as $S_t$.
Based on the locations of objects and the computed fixation density maps, we can compute the saliency score $S(\mathcal{O})$ for each object $\mathcal{O} \in \mathcal{I}_t$ from the global perspective:
\begin{equation}
\begin{split}
& S(\mathcal{O}) = \\
& \frac{\sum_{\mathcal{I}_t \in \mathcal{V}} \text{I}(\mathcal{O} \in \mathcal{I}_t) \cdot \left( (1 + \frac{1}{\Vert\mathcal{O}\Vert}) \sum_{p \in \mathcal{O}} S_t(p) \right)}{\sum_{\mathcal{I}_t \in \mathcal{V}} \text{I}(\mathcal{O} \in \mathcal{I}_t)},
\end{split}
\label{eq:dataset_annotation}
\end{equation}
where $\Vert\mathcal{O}\Vert$ is the number of pixels in object $\mathcal{O}$ and $\text{I}(\cdot)$ is the indicator function. In Eq.~(\ref{eq:dataset_annotation}), the saliency of an object is defined as the sum of its total fixation density and its average fixation density. After that, we select objects with saliency scores above an empirical threshold of 80\% of the maximum saliency score. In this manner, we get several salient objects with pixel-wise instance-level annotation and different saliency scores, as shown in Fig.~\ref{fig:ddataset_annotation}(d).
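For a single image, Eq.~(\ref{eq:dataset_annotation}) reduces to the total plus average fixation density inside each object mask; a minimal sketch (the function name and the per-image simplification are ours):
\begin{verbatim}
import numpy as np

def object_saliency(S, masks, thresh=0.8):
    """Per-image object saliency: total plus average fixation density
    inside each object mask, then thresholding at 80% of the maximum.
    S: (H, W) fixation density map; masks: list of boolean (H, W) arrays."""
    scores = np.array([(1.0 + 1.0 / m.sum()) * S[m].sum() for m in masks])
    keep = scores >= thresh * scores.max()
    return scores, keep
\end{verbatim}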
\begin{figure}[t]
\centering
\includegraphics[width=0.90\columnwidth, height=2.7cm]{figures/dataset_number.png}
\vspace{-4mm}
\caption{Histograms of the number of salient objects.}
\label{fig:dataset_number}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.90\columnwidth, height=2.7cm]{figures/dataset_area.png}
\vspace{-4mm}
\caption{Histograms of the area of salient objects.}
\label{fig:dataset_area}
\end{figure}
Finally, we obtain CitySaliency with fixation annotation and pixel-wise instance-level salient object annotation. Following the division of Cityscapes, the corresponding 2,975 images are used for training and 500 images for testing. For the task of SOD, we set all salient objects to the same saliency score, \emph{i.e.}, 1, to generate binary masks, since we do not need to distinguish different instances. In this paper, we only use the binary masks of CitySaliency unless otherwise specified.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth, height=6cm]{figures/framework.png}
\vspace{-7mm}
\caption{The framework of the baseline. We first extract the general knowledge by common SOD methods, and then the extracted knowledge is transferred to the task-specific knowledge by an attention-based knowledge transfer module (AKT) to deal with the cross-domain knowledge difference. After that, the task-specific knowledge is decoded by a boundary-aware feature decoding module (BFD) in a progressive manner to detect task-specific salient objects.}
\label{fig:framework}
\end{figure*}
\subsection{Analysis}
Comparing CitySaliency with conventional datasets, we find that they usually involve different task-related targets, as displayed in Fig.~\ref{fig:dataset_motivation}. In addition to the perspective of tasks,
we provide comparisons from the perspective of scenes by presenting the average annotation maps (AAMs) of two conventional image-based SOD datasets (ECSSD~\cite{yan2013hierarchical} and DUT-OMRON~\cite{yang2013saliency}) and CitySaliency, as shown in Fig.~\ref{fig:dataset_aam}, which reflect the distribution of salient objects in each dataset. From Fig.~\ref{fig:dataset_aam}, we can see that conventional datasets are usually center-biased, while CitySaliency is more discrete. In addition, the histograms of the number and area of salient objects are displayed in Figs.~\ref{fig:dataset_number} and~\ref{fig:dataset_area}. We can observe that there are usually more salient objects and more area occupation in CitySaliency.
From these observations, we identify two main difficulties, \emph{i.e.}, the cross-domain knowledge difference and the task-specific scene gap (including a more discrete distribution, more salient objects and more area occupation), which make it difficult for conventional SOD methods to deal with task-aware SOD.
\section{A Baseline Model}
To address these difficulties (\emph{i.e.}, the cross-domain knowledge difference and the task-specific scene gap), we propose a baseline model for driving task-aware SOD via a knowledge transfer network. The framework of our approach is shown in Fig.~\ref{fig:framework}.
\subsection{General Knowledge Extraction}
To extract the general knowledge, we construct the general subnetwork taking ResNet-50~\cite{he2016deep} with a feature pyramid as the feature extractor, which is modified by removing the last two layers for the pixel-level prediction task. As shown in Fig.~\ref{fig:framework}, the feature extractor has five residual stages denoted as $\mathcal{E}_\mathcal{G}^{(i)} (\pi_\mathcal{G}^{(i)}), i \in \{1, \dots, 5\}$, where $\pi_\mathcal{G}^{(i)}$ is the set of parameters of $\mathcal{E}_\mathcal{G}^{(i)}$. In addition, to obtain larger feature maps and receptive fields, we set the stride of the last residual stage $\mathcal{E}_\mathcal{G}^{(5)}$ to 1 and the dilation rates of the last two stages to 2 and 4. The subnetwork is trained on the conventional datasets and learns the general knowledge (\emph{i.e.}, the outputs of $\mathcal{E}_\mathcal{G}^{(i)}$).
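A backbone of this kind can be sketched with torchvision's ResNet-50, whose replace\_stride\_with\_dilation option converts the strides of the last stages into dilations of 2 and 4; the five-stage grouping below is our own reading of $\mathcal{E}^{(1..5)}$, not the authors' implementation:
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

# ResNet-50 with the strides of the last two stages replaced by
# dilations (2 and 4), keeping an output stride of 8.
resnet = torchvision.models.resnet50(
    replace_stride_with_dilation=[False, True, True])
stages = nn.ModuleList([
    nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool),
    resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4])

x = torch.randn(1, 3, 512, 1024)
feats = []
for stage in stages:             # E^(1)..E^(5) outputs
    x = stage(x)
    feats.append(x)
print([tuple(f.shape[-2:]) for f in feats])
# [(128, 256), (128, 256), (64, 128), (64, 128), (64, 128)]
\end{verbatim}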
\subsection{Attention-based Knowledge Transfer}
To deal with the knowledge difference between the conventional datasets and CitySaliency, we propose an attention-based knowledge transfer module (AKT), which transfers the extracted general features in the general subnetwork to task-specific features in the task-specific subnetwork.
As shown in Fig.~\ref{fig:model_akt}, AKT is based on an attention mechanism to purify the general knowledge. For simplicity, we denote the input convolutional features as $\text{F}_\mathcal{G} \in \mathbb{R}^{H' \times W' \times C}$, which is the output of $\mathcal{E}_\mathcal{G}^{(i)}, i \in \{1, \dots, 5\}$. To purify the features, we compute an attention map as follows:
\begin{equation}
\text{A}_{{\mathcal{P}}} = \zeta_{s}(\text{F}_{{\mathcal{G}}}) \otimes \zeta_{c} (GAP(\text{F}_{{\mathcal{G}}})),
\label{eq:promotion_attention}
\end{equation}
where $\zeta_{s}(\cdot)$ and $\zeta_{c}(\cdot)$ denote the Softmax operation on the spatial and channel dimensions respectively, $GAP(\cdot)$ means global average pooling, and $\otimes$ and $\oplus$ represent element-wise product and summation. Based on the attention map $\text{A}_{{\mathcal{P}}}$ from the general knowledge, we use a residual module to learn the cross-domain knowledge shift from the general domain to the task domain, and the learned shift is added to the task-specific features $\text{F}_\mathcal{T} \in \mathbb{R}^{H' \times W' \times C}$ produced by the task-specific subnetwork. As shown in Fig.~\ref{fig:framework}, the task-specific subnetwork has the same feature extractor as the general subnetwork and its residual stages are denoted as $\mathcal{E}_\mathcal{S}^{(i)} (\pi_\mathcal{S}^{(i)}), i \in \{1, \dots, 5\}$, where $\pi_\mathcal{S}^{(i)}$ is the set of parameters of $\mathcal{E}_\mathcal{S}^{(i)}$.
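One plausible reading of Eq.~(\ref{eq:promotion_attention}) in code, with the spatial Softmax taken over the $H' \times W'$ positions of each channel and the channel Softmax over the pooled vector (a sketch, not the authors' implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def akt_attention(f_g):
    """Attention map: Softmax over spatial positions of each channel,
    multiplied (with broadcasting) by the channel Softmax of the
    globally average-pooled feature. f_g: (B, C, H, W)."""
    b, c, h, w = f_g.shape
    a_s = F.softmax(f_g.view(b, c, h * w), dim=-1).view(b, c, h, w)
    a_c = F.softmax(f_g.mean(dim=(2, 3)), dim=1).view(b, c, 1, 1)
    return a_s * a_c

a_p = akt_attention(torch.randn(2, 256, 32, 64))
print(a_p.shape)   # torch.Size([2, 256, 32, 64])
\end{verbatim}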
\begin{figure}[t]
\centering
\vspace{-5mm}
\includegraphics[width=1.00\columnwidth,height=2.2cm]{figures/model_akt.png}
\vspace{-8mm}
\caption{The structure of AKT. }
\label{fig:model_akt}
\end{figure}
For knowledge transfer, we place an AKT after the output of each $\mathcal{E}_\mathcal{G}^{(i)}$ to learn the knowledge difference, which is added to the output of $\mathcal{E}_\mathcal{S}^{(i)}$ to form the input of $\mathcal{E}_\mathcal{S}^{(i+1)}, \forall i=1, \dots, 4$. For convenience of presentation, we denote the AKT behind $\mathcal{E}_\mathcal{G}^{(i)}$ as $\phi_\mathcal{A}^{(i)}(\pi_\mathcal{A}^{(i)})$, where $\pi_\mathcal{A}^{(i)}$ is the set of parameters of $\phi_\mathcal{A}^{(i)}$. In this manner, the knowledge difference between the conventional datasets and CitySaliency can be learned and exploited for task-aware SOD, assisting in finding salient objects.
\subsection{Boundary-aware Feature Decoding}
To deal with the complex scenes in the task domain, we introduce an effective boundary-aware feature decoding module (BFD) to finely decode the features of the numerous salient objects and their structures. Inspired by~\cite{su2019selectivity}, we make the decoding process pay attention to the boundaries and interiors so as to finely decode the features from the feature extractor of the task-specific subnetwork. During decoding, the features from the feature extractor $\mathcal{E}_\mathcal{S}^{(i)} (\pi_\mathcal{S}^{(i)})$ and the features from the finer decoding layer are integrated to form the features $\text{F}_{\mathcal{D}} \in \mathbb{R}^{H'' \times W'' \times C'}$, which are the input of BFD.
\begin{figure}[t]
\centering
\vspace{-5mm}
\includegraphics[width=1.00\columnwidth,height=2.2cm]{figures/model_bfd_2.png}
\vspace{-8mm}
\caption{The structure of BFD.}
\label{fig:model_bfd}
\end{figure}
As shown in Fig.~\ref{fig:model_bfd}, BFD has three parallel branches, \emph{i.e.}, a boundary branch ($\phi_\mathcal{B}(\pi_\mathcal{B})$), a transition branch ($\phi_\mathcal{T}(\pi_\mathcal{T})$) and an interior branch ($\phi_\mathcal{I}(\pi_\mathcal{I})$), where $\pi_\mathcal{B}$, $\pi_\mathcal{T}$ and $\pi_\mathcal{I}$ are the sets of parameters of the three branches, respectively. In the boundary branch, two convolutional layers and one transposed convolutional layer are stacked. In the transition branch, an integrated successive dilation module from~\cite{su2019selectivity} is used to integrate context at various scales. The interior branch has a similar structure to the transition branch. Let $\textbf{F}_\mathcal{T}$ be the fused features, obtained by fusing the three branch outputs according to the boundary confidence map $\textbf{M}_\mathcal{B} = \sigma(\phi_\mathcal{B})$ and the interior confidence map $\textbf{M}_\mathcal{I} = \sigma(\phi_\mathcal{I})$. Finally, we obtain the prediction of salient objects as $ \textbf{M}_0 = \sigma(\textbf{F}_\mathcal{T})$.
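A hedged sketch of this three-branch structure is given below; the dilation rates and the exact confidence-weighted fusion rule are illustrative assumptions, since the text specifies only the branch roles and the use of $\textbf{M}_\mathcal{B}$ and $\textbf{M}_\mathcal{I}$ in the fusion.
\begin{verbatim}
# A minimal sketch of BFD (PyTorch); the dilation rates and the fusion
# rule below are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout, dilation=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True))

class BFD(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # boundary branch: two convolutions plus one transposed convolution
        self.boundary = nn.Sequential(
            conv_block(channels, channels),
            conv_block(channels, channels),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1))
        # transition / interior branches: successive dilations
        self.transition = nn.Sequential(
            *[conv_block(channels, channels, d) for d in (1, 2, 4)],
            nn.Conv2d(channels, 1, 1))
        self.interior = nn.Sequential(
            *[conv_block(channels, channels, d) for d in (1, 2, 4)],
            nn.Conv2d(channels, 1, 1))

    def forward(self, f_d):
        phi_b = self.boundary(f_d)
        size = phi_b.shape[-2:]
        phi_t = F.interpolate(self.transition(f_d), size=size,
                              mode='bilinear', align_corners=False)
        phi_i = F.interpolate(self.interior(f_d), size=size,
                              mode='bilinear', align_corners=False)
        m_b, m_i = torch.sigmoid(phi_b), torch.sigmoid(phi_i)
        # assumed confidence-weighted fusion of the three branch outputs
        fused = m_b * phi_b + m_i * phi_i + (1 - m_b) * (1 - m_i) * phi_t
        return torch.sigmoid(fused), m_b, m_i    # M_0, M_B, M_I
\end{verbatim}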
We place a BFD after each decoding stage of the task-specific subnetwork, as shown in Fig.~\ref{fig:framework}. For convenience, we index the prediction and confidence maps of each stage as $\textbf{M}_0^{(i)}$, $\textbf{M}_\mathcal{B}^{(i)}$ and $\textbf{M}_\mathcal{I}^{(i)}, i \in \{1, \dots, 5\}$. Therefore, the overall learning objective can be formulated as:
\begin{equation}
\begin{split}
\min_{\mathbb{P}} \sum_{i =1}^5 L(\textbf{M}_0^{(i)}, G_0) + L(\textbf{M}^{(i)}_{\mathcal{B}}, G_\mathcal{B}) + L(\textbf{M}^{(i)}_{\mathcal{I}}, G_\mathcal{I}),
\end{split}
\label{eq:overall_loss}
\end{equation}
where $\mathbb{P}$ denotes the set of parameters $\{\pi^{(i)}_{\mathcal{A}}, \pi^{(i)}_{\mathcal{S}}, \pi^{(i)}_{\mathcal{B}}, \pi^{(i)}_{\mathcal{T}}, \pi^{(i)}_{\mathcal{I}}\}^5_{i=1}$ for convenience of presentation, $L(\cdot, \cdot)$ is the binary cross-entropy loss, and $G_0, G_{\mathcal{B}}, G_{\mathcal{I}}$ represent the ground-truth masks of salient objects, boundaries and interiors, respectively.
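The objective of Eq.~(\ref{eq:overall_loss}) translates directly into code; in the sketch below, the ground-truth masks are assumed to share the resolution of the predictions.
\begin{verbatim}
# Overall objective: per-stage BCE on saliency, boundary and interior maps.
import torch.nn.functional as F

def overall_loss(stage_outputs, g0, gb, gi):
    # stage_outputs: list of (M_0, M_B, M_I) tuples, one per decoding stage;
    # g0/gb/gi: ground-truth masks of objects, boundaries and interiors.
    loss = 0.0
    for m0, mb, mi in stage_outputs:
        loss = loss + F.binary_cross_entropy(m0, g0) \
                    + F.binary_cross_entropy(mb, gb) \
                    + F.binary_cross_entropy(mi, gi)
    return loss
\end{verbatim}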
\section{Experiments}
\subsection{Settings}
\subsubsection{Evaluation Metrics}
We choose mean absolute error (MAE), weighted F-measure score ($F^w_{\beta}$), F-measure score ($F_{\beta}$) and S-measure ($S_m$) to evaluate methods on CitySaliency.
\subsubsection{Training and Inference}
We use the stochastic gradient descent algorithm to train our network. The network is trained in two stages as follows. 1) We first train the general subnetwork. In the optimization process, the parameters of the feature extractor are initialized from a pre-trained ResNet-50 model and optimized with a learning rate of $1 \times 10^{-3}$, a weight decay of $5 \times 10^{-4}$ and a momentum of 0.9, while the learning rates of the remaining layers are set 10 times larger. The general subnetwork is trained on DUTS~\cite{wang2017stagewise} to learn general knowledge. Training images are resized to a resolution of $512 \times 256$ and augmented with horizontal flipping. The training process takes 50,000 iterations with a mini-batch size of 4. 2) We then fix the general subnetwork and train the rest of the network (including AKT and the task-specific subnetwork) with the same settings as the general subnetwork, using Eq.~(\ref{eq:overall_loss}) as the optimization objective. We use CitySaliency to train the network; this stage takes 200,000 iterations. During inference, an image is directly fed into the network to produce the saliency map at the output of the first stage of the task-specific subnetwork.
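For reference, the two-group learning-rate rule above can be set up as follows; the name-based split between backbone and remaining parameters is hypothetical and depends on the actual module naming.
\begin{verbatim}
# Two-group SGD setup; the name-based parameter split is hypothetical.
import torch

def make_optimizer(model, backbone_lr=1e-3):
    backbone, rest = [], []
    for name, p in model.named_parameters():
        (backbone if name.startswith('stages') else rest).append(p)
    return torch.optim.SGD(
        [{'params': backbone, 'lr': backbone_lr},       # feature extractor
         {'params': rest, 'lr': 10 * backbone_lr}],     # remaining layers
        momentum=0.9, weight_decay=5e-4)
\end{verbatim}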
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, height=2.8cm]{figures/result_before.png}
\vspace{-7mm}
\caption{Examples of the state-of-the-art methods on CitySaliency without fine-tuning. GT means ground truth.}
\label{fig:result_before}
\end{figure}
\subsection{Model Benchmarking on CitySaliency}
To analyze the proposed dataset, we first benchmark the performance of 12 state-of-the-art models in Tab.~\ref{tab:performance_before} before fine-tuning on the CitySaliency dataset. Some examples are shown in Fig.~\ref{fig:result_before}. From the quantitative and qualitative evaluation, we can find that it is difficult for these models to locate the correct salient objects and to recover their fine structures.
Beyond the comparisons without fine-tuning, we fine-tune these models and the proposed method on CitySaliency. The performance comparisons are listed in Tab.~\ref{tab:performance_before} and shown in Fig.~\ref{fig:result_after}. We can find that the performance of all these methods improves, which validates the knowledge difference between conventional datasets and CitySaliency. Moreover, our method consistently outperforms the other state-of-the-art methods on all four metrics, even though dense CRF is used in R3Net and DSS. We can also observe that the proposed method yields better localization and finer structures of salient objects.
From these experiments, we conclude that the task-specific CitySaliency is a challenging dataset, and that the proposed model with knowledge transfer and feature decoding is useful for solving task-aware SOD.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/result_after.png}
\vspace{-7mm}
\caption{Representative examples of the state-of-the-art methods and our approach after being fine-tuned.}
\label{fig:result_after}
\end{figure}
\begin{table}[t]
\centering
\vspace{-4.5mm}
\small
\caption{Performance of 13 state-of-the-art models before and after fine-tuning on CitySaliency. The best two results are in \textbf{\color{red}{red}} and \textbf{\color{blue}{blue}}. ``$^\dagger$'' means utilizing dense CRF~\cite{hou2019deeply}.}
\setlength{\tabcolsep}{0.5mm}{
\renewcommand\arraystretch{1.0}
\begin{tabular}{c | c c c c | c c c c}
\hline
& \multicolumn{4}{c|}{Before} & \multicolumn{4}{c}{After} \\
\hline
& MAE
& $F^w_\beta$
& $F_\beta$
& $S_m$
& MAE
& $F^w_\beta$
& $F_\beta$
& $S_m$
\\
\hline
UCF\cite{zhang2017learning} & 0.428 & \textbf{\color{red}{0.208}} & 0.291 & \textbf{\color{red}{0.361}} & 0.344 & 0.468 & 0.515 & 0.583 \\
NLDF\cite{luo2017non} & 0.369 & 0.084 & 0.226 & 0.332 & 0.172 & 0.676 & 0.749 & 0.744 \\
Amulet\cite{zhang2017amulet} & 0.416 & \textbf{\color{blue}{0.175}} & 0.274 & 0.333 & 0.170 & 0.671 & 0.766 & 0.737 \\
FSN\cite{chen2017look} & 0.365 & 0.085 & 0.230 &0.339 & 0.146 & 0.725 & 0.780 & 0.755 \\
SRM\cite{wang2017stagewise} & \textbf{\color{red}{0.358}} & 0.099 & \textbf{\color{blue}{0.299}} & \textbf{\color{blue}{0.349}} & 0.171 & 0.639 & 0.759 & 0.721 \\
RAS\cite{chen2018eccv} & 0.380 & 0.075 & 0.174 &0.314 & 0.147 & 0.716 & 0.797 & \textbf{\color{blue}{0.771}} \\
PiCANet\cite{liu2018picanet} & 0.372 & 0.112 & 0.259 & 0.340 & 0.189 & 0.656 & 0.768 & 0.749 \\
R3Net$^\dagger$\cite{deng2018r3net} & 0.392 & 0.074 & 0.153 &0.292 & \textbf{\color{blue}{0.140}} & \textbf{\color{blue}{0.741}} & \textbf{\color{blue}{0.805}} & 0.760 \\
DGRL\cite{wang2018detect} & \textbf{\color{blue}{0.360}} & 0.086 & \textbf{\color{red}{0.401}} & 0.342 & 0.146 & 0.701 & 0.725 & 0.725\\
RFCN\cite{wang2018salient} & 0.364 & 0.090 & 0.229 &0.342 & 0.168 & 0.672 & 0.766 & 0.747 \\
DSS$^\dagger$\cite{hou2019deeply} & 0.386 & 0.080 & 0.185 &0.315 & 0.155 & 0.697 & 0.790 & 0.767 \\
BANet\cite{su2019selectivity} & 0.375 & 0.112 & 0.258 & 0.331 & 0.144 & 0.735 & \textbf{\color{blue}{0.805}} & \textbf{\color{blue}{0.771}} \\
\hline
\textbf{Ours} & - & - & - & - & \textbf{\color{red}{0.133}} & \textbf{\color{red}{0.751}} & \textbf{\color{red}{0.811}} & \textbf{\color{red}{0.785}} \\
\hline
\end{tabular}}
\label{tab:performance_before}
\end{table}
\subsection{Performance Analysis of the Baseline Model}
To investigate the effectiveness of the proposed domain-aware network, we conduct ablation experiments with four different models. The first setting is only the task-specific subnetwork without AKT and BFD, which is regarded as ``Baseline''. To explore the effectiveness of AKT, we add the second (named ``Baseline + PT'', pre-trained on DUTS-TR and then fine-tuned on CitySaliency) and third (``Baseline + AKT'', with AKT) models. In addition, we add a fourth model equipping ``Baseline'' with BFD, named ``Baseline + BFD''. We also list the proposed full model as ``\textbf{Ours}''.
The comparisons of the above models are listed in Tab.~\ref{tab:performance_ablation}. By comparing the first three rows, we can observe that pre-training on conventional datasets is useful for task-aware SOD, and that AKT utilizes the general knowledge from conventional datasets more effectively. In addition, we can find that ``Baseline + BFD'' also achieves an obvious improvement over ``Baseline'', which indicates the effectiveness of BFD. Moreover, the combination of the attention-based knowledge transfer and boundary-aware feature decoding achieves an even better performance, which verifies the usefulness of the proposed baseline model. In conclusion, this model verifies the feasibility of solving driving task-aware SOD and can push forward future research.
\begin{table}[t]
\centering
\vspace{-6mm}
\small
\caption{Performance of ablation experiments.}
\setlength{\tabcolsep}{3mm}{
\renewcommand\arraystretch{1.0}
\begin{tabular}{c | c c c c}
\hline
& MAE $\downarrow$
& $F^w_\beta$ $\uparrow$
& $F_\beta$ $\uparrow$
& $S_m$ $\uparrow$
\\
\hline
Baseline & 0.149 & 0.721 & 0.796 & 0.770 \\
Baseline + PT & 0.143 & 0.726 & 0.797 & 0.772 \\
Baseline + AKT & 0.141 & 0.735 & 0.810 & 0.782 \\
Baseline + BFD & 0.140 & 0.737 & 0.810 & 0.780 \\
\hline
\textbf{Ours} & \textbf{0.133} & \textbf{0.751} & \textbf{0.811} & \textbf{0.785} \\
\hline
\end{tabular}}
\label{tab:performance_ablation}
\end{table}
\section{Conclusion}
In this paper, we construct a new driving task-specific salient object detection (SOD) dataset in urban driving scenes to explore SOD in special tasks. By analyzing this dataset, we identify two main difficulties, which hinder the development of task-aware SOD. Motivated by these difficulties, a baseline model based on a knowledge transfer network is proposed. This model is organized in a progressive manner, utilizing the attention-based knowledge transfer module and the boundary-aware feature decoding module to deal with the identified difficulties. In addition, we provide a comprehensive benchmark of the state-of-the-art methods and our baseline model on the proposed dataset, which shows the challenges of the task-specific dataset and validates the effectiveness of the proposed model. This work focuses on the salient appearance of objects in the driving scene rather than on dynamic attribute analysis, which will be explored further with the help of time-domain signals in the future.
\section{Acknowledgement}
This work is partially supported by the National Natural Science Foundation of China under Grant 61922006 and the Baidu academic collaboration program.
\bibliographystyle{IEEEbib}
Discrete-time random walks employing a quantum coin originated in 1993 with the publication of \cite{ADZ93}. Since then, inspired by the promise of applications to the development of super-fast algorithms, especially after the publication of \cite{ NV00, AAKV01, ABNVW01} in the years 2000 and 2001, this field of research has flourished immensely.
A natural spatial setting for the evolution of a discrete-time random walk is an infinite linear lattice. In the simplest classical example, the walker starts at the origin and subsequently, depending on the outcome (heads or tails) of the toss of a fair coin, strolls one step (spatial unit) either to the right or to the left. In the quantum context, the tossing of the coin is modeled by the action on the overall state function of a ``coin operator'' while the movement of the walker is modeled by the action of a ``shift operator''. The observable aspects of the system are captured by the eigenvalues of these operators. The characteristically quantum-mechanical phenomenon of superposition of states gives rise to spatial probability distributions quite unlike those observed in classical random walks. Typically, sharp spikes of high probability density are observed at specific locations and troughs of near zero probability elsewhere.
A number of cases have been studied in some depth. For instance, the case of one particle and one coin is treated in \cite{NV00, ABNVW01, K02, K05, GJS04}; two entangled particles and one coin in \cite{OPSB06}; one particle and many coins in \cite{BCA03, MKK07}; and one particle and two entangled coins in \cite{V-ABBB05}. Very recently, Venegas-Andraca \textit{et al}. \cite{V-ABBB05} showed, by way of numerical simulations, that the walker tends to persist with high probability at the initial position, as evidenced by a very prominent ``peak'' or ``spike'' at the origin. The occurrence of spikes and similar phenomena (called localization) has also been reported in two-dimensional Grover walks \cite{MBSS02,TFMK03,IKK04, WKKK08}, in multi-state quantum walks on a circle \cite{IK05}, in one-dimensional quantum walks driven by many coins \cite{BCA03} and in the one-dimensional three-state quantum walk \cite{IKS05}.\\
{\indent}In this paper, we concentrate on the case of a discrete-time QRW on a linear lattice with two entangled coins, as proposed by Venegas-Andraca \textit{et al}. \cite{V-ABBB05}. In this framework, abbreviated ``2cQRW'', the controlling behavior of the two entangled coins is modeled by a certain $4\times 4$ matrix which is defined as the second-order tensor-power of a $2\times 2$ unitary matrix. As in \cite{IKS05}, we show that the occurrence of localized spikes at the origin and elsewhere reflects the degeneracy of an eigenvalue of the time evolution operator $U(k)$. An explicit formula for the limiting spatial probability density enables us to specify the height of the spike at the origin. For sufficiently large values of the position $x$, we find that the limiting probability density decreases quadratically. This is in sharp contrast to the result obtained by Inui \textit{et al}. for a three-state QRW \cite{IKS05}, where it was found that the limiting spatial probability density decreases exponentially for large $x$.
\section{ Formulation of Quantum random walks with entangled coins on the line}
\subsection{Quantum random walks on the line with two entangled coins}
We proceed to define the elements, as formulated in \cite{V-ABBB05}, of a random walk on a discrete linear lattice governed by a pair of entangled qubits. The definitions and corresponding notations are analogous to those for the single-coin (one-qubit) framework outlined in \cite{L08}. For brevity of exposition, as in \cite{V-ABBB05}, the two-coin, or more precisely, {\em two-qubit} framework is abbreviated by the name ``2cQRW".
Akin to the one-qubit case, the coin space of the 2cQRW framework is the Hilbert space $\mathcal{H}_{ec}$ (``$ec$" for entangled coin) spanned by the orthonormal basis $\{|j\rangle;j\in B_c\}$ where $B_c=\{00, 01, 10, 11\}$. The position space is the Hilbert space $\mathcal{H}_p$ spanned by the orthonormal basis $\{|x\rangle;x \in \mathbb{Z} \}.$ The ``overall" state space of the system is $\mathcal{H}= \mathcal{H}_{p} \otimes \mathcal{H}_{ec}$, in terms of which a general state of the system may be expressed by the formula:
\[\psi=\sum_{x\in\mathbb{Z}} \sum_{j\in B_c}\psi(x,j)|x\rangle\otimes|j\rangle.\]
The spatio-temporal progression of the 2cQRW is governed by an ``evolution operator" $U$, which is composed of a {\em coin operator} $A_{ec}$ and a {\em shift operator} $S$.
In the 2cQRW context, the coin operator is defined as the tensor product of two single-qubit operators
$$A_{ec}=A \otimes A,$$
where the unitary operator $A$ acts on a single-coin space (see \cite{L08}).
As in \cite{V-ABBB05}, the shift operator is given by:
\begin{eqnarray}
S=|00\rangle\langle 00|\otimes \sum_i|i+1\rangle \langle i|+|01\rangle \langle 01|\otimes \sum_i|i\rangle \langle i| \nonumber \\
+ |10\rangle\langle 10|\otimes \sum_i|i\rangle \langle i|+|11\rangle \langle 11|\otimes \sum_i|i-1\rangle \langle i|. \label{equso}
\end{eqnarray}
The journey of the particle (a.k.a. walker) along the line is driven by a stochastic sequence of iterations of $S$. At every time step of the walk, depending on the state of the coin, the walker strolls either one spatial unit to the right or to the left or stalls at that step. Note that the walker moves only when both coins reside in the $|00\rangle$ or $|11\rangle$ state. Otherwise, the walker stalls at that step.
Let $I$ denote the identity operator on $\mathcal{H}_p$. Then, in terms of $A_{ec}$ and $S$, the total evolution operator $U$ is given by
$$U = S(I\otimes A_{ec}).$$
Given $\psi_0 \in \mathcal{H}$, where $||\psi_{0}||=1$, the expression $\psi_t= U^t \psi_0$ is called the wave function for the particle at time $t$. The corresponding random walk with initial state $\psi_0$ is represented by the sequence $\{ \psi_t \}_0 ^\infty$.
Let $X$ denote the position operator on $\mathcal{H}_p$, defined by
$X|x\rangle=x|x\rangle$ and let $\psi_t =\sum_{x \in \mathbb{Z}}\sum_{j\in B_{c}}\psi_{t}(x, j)|x\rangle\otimes |j\rangle$ be the wave function for the particle at time $t$. Then the probability $p_t(x)$ of finding the particle at the position $x$ at time $t$ is given by the standard formula
$$p_t(x)=\sum_{j\in B_c}|\psi_t (x, j)|^2,$$
where $|\cdot|$ indicates the modulus of a complex number.
At each instant $t$, the eigenvalues of the operator $X_t\doteq {U^{\dagger}}^{t}XU^t$ equate to the possible values of the particle's position with corresponding probability $p_t(x)$.
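Before turning to the Fourier analysis, we remark that the dynamics defined above are straightforward to simulate directly. The following NumPy sketch evolves the amplitudes $\psi_t(x,j)$ for the Hadamard coin $H^{\otimes 2}$ and the Bell initial state (both used in the example of the next section) and reproduces the persistent spike at the origin.
\begin{verbatim}
# Direct simulation of the 2cQRW (NumPy); coin H (x) H, Bell initial state.
import numpy as np

t_max = 400
L = t_max + 1                                 # lattice sites x = -L..L
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
A_ec = np.kron(H, H)                          # coin operator A (x) A

psi = np.zeros((2 * L + 1, 4), dtype=complex) # psi[x + L, j], j ~ 00,01,10,11
psi[L, 0] = psi[L, 3] = 1 / np.sqrt(2)        # Bell state at the origin

for _ in range(t_max):
    psi = psi @ A_ec.T                        # coin toss
    psi[:, 0] = np.roll(psi[:, 0], 1)         # |00>: one step right
    psi[:, 3] = np.roll(psi[:, 3], -1)        # |11>: one step left
    # |01>, |10>: the walker stalls

p = (np.abs(psi) ** 2).sum(axis=1)            # p_t(x)
print(p[L])  # close to 3 - 2*sqrt(2) ~ 0.17157 (cf. 0.171242 in [V-ABBB05])
\end{verbatim}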
\subsection{Fourier transform formulation of the wave function for 2cQRW }
As in the single-coin setting treated in \cite{GJS04}, Fourier transform methods can be applied in the 2cQRW setting to obtain a useful formulation of the wave function. Let $\Psi_{t}^{ec}(x)\equiv [\psi_t (x, 1),\psi_t (x, 2),\psi_t (x, 3),\psi_t (x, 4)]^T$ represent the amplitude of the wave function, whose four components at position $x$ and time $t$ correspond respectively to the coin states 00, 01, 10, and 11. As usual, the superscript $T$ denotes the transpose operator. Assuming that the 2cQRW is launched from the origin, then the initial quantum state of the system is reflected by the components of $\Psi_{0}^{ec}(0)=[\psi_0(0,1),\psi_0(0,2),\psi_0(0,3), \psi_0(0,4)]^T\equiv [\alpha_1, \alpha_2,\alpha_3, \alpha_4]^T$, where $\sum_{j=1}^4|\alpha_j|^2=1.$
To begin, the spatial Fourier transform of $\Psi_{t}^{ec}(x)$ is defined by
$$ \widehat{\Psi_t^{ec}}(k)=\sum_{x\in \mathbb{Z}}\Psi_{t}^{ec}(x)e^{ikx}.$$
For instance, under this transformation, the initial amplitude is related to its Fourier dual by the formula:
\begin{eqnarray}
\widehat{\Psi_{0}^{ec}}(k)=\Psi_{0}^{ec}(0).
\end{eqnarray}
In general, the Fourier dual of the overall state space of the 2cQRW system is the Hilbert space $L^2(\mathbb{K})\otimes \mathcal{H}_{ec}$, consisting of $\mathbb{C}^4$-valued functions:
\begin{equation}
\phi(k) =\left[\begin{array}{c}
\phi_1(k)\\
\phi_2(k) \\
\phi_3(k) \\
\phi_4(k)
\end{array}\right],
\end{equation}
subject to the finiteness condition $$\Arrowvert \phi \Arrowvert^2=\Arrowvert \phi_1 \Arrowvert^2_{L^2}+\Arrowvert \phi_2 \Arrowvert^2_{L^2}+\Arrowvert \phi_3 \Arrowvert^2_{L^2}+\Arrowvert \phi_4 \Arrowvert^2_{L^2}<\infty.$$
Thus, given the initial state $\widehat{\Psi_{0}^{ec}}(k)$, the Fourier dual of the wave function of the 2cQRW system is expressed by
\begin{equation}
\widehat{\Psi_t^{ec}}(k)=U_{ec}(k)^t\widehat{\Psi_{0}^{ec}}(k), \label{eqnwvec}
\end{equation}
where the total evolution operator $U_{ec}(k)$ on $L^2(\mathbb{K})\otimes \mathcal{H}_{ec}$ is given by
\begin{equation}
U_{ec}(k)= \left[\begin{array}{cccc}
e^{i k} & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & e^{-i k}
\end{array}\right] A_{ec}\,.\label{eqnU_{ec}}
\end{equation}
Note that $U_{ec}(k)=U(k/2)\otimes U(k/2)$, where
\begin{equation}
U(k/2)= \left[\begin{array}{cc}
e^{i k/2} & 0\\
0 & e^{-i k/2}
\end{array}\right] A\,.\label{eqnU(k)}
\end{equation}
To wrap up this introductory section, we collect some basic facts about the eigenvalues and eigenvectors of the operators discussed above.
Since the matrix $A$ is unitary, we may assume, without loss of generality, that its determinant is $|A|=e^{i\theta}$, where $\theta$ is a real constant. Similarly, for the unitary matrix $U(k/2)$, we may assume that its eigenvalues are $\lambda_{1}(k)=e^{i\eta(k)}$ and $\lambda_{2}(k)=e^{i(\theta-\eta(k))}$, where $\eta$ is a real-valued differentiable function of $k$. Let $v_1(k)$ and $v_2(k)$ denote the corresponding unit eigenvectors. Since $U_{ec}(k)=U(k/2)\otimes U(k/2)$, it follows that the eigenvalues of $U_{ec}(k)$ are:\vspace{.1in}\\
\hspace*{.3in}
$\left\{
\begin{array}{l}
\Lambda_1(k)=[\lambda_1(k)]^2=e^{i\varphi(k)}$ where $\varphi(k)=2\eta(k)\\
\Lambda_2(k)=\lambda_1 (k)\lambda_2(k)=|A|=e^{i\theta}\\
\Lambda_3(k)=\lambda_1 (k)\lambda_2(k)=|A|=e^{i\theta}\\
\Lambda_4(k)=[\lambda_2(k)]^2=e^{i(2\theta-\varphi(k))}.
\end{array}\right.$
Correspondingly, the unit eigenvectors are:\vspace{.1in}\\
\hspace*{.3in}$\left\{
\begin{array}{l}
V_1(k)=v_1(k)\otimes v_{1}(k)\\
V_2(k)=v_1(k)\otimes v_{2}(k)\\
V_3(k)=v_2(k)\otimes v_{1}(k)\\
V_4(k)=v_2(k)\otimes v_{2}(k).
\end{array}\right.$
Finally, in terms of eigenvalues and eigenvectors, the wave function $ \widehat{\Psi_t^{ec}}(k)$ may be expanded as follows:
\begin{eqnarray}
\widehat{\Psi_t^{ec}}(k)&=& U_{ec}^t(k)\widehat{\Psi_0^{ec}}(k)\nonumber\\
\mbox{}&=& e^{it\varphi(k)}\langle V_1(k),\widehat{\Psi_0^{ec}}(k)\rangle V_1(k)\nonumber\\
\mbox{}&\mbox{}&+e^{it\theta}\langle V_2(k),\widehat{\Psi_0^{ec}}(k)\rangle V_2(k)\nonumber\\
\mbox{}&\mbox{}&+e^{it\theta}\langle V_3(k),\widehat{\Psi_0^{ec}}(k)\rangle V_3(k)\nonumber\\
\mbox{}&\mbox{}&+e^{it(2\theta-\varphi(k))}\langle V_4(k),\widehat{\Psi_0^{ec}}(k) \rangle V_4(k).\label{eqnwaec}
\end{eqnarray}
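These spectral properties are easily verified numerically; the short check below confirms, for the Hadamard coin of the example treated in the next section, that exactly two eigenvalues of $U_{ec}(k)$ equal $e^{i\theta}=-1$ for every $k$. It is this $k$-independent degeneracy that underlies the localization phenomena discussed below.
\begin{verbatim}
# Numerical check (NumPy) of the k-independent degenerate eigenvalue.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # det A = -1, i.e. theta = pi

def U_half(k):
    return np.diag([np.exp(1j * k / 2), np.exp(-1j * k / 2)]) @ H

for k in np.linspace(0.1, 6.2, 7):
    w = np.linalg.eigvals(np.kron(U_half(k), U_half(k)))
    assert np.isclose(w, -1).sum() == 2       # Lambda_2 = Lambda_3 = -1
\end{verbatim}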
\section{Spatial probability distribution of 2cQRW}
For simplicity of notation, occasionally the explicit dependency on the parameter $k$ of the quantities $\{V_{j}\}_{j=1}^4$, $\widehat{\Psi_t^{ec}}$ and $\widehat{\Psi_0^{ec}}$ will be suppressed. By inverse Fourier transformation, the amplitude of the wave function of the particle at the position $x$ and the time $t$ is given by
\begin{eqnarray}
\Psi_t^{ec}(x) &=& [\psi_t(x,1),\psi_t(x,2),\psi_t(x,3), \psi_t(x,4)]^T \nonumber \\
\mbox{} &=& \int_0^{2\pi}e^{-ixk}\widehat{\Psi_t^{ec}}\frac{dk}{2\pi}\nonumber\\
\mbox{} &=& I_1+I_2+I_3+I_4, \quad \label{eqnprodis1}
\end{eqnarray}
where\\
\hspace*{.3in}$\left\{
\begin{array}{l}
I_1 = \int_0^{2\pi}e^{-ixk}e^{it\varphi(k)}\langle V_1,\widehat{\Psi_0^{ec}}\rangle V_1\,\frac{dk}{2\pi}\vspace{5pt}\\
I_2 = \int_0^{2\pi}e^{-ixk}e^{it\theta}\langle V_2,\widehat{\Psi_0^{ec}}\rangle V_2 \,\frac{dk}{2\pi}\vspace{5pt}\\
I_3 = \int_0^{2\pi}e^{-ixk}e^{it\theta}\langle V_3,\widehat{\Psi_0^{ec}}\rangle V_3\,\frac{dk}{2\pi}\vspace{5pt}\\
I_4 = \int_0^{2\pi}e^{-ixk}e^{it(2\theta-\varphi(k))}\langle V_4,\widehat{\Psi_0^{ec}}\rangle V_4\,\frac{dk}{2\pi}\vspace{5pt}.
\end{array}\right.$\\
In view of \cite{Eedelyi56}, the asymptotic behavior of the four Fourier integrals representing the amplitude components $I_{1}, I_{2}, I_{3} \,\,\mbox{and}\,\, I_{4}$ of $\Psi_t^{ec}(x)$ can be described by the following lemma:\vspace{5pt}
{Lemma 1.}\,\,\, Let $g(k)$ be an $N$-fold continuously differentiable function on the interval $a \leq k \leq b$, with $a=0$ and $b=2\pi$. Let $g^{(n)}=d^ng/dk^n$, $A_N(x)=\sum_{n=0}^{N-1}i^{n-1}g^{(n)}(a)(-x)^{-n-1}$ and $B_N(x)=\sum_{n=0}^{N-1}i^{n-1}g^{(n)}(b)(-x)^{-n-1}$. Then, as $|x|\rightarrow \infty$, we have
\begin{eqnarray}
\int_{0}^{2\pi}e^{-ixk}g(k)dk=B_N(x)-A_N(x)+o(x^{-N}). \label{fourierinte1}
\end{eqnarray}
The following theorem justifies the prominent spike at the origin observed by Venegas-Andraca \textit{et al} in \cite{V-ABBB05}, as well as other interesting phenomena.\vspace{5pt}
{Theorem 1.} Let $p_t(x)\!=\!\|\Psi_t^{ec}(x)\|^2\!=\!\sum_{j\in B_c}|\psi_t (x, j)|^2$, where $\|\cdot\|$ denotes the vector norm. Let $p(x)=\lim_{t\rightarrow \infty} p_t(x)$. If the 2cQRW is launched from the origin with initial state $\widehat{\Psi_0^{ec}}$ and governed by the evolution operator $U_{ec}(k)$, then the limiting probability of finding the walker at $|x\rangle$ is $$p(x)=\|\int_0^{2\pi}\{e^{-ixk}\sum_{j=2}^{3}\langle V_j(k),\widehat{\Psi_0^{ec}}(k)\rangle V_j(k)\}\frac{dk}{2\pi}\|^2.$$
In particular
\begin{enumerate}
\item[(i)] \, when $x=0$, we have:
\begin{eqnarray}
p(0)=\|\int_0^{2\pi}\{\sum_{j=2}^{3}\langle V_j(k),\widehat{\Psi_0^{ec}}(k)\rangle V_j(k)\}\frac{dk}{2\pi}\|^2. \label{spike-0}
\end{eqnarray}
\item [(ii)]\,as $|x|\rightarrow \infty$, we have:
\begin{eqnarray}
p(x)&\sim & \frac{1}{x^2}\|\sum_{j=2}^3[\langle V_j(0), \widehat{\Psi_0^{ec}}(0)\rangle V_j(0)- \nonumber \\
\mbox{}&\mbox{}&\langle V_j(2\pi), \widehat{\Psi_0^{ec}}(2\pi)\rangle V_j(2\pi)]\|^2
+o(x^{-2}). \label{prob-x}
\end{eqnarray}
\end{enumerate}
Proof. See Appendix A.\vspace{5pt}
The height of the spike at the origin is quantified precisely by Eq.(\ref{spike-0}). For instance, if we choose, as the initial coin state, the Bell state $|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle +|11\rangle\right)$, then the probability of finding the particle at the origin converges to $p(0)= 3-2\sqrt{2}\thickapprox 0.171573$ (justification below). This agrees quite accurately with the graphical representation in FIG. 1 (see below) and the observations reported in \cite{V-ABBB05}. No such spikes are evident in QRWs driven by single coins.
The formula Eq.(\ref{prob-x}) for $p(x)$ implies that the limiting probability of finding the particle at any fixed location \underline{does} \underline{not} vanish. This is in sharp contrast to the behavior of single-coin QRWs \cite{NV00, ABNVW01, K02, K05}, for which the analogous limiting probability converges everywhere to zero.
In \cite{IKS05}, which treats the case of a one-dimensional three-state quantum walk, there too the behavior of $p(x)$ is observed to be everywhere non-vanishing. However, one important difference distinguishes that case from the present case. Whereas in \cite{IKS05} the rate of decay of $p(x)$ as $|x|\rightarrow \infty$ is {\em exponential}, in the present case the rate of decay of $p(x)$ as $|x|\rightarrow \infty$ is {\em quadratic}.
By the proof of Theorem 1 (see Appendix A), the everywhere non-vanishing behavior of $p(x)$ as well as the occurrence of the spike at the origin both are due to the independence on $k$ of the degenerate eigenvalues of the evolution operator $U_{ec}$.
We remark that the sequence $\{p(x)\}_{x=0}^{\infty}$ of limiting probabilities does not itself amount to a probability distribution. In fact the sum $\sum_{x\in \mathbb{Z}}p(x)$ usually is less than unity. The following theorem evaluates this sum.\vspace{5pt}
{Theorem 2.}\,\, If the 2cQRW is launched from the origin with initial state $\widehat{\Psi_0^{ec}}$ and governed by evolution operator $U_{ec}(k)$, then
\begin{eqnarray}
\sum_{x\in \mathbb{Z}}p(x)
=\int_{0}^{2\pi}\sum_{j=2}^3|\langle V_j(k),\widehat{\Psi_0^{ec}}(k)\rangle|^2 \frac{dk}{2\pi}. \label{localization}
\end{eqnarray}
Proof. See Appendix B.\vspace{5pt}
To illustrate the above theorem, suppose the 2cQRW is launched from the origin with initial state $|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle +|11\rangle\right)$, governed by the coin operator $H^{\otimes 2}$, where $H$ is the $2\times 2$ Hadamard operator. Then, by (\ref{localization}), $\sum_{x\in \mathbb{Z}}p(x)=\sqrt{2}-1$. This example is discussed further in the sequel.
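This value is easily confirmed numerically: the sketch below evaluates the integral of Eq.~(\ref{localization}) by projecting the initial state, for each $k$, onto the degenerate eigenspace of $U_{ec}(k)$.
\begin{verbatim}
# Numerical evaluation (NumPy) of the sum in Theorem 2, Hadamard case.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi0 = np.array([1, 0, 0, 1]) / np.sqrt(2)        # Bell state |Phi+>

def integrand(k):
    U2 = np.diag([np.exp(1j * k / 2), np.exp(-1j * k / 2)]) @ H
    w, V = np.linalg.eig(np.kron(U2, U2))
    Q, _ = np.linalg.qr(V[:, np.isclose(w, -1)])  # orthonormal basis of
                                                  # the degenerate eigenspace
    return np.sum(np.abs(Q.conj().T @ psi0) ** 2)

ks = np.linspace(0, 2 * np.pi, 2001)
print(np.trapz([integrand(k) for k in ks], ks) / (2 * np.pi))
# prints ~0.4142, i.e. sqrt(2) - 1
\end{verbatim}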
The third and final theorem of this article provides detailed information on the local behavior of $p_t(x)$, the probability of finding the particle at time $t$ and position $x$. In preparation for this theorem, we digress for a moment to review some prerequisites.
As in Eq.(\ref{eqnU_{ec}}), let $\varphi(k)$ denote the {\em phase} of the principal eigenvalue $\Lambda_1=e^{i\varphi(k)}$ associated with the evolution operator $U_{ec}(k)$. To ensure sufficient control over the analytic properties of $\varphi$, our choice of the coin operator $A_{ec}$ must be restricted accordingly. Specifically, $\varphi$ must possess no {\em stationary points} (see \cite{NV00, Eedelyi56}) of order greater than 1. Moreover, the maximum and minimum values of the function $\varphi^{\prime}(k)$ over $[0,2\pi]$ must coincide respectively with $|\varphi^{\prime}(k_0)|$ and $-|\varphi^{\prime}(k_0)|$, where $k_0$ is a zero of $\varphi^{\prime\prime}(k)$ of multiplicity one.
Provided that $\beta$ is neither 0 nor $\frac{\pi}{2}$, the aforementioned assumptions are valid at least for the important case of the coin operator $A(\beta)^{\otimes 2}$ treated in \cite{L08}. If $\beta$ is 0 or $\frac{\pi}{2}$, then the resulting QRW is trivial.
In the special case when $\beta=\frac{\pi}{4}$, our coin operator $A(\frac{\pi}{4})^{\otimes 2}$ coincides with the $4\times 4$ Hadamard-based matrix $H^{\otimes 2}$. This example is fully elaborated below.\vspace{5pt}
{Theorem 3.}\,\,\, Suppose the 2cQRW is launched from the origin with initial state $\widehat{\Psi_0^{ec}}(k)$ and governed by evolution operator $U_{ec}(k)$ based on the coin operator $A_{ec}=A(\beta)^{\otimes 2}$. Let $\varphi(k)$ denote the phase function of the principal non-degenerate eigenvalue $\Lambda_1=e^{i\varphi(k)}$ of $U_{ec}(k)$ and let $M=|\varphi^{\prime}(k_0)|$, where $k_0$ is a zero of $\varphi^{\prime\prime}(k)$.
\begin{enumerate}
\item[(i)] When $x=0$, we have
\begin{eqnarray}
p_t(0)&=& p(0)+O(t^{-\frac{1}{2}}) \label{eqnprodis}.
\end{eqnarray}
\item[(ii)] If $x$ is an integer within a small fixed distance $\delta$ of $\pm{tM}$ then $p_t(x)=O(t^{-\frac{2}{3}})$.
\item[(iii)] For a fixed $\epsilon > 0$, if $t\geq {x} \geq t(M+\epsilon)$ or $t\geq -{x} \geq t(M+\epsilon)$, then $p_t(x)=O(t^{-2})$.
\item[(iv)] For a fixed $\epsilon > 0$, if $t^{\frac{1}{2}}\leq x \leq t(M-\epsilon)$ or $t^{\frac{1}{2}}\leq -x \leq t(M-\epsilon)$, then $p_t(x)=O(t^{-1})$.
\item[(v)] If $|x|<t^{\frac{1}{2}}$, and $|x|\approx 0$, then
\begin{eqnarray}
p_t(x)&=&p(x)+O(t^{-\frac{1}{2}})\label{eqnprodis2}.
\end{eqnarray}
\item[(vi)] If $|x|<t^{\frac{1}{2}}$, and $|x|\approx t^{\frac{1}{2}}$, then $p_t(x)=O(t^{-1}).$
\end{enumerate}\vspace{5pt}
According to this theorem, if $t$ is fixed and sufficiently large, the distribution of $p_t(x)$ is characterized by a spike at the origin and two minor spikes located at $\pm t\varphi^{\prime}(k_{0})$. As $t$ increases, the two minor spikes shrink in height and drift off to infinity in both directions (see FIG. 1), while the spike at the origin persists and settles to a specific height given by Eq.(\ref{spike-0}). These inferences confirm the observations based on numerical simulations reported in \cite{V-ABBB05}.\vspace{5pt}
{Example.}\,\,\, For the duration of this example, let us agree to adopt the convention whereby any row vector, when enclosed in round parentheses, is identified with the corresponding column vector enclosed in square brackets. For instance:
\[(a_{1},a_{2},a_{3})=\left[\begin{array}{c}a_{1}\\a_{2}\\a_{3}\end{array}\right]\]
Suppose the 2cQRW is launched from the origin with initial state $|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle +|11\rangle\right)$ and governed by the coin operator
$$H^{\otimes 2}=\left[\frac{1}{\sqrt{2}}\left(|0\rangle \langle 0|+|0\rangle \langle 1|+|1\rangle \langle 0|-|1\rangle \langle 1|\right)\right]^{\otimes 2},$$
which equals the operator $A_{ec}=A(\beta)^{\otimes 2}$ in Theorem 3 when $\beta=\frac{\pi}{4}$.
The total evolution operator is given by $U_{ec}(k)=U(k/2) \otimes U(k/2)$, where
\begin{equation}
U(k/2)= \frac{1}{\sqrt{2}}\left[\begin{array}{cc}
e^{i k/2} & e^{i k/2}\\
e^{-i k/2} &-e^{-i k/2}
\end{array}\right]. \label{eqnU(k/2)}
\end{equation}
Let $\varphi(k)=2\sin^{-1}\left(\frac{\sin(k/2)}{\sqrt{2}}\right)$. Then the two eigenvalues of $U(k/2)$ are
\[\begin{array}{l}
\lambda_1(k)=e^{i\varphi(k)/2}\vspace{7pt}\\
\lambda_2(k)=-e^{-i\varphi(k)/2}.
\end{array}\]\vspace{5pt}
and the four eigenvalues of $U_{ec}(k)=U(k/2) \otimes U(k/2)$ are:\vspace{5pt}
\hspace*{.3in}
$\left\{
\begin{array}{l}
\Lambda_1(k)=\lambda_1(k)^2=e^{i\varphi(k)}\\
\Lambda_2(k)=\lambda_1 (k)\lambda_2(k)=-1\\
\Lambda_3(k)=\lambda_1 (k)\lambda_2(k)=-1\\
\Lambda_4(k)=\lambda_2(k)^2=e^{-i\varphi(k)}.
\end{array}\right.$\vspace{5pt}
To apply Theorem 3, observe that
\begin{eqnarray*}
\varphi^{\prime}(k)=\frac{\cos(k/2)}{\sqrt{2-\sin^2(k/2)}},\\
\varphi^{\prime \prime}(k)=\frac{-\sin(k/2)}{2[2-\sin^2(k/2)]^{\frac{3}{2}}},
\end{eqnarray*}
so that the zeros of $\varphi^{\prime \prime}(k)$, both of multiplicity one, occur at $0$ and $2\pi$. Thus, the maximum value $M=\frac{\sqrt{2}}{2}$ of $\varphi^{\prime}(k)$ is attained at $k=0$ while its minimum value $-\frac{\sqrt{2}}{2}$ is attained at $k=2\pi$.
For economy of notation, let
\[\begin{array}{l}
\gamma_{1}(k)=-\cos\frac{k}{2}+\sqrt{1+\cos^2\frac{k}{2}}\vspace{7pt}\\
\gamma_{2}(k)=-\cos\frac{k}{2}-\sqrt{1+\cos^2\frac{k}{2}},
\end{array}\]
and
\[\begin{array}{l}
N_{1}(k)=2-2\gamma_{1}(k)\cos\frac{k}{2}\vspace{7pt}\\
N_{2}(k)=2-2\gamma_{2}(k)\cos\frac{k}{2},
\end{array}\]
in terms of which, the two unit eigenvectors of $U(k/2)$ corresponding to the eigenvalues $\lambda_1(k)$ and $\lambda_2(k)$ are:
\[\begin{array}{l}
v_{1}(k)=\frac{1}{\sqrt{N_1}}\left(e^{ik/2},\gamma_{1}(k)\right)\vspace{5pt}\\
v_{2}(k)=\frac{1}{\sqrt{N_2}}\left(e^{ik/2},\gamma_{2}(k)\right),
\end{array}\]
and the four unit eigenvectors of $U_{ec}(k)$ are:\vspace{5pt}\\
\hspace*{10pt}$\left\{
\begin{array}{l}
V_1(k)=\frac{1}{N_1}\left(e^{ik}, e^{ik/2}\gamma_{1}(k), e^{ik/2}\gamma_{1}(k), \gamma_{1}(k)^{2}\right)\vspace{5pt}\\
V_2(k)=\frac{1}{\sqrt{N_1N_2}}\left( e^{ik}, e^{ik/2}\gamma_{2}(k), e^{ik/2}\gamma_{1}(k),-1 \right)\vspace{5pt}\\
V_3(k)=\frac{1}{\sqrt{N_1N_2}}\left(e^{ik}, e^{ik/2}\gamma_{1}(k), e^{ik/2}\gamma_{2}(k), -1 \right)\vspace{5pt}\\
V_4(k)=\frac{1}{N_2}\left(e^{ik}, e^{ik/2}\gamma_{2}(k), e^{ik/2}\gamma_{2}(k), \gamma_{2}(k)^{2}\right)\vspace{5pt}.
\end{array}\right.$
By Eq.(\ref{eqnwaec}), with $\widehat{\Psi _0^{ec}}(k)=\left(\frac{\sqrt{2}}{2},0,0,\frac{\sqrt{2}}{2}\right)$, the wave function $\widehat{\Psi_t^{ec}}(k)=U_{ec}^t(k)\widehat{\Psi_0^{ec}}(k)$ may be expressed as a sum of four components
\begin{equation}\widehat{\Psi_t^{ec}}(k)=H_{1}(k,t)+H_{2}(k,t)+H_{3}(k,t)+H_{4}(k,t),\nonumber\end{equation}
where:\vspace{5pt}\\
\hspace*{10pt}$\left\{
\begin{array}{l}
H_{1}(k,t)=\frac{\sqrt{2}e^{it\varphi(k)}}{2N_1}\left(e^{-ik}+\gamma_{1}(k)^{2}\right)V_{1}(k)\vspace{5pt}\\
H_{2}(k,t)=\frac{\sqrt{2}e^{i\pi t}}{2\sqrt{N_1N_2}}\left( e^{-ik}-1 \right)V_{2}(k)\vspace{5pt}\\
H_{3}(k,t)=\frac{\sqrt{2}e^{i\pi t}}{2\sqrt{N_1N_2}}\left(e^{-ik} -1 \right)V_{3}(k)\vspace{5pt}\\
H_{4}(k,t)=\frac{\sqrt{2}e^{it\varphi(k)}}{2N_2}\left(e^{-ik}+\gamma_{2}(k)^{2}\right)V_{4}(k).\vspace{5pt}
\end{array}\right.$
Applying inverse Fourier transformation, as in Eq.(\ref{eqnprodis1}), the four summands of
\begin{equation}\Psi_t^{ec}(x)=I_{1}+I_{2}+I_{3}+I_{4}\label{4Is}\end{equation}
are:\vspace{5pt}
\hspace*{3pt}$\left\{
\begin{array}{l}
I_1 = \int_0^{2\pi}e^{-ixk}\frac{\sqrt{2}e^{it\varphi(k)}}{2N_1}\left(e^{-ik}+\gamma_{1}(k)^{2}\right)V_{1}(k)\,\frac{dk}{2\pi}\vspace{5pt}\\
I_2 = \int_0^{2\pi}e^{-ixk}\frac{\sqrt{2}e^{i\pi t}}{2\sqrt{N_1N_2}}\left( e^{-ik}-1 \right)V_{2}(k)\vspace{5pt}\,\frac{dk}{2\pi}\vspace{5pt}\\
I_3 = \int_0^{2\pi}e^{-ixk}\frac{\sqrt{2}e^{i\pi t}}{2\sqrt{N_1N_2}}\left(e^{-ik} -1 \right)V_{3}(k)\vspace{5pt}\,\frac{dk}{2\pi}\vspace{5pt}\\
I_4 = \int_0^{2\pi}e^{-ixk}\frac{\sqrt{2}e^{it\varphi(k)}}{2N_2}\left(e^{-ik}+\gamma_{2}(k)^{2}\right)V_{4}(k)\,\frac{dk}{2\pi}\vspace{5pt}.
\end{array}\right.$\\
To determine the height of the spike at the origin, as given by Eq.(\ref{spike-0}), we must evaluate the sum of the second and third terms of
(\ref{4Is}) at $x=0$. After some algebraic manipulations, using the identity $N_{1}N_{2}=4+4\cos^{2}\frac{k}{2}$ and the formulas
\begin{eqnarray*}
\int_0^{2\pi}\frac{1}{1+\cos^2\frac{k}{2}}\frac{dk}{2\pi}&=&\frac{\sqrt{2}}{2},\\
\int_0^{2\pi}\frac{\cos k}{1+\cos^2\frac{k}{2}}\frac{dk}{2\pi}&=&2-\frac{3\sqrt{2}}{2},\\
\int_0^{2\pi}\frac{\sin k}{1+\cos^2\frac{k}{2}}\frac{dk}{2\pi}&=&0,
\end{eqnarray*}
we arrive at the formula:
\begin{equation}I_{2}+I_{3}=\frac{1}{2}\left((-1)^t(2-\sqrt{2}),0,0, (-1)^t(2-\sqrt{2})\right).\label{eqn23}\end{equation}
Therefore, the probability of finding the particle at the origin converges to
\begin{eqnarray*}
p(0)&=&\|I_{2}+I_{3}\|^{2}\\
&=&3-2\sqrt{2}\\
&\thickapprox& 0.171573.
\end{eqnarray*}
Our computer simulation (see FIG. 1) accords well with this theoretical value as does the simulation result 0.171242 reported in \cite{V-ABBB05}.
\begin{figure}
\includegraphics[height=2.0in]{QRWFig_0001}
\caption{\label{fig:wide}The position probability distribution $p_{t}(x)$ for a 2cQRW with initial coin state $|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle +|11\rangle\right)$ after $t=400$ steps.}
\end{figure}
We proceed to elucidate the destinies of the two minor spikes, which are plainly visible in FIG. 1. By Theorem 3, with $M=\frac{\sqrt{2}}{2}$, the spikes are predicted to lurk within small neighborhoods of the positions $x=\pm \frac{\sqrt{2}}{2}t$. It suffices to consider only the spike at $x=\frac{\sqrt{2}}{2}t$, since the other one may be treated similarly. Also, to serve as our small neighborhood, it suffices to adopt the interval $J_{t}=[\frac{\sqrt{2}}{2}t-1,\frac{\sqrt{2}}{2}t+1]$.
Let $x=\frac{\sqrt{2}}{2}t+c\in J_{t}$. Substituting this value of $x$ into each of the summands of Eq.(\ref{4Is}) and performing some simple algebraic manipulations, we obtain:\vspace{5pt}
{\hspace*{-10pt}}
$\left\{
\begin{array}{l}
I_1 = \int_0^{2\pi}\!\!e^{it[\varphi(k)-\frac{\sqrt{2}}{2}k]}e^{-ikc}\frac{\sqrt{2}}{2N_1} \left(e^{-ik}+\gamma_{1}(k)^{2}\right)V_{1}(k)\,\frac{dk}{2\pi}\vspace{5pt}\\
I_2 = \int_0^{2\pi}e^{i\pi t}e^{-ixk}\frac{\sqrt{2}}{2\sqrt{N_1N_2}} \left( e^{-ik}-1 \right)V_{2}(k)\vspace{5pt}\,\frac{dk}{2\pi}\vspace{5pt}\\
I_3 = \int_0^{2\pi}e^{i\pi t}e^{-ixk}\frac{\sqrt{2}}{2\sqrt{N_1N_2}} \left(e^{-ik} -1 \right)V_{3}(k)\vspace{5pt}\,\frac{dk}{2\pi}\vspace{5pt}\\
I_4 = \int_0^{2\pi}\!\!e^{it[-\varphi(k)-\frac{\sqrt{2}}{2}k]}e^{-ikc}\frac{\sqrt{2}}{2N_2}\!\left(e^{-ik}\!+\!\gamma_{2}(k)^{2}\right)\!V_{4}(k)\,\frac{dk}{2\pi}\vspace{5pt}.
\end{array}\right.$
By Lemma 1, both $I_2$ and $I_3$ decay as $O(t^{-1})$. Hence, as $t\rightarrow\infty$, their contributions to the value of $p_{t}(x)$ become eventually negligible relative to $I_{1}$ and $I_{4}$, whose values, as shown below, decay no faster than $O(t^{-\frac{2}{3}})$.
Without much effort, one verifies that the modified phase function $\varphi(k)-\frac{\sqrt{2}}{2}k$ possesses a stationary point of order 2 at $k=0$ and, by similar reasoning, $-\varphi(k)-\frac{\sqrt{2}}{2}k$ possesses a stationary point of order 2 at $k=2\pi$. Thus, by the method of ``stationary phase" \cite{NV00, Eedelyi56}, the following values are obtained for the leading terms of $I_1$ and $I_4$:\vspace{5pt}
\begin{eqnarray*}
I_1 &\thicksim& C_{1}\cdot (1,\sqrt{2}-1,\sqrt{2}-1,3-2\sqrt{2}),\\
I_4 &\thicksim& C_{2}\cdot (1,\sqrt{2}-1,\sqrt{2}-1,3-2\sqrt{2}),
\end{eqnarray*}where
\begin{eqnarray*}
C_{1}&=&\frac{1}{2\pi}\frac{2\sqrt{2}-2}{(4-2\sqrt{2})^2} e^{-i\frac{\pi}{6}}\left(\frac{96}{t\sqrt{2}}\right)^{\frac{1}{3}}\frac{\Gamma(\frac{1}{3})}{3}\nonumber \\
C_{2}&=&\frac{1}{2\pi}\frac{2\sqrt{2}-2}{(4-2\sqrt{2})^2}e^{-ic2\pi} e^{-it\sqrt{2}\pi+i\frac{\pi}{6}}\left(\frac{96}{t\sqrt{2}}\right)^{\frac{1}{3}}\frac{\Gamma(\frac{1}{3})}{3}.
\end{eqnarray*}
Finally, after some further manipulations, we obtain:
\begin{eqnarray*}
p_t(x)&\thicksim& \|I_1+I_4\|^2\\
&\thicksim& \|e^{-i\frac{\pi}{6}}+e^{-ic2\pi}e^{-it\sqrt{2}\pi+i\frac{\pi}{6}}\|^2
\cdot \frac{(6\sqrt{2})^{\frac{2}{3}}\Gamma(\frac{1}{3})^2}{18\pi^2}t^{-\frac{2}{3}}\\
&\thicksim&\frac{(6\sqrt{2})^{\frac{2}{3}}\Gamma(\frac{1}{3})^2}{6\pi^2}t^{-\frac{2}{3}}.
\end{eqnarray*}
\section{RELATED IDEAS FOR FURTHER RESEARCH}
As above, let $X$ denote the position operator on the position space $\mathcal{H}_p$ and let $X_t\doteq {U^{\dagger}}^{t}XU^t$, where $U$ denotes the evolution operator. A natural and potentially more useful statistical description of the evolution of the 2cQRW as time $t\rightarrow \infty$ can be developed in terms of the ``normalized" operator $\frac{1}{t}X_{t}$.
Suppose the 2cQRW, controlled by the coin operator $H^{\otimes 2}$, is launched from the origin in the initial state $\Psi_{0}^{ec}(0)=\alpha_1|00\rangle+\alpha_2|01\rangle+\alpha_3|10\rangle+\alpha_4|11\rangle$, where $\sum_{j=1}^4|\alpha_j|^2=1$. For $y\in [-1,1]$, let $\delta_0(y)$ denote the \textit{point mass at the origin} and let $I_{(a,b)}(y)$ denote the \textit{indicator function} of the real interval $(a,b)$. Then, as $t\rightarrow \infty$, the normalized position distribution $f_{t}(y)$ associated with $\frac{1}{t}X_{t}$ converges, in the sense of a weak limit, to the density function
\begin{eqnarray}
f(y)=c_{00}\delta_0(y)+\frac{I_{(-1/\sqrt{2},1/\sqrt{2})}(y)}{\pi(1-y^2)\sqrt{1-2y^2}}\sum_{j=0}^2c_j y^j.\label{generaldensityfn}
\end{eqnarray}
In the above formula, the coefficients $c_{00}$, $c_{0}$, $c_{1}$ and $c_{2}$ are given by
\begin{eqnarray}
c_{00}&=&\frac{\sqrt{2}}{4}+\frac{1}{2}(2-\sqrt{2})(|\alpha_2|^2+|\alpha_3|^2)\nonumber\\
\mbox{}&\mbox{}&+\frac{1}{2}\mathrm{Re}[(2-\sqrt{2})(\alpha_2\overline{\alpha_4}+\alpha_3\overline{\alpha_4}-\alpha_1\overline{\alpha_2}-\alpha_1\overline{\alpha_3})\nonumber\\
\mbox{}&\mbox{}&+(3\sqrt{2}-4)\alpha_1 \overline{\alpha_4}-\sqrt{2}\alpha_2\overline{\alpha_3}]\label{c_{00}}\\
c_{0}&=&\frac{1}{2}+\mathrm{Re}\{\alpha_2\overline{\alpha_3}-\alpha_1\overline{\alpha_4}\} \label{c_0}\\
c_{1}&=&|\alpha_1|^2-|\alpha_4|^2\nonumber\\
\mbox{}&\mbox{}&+\mathrm{Re}(\alpha_1\overline{\alpha_2}+\alpha_1\overline{\alpha_3}+\alpha_2\overline{\alpha_4}+\alpha_3\overline{\alpha_4})\nonumber\\
c_{2}&=&\frac{1}{2}(|\alpha_1|^2+|\alpha_4|^2-|\alpha_2|^2-|\alpha_3|^2)\nonumber\\
\mbox{}&\mbox{}&+\mathrm{Re}[3\alpha_1\overline{\alpha_4}+\alpha_1\overline{\alpha_2}+\alpha_1\overline{\alpha_3}\nonumber\\
\mbox{}&\mbox{}&-\alpha_2\overline{\alpha_3}-\alpha_2\overline{\alpha_4}-\alpha_3\overline{\alpha_4}]\nonumber.
\end{eqnarray}
The derivation of Eq.(\ref{generaldensityfn}) follows the method of Grimmett \textit{et al.} \cite{GJS04}.
For instance, if the 2cQRW is launched from the Bell state $|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)$, then we obtain
\begin{eqnarray}
f(y)=(\sqrt{2}-1)\delta_0(y)+\frac{2y^2I_{(-1/\sqrt{2},1/\sqrt{2})}(y)}{\pi(1-y^2)\sqrt{1-2y^2}}.\label{specificdensityfn}
\end{eqnarray}
It is no accident that the coefficient $c_{00}=\sqrt{2}-1$ of $\delta_{0}(y)$ in Eq. (\ref{specificdensityfn}) coincides with the value of the sum in Eq. (\ref{localization}). In this particular instance, we have
$$c_{00}=\sum_{x\in \mathbb{Z}}p(x)=\sqrt{2}-1.$$
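Indeed, substituting $\alpha_1=\alpha_4=\frac{1}{\sqrt{2}}$ and $\alpha_2=\alpha_3=0$ into Eq.~(\ref{c_{00}}) gives
\[
c_{00}=\frac{\sqrt{2}}{4}+\frac{1}{2}\mathrm{Re}\left[(3\sqrt{2}-4)\alpha_1\overline{\alpha_4}\right]=\frac{\sqrt{2}}{4}+\frac{3\sqrt{2}-4}{4}=\sqrt{2}-1 .
\]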
A detailed study of the limit distribution of $\frac{1}{t}X_t$ for 2cQRW will be presented in a forthcoming publication.
\begin{acknowledgments}
We thank our undergraduate research assistants, Jamin Gallman and Brian Cunningham, for helping with the computer simulation to generate the graph in FIG. 1. The authors gratefully acknowledge support from Project HBCU-UP/BETTER at Bowie State University.
\end{acknowledgments}
\label{sec:intro}
The theory of cosmic inflation was originally advocated as a solution to the flatness and horizon problems \cite{Guth1981, Linde1982} of the big-bang cosmology. When treated quantum-mechanically, inflation can also provide a mechanism for the generation of the perturbations that have resulted in the anisotropies observed in the cosmic microwave background \cite{Hawking1982, Starobinsky1982, Guth1982, Linde1983a}. It is usually formulated in terms of a single scalar field, minimally coupled to gravity, whose potential energy dominates over its kinetic energy for a short period of time and drives the accelerated expansion of the universe. This phase can be most easily achieved if the scalar potential $\mathcal{V}(\phi)$ has a relatively flat plateau and the scalar field can slowly roll down until it reaches the minimum.
Over the years a vast plethora of inflationary models have been proposed, originating from diverse physics frameworks. Recently, the increasing sensitivity of the experiments, and in particular measurements from the Planck and BICEP2/Keck Collaborations \cite{Ade2015, Ade2016}, have put stringent constraints on many of these models. The simplest models, where a single scalar field is minimally coupled to gravity, seem to be disfavored\footnote{See however \cite{Ballesteros2016}.}. On the other hand, slightly more convoluted models such as the Starobinsky model \cite{Starobinsky1980, Pallis2017, Bruck2015, Copeland2015, Kehagias2014, Asaka2016}, nonminimal Higgs inflation \cite{Bezrukov2008, Bezrukov2009, DeSimone2009, Barbon2009, Bezrukov2009a, Barvinsky2009, Lerner2010, Lerner2010a, Popa2011, Bezrukov2011, Kamada2012, Steinwachs2012, Bezrukov2013, George2014, Hamada2014, Hamada2014a, Hamada2015, Bezrukov2014, Allison2014, Salvio2015, Bruck2016, Calmet2016}, or the so-called $\alpha$--attractors \cite{Kallosh2013b, Kallosh2013d, Kallosh2013a, Kallosh2014a, Linde2015, Galante2015, Broy2015, Kallosh2015, Carrasco2015, Yi2016, Odintsov2016, Bhattacharya2017, Dimopoulos2017} give predictions for the observables that lie inside the sweet spot of the measurements. A common feature of these models is that they can be formulated in terms of a nonminimal coupling function $F(\phi)$ between the inflaton $\phi$ and the scalar curvature $R$. Such nonminimal coupling is expected to be generated at the quantum level of the theory even if it is absent in the classical action \cite{Faraoni1996}. These nonminimally coupled theories belong to a general class of gravity theories termed \textit{scalar-tensor (ST) theories} \cite{Faraoni2004a}. Other examples of such theories include, among others, the $f(R)$ models \cite{Sotiriou2010, Capozziello2011, Nojiri2011, DeFelice2010, Clifton2012, Rinaldi2014, Bamba2014, Nojiri2017}, scale-invariant models \cite{Shaposhnikov2009a, Khoze2013, Kannike2014, Gabrielli2014, Salvio2014, Csaki2014a, Kannike2015, Barrie2016, Marzola2016a, Marzola2016b, Rinaldi2016a, Farzinnia2016, Kannike2016, Rinaldi2016b, Karananas2016, Tambalo2017, Kannike2017, Ferreira2017, Salvio2017, Kannike2017a} and nonminimal inflationary models \cite{Fakir1990, Makino1991, Faraoni1996, Torres1997, Faraoni2000, Koh2005, Park2008a, Nozari2008, Bauer2008, Nozari2010, Okada2010, Pallis2010, Edwards2014, Kehagias2014, Inagaki2015, Artymowski2017}.
Scalar-tensor theories are usually formulated in either the \textit{Jordan frame (JF)} or the \textit{Einstein frame (EF)}. In the JF the Planck mass is a dynamical quantity that depends on the value of the scalar field, whose self-interactions are described by a potential. Furthermore, the scalar field is minimally coupled to the metric, and the matter part of the action is just the standard one. In the EF the gravitational action has the standard Einstein-Hilbert form plus a scalar field described by an effective potential. Moreover, the scalar appears in the matter sector of the action through the rescaling factor which multiplies the metric tensor. The two frames are mathematically equivalent at the classical level\footnote{See also \cite{Kamenshchik2015, Herrero-Valea2016, Pandey2016, Pandey2017} for considerations on the quantum equivalence of the frames.} since one can always switch between them by applying a conformal transformation of the metric and a field redefinition, collectively referred to as \textit{frame transformation}. Nevertheless, the physical equivalence of the frames with respect to the physical predictions has become a matter of a long-standing debate \cite{Capozziello1997, Dick1998, Faraoni1999, Faraoni1999a, Flanagan2004, BHADRA2007, Nozari2009, Capozziello2010, Corda2011, Qiu2012, Qiu2012a, Quiros2013, Chiba2013a, Postma2014, Qiu2015, Domenech2015a, Bahamonde2016, Brooker2016, Bhattacharya2017a, Bahamonde2017}.
Inflation is usually studied with the help of the so-called \textit{slow-roll parameters} which are generally frame-covariant \cite{Kaiser1995, Nozari2010, DeFelice2011, Burns2016, Karamitsos2017}. Nevertheless, if we analyze the slow-roll regimes in the JF and EF using invariant quantities then we can quickly move between different parametrizations. This invariant formalism was recently proposed and developed in \cite{Jaerv2015, Jaerv2015a, Kuusk2016, Kuusk2016a, Jaerv2017}. In \cite{Kuusk2016} the authors calculated the spectral indices up to second order in the slow-roll parameters in both the EF and JF and showed that the two frames are physically equivalent. Here we extend their results up to third order in the slow-roll parameters and also examine how the different definitions for the number of $e$-folds in the two frames affect the observables.
This paper is organized as follows: in Sec. \ref{sec:formalism} we review the invariant formalism introduced in \cite{Kuusk2016}. After presenting the three principal quantities which are invariant under a conformal transformation of the metric and a redefinition of the scalar field, we consider the slow-roll approximation in the two frames and define the corresponding \textit{Hubble slow-roll} parameters (HSRPs). We also define a hierarchy of \textit{potential slow-roll} parameters (PSRPs) which are frame independent. As shown in \cite{Jaerv2017}, this formalism proves to be attractive since many inflationary models can be classified according to the form of their invariant potentials. This provides an elegant explanation as to why vastly different models can produce the same predictions for the inflationary observables.
In Sec. \ref{sec:spectra-indices} we adopt the Green's function method considered in \cite{Gong2001} and calculate the spectral indices up to third order in the slow-roll parameters in both the JF and EF. Then, using the relations between the HSRPs we find that the two frames are equivalent. Furthermore, since the HSRPs can be related to the PSRPs, we express the spectral indices in terms of the PSRPs which are manifestly frame invariant.
In Sec. \ref{sec:efolds} we consider the nonminimal Coleman-Weinberg model developed in \cite{Kannike2016} and compare the predictions of the third-order corrected expressions we obtained with the most commonly used first-order results. Furthermore, even though the expressions for the observables that we obtain are frame invariant, the definition of the number of $e$-folds is not, and this results in different predictions. To this end, we examine how the predictions change if the required 50--60 $e$-folds are counted in the Einstein or in the Jordan frame. Finally, we examine how the predicted values for the inflationary observables are affected by the end-of-inflation condition. The \textit{exact} condition for inflation to end is $\eps_H=1$. The usual approach is to Taylor approximate this condition with PSRPs. Most authors use only the first-order approximation $\eps_H \approx \eps_V$, since this is indeed a good approximation for almost all of the inflationary epoch, save for the last few $e$-folds before inflation ends, when it breaks down. Since we have obtained the third-order corrected expressions for the inflationary observables, we also compare the results against three more end-of-inflation conditions, namely the third-order Taylor approximation of the condition $\eps_H=1$ with PSRPs, as well as the Pad\'{e} [1/1] and Pad\'{e} [2/2] approximants. All of these considerations prove to be relevant, since the differences in the predictions that we obtain are within the accuracy of future experiments and may prove instrumental in ruling out various inflationary models.
In Sec. \ref{sec:Conclusions} we summarize our results and conclude. Useful formulas are presented in the Appendixes.
\section{Invariant formalism and slow-roll approximation}
\label{sec:formalism}
In this section, we consider the general action of a single scalar field that describes a wide class of scalar-tensor gravity theories. By using the frame and parametrization invariant formalism introduced in \cite{Jaerv2017, Jaerv2015, Jaerv2015a, Kuusk2016a, Kuusk2016} we write down the field equations of motion in terms of quantities that are invariant under conformal rescalings of the metric and redefinitions of the scalar field.
\subsection{General action}
\label{subsec:action}
The most general action for scalar-tensor theories has the form \cite{Flanagan2004}
\be
S = \int \text{d}^4 x \sqrt{-g} \left\lbrace \frac{1}{2}\mathcal{A}(\Phi) R - \frac{1}{2}\mathcal{B}(\Phi) g^{\mu\nu} \left( \nabla_\mu \Phi \right) \left( \nabla_\nu \Phi \right) - \mathcal{V}(\Phi) \right\rbrace + S_m \left[ e^{2 \sigma(\Phi)} g_{\mu\nu} , \chi \right] ,
\label{Action}
\ee
where in the first term $g$ is the metric determinant, $R$ denotes the Ricci scalar associated with the metric $g_{\mu\nu}$ and $\mathcal{V} (\Phi)$ is the scalar potential. In the second term, $S_m$ stands for the matter part of the action. Furthermore, the four functions $\mathcal{A} (\Phi)$, $\mathcal{B} (\Phi)$, $\mathcal{V} (\Phi)$ and $\sigma (\Phi)$ are arbitrary dimensionless functions of the scalar field $\Phi$ that completely characterize a model, and we call them \textit{model functions}. Throughout, we work in units of the reduced Planck mass, $M_P \equiv (8 \pi G)^{-1/2} = 1$.
We assume that the background metric is the flat Friedmann--Lema\^{i}tre--Robertson--Walker (FLRW) with the space-positive signature
\be
ds^2= a^2(\tau)(-d\tau^2+dx^2+dy^2+dz^2) ,
\label{FLRW_metric}
\ee
where $a(\tau)$ is the scale factor of the Universe as a function of the frame-invariant conformal time.
By considering a rescaling of the metric
\be
g_{\mu\nu} = e^{2 \bar{\gamma} (\bar{\Phi})} \bar{g}_{\mu\nu}
\label{trans_metric1}
\ee
and a redefinition of the field
\be
\Phi = \bar{f} (\bar{\Phi})
\label{trans_field}
\ee
one can easily verify that the action \eqref{Action} is invariant up to a boundary term, if the model functions transform according to the following relations:
\ba
\bar{\mathcal{A}}(\bar{\Phi}) &=& e^{2 \bar{\gamma} (\bar{\Phi})} \mathcal{A} \left( \bar{f} (\bar{\Phi}) \right) ,
\label{trans_A}
\\
\nonumber \\
\bar{\mathcal{B}}(\bar{\Phi}) &=& e^{2 \bar{\gamma} (\bar{\Phi})} \left[ (\bar{f}')^2 \mathcal{B} \left( \bar{f} (\bar{\Phi}) \right) - 6 (\bar{\gamma}')^2 \mathcal{A} \left( \bar{f} (\bar{\Phi}) \right) - 6 \bar{\gamma}' \bar{f}' \mathcal{A}' \right] ,
\label{trans_B}
\\
\nonumber \\
\bar{\mathcal{V}} (\bar{\Phi}) &=& e^{4 \bar{\gamma} (\bar{\Phi})} \mathcal{V} \left( \bar{f} (\bar{\Phi}) \right) ,
\label{trans_V}
\\
\nonumber \\
\bar{\sigma} (\bar{\Phi}) &=& \sigma \left( \bar{f} (\bar{\Phi}) \right) + \bar{\gamma} (\bar{\Phi}) ,
\label{trans_sigma}
\ea
where a prime indicates differentiation with respect to the argument, e.g. $\bar{\gamma}' \equiv \text{d} \bar{\gamma} (\bar{\Phi}) / \text{d} \bar{\Phi}$ and $\mathcal{A}' \equiv \text{d} \mathcal{A}(\Phi) / \text{d} \Phi$, and an overbar denotes quantities which are given in terms of the conformal metric $\bar{g}_{\mu\nu}$.
Now, using the transformations \eqref{trans_metric1}-\eqref{trans_field} one can fix two out of the four arbitrary functions $\left\lbrace \mathcal{A} , \mathcal{B} , \mathcal{V} , \sigma \right\rbrace $. Different choices for these functions correspond to different \textit{parametrizations}. For example, the choice
\be
\mathcal{A} = F(\phi), \quad \mathcal{B} = 1 , \quad \mathcal{V} = \mathcal{V} (\phi) , \quad \sigma = 0 ,
\ee
corresponds to the JF in the Boisseau-Esposito-Far\`{e}se-Polarski-Starobinski parametrization \cite{Boisseau2000, Esposito-Farese2001}, the choice
\be
\mathcal{A} = \Psi, \quad \mathcal{B} = \frac{\omega(\Psi)}{\Psi} , \quad \mathcal{V} = \mathcal{V} (\Psi) , \quad \sigma = 0 ,
\ee
corresponds to the JF in the Brans-Dicke-Bergmann-Wagoner parametrization \cite{Brans1961, Bergmann1968, Wagoner1970}, while the choice
\be
\mathcal{A} = 1, \quad \mathcal{B} = 2 , \quad \mathcal{V} = \mathcal{V} (\varphi) , \quad \sigma = \sigma(\varphi) ,
\ee
represents the EF in the canonical parametrization \cite{Dicke1962a, Brans1961, Bergmann1968, Wagoner1970}.
\subsection{Invariants}
\label{subsec:invariants}
Next, we follow \cite{Kuusk2016} and consider three quantities which are invariant under a conformal rescaling of the metric and a reparametrization of the scalar field as a result of the transformation properties \eqref{trans_A}-\eqref{trans_sigma} of the model functions. These invariants are
\be
\mathcal{I}_m (\Phi) \equiv \frac{e^{2 \sigma (\Phi)}}{\mathcal{A} (\Phi)} ,
\label{inv_m}
\ee
\be
\mathcal{I}_{\mathcal{V}} (\Phi) \equiv \frac{\mathcal{V} (\Phi)}{(\mathcal{A} (\Phi))^2} ,
\label{inv_v}
\ee
\be
\mathcal{I}_\phi (\Phi) \equiv \int \left( \frac{2 \mathcal{A} \mathcal{B} + 3 (\mathcal{A}')^2}{4 \mathcal{A}^2} \right)^{1/2} \text{d} \Phi .
\label{inv_phi}
\ee
The first invariant, $\mathcal{I}_m(\Phi)$, is a quantity that characterizes the nonminimality of a theory. For constant $\mathcal{I}_m(\Phi)$ the scalar field is minimally coupled to gravity, and we are dealing with standard general relativity. On the other hand, if $\mathcal{I}_m'(\Phi) \not\equiv 0$, then this invariant is a dynamical function and the scalar field is nonminimally coupled to gravity, as is the case in the JF. The second invariant, $\mathcal{I}_{\mathcal{V}}(\Phi)$, contains the self-interactions of the scalar field and plays the role of an invariant potential. Finally, the third invariant, $\mathcal{I}_\phi(\Phi)$, measures the volume of the one-dimensional space of the scalar field and can be interpreted as the invariant propagating scalar degree of freedom.
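The invariance of \eqref{inv_m}--\eqref{inv_phi} can also be checked mechanically. The following minimal \texttt{sympy} sketch (ours, purely illustrative; the rescaling $\bar{\gamma} = \ln \Phi$ with $\bar{f} = \mathrm{id}$ is an arbitrary choice) evaluates the invariants for the induced gravity model functions before and after a conformal rescaling and confirms that they coincide; for $\bar{f} = \mathrm{id}$ the integrand of \eqref{inv_phi} is itself invariant, so comparing integrands suffices:
\begin{verbatim}
# Illustrative sketch: check that I_m, I_V and the integrand of I_phi are
# unchanged under the rescaling gamma(Phi) = ln(Phi), f = identity.
import sympy as sp

Phi, xi, lam, v = sp.symbols('Phi xi lam v', positive=True)

def invariants(A, B, V, sig):
    I_m = sp.exp(2*sig)/A                                 # Eq. (inv_m)
    I_V = V/A**2                                          # Eq. (inv_v)
    dI_phi = sp.sqrt((2*A*B + 3*sp.diff(A, Phi)**2)
                     /(4*A**2))                           # integrand of (inv_phi)
    return I_m, I_V, dI_phi

# induced gravity: A = xi Phi^2, B = 1, V = lam (Phi^2 - v^2)^2, sigma = 0
A, B, V, sig = xi*Phi**2, sp.Integer(1), lam*(Phi**2 - v**2)**2, sp.Integer(0)
orig = invariants(A, B, V, sig)

# transformed model functions, Eqs. (trans_A)-(trans_sigma), with f' = 1
g, gp = sp.log(Phi), 1/Phi
A2 = sp.exp(2*g)*A
B2 = sp.exp(2*g)*(B - 6*gp**2*A - 6*gp*sp.diff(A, Phi))
V2 = sp.exp(4*g)*V
s2 = sig + g
new = invariants(A2, B2, V2, s2)

print([sp.simplify(o - n) for o, n in zip(orig, new)])    # expect [0, 0, 0]
\end{verbatim}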
The transformation properties of the model functions can also be used to define tensorial invariants, for example~\cite{Kuusk2016}
\be
\hat{g}_{\mu\nu} \equiv \mathcal{A}(\Phi) g_{\mu\nu} .
\label{trans_metric2}
\ee
The above choice is not unique since the tensor \eqref{trans_metric2} does not change its transformation properties if it is multiplied by a scalar invariant, i.e.,
\be
\bar{g}_{\mu\nu} \equiv e^{2 \sigma (\Phi)} g_{\mu\nu} = \mathcal{I}_m \hat{g}_{\mu\nu}
\label{trans_metric3}
\ee
is also invariant under the transformations \eqref{trans_metric1} and \eqref{trans_field}.
In the following, a barred or a hatted variable will represent the quantity evaluated in the JF or EF, respectively. The relation between the time coordinate, the scale factor and the Hubble parameter in the two frames is \cite{Kuusk2016}
\be
\frac{\text{d}}{\text{d} \bar{t}} = \frac{1}{\sqrt{\mathcal{I}_m}} \frac{\text{d}}{\text{d} \hat{t}} , \quad \bar{a} (\bar{t}) = \sqrt{\mathcal{I}_m} \hat{a} (\hat{t}) ,
\label{trans_time_scale-factor}
\ee
\be
\bar{H} = \frac{1}{\sqrt{\mathcal{I}_m}} \left( \hat{H} + \frac{1}{2} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \hat{t}} \right) .
\label{trans_Hubble}
\ee
An interesting and appealing feature of the invariant formalism, which was pointed out in \cite{Jaerv2017}, is that inflationary models with very different background physical motivations can be described by similar invariant potentials and thus lead to the same predictions for the inflationary observables. As an example, let us consider \textit{induced gravity} inflation \cite{Accetta1985, Carugno1993, Kaiser1994, Kaiser1994a, Cervantes-Cota1995, Cerioni2009, Giudice2014} and \textit{Starobinsky} inflation \cite{Starobinsky1980, Pallis2017, Bruck2016a, Bruck2015, Copeland2015, Kehagias2014, Asaka2016}. The former is described by the model functions
\ba
\mathcal{A} (\Phi) &=& \xi \Phi^2 ,
\\
\mathcal{B} (\Phi) &=& 1 ,
\\
\sigma (\Phi) &=& 0 ,
\\
\mathcal{V} (\Phi) &=& \lambda \left( \Phi^2 - v^2 \right)^2 ,
\ea
where $\xi$ is the nonminimal coupling and $v$ is the vacuum expectation value (VEV) of the scalar field $\Phi$ which induces the Planck mass scale,
\be
1 = \xi v^2 .
\ee
For Starobinsky inflation with $f(R) = R + b R^2$ one has~\cite{Burns2016}
\ba
\mathcal{A} (\Phi) &=& \Phi ,
\\
\mathcal{B} (\Phi) &=& 0 ,
\\
\sigma (\Phi) &=& 0 ,
\\
\mathcal{V} (\Phi) &=& \frac{b}{2} \left( \frac{\Phi-1}{2b}\right)^2.
\ea
Next, following the recipe of \cite{Jaerv2017} we can obtain the invariant potentials $\mathcal{I}_{\mathcal{V}}$ for the two models. As a first step, using \eqref{inv_phi} we calculate the form of the invariant fields
\ba
\text{Induced gravity:}&& \qquad \mathcal{I}_\phi = \sqrt{\frac{1 + 6 \xi}{2 \xi}} \ln \left( \frac{\Phi}{v} \right) ,
\\
\nonumber \\
\text{Starobinsky:}&& \qquad \mathcal{I}_\phi = \frac{\sqrt{3}}{2} \ln \Phi .
\ea
Afterwards, inverting the above relations we find $\Phi(\mathcal{I}_\phi)$ and then using \eqref{inv_v} we calculate $\mathcal{I}_{\mathcal{V}}(\Phi(\mathcal{I}_\phi)) = \mathcal{I}_{\mathcal{V}}(\mathcal{I}_\phi)$ and obtain
\ba
\text{Induced gravity:}&& \qquad \mathcal{I}_{\mathcal{V}} (\mathcal{I}_\phi)= \frac{\lam}{\xi^2}\left( 1- e^{-\sqrt{\frac{8 \xi}{1+6 \xi}}\mathcal{I}_\phi} \right)^2,
\label{Induced_Inv_pot}
\\
\nonumber \\
\text{Starobinsky:}&& \qquad \mathcal{I}_{\mathcal{V}} (\mathcal{I}_\phi)=\frac{1}{8 \, b} \left( 1- e^{-\frac{2}{\sqrt{3}}\mathcal{I}_\phi}\right)^2.
\label{Staro_Inv_pot}
\ea
The forms of the invariant potentials suggest that for large values of the nonminimal coupling ($\xi \gtrsim 1$) the shape of the induced gravity invariant potential \eqref{Induced_Inv_pot} coincides with its Starobinsky counterpart \eqref{Staro_Inv_pot}, a behavior depicted in Fig.~\ref{fig:Induced-v-Staro}. As a consequence, the two models yield identical predictions in the strong coupling regime. On the other hand, in the weak coupling limit induced gravity gives the same predictions as quadratic inflation \cite{Linde1983a}. Indeed, when
\be
\mathcal{I}_\phi \ll \sqrt{\frac{1+6 \xi}{8 \xi}} \, ,
\label{Induced_to_quadratic_condition}
\ee
the invariant potential for induced gravity becomes \cite{Kallosh2014, Kallosh2013a}
\be
\mathcal{I}_{\mathcal{V}} = M^2 \mathcal{I}_\phi^2 , \quad \text{with} \quad M^2 = \frac{8 \lambda}{\xi \left( 1 + 6 \xi \right)} \, .
\ee
Note in \eqref{Induced_to_quadratic_condition} that as $\xi$ becomes smaller, the allowed range of the field $\mathcal{I}_\phi$ in which induced gravity and quadratic inflation produce similar predictions becomes wider. As a consequence, only for small values of $\xi$ can the field $\mathcal{I}_\phi$ produce the required 50--60 $e$-folds within this range. This is why the induced gravity predictions reach the quadratic inflation attractor in the small coupling regime.
\begin{figure}
\includegraphics[width=.5\textwidth]{Fig1a.pdf}
\includegraphics[width=.5\textwidth]{Fig1b.pdf}
\caption{The normalized invariant inflationary potentials for induced gravity and Starobinsky models for $\xi=2$. In the strong coupling limit the invariant potentials have a similar form and lead to the same predictions, while in the limit \eqref{Induced_to_quadratic_condition} induced gravity approaches the quadratic inflation attractor (inset in left plot).}
\label{fig:Induced-v-Staro}
\end{figure}
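The coincidence of the two potentials is also easy to check numerically. A short illustrative sketch (ours; normalizing both potentials to their plateau value is a convenience, mirroring Fig.~\ref{fig:Induced-v-Staro}):
\begin{verbatim}
# Illustrative sketch: compare the normalized invariant potentials
# (Induced_Inv_pot) and (Staro_Inv_pot); the exponential slopes coincide
# in the strong coupling limit, a_ig -> 2/sqrt(3) as xi -> infinity.
import numpy as np

xi = 2.0
a_ig = np.sqrt(8*xi/(1 + 6*xi))   # slope, induced gravity
a_st = 2/np.sqrt(3)               # slope, Starobinsky

I_phi = np.linspace(0.0, 10.0, 1001)
V_ig = (1 - np.exp(-a_ig*I_phi))**2    # normalized to plateau value 1
V_st = (1 - np.exp(-a_st*I_phi))**2

print("max deviation:", np.max(np.abs(V_ig - V_st)))
\end{verbatim}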
\subsection{Slow-roll in the Jordan frame}
\label{subsec:SRJF}
Let us consider the slow rolling of the inflaton field in the JF. Taking the functional derivative of the action \eqref{Action} with respect to the metric and the scalar field in the JF, we can write down the equations of motion in terms of the invariants as
\ba
\bar{H}^2 &=& \frac{1}{3} \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \right)^2 + \bar{H} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} - \frac{1}{4} \left( \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} \right)^2 + \frac{1}{3} \frac{\mathcal{I}_{\mathcal{V}}}{\mathcal{I}_m} ,
\label{EOM1A_JF}
\\
\nonumber \\
\frac{\text{d} \bar{H}}{\text{d} \bar{t}} &=& - \frac{1}{2} \bar{H} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} + \frac{1}{4} \left( \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} \right)^2 - \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \right)^2 + \frac{1}{2} \frac{\text{d}^2 \ln{\mathcal{I}_m}}{\text{d} \bar{t}^2} ,
\label{EOM2A_JF}
\\
\nonumber \\
\frac{\text{d}^2 \mathcal{I}_\phi}{\text{d} \bar{t}^2} &=& \left( - 3 \bar{H} + \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} \right) \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} - \frac{1}{2 \mathcal{I}_m} \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} ,
\label{EOM3A_JF}
\ea
where we have neglected the contributions of the matter part of the action since we assume that the energy density and pressure of the scalar field dominate during the inflationary epoch.
The standard HSRPs in the JF have the form~\cite{Kuusk2016}
\be
\bar{\epsilon}_0 \equiv - \frac{1}{\bar{H}^2} \frac{\text{d} \bar{H}}{\text{d} \bar{t}} = - \frac{\text{d} \ln{\bar{H}}}{\text{d} \ln{\bar{a}}} , \qquad \bar{\eta} \equiv - \left( \bar{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \right)^{-1} \frac{\text{d}^2 \mathcal{I}_\phi}{\text{d} \bar{t}^2}.
\label{eps_eta_JF}
\ee
Inflation in the JF occurs as long as $\bar{\epsilon}_0 < 1$, and slow rollover happens while $\bar{\epsilon}_0 \ll 1$. In the next section, we will be concerned with higher order corrections to the inflationary indices. As a result, we will need a series of slow-roll parameters which, following~\cite{Kuusk2016}, we take to be
\be
\bar{\kappa}_0 \equiv \frac{1}{\bar{H}^2} \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \right)^2 = \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \ln{\bar{a}}} \right)^2 ,
\label{ka0_JF}
\ee
\be
\bar{\kappa}_1 \equiv \frac{1}{\bar{H} \bar{\kappa}_0} \frac{\text{d} \bar{\kappa}_0}{\text{d} \bar{t}} = \frac{\text{d} \ln{\bar{\kappa}_0}}{\text{d} \ln{\bar{a}}} = 2 \left( - \bar{\eta} + \bar{\epsilon}_0 \right) ,
\label{ka1_JF}
\ee
\be
\bar{\kappa}_{i+1} \equiv \frac{1}{\bar{H} \bar{\kappa}_i} \frac{\text{d} \bar{\kappa}_i}{\text{d} \bar{t}} = \frac{\text{d} \ln{\bar{\kappa}_i}}{\text{d} \ln{\bar{a}}}.
\label{kai+1_JF}
\ee
In the JF, it is also useful to consider a second series of slow-roll parameters involving the invariant $\mathcal{I}_m$ and thus related to the nonminimal coupling. This series has the form~\cite{Kuusk2016}
\be
\bar{\lambda}_0 \equiv \frac{1}{2 \bar{H}} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{t}} = \frac{1}{2} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \ln{\bar{a}}} ,
\label{la0_JF}
\ee
\be
\bar{\lambda}_1 \equiv \frac{1}{\bar{H} \bar{\lambda}_0} \frac{\text{d} \bar{\lambda}_0}{\text{d} \bar{t}} = \frac{\text{d} \ln{\bar{\lambda}_0}}{\text{d} \ln{\bar{a}}} ,
\label{la1_JF}
\ee
\be
\bar{\lambda}_{i+1} \equiv \frac{1}{\bar{H} \bar{\lambda}_i} \frac{\text{d} \bar{\lambda}_i}{\text{d} \bar{t}} = \frac{\text{d} \ln{\bar{\lambda}_i}}{\text{d} \ln{\bar{a}}} .
\label{lai+1_JF}
\ee
Now, using the definitions of the slow-roll parameters~\eqref{eps_eta_JF}-\eqref{lai+1_JF} we can rewrite the system of the field equations~\eqref{EOM1A_JF}-\eqref{EOM3A_JF} as
\ba
\mathcal{I}_{\mathcal{V}} &=& \bar{H}^2 \mathcal{I}_m \left( 3 - \bar{\kappa}_0 - 6 \bar{\lambda}_0 + 3 \bar{\lambda}_0^2 \right) ,
\label{EOM1B_JF}
\\
\nonumber \\
\bar{\kappa}_0 &=& \bar{\epsilon}_0 - \bar{\lambda}_0 \left( 1 + \bar{\epsilon}_0 - \bar{\lambda}_0 - \bar{\lambda}_1 \right) ,
\label{EOM2B_JF}
\\
\nonumber \\
- \frac{1}{2\mathcal{I}_m} \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} &=& \bar{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \left( 3 - \bar{\epsilon}_0 + \frac{1}{2} \bar{\kappa}_1 - 2 \bar{\lambda}_0 \right) .
\label{EOM3B_JF}
\ea
In the slow-roll regime we must have~\cite{Kuusk2016}
\be
\vert \bar{\kappa}_0 \vert \ll 1 , \quad \vert \bar{\kappa}_1 \vert \ll 1 , \quad \vert \bar{\lambda}_0 \vert \ll 1 , \quad \vert \bar{\lambda}_1 \vert \ll 1 ,
\label{SR_cond_JF}
\ee
and then the slow-rolling inflaton obeys the following approximate equations:
\be
\mathcal{I}_{\mathcal{V}} \approx 3 \bar{H}^2 \mathcal{I}_m , \qquad 3 \bar{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} \approx - \frac{1}{2 \mathcal{I}_m} \frac{\text{d}
\mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi}.
\label{SR_EOMs_JF}
\ee
\subsection{Slow-roll in the Einstein frame}
\label{subsec:SREF}
Analogously to the JF, the field equations in terms of the invariants in the EF have the form~\cite{Kuusk2016}
\be
\hat{H}^2 = \frac{1}{3} \left[ \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \right)^2 + \mathcal{I}_{\mathcal{V}} \right] ,
\label{EOM1A_EF}
\ee
\be
\frac{\text{d} \hat{H}}{\text{d} \hat{t}} = - \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \right)^2 ,
\label{EOM2A_EF}
\ee
\be
\frac{\text{d}^2 \mathcal{I}_\phi}{\text{d} \hat{t}^2} = - 3 \hat{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} - \frac{1}{2} \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} .
\label{EOM3A_EF}
\ee
The standard slow-roll parameters now are
\be
\hat{\epsilon}_0 \equiv - \frac{1}{\hat{H}^2} \frac{\text{d} \hat{H}}{\text{d} \hat{t}} = - \frac{\text{d} \ln{\hat{H}}}{\text{d} \ln{\hat{a}}} , \qquad \hat{\eta} \equiv - \left( \hat{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \right)^{-1} \frac{\text{d}^2 \mathcal{I}_\phi}{\text{d} \hat{t}^2},
\label{eps_eta_EF}
\ee
and again it will be useful to consider the following series of slow-roll parameters:
\be
\hat{\kappa}_0 \equiv \frac{1}{\hat{H}^2} \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \right)^2 = \left( \frac{\text{d} \mathcal{I}_\phi}{\text{d} \ln{\hat{a}}} \right)^2 ,
\label{ka0_EF}
\ee
\be
\hat{\kappa}_1 \equiv \frac{1}{\hat{H} \hat{\kappa}_0} \frac{\text{d} \hat{\kappa}_0}{\text{d} \hat{t}} = \frac{\text{d} \ln{\hat{\kappa}_0}}{\text{d} \ln{\hat{a}}} = 2 \left( - \hat{\eta} + \hat{\epsilon}_0 \right) ,
\label{ka1_EF}
\ee
\be
\hat{\kappa}_{i+1} \equiv \frac{1}{\hat{H} \hat{\kappa}_i} \frac{\text{d} \hat{\kappa}_i}{\text{d} \hat{t}} = \frac{\text{d} \ln{\hat{\kappa}_i}}{\text{d} \ln{\hat{a}}} .
\label{kai+1_EF}
\ee
With the above definitions, the system~\eqref{EOM1A_EF}-\eqref{EOM3A_EF} can be rewritten as
\ba
\mathcal{I}_{\mathcal{V}} &=& \hat{H}^2 \left( 3 - \hat{\kappa}_0 \right) ,
\label{EOM1B_EF}
\\
\nonumber \\
\hat{\kappa}_0 &=& \hat{\epsilon}_0 ,
\label{EOM2B_EF}
\\
\nonumber \\
-\frac{1}{2} \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} &=& \hat{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \left( 3 - \hat{\epsilon}_0 + \frac{1}{2} \hat{\kappa}_1 \right) .
\label{EOM3B_EF}
\ea
The slow-roll conditions are now simply
\be
\vert \hat{\kappa}_0 \vert \ll 1 , \qquad \vert \hat{\kappa}_1 \vert \ll 1 ,
\label{SR_cond_EF}
\ee
and the approximate forms of the equations~\eqref{EOM1B_EF}, \eqref{EOM3B_EF} become
\be
\mathcal{I}_{\mathcal{V}} \approx 3 \hat{H}^2 , \qquad 3 \hat{H} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} \approx - \frac{1}{2} \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} .
\label{SR_EOMs_EF}
\ee
In the next section, we will calculate the inflationary indices up to third order in the slow-roll parameters in both the EF and JF and then compare the results. It will prove useful to relate the EF slow-roll parameters with the JF ones. This can be done using Eqs.~\eqref{trans_time_scale-factor}, \eqref{trans_Hubble}. We have
\be
\hat{\kappa}_0 = \frac{\bar{\kappa}_0}{(1 - \bar{\lambda}_0)^2} , \qquad \hat{\kappa}_1 = \frac{\bar{\kappa}_1}{1 - \bar{\lambda}_0} + \frac{2 \bar{\lambda}_0 \bar{\lambda}_1}{(1 - \bar{\lambda}_0)^2} ,
\label{ka_EF-to-JF}
\ee
\be
\hat{\epsilon}_0 = \frac{\bar{\epsilon}_0 - \bar{\lambda}_0}{1 - \bar{\lambda}_0} + \frac{\bar{\lambda}_0 \bar{\lambda}_1}{(1 - \bar{\lambda}_0)^2} .
\label{eps_JF-to-EF-to-JF}
\ee
\subsection{Invariant potential slow-roll parameters}
In the spirit of~\cite{Liddle1994}, we also define a hierarchy of slow-roll parameters in terms of the invariant inflaton potential. The standard potential slow-roll parameter $\epsilon_V$ assumes the form \cite{Kuusk2016}
\begin{equation}
\eps_V=\frac{1}{4 \mathcal{I}_{\mathcal{V}}^2}\left(\frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} \right)^2 ,
\label{eps_V}
\end{equation}
while $\eta_V$ and higher-order parameters can be encoded in
\begin{equation}
{}^n\beta_V \equiv \left( \frac{1}{2 \mathcal{I}_{\mathcal{V}}} \right)^{n}\left( \frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} \right)^{n-1} \left( \frac{\text{d}^{(n+1)} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi^{(n+1)}} \right) ,
\label{hier_V}
\end{equation}
where ${}^n\beta_V$ is a parameter of order $n$ in the slow-roll approximation. The first three parameters arising from this hierarchy are
\ba
\eta_V &=& \frac{1}{2 \mathcal{I}_{\mathcal{V}}}\left( \frac{\text{d}^2 \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi^2}\right) ,
\label{eta_V}
\\
\nonumber \\
\zeta_V^2 &=& \frac{1}{4 \mathcal{I}_{\mathcal{V}}^2}\left(\frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} \right) \left(\frac{\text{d}^3 \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi^3} \right) ,
\label{zeta_V}
\\
\nonumber \\
\rho_V^3 &=& \frac{1}{8 \mathcal{I}_{\mathcal{V}}^3}\left(\frac{\text{d} \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi} \right)^2 \left( \frac{\text{d}^4 \mathcal{I}_{\mathcal{V}}}{\text{d} \mathcal{I}_\phi^4}\right) .
\label{rho_V}
\ea
Note that we have changed the symbols $\xi$ and $\sigma$ of \cite{Liddle1994} in order to avoid confusion with the nonminimal coupling and one of the model functions, respectively.
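In practice, evaluating the hierarchy \eqref{eps_V}--\eqref{rho_V} for a given model amounts to taking derivatives of the invariant potential. A minimal \texttt{sympy} sketch (illustrative only; the Starobinsky potential \eqref{Staro_Inv_pot} with $8b = 1$ and the sample field value are our arbitrary choices):
\begin{verbatim}
# Illustrative sketch: evaluate the hierarchy (eps_V)-(rho_V) for the
# Starobinsky invariant potential (Staro_Inv_pot) with 8b = 1.
import sympy as sp

p = sp.symbols('I_phi', positive=True)       # invariant field I_phi
IV = (1 - sp.exp(-2*p/sp.sqrt(3)))**2        # invariant potential I_V

d = [sp.diff(IV, p, n) for n in range(5)]    # I_V and first four derivatives
eps_V   = (d[1]/(2*IV))**2                   # Eq. (eps_V)
eta_V   = d[2]/(2*IV)                        # Eq. (eta_V)
zeta2_V = d[1]*d[3]/(4*IV**2)                # Eq. (zeta_V)
rho3_V  = d[1]**2*d[4]/(8*IV**3)             # Eq. (rho_V)

print(sp.simplify(eps_V))                    # closed form in exp(-2 p/sqrt(3))
print([sp.N(q.subs(p, sp.Rational(39, 10)))  # sample value I_phi = 3.9
       for q in (eps_V, eta_V, zeta2_V, rho3_V)])
\end{verbatim}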
\section{Higher-order spectral indices}
\label{sec:spectra-indices}
In this section, we compute the tensor and scalar power spectra up to second-order corrections in the slow-roll approximation and the corresponding spectral indices in both the JF and EF using the invariant slow-roll parameters of Secs.~\ref{subsec:SRJF} and \ref{subsec:SREF}. We present the detailed calculation in the JF and only give the final results for the EF, since the calculation follows along the same lines as in the JF.
\subsection{Jordan frame analysis}
The evolution of linear (tensor and scalar) curvature cosmological perturbations in a flat FLRW background and in the presence of a scalar inflaton field is governed by the \textit{Mukhanov-Sasaki equation} (MSE) \cite{Sasaki1986, Mukhanov1988} which reads \cite{Hwang1990a, Hwang1996, Hwang1997a, Hwang1997, Hwang1998, Hwang1998a, Noh2001}
\be
\frac{\text{d} ^2 \nu}{\text{d} \tau ^2}+\left( k^2 -\frac{1}{z}\frac{\text{d} ^2 z}{\text{d} \tau^2}\right) \nu=0 ,
\label{Mukhanov-Sasaki-nu}
\ee
where $k$ corresponds to the scale of the Fourier mode $\mathbf{k}$ of the gauge-invariant \textit{comoving curvature perturbation} $\mathcal{R}_k$ \cite{Bardeen1980}. Furthermore, the field $\nu$ (usually referred to as the \textit{Mukhanov field}) is related to $\mathcal{R}_k$ via $\nu \equiv z \mathcal{R}_k$, where $z$ is a parametrization-independent quantity that depends on both the background and the type of perturbations~\cite{Kuusk2016}. For tensor perturbations,
\be
z = \frac{\bar{a}}{\sqrt{\mathcal{I}_m}} = \hat{a} ,
\ee
while for scalar perturbations
\be
z = \sqrt{\frac{2}{\mathcal{I}_m}} \frac{\bar{a}}{\bar{H} \left( 1 - \bar{\lambda}_0 \right)} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \bar{t}} = \sqrt{2} \frac{\hat{a}}{\hat{H}} \frac{\text{d} \mathcal{I}_\phi}{\text{d} \hat{t}} .
\ee
Therefore, the evolution equation \eqref{Mukhanov-Sasaki-nu} is parametrization-independent and also has the same functional form for tensor and scalar perturbations. The two asymptotic solutions for the scalar field $\nu$ corresponding to the subhorizon and the superhorizon limit can be written respectively as
\begin{equation}
\nu\rightarrow \left\{
\begin{array}{ll}
\frac{1}{\sqrt{2 k}} e^{-ik\tau} \;\; &\text{as} \;\; -k \tau \rightarrow \infty,\\
A_k z \;\; &\text{as} \;\; -k \tau\rightarrow 0.
\end{array}
\right.
\label{MS-limits-nu}
\end{equation}
The power spectrum for cosmological perturbations is usually defined by the two-point correlation function for $\mathcal{R}_k$ in the following way:
\begin{equation}
\left\langle \mathcal{R}_k \mathcal{R}_{k'} \right\rangle = (2 \pi)^3 \delta^3 (\mathbf{k}-\mathbf{k}') P_{\mathcal{R}}(k) ,
\end{equation}
where all quantities are calculated at the time when the mode $k$ crosses the horizon [when $k^{-1}$ equals the Hubble radius $(aH)^{-1}$]. Note that the horizon-crossing condition is not the same in the two frames. In the EF one has the condition $k=\hat{a}\hat{H}$, while in the JF, using \eqref{trans_time_scale-factor}, \eqref{trans_Hubble} and \eqref{la0_JF}, one should use $k=\bar{a}\bar{H}(1-\bar{\lam}_0)$ to evaluate quantities at the time of horizon crossing. Now, using the relation between $\mathcal{R}_k$ and the Mukhanov field and the asymptotic superhorizon limit \eqref{MS-limits-nu} we can rewrite the power spectrum as
\begin{equation}
P(k) =\left( \frac{k^3}{2 \pi^2} \right) \lim_{-k \tau \rightarrow 0} \left| \frac{\nu}{z} \right|^2=\frac{k^3}{2 \pi^2} |A_k|^2.
\label{power-spectrum-def}
\end{equation}
This way the calculation of the spectrum reduces to simply finding the form of the amplitude of the field $\nu$ in the superhorizon limit. The MSE is usually solved in terms of Hankel functions by treating the slow-roll parameters as constant during inflation \cite{Stewart1993}. Since we want to obtain higher-order results for the power spectra and the spectral indices we cannot adhere to this assumption. Instead, we employ the Green's function method introduced by Stewart and Gong~\cite{Gong2001} which is valid to any order\footnote{See \cite{Stewart2002, Gong2004, Kim2004, Wei2004, Kim2005, Kadota2005, Joy2005, Dvorkin2011, Adshead2011, Kumazaki2011, Miranda2012, Adshead2013, BeltranJimenez2013, Adshead2014, Gong2014, Achucarro2014, Motohashi2015, Motohashi2017} for various extensions and applications of this method and \cite{Schwarz2001, Martin2003, Casadio2005, Casadio2005a, Casadio2005b, Habib2004, Habib2005, Lorenz2008, Zhu2014, Zhu2014b, Alinea2016, Rojas2009, Rojas2012} for other related methods.}.
Now, in order to compute $A_k$ one has to solve the MSE \eqref{Mukhanov-Sasaki-nu} which is a second-order differential equation. Thus in order to uniquely specify the solution for the field $\nu$ the use of two boundary conditions is necessary. To this end, one can use the asymptotic solutions \eqref{MS-limits-nu} as boundary conditions. By introducing the dimensionless variable $x \equiv -k \tau$ and redefining the field as $y \equiv \sqrt{2 k} \nu$, the asymptotic solutions become
\begin{equation}
y\rightarrow \left\{
\begin{array}{ll}
e^{-ix} \;\; &\text{as} \;\; x \rightarrow \infty,\\
\sqrt{2 k} A_k z \;\; &\text{as} \;\; x \rightarrow 0.
\end{array}
\right.
\label{MS-limits-y}
\end{equation}
Also, by assuming the following ansatz for $z$:
\begin{equation}
z=\frac{1}{x}f(\ln{x}) ,
\label{ansatz}
\end{equation}
we can recast the MSE in the form
\begin{equation}
\frac{\text{d} ^2y}{\text{d} x^2}+\left(1-\frac{2}{x^2} \right) y=\frac{1}{x^2} g(\ln{x})y ,
\label{Mukhanov-Sasaki-y}
\end{equation}
where the function $g$ is defined through
\begin{equation}
g(\ln{x})=\frac{1}{f(\ln{x})} \left[-3 \frac{\text{d} f(\ln{x})}{\text{d} \ln{x}}+ \frac{\text{d} ^2 f(\ln{x})}{\text{d} (\ln{x})^2} \right].
\label{g-func-def}
\end{equation}
The homogeneous solution with the appropriate asymptotic behavior at $x \rightarrow \infty$ is
\begin{equation}
y_0(x)=\left(1+\frac{i}{x} \right) e^{i x}.
\label{hom-sol}
\end{equation}
By ``appropriate behavior'' we mean that \eqref{hom-sol} reduces to the usual Minkowski modes in the deep subhorizon regime.
Combining \eqref{Mukhanov-Sasaki-y} and \eqref{MS-limits-y} we can rewrite the MSE as an integral equation
\begin{equation}
y(x)=y_0(x)+\frac{i}{2}\int_x^{\infty} \text{d} u \frac{1}{u^2} g (\ln{u})y(u) \left[ y_0^*(u) y_0(x)-y_0^*(x) y_0(u) \right]
\label{int-sol}
\end{equation}
and seek a perturbative solution to \eqref{int-sol}. We start by Taylor-expanding $xz$ around $x=1$ in the following way:
\begin{equation}
xz=f(\ln{x})=\sum_{n=0}^{\infty} \frac{f_n}{n!}(\ln{x})^n,
\label{xz-expansion}
\end{equation}
where the $n$th coefficient of the expansion is of order $n$ in the slow-roll expansion and is given by
\begin{equation}
f_n=\left.\frac{\text{d} ^n (xz)}{\text{d} (\ln{x})^n}\right\vert_{x=1}.
\label{f-coeff}
\end{equation}
In terms of the slow-roll parameters
\begin{equation}
\bar{\epsilon}_n=\frac{(-1)^{n+1}}{\bar{H}} \frac{\bar{H}^{(n+1)}}{\bar{H}^{(n)}}
\label{eps-hier}
\end{equation}
we can expand the conformal time up to second order corrections and thus have the following approximation~\cite{Alinea2016}:
\begin{equation}
x=-k\tau = -k\int\frac{\text{d} \bar{t}}{\bar{a}} \approx \frac{k}{\bar{a}\bar{H}}\left(1+\bar{\eps}_0+3 \bar{\eps}_0^2+\bar{\eps}_0 \bar{\eps}_1\right) .
\label{x-expansion-eps}
\end{equation}
Then, using the relations
\begin{equation}
\bar{\eps}_0=\bar{\lam}_0+\frac{\bar{\kap}_0}{(1-\bar{\lam}_0)}-\frac{\bar{\lam}_0 \bar{\lam}_1}{(1-\bar{\lam}_0)} ,
\label{eps-to-kappa1}
\end{equation}
\begin{equation}
\bar{\eps}_0^2 \approx \bar{\lam}_0^2+\frac{\bar{\kap}_0^2}{(1-\bar{\lam}_0)^2}+2\frac{\bar{\lam}_0\bar{\kap}_0}{(1-\bar{\lam}_0)} ,
\label{eps-to-kappa2}
\end{equation}
\begin{equation}
2 \bar{\eps}_0^2+\bar{\eps}_0 \bar{\eps}_1 \approx \bar{\lam}_0 \bar{\lam}_1+\frac{\bar{\kap}_0\bar{\kap}_1}{(1-\bar{\lam}_0)} ,
\label{eps-to-kappa3}
\end{equation}
we can express $x$ in terms of the $\bar{\kappa}$ and $\bar{\lambda}$ slow-roll parameters,
\begin{equation}
x = \frac{k}{\bar{a}\bar{H}} \left( 1+\bar{\lam}_0+\bar{\kap}_0+3\bar{\lam}_0\bar{\kap}_0+\bar{\kap}_0\bar{\kap}_1+\bar{\kap}_0^2+\bar{\lam}_0^2\right) .
\label{x-expansion-kappa}
\end{equation}
The second-order power spectrum is then given in terms of the coefficients $f_0$, $f_1$ and $f_2$ as~\cite{Gong2001}
\be
P(k)=\frac{k^2}{(2\pi)^2}\frac{1}{f_0^2}\left[1-2 \alp \, \frac{f_1}{f_0}+\left( 3 \alp^2 -4 +\frac{5 \pi^2}{12}\right) \left( \frac{f_1}{f_0}\right)^2 +\left( -\alp^2+\frac{\pi^2}{12}\right) \frac{f_2}{f_0}\right],
\label{power-spectrum-result}
\ee
where $\alp \equiv (2-\ln{2}-\gamma) \simeq 0.729637$ and $\gamma \simeq 0.577216$ is the Euler--Mascheroni constant~\cite{Stewart2002}.
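Numerically, \eqref{power-spectrum-result} is just a short polynomial in the ratios $f_1/f_0$ and $f_2/f_0$. A minimal sketch (ours; the sample values of $f_0$, $f_1$, $f_2$ are placeholders, not taken from any model):
\begin{verbatim}
# Illustrative sketch: second-order power spectrum, Eq. (power-spectrum-result).
import numpy as np

gamma_E = 0.577216
alpha   = 2 - np.log(2) - gamma_E          # ~ 0.729637

def power_spectrum(k, f0, f1, f2):
    r1, r2 = f1/f0, f2/f0
    return (k**2/(2*np.pi)**2/f0**2) * (
        1 - 2*alpha*r1
        + (3*alpha**2 - 4 + 5*np.pi**2/12)*r1**2
        + (-alpha**2 + np.pi**2/12)*r2)

# placeholder numbers only: f1, f2 are first/second order in slow roll
print(power_spectrum(k=0.05, f0=1.0e4, f1=-1.0e2, f2=1.0))
\end{verbatim}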
For tensor perturbations in the JF we have that up to second order terms
\ba
f^{T}_0 &=& \left.\frac{k}{\bar{H} \sqrt{\mathcal{I}_m}}\left( 1+\bar{\lam}_0+\bar{\kap}_0+3\bar{\lam}_0\bar{\kap}_0+\bar{\kap}_0\bar{\kap}_1+2\bar{\kap}_0^2+\bar{\lam}_0^2\right)\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)} ,
\label{p0-coeff}
\\
\nonumber \\
f^{T}_1 &=& \left.\frac{k}{\bar{H} \sqrt{\mathcal{I}_m}} \left( -\bar{\kap}_0-3\bar{\kap}_0\bar{\lam}_0-2\bar{\kap}_0^2-\bar{\kap}_0\bar{\kap}_1 \right)\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)} ,
\label{p1-coeff}
\\
\nonumber \\
f^{T}_2 &=& \left.\frac{k}{\bar{H} \sqrt{\mathcal{I}_m}} \left( \bar{\kap}_0^2+\bar{\kap}_0\bar{\kap}_1 \right)\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)},
\label{p2-coeff}
\ea
where the slow-roll parameters are evaluated at the time of horizon crossing. We have also introduced the superscript ``T'' to distinguish these from the corresponding coefficients of the scalar perturbations, which will be denoted by an ``S''.
Substitution of these coefficients into \eqref{power-spectrum-result} results in the following expression for the second order corrected tensor power spectrum in the slow-roll approximation:
\be
\begin{split}
\bar{P}_{T} =
\left[ \frac{\bar{H}^2\mathcal{I}_m}{(2\pi)^2} \right] & \left[ 1-2\bar{\lam}_0+(2\alp-2)\bar{\kap}_0+\bar{\lam}_0^2+\left( 2 \alp^2-2\alp-5+\frac{\pi^2}{2} \right)\bar{\kap}_0^2
\right. \\
&\quad \left. {}
+\left( -\alp^2+2\alp-2+\frac{\pi^2}{12}\right)\bar{\kap}_0\bar{\kap}_1 \right].
\end{split}
\label{tensor-spectrum-JF}
\ee
The tensor spectral index is defined as the logarithmic derivative of the power spectrum
\begin{equation}
\bar{n}_{T} \equiv \frac{\text{d} \ln{\bar{P}_{T}(k)}}{\text{d} \ln{k}}
\end{equation}
and thus the third-order JF tensor spectral index is obtained to be
\be
\begin{split}
\bar{n}_{T} =&-2\bar{\kap}_0-2\bar{\kap}_0^2-4\bar{\lam}_0 \bar{\kap}_0+(2 \alp-2)\bar{\kap}_0\bar{\kap}_1-6\bar{\lam}_0^2\bar{\kap}_0+(4 \alp-2)\bar{\lam}_0\bar{\lam}_1\bar{\kap}_0-8\bar{\lam}_0\bar{\kap}_0^2
\\
& +(6 \alp-6)\bar{\lam}_0\bar{\kap}_0\bar{\kap}_1-2\bar{\kap}_0^3+(6 \alp-16+\pi^2)\bar{\kap}_0^2\bar{\kap}_1+\left(-\alp^2+ 2 \alp-2+\frac{\pi^2}{12} \right) (\bar{\kap}_0\bar{\kap}_1^2+\bar{\kap}_0\bar{\kap}_1\bar{\kap}_2).
\end{split}
\label{tensor-index-JF}
\ee
For scalar perturbations in the JF the coefficients $f^{S}$ are slightly more complicated than their $f^{T}$ counterparts and have the following second order forms:
\ba
f^{S}_0 &=& \frac{k}{\bar{H}^2}\sqrt{\frac{2}{\mathcal{I}_m}}\frac{\text{d} \mathcal{I}_\phi }{\text{d} \bar{t}}\left.\left[
1+2\bar{\lam}_0+\bar{\kap}_0+4\bar{\lam}_0\bar{\kap}_0+\frac{3}{2}\bar{\kap}_0\bar{\kap}_1+2\bar{\kap}_0^2+3\bar{\lam}_0^2 \right]\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)} ,
\label{f0-coeff}
\\
\nonumber \\
f^{S}_1 &=& -\frac{k}{\bar{H}^2}\sqrt{\frac{2}{\mathcal{I}_m}}\frac{\text{d} \mathcal{I}_\phi }{\text{d} \bar{t}} \left. \left[ \bar{\kap}_0+\frac{\bar{\kap}_1}{2}+2\bar{\kap}_0\bar{\kap}_1+4\bar{\kap}_0\bar{\lam}_0+\frac{3}{2}\bar{\lam}_0\bar{\kap}_1+\bar{\lam}_0\bar{\lam}_1+2\bar{\kap}_0^2 \right]\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)} ,
\label{f1-coeff}
\\
\nonumber \\
f^{S}_2 &=& \frac{k}{\bar{H}^2}\sqrt{\frac{2}{\mathcal{I}_m}}\frac{\text{d} \mathcal{I}_\phi }{\text{d} \bar{t}}\left.\left[ \frac{\bar{\kap}_1^2}{4}+2\bar{\kap}_0\bar{\kap}_1+\bar{\kap}_0^2+\frac{\bar{\kap}_1\bar{\kap}_2}{2} \right]\right\vert_{k= \bar{a} \bar{H} (1-\bar{\lam}_0)} .
\label{f2-coeff}
\ea
Then the scalar power spectrum in the JF is
\be
\begin{split}
\bar{P}_{S} =
\left[ \frac{\bar{H}^4}{(2\pi)^2}\frac{\mathcal{I}_m}{2} \left(\frac{\text{d} \mathcal{I}_\phi }{\text{d}\bar{t}}\right)^{-2} \right]& \left[ 1-4\bar{\lam}_0+(2\alp-2)\bar{\kap}_0+\alp\bar{\kap}_1+\left(2\alp^2-2 \alp-5+\frac{\pi^2}{2} \right)\bar{\kap}_0^2
\right. \\
&\quad \left. {}
+(4-4\alp)\bar{\lam}_0\bar{\kap}_0+(-3\alp)\bar{\lam}_0\bar{\kap}_1+\left( \frac{\alp^2}{2}-1+\frac{\pi^2}{8}\right)\bar{\kap}_1^2+6\bar{\lam}_0^2
\right. \\
&\quad \left. {}
+2\alp\bar{\lam}_0\bar{\lam}_1+\left( \alp^2+\alp-7+\frac{7\pi^2}{12} \right)\bar{\kap}_0\bar{\kap}_1+ \left( -\frac{\alp^2}{2}+\frac{\pi^2}{24} \right)\bar{\kap}_1\bar{\kap}_2 \right].
\label{scalar-spectrum-JF}
\end{split}
\ee
Substitution of the latter in the definition of the scalar spectral index
\begin{equation}
\bar{n}_{S} \equiv1+ \frac{\text{d} \ln{\bar{P}_{S}}}{\text{d} \ln{k}}
\end{equation}
results in the following third order expression for the scalar index in the JF:
\be
\begin{split}
\bar{n}_{S} =&
1-2 \bar{\kap}_0-\bar{\kap}_1-2\bar{\kap}_0^2-2\bar{\lam}_0\bar{\lam}_1+\alp\bar{\kap}_1\bar{\kap}_2-\bar{\kap}_1\bar{\lam}_0-4\bar{\kap}_0\bar{\lam}_0+(2\alp-3)\bar{\kap}_1 \bar{\kap}_0-2\bar{\kap}_0^3-8\bar{\lam}_0\bar{\kap}_0^2
\\
& -6\bar{\lam}_0^2\bar{\kap}_0+(6\alp-17+\pi^2)\bar{\kap}_0^2\bar{\kap}_1-\bar{\kap}_1\bar{\lam}_0^2+\left( -2+\frac{\pi^2}{4} \right)\bar{\kap}_1^2\bar{\kap}_2-4\bar{\lam}_0^2\bar{\lam}_1+2\alp\bar{\lam}_0 \bar{\lam}_1^2
\\
& +\left(-\frac{\alp^2}{2}+ \frac{\pi^2}{24} \right)\bar{\kap}_1\bar{\kap}_2^2+\left( -\alp^2+3 \alp-7+\frac{7 \pi^2}{12} \right)\bar{\kap}_0\bar{\kap}_1^2 +2\alp\bar{\lam}_0\bar{\lam}_1\bar{\lam}_2+(6\alp-9)\bar{\lam}_0\bar{\kap}_0\bar{\kap}_1
\\
& +(4\alp-4)\bar{\lam}_0\bar{\lam}_1\bar{\kap}_0+(\alp+1)\bar{\kap}_1\bar{\lam}_0\bar{\lam}_1+2\alp\bar{\lam}_0\bar{\kap}_1\bar{\kap}_2
+ \left( -\frac{\alp^2}{2}+\frac{\pi^2}{24} \right)\bar{\kap}_1\bar{\kap}_2\bar{\kap}_3
\\
& +\left(-\alp^2+ 4\alp-7+\frac{7 \pi^2}{12} \right)\bar{\kap}_0\bar{\kap}_1\bar{\kap}_2 .
\end{split}
\label{scalar-index-JF}
\ee
Finally, with the higher order corrected expressions for the power spectra for scalar and tensor perturbations in the JF at our disposal, it is trivial to compute the tensor-to-scalar ratio,
\be
\begin{split}
\bar{r} = & 16 \bar{\kap}_0 \left[ 1+2\bar{\lam}_0-\alp\bar{\kap}_1+3\bar{\lam}_0^2-2\alp \bar{\lam}_0\bar{\lam}_1-3\alp\bar{\lam}_0\bar{\kap}_1 +\left(-\alp+5-\frac{\pi^2}{2} \right)\bar{\kap}_0\bar{\kap}_1
\right. \\
& \qquad \left. {}
+\left( \frac{\alp^2}{2}+1-\frac{\pi^2}{8}\right)\bar{\kap}_1^2+\left( \frac{\alp^2}{2}-\frac{\pi^2}{24} \right)\bar{\kap}_1\bar{\kap}_2 \right] .
\end{split}
\label{ratio-JF}
\ee
\subsection{Einstein frame results}
Repeating the same analysis in the EF, we obtain the tensor power spectrum
\be
\hat{P}_{T}=\frac{\hat{H}^2}{(2 \pi)^2} \left[ 1+(2\alp-2)\hat{\kap}_0+\left( 2\alp^2-2\alp-5+\frac{\pi^2}{2}\right)\hat{\kap}_0^2+\left(-\alp^2+ 2\alp-2+\frac{\pi^2}{12} \right)\hat{\kap}_0\hat{\kap}_1 \right] ,
\label{tensor-spectrum-EF}
\ee
the tensor spectral index
\be
\begin{split}
\hat{n}_{T} =&-2 \hat{\kap}_0-2\hat{\kap}_0^2+(2 \alp-2) \hat{\kap}_0 \hat{\kap}_1-2 \hat{\kap}_0^3+(6 \alp-16+\pi^2)\hat{\kap}_0^2 \hat{\kap}_1
\\
& +\left(-\alp^2+ 2 \alp-2+\frac{\pi^2}{12} \right) (\hat{\kap}_0 \hat{\kap}_1^2+\hat{\kap}_0\hat{\kap}_1\hat{\kap}_2) ,
\end{split}
\label{tensor-index-EF}
\ee
the scalar power spectrum
\be
\begin{split}
\hat{P}_{S} =
\left[ \frac{\hat{H}^4}{2(2\pi)^2} \left(\frac{\text{d} \mathcal{I}_\phi }{\text{d} \hat{t}}\right)^{-2} \right]& \left[ 1+(2\alp-2)\hat{\kap}_0+\alp
\hat{\kap}_1+\left( 2\alp^2-2\alp-5+\frac{\pi^2}{2} \right)\hat{\kap}_0^2
\right. \\
&\quad \left. {}
+\left( \frac{\alp^2}{2}-1+\frac{\pi^2}{8}\right)\hat{\kap}_1^2+\left( \alp^2+\alp-7+\frac{7 \pi^2}{12} \right)\hat{\kap}_0\hat{\kap}_1
\right. \\
&\quad \left. {}
+\left( -\frac{\alp^2}{2}+\frac{\pi^2}{24} \right)\hat{\kap}_1\hat{\kap}_2 \right] ,
\label{scalar-spectrum-EF}
\end{split}
\ee
the scalar spectral index
\be
\begin{split}
\hat{n}_{S} =&
1-2 \hat{\kap}_0-\hat{\kap}_1-2\hat{\kap}_0^2+\alp\hat{\kap}_1\hat{\kap}_2+(2\alp-3)\hat{\kap}_0 \hat{\kap}_1-2\hat{\kap}_0^3+(6\alp-17+\pi^2)\hat{\kap}_0^2\hat{\kap}_1
\\
& +\left(-2+ \frac{\pi^2}{4} \right)\hat{\kap}_1^2\hat{\kap}_2+\left( -\frac{\alp^2}{2} +\frac{\pi^2}{24}\right)\hat{\kap}_1\hat{\kap}_2^2+\left( -\alp^2+3 \alp-7+\frac{7 \pi^2}{12} \right)\hat{\kap}_0\hat{\kap}_1^2
\\
&
+ \left(-\frac{\alp^2}{2}+ \frac{\pi^2}{24} \right)\hat{\kap}_1\hat{\kap}_2\hat{\kap}_3+\left( -\alp^2+4\alp-7+\frac{7 \pi^2}{12} \right)\hat{\kap}_0\hat{\kap}_1\hat{\kap}_2 ,
\end{split}
\label{scalar-index-EF}
\ee
and finally the tensor-to-scalar ratio
\be
\hat{r} = 16 \hat{\kap}_0 \left[ 1-\alp\hat{\kap}_1 +\left(-\alp+5-\frac{\pi^2}{2} \right)\hat{\kap}_0\hat{\kap}_1+\left( \frac{\alp^2}{2}+1-\frac{\pi^2}{8}\right)\hat{\kap}_1^2
+\left( \frac{\alp^2}{2}-\frac{\pi^2}{24} \right)\hat{\kap}_1\hat{\kap}_2 \right] .
\label{ratio-EF}
\ee
Note that the above results have been obtained using the condition $k = \hat{a} \hat{H}$ at the time of horizon crossing.
\subsection{Equivalence of the frames up to third order}
It has been reported by the authors of \cite{Kuusk2016} that the EF and JF spectral indices are equivalent up to second order in the slow-roll expansion. In this work we have obtained the third-order corrected expressions for the indices in the two frames. It is thus intriguing to see whether this equivalence extends to the third-order expressions also. Expanding the EF slow-roll parameters \eqref{ka_EF-to-JF} up to third order in the JF slow-roll parameters we have
\be
\hat{\kappa}_0 \approx \bar{\kappa}_0 + 2 \bar{\kappa}_0 \bar{\lambda}_0 + 3 \bar{\kappa}_0 \bar{\lambda}_0^2 ,
\label{hk0-to-bk0}
\ee
\be
\hat{\kappa}_1 \approx \bar{\kappa}_1 + \bar{\kappa}_1 \bar{\lambda}_0 + \bar{\kappa}_1 \bar{\lambda}_0^2 + 2 \bar{\lambda}_0 \bar{\lambda}_1 + 4 \bar{\lambda}_0^2 \bar{\lambda}_1 ,
\label{hk1-to-bk1}
\ee
\be
\hat{\kappa}_1 \hat{\kappa}_2 \approx \bar{\kappa}_1 \bar{\kappa}_2 + 2 \bar{\kappa}_1 \bar{\kappa}_2 \bar{\lambda}_0 + \bar{\kappa}_1 \bar{\lambda}_0 \bar{\lambda}_1 + 2 \bar{\lambda}_0 \bar{\lambda}_1^2 + 2 \bar{\lambda}_0 \bar{\lambda}_1 \bar{\lambda}_2 ,
\label{hk1hk2-to-bk1bk2}
\ee
\be
\hat{\kappa}_0 \hat{\kappa}_1 \hat{\kappa}_2 \approx \bar{\kappa}_0 \bar{\kappa}_1 \bar{\kappa}_2 \, , \quad \hat{\kappa}_1 \hat{\kappa}_2 \hat{\kappa}_3 \approx \bar{\kappa}_1 \bar{\kappa}_2 \bar{\kappa}_3 .
\label{tripleh-to-tripleb}
\ee
Then, plugging \eqref{hk0-to-bk0} - \eqref{tripleh-to-tripleb} in the EF expressions for the indices \eqref{tensor-index-EF} - \eqref{ratio-EF} we find
\be
\hat{n}_T = \bar{n}_T ,
\ee
\be
\hat{n}_S = \bar{n}_S ,
\ee
\be
\hat{r} = \bar{r} .
\ee
Therefore, the spectral indices calculated in the EF and JF coincide. Finally, since the Green's function method is valid up to arbitrary order in the slow-roll expansion, we expect the equivalence between the spectral indices in the JF and EF to also hold to all orders.
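The truncated relations \eqref{hk0-to-bk0} and \eqref{hk1-to-bk1} can themselves be checked mechanically by series-expanding the exact relations \eqref{ka_EF-to-JF}. A minimal \texttt{sympy} sketch (ours), in which each JF slow-roll parameter carries one power of a bookkeeping symbol $s$:
\begin{verbatim}
# Illustrative sketch: series-check Eqs. (hk0-to-bk0), (hk1-to-bk1) against
# the exact relations (ka_EF-to-JF); terms O(s^4) are dropped.
import sympy as sp

s = sp.symbols('s')
k0, k1, l0, l1 = sp.symbols('k0 k1 l0 l1')

hk0_exact = s*k0/(1 - s*l0)**2
hk1_exact = s*k1/(1 - s*l0) + 2*s**2*l0*l1/(1 - s*l0)**2

hk0_ser = sp.series(hk0_exact, s, 0, 4).removeO()
hk1_ser = sp.series(hk1_exact, s, 0, 4).removeO()

# third-order truncations quoted in the text
hk0_txt = s*k0 + 2*s**2*k0*l0 + 3*s**3*k0*l0**2
hk1_txt = s*k1 + s**2*k1*l0 + s**3*k1*l0**2 + 2*s**2*l0*l1 + 4*s**3*l0**2*l1

print(sp.expand(hk0_ser - hk0_txt))   # expect 0
print(sp.expand(hk1_ser - hk1_txt))   # expect 0
\end{verbatim}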
\subsection{Invariant expressions for the inflationary observables}
So far we have obtained the spectral indices and the tensor-to-scalar ratio in both the EF and JF. We have also shown that up to third order in the slow-roll expansion the results in the two frames are equivalent. We can take advantage of this equivalence and write down expressions for the inflationary observables only in terms of the invariant potential and its derivatives. The equivalence between the two frames then allows one to rewrite the EF results in terms of the invariant PSRPs and expect these results to hold in the JF too. In order to express the spectral indices in terms of the PSRPs defined in \eqref{eps_V} - \eqref{rho_V} we first use the following relations between the EF HSRPs \eqref{ka0_EF} - \eqref{kai+1_EF} and the ones defined in~\cite{Liddle1994}:
\begin{eqnarray}
\hat{\kap}_0 &=& \eps_H ,\\
\hat{\kap}_1 &=& -2 \eta_H+2 \eps_H ,\\
\hat{\kap}_1\hat{\kap}_2 &=& 4 \eps_H^2-6\eps_H \eta_H+2\zeta_H^2 ,\\
\hat{\kap}_1 \hat{\kap}_2^2+\hat{\kap}_1\hat{\kap}_2\hat{\kap}_3 &=& 16 \eps_H^3-22\eps_H^2 \eta_H+12\eps_H \eta_H^2+10 \eps_H \zeta_H^2-2 \eta_H \zeta_H^2-2 \rho_H^3.
\end{eqnarray}
Then, using the third-order Taylor expansions of the HSRPs in terms of the PSRPs~\cite{Liddle1994}, presented in Appendix \ref{app1:HSRPs-to-PSRPs}, we obtain the inflationary indices up to third order in the PSRPs
\be
\begin{split}
n_{T} =&
-2 \eps_V +\left( 8 \alp-\frac{22}{3} \right) \eps_V^2- \left( 4 \alp-\frac{8}{3} \right) \eps_V \eta_V+ \left(-32 \alp^2+\frac{189}{3}\alp-\frac{996}{9}+\frac{20 \pi^2}{3} \right) \eps_V^3
\\
& +\left( -4 \alp^2+4 \alp-\frac{46}{9}+\frac{\pi^2}{3}\right) \eps_V \eta^2_V +\left( 28 \alp^2-44 \alp +68 -\frac{13 \pi^2}{3} \right) \eps_V^2 \eta_V
\\
&
+ \left( -2 \alp^2+\frac{8}{3} \alp-\frac{28}{9}+\frac{\pi^2}{6}\right)\eps_V \zeta_V^2 ,
\end{split}
\label{tensor-index-V}
\ee
\be
\begin{split}
n_{S} =&1
-6 \eps_V+2\eta_V+\left(24 \alp -\frac{10}{3} \right)\eps^2_V-\left( 16 \alp+2\right)\eps_V \eta_V+\frac{2}{3} \eta_V^2+\left( 2 \alp+\frac{2}{3}\right) \zeta_V^2
\\
& -\left(90 \alp^2-\frac{104}{3}\alp+\frac{3734}{9}-\frac{87 \pi^2}{2} \right)\eps_V^3+\left( 90 \alp^2+\frac{4}{3} \alp+\frac{1190}{3}-\frac{87\pi^2}{2} \right) \eps_V^2 \eta_V
\\
&
- \left( 16 \alp^2+12\alp+\frac{742}{9}-\frac{28 \pi^2}{3}\right) \eps_V \eta_V^2-\left(12 \alp^2+4 \alp+\frac{98}{3}-4 \pi^2 \right)\eps_V \zeta_V^2
\\
&
+\left(\alp^2+\frac{8}{3}\alp+\frac{28}{3}-\frac{13 \pi^2}{2} \right) \eta_V \zeta^2_V+\frac{4}{9}\eta_V^3+\left(\alp^2+\frac{2}{3}\alp+\frac{2}{9}-\frac{\pi^2}{12} \right) \rho_V^3 ,
\end{split}
\label{scalar-index-V}
\ee
\be
\begin{split}
r =&
16 \eps_V \left[1-\left(4 \alp+\frac{4}{3} \right) \eps_V+\left( 2\alp +\frac{2}{3}\right) \eta_V+\left(16 \alp^2+\frac{28}{3}\alp+\frac{356}{9}-\frac{14 \pi^2}{3} \right) \eps_V^2
\right. \\
&\quad \left. {}
-\left(14 \alp^2+10 \alp+\frac{88}{3}-\frac{7\pi^2}{2} \right)\eps_V \eta_V+\left(2 \alp^2+2 \alp+\frac{41}{9}-\frac{\pi^2}{2} \right) \eta_V^2
\right. \\
&\quad \left. {}
+\left(\alp^2+\frac{2}{3}\alp+\frac{2}{9}-\frac{\pi^2}{12} \right)\zeta_V^2 \right] .
\end{split}
\label{ratio-V}
\ee
In a given model, once we derive the invariant potential $\mathcal{I}_{\mathcal{V}}$ in terms of the invariant $\mathcal{I}_\phi$, we can readily obtain the PSRPs and express the inflationary observables in an invariant way in terms of $\mathcal{I}_{\mathcal{V}}$ and its derivatives.
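As a simple illustration of this recipe, the following sketch (ours; leading order only, with the same illustrative potential and sample field value as before) evaluates the dominant terms of \eqref{scalar-index-V} and \eqref{ratio-V}; the higher-order pieces enter additively in exactly the same way:
\begin{verbatim}
# Illustrative sketch: leading-order observables from an invariant potential.
import sympy as sp

p = sp.symbols('I_phi', positive=True)
IV = (1 - sp.exp(-2*p/sp.sqrt(3)))**2      # Starobinsky, 8b = 1 (illustrative)

eps_V = (sp.diff(IV, p)/(2*IV))**2         # Eq. (eps_V)
eta_V = sp.diff(IV, p, 2)/(2*IV)           # Eq. (eta_V)

nS = 1 - 6*eps_V + 2*eta_V                 # leading terms of (scalar-index-V)
r  = 16*eps_V                              # leading term of (ratio-V)

# sample horizon-crossing field value (illustrative, not fitted to N e-folds)
val = {p: sp.Rational(39, 10)}
print(sp.N(nS.subs(val)), sp.N(r.subs(val)))
\end{verbatim}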
\section{Number of $e$-folds}
\label{sec:efolds}
In this section, we consider the difference between the definitions for the number of $e$-folds in the EF and JF and study how it affects the values of the observables. Furthermore, we discuss various approaches for a more accurate determination of the value of the inflaton field at the end of inflation.
\subsection{Einstein vs Jordan}
The number of $e$-folds is usually defined in the EF as
\be
\text{d} \hat{N} \equiv \hat{H} \text{d} \hat{t} = \text{d} \ln \hat{a} = - \frac{1}{\sqrt{\hat{\kappa}_0}} \, \text{d} \mathcal{I}_\phi = - \frac{1}{\sqrt{\hat{\epsilon}_0}} \, \text{d} \mathcal{I}_\phi = - \frac{1}{\sqrt{\epsilon_H}} \, \text{d} \mathcal{I}_\phi .
\label{Eefolds}
\ee
Using \eqref{trans_time_scale-factor} the number of $e$-folds in the JF becomes
\be
\text{d} \bar{N} = \text{d} \hat{N} + \frac{1}{2} \, \text{d} \ln \mathcal{I}_m = \left( - \frac{1}{\sqrt{\epsilon_H}} + \frac{1}{2} \frac{\text{d} \ln \mathcal{I}_m}{\text{d} \mathcal{I}_\phi} \right) \text{d} \mathcal{I}_\phi .
\label{Jefolds}
\ee
We see that the definitions for the number of $e$-folds in the two frames differ by the invariant factor $\frac{1}{2} \, \text{d} \ln \mathcal{I}_m$, which encodes the nonminimal coupling of a given theory. Of course, when the scalar field is minimally coupled to gravity the two definitions coincide. Therefore, in general, the same number of $e$-folds in the two frames will translate to different values of the invariant $\mathcal{I}_\phi$. This means that we will get different predictions for the observables depending on whether we use \eqref{Eefolds} or \eqref{Jefolds}. Typically the difference is small, but still comparable to (if not larger than) the spread among the first-, second- and third-order results for $n_S$ and $r$ in terms of the slow-roll parameters. Furthermore, with the advent of more precise measurements \cite{Matsumura2013, Finelli2016}, differences of this type can play a significant role in characterizing an inflationary model as viable or not.
In order to quantify the aforementioned effects, we will next consider the nonminimal Coleman-Weinberg model introduced in \cite{Kannike2016}. The model functions are
\ba
\mathcal{A} (\Phi) &=& \xi \Phi^2 ,
\label{CW-A}
\\
\mathcal{B} (\Phi) &=& 1 ,
\label{CW-B}
\\
\sigma(\Phi) &=& 0 ,
\label{CW-sigma}
\\
\mathcal{V} (\Phi) &=& \Lambda^4 + \frac{1}{8} \beta_{\lambda_\Phi} \left( \ln \frac{\Phi^2}{v^2_\Phi} - \frac{1}{2} \right) \Phi^4 ,
\label{CW-V}
\ea
where the cosmological constant $\Lambda^4$ was included in order to realize $\mathcal{V} (v_\Phi) = 0$ and $\beta_{\lambda_\Phi}$ is the beta function of the quartic scalar coupling $\lambda_\Phi$. Furthermore, in this model the Planck scale is dynamically generated through the VEV of the scalar field $v_\Phi$ and we have
\be
1= \xi v^2_\Phi .
\label{VEV}
\ee
Minimization of the potential \eqref{CW-V} yields
\be
\beta_{\lambda_\Phi} = 16 \, \frac{\Lambda^4}{v^4_\Phi} .
\ee
This means we can eliminate $\beta_{\lambda_\Phi}$ in \eqref{CW-V} and rewrite the potential as
\be
\mathcal{V} (\Phi) = \Lambda^4 \left\lbrace 1 + \left[ 2 \ln \left( \frac{\Phi^2}{v^2_\Phi} \right) - 1 \right] \frac{\Phi^4}{v^4_\Phi} \right\rbrace .
\ee
From the expressions of the model functions \eqref{CW-A} - \eqref{CW-V} we can readily obtain the invariants $\mathcal{I}_m$, $\mathcal{I}_{\mathcal{V}}$ and $\mathcal{I}_\phi$. The invariant field takes the form
\be
\mathcal{I}_\phi = \sqrt{\frac{1 + 6 \xi}{2 \xi}} \ln \left( \frac{\Phi}{v_\Phi} \right) .
\ee
By inverting the above equation we can express the invariant $\mathcal{I}_m$ in terms of $\mathcal{I}_\phi$ as
\be
\mathcal{I}_m = e^{- 2 \sqrt{\frac{2 \xi}{1 + 6 \xi}} \mathcal{I}_\phi } ,
\ee
and also the invariant potential $\mathcal{I}_{\mathcal{V}}$ in terms of $\mathcal{I}_\phi$ as
\be
\mathcal{I}_{\mathcal{V}} = \Lambda^4 \left( 4 \sqrt{\frac{2 \xi}{1 + 6 \xi}} \, \mathcal{I}_\phi + e^{- 4 \sqrt{\frac{2 \xi}{1 + 6 \xi}} \mathcal{I}_\phi } - 1 \right) ,
\label{CW-Iv}
\ee
where we used \eqref{VEV}. From the invariant potential \eqref{CW-Iv} we can calculate the PSRPs \eqref{eps_V}, \eqref{eta_V} - \eqref{rho_V} and then the scalar index $n_S$ [cf.~\eqref{scalar-index-V}] and the tensor-to-scalar ratio $r$ [cf.~\eqref{ratio-V}] and compare them with the experimental bounds. Another important observable is the amplitude of scalar perturbations $A_S = \left( 2.14 \pm 0.05 \right) \times 10^{-9}$ \cite{Ade2016a}, which can be used to constrain the value of $\Lambda$ (see Fig. 3 in \cite{Kannike2016}).
Now, depending on whether the field $\Phi$ rolls down from values larger or smaller than its VEV, the invariant $\mathcal{I}_\phi$ can have positive or negative values. Since negative field inflation produces $r \gtrsim 0.15$ \cite{Kannike2016}, which is excluded by observations \cite{Ade2015, Ade2016}, we will not consider it further. Instead, we will only focus on positive field inflation which interpolates between quadratic \cite{Linde1983a} and linear \cite{McAllister2010} inflation depending on the value of the nonminimal coupling $\xi$. In the limit $\xi \rightarrow 0$, the invariant potential is approximated as
\be
\mathcal{I}_{\mathcal{V}} \vert_{\xi \rightarrow 0} \sim 16 \, \xi \, \Lambda^4 \, \mathcal{I}_\phi^2 ,
\ee
while in the limit $\xi \rightarrow \infty$,
\be
\mathcal{I}_{\mathcal{V}} \rvert_{\xi \rightarrow \infty} \sim \frac{4}{\sqrt{3}} \, \Lambda^4 \, \mathcal{I}_\phi .
\ee
Quadratic inflation is excluded by the Planck and BICEP2/Keck results \cite{Ade2015, Ade2016}, but linear inflation still lies within the $2 \sigma$ allowed region. In Table \ref{table:E-vs-J-efolds} we present our results for the first and third order scalar index $n_S$ and tensor-to-scalar ratio $r$ for various values of the nonminimal coupling $\xi$. For simplicity, we have assumed that inflation ends at $\Phi = v_\Phi$, or equivalently $\mathcal{I}_\phi^{\rm end} = 0$, where the two frames coincide. Furthermore, we have approximated $\epsilon_H \approx \epsilon_V$ in the expressions \eqref{Eefolds} and \eqref{Jefolds}. For every value of $\xi$ considered, we have varied the horizon-crossing value $\mathcal{I}_\phi^{\rm HC}$ in order to get $\hat{N} = 60$ or $\bar{N} = 60$. This means that we obtain a different value for $\mathcal{I}_\phi$ depending on which definition for the $e$-folds we use. Consequently, the predictions for $n_S$ and $r$ differ. For small $\xi$ the difference between the frames is negligible. However, for larger $\xi$ the difference grows, reaching at $\xi = 10$ about $0.002$ (or $0.2 \%$) for $n_S$ and $0.005$ (or $8 \%$) for $r$. For large $\xi$, such a difference is actually larger than the difference between the first and third order results for the observables ($0.03 \%$ for $n_S$ and $1.9 \%$ for $r$). Both of these types of differences, however, should be within the reach of future experiments such as CORE and LiteBIRD \cite{Matsumura2013, Finelli2016}, which are expected to measure $r$ with an accuracy of $10^{-3}$.
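A pipeline of this kind can be sketched in a few lines of Python (hypothetical, ours, using \texttt{scipy}; it adopts the same simplifications, $\epsilon_H \approx \epsilon_V$ and $\mathcal{I}_\phi^{\rm end} = 0$, and sets $\Lambda = 1$, which cancels in the slow-roll parameters):
\begin{verbatim}
# Illustrative sketch: find I_phi at horizon crossing giving N_EF = 60 for
# the Coleman-Weinberg invariant potential (CW-Iv), then the JF e-folds for
# the same field excursion. Assumes eps_H ~ eps_V and I_phi_end = 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

xi = 0.1
a  = np.sqrt(2*xi/(1 + 6*xi))

IV    = lambda p: 4*a*p + np.expm1(-4*a*p)      # Eq. (CW-Iv), Lambda = 1
dIV   = lambda p: -4*a*np.expm1(-4*a*p)
eps_V = lambda p: (dIV(p)/(2*IV(p)))**2

# Eq. (Eefolds) with eps_H ~ eps_V; lower cutoff avoids the 0/0 at p = 0
N_EF = lambda p_hc: quad(lambda p: 1/np.sqrt(eps_V(p)), 1e-6, p_hc)[0]

p_hc = brentq(lambda p: N_EF(p) - 60.0, 0.1, 50.0)
# Eq. (Jefolds): for this model d(ln I_m)/d I_phi = -2a, and the field
# decreases during inflation, so the JF gains a*p_hc e-folds
print("I_phi_HC =", p_hc, " N_JF - N_EF =", a*p_hc)
\end{verbatim}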
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
& & & & & \\[-1em]
& $n_S^{(\rm I)}$ & $n_S^{(\rm III)}$ & $r^{(\rm I)}$ & $r^{(\rm III)}$ & $\xi$ \\
\hline
& & & & & \\[-1em]
$\hat{N} = 60$ & 0.96702 & 0.96712 & 0.12782 & 0.12552 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
$\bar{N} = 60$ & 0.96699 & 0.96709 & 0.12792 & 0.12562 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
$\hat{N} = 60$ & 0.96935 & 0.96956 & 0.09655 & 0.09466 & $10^{-3}$ \\
\hline
& & & & & \\[-1em]
$\bar{N} = 60$ & 0.96911 & 0.96933 & 0.09736 & 0.09544 & $10^{-3}$ \\
\hline
& & & & & \\[-1em]
$\hat{N} = 60$ & 0.97451 & 0.97477 & 0.06796 & 0.06675 & $0.1$ \\
\hline
& & & & & \\[-1em]
$\bar{N} = 60$ & 0.97320 & 0.97348 & 0.07148 & 0.07013 & $0.1$ \\
\hline
& & & & & \\[-1em]
$\hat{N} = 60$ & 0.97482 & 0.97507 & 0.06716 & 0.06597 & $10$ \\
\hline
& & & & & \\[-1em]
$\bar{N} = 60$ & 0.97276 & 0.97305 & 0.07264 & 0.07125 & $10$ \\
\hline
\end{tabular}
\caption{First and third order results for the observables of the nonminimal Coleman-Weinberg model considered in \cite{Kannike2016} for various values of the nonminimal coupling $\xi$ and for $\hat{N} = \bar{N} = 60$. We see that as $\xi$ grows so does the difference between the observables, depending on which definition for the $e$-folds we use.}
\label{table:E-vs-J-efolds}
\end{center}
\end{table}
Another way to illustrate the disparity between the two definitions for the $e$-folds is to examine how the same field excursion affects the number of $e$-folds itself. In Fig.~\ref{fig:DeltaNFinal}, for a wide range of values of $\xi$, we calculate the invariant $\mathcal{I}_\phi^{\rm HC}$ for which $\hat{N} = 50$ and $\hat{N} = 60$. Then, for the same value of $\mathcal{I}_\phi$ we calculate the corresponding JF $e$-folds $\bar{N}$ and plot the difference with the EF $e$-folds $\hat{N}$. One can see that, as expected, the difference asymptotes to zero for $\xi \rightarrow 0$ due to the vanishing second term in \eqref{Jefolds}. On the other hand, as $\xi$ grows so does the difference $\bar{N} - \hat{N}$, until it reaches a value of about $4.3$ $e$-folds for $\hat{N} = 50$ and $4.7$ $e$-folds for $\hat{N} = 60$. Note that for $\xi \gtrsim 10$ the difference stops growing since the model has reached the linear inflation attractor. We perceive the JF definition for the number of $e$-folds as the fundamental one since it is composed of all three invariants \eqref{inv_m}--\eqref{inv_phi} and also accommodates the EF definition.
\begin{figure}
\centering
\includegraphics[width=11.5cm]{Fig2.pdf}
\caption{The difference between the JF ($\bar{N}$) and the EF ($\hat{N}$) number of $e$-folds as a function of the nonminimal coupling $\xi$ for $\hat{N} = 60$ (top curve) and $\hat{N} = 50$ (bottom curve). We see that as $\xi$ grows we need more $e$-folds in the Jordan frame for the same inflaton field excursion.}
\label{fig:DeltaNFinal}
\end{figure}
\subsection{Taylor vs Pad\'{e}}
Let us also examine how the end-of-inflation condition affects the observables. Inflation ends \textit{exactly} at $\epsilon_H = 1$. Most authors adopt the slow-roll approximation, take the relation between $\epsilon_H$ and the PSRPs at first order in the Taylor expansion, and solve
\be
\epsilon_H^{(\rm I)} = \epsilon_V = 1
\label{epsH-Taylor1}
\ee
in order to obtain the inflaton field value at the end of inflation. In our case, since we have obtained $n_S$ and $r$ at third order in the PSRPs, it would seem prudent to also approximate $\epsilon_H$ in the definition of $e$-folds with the third order Taylor expansion and solve
\be
\epsilon_H^{(\rm III)} = \eps_V - \frac{4}{3} \eps_V^2+\frac{2}{3} \eps_V \eta_V+\frac{32}{9}\eps_V^3+\frac{5}{9} \eps_V \eta_V^2-\frac{10}{3} \eps_V^2 \eta_V+\frac{2}{9} \eps_V \zeta_V^2 = 1
\label{epsH-Taylor3}
\ee
in order to obtain $\mathcal{I}_\phi^{\rm end}$. Nevertheless, even though the third-order Taylor expansion is a very good approximation around the time of horizon crossing, when the slow-roll parameters are small, it fails near the end of inflation: once $\epsilon_V$ and $\eta_V$ become of order one the expansion blows up and no longer describes the entirety of the inflationary epoch accurately. A more accurate option, as pointed out in \cite{Liddle1994}, is to consider a Pad\'{e} approximation for $\epsilon_H$. The $\left[ 1/1 \right]$ Pad\'{e} approximant is given by
\be
\epsilon_H^{\left[ 1/1 \right]} = \frac{\epsilon_V}{1 + \frac{4}{3} \epsilon_V - \frac{2}{3} \eta_V} ,
\label{epsH-Pade11}
\ee
while the $\left[ 2/2 \right]$ approximant has the form
\be
\begin{split}
\epsilon_H^{\left[ \rm{2/2} \right]} = & \, \frac{\epsilon_V+\frac{17}{4} \epsilon_V^2-\frac{5}{3} \epsilon_V \eta_V}{1+\frac{67}{12}\epsilon_V-\frac{7}{3}\eta_V-\frac{7}{2} \epsilon_V \eta_V+\frac{35}{9}\epsilon_V^2+\eta_V^2-\frac{2}{9}\zeta_V^2}
\\
& +\frac{2}{27} \eps_V \rho_V^3-\frac{1}{54} \eps_V^3 \eta_V+\frac{35}{108} \eps_V^2 \eta_V^2-\frac{13}{54} \eps_V^2 \zeta_V^2-\frac{1}{9} \eps_V \eta_V^3 .
\end{split}
\label{epsH-Pade22}
\ee
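Each condition then amounts to a one-dimensional root-finding problem for $\mathcal{I}_\phi^{\rm end}$. A minimal sketch (ours) for the first-order Taylor and Pad\'{e} $[1/1]$ conditions, again for the potential \eqref{CW-Iv} with an illustrative $\xi$; conditions \eqref{epsH-Taylor3} and \eqref{epsH-Pade22} are handled identically:
\begin{verbatim}
# Illustrative sketch: solve eps_H = 1 for I_phi_end with two of the four
# approximants, for the Coleman-Weinberg invariant potential and xi = 0.1.
import numpy as np
from scipy.optimize import brentq

xi = 0.1
a  = np.sqrt(2*xi/(1 + 6*xi))

IV    = lambda p: 4*a*p + np.expm1(-4*a*p)
dIV   = lambda p: -4*a*np.expm1(-4*a*p)
d2IV  = lambda p: 16*a**2*np.exp(-4*a*p)

eps_V = lambda p: (dIV(p)/(2*IV(p)))**2
eta_V = lambda p: d2IV(p)/(2*IV(p))

eps_I   = eps_V                                                 # (epsH-Taylor1)
eps_P11 = lambda p: eps_V(p)/(1 + 4*eps_V(p)/3 - 2*eta_V(p)/3)  # (epsH-Pade11)

for name, f in [("Taylor I  ", eps_I), ("Pade [1/1]", eps_P11)]:
    p_end = brentq(lambda p: f(p) - 1.0, 1e-3, 10.0)
    print(name, "I_phi_end =", round(p_end, 4))
\end{verbatim}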
In Table \ref{table:Taylor-vs-Pade} we present the results for $n_S$ and $r$ for $\xi = 10^{-5}$, $\xi = 0.1$ and $\hat{N} = 50$ having employed the four end-of-inflation conditions for $\mathcal{I}_\phi^{\rm end}$ described above and the corresponding expressions \eqref{epsH-Taylor1} - \eqref{epsH-Pade22} for $\epsilon_H$ in the $e$-folds integral. We find that the difference between the four methods is small for $n_S$ but larger for $r$ which has a greater dependence on $\epsilon_H$. The largest difference for $r$ between the methods occurs for small $\xi$ since its value is sizeable ($r \simeq 0.15$) and a small change in the value of $\mathcal{I}_\phi^{\rm end}$ affects it noticeably. In any case, the differences between the end-of-inflation methods on $n_S$ and $r$ are comparable to the differences between the first and third order results.
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
& & & & & \\[-1em]
$\mathbf{\hat{N} = 50}$ & $n_S^{(\rm I)}$ & $n_S^{(\rm III)}$ & $r^{(\rm I)}$ & $r^{(\rm III)}$ & $\xi$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{(\rm I)} = 1$ & 0.96078 & 0.96092 & 0.15238 & 0.14914 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{\left[ 1/1 \right]} = 1$ & 0.95979 & 0.95994 & 0.15626 & 0.15285 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{(\rm III)} = 1$ & 0.96032 & 0.96047 & 0.15417 & 0.15085 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{\left[ 2/2 \right]} = 1$ & 0.96019 & 0.96034 & 0.15468 & 0.15134 & $10^{-5}$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{(\rm I)} = 1$ & 0.96955 & 0.96991 & 0.08121 & 0.07948 & $0.1$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{\left[ 1/1 \right]} = 1$ & 0.96870 & 0.96908 & 0.08348 & 0.08165 & $0.1$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{(\rm III)} = 1$ & 0.96922 & 0.96959 & 0.08208 & 0.08031 & $0.1$ \\
\hline
& & & & & \\[-1em]
end: $\epsilon_H^{\left[ 2/2 \right]} = 1$ & 0.96909 & 0.96946 & 0.08244 & 0.08066 & $0.1$ \\
\hline
\end{tabular}
\caption{First and third order results for the observables of the model \cite{Kannike2016} for two values of the nonminimal coupling $\xi$ and for $\hat{N} = 50$ using the four end-of-inflation conditions described in the text. We see that the differences are small albeit comparable to the differences between the first and third order results.}
\label{table:Taylor-vs-Pade}
\end{center}
\end{table}
\section{Summary and discussion}
\label{sec:Conclusions}
In the first part of this work we briefly reviewed the frame and reparametrization invariant formalism of scalar-tensor theories developed in \cite{Jaerv2015, Jaerv2015a, Kuusk2016, Kuusk2016a, Jaerv2017}. This formalism proves to be useful for inflation since it allows us to classify various models based on their invariant potentials. Therefore, it becomes transparent why theories with very different physical motivations yield similar predictions for the inflationary observables.
Motivated by the imminent advancement in the sensitivity of the experiments, we then calculated the tensor and scalar spectral indices as well as the tensor-to-scalar ratio up to third order in the HSRPs in both the Einstein and Jordan frames employing the Green's function method introduced in \cite{Gong2001}. After this, utilizing the relation between the HSRPs in the two frames, we showed the equivalence of the frames. By construction, the Green's function method is valid to arbitrary order in the slow-roll expansion. Therefore, we expect the equivalence to hold up to any order. In addition, since the HSRPs are related to the PSRPs, we expressed the spectral indices and the ratio in terms of the PSRPs which are manifestly invariant.
Nevertheless, since the definition of the number of $e$-folds is different in the two frames, this can result in different predictions for the observables. We demonstrated this difference by considering the nonminimally coupled Coleman-Weinberg model examined in \cite{Kannike2016} and saw that as the nonminimal coupling grows so does the difference in the predictions. Such a difference can in fact be larger than the differences between the first and third order results and will be detectable by the planned future experiments. \textit{We regard the Jordan frame definition for the number of e-folds \eqref{Jefolds} as \emph{the} fundamental one since it can be expressed in terms of all the principal invariants and also includes the Einstein definition}. Furthermore, we examined how various end-of-inflation conditions affect the inflationary observables. We found that the differences between the methods are comparable to the differences between the first and third order results.
The above discussion proves that with the advent of precision experiments, care must be taken when analyzing a given inflationary model since the underlying methods and assumptions used may play an instrumental role in determining the viability of said model.
\section*{Acknowledgments}
T.P. would like to thank the Alexander S. Onassis Public Benefit Foundation for financial support.
\section*{Appendixes}
\begin{appendices}
\section{From Hubble to potential slow-roll parameters}
\label{app1:HSRPs-to-PSRPs}
The HSRPs are related to the PSRPs up to third order in the Taylor expansion via the following expressions \cite{Liddle1994}:
\begin{eqnarray}
\eps_H &=& \eps_V-\frac{4}{3} \eps_V^2+\frac{2}{3} \eps_V \eta_V+\frac{32}{9}\eps_V^3+\frac{5}{9} \eps_V \eta_V^2-\frac{10}{3} \eps_V^2 \eta_V+\frac{2}{9} \eps_V \zeta_V^2 ,\\
\eta_H &=& \eta_V -\eps_V+\frac{8}{3} \eps_V^2+\frac{1}{3} \eta_V^2-\frac{8}{3}\eps_V \eta_V +\frac{1}{3} \zeta_V^2 -12 \eps^3_V +\frac{2}{9} \eta_V^3+16 \eps_V^2 \eta_V \nonumber \\&& -\frac{46}{9} \eps_V \eta_V^2 -\frac{17}{9} \eps_V \zeta_V^2 +\frac{2}{3} \eta_V \zeta^2_V+\frac{1}{9} \rho_V^3 ,\\
\zeta_H^2 &=& \zeta_V^2-3 \eps_V \eta_V+3 \eps_V^2 -20 \eps_V^3+26 \eps_V^2 \eta_V-7 \eps_V \eta_V^2-\frac{13}{3} \eps_V \zeta_V^2+\frac{4}{3} \eta_V \zeta_V^2+\frac{1}{3} \rho_V^3 ,\\
\rho_H^3 &=& \rho_V^3 -3 \eps_V \eta_V^2+18 \eps_V^2 \eta_V -15 \eps_V^3-4 \eps_V \zeta_V^2 .
\end{eqnarray}
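For numerical checks it can be convenient to have these conversions in code. The following minimal Python sketch is our own illustration (not part of the original derivation); the arguments \texttt{zeta2\_V} and \texttt{rho3\_V} stand for the combinations $\zeta_V^2$ and $\rho_V^3$, not the bare parameters.
\begin{verbatim}
def hsrp_from_psrp(eps_V, eta_V, zeta2_V=0.0, rho3_V=0.0):
    # Direct transcription of the third-order expansions above.
    eps_H = (eps_V - 4/3*eps_V**2 + 2/3*eps_V*eta_V + 32/9*eps_V**3
             + 5/9*eps_V*eta_V**2 - 10/3*eps_V**2*eta_V + 2/9*eps_V*zeta2_V)
    eta_H = (eta_V - eps_V + 8/3*eps_V**2 + 1/3*eta_V**2 - 8/3*eps_V*eta_V
             + 1/3*zeta2_V - 12*eps_V**3 + 2/9*eta_V**3 + 16*eps_V**2*eta_V
             - 46/9*eps_V*eta_V**2 - 17/9*eps_V*zeta2_V
             + 2/3*eta_V*zeta2_V + 1/9*rho3_V)
    zeta2_H = (zeta2_V - 3*eps_V*eta_V + 3*eps_V**2 - 20*eps_V**3
               + 26*eps_V**2*eta_V - 7*eps_V*eta_V**2
               - 13/3*eps_V*zeta2_V + 4/3*eta_V*zeta2_V + 1/3*rho3_V)
    rho3_H = (rho3_V - 3*eps_V*eta_V**2 + 18*eps_V**2*eta_V
              - 15*eps_V**3 - 4*eps_V*zeta2_V)
    return eps_H, eta_H, zeta2_H, rho3_H
\end{verbatim}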
\section{Runnings of the spectral indices}
\label{app2:runnings}
The runnings of the tensor and scalar spectral indices up to third order in the HSRPs are given in the JF by
\ba
\frac{\text{d} \bar{n}_T}{\text{d} \ln k} & = & - 2 \bar{\kappa}_0 \bar{\kappa}_1 - 6 \bar{\kappa}_0 \bar{\kappa}_1 \bar{\lambda}_0 - 4 \bar{\kappa}_0 \bar{\lambda}_0 \bar{\lambda}_1 - 6 \bar{\kappa}_0^2 \bar{\kappa}_1 + \left( 2 \alpha - 2 \right) \left( \bar{\kappa}_0 \bar{\kappa}_1^2 + \bar{\kappa}_0 \bar{\kappa}_1 \bar{\kappa}_2 \right) ,
\\
\frac{\text{d} \bar{n}_S}{\text{d} \ln k} & = & - 2 \bar{\kappa}_0 \bar{\kappa}_1 - \bar{\kappa}_1 \bar{\kappa}_2 - 6 \bar{\kappa}_0 \bar{\kappa}_1 \bar{\lambda}_0 - 4 \bar{\kappa}_0 \bar{\lambda}_0 \bar{\lambda}_1 - \bar{\kappa}_1 \bar{\lambda}_0 \bar{\lambda}_1 - 2 \bar{\kappa}_1 \bar{\kappa}_2 \bar{\lambda}_0 - 2 \bar{\lambda}_0 \bar{\lambda}_1 \bar{\lambda}_2 \nonumber \\
&& - 2 \bar{\lambda}_0 \bar{\lambda}_1^2 - 6 \bar{\kappa}_0^2 \bar{\kappa}_1 + \left( 2 \alpha - 3 \right) \bar{\kappa}_0 \bar{\kappa}_1^2 + \left( 2 \alpha - 4 \right) \bar{\kappa}_0 \bar{\kappa}_1 \bar{\kappa}_2 + \alpha \left( \bar{\kappa}_1 \bar{\kappa}_2^2 + \bar{\kappa}_1 \bar{\kappa}_2 \bar{\kappa}_3 \right) ,
\ea
while in the EF the runnings have the form
\ba
\frac{\text{d} \hat{n}_T}{\text{d} \ln k} & = & - 2 \hat{\kappa}_0 \hat{\kappa}_1 - 6 \hat{\kappa}_0^2 \hat{\kappa}_1 + \left( 2 \alpha - 2 \right) \left( \hat{\kappa}_0 \hat{\kappa}_1^2 + \hat{\kappa}_0 \hat{\kappa}_1 \hat{\kappa}_2 \right) ,
\\
\frac{\text{d} \hat{n}_S}{\text{d} \ln k} & = & - 2 \hat{\kappa}_0 \hat{\kappa}_1 - \hat{\kappa}_1 \hat{\kappa}_2 - 6 \hat{\kappa}_0^2 \hat{\kappa}_1 + \left( 2 \alpha - 3 \right) \hat{\kappa}_0 \hat{\kappa}_1^2 + \left( 2 \alpha - 4 \right) \hat{\kappa}_0 \hat{\kappa}_1 \hat{\kappa}_2 \nonumber \\
&& + \alpha \left( \hat{\kappa}_1 \hat{\kappa}_2^2 + \hat{\kappa}_1 \hat{\kappa}_2 \hat{\kappa}_3 \right) .
\ea
Again, plugging \eqref{hk0-to-bk0} - \eqref{tripleh-to-tripleb} into the EF expressions, one can see that the expressions for the runnings of the spectral indices in the two frames coincide. Finally, the runnings of the spectral indices can be written in terms of the PSRPs as
\ba
\frac{\text{d} n_T}{\text{d} \ln k} & = & - 8 \epsilon_V^2 + 4 \epsilon_V \eta_V + \left( 52 \alpha - \frac{148}{3} \right) \epsilon_V^3 - \left( 50 \alpha - 38 \right) \epsilon_V^2 \eta_V \nonumber \\
&&+ \left( 16 \alpha - 12 \right) \epsilon_V \eta_V^2 + \left( 4 \alpha - \frac{8}{3} \right) \epsilon_V \zeta_V^2 ,\\
\frac{\text{d} n_S}{\text{d} \ln k} & = & - 24 \epsilon_V^2 + 16 \epsilon_V \eta_V - 2 \zeta_V^2 + \left( 180 \alpha - \frac{104}{3} \right) \epsilon_V^3 - \left( 180 \alpha + \frac{4}{3} \right) \epsilon_V^2 \eta_V \nonumber \\
&& + \left( 32 \alpha + 12 \right) \epsilon_V \eta_V^2 + \left( 24 \alpha + 4 \right) \epsilon_V \zeta_V^2 - \left( 2 \alpha - \frac{8}{3} \right) \eta_V \zeta_V^2 - \left( 2 \alpha + \frac{2}{3} \right) \rho_V^3 .
\ea
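As a sketch, these runnings translate directly into the following Python helpers (our illustration). Here \texttt{a} is the constant $\alpha$ of the Green's function method, which in the conventions of \cite{Gong2001} takes the value $\alpha = 2 - \ln 2 - \gamma \approx 0.7296$; as above, \texttt{zeta2\_V} and \texttt{rho3\_V} stand for $\zeta_V^2$ and $\rho_V^3$.
\begin{verbatim}
import math
ALPHA = 2 - math.log(2) - 0.5772156649  # 2 - ln 2 - Euler-Mascheroni gamma

def running_nT(eps_V, eta_V, zeta2_V=0.0, a=ALPHA):
    return (-8*eps_V**2 + 4*eps_V*eta_V + (52*a - 148/3)*eps_V**3
            - (50*a - 38)*eps_V**2*eta_V + (16*a - 12)*eps_V*eta_V**2
            + (4*a - 8/3)*eps_V*zeta2_V)

def running_nS(eps_V, eta_V, zeta2_V=0.0, rho3_V=0.0, a=ALPHA):
    return (-24*eps_V**2 + 16*eps_V*eta_V - 2*zeta2_V
            + (180*a - 104/3)*eps_V**3 - (180*a + 4/3)*eps_V**2*eta_V
            + (32*a + 12)*eps_V*eta_V**2 + (24*a + 4)*eps_V*zeta2_V
            - (2*a - 8/3)*eta_V*zeta2_V - (2*a + 2/3)*rho3_V)
\end{verbatim}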
\section{Equation of motion in terms of $e$-folds}
\label{app3:efolds}
We can rewrite the equation of motion for the invariant $\mathcal{I}_\phi$ as a nonlinear second order differential equation with respect to the number of $e$-folds. In the Einstein frame we have
\begin{equation}
\frac{\text{d}^2 \mathcal{I}_\phi }{\text{d} \hat{N}^2}+3\frac{\text{d} \mathcal{I}_\phi }{\text{d} \hat{N}}-\left( \frac{\text{d} \mathcal{I}_\phi }{\text{d} \hat{N}}\right)^3+\left[ 1-\frac{1}{3}\left( \frac{\text{d} \mathcal{I}_\phi }{\text{d} \hat{N}} \right)^2 \right] 3\sqrt{\eps_V} = 0 ,
\label{Eefolds-EOM}
\end{equation}
while in the Jordan frame the equation of motion can be brought to the following form:
\be
\begin{split}
\frac{\text{d}^2 \mathcal{I}_\phi}{\text{d} \bar{N}^2}+3 \frac{\text{d}\mathcal{I}_\phi}{\text{d} \bar{N}}&+\frac{\text{d}\mathcal{I}_\phi}{\text{d} \bar{N}} \left[ 1-\frac{1}{2} \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}} \right]^{-1} \left[ -\frac{1}{2}\frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}}+\frac{1}{4}\left( \frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}} \right)^2 - \left( \frac{\text{d}\mathcal{I}_\phi}{\text{d} \bar{N}} \right)^2+\frac{1}{2} \frac{\text{d}^2 \ln{\mathcal{I}_m}}{\text{d} \bar{N}^2} \right]
\\
&-\frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}}\frac{\text{d}\mathcal{I}_\phi}{\text{d} \bar{N}}+\left[ 1 + \frac{1}{4} \left(\frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}} \right)^2-\frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}}-\frac{1}{3}\left( \frac{\text{d}\mathcal{I}_\phi}{\text{d} \bar{N}}\right)^2 \right] 3\sqrt{\eps_V} = 0 .
\end{split}
\label{Jefolds-EOM}
\ee
By numerically solving these equations we can obtain the invariant field as a function of the number of e-folds in the two frames. Of course, in the case with minimal coupling we have $\frac{\text{d} \ln{\mathcal{I}_m}}{\text{d} \bar{N}} = \frac{\text{d}^2 \ln{\mathcal{I}_m}}{\text{d} \bar{N}^2} = 0$ and $\bar{N} = \hat{N}$, which means that \eqref{Jefolds-EOM} reduces to \eqref{Eefolds-EOM}.
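For concreteness, a minimal Python sketch for integrating the Einstein frame equation \eqref{Eefolds-EOM} with SciPy could read as follows; the choice $\epsilon_V(\mathcal{I}_\phi) = 2/\mathcal{I}_\phi^2$ (a quadratic-type invariant potential) and the initial data are purely illustrative assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def eps_V(I):
    return 2.0 / I**2          # illustrative: quadratic invariant potential

def rhs(N, y):                 # y = (I_phi, dI_phi/dN)
    I, dI = y
    # Eefolds-EOM solved for the second derivative:
    d2I = -3*dI + dI**3 - 3*np.sqrt(eps_V(I))*(1 - dI**2/3)
    return [dI, d2I]

sol = solve_ivp(rhs, (0.0, 70.0), [16.0, 0.0],
                dense_output=True, rtol=1e-10)
\end{verbatim}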
\end{appendices}
\section{Introduction}
As the demand for higher-quality images explodes,
the integration level of the mobile camera sensors rapidly grows, which results in smaller pixel sizes.
However, the smaller pixel sizes of the sensor directly impact the image quality, especially in a low-light environment.
Pixel binning has been proposed, which groups nearby pixels to form a bigger pixel and thus enlarges each pixel's effective size.
The binning improves the signal-to-noise ratio (SNR) in a low-light environment.
Also, it maintains the high-resolution image by rearranging pixels into standard Bayer patterns in a high-illuminance environment.
For example, Quad CFA groups adjacent $2\times 2$ pixels to extract single-color information, while Nona CFA groups $3\times 3$ pixels;
the latter is adopted in recent flagship smartphones including Samsung Galaxy S21 Ultra and Xiaomi Mi 11 Ultra.
Recently, Q$\times$Q{} Bayer CFA has been proposed, which groups $4\times 4$ pixels as described in Figure \ref{fig:qxqcfa}.
The RAW image captured by a camera sensor is a single channel image since each pixel can obtain single-color information.
An image signal processor (ISP) reconstructs a high-quality RGB image from a RAW image.
An ISP consists of demosaicing, which retrieves color information of each pixel,
and other processing steps such as denoising, white balancing, and gamma correction.
With the rapid advancement of neural networks,
deep learning-based ISP techniques \cite{pynet,eednet,delnet,csanet} significantly improved the quality of reconstructions;
however, most existing works focused on standard Bayer CFA (Figure~\ref{fig:bayercfa}).
Only a few works considered non-Bayer CFAs \cite{nona,2021demosic}.
Since non-Bayer CFAs induce different image statistics compared to standard Bayer-filtered images,
the ISP system should be re-optimized, especially for the most recent Q$\times$Q{} Bayer CFA.
However, to the best of our knowledge, there have been no ISPs specifically designed for Q$\times$Q{} Bayer CFA.
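To make the pattern concrete, the following Python sketch simulates a Q$\times$Q{} Bayer RAW image from an RGB image: each $4\times 4$ block carries a single color and the blocks follow the usual Bayer arrangement. The RGGB block-level layout here is our illustrative assumption; the actual sensor layout may differ.
\begin{verbatim}
import numpy as np

def qxq_mosaic(rgb, group=4):
    """Simulate a Q x Q Bayer RAW image from an HxWx3 RGB image."""
    h, w, _ = rgb.shape
    bayer = np.array([[0, 1],    # R G   (channel index per 4x4 block)
                      [1, 2]])   # G B
    # Expand each Bayer cell into a group x group block, tile over the image.
    block = np.kron(bayer, np.ones((group, group), dtype=int))
    reps = (h // block.shape[0] + 1, w // block.shape[1] + 1)
    mask = np.tile(block, reps)[:h, :w]
    # Keep, at every pixel, only the channel selected by the mask.
    return np.take_along_axis(rgb, mask[..., None], axis=2)[..., 0]
\end{verbatim}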
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[height=4cm]{figures/bayercfa}
\caption{standard Bayer CFA}
\label{fig:bayercfa}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[height=4.1cm]{figures/qxqcfa}
\caption{Q$\times$Q{} Bayer CFA}
\label{fig:qxqcfa}
\end{subfigure}
\caption{Standard Bayer and Q$\times$Q{} Bayer color filter arrays}
\end{figure}
Another bottleneck of mobile cameras is the restricted resources of computationally limited platforms.
The recent trend of over-parameterized deep learning models \cite{transfomersr,imagetransformer} is not suitable for resource-constrained environments.
For example, recently proposed PyNET \cite{pynet} achieves remarkable performance in RAW to RGB reconstruction;
however, the number of parameters is 47.55M, and the size of trained parameters is 181.6 MB.
In this paper, we propose a next-generation deep learning-based ISP model, PyNET-Q$\times$Q{},
designed explicitly for the resource-constrained environment and Q$\times$Q{} Bayer CFA.
Note that we primarily focus on demosaicing since other tasks, such as whitening and gamma correction, may be subjective and tunable at the software level.
PyNET-Q$\times$Q{} is based on the recently proposed PyNET \cite{pynet}, but is more mobile-friendly by removing a significant portion of model parameters.
On the other hand, the Q$\times$Q{} image inherently has a discontinuity issue,
where the demosaiced image has a blocking effect because of the larger size of pixel groups.
To resolve the discontinuity issue, we introduce two additional techniques for Q$\times$Q{} input:
1) {\it residual learning} which adds a skip connection from an input RAW gray image to the reconstruction,
and 2) {\it sub-pixel convolution} \cite{ESPCN} that replaces interpolation based upsampling.
However, the reduced size of the proposed model results in degraded reconstructed image quality compared to the original PyNET.
Thus, we incorporate a knowledge distillation strategy to improve the output image quality.
Knowledge distillation \cite{kd} is a process of transferring a distilled soft label from the powerful cumbersome model (teacher) to a smaller model (student).
Hinton et al.\ showed that one could train the smaller model more effectively with aid from the teacher.
However, distillation was originally designed for classification models.
It is not directly applicable to generative models because the notion of softening the label is unclear when the output is an image.
Motivated by Cho and Hariharan \cite{softoutput}, we introduce a novel distillation technique, {\it progressive distillation},
which transfers knowledge from different levels of teachers where the small model (student) learns from the right level of the teacher.
In this work, we train PyNET-Q$\times$Q{} with progressive distillation and verify the effectiveness of the proposed strategy via experiments.
We also emphasize that progressive distillation is not specific to our model, and it applies to distillation for other generative models.
Note that it is hard to obtain ground truth images while training deep learning-based ISP models.
PyNET \cite{pynet} is trained with RAW images captured by Huawei P20 cameraphone (as an input)
and images captured by Canon 5D Mark IV DSLR camera (as a ground truth output).
However, such datasets may cause issues due to the imperfect alignment between input and output images.
Also, the trained model may depend on a specific ISP system with subjective components (such as white-balancing, color correction, and demoireing).
Most other works \cite{demosaic2,demosaic4,demosaic5} apply Bayer CFA to common images to obtain input images,
but filtered images have different statistics from sensor-level images because of ISP.
In this work, we train our model with the {\it hybrid dataset},
a mixture of RAW 3CCD images captured by Hitachi HV-F203SCL and a common dataset DIV2K \cite{div2k}.
Unlike standard cameras, each pixel sensor of a 3CCD camera captures all sensor-level RGB color information,
and therefore its output can be considered ground truth.
The corresponding Q$\times$Q{} input image is a filtered RAW 3CCD image by Q$\times$Q{} Bayer CFA.
Thus, the hybrid dataset contains variable scenes from DIV2K and more accurate color information from 3CCD images.
We also test the proposed model on Q$\times$Q{} input images obtained by an actual Q$\times$Q{} camera sensor (under development).
Our main contributions are summarized as follows.
\begin{itemize}
\item We propose {\bf the first mobile-friendly demosaicing model for the Q$\times$Q{} Bayer CFA}, PyNET-Q$\times$Q{}.
More precisely, we modify PyNET for Q$\times$Q{} inputs by introducing residual learning and sub-pixel convolution,
then compress it (from 181 MB to 5 MB) for mobile environments.
\item We propose a {\bf novel progressive distillation strategy}, which is generally applicable for generative networks.
\item We add {\bf sensor-level RAW 3CCD images} to a training dataset that are more suitable for demosaicing tasks than common datasets.
\item We demonstrate the performance of our model with {\bf actual Q$\times$Q{} input images} that are obtained from the Q$\times$Q{} camera sensor.
\end{itemize}
\section{Backgrounds}
\subsection{Deep Learning based Demosaicing}
The deep learning-based demosaicing models naturally arise due to the remarkable success of deep neural networks in many computer vision tasks
\cite{demosaic1,demosaic2,demosaic3,demosaic4,demosaic5,kokkinos2018deep}.
However, most of them primarily focused on Bayer CFA.
For non-Bayer CFA demosaicing, Syu et al.\ proposed DMCNN \cite{demosaic7} based on SRCNN \cite{SRCNN}
and showed its effectiveness on several non-Bayer CFA inputs, including diagonal stripe \cite{diag}, GYGM, and Hirakawa \cite{hirakawa}.
Kim et al.\ \cite{2020demosaic} proposed an efficient end-to-end demosaicing model that consists of two pyramid networks for Quad CFA ($2\times 2$).
At the same time, Sharif et al.\ \cite{demosaic6} suggested the joint demosaicing and denoising scheme that leverages depth and spatial attention.
For Nona CFA ($3\times 3$), Sugawara et al.\ \cite{nona} designed a GAN based on a spatial-asymmetric attention module to reduce artifacts.
Kim et al.\ \cite{2021demosic} showed that a duplex pyramid network (DPN) \cite{2020demosaic} achieves low visual artifacts and good edge restoration in demosaicing images obtained by SAMSUNG CMOS sensor.
Beyond the demosaicing-specific models, several deep learning-based ISP models are proposed to replace classic ISP \cite{eccvcomp}.
Recent works have focused on a hierarchical structure such as U-Net \cite{unet} to handle local and global features.
U-Net is a U-shaped network that handles fine and coarse features with contracting and expanding paths.
EEDNET \cite{eednet} used a U-Net structure with the channel attention residual dense block and clip-L1 loss.
W-Net \cite{wnet} used the two-cascaded U-Net architecture with channel attention module,
whereas CameraNet \cite{cameranet} also used the two-cascaded U-Nets, which consists of Restore-Net and Enhance-Net.
Beyond U-Net-based models, DeepISP \cite{deepisp} proposed a network by cascading two CNNs.
PyNET \cite{pynet} improves the performance significantly by using an inverted pyramidal structure with various convolution filters.
PyNET-CA \cite{pynetca} added a channel attention mechanism to PyNET and improved the reconstruction quality further.
We also would like to mention that super-resolution \cite{srsurvey},
which aims to reconstruct a high-resolution (HR) image from a low-resolution (LR) image,
has similar aspects of demosaicing since it fills the missing information of the image.
Classical approaches on super-resolution construct an HR image based on prior knowledge \cite{sparsesrprior,gradientsr}.
In contrast, CNN-based \cite{SRCNN,DnCNN,DRCNN,ESPCN,FSRCNN,IRCNN,VDSR}
and GAN-based \cite{srgan,esrgan} super-resolution models generate photorealistic output images,
outperforming the classical methods without hand-crafted elements.
Note that there are other network models in computer vision,
such as vision transformer (ViT) \cite{dosovitskiy2020vit,liang2021swinir}, and regression-based approaches \cite{drn}.
However, most of them are computationally heavy and do not meet the design requirements of mobile environments.
Our model is based on PyNET because of its verified performance on ISP-related tasks
and its compressible CNN-based architecture.
\subsection{PyNET}
Since our model structure is based on PyNET \cite{pynet}, we thoroughly review PyNET in this section.
PyNET has an inverted pyramidal structure with five levels, where level 5 is the lowest and level 1 the highest.
Each level uses differently scaled images;
the lower level operates on lower resolution images and focuses on learning global features.
In contrast, the higher level performs on higher resolution images and learns an image's local details.
Each level of PyNET consists of multi-convolution blocks.
The lower level contains blocks of two convolution layers with a kernel size $3\times 3$,
and the higher level has parallel convolution layers with various kernel sizes ($3\times3$, $5\times5$, $7\times7$, and $9\times9$).
On top of level 1, PyNET has level 0 consisting of a convolution layer and an activation function,
which upscales level 1 output to the target resolution.
PyNET is trained sequentially from the lowest level to the highest level.
The lower level outputs are upsampled to match the scale of features in the next level.
Before applying convolution layers, the higher level concatenates the upsampled feature maps (from the previous level)
and an intermediate scaled input.
All pre-trained lower levels are trained altogether when we train the higher level.
Figure~\ref{fig:orig_pynet} in Section~\ref{app:pynet} illustrates the network architecture of PyNET.
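As a rough PyTorch-style sketch of the multi-convolution blocks just described (channel counts and activation are our illustrative assumptions, not the exact PyNET configuration), a higher-level block with parallel $3/5/7/9$ kernels can be written as follows.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiConvBlock(nn.Module):
    # Parallel convolutions with kernel sizes 3/5/7/9, concatenated.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                          nn.LeakyReLU(0.2))
            for k in (3, 5, 7, 9)
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)
\end{verbatim}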
Despite the remarkable performance of PyNET, the inverted pyramidal structure requires numerous parameters,
which is not desirable for mobile environments.
We also note that the input image of PyNET has 4 channels with half the resolution of the original RAW image,
as described in Figure~\ref{fig:gray}.
This makes it more vulnerable to discontinuity issues, especially when the input is a Q$\times$Q{} image.
\subsection{Knowledge Distillation}
In knowledge distillation \cite{kd}, a pre-trained cumbersome teacher model provides a distilled output (soft labels)
to a smaller student model to transfer the teacher's knowledge.
Intuitively, the student model should learn better from the stronger model;
however, Cho and Hariharan \cite{softoutput} showed that an accurate teacher is not always good.
The authors showed that a less-trained network could be a better teacher when the student network has insufficient capacity.
This implies that the teacher's knowledge should match the student's capability.
There are variants of knowledge distillations.
Feature distillation \cite{fitnet,at} transfers intermediate feature maps of the teacher model to a student model,
while online distillation \cite{online0,online1,online2,online3,online4} trains the teacher and student models simultaneously.
In adaptive distillation \cite{adaptiveensemble,adaptiveteacher1}, the student network learns from multiple teachers adaptively.
Du et al.\ \cite{adaptiveensemble} update the weights of knowledge distillation losses and feature losses
while training based on teachers' gradients.
Liu et al.\ \cite{adaptiveteacher1} transfer features from multiple teachers
and adaptively adjust weights between teachers' soft targets.
On the other hand, in generative tasks that produce an image (such as super-resolution and demosaicing),
it is nontrivial to directly apply a distillation technique since it does not have a notion of soft labels.
Thus, many super-resolution networks distill features of teacher networks \cite{srdistill1,fakd,lsfd}
with an additional tool such as a regressor to resolve dimension mismatch between the teacher and the student.
Gao et al.\ \cite{srdistill1} transfer first-order statistical map (e.g., average, maximum, or minimum value) of intermediate features,
while FAKD \cite{fakd} proposed spatial affinity of features based distillation to utilize rich high-dimensional statistical information.
LSFD \cite{lsfd} introduced a deeper regressor consisting of five $3\times 3$ convolution layers
to achieve a larger receptive field and an attention method based on a difference (between teacher and student)
that selectively focuses on vulnerable pixel locations.
PISR \cite{pisr} added an encoder that exploits privileged information of ground truth and transfers the knowledge through feature distillation.
Beyond super-resolution, Aguinaldo et al.\ \cite{gancomp1} proposed a distillation method for general GANs
by transferring knowledge based on the pixel-level distance between images from the student and the teacher.
KDGAN \cite{kdgan} proposed three-player distillation with a student, a teacher, and a discriminator.
However, most knowledge distillation approaches are based on a pre-trained fixed teacher,
which does not consider students' increasing capability as training proceeds.
\section{PyNET-Q$\times$Q{}}
\subsection{Model Architecture}
Since PyNET handles RAW type input and shows extraordinary performance with easily compressible architecture,
we propose PyNET-Q$\times$Q{} based on the PyNET.
PyNET-Q$\times$Q{} is specifically tuned for mobile devices with a lighter structure
and additional network design for Q$\times$Q{} input.
First, to make the model lighter, we only keep levels 0 and 1 and remove lower levels (levels 2, 3, 4, and 5)
dedicated to global features.
Also, since the multi-convolution blocks increase the FLOP count extensively,
PyNET-Q$\times$Q{} reduces the number of filters in all blocks by half.
We compensate the model degradation due to compression with a novel distillation technique,
which we discuss in Section~\ref{subsec:progressive distillation}.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figures/pynet_qxq}
\caption{The architecture of the proposed PyNET-Q$\times$Q{}}
\label{fig:pynetqxq}
\end{figure}
Recall that Q$\times$Q{} Bayer CFA groups $4\times 4$ pixel patches to obtain single-color information;
the distance between patches of the same color is larger than in the regular Bayer pattern.
Thus, Q$\times$Q{} input produces output images with blocking effects,
which are amplified further when the model has a lower capacity with reduced parameters.
To overcome this problem, we introduce two additional components to PyNET for Q$\times$Q{} image:
1) residual learning and 2) sub-pixel convolution.
We illustrate the network architecture of PyNET-Q$\times$Q{} in Figure~\ref{fig:pynetqxq}.
\subsubsection{Residual learning}
The Q$\times$Q{} input image provides the true pixel values in part,
and the corresponding pixels in the model's output should have a consistent value.
In other words, the model should preserve information in the input Q$\times$Q{} RAW image
and reconstruct the missing parts of the input.
Thus, it is natural to provide an input image to the (near) final layer of the model
so that the model can solely focus on the missing part.
PyNET \cite{pynet} has skip connections in all levels,
but level 0 does not since it only upsamples level 1 output.
The proposed model consists of a residual connection
that provides a single channel gray image to the final layer at level 0.
A gray image is obtained by collecting all channel information of the Q$\times$Q{} RAW input image.
Note that a gray image is a single channel full resolution image,
different from a 4 channel (RGBG) half-resolution PyNET input, as shown in Figure~\ref{fig:gray}.
Similar to DenseNet \cite{huang2017densely}, the final layer at level 0 concatenates (instead of adding) a gray image and level 1's output;
then, the last convolution layer produces the sensor-level RGB output image.
We found that the residual learning provides an overall structure of the original image
that compensates for the discontinuity of the Q$\times$Q{} Bayer CFA.
We provide a detailed discussion on the effect of gray image skip connection in Section~\ref{subsec:results}.
Note that the additional skip connection increases only nine parameters, which is negligible.
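A minimal PyTorch-style sketch of this skip connection is given below; the channel counts and the final kernel configuration are illustrative assumptions of ours (the paper's exact final layer differs, as reflected by its nine-parameter overhead), and \texttt{level1\_out} is assumed to be already upsampled to full resolution.
\begin{verbatim}
import torch
import torch.nn as nn

class Level0Head(nn.Module):
    def __init__(self, feat_ch=16):
        super().__init__()
        # Final convolution sees level-1 features plus the 1-channel gray image.
        self.final = nn.Conv2d(feat_ch + 1, 3, kernel_size=3, padding=1)

    def forward(self, level1_out, gray):
        # gray: (B, 1, H, W) full-resolution RAW gray image
        return self.final(torch.cat([level1_out, gray], dim=1))
\end{verbatim}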
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth]{figures/gray}
\caption{Difference of gray image and 4 channel (RGBG) half resolution input}
\label{fig:gray}
\end{figure}
\subsubsection{Sub-pixel convolution}
An inverted-pyramidal structure of PyNET requires multiple upsampling steps to transform features from lower levels to higher levels.
PyNET \cite{pynet} upsamples the features with bilinear interpolation followed by a $3\times 3$ convolution,
which causes cumulative noise in the features and loses information due to non-invertibility.
On the other hand, deconvolution \cite{deconvolution}, another upsampling technique with fractional stride,
has a checkerboard artifact that is more critical for Q$\times$Q{} inputs.
Motivated from the upsampling method in super-resolution \cite{ESPCN},
we replace the interpolation-based upsampling with sub-pixel convolution.
Sub-pixel convolution inflates the channel using $3\times3\times4$ kernels and rearranges it,
whereas deconvolution adds padding first, followed by a standard $3\times3$ convolution.
Sub-pixel convolution is invertible and has fewer checkerboard artifacts \cite{checkerboardfree}.
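In PyTorch terms, the replacement amounts to the following sketch (schematic, with the channel inflation factor fixed to $4$ for a $2\times$ upsample).
\begin{verbatim}
import torch.nn as nn

def subpixel_upsample(channels):
    # A 3x3 convolution inflates channels by 4; PixelShuffle rearranges
    # them: (B, 4C, H, W) -> (B, C, 2H, 2W).
    return nn.Sequential(
        nn.Conv2d(channels, 4 * channels, kernel_size=3, padding=1),
        nn.PixelShuffle(2),
    )
\end{verbatim}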
\subsection{Progressive Distillation}\label{subsec:progressive distillation}
We apply knowledge distillation to compensate for the performance degradation due to parameter and level reduction.
The teacher model is a full-size PyNET (all five levels and the original number of filters)
with the additional components for Q$\times$Q{} input (residual learning and sub-pixel convolution).
We denote this teacher model by {\it enhanced PyNET}.
Figure~\ref{fig:pynet} in Section~\ref{app:pynet} describes the full-size PyNET teacher model,
where the enhancements (sub-pixel convolution and residual learning) are highlighted in magenta.
Motivated by Cho and Hariharan \cite{softoutput}, we adjust the teacher's performance level to smooth the teacher's knowledge in demosaicing tasks.
However, instead of using a less-trained teacher that is fixed, we adaptively switch the teacher's level while training.
We call this strategy {\it progressive distillation}.
The intuition of progressive distillation is to keep switching the teacher model to a better one when the student is ready.
Let $\{T_1, ..., T_k\}$ be the collection of the teachers that share the network architecture
but are trained with the different number of epochs.
The lower index implies the less trained teacher,
i.e., $T_1$ is the least trained, and $T_k$ is the most trained teacher.
The student model is initially trained by itself without the teacher's help.
When the difference between the student's output and the ground truth saturates,
the student starts learning from the less-trained teacher ($T_1$).
Then, whenever the matching rate between the student's output and the teacher's output saturates,
the student switches from the less-trained teacher ($T_i$) to a more-trained teacher ($T_{i+1}$).
Note that $T_k$ is not a fully-trained teacher
since the capacity of the student network is much smaller than the teacher's.
With progressive distillation, the unprepared student easily imitates the teacher early,
and it can follow the teacher's learning process by adaptively switching teachers.
Recall that the distilled output can be considered a label-smoothing regularizer in classification model distillation.
In our case, less trained teachers' output can be viewed as a softened output,
and therefore it also plays the role of a regularizer.
Note that the proposed progressive distillation is different
from adaptive distillation techniques in the literature \cite{adaptiveensemble,adaptiveteacher1},
which mainly adjust the weights among teacher ensembles or loss functions.
In contrast, we keep a single teacher model and adaptively switch its training level while training the student.
Also, Salimans and Ho recently proposed a progressive distillation \cite{salimans2022progressive}
that distills a trained deterministic diffusion sampler, which is different from our progressive distillation.
The authors iteratively distilled a slow teacher diffusion model to a faster student model by adjusting sampling steps,
and the distilled student becomes a teacher in the next iteration.
On the other hand, we fix the teacher model and adjust the level of the teacher to match the student's capability.
\subsection{Training}
Similar to the PyNET \cite{pynet}, we train each level of PyNET-Q$\times$Q{} sequentially from level 1 to level 0.
For level 1 training, we do not apply a distillation,
and the student is trained with the $2\times$ downscaled ground truth image.
While training level 0, we use a progressive distillation scheme described above.
\subsubsection{Level 1 training}
Level 1 of the proposed model is trained with $2\times$ downscaled images.
We train level 1 from scratch without any distillation.
Let ${I}^{(1)}_{S}$ and ${I}^{(1)}_{GT}$ denote the demosaiced image (output of level 1) and the downscaled ground truth image, respectively.
Then, level 1 training minimizes the loss of original PyNET model \cite{pynet}, which is given by
\begin{align}
\mathcal{L}^{(1)}_{DE} &= \left\| I^{(1)}_{S} - I^{(1)}_{GT} \right\|^2_{2}
+ \lambda_1\cdot \mathcal{L}^{(1)}_{VGG},
\end{align}
where $\mathcal{L}^{(1)}_{VGG}$ is the VGG-based \cite{vgg} perceptual loss that measures semantic differences of the level 1 outputs, and $\lambda_1>0$ is a weight.
\subsubsection{Level 0 training}
In level 0 training, we train the whole PyNET-Q$\times$Q{} model
where level 0 is randomly initialized, and level 1 is initialized with the trained parameters from level 1 training.
The loss function consists of PyNET loss and distillation loss.
Let ${I}_{S}$ and ${I}_{GT}$ denote the demosaiced image (output of the model) and the ground truth image, respectively.
Similar to level 1 training, the PyNET loss is given by
\begin{align}
\mathcal{L}_{DE} &= \left\| I_{S} - I_{GT} \right\|^2_{2} + \lambda_1\cdot \mathcal{L}_{VGG} + \lambda_2\cdot \mathcal{L}_{MS-SSIM},
\end{align}
where $\mathcal{L}_{VGG}$ and $\mathcal{L}_{MS-SSIM}$ denote the VGG loss and the MS-SSIM loss, respectively.
For distillation loss, we distill the intermediate features of early layers in level 1.
This is because we removed the lower levels in PyNET-Q$\times$Q{},
and level 1 of PyNET-Q$\times$Q{} should behave similarly to the combination of all lower levels (levels 2 to 5) of PyNET.
For a feature distillation, we compute $L_2$ difference between intermediate features.
However, the teacher and the student have different feature dimensions;
previous studies \cite{srdistill1,fakd,lsfd} introduced additional regressors or extracted statistical information from features to handle this mismatch.
We likewise add $1\times1$ regressors $R$ and compare the intermediate features of level 1.
For the $i$-th step of progressive distillation when the student model $S$ learns from the $i$-th teacher $T_i$,
the distillation loss $\mathcal{L}^{(i)}_{DS}$ is given by
\begin{align}
\mathcal{L}^{(i)}_{DS} &= \left\| R(F_{S}) - F_{T_i} \right\|^2_{2},
\end{align}
where ${F}_{S}$ and ${F}_{T_i}$ denote intermediate feature map of level 1 from the student $S$ and the teacher $T_i$, respectively.
Finally, at the $i$-th step of progressive distillation,
we train PyNET-Q$\times$Q{} by minimizing the total loss
\begin{align}
\mathcal{L}_{total}&= \mathcal{L}_{DE} + \alpha \cdot \mathcal{L}^{(i)}_{DS},
\end{align}
with weight $\alpha>0$.
We switch the teacher to $T_{i+1}$ if the feature difference between the student and teacher saturates.
More precisely, we say the feature difference saturates when the variance of the differences over the last five epochs is below a certain threshold $\sigma$.
The progressive distillation process is described in Algorithm~\ref{alg:progressive}.
\begin{algorithm}
\caption{Progressive distillation}\label{alg:progressive}
\textbf{Parameters:}\\
\hspace*{\algorithmicindent}$S$: the student model\\
\hspace*{\algorithmicindent}$\{T_1, \ldots, T_k\}$: the pre-trained teacher models
\begin{algorithmic}[1]
\State Initialize the student $S$ level 1 using the parameters from level 1 training
\State Train the student $S$ until $\|I_{S}- I_{GT}\|^2$ saturates
\For {$i=1,2,\ldots,k$}
\State Distill the teacher $T_i$ to the student $S$ until $\|R(F_S) - F_{T_i}\|^2$ saturates
\EndFor
\end{algorithmic}
\end{algorithm}
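A minimal Python sketch of this loop is given below. The student, teachers, regressor, data loader, loss and optimizer are placeholders passed in by the caller; we assume the student returns both the output image and its level-1 feature map, and each teacher returns the corresponding feature map.
\begin{verbatim}
import numpy as np
import torch

def saturated(history, window=5, sigma=1e-4):
    # Saturation test from the text: variance of the last five
    # epoch losses below the threshold sigma.
    return len(history) >= window and np.var(history[-window:]) < sigma

def progressive_distillation(student, teachers, regressor, loader,
                             demosaic_loss, optimizer, epochs, alpha=10.0):
    stage, history = 0, []      # stage 0: no teacher; then T_1, ..., T_k
    for _ in range(epochs):
        epoch_loss = 0.0
        for raw, gt in loader:
            out, feat_s = student(raw)        # image + level-1 feature
            loss = demosaic_loss(out, gt)     # L_DE
            if stage > 0:                     # L_DS against current teacher
                with torch.no_grad():
                    feat_t = teachers[stage - 1](raw)
                loss = loss + alpha * torch.mean(
                    (regressor(feat_s) - feat_t) ** 2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        history.append(epoch_loss)
        if saturated(history) and stage < len(teachers):
            stage, history = stage + 1, []    # switch to the next teacher
\end{verbatim}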
\section{Experiments}\label{sec:exp}
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figures/sensor}
\caption{The pipeline for Q$\times$Q{} demosaicing performance evaluation.}
\label{fig:sensor}
\end{figure}
This section provides experimental results of PyNET-Q$\times$Q{} and
compares them with conventional demosaicing logic.
Recall that we focus on demosaicing, i.e., reconstructing sensor level RGB images from Q$\times$Q{} RAW images.
The remaining ISP steps generate the output RGB image as described in Figure~\ref{fig:sensor}.
Similar to the PyNET, all networks are trained using an ADAM optimizer \cite{kingma2014adam}
with $\beta_1=0.9, \beta_2=0.99, \epsilon=10^{-8}$, and learning rate $10^{-4}$.
PyNET-Q$\times$Q{} was trained sequentially from level 1 to level 0.
Level 1 is optimized with $\lambda_1=0.1$,
whereas level 0 is optimized with $\lambda_1=1$ and $\lambda_2=0.4$.
In progressive distillation, we switch the teacher at the 7th and 20th epochs.
A weight parameter $\alpha$ is set to 10 to match the scale of loss values.
\subsection{Datasets}
Most previous deep learning-based models are trained with high-quality common image datasets
such as DIV2K \cite{div2k}, Flickr2K \cite{flickr2k}, WED \cite{wed}, and BSDS500 \cite{BSDS500}.
For example, the widely accepted procedure in super-resolution \cite{liang2021swinir,drn,edsr,carn}
is downsampling the image (using a bicubic kernel) to obtain an input LR image,
while the original image is considered the target HR image.
Similarly, in demosaicing, one might want to extract an input RAW image from the dataset by applying the Bayer CFA.
However, this approach may cause issues since HR images in common image datasets are outputs of an ISP system.
The statistics of (downsampled) HR images are significantly different from sensor level inputs due to the nature of ISP,
which consists of color correction, gamma correction, and white balancing.
Thus, the model trained with downsampled data may imitate the specific ISP system.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/conven_1.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/pynet_1.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/enhen_1.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/ad_1.png}
\end{subfigure}\\
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/conven_2.png}
\label{fig:conventional}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/pynet_2.png}
\label{fig:pynet_high}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/enhen_2.png}
\label{fig:enhen_high}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/convdl/ad_2.png}
\label{fig:ad_high}
\end{subfigure}
\caption{Visual comparison between the conventional algorithm and deep learning algorithms for Q$\times$Q{} demosaicing.
From left to right: conventional, PyNET, enhanced PyNET, and PyNET-Q$\times$Q{} with progressive distillation.}
\label{fig:conv_vs_dl}
\end{figure}
\FloatBarrier
In this work, we incorporate RAW 3CCD images (captured by Hitachi HV-F203SCL) into our dataset.
In a 3CCD camera, the light coming through the lens is split into three RGB color beams by a dichroic prism,
and three color sensors measure the intensity of each beam.
Thus, it can obtain RGB RAW images at the sensor level without having an additional ISP system.
There is no loss of original color information or statistical mismatch in 3CCD RAW images.
We train PyNET-Q$\times$Q{} on the {\it hybrid dataset},
a mixture of 935 RAW 3CCD images and 900 images from the DIV2K dataset \cite{div2k} for training and validation.
Since the images in a common dataset are ISP-processed,
we apply an inverse gamma function with $\gamma=2.2$ (a common choice for balancing monitors and true color)
to match the input statistics by reverting gamma correction.
We show in experiments that the hybrid dataset improves the model's capability; a visual comparison is provided in Section~\ref{app:dataset}.
Following PyNET \cite{pynet}, the processed images were cropped into $448\times 448$ patches.
The patches with small pixel variance are discarded for training stability.
Hence, the hybrid dataset consists of 14595 training patches and 481 test patches.
We evaluate the trained model with Q$\times$Q{} images obtained by an actual Q$\times$Q{} image sensor (under development).
We provide sample Q$\times$Q{} images with detailed descriptions in Section~\ref{app:qxq}.
The test Q$\times$Q{} RAW images are cropped into patches of size $2912\times 2912$ because of their massive size ($8000\times6000$).
We partially release the 3CCD dataset; detailed instructions are given in Section~\ref{app:3ccd}.
Note that both RAW 3CCD data and Q$\times$Q{} test RAW images are 10-bit images,
which provides more detailed information than 8-bit images.
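The preparation pipeline can be sketched in a few lines of Python; the normalization and the variance threshold below are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def prepare_patches(img, size=448, var_thresh=1e-3,
                    apply_inverse_gamma=True):
    x = img.astype(np.float64) / img.max()
    if apply_inverse_gamma:          # for ISP-processed (DIV2K) images only
        x = x ** 2.2                 # revert gamma correction (gamma = 2.2)
    patches = []
    for i in range(0, x.shape[0] - size + 1, size):
        for j in range(0, x.shape[1] - size + 1, size):
            p = x[i:i + size, j:j + size]
            if p.var() > var_thresh:  # discard nearly flat patches
                patches.append(p)
    return patches
\end{verbatim}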
\begin{figure}[t]
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/pynet1.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/enhen1.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/light1.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/kd1.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/ad_1.png}
\end{subfigure}\\
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/pynet2.png}
\label{fig:pynet_res}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/enhen_2.png}
\label{fig:enpynet_res}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/light2.png}
\label{fig:pynetqq_res}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/kd2.png}
\label{fig:kd_res}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/ad_2.png}
\label{fig:ad_res}
\end{subfigure}
\caption{Visual comparison of PyNET based models. From left to right: PyNET, Enhanced PyNET, PyNET-Q$\times$Q{}, PyNET-Q$\times$Q{} with KD, and
PyNET-Q$\times$Q{} with PD.}
\label{fig:pynetcompare}
\end{figure}
\subsection{Results}\label{subsec:results}
\subsubsection{Comparison with other methods}
\begin{table}[t]
\setlength{\tabcolsep}{4pt}
\begin{center}
\caption{PSNR, MS-SSIM, and FLOPs of test images ($448\times 448$).
The {\bf bold} and \textcolor{blue}{blue} text indicate the best and second-best scores, respectively.}
\label{table:kd}
\begin{tabular}{lcccc}
\hline\noalign{\smallskip}
Methods & PSNR & MS-SSIM & Parameters (M) & FLOPs (MAC) \\
\noalign{\smallskip}
\hline
PyNET & 34.964 & \textcolor{blue}{0.9841} & 47.55 M & 342.35 G \\
Enhanced PyNET (Teacher) & {\bf 36.650} &{\bf 0.9870} & 56.97 M & 342.73 G \\
PyNET-Q$\times$Q{} (Student) & 34.564 & 0.9825 & 1.07 M & 54.05 G \\
PyNET-Q$\times$Q{} $+$ KD & 34.562 & 0.9813 & 1.07 M & 54.05 G \\
PyNET-Q$\times$Q{} $+$ PD (Ours)& \textcolor{blue}{34.973} & 0.9833 & 1.07 M & 54.05 G \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{Ablation study on enhancement. (RL: residual learning, SP: sub-pixel convolution)}
\label{table:ablation}
\begin{tabular}{lcccc}
\hline\noalign{\smallskip}
Methods & RL & SP & PSNR & MS-SSIM \\
\noalign{\smallskip}
\hline
baseline & & &32.594 &0.9754 \\
baseline$+$RL &\checkmark & &33.787 &0.9800 \\
baseline$+$SP & &\checkmark &33.972 &0.9803 \\
baseline$+$RL$+$SP(Ours)&\checkmark &\checkmark &34.564&0.9825 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Since this is the first demosaicing model for Q$\times$Q{} images,
we compare PyNET-Q$\times$Q{} with the other PyNET variants (PyNET and enhanced PyNET) and with conventional demosaicing logic.
We evaluate the models on actual Q$\times$Q{} images by visually inspecting reconstructed outputs, as shown in Figure~\ref{fig:conv_vs_dl}.
The conventional method does not produce a smooth edge reconstruction,
while the PyNET-based models produce smooth edges, as shown in the upper figures.
In addition, the conventional method tends to amplify pixels of different colors in the RAW Q$\times$Q{} image,
as shown in the blue box in the upper figure.
The high frequency reconstruction of the conventional method is slightly blurry,
whereas the PyNET-based models reconstruct more distinct shapes as shown in the lower figures.
Surprisingly, PyNET-Q$\times$Q{} shows comparable visual quality to enhanced PyNET even with about $1/50$ of the parameters.
The more visual comparisons of the conventional method and PyNET-Q$\times$Q{} are provided in Section~\ref{app:convsdl}.
\subsubsection{Impact of progressive distillation}
We explore the effectiveness of progressive distillation.
In Table~\ref{table:kd}, the enhanced PyNET outperforms the other PyNET variants on PSNR and MS-SSIM.
Despite the small number of parameters,
PyNET-Q$\times$Q{} with progressive distillation (PD) achieves the second-best PSNR score, even higher than the original PyNET.
Notably, PyNET-Q$\times$Q{} without distillation presents lower scores than the original PyNET,
which implies that our enhancement and progressive distillation compensate for level removal nicely.
Besides the scores, PyNET-Q$\times$Q{} without distillation presents a lower visual quality of reconstruction, especially in texture.
For example, it contains the blue-ish shadow in Figure~\ref{fig:pynetcompare}.
This is because PyNET-Q$\times$Q{} without distillation fails to handle the global texture and colors,
which is the role of lower levels in the original PyNET.
However, progressive distillation transfers the `knowledge' on lower levels of enhanced PyNET to PyNET-Q$\times$Q{}
so that it reconstructs more realistic shadows.
Compared to the original PyNET, which presents the second-best MS-SSIM,
the proposed model with progressive distillation shows comparable MS-SSIM.
However, the original PyNET shows some artifacts in shadow, as shown in Figure~\ref{fig:pynetcompare}.
On the other hand, PyNET-Q$\times$Q{} with progressive distillation reduces these artifacts.
More visual comparisons are provided in Section~\ref{app:qxqcompare}.
\subsubsection{Ablation study}
As a sanity check for the additional components of the enhanced PyNET, we conduct the following experiments.
For a fair comparison, the number of parameters is the same in all experiments.
Since the goal of the proposed demosaicing algorithm is to make PyNET lighter and improve performance,
we let PyNET with two levels (levels 0 and 1) and half of the filters be our baseline.
The proposed enhancements are 1) residual learning with a gray image input (RL) and 2) sub-pixel convolution upsample (SP).
The scores (PSNR and MS-SSIM) of baseline models with and without each enhancement are presented in Table~\ref{table:ablation}.
The residual learning with a gray image improves PSNR by 1.2 dB over the baseline.
It also shows a better MS-SSIM result, confirming that residual learning improves perceptual image quality.
The visual evaluation also shows the contribution of residual learning,
as shown in Figure~\ref{fig:baseline_object} and Figure~\ref{fig:withgray_object}.
The baseline has a red false color artifact and a slightly distorted tone and texture.
However, the model with residual learning preserves the original color and restores the original texture more clearly.
The sub-pixel convolution upsampling improves PSNR by 1.38 dB from the baseline.
The significant improvement in PSNR and MS-SSIM shows the importance of the proper upsampling method.
The sub-pixel convolution upsampling also provides a better color reconstruction,
while the baseline shows color deviation, as shown in Figure~\ref{fig:baseline_object} and Figure~\ref{fig:withsp_object}.
The result indicates that sub-pixel convolution upsampling outperforms interpolation-based upsampling.
With both residual learning and subpixel-based upsampling,
the model significantly improves PSNR and MS-SSIM, as well as visual quality, as presented in Figure~\ref{fig:withboth_object}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/v1.png}
\caption{Base}
\label{fig:baseline_object}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/v2.png}
\caption{Base$+$RL}
\label{fig:withgray_object}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/v3.png}
\caption{Base$+$SP}
\label{fig:withsp_object}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ds/light3.png}
\caption{Base$+$RL$+$SP}
\label{fig:withboth_object}
\end{subfigure}
\caption{Ablation study on enhancement of PyNET-Q$\times$Q{} }
\label{fig:albation}
\end{figure}
\section{Conclusions}
We proposed PyNET-Q$\times$Q{}, the first deep learning-based demosaicing model for Q$\times$Q{} images.
We enhanced PyNET by adding residual learning and sub-pixel convolution, which has a better fit with Q$\times$Q{} inputs.
Then, we compressed the enhanced PyNET and trained with progressive distillation.
The proposed model is much smaller (2.3\% of the size of PyNET)
while enjoying comparable demosaicing capability.
We tested our model with an input captured by the actual Q$\times$Q{} image sensor and presented the high-quality demosaiced image.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction.}
Heinz Hopf in \cite{Ho} showed that any immersed constant mean curvature topological sphere in $\mathbb{R}^{3}$ must be a round sphere. To do so, he introduced a quadratic differential on any constant mean curvature surface, the Hopf differential, that turns out to be holomorphic for the conformal structure defined by the first fundamental form of the surface, and he observed that the zeroes of this differential coincide with the umbilical points of the surface. Finally, using that any holomorphic quadratic differential defined on a topological sphere must be the trivial one, the proof follows from the classification of umbilical surfaces in $\mathbb{R}^{3}$.
It should be remarked that the Codazzi equation of a constant mean curvature surface (or $H-$ surface) in $\mathbb{R}^{3}$ implies that the Hopf differential is holomorphic. Hence, as the Codazzi equation is the same in every 3-dimensional space form, we can extend the Hopf Theorem to any space form.\\
Regarding $H-$ surfaces with non-empty smooth boundary, J.C. Nitsche in \cite{Ni} used the Hopf differential to classify free boundary constant mean curvature disks (or $H-$ disks) in the Euclidean ball of $\mathbb{R}^{3}$. He showed that the boundary is a line of curvature. Then, he used the Schwarz Reflection Principle to conclude that the Hopf differential vanishes on any free boundary $H-$ disk and hence the disk must be totally umbilical.
When the boundary is a piece-wise regular curve, J. Choe in \cite{Ch} extended Nitsche's classification to the case where the number of singular points is finite, under some additional conditions on the angles at the vertices given by these singular points.
Recently, A. Fraser and R. Schoen in \cite{FS} showed that any two-dimensional free boundary immersion with parallel mean curvature in the Euclidean ball of $\mathbb{R}^{n}$ must be totally umbilical. They proved that the complex quartic differential obtained by squaring the Hopf differential of the surface is holomorphic.\\
The above classification results show that the existence of a holomorphic quadratic differential is a useful tool to classify $H-$ surfaces of $\mathbb{R}^{3}$. Indeed, the underlying idea for the construction of the Hopf differential relies on the Codazzi pair that can be defined on the surface; for example, on a constant mean curvature surface in $\mathbb{R}^{3}$, the Codazzi pair is given by the first and the second fundamental forms. Under some geometrical conditions, a Codazzi pair gives rise to a holomorphic quadratic differential on the surface.
For surfaces in the homogeneous spaces with 4-dimensional isometry group $\mathbb{E}(\kappa , \tau)$, U. Abresch and H. Rosenberg in \cite{Rosen} showed the existence of a holomorphic quadratic differential, the Abresch-Rosenberg differential, on any $H-$ surface. Hence, they classified the topological spheres with constant mean curvature as the rotationally symmetric ones.
In the same spirit as J. Nitsche, I. Fernández and M. Do Carmo used the Abresch-Rosenberg differential to classify $H-$disks in the product spaces $\mathbb{S}^{2} \times \mathbb{R}$ and $\mathbb{H}^{2} \times \mathbb{R}$; they proved that an $H-$ disk that meets an Abresch-Rosenberg surface (i.e., a surface whose Abresch-Rosenberg differential vanishes) along its boundary at a constant angle is part of an Abresch-Rosenberg surface, under certain geometrical conditions. Similar results have been obtained by M. P. Cavalcante and J. Lira in \cite{Cal}; they proved that a $H-$disk in the product spaces that meets a slice at a constant angle is a cap.\\
The aim of this paper is to classify $H-$ disks in any homogeneous space $\mathbb{E}(\kappa , \tau)$. Recently, in \cite{ER4} the authors defined a Codazzi pair $(I,II_{AR})$ such that the $(2,0)$-part of $II_{AR}$ is the Abresch-Rosenberg differential for a $H-$ surface in $\mathbb{E}(\kappa , \tau)$. Then, we use the mentioned Codazzi pair to establish a Joachimsthal-type theorem for the intersection of $H-$ surfaces; in this way, we can use the Abresch-Rosenberg differential to classify $H-$ disks in $\mathbb{E}(\kappa , \tau)$.
As a consequence, we will show that our results generalize the previous classification of $H-$disks in the product spaces $\mathbb{S}^{2} \times \mathbb{R}$ and $\mathbb{H}^{2} \times \mathbb{R}$ by I. Fernández and M. Do Carmo and we will get a classification for $H-$ disks in $\mathbb{E}(\kappa , \tau)$, when $\tau \neq 0$.\\
The content of this paper is organized as follows:
In section 2, we set up the notation that will be used throughout this paper and we review standard facts about $H-$surfaces in $\mathbb{E}(\kappa , \tau)$, as well as the abstract theory of Codazzi pairs on Riemannian surfaces. Finally, we recall the definition of the Abresch-Rosenberg shape operator for a $H-$ surface in $\mathbb{E}(\kappa , \tau)$ and its properties.\\
In section 3, we define the lines of curvature with respect to the Abresch-Rosenberg shape operator and we prove the Key Lemma of this work. In fact, the Key Lemma is a Joachimsthal-type theorem for the intersection of $H-$ surfaces in $\mathbb{E}(\kappa , \tau)$. Later, we prove that natural geometric configurations for the intersection of $H-$ surfaces imply the conditions of the Key Lemma.\\
Finally, in section 4, we apply the Key Lemma to classify $H-$ disks in $\mathbb{E}(\kappa , \tau)$ that meet an Abresch-Rosenberg surface along their boundary at a constant angle. First, we classify these types of surfaces with smooth boundary, assuming certain geometrical conditions. Afterwards, we extend the previous classification to the case of piece-wise differentiable boundary, assuming that the number of singular points is finite and additional conditions on the angles at the vertices, by using a general result about Codazzi pairs in \cite{ER3}.
{\bf Acknowledgements}. I am grateful to professor José María Espinar for his constant encouragement and support to prepare this paper. Also, I would like to thank the warm welcome by the people at the Instituto de Matematica Pura e Aplicada (IMPA) where this work was initiated and the Universidad Federal Fluminense (UFF) for its hospitality, where this paper was concluded.
\section{Preliminaries}
In this preliminary section, we summarize the general theory of $H-$ surfaces in homogeneous Riemannian manifolds and the abstract theory of Codazzi pairs on Riemannian surfaces; for this we follow the references \cite{AEG3,D,ER,ER2,Scott}.
\subsection{Homogeneous Riemanniann Manifolds $\mathbb{E}(\kappa ,\tau)$.}
The simply connected homogeneous Riemannian manifolds whose isometry group has dimension 4 are the three-dimensional Riemannian manifolds that fiber over a simply connected two-dimensional space form $\mathbb{M}^{2}(\kappa)$, whose fibers are the trajectories of a unitary Killing vector field $\xi$. According to standard notation, we denote these spaces by $\mathbb{E}(\kappa ,\tau)$, where $\kappa$ and $\tau$ are constants so that $\kappa - 4\tau^{2} \neq 0$.
In fact, the homogeneous manifolds $\mathbb{E}(\kappa , \tau)$ can be classified depending on the values of $\kappa$ and $\tau$ (\cite{Scott}); if $\tau = 0$, $\mathbb{E}(\kappa , \tau)$ is the product space $\mathbb{M}^2 (\kappa)\times \mathbb{R}$, where $\mathbb{M}^2(\kappa) = \mathbb{S} ^2(\kappa)$ if $\kappa >0$ ($\mathbb{S} ^2(\kappa)$ the sphere of constant curvature $\kappa$), or $\mathbb{M}^2(\kappa) = \mathbb{H} ^2(\kappa)$ if $\kappa < 0$ ($\mathbb{H}^2(\kappa)$ the hyperbolic plane of constant curvature $\kappa$). If $\tau \neq 0$, $\mathbb{E}(\kappa , \tau) $ is either a Berger sphere if $\kappa > 0$, or a Heisenberg space if $\kappa = 0$, or the universal cover of ${\rm PSL}(2,\mathbb{R})$ if $\kappa < 0$. Henceforth we will suppose that $\kappa$ is $-1$, $0$ or $1$.
As we said, the homogeneous space $\mathbb{E}(\kappa , \tau)$ admits a Riemannian submersion $\pi: \mathbb{E}(\kappa , \tau) \rightarrow \mathbb{M}^{2}(\kappa )$ over a simply connected surface of constant sectional curvature $\kappa$. The fibers, i.e., the inverse images of points of $\mathbb{M}^{2}(\kappa)$ under $\pi$, are the trajectories of a unitary Killing field $\xi$, called the vertical vector field.
Denote by $\overline{\nabla}$ the Levi-Civita connection of $\mathbb{E}(\kappa , \tau)$, then for all $X \in \mathfrak{X} (\mathbb{E}(\kappa , \tau) )$, the following equation holds \cite{Scott}:
$$\overline{\nabla}_{X}\xi = \tau X \wedge \xi ,$$where $\tau$ is the bundle curvature.
\subsection{Immersed surfaces in $\mathbb{E}(\kappa , \tau)$.}
Let $\Sigma \subset \mathbb{E}(\kappa , \tau)$ be an oriented immersed connected surface. We endow $\Sigma$ with the induced metric of $\mathbb{E}(\kappa , \tau)$, the first fundamental form, which we denote by $\meta{}{}$. Denote by $\nabla$, $R$ and $A$ the Levi-Civita connection, the Riemann curvature tensor and the shape operator of $\Sigma$ respectively. So, $$AX = -\overline{\nabla}_{X}N \text{ for all }X \in \mathfrak{X}(\Sigma), $$where $N$ is the unit normal vector field along the surface $\Sigma$. Then $II(X,Y) = \langle AX,Y \rangle$ is the second fundamental form of $\Sigma$.
Moreover, denote by $J$ the oriented rotation of angle $\frac{\pi}{2}$ on $T\Sigma$, defined by the formula
$$JX = N \wedge X \text{ for all } X \in \mathfrak{X}(\Sigma) ,$$
where $\wedge$ denotes the wedge product of vector fields. Set $\nu = \langle N,\xi\rangle$ and $\mathbf{T} = \xi - \nu N$, that is, $\nu$ is the normal component of the vertical vector field $\xi$, called the angle function, and $\mathbf{T}$ is the tangent component of the vertical vector field $\xi$.
In terms of a local conformal parameter $z$, the first fundamental form $I = \langle,\rangle$ and the second fundamental form are given by
\begin{eqnarray}
I & = & 2\lambda |dz|^{2} \\
II & = & Q dz^{2} + 2 \lambda H |dz|^{2} + \overline{Q}d\overline{z}^{2},
\end{eqnarray}where $Q = - \langle \overline{\nabla}_{\partial_{z}}N,\partial_{z} \rangle$, so that $Qdz^{2}$ is the usual Hopf differential of $\Sigma$. Hence, in this conformal coordinate, we have the following:
\begin{lema}[\cite{ER,ER2}]
\label{LemaEQ}
Given an immersed $H-$surface $\Sigma \subset \mathbb{E}(\kappa , \tau)$, the following equations are satisfied:
\begin{eqnarray}
K & = & K_{e} + \tau^{2} + (\kappa - 4\tau^{2})\nu^{2} \label{GaussZ}\\
Q_{\overline{z}} & = & \lambda(\kappa - 4\tau^{2})\nu \mathbf{t} \label{CodazziZ}\\
\mathbf{t}_{z} & = & \frac{\lambda_{z}}{\lambda}\mathbf{t} + Q\nu \label{NablaTZ}\\
\mathbf{t}_{\overline{z}} & = & \lambda(H + i \tau) \nu \label{NablaTZb}\\
\nu_{z} & = & -(H - i\tau)\mathbf{t} -\frac{Q}{\lambda}\overline{\mathbf{t}} \label{DerNuZ} \\
|\mathbf{t} |^{2} & = & \frac{1}{2} \lambda(1-\nu^{2}) \label{normZ},
\end{eqnarray}where $\mathbf{t} = \langle \mathbf{T}, \partial_{z} \rangle$, $\overline{\mathbf{t}} = \langle \mathbf{T}, \partial_{\overline{z}} \rangle$, $K_{e}$ is the extrinsic curvature and $K$ is the Gaussian curvature of $\Sigma$.
\end{lema}
For an immersed $H-$surface $\Sigma \subset \mathbb{E}(\kappa , \tau)$, we can define a global quadratic differential.
\begin{definition}[\cite{Rosen,Rosen1}]\label{DefAR}
Given a local conformal parameter $z$ for $I$, the Abresch-Rosenberg differential is defined by:
$$\mathcal{Q}^{AR} = Q^{AR}dz^{2} = (2(H+i \tau)Q - (\kappa - 4\tau^{2})\mathbf{t}^{2})dz^{2}.$$
Note that $\mathcal{Q}^{AR}$ does not depend on the conformal parameter $z$, hence $\mathcal{Q}^{AR}$ is globally defined on $\Sigma$.
\end{definition}
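Indeed, under a holomorphic change of conformal parameter $w = w(z)$, each term transforms as a quadratic differential: since $\partial_{w} = \frac{dz}{dw}\,\partial_{z}$,
$$Q(w) = \left(\frac{dz}{dw}\right)^{2} Q(z), \qquad \mathbf{t}(w) = \langle \mathbf{T}, \partial_{w} \rangle = \frac{dz}{dw}\,\mathbf{t}(z),$$
while $H$, $\tau$ and $\kappa$ are scalars; hence $Q^{AR}(w)\,dw^{2} = Q^{AR}(z)\,dz^{2}$.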
Therefore, using the definition of the Abresch-Rosenberg differential and Lemma \ref{LemaEQ}, we obtain
\begin{theorem}[\cite{Rosen,Rosen1}]
\label{thQH}
Let $\Sigma$ be an $H-$surface in $\mathbb{E}(\kappa , \tau)$, then the Abresch-Rosenberg differential $\mathcal{Q}^{AR}$ is holomorphic for the conformal structure induced by the first fundamental form $I$.
\end{theorem}
In the following, we recall the classification of the complete $H-$surfaces in $\mathbb{E}(\kappa , \tau)$ whose Abresch-Rosenberg differential vanishes.
\begin{theorem}[\cite{Rosen,Rosen1,ER2}]
\label{QVa}
Let $\Sigma \subset \mathbb{E}(\kappa , \tau)$ be a complete $H-$ surface whose Abresch-Rosenberg differential vanishes. Then $\Sigma$ is invariant under a one parameter group of isometries of $\mathbb{E}(\kappa , \tau)$. Moreover, if $H = \tau = 0$, then $\Sigma$ is a slice in $\mathbb{S}^{2} \times \mathbb{R}$ or $\mathbb{H}^{2} \times \mathbb{R}$; in the case $H^{2}+\tau^{2} \neq 0$, the Gauss curvature $K$ of these examples satisfies:
\begin{itemize}
\item If either $4(H^{2} + \tau^{2}) > \kappa-4\tau^{2}$ when $\kappa-4\tau^{2}>0$ or $H^{2} + \tau^{2}> -(\kappa-4\tau^{2})$ when $\kappa-4\tau^{2}<0$, then $K>0$, i.e., $\Sigma$ is a rotationally invariant sphere. In particular, $4H^{2} + \kappa >0$.
\item If $4H^{2} + \kappa = 0$ and $\nu = 0$, then $K= 0$, i.e., $\Sigma$ is either a vertical plane in ${\rm Nil}_{3}$ or a vertical cylinder over a horocycle in $\mathbb{H}^{2} \times \mathbb{R}$ or $\widetilde{{\rm PSL}(2,\mathbb{R})}$.
\item In all remaining examples there exists a point with negative Gauss curvature.
\end{itemize}
\end{theorem}
The above theorem motivates the following definition:
\begin{definition}\label{Def:ARSurface}
A complete $H-$ surface $\Sigma$ immersed in $\mathbb{E}(\kappa , \tau)$ is an Abresch-Rosenberg surface if the Abresch-Rosenberg differential $\mathcal{Q}^{AR}$ vanishes on $\Sigma$. In particular, by Theorem \ref{QVa}, the surface $\Sigma$ must be invariant under a one parameter group of isometries of $\mathbb{E}(\kappa , \tau)$.
\end{definition}
In Figure 1, we show the meridians of the Abresch-Rosenberg surfaces in the product spaces $\mathbb{S}^{2} (\kappa) \times \mathbb{R}$ and $\mathbb{H}^{2}(\kappa) \times \mathbb{R}$. In \cite{Rosen}, the authors gave a complete description of these surfaces.
\begin{figure}[!ht]
\subfloat[Meridians of CMC spheres $S^{2}_{H}$ in \newline $\mathbb{S}^{2}(\kappa) \times \mathbb{R}$ ]{%
\begin{overpic}[width=0.50\textwidth]{Esferas.png}
\end{overpic}
}
\subfloat[Meridians of the CMC surfaces $C^{2}_{H}$ of catenoidal type in $\mathbb{H}^{2}(\kappa) \times \mathbb{R}$]{%
\begin{overpic}[width=0.50\textwidth]{Catenoides.png}
\end{overpic}
}
\subfloat[Meridians of CMC
spheres $S^{2}_{H} $ and \newline disk-like CMC surfaces $D_H^{2}$ in $\mathbb{H}^{2}(\kappa) \times \mathbb{R}$]{%
\begin{overpic}[width=0.50\textwidth]{Discos.png}
\end{overpic}
}
\subfloat[Parabolic CMC surfaces $P_{H}^{2}$, which arise as limits, in $\mathbb{H}^{2}(\kappa) \times \mathbb{R}$]{%
\begin{overpic}[width=0.50\textwidth]{Parabolics.png}
\end{overpic}
}
\caption{ Meridians of CMC surfaces in the product spaces $\mathbb{M}^{2}(\kappa) \times \mathbb{R}$ whose Abresch-Rosenberg differential vanishes}
\end{figure}
\subsection{Codazzi Pairs on Surfaces.}
An important tool in this paper is the notion of a Codazzi pair. We follow \cite{AEG3} and references therein. We shall denote by $\Sigma$ an orientable (and oriented) smooth Riemannian surface; otherwise, we would work with its oriented two-sheeted covering.
\begin{definition}
A fundamental pair on $\Sigma$ is a pair of real quadratic forms
$(I,II)$ on $\Sigma$, where $I$ is a Riemannian metric.
\end{definition}
Associated with a fundamental pair $(I,II)$ we define the shape operator $S$ of the pair as:
\begin{equation}\label{ii}
II(X,Y)=I(S(X),Y) \text{ for any } X,Y \in \mathfrak{X} (\Sigma).
\end{equation}
Conversely, it is clear from (\ref{ii}) that the quadratic form $II$ is totally
determined by $I$ and $S$. In other words, giving a fundamental pair on $\Sigma$ is equivalent to giving a Riemannian metric on $\Sigma$ together with a self-adjoint endomorphism $S$.
We define the mean curvature $H(I,II)$, the extrinsic curvature $K(I,II)$ and the principal curvatures of the fundamental pair $(I,II)$ as one half of the trace, the determinant and the eigenvalues of the endomorphism $S$, respectively.
Now, consider $\Sigma$ as a Riemann surface with respect to the metric $I$ and take a local conformal parameter $z$, then we can write
\begin{equation}\label{parholomorfo}
\begin{split}
I &=2\lambda\,|dz|^2, \\
II &=Q(I,II)\,dz^2+2\lambda\,H\,|dz|^2+\overline{Q}(I,II)\,d\bar{z}^2.
\end{split}
\end{equation}
The quadratic form $Q(I,II)\,dz^2$, which does not depend on the chosen parameter $z$, is known as the Hopf differential of the pair $(I,II)$. An especially interesting case occurs when the fundamental pair satisfies the Codazzi equation, that is,
\begin{definition}
We say that a fundamental pair $(I,II)$, with shape operator $S$, is a
Codazzi pair if
\begin{equation}\label{ecuacioncodazzi}
\nabla_XSY-\nabla_YSX-S[X,Y]=0,\qquad X,Y\in\mathfrak{X}(\Sigma),
\end{equation}
where $\nabla$ stands for the Levi-Civita connection associated with the Riemannian metric $I$ and $\mathfrak{X} (\Sigma)$ is the set of smooth vector fields on $\Sigma$.
\end{definition}
Let us also observe that, by equations (\ref{parholomorfo}) and (\ref{ecuacioncodazzi}), we obtain that the fundamental pair $(I,II)$ is a Codazzi pair if and only if
$$
Q(I,II)_{\bar{z}}=\lambda\,H(I,II)_z.
$$
Thus, one has the T. K. Milnor Theorem:
\begin{lema}[\cite{Mi}]\label{l0.1}
Let $(I,II)$ be a fundamental pair. Then, any two of the conditions {\rm (i)}, {\rm (ii)}, {\rm (iii)} imply the third:
\begin{itemize}
\item[{\rm (i)}] $(I,II)$ is a Codazzi pair.
\item[{\rm (ii)}] $H(I,II)$ is constant.
\item[{\rm (iii)}] The Hopf differential $Q(I,II)\,dz^2$ of the pair $(I,II)$ is holomorphic.
\end{itemize}
\end{lema}
\subsection{The Abresch-Rosenberg shape operator}
Now, we recall the definition of a Codazzi pair on any $H-$surface in $\mathbb{E}(\kappa , \tau)$ such that the Abresch-Rosenberg differential appears as its Hopf differential. First, we study the case $\tau = 0$. Afterwards, we study the case $\tau \neq 0$.
\subsection{$H-$surfaces in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$.}
Consider a complete immersed $H-$surface $\Sigma \subset \mathbb{M}^{2}(\kappa) \times \mathbb{R}$. According to the notation introduced above, we define the self-adjoint endomorphism $S_{AR}$ along $\Sigma$ as
\begin{equation}
S_{AR}X = 2H \, AX - \kappa \langle X,\mathbf{T} \rangle \mathbf{T} + \frac{\kappa}{2}\abs{\mathbf{T}}^{2}X - 2H^{2}X,
\label{Oper}
\end{equation}where $X \in \mathfrak{X}(\Sigma)$. Now consider the quadratic form $II_{AR}$ associated to $S_{AR}$ given by \eqref{Oper}. In \cite{AEG3}, it was shown that $(I,II_{AR})$ is a Codazzi pair on $\Sigma$ when $H$ is constant. Moreover, it is traceless, i.e., ${\rm tr}(S_{AR}) = 0= H(I,II_{AR})$, and the Hopf differential associated to $(I,II_{AR})$ is the Abresch-Rosenberg differential $\mathcal Q^{AR}$ in $\mathbb{M}^{2}(\kappa) \times \mathbb{R}$.
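Tracelessness can also be checked directly from \eqref{Oper}: since ${\rm tr}(A) = 2H$ and the endomorphism $X \mapsto \langle X,\mathbf{T}\rangle \mathbf{T}$ has trace $\abs{\mathbf{T}}^{2}$,
$${\rm tr}(S_{AR}) = 2H(2H) - \kappa \abs{\mathbf{T}}^{2} + 2 \cdot \frac{\kappa}{2}\abs{\mathbf{T}}^{2} - 2 \cdot 2H^{2} = 0.$$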
\subsection{$H-$surfaces in $\mathbb{E}(\kappa , \tau)$ with $\tau \neq 0$.}
Let $\Sigma$ be an $H-$surface in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$. In this case we have $H^{2} + \tau^{2} > 0$. According to the notation introduced above, we define the self-adjoint endomorphism $S_{AR}$ along $\Sigma$ as
\begin{equation}\label{ARTraceless}
S_{AR}X = A(X) - \alpha \langle \mathbf{T}_{\theta}, X \rangle \mathbf{T}_{\theta} + \frac{\alpha \abs{\mathbf{T}}^{2}}{2}X - HX,
\end{equation} where
\begin{itemize}
\item $\alpha = \dfrac{\kappa - 4\tau^{2}}{2 \sqrt{H^{2} + \tau^{2}}}$,
\item $e^{2i\theta} = \frac{H - i\tau}{\sqrt{H^{2} + \tau^{2}}}$ and
\item $\mathbf{T}_{\theta} = \cos\theta \mathbf{T} + \sin\theta J\mathbf{T}$.
\end{itemize}
Now, consider the quadratic form $II_{AR}$ associated to $S_{AR}$ given by \eqref{ARTraceless}. In \cite{ER4}, it was shown that $(I,II_{AR})$ is a Codazzi pair on $\Sigma$ when $H$ is constant. Moreover, it is traceless and the Hopf differential associated to $(I,II_{AR})$ is the Abresch-Rosenberg differential $\mathcal Q^{AR}$ in $\mathbb{E}(\kappa , \tau)$ up to the constant $H+i\tau$.
\section{Abresch-Rosenberg lines of curvature.}
We begin this section by defining the concept of a line of curvature with respect to the Abresch-Rosenberg shape operator of an $H-$surface $\Sigma$ in $\mathbb{E}(\kappa , \tau)$, and we characterize these curves in terms of the Abresch-Rosenberg differential.
\begin{definition}
\label{Def5}
Let $\Sigma$ be an $H-$surface in $\mathbb{E}(\kappa , \tau)$ and $\Gamma$ a regular curve parametrized by $\gamma:(-\epsilon,\epsilon) \rightarrow \Sigma$. We say that $\Gamma = \gamma (-\epsilon, \epsilon )$ is a line of curvature with respect to the Abresch-Rosenberg shape operator $S_{AR}$ if there exists a smooth function $\lambda: (-\epsilon,\epsilon) \rightarrow \mathbb{R}$ such that $S_{AR}(\gamma'(t)) = \lambda(t)\gamma'(t)$. In such a case, we call $\Gamma$ an Abresch-Rosenberg line of curvature, in short, an AR-line of curvature.
\end{definition}
Definition \ref{Def5} says that the tangent vector of $\gamma$ is an eigenvector of $S_{AR}$ along $\gamma$. So, the definition is nothing but the natural extension of a line of curvature to the case of the Abresch-Rosenberg shape operator $S_{AR}$ of an $H-$surface in $\mathbb{E}(\kappa , \tau)$. In analogy with the situation in $\mathbb{R}^{3}$ (see \cite{Ch}), there exists a link between lines of curvature with respect to $S_{AR}$ and the Abresch-Rosenberg differential $Q^{AR}dz^{2}$.
\begin{prop}
\label{A-R}
Let $\Sigma$ be a $H-$surface in $\mathbb{E}(\kappa , \tau)$. Then, $\gamma:(-\epsilon,\epsilon) \rightarrow \Sigma$ is a line of curvature for $S_{AR}$ if, and only if, the imaginary part of $Q^{AR}dz^{2}$ vanishes along $\gamma$.
\end{prop}
\begin{proof}
Let $z=u+iv$ be a local conformal parameter of $\Sigma$ for the first fundamental form $I$ and set
\begin{equation*}
\begin{split}
II_{AR}(\partial_{u},\partial_{u}) & = I(S_{AR}(\partial_{u}),\partial_{u}) = L,\\
II_{AR}(\partial_{u},\partial_{v}) & = I(S_{AR}(\partial_{u}),\partial_{v}) = M,\\
II_{AR}(\partial_{v},\partial_{v}) & = I(S_{AR}(\partial_{v}),\partial_{v}) = N.
\end{split}
\end{equation*}
The curve $\gamma(t)= (u(t),v(t))$ is a line of curvature with respect to $S_{AR}$ if, and only if, $S_{AR}(\gamma'(t)) = \lambda(t) \gamma'(t)$. In the local coordinates $(u,v)$, since $I$ is conformal, this is equivalent to asking that $\gamma'(t)$ be an eigenvector of the matrix of $II_{AR}$, that is,
\begin{equation}
\label{Matrix}
\begin{bmatrix}
L(\gamma(t)) & M(\gamma(t)) \\
M(\gamma(t)) & N(\gamma(t))
\end{bmatrix}
\begin{bmatrix}
u'(t)\\
v'(t)
\end{bmatrix}
=
\lambda(t)
\begin{bmatrix}
u'(t)\\
v'(t)
\end{bmatrix} .
\end{equation}
Hence, from \eqref{Matrix}, we get the following linear system
\begin{equation}
\label{sys}
\begin{split}
L(\gamma(t))u'(t) + M(\gamma(t))v'(t) & = \lambda(t)u'(t) ,\\
M(\gamma(t))u'(t) + N(\gamma(t))v'(t) & = \lambda(t)v'(t).\\
\end{split}
\end{equation}
On the one hand, from \eqref{sys}, we obtain
\begin{equation}
\label{Rel}
M(\gamma(t))(v'(t))^{2} + (L(\gamma(t))-N(\gamma(t))) u'(t)v'(t) - M(\gamma(t))(u'(t))^{2} = 0 .
\end{equation}
On the other hand, from the definition of the Abresch-Rosenberg differential, up to a positive constant factor,
\begin{equation}
\label{ABLC}
Q^{AR}(\gamma(t))\,dz(\gamma'(t))^{2} = \big(L(\gamma(t))-N(\gamma(t))-2iM(\gamma(t))\big)\big(u'(t)+iv'(t)\big)^{2},
\end{equation}then, a straightforward computation shows that the imaginary part is given by
\begin{equation}\label{Impart}
\begin{split}
{\rm Im}\big(Q^{AR}(\gamma(t))\,dz(\gamma'(t))^{2}\big)&= 2\big(M(\gamma(t))(v'(t))^{2} + (L(\gamma(t))-N(\gamma(t))) u'(t)v'(t) \\
& \qquad - M(\gamma(t))(u'(t))^{2}\big),
\end{split}
\end{equation}hence, \eqref{Impart} is, up to a positive factor, the left hand side of \eqref{Rel}. This shows that the imaginary part of \eqref{ABLC} vanishes when $\gamma$ is a line of curvature with respect to $S_{AR}$.
Reciprocally, assume that the imaginary part of $Q^{AR}(\gamma(t))\,dz(\gamma'(t))^{2}$ is zero; by \eqref{Impart}, this is equivalent to the vanishing of the left hand side of \eqref{Rel}. Then, for all $t \in (-\epsilon ,\epsilon) $ such that $u'(t) \neq 0$ and $v'(t) \neq 0$ we have
\begin{equation}
\label{eqlam}
\frac{M(\gamma(t))v'(t) + L(\gamma(t))u'(t)}{u'(t)} = \frac{N(\gamma(t))v'(t) + M(\gamma(t))u'(t)}{v'(t)}.
\end{equation}
Now, define the function $\lambda: (-\epsilon ,\epsilon) \rightarrow \mathbb{R}$ as follows
\begin{equation*}
\lambda(t) =
\begin{cases}
\frac{M(\gamma(t))v'(t) + L(\gamma(t))u'(t)}{u'(t)}& \text{for $t$ such that $u'(t) \neq 0$ and $v'(t) \neq 0$},\\
L(\gamma(t)) & \text{for $t$ such that $u'(t) \neq 0$ and $v'(t) = 0$},\\
N(\gamma(t)) & \text{for $t$ such that $u'(t) = 0$ and $v'(t) \neq 0$}.
\end{cases}
\end{equation*}
Therefore, \eqref{sys} and \eqref{eqlam} imply $S_{AR}(\gamma'(t)) = \lambda(t) \gamma'(t)$ for all $t \in (-\epsilon ,\epsilon)$; this shows that $\gamma$ is a line of curvature with respect to $S_{AR}$.
\end{proof}
Next, we will establish a Joachimsthal-type theorem for the intersection of $H-$surfaces in $\mathbb{E}(\kappa , \tau)$. This is a key step in this work.
\begin{lema}[\bf{Key Lemma}] \label{key}
Let $\Sigma_{i}\subset \mathbb{E}(\kappa , \tau)$, $i=1,2$, be $H_{i}-$surfaces such that $\Sigma _{1}\cap \Sigma _{2} \neq \emptyset$. Let $\Gamma \subset \Sigma _{1}\cap \Sigma _{2}$ be a regular curve of transversal intersection. Assume that along $\Gamma$ the following hold:
\begin{enumerate}
\item[a)] $\meta{N_{1}}{N_{2}} $ is constant and
\item[b)] $\sqrt{H^{2}_{1}+\tau ^{2}}\langle \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle \langle J_{2} \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle = \sqrt{H^{2}_{2}+\tau ^{2}}\langle \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle \langle J_{1} \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle $,
\end{enumerate}where $e^{2i\theta _i} = \frac{H _i - i\tau}{\sqrt{H_i^{2} + \tau^{2}}}$, $\mathbf{T} ^{i}_{\theta_{i}} = \cos\theta_{i}\mathbf{T}_{i} + \sin\theta_{i}J\mathbf{T}_{i}$ and $J\mathbf{T} ^{i}_{\theta_{i}} = N_{i}\wedge \mathbf{T} ^{i}_{\theta_{i}}$ for $i=1,2$. Then, $\Gamma $ is an AR-line of curvature for $\Sigma _1$ if, and only if, $\Gamma$ is an AR-line of curvature for $\Sigma _2$.
\end{lema}
\begin{proof}
We assume that $\Gamma$ is an AR-line of curvature for $\Sigma _{2}$; the other case is completely analogous. First, since $\langle N_{1}(\gamma(t)), N_{2}(\gamma(t)) \rangle$ is constant along $\Gamma = \gamma (-\epsilon, \epsilon)$, where $\gamma $ is parametrized by arc-length, we have
\begin{equation}
\label{SOSF3}
\langle A_{1}(\gamma'(t)),N_{2}(\gamma(t)) \rangle + \langle N_{1}(\gamma(t)), A_{2}(\gamma'(t)) \rangle = 0 ,
\end{equation}where $A_{1}$ and $A_{2}$ are the shape operators of $\Sigma _{1}$ and $\Sigma _{2} $ respectively. Now, relating $A_{1}$ and $A_{2}$ with $S_{AR}^{1}$ and $S_{AR}^{2}$ respectively and using that $\gamma(t)$ is a line of curvature for $S_{AR}^{2}$, we can rewrite \eqref{SOSF3} as:
\begin{equation}
\label{eqt}
-\alpha_{1} \langle \mathbf{T}^{1}_{\theta_{1}} , \gamma'(t) \rangle \langle \mathbf{T}^{1}_{\theta_{1}},N_{2}\rangle - \langle S_{AR}^{1}(\gamma'(t)), N_{2}\rangle - \alpha_{2} \langle \mathbf{T}^{2}_{\theta_{2}}, \gamma'(t) \rangle \langle \mathbf{T}^{2}_{\theta_{2}}, N_{1} \rangle = 0,
\end{equation}where $\alpha_{i} = \frac{\kappa-4\tau^{2}}{2 \sqrt{H^{2}_{i}+\tau^{2}}}$ and $\mathbf{T} ^{i}_{\theta_{i}} = \cos\theta_{i}\mathbf{T}_{i} + \sin\theta_{i}J\mathbf{T}_{i}$, $i=1,2$.
We can orient $\gamma$ so that $ (1-d^{2}) ^{-1}N_{1} \wedge N_{2} = \gamma'(t)$, where $d$ is the contact constant angle between $\Sigma _{1} $ and $\Sigma _{2}$.
Since the intersection is transversal, $\{ N_{1}, N_{2}, \gamma'(t) \}$ is an oriented basis of $T_{\gamma(t)}\mathbb{E}(\kappa , \tau)$ for each $t$. Then, the following equations hold:
\begin{equation}
\label{eqt31}
\begin{split}
\langle \mathbf{T}^{1}_{\theta_{1}}, \gamma'(t) \rangle & = (1-d^{2})^{-1} \langle \mathbf{T}^{1}_{\theta_{1}}, N_{1}\wedge N_{2} \rangle = - (1-d^{2})^{-1}\langle J_{1} \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle , \\
\langle \mathbf{T}^{2}_{\theta_{2}}, \gamma'(t) \rangle & = (1-d^{2})^{-1} \langle \mathbf{T}^{2}_{\theta_{2}}, N_{1}\wedge N_{2} \rangle = (1-d^{2})^{-1}\langle J_{2} \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle ,
\end{split}
\end{equation}where $J_{i} \mathbf{T} ^{i}_{\theta_{i}} = N_{i} \wedge \mathbf{T} ^{i}_{\theta _{i}}$ for $i=1,2$. Therefore, \eqref{eqt} and \eqref{eqt3} imply that
\begin{equation*}
\begin{split}
\langle S_{AR}^{1}(\gamma'(t)), N_{2}\rangle &= \frac{(1-d^{2})^{-1}(\kappa -4\tau ^{2})}{\sqrt{H^{2}_{1}+\tau ^{2}}}\langle \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle \langle J_{1} \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle \\
& \qquad -\frac{(1-d^{2})^{-1}(\kappa -4\tau ^{2})}{\sqrt{H^{2}_{2}+\tau ^{2}}}\langle \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle \langle J_{2} \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle \\
& = 0,
\end{split}
\end{equation*}where we have used item $b)$. Thus, $\langle S_{AR}^{1}(\gamma'(t)), N_{2}\rangle = 0 $ along $\Gamma$. Therefore, since $\langle S_{AR}^{1}(\gamma'(t)), N_{1}\rangle = 0 $ along $\Gamma$, we obtain that $S_{AR}^{1}(\gamma'(t)) = \lambda (t) \gamma'(t)$, that is, $\Gamma$ is an AR-line of curvature for $\Sigma _1$.
\end{proof}
Observe that we can see item $b)$ above geometrically as follows. Let $X$ be a unitary vector field along $\Sigma$, not necessarily tangential. Then, $\set{\mathbf{T} _\theta , J \mathbf{T} _\theta}$ is an orthogonal frame along $\Sigma$ away from the points where $\abs{\mathbf{T}} =0$. Let $\beta $ be the (oriented) angle between $\mathbf{T} $ and $X$, that is,
$$ \meta{\mathbf{T} }{X} = \abs{\mathbf{T}} \cos \beta ,$$and hence,
$$ 2 \meta{\mathbf{T} _\theta }{X}\meta{J \mathbf{T} _\theta}{X} = \abs{\mathbf{T} }^2 \sin (2(\beta - \theta)) .$$
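Indeed, writing also $\meta{J\mathbf{T}}{X} = \abs{\mathbf{T}} \sin\beta$, which fixes the orientation of $\beta$, the addition formulas give
$$\meta{\mathbf{T}_{\theta}}{X} = \abs{\mathbf{T}}\cos(\beta - \theta), \qquad \meta{J\mathbf{T}_{\theta}}{X} = \abs{\mathbf{T}}\sin(\beta - \theta),$$
and the identity above follows from $2\sin x \cos x = \sin(2x)$.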
So, coming back to the situation in the Key Lemma, let $\omega _{ij}$ denote the (oriented) angle between $\mathbf{T} ^i$ and $N_j$, for $i,j =1,2$ and $i \neq j$. Hence, item $b)$ can be re-written as
$$\frac{\abs{\mathbf{T} _1}^2}{\sqrt{H^{2}_{1}+\tau ^{2}}}\sin (2(\omega _{12}-\theta _1))= \frac{\abs{\mathbf{T} _2}^2}{\sqrt{H^{2}_{2}+\tau ^{2}}}\sin (2(\omega _{21}-\theta_2)) . $$
The Key Lemma gives us general conditions for $\Gamma$ to be an AR-line of curvature of both surfaces $\Sigma _1$ and $\Sigma _2$. Nevertheless, we will see that certain geometric configurations imply the conditions of the Key Lemma.\\
A curve $\Gamma$ on a surface $\Sigma$ in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$ is horizontal if $\Gamma$ is contained in a horizontal slice $\mathbb{M}^{2}(\kappa) \times \{ \xi_{0} \}$, for some $\xi_{0} \in \mathbb{R}$. On the other hand, the curve $\Gamma$ in $\Sigma$ is said to be vertical if it is an integral curve of the vector field $\mathbf{T}$.
\begin{Corollary}
\label{CoroDCF}
Let $\Sigma_{1}$ and $\Sigma_{2}$ be two constant mean curvature surfaces in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$ with mean curvatures $H_{1}$ and $H_{2}$, normal vectors $N_{1}$ and $N_{2}$ and angle functions $\nu_{1}$ and $\nu_{2}$, respectively. Let $\Gamma \subset \Sigma _{1}\cap \Sigma _{2}$ be a regular curve parametrized by $\gamma$. Suppose that $\Sigma_{1}$ and $\Sigma_{2}$ intersect transversally along $\Gamma$ at a constant angle and that $\Gamma$ is an AR-line of curvature for $\Sigma_{1}$. Assume that along $\Gamma$ one of the following conditions holds:
\begin{enumerate}
\item $\Gamma$ is a horizontal curve of $\Sigma_{1}$.
\item $\Gamma$ is a vertical curve of $\Sigma_{1}$ and $\Sigma_{2}$.
\item $H_{1}=H_{2} \neq 0$ and the angle function $\nu_{1}$ is opposite to the angle function $\nu_{2}$.
\end{enumerate}
Then $\Gamma$ is an AR-line of curvature for $\Sigma_{2}$.
\end{Corollary}
\begin{proof}
In the first case, assume that $\Gamma = \gamma(-\epsilon,\epsilon)$ where $\gamma$ is parametrized by arc-length. In $\mathbb{M}^2 (\kappa)\times \mathbb{R}$, we have $\mathbf{T}^{1}_{\theta_1} = \mathbf{T}_{1}$ and $\mathbf{T}^{2}_{\theta_{2}} = \mathbf{T}_{2}$. Since $\gamma$ is horizontal, we obtain
\begin{equation}
\label{eqt33}
\begin{split}
(1-d^{2})\langle J_{1} \mathbf{T} ^{1}_{\theta_{1}},N_{2} \rangle &= - \langle \mathbf{T}^{1}, \gamma'(t) \rangle = - \langle \xi, \gamma'(t) \rangle = 0,\\
(1-d^{2})\langle J_{2} \mathbf{T} ^{2}_{\theta_{2}},N_{1} \rangle &= \langle \mathbf{T}^{2},\gamma'(t) \rangle = \langle \xi,\gamma'(t) \rangle = 0.
\end{split}
\end{equation}
From \eqref{eqt33} and Lemma \ref{key}, $\gamma$ is an AR-line of curvature of $\Sigma_{2}$.\\
In the second case, $\mathbf{T}_{2}=\mathbf{T}_{1}= \gamma'(t)$ for each $t$, hence $\langle \mathbf{T}_{1},N_{2} \rangle = \langle \mathbf{T}_{2},N_{1} \rangle = 0$, so the hypothesis of Lemma \ref{key} clearly holds and $\gamma$ is an AR-line of curvature of $\Sigma_{2}$.\\
In the third case, suppose $H_{1}=H_{2}=H$ and $\nu_{1}=-\nu_{2}$; therefore
\begin{equation}
\label{Epro}
\begin{split}
H \langle\mathbf{T}_{2},N_{1} \rangle \langle \mathbf{T}_{2},\gamma'(t) \rangle & = H \langle \mathbf{T}_{2},N_{1}\rangle \langle \xi,\gamma'(t) \rangle \\
& = H (\nu_{1}-\nu_{2}d) \langle \mathbf{T}_{1}+\nu_{1}N_{1},\gamma'(t) \rangle \\
& = -H(\nu_{2}- \nu_{1}d) \langle \mathbf{T}_{1},\gamma'(t)\rangle \\
& = -H \langle \mathbf{T}_{1},N_{2} \rangle \langle \mathbf{T}_{1},\gamma'(t) \rangle.
\end{split}
\end{equation}
Note that the third equality uses $\nu_{1} = -\nu_{2}$. Hence, from \eqref{eqt31} and \eqref{Epro}, we see again that the hypotheses of Lemma \ref{key} hold. Then, in any case, $\Gamma$ is an AR-line of curvature of $\Sigma_{2}$.
\end{proof}
Next, we give certain geometric configurations for $H-$ surfaces in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$, that imply the conditions of the Key Lemma.
\begin{Corollary}
\label{CorodCF}
Let $\Sigma_{1}$ and $\Sigma_{2}$ be two $H-$ surfaces in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$, with normal vectors $N_{1}$ and $N_{2}$ and angle functions $\nu_{1}$ and $\nu_{2}$, respectively. Let $\Gamma \subset \Sigma _{1}\cap \Sigma _{2}$ be a regular curve. Suppose that $\Sigma_{1}$ and $\Sigma_{2}$ intersect along $\Gamma$ at a constant angle. Assume also that
\begin{enumerate}
\item If both surfaces are tangent along $\Gamma$, then $N_{1} = N_{2}$ along $\Gamma$.
\item If the intersection is transversal along $\Gamma$, then their respective angle functions satisfy $\nu_{1} = -\nu_{2}$ along $\Gamma$.
\end{enumerate}
Then, $\Gamma$ is an AR-line of curvature for $\Sigma_{1}$ if, and only if, $\Gamma$ is an AR-line of curvature for $\Sigma_{2}$.
\end{Corollary}
\begin{proof}
Let $S^{i}_{AR} X = A_{i}(X) - \alpha_{i} \langle \mathbf{T}^{i}_{\theta_{i}}, X \rangle \mathbf{T}^{i}_{\theta_{i}} + \frac{\alpha_{i} \abs{\mathbf{T}_{i}}^{2}}{2}X-H_{i}X$ be the Abresch-Rosenberg shape operator of $\Sigma_{i}$, $i=1,2$, and let $J_{1}$, $J_{2}$ be the rotations on the tangent bundles of $\Sigma_{1}$ and $\Sigma_{2}$ respectively.\\
In the first case, we have $\mathbf{T}^{1}_{\theta_{1}} \equiv \mathbf{T}^{2}_{\theta_{2}}$ along $\Gamma$, since $\mathbf{T}_{1}\equiv \mathbf{T}_{2}$ and the surfaces have the same mean curvature. Moreover, writing $\Gamma = \gamma(-\epsilon,\epsilon)$, we have $J_{1}\gamma' = J_{2}\gamma'$, and so $II^{1}_{AR}(\gamma',J_{1}\gamma') = II^{2}_{AR}(\gamma',J_{2}\gamma')$.\\
Suppose now that we are in case 2, then $\theta_{1}=\theta_{2}=\theta$ and this implies $\alpha_{1}=\alpha_{2}=\alpha$, so
\begin{equation}
\label{eqt1}
\begin{split}
\alpha \langle \mathbf{T}^{1}_{\theta} , \gamma'(t) \rangle \langle \mathbf{T}^{1}_{\theta},N_{2}\rangle &= \alpha \langle \mathbf{T}_{1},\gamma'(t) \rangle \langle \mathbf{T}_{1},N_{2} \rangle \cos^{2}\theta \\
& \quad + \alpha\cos\theta\sin\theta \langle \mathbf{T}_{1},\gamma'(t) \rangle \langle J_1\mathbf{T}_{1},N_{2} \rangle \\
& \quad + \alpha\langle J_1\mathbf{T}_{1},\gamma'(t) \rangle \langle \mathbf{T}_{1},N_{2} \rangle \sin \theta \cos\theta \\
& \quad + \alpha \langle J_1\mathbf{T}_{1}, \gamma'(t) \rangle \langle J_1\mathbf{T}_{1},N_{2} \rangle \sin^{2}\theta.
\end{split}
\end{equation}
We orient $\gamma$ such that $\{N_{1},N_{2},\gamma'(t) \}$ is an oriented basis of $T_{\gamma(t)}\mathbb{E}(\kappa , \tau)$ for each $t$; then the following equations hold
\begin{equation}
\label{eqt3}
\begin{split}
\langle J_1\mathbf{T}_{1}, \gamma'(t) \rangle & = \langle N_{1} \wedge \mathbf{T}_{1},\gamma'(t) \rangle = (1-d^2) \langle N_{2},\mathbf{T}_{1} \rangle. \\
\langle J_1\mathbf{T}_{1}, N_{2} \rangle & = \langle N_{1} \wedge \mathbf{T}_{1}, N_{2} \rangle = -\frac{1}{1-d^2}\langle \gamma'(t), \mathbf{T}_{1} \rangle.
\end{split}
\end{equation}
Hence, using \eqref{eqt3} in \eqref{eqt1}, we can rewrite \eqref{eqt1} as
\begin{equation}
\label{eqt4}
\begin{split}
\alpha \langle \mathbf{T}^{1}_{\theta} , \gamma'(t) \rangle \langle \mathbf{T}^{1}_{\theta},N_{2}\rangle &= \alpha (\cos^{2}\theta -\sin^{2}\theta )\langle \mathbf{T}_{1},\gamma'(t) \rangle \langle \mathbf{T}_{1},N_{2} \rangle \\
& \quad + \alpha \cos \theta\sin\theta (\langle \mathbf{T}_{1}, N_{2} \rangle^{2} - \langle \mathbf{T}_{1},\gamma'(t)\rangle^{2} ).
\end{split}
\end{equation}
Analogously, we obtain
\begin{equation}
\label{eqt7}
\begin{split}
\alpha \langle \mathbf{T}^{2}_{\theta} , \gamma'(t) \rangle \langle \mathbf{T}^{2}_{\theta},N_{1}\rangle &= \alpha ( \cos^{2}\theta-\sin^{2}\theta) \langle \mathbf{T}_{2},\gamma'(t) \rangle \langle \mathbf{T}_{2},N_{1} \rangle \\
& + \alpha \cos \theta\sin\theta (\langle \mathbf{T}_{2},\gamma'(t)\rangle^{2} - \langle \mathbf{T}_{2}, N_{1} \rangle^{2}).
\end{split}
\end{equation}
Now, the hypothesis implies that
\begin{equation}
\label{angles}
\begin{split}
\langle \mathbf{T}_{1},N_{2} \rangle&=\nu_{2}-\nu_{1}d = -\langle \mathbf{T}_{2},N_{1} \rangle\\
\langle \mathbf{T}_{1},\gamma'(t)\rangle&=\langle\mathbf{T}_{2},\gamma'(t)\rangle=\langle\xi, \gamma'(t)\rangle.
\end{split}
\end{equation}
Finally, substituting the equations in \eqref{angles} into \eqref{eqt4} and comparing with \eqref{eqt7}, we obtain $$\sqrt{H^{2}+\tau ^{2}}\langle \mathbf{T} ^{2}_{\theta},N_{1} \rangle \langle J_{2} \mathbf{T} ^{2}_{\theta},N_{1} \rangle = \sqrt{H^{2}+\tau ^{2}}\langle \mathbf{T} ^{1}_{\theta},N_{2} \rangle \langle J_{1} \mathbf{T} ^{1}_{\theta},N_{2} \rangle,$$
therefore, Lemma \ref{key} shows that $\Gamma$ is an AR-line of curvature of $\Sigma_{1}$ if, and only if, $\Gamma$ is an AR-line of curvature of $\Sigma_{2}$.
\end{proof}
\begin{remark}
Note that we are assuming $H^{2}+\tau^{2} \neq 0$. When $\tau=0$, we consider the usual Abresch-Rosenberg shape operator. In \cite{dCF}, the authors studied lines of curvature with respect to the usual Abresch-Rosenberg shape operator and classified disk immersions in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$. Their principal result is contained in Corollary \ref{CoroDCF}. Hence, Lemma \ref{key} can be seen as a generalization to the case $\tau \neq 0$.
\end{remark}
\section{Capillary disks in $\mathbb{E}(\kappa , \tau)$.}
Throughout this section, we will denote by $\phi:\mathbb{D} \rightarrow \mathbb{E}(\kappa , \tau)$ an immersion of constant mean curvature $H$ from the disk $\mathbb{D}= \{z \in \mathbb{C}: \abs{z} < 1 \}$ into $\mathbb{E}(\kappa , \tau)$; we call it an $H-$disk. Moreover, we will assume that the boundary $\Gamma$ is a smooth curve.
\subsection{Immersed compact disks in $\mathbb{E}(\kappa , \tau)$, $\tau=0$}
The classification of immersed compact disks with constant mean curvature in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$ was studied by M. do Carmo and I. Fernández in \cite{dCF}; under certain conditions on the curve $\Gamma$, they showed that $\phi(\mathbb{D})$ is part of an Abresch-Rosenberg surface in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$. In this section, we classify immersed compact disks assuming more general geometric conditions on $\Gamma$ than those given in \cite{dCF}. \\
Let $\Omega$ be an Abresch-Rosenberg surface in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$. We denote by $\nu_{1} = \langle \xi,N_{1} \rangle$ the angle function defined along the immersion $\phi$, where $N_{1}$ is the unit normal vector field defined along $\phi(\overline{\mathbb{D}})$, and by $\nu_{2} = \langle \xi, N_{2} \rangle$ the angle function defined along $\Omega$, where $N_{2}$ is the unit normal vector field defined along the surface $\Omega$.
\begin{theorem}
\label{Coro2}
Let $\phi:{\overline{\mathbb{D}}} \rightarrow \mathbb{M}^2 (\kappa)\times \mathbb{R}$ be a non-minimal $H_{1}-$disk with regular boundary $\Gamma$. Suppose that $\phi$ meets an Abresch-Rosenberg $H_{2}-$surface $\Omega$ transversally along $\Gamma$ at a constant angle. Assume also that $\Gamma$ is of one of the following types:
\begin{enumerate}
\item $\Gamma$ is a horizontal curve.
\item $\Gamma$ is a vertical curve of the immersion $\phi$ and the surface $\Omega$.
\item $H_{1}=H_{2}$ and $\nu_{1} =-\nu _2$ along $\Gamma$.
\end{enumerate}
Then, $\phi(\overline{\mathbb{D}})$ is part of an Abresch-Rosenberg surface in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$.
\end{theorem}
\begin{proof}
Set $\phi(\mathbb{S}^{1})=\Gamma$. Since the Abresch-Rosenberg differential vanishes on $\Omega$, every curve on $\Omega$, and in particular $\Gamma$, is an AR-line of curvature of $\Omega$; then, from the hypothesis and Corollary \ref{CoroDCF}, $\Gamma$ is an AR-line of curvature of the immersion $\phi$. So, from Proposition \ref{A-R}, the imaginary part of the Abresch-Rosenberg differential must vanish along $\Gamma$. Finally, the Schwarz Reflection Principle implies that the Abresch-Rosenberg differential must vanish on $\phi(\overline{\mathbb{D}})$, and Theorem \ref{QVa} gives the result.
\end{proof}
\begin{example} Consider, in $\mathbb{H}^{2} \times \mathbb{R}$, the totally geodesic plane $\Sigma$ parametrized as
$$\psi(x,y) = (\cosh(x),0,\sinh(x),y)$$ where $x,y \in \mathbb{R}$ and the rotational sphere $\Omega$ with constant mean curvature $\frac{1}{\sqrt{2}}$ parametrized as
$$\varphi(u,v)=(\cosh(r(u)), \sinh(r(u))\cos(v), \sinh(r(u))\sin(v),h(u)) $$
where $(u,v) \in [-1,1] \times \mathbb{R}/2\pi$,
$r(u) = 2 {\rm arcsinh}(\sqrt{1-u^{2}})$ and $h(u)=\frac{4}{\sqrt{2}} {\rm arcsin}(\frac{\sqrt{2}}{2}u)$.\\
Observe that the surface $\Sigma$ is an example of a minimal surface whose Abresch-Rosenberg differential does not vanish (see \cite{Rosen}, Section 4); moreover, a straightforward computation shows that $$N_{\psi}(x,y)= (0,1,0,0)$$
and $$N_{\varphi}(u,v)= (h'(u) \sinh(r(u)), h'(u) \cos(v) \cosh(r(u)), h'(u) \cosh(r(u)) \sin(v), -r'(u))$$
are the normal vector fields of $\psi$ and $\varphi$ respectively. \\
Now, the intersection set $C =\Sigma \cap \Omega$ consists of the curves parametrized by $\gamma$ as
$$\gamma(s) = (\cosh(r(s)), 0 , \sinh(r(s))\sin(t), h(s)),$$ where $s \in [-1,1]$ and $t=\frac{\pi}{2}$ or $t=\frac{3 \pi}{2}$. So, computing $\langle N_{\psi}, N_{\varphi} \rangle$ along the curve $\gamma$, we find that
$$ \langle N_{\psi}, N_{\varphi} \rangle |_{\gamma(s)} = h'(s) \cos(t) \cosh(r(s)) = 0.$$
Hence, along $\gamma$, the surfaces $\Sigma$ and $\Omega$ intersect at constant angle. \\
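These computations admit a quick symbolic sanity check. The following sketch assumes coordinates are taken in the hyperboloid model of $\mathbb{H}^{2} \times \mathbb{R}$, with the Lorentzian product on the first three coordinates (a convention on our part), and keeps the profile functions $r(u)$, $h(u)$ generic, since the orthogonality identities below do not depend on them:
\begin{verbatim}
# Symbolic sanity check for the example above (hyperboloid model of
# H^2 x R, Lorentzian product on the first three coordinates -- our
# assumed convention).  r(u) and h(u) are kept generic.
import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Function('r')(u)
h = sp.Function('h')(u)

phi = sp.Matrix([sp.cosh(r), sp.sinh(r)*sp.cos(v),
                 sp.sinh(r)*sp.sin(v), h])
N_phi = sp.Matrix([h.diff(u)*sp.sinh(r),
                   h.diff(u)*sp.cos(v)*sp.cosh(r),
                   h.diff(u)*sp.cosh(r)*sp.sin(v),
                   -r.diff(u)])
N_psi = sp.Matrix([0, 1, 0, 0])

def prod(a, b):  # ambient metric in this model
    return -a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]

print(sp.simplify(prod(N_phi, phi.diff(u))))            # 0: normal to phi_u
print(sp.simplify(prod(N_phi, phi.diff(v))))            # 0: normal to phi_v
print(sp.simplify(prod(N_psi, N_phi).subs(v, sp.pi/2))) # 0 along the curve
\end{verbatim}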
Finally, we define the surface $\Gamma$ as the part of the totally geodesic plane $\Sigma$ enclosed by the rotational surface $\Omega$. Then we have $\partial \Gamma \equiv C$, and there exists an immersion $\phi:\mathbb{D} \rightarrow \Sigma$ such that
\begin{itemize}
\item $\phi(\mathbb{D}) = \Gamma$,
\item $\phi(\mathbb{S}^{1})= C$ and
\item $\phi$ meets $\Omega$ along the curve $C$ at a constant angle,
\end{itemize}but $\phi$ is not a part of an Abresch-Rosenberg surface. Hence, meeting an Abresch-Rosenberg surface along the boundary at a constant angle is not, by itself, sufficient; some additional condition as in Theorem \ref{Coro2} is needed.\\
\end{example}
In Theorem \ref{Coro2} we have omitted two cases:
\begin{enumerate}
\item When $\Omega$ is a minimal Abresch-Rosenberg surface in $\mathbb{M}^{2}(\kappa) \times \mathbb{R}$, then $\Omega$ is a slice by Theorem \ref{QVa}. The case when $\Omega$ is a slice was studied in \cite[Theorem 9]{Cal}.
\item When the immersion $\phi$ is a minimal disk and $\Gamma$ is horizontal, one can solve this case using the maximum principle, comparing $\phi$ with a slice.
\end{enumerate}
\begin{remark}
Theorem \ref{Coro2} generalizes the classification result given by do Carmo and Fernández in \cite[Corollary 4.1]{dCF} for immersed compact disks with constant mean curvature: we only assume that $\Gamma$ is horizontal, without assuming that $\Gamma$ is a line of curvature of the second fundamental form of the immersion.
\end{remark}
\subsection{Immersed compact disks in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$}
Now, we deal with $H-$ disks in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$. We recall that for this class of immersions, we consider the Abresch-Rosenberg shape operator on $\phi(\mathbb{D})$ defined by formula \eqref{ARTraceless}. Then, using Corollary \ref{CorodCF}, we extend the above classification result to the case $\tau \neq 0$.
\begin{theorem}
\label{tNoZ}
Let $\phi: \mathbb{D} \rightarrow \mathbb{E}(\kappa , \tau)$, $\tau \neq 0$, be an $H-$disk with regular boundary. Suppose the boundary is parametrized by a regular curve $\gamma$ and is of one of the following types:
\begin{enumerate}
\item $\gamma$ is the tangent intersection of the immersion $\phi$ with an Abresch-Rosenberg surface $\Omega$ with the same mean curvature vector.
\item $\gamma$ is the transverse intersection, at a constant angle, of the immersion $\phi$ with an Abresch-Rosenberg surface $\Omega$ with the same mean curvature and whose angle function is opposite to the angle function of the immersion $\phi$ along $\gamma$.
\end{enumerate}
Then, $\phi(\mathbb{D})$ is a part of an Abresch-Rosenberg surface in $\mathbb{E}(\kappa , \tau)$.
\end{theorem}
\begin{proof}
Set $\phi(\mathbb{S}^{1})=\Gamma$. Since the Abresch-Rosenberg differential vanishes on $\Omega$, the curve $\Gamma$ is an AR-line of curvature of $\Omega$; then, from the hypothesis and Corollary \ref{CorodCF}, $\Gamma$ is an AR-line of curvature of the immersion $\phi$. So, from Proposition \ref{A-R}, the imaginary part of the Abresch-Rosenberg differential must vanish along $\Gamma$. Finally, the Schwarz Reflection Principle implies that the Abresch-Rosenberg differential must vanish on $\phi(\overline{\mathbb{D}})$, and Theorem \ref{QVa} gives the result.
\end{proof}
\subsection{Immersed compact disks with non-regular boundary in $\mathbb{E}(\kappa , \tau)$.}
\label{CompaNoreg}
Now, we will study $H-$disks $\phi: \mathbb{D} \rightarrow \mathbb{E}(\kappa , \tau)$ with piece-wise regular boundary. Moreover, we suppose that $\phi(\mathbb{D})$ is contained in the interior of a differentiable surface without boundary in $\mathbb{E}(\kappa , \tau)$.\\
First, we recall a result that gives conditions for a disk-type surface to be umbilical with respect to a Codazzi pair with constant mean curvature.
\begin{theorem}[\cite{ER3}]
\label{ESP-FER}
Let $\Sigma$ be a compact disk with piece-wise smooth boundary. We call the finite set of non-regular boundary points the vertices of the surface. Assume that $\Sigma$ is contained as an interior set in a differentiable surface $\hat{\Sigma}$ without boundary.
Let $(I,II)$ be a Codazzi pair with constant mean curvature $H(I,II)$ on $\hat{\Sigma}$. Assume also that the following conditions hold:
\begin{enumerate}
\item The number of vertices in $\partial \Sigma$ with an angle $< \pi$ (measured with respect to the metric $I$) is less than or equal to 3.
\item The regular curves in $\partial \Sigma$ are lines of curvature for the pair $(I,II)$.
\end{enumerate}
Then $\Sigma$ is totally umbilical for the pair $(I,II)$.
\end{theorem}
If $\Sigma$ is an $H-$ surface in $\mathbb{E}(\kappa , \tau)$, then the fundamental pair $(I,II_{S})$ defined by equation \eqref{Oper} when $\tau=0$ and by equation \eqref{ARTraceless} when $\tau \neq 0$ is a Codazzi pair, and the mean curvature $H(I,II_{S})$ of the pair is constant. Then, using Theorem \ref{ESP-FER}, we obtain the following:
\begin{theorem}
\label{CoroSR}
Let $\Sigma$ be an $H-$ disk in $\mathbb{E}(\kappa , \tau)$ with piece-wise differentiable boundary. Assume also that the following conditions are satisfied:
\begin{enumerate}
\item $\Sigma$ is contained as an interior set in a smooth $H$-surface $\hat{\Sigma}$ in $\mathbb{E}(\kappa , \tau)$ without boundary.
\item The number of vertices in $\partial \Sigma$ with angle $<$ $\pi$ is less than or equal to 3.
\item The regular curves in $\partial \Sigma$ are AR-lines of curvature of $\Sigma$.
\end{enumerate}
Then, $\Sigma$ is a part of an Abresch-Rosenberg surface in $\mathbb{E}(\kappa , \tau)$.
\end{theorem}
The proof of Theorem \ref{Coro2} shows that if an $H-$ disk in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$ with a horizontal boundary curve $\Gamma$ meets an Abresch-Rosenberg surface at a constant angle along $\Gamma$, then $\Gamma$ is an AR-line of curvature of the immersion; hence Theorem \ref{CoroSR} implies the following corollary:
\begin{Corollary}
\label{NR1}
Let $\phi: \mathbb{D} \rightarrow \mathbb{M}^2 (\kappa)\times \mathbb{R}$ be a non-minimal $H-$ disk with piece-wise differentiable boundary $\Gamma$ that meets an Abresch-Rosenberg $H_{2}-$ surface $\Omega$ transversally along $\Gamma$ at a constant angle. Assume also that the following conditions are satisfied:
\begin{enumerate}
\item $\phi(\mathbb{D})$ is contained as an interior set in a smooth $H-$surface $\hat{\Sigma}$ in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$ without boundary.
\item The number of vertices in $\Gamma$ with angle $<$ $\pi$ is less than or equal to 3.
\item Every regular component $\gamma$ of $\Gamma$ is of one of the following types:
\begin{itemize}
\item $\gamma$ is a horizontal curve of $\Omega$.
\item $\gamma$ is a vertical curve of the immersion $\phi$ and the surface $\Omega$.
\item $H=H_{2}$ and the angle function of the immersion $\phi$ is opposite to the angle function of $\Omega$ along $\gamma$.
\end{itemize}
\end{enumerate}
Then, $\phi(\mathbb{D})$ is a part of an Abresch-Rosenberg surface in $\mathbb{M}^2 (\kappa)\times \mathbb{R}$.
\end{Corollary}
Theorem \ref{tNoZ} shows that if an $H-$ disk in $\mathbb{E}(\kappa , \tau)$, $\tau \neq 0$, has regular boundary, then under certain conditions on $\Gamma$ we can conclude that $\Gamma$ is an AR-line of curvature of the immersion. So, using Theorem \ref{CoroSR}, we obtain:
\begin{Corollary}
\label{NR2}
Let $\phi: \mathbb{D} \rightarrow \mathbb{E}(\kappa , \tau)$, $\tau \neq 0$, be a $H-$disk with piece-wise differentiable boundary $\Gamma$. Assume also that the following conditions are satisfied:
\begin{enumerate}
\item $\phi(\mathbb{D})$ is contained as an interior set in a smooth $H-$surface $\hat{\Sigma}$ in $\mathbb{E}(\kappa , \tau)$ without boundary.
\item The number of vertices in $\Gamma$ with angle $<$ $\pi$ is less than or equal to 3.
\item Every regular component $\gamma$ of $\Gamma$ is one of the following types:
\begin{itemize}
\item $\gamma$ is a tangent intersection of $\phi(\mathbb{D})$ with an Abresch-Rosenberg surface $\Omega$ with the same mean curvature vector.
\item $\gamma$ is a transverse intersection, at a constant angle, of $\phi(\mathbb{D})$ with an Abresch-Rosenberg surface $\Omega$ with the same constant mean curvature and whose angle function is opposite to the angle function of $\phi(\mathbb{D})$ along $\gamma$.
\end{itemize}
\end{enumerate}
Then, $\phi(\mathbb{D})$ is a part of an Abresch-Rosenberg surface in $\mathbb{E}(\kappa , \tau)$.\\
\end{Corollary}
The author was supported by CAPES-Brazil and by CNPq-Brazil.\\
|
2,877,628,091,199 | arxiv | \section{Introduction} \label{Introduction}
In a recent paper, \emph{"Nonparametric Analysis of Random Utility Models"}, \cite{kitamura2012} (henceforth KS) develop a test for nonparametric testing of Random Utility Models (RUM). They test the hypothesis that a repeated cross-section of demand data might have been generated by a population of rational consumers. A practical implementation of this test leads to a challenging computational problem. The linear program proposed by \cite{mcfadden1990} is extended with a quadratic objective function, minimizing the Euclidean distance. In effect, the minimum distance between a vector and a cone in a high-dimensional space is calculated. This quadratic program must be solved once to compute the test statistic and once for each bootstrap replication used to simulate the critical value. The quadratic program is large, with one variable for each rational choice type. The number of such types rises exponentially with the number of choice situations, and even identifying all types is time consuming. In fact, this is the main limiting factor in KS's implementation. \\
The computational problems handled in this paper are similar to those encountered in the study of random utility models in binary choice settings, where rational choice types are represented by strict linear orders over the choice alternatives \citep{block1960}. Specifically, \cite{cavagnaro2014} calculate Bayes factors, a measure for model comparison, for the random utility model. Calculating these factors requires numerous checks to test whether a vector lies inside a polytope; in effect, they test whether the Euclidean distance is equal to zero. For small datasets, inequalities describing the polytope are known, but these descriptions grow quickly with the number of choice alternatives, and no full description exists for eight or more choice alternatives \cite{Marti2011ch}. \cite{smeulders2018COR} propose algorithms capable of handling larger datasets by making use of an adaptation of the linear program of \cite{mcfadden1990}. As in the current paper, the large number of rational choice types, and thus of variables in the linear program, makes solving the complete model inefficient. By transforming the problem into an optimization problem, through which the point in the polytope minimizing the Manhattan distance to the vector is found, a column generation approach can be applied. Informally, a column generation approach exploits the fact that optimization problems with a large number of variables but relatively few constraints have optimal solutions that use only a small number of variables. A column generation approach starts with a limited number of variables and identifies new ones as needed through a separate optimization problem, and thus circumvents the problem of having to identify all rational choice types.\\
In this paper, we will use some of the same ideas. We propose a column generation approach for the Euclidean distance calculation. Furthermore, we note that the tightening procedure in KS is incompatible with column generation, as it requires knowledge of all rational choice types. To overcome this obstacle, we show that a slight modification of the procedure removes this requirement. To show the practical benefit of the column generation algorithm, we re-analyze the empirical application of \cite{deb2017revealed} (henceforth DKSQ). To increase computation speed, we develop heuristic algorithms to generate interesting rational choice types for this setting. We show that the computational improvements make it possible to study much longer budget sequences.\\
The paper unfolds as follows. In section \ref{Sec:RUM}, we briefly describe the Random Utility Model (RUM). Section \ref{Sec:KitSto} lays out the test described in KS, with a focus on the computational problem of calculating the test statistic. Next, in section \ref{Sec:Column Generation}, we describe a column generation algorithm, which we use to compute the test statistic more efficiently. Section \ref{Sec:Tight} describes how to handle the tightening of the cone in a manner that is consistent with column generation. An empirical application is contained in Section \ref{Sec:Application}. We first describe the particular model tested in DKSQ. Next, we show how to implement the general column generation algorithm for this setting. Finally, we show the computational benefits of our approach.
\subsection{Random Utility Models} \label{Sec:RUM}
We briefly describe the RUM in a discrete choice setting. KS handle a continuous choice setting, as does the application by DKSQ, but both rely on discretization to make testing possible. Consider the set $\mathcal{X}$ of all discrete choice options; we denote individual choice options by $x_i$. Let $u: \mathcal{X} \rightarrow \mathbb{R}$ denote a utility function. For simplicity, we assume $u(x_i) \ne u(x_j)$ for all $x_i, x_j \in \mathcal{X}$, $i \ne j$. A choice situation $t$ is characterized by a subset of the discrete choice options, denoted $\mathcal{X}_t \subseteq \mathcal{X}$. A rational actor with a utility function $u$ then picks the choice option $x$ satisfying
\begin{align*}
x = \arg \max_{x_j \in \mathcal{X}_t} u(x_j).
\end{align*}
We furthermore denote the choice option $x$ chosen in situation $t$ by $x(t)$.\\
Given the discrete nature of the choice options, there is a finite number of ways an actor can choose over all situations. We characterize a choice type, indexed by $r$, by the choices she makes in each choice situation. Specifically, we encode a choice type $r$ as $\mathbf{a}_r = (a_{r,1,1}, \ldots, a_{r,T,|\mathcal{X}|})$, with $a_{r,t,i} = 1$ if choice option $x_{i}$ is chosen in situation $t$ by type $r$ and $a_{r,t,i} = 0$ otherwise. The set of rational choice types $\mathcal{R}$ is the set of all types $r$ for which there exists some utility function $u_r$ such that
\begin{align*}
a_{r,t,i} = 1 \text{ if and only if } x_i = \arg \max_{x_j \in \mathcal{X}_t} u_r(x_j).
\end{align*}
Let $P_\mathcal{R}$ be a probability distribution over all rational choice types, and let $p_r$ be the probability of a given choice type. We define the sets $\mathcal{R}_{t,i}$ as the subsets of $\mathcal{R}$ such that $r \in \mathcal{R}_{t,i}$ if and only if $a_{r,t,i} = 1$, i.e. $\mathcal{R}_{t,i}$ is the set of rational choice types which choose $x_i$ in choice situation $t$. Now suppose we observe choices for the given choice situations, with $\pi_{t,i}$ the rate at which option $x_i$ is chosen in situation $t$.
\begin{definition} \label{def:Stoch Rat}
The observed choices $\pi$ are stochastically rationalizable if and only if there exists a distribution $P_\mathcal{R}$ over choice types, such that
\begin{align}
\sum_{r \in \mathcal{R}_{t,i}} p_r = \pi_{t,i} & & \forall t = 1,\ldots,T, \ x_i \in \mathcal{X}_t. \label{RUM}
\end{align}
\end{definition}
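As an illustration, the set $\mathcal{R}$ can be enumerated by brute force for small instances. The following sketch uses a hypothetical toy instance with three options and two choice situations; it generates every strict utility ranking and records the induced choice type (distinct rankings may induce the same type):
\begin{verbatim}
# Brute-force enumeration of the rational choice types R for a toy
# instance (three options, two choice situations -- hypothetical data).
from itertools import permutations

X = [0, 1, 2]                      # the discrete choice options
situations = [{0, 1}, {0, 1, 2}]   # X_1 and X_2

types = set()
for ranking in permutations(X):    # ranking[0] is the most preferred
    u = {x: -ranking.index(x) for x in X}   # utilities with this order
    choice = tuple(max(Xt, key=lambda x: u[x]) for Xt in situations)
    types.add(choice)

print(sorted(types))  # 4 rational types out of 2 * 3 = 6 choice patterns
\end{verbatim}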
Before continuing, we would like to highlight the geometric interpretation of Definition \ref{def:Stoch Rat}. Consider a space whose dimension equals the total number of choice options summed over all choice situations, $\sum_{t=1}^{T} |\mathcal{X}_t|$. We can interpret $\pi$ as a vector in this space, with $\pi_{t,i}$ the coordinate in the dimension associated with $t$ and $i$. Likewise, the vectors $\mathbf{a}_r$ provide coordinates in each dimension for each rational choice pattern. These vectors describe a convex cone, which we denote by $\mathcal{C}$:
\begin{align} \label{V-Representation}
\mathcal{C} = \{\mathbf{c}| \mathbf{c} = \sum_{r \in \mathcal{R}} \lambda_r \mathbf{a}_r, \lambda_r \geq 0 ,~\forall r \in \mathcal{R}\}.
\end{align}
This representation of the cone is called the $V$-representation, as it is based on the vectors defining the cone. Choice probabilities are rationalizable if and only if $\pi \in \mathcal{C}$. \\
Equivalently, there exists an $H$-representation of the cone, based on hyperplanes. Consider a set of hyperplanes $\mathcal{H} = \mathcal{H}^\leq \cup \mathcal{H}^=$. Each $h \in \mathcal{H}^\leq$ divides the space into half-spaces, one of which is the feasible region (which includes the hyperplane), the other infeasible. For each $h \in \mathcal{H}^=$, only the hyperplane itself is the feasible region. The intersection of these feasible regions is the cone $\mathcal{C}$. Specifically, for each $h \in \mathcal{H}$ there exists a vector $\mathbf{b}_{h} = (b_{h,1,1}, \ldots, b_{h,T,I_T})$, with $I_t = |\mathcal{X}_t|$, describing the hyperplane $\{\mathbf{c} \mid \sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}\, c_{t,i} = 0\}$. Then
\begin{align}\label{H-Represenation}
\mathcal{C} = \left\{\mathbf{c} \left\vert
\begin{array}{l}
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}\, c_{t,i} \leq 0, \ \forall h \in \mathcal{H}^\leq \\
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}\, c_{t,i} = 0, \ \forall h \in \mathcal{H}^=
\end{array}
\right.
\right\}.
\end{align}
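For instance, the cone in $\mathbb{R}^{2}$ generated by $\mathbf{a}_1 = (1,0)$ and $\mathbf{a}_2 = (1,1)$ has the $V$-representation $\{\lambda_1 (1,0) + \lambda_2 (1,1) \mid \lambda_1, \lambda_2 \geq 0\}$ and the equivalent $H$-representation $\{\mathbf{c} \mid -c_2 \leq 0,\ c_2 - c_1 \leq 0\}$: writing $\mathbf{c} = (\lambda_1 + \lambda_2, \lambda_2)$ shows $\lambda_2 = c_2$ and $\lambda_1 = c_1 - c_2$, so non-negativity of the weights corresponds exactly to the two inequalities.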
\subsection{Testing the Random Utility Model} \label{Sec:KitSto}
In this section, we briefly lay out the test described by KS, focusing on the computational problems that arise when implementing the test. We refer to KS for a more thorough explanation of the test.\\
\subsubsection{Test Statistic}
Let $\hat{\pi}$ be an estimator for $\pi$. KS propose to use the Euclidean distance between the vector $\hat{\pi}$ and the cone $\mathcal{C}$ as the test statistic $J_N$. Formally, $J_N$ is the optimal objective value of the problem (\ref{QP Original Start})-(\ref{QP Original End}). In this problem, $p_r$ denotes the probability associated with type $r$, and $s_{t,i}$ denotes the distance, in the dimension associated with option $x_i$ in situation $t$, between the linear combination of the types and the estimated choice probability $\hat{\pi}_{t,i}$. $N$ is the number of observations over all time periods. Note that $J_N = 0$ if, and only if, $\hat{\pi}$ is stochastically rationalizable in the sense of Definition \ref{def:Stoch Rat}.
\begin{align}\label{QP Original Start}
\text{Minimize} & & J_N = N \sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s^2_{t,i} \\
\text{Subject to} & & \nonumber \\
& & \sum_{r \in\mathcal{R}_{t,i}} p_r + s_{t,i} & = \hat{\pi}_{t,i} & & \forall t = 1,\ldots,T, \ i = 1,\ldots,I_t \\
& & p_r & \geq 0 & & \forall r \in\mathcal{R} \label{QP Original End}
\end{align}
The projection of $\hat{\pi}$ onto $\mathcal{C}$ is denoted by $\hat{\eta} = \sum_{r \in \mathcal{R}} p_r \mathbf{a}_r$, where $\mathbf{p}$ is an optimal solution to this problem.
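Note that, after eliminating the slack variables $s_{t,i}$, problem (\ref{QP Original Start})-(\ref{QP Original End}) is a non-negative least squares (NNLS) problem, $J_N = N \min_{\mathbf{p} \geq 0} \lVert \hat{\pi} - A\mathbf{p} \rVert^2$, where the columns of $A$ are the vectors $\mathbf{a}_r$. The following sketch (assuming $A$ has already been built, which the column generation of Section \ref{Sec:Column Generation} is designed to avoid) computes $J_N$ with an off-the-shelf NNLS solver:
\begin{verbatim}
# Sketch: test statistic J_N via non-negative least squares.
# Assumes A (one column per rational choice type), the estimated
# frequencies pi_hat, and the sample size N are given.
import numpy as np
from scipy.optimize import nnls

def test_statistic(A, pi_hat, N):
    p, rnorm = nnls(A, pi_hat)    # rnorm = ||A p - pi_hat||_2
    return N * rnorm**2, A @ p    # J_N and the projection eta_hat
\end{verbatim}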
\subsubsection{Critical Value} \label{Sec:CritVal}
The critical value is computed through a bootstrap procedure, which relies on a tuning parameter $\tau_N$. Given $M$ bootstrap replications with sample frequencies $\hat{\pi}^{*(m)}$ for $m = 1, \ldots, M$, the critical value for $J_N$ is computed as follows (a computational sketch follows the enumeration).
\begin{enumerate}
\item Obtain the $\tau_N$-tightened estimator $\hat{\eta}_{\tau_N} = \sum_{r \in\mathcal{R}} p_r \mathbf{a}_r$, where $\mathbf{p}$ solves (\ref{QP Tight Start})-(\ref{QP Tight End}).
\begin{align}\label{QP Tight Start}
\text{Minimize} & & J_N = N \sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s^2_{t,i} \\
\text{Subject to} & & \nonumber \\
& & \sum_{r \in\mathcal{R}_{t,i}} p_r + s_{t,i} & = \hat{\pi}_{t,i} & & \forall t = 1,\ldots,T, \ i = 1,\ldots,I_t \\
& & p_r & \geq \tau_N / |\mathcal{R}| & & \forall r \in\mathcal{R} \label{QP Tight End}
\end{align}
\item Define the $\tau_N$-tightened recentered bootstrap estimators.
\begin{align}
\hat{\pi}_{\tau_N}^{*(m)} = \hat{\pi}^{*(m)} - \hat{\pi} + \hat{\eta}_{\tau_N}.
\end{align}
\item The bootstrap test statistics $J_N^{*(m)}(\tau_N)$ are the solutions to (\ref{QP Tight Start})-(\ref{QP Tight End}), using $\hat{\pi}_{\tau_N}^{*(m)}$ for the right-hand sides of the constraints.
\item Use the empirical distribution of $J_N^{*(m)}(\tau_N)$, $m = 1, \ldots, M$, to obtain the critical value for $J_N$.
\end{enumerate}
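The tightened problem (\ref{QP Tight Start})-(\ref{QP Tight End}) again reduces to NNLS: substituting $p_r = q_r + \tau_N / |\mathcal{R}|$ with $q_r \geq 0$ merely shifts the target vector (this reformulation is ours, not KS's). A sketch of the whole procedure, assuming as before that the full matrix $A$ is available, with hypothetical inputs \texttt{bootstrap\_samples} (an iterable of the $\hat{\pi}^{*(m)}$) and a significance level \texttt{alpha}:
\begin{verbatim}
# Sketch of the bootstrap critical value (steps 1-4 above).
import numpy as np
from scipy.optimize import nnls

def solve_tightened(A, target, N, tau_N):
    # p_r = q_r + tau_N/|R| turns the tightened QP into plain NNLS.
    shift = (tau_N / A.shape[1]) * A.sum(axis=1)
    q, rnorm = nnls(A, target - shift)
    return N * rnorm**2, A @ q + shift   # J_N(tau_N), tightened projection

def critical_value(A, pi_hat, N, bootstrap_samples, tau_N, alpha=0.05):
    _, eta_tight = solve_tightened(A, pi_hat, N, tau_N)       # step 1
    stats = []
    for pi_star in bootstrap_samples:                         # m = 1..M
        recentered = pi_star - pi_hat + eta_tight             # step 2
        J_star, _ = solve_tightened(A, recentered, N, tau_N)  # step 3
        stats.append(J_star)
    return np.quantile(stats, 1 - alpha)                      # step 4
\end{verbatim}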
\subsection{Computational Difficulties}
To compute the test statistic $J_N$ and to obtain a critical value for it, the problem (\ref{QP Original Start})-(\ref{QP Original End}) must be solved once, and the problem (\ref{QP Tight Start})-(\ref{QP Tight End}) must be solved $1 + M$ times (once to obtain the $\tau_N$-tightened estimator, and then once for each bootstrap replication). As mentioned by KS, solving these problems is computationally challenging. The straightforward approach implemented by KS requires that each rational choice type $r \in \mathcal{R}$ is first identified (though this must be done only once), after which a large quadratic program must be solved. The number of rational choice types can however rise exponentially with the number of periods considered. This makes the approach by KS computationally costly for moderately sized instances, and makes larger instances intractable. Table \ref{table:rationalchoicetypes} shows the approximate number of rational choice types for instances of different sizes in the application of DKSQ.\footnote{The number of total choice types is calculated exactly; random sampling is used to estimate the ratio of rational choice types to total choice types.}
\begin{table}[htbp]
\centering
\begin{tabular}{r|ll|ll|ll|}
& \multicolumn{2}{c|}{3 Goods} & \multicolumn{2}{c|}{4 Goods} & \multicolumn{2}{c|}{5 Goods} \\
& \multicolumn{1}{c}{Min} & \multicolumn{1}{c|}{Max} & \multicolumn{1}{c}{Min} & \multicolumn{1}{c|}{Max} & \multicolumn{1}{c}{Min} & \multicolumn{1}{c|}{Max} \\ \hline
6 Periods & $3.00*10^{1}$ & $5.44*10^{4}$ & $1.38*10^{3}$ & $4.30*10^{5}$ & $1.38*10^{3}$ & $4.30*10^{5}$ \\
10 Periods & $5.14*10^{3}$ & $3.35*10^{8}$ & $1.52*10^{8}$ & $1.03*10^{13}$ & $6.76*10^{12}$ & $6.93*10^{16}$ \\
15 Periods & $2.12*10^{9}$ & $3.87*10^{15}$ & $6.76*10^{17}$ & $7.05*10^{21}$ & $1.05*10^{19}$ & $4.07*10^{22}$ \\
20 Periods & $2.01*10^{18}$ & $2.98*10^{22}$ & & & & \\
\end{tabular}
\caption{Approximate maximum and minimum number of rational choice types in the DKSQ application.}
\label{table:rationalchoicetypes}
\end{table}
In the following sections, we describe how these problems can be solved without requiring the identification of all choice types, by making use of a column generation algorithm. In section \ref{Sec:Column Generation}, we handle the problem (\ref{QP Original Start})-(\ref{QP Original End}). Problem (\ref{QP Tight Start})-(\ref{QP Tight End}) is subtly different, requiring a strictly positive lower bound on the variables $p_r$, associated with the choice types. Solving this problem without identifying all choice types is thus not possible. However, in section \ref{Sec:Tight} we propose minor changes to the KS-procedure for obtaining the critical value, so that these strictly positive lower bounds are no longer necessary.
\section{Euclidean Projection through Column Generation} \label{Sec:Column Generation}
We tackle this problem by making use of a column generation algorithm. Instead of solving (\ref{QP Original Start})-(\ref{QP Original End}) directly, we start with a limited version of this problem, using only a small set of its variables. We call this problem the {\it restricted master}. Given a solution to this problem, we find a hyperplane separating the vector $\hat{\pi}$ from the restricted polytope. In a second problem, called the {\it pricing problem}, we check whether any point of the full polytope lies on the same side of the separating hyperplane as $\hat{\pi}$. If no such point exists, we show that the solution to the restricted master is also a solution to (\ref{QP Original Start})-(\ref{QP Original End}). If such a point does exist, we add the corresponding variable to the restricted master and re-solve this problem. Such an approach to computing the distance between a point and a polytope, by iteratively taking into account additional vertices of the polytope, was originally described by \cite{wolfe1976}. Wolfe, however, makes use of an exhaustive list of vertices of the polytope, which is impractical given the large number of vertices in our application. \cite{cadoux2010} extends this approach to a setting without an exhaustive list of vertices. \\
Let us look at the proposed algorithm step by step. First, we solve problem (\ref{QP Original Start})-(\ref{QP Original End}) with a restricted set $\bar{\mathcal{R}}$ of $k$ choice patterns. The {\it bar} notation signifies that the variables, sets or solutions belong to a restricted master problem. From the optimal solution to this restricted problem, $\bar{\mathbf{p}}^* = (\bar{p}^*_1, \ldots, \bar{p}^*_k)$ and $\bar{\mathbf{s}}^* = (\bar{s}^*_{1,1}, \ldots, \bar{s}^*_{T,I_T})$, we can construct the Euclidean projection of $\hat{\pi}$ on the restricted cone $\bar {\mathcal{C}}$. This projection is the vector $\bar{\mathbf{v}}^* = (\bar{v}^*_{1,1}, \ldots, \bar{v}^*_{T,I_T})$ with $\bar{v}^*_{t,i} = \sum_{r \in \bar{\mathcal{R}}_{t,i}} \bar{p}^*_r$. Now consider the characterization of a Euclidean projection on a convex set (in this case the cone $\mathcal{C}$):
\begin{theorem}
$\mathbf{v}^*$ is the Euclidean projection of $\hat{\pi}$ on $\mathcal{C}$ if and only if $(\hat{\pi} - \mathbf{v}^*) \cdot (\mathbf{v} - \mathbf{v}^*) \leq 0 $, for all $\mathbf{v} \in \mathcal{C}$.
\end{theorem}
Since $\mathcal{C}$ is the set of linear combinations of vectors $\mathbf{a}_r, r \in \mathcal{R}$, we can also state the following result:
\begin{theorem}
$\mathbf{v}^*$ is the Euclidean projection of $\hat{\pi}$ on $\mathcal{C}$ if and only if $(\hat{\pi}- \mathbf{v}^*) \cdot (\mathbf{a}_r - \mathbf{v}^*) \leq 0 $, for all $r \in \mathcal{R}$.
\end{theorem}
\begin{proof}
Suppose that $(\hat{\pi}- \mathbf{v}^*) \cdot (\mathbf{a}_r - \mathbf{v}^*) \leq 0 $ for all $r \in \mathcal{R}$; we argue that $(\hat{\pi}- \mathbf{v}^*) \cdot (\mathbf{v} - \mathbf{v}^*) \leq 0 $ for all $\mathbf{v} \in \mathcal{C}$. Note first that a candidate $\mathbf{v}^*$ arising, as in our algorithm, as the Euclidean projection of $\hat{\pi}$ on a sub-cone of $\mathcal{C}$ satisfies $(\hat{\pi}- \mathbf{v}^*) \cdot \mathbf{v}^* = 0$, since both $2\mathbf{v}^*$ and $\mathbf{0}$ lie in that sub-cone. For each $\mathbf{v} \in \mathcal{C}$, there exist non-negative numbers $\lambda_r$ such that $\mathbf{v} = \sum_{r \in \mathcal{R}} \lambda_r \mathbf{a}_r$. Thus, $(\hat{\pi}- \mathbf{v}^*) \cdot (\mathbf{v} - \mathbf{v}^*) = (\hat{\pi}- \mathbf{v}^*) \cdot \mathbf{v} = \sum_{r \in \mathcal{R}} \lambda_r (\hat{\pi}- \mathbf{v}^*) \cdot \mathbf{a}_r \leq \sum_{r \in \mathcal{R}} \lambda_r (\hat{\pi}- \mathbf{v}^*) \cdot \mathbf{v}^* = 0$, where the inequality uses the hypothesis together with $(\hat{\pi}- \mathbf{v}^*) \cdot \mathbf{v}^* = 0$. The converse direction is immediate from the previous theorem, since each $\mathbf{a}_r$ belongs to $\mathcal{C}$.
\end{proof}
Note that $(\hat{\pi}- \mathbf{v}^*) = \mathbf{s}^*$; thus we can rewrite $(\hat{\pi}- \mathbf{v}^*) \cdot (\mathbf{a}_r - \mathbf{v}^*) \leq 0 $ as $\mathbf{s}^* \mathbf{a}_r \leq \mathbf{s}^* \mathbf{v}^*$. Given this result, we can check whether $\bar{\mathbf{v}}^*$ is the Euclidean projection of $\hat{\pi}$ on $\mathcal{C}$ by solving the following problem:
\begin{problem}
Does there exist a choice pattern $r \in \mathcal{R}$ such that $\bar{\mathbf{s}}^* \mathbf{a}_r > \bar{\mathbf{s}}^*\bar{\mathbf{v}}^*$?
\end{problem}
To answer this question, we solve a different optimization problem, usually referred to as the {\it pricing} problem.
\begin{align}
\arg \max_{r \in \mathcal{R}} \bar{\mathbf{s}}^* \mathbf{a}_r. \label{pricing problem}
\end{align}
It is clear that once we find an optimal solution to (\ref{pricing problem}), we can easily check whether its value exceeds the threshold $\bar{\mathbf{s}}^* \bar{\mathbf{v}}^*$. If it does, the corresponding choice type is added to the set of choice patterns considered in the restricted problem, which is then re-solved. Otherwise, the solution $\bar{\mathbf{p}}^*, \bar{\mathbf{s}}^*$ to the restricted problem is also the optimal solution to the problem considering the full set of choice patterns. \\
Although an optimal solution to (\ref{pricing problem}) is preferable, it is important to note that any $r \in \mathcal{R}$ with $\bar{\mathbf{s}}^* \mathbf{a}_r > \bar{\mathbf{s}}^* \bar{\mathbf{v}}^*$ is sufficient to continue the column generation. To speed up computation, it can thus be advantageous to quickly find any type $r \in \mathcal{R}$ exceeding the threshold rather than to spend a longer time finding an optimal solution to (\ref{pricing problem}). Algorithm \ref{QP Algorithm} summarizes the column generation algorithm.
\begin{algorithm} \caption{Quadratic Program Column Generation Algorithm}
\label{QP Algorithm}
\begin{algorithmic}[1]
\STATE Solve Initial Restricted Master Problem, optimal solution $\bar{p}^*, \bar{s}^*, \bar{v}^*$. \label{master}
\WHILE {there exists $r \in \mathcal{R}$ with $\bar{s}^* a_r > \bar{s}^*\bar{v}^*$}
\STATE Find a choice pattern $r \in \mathcal{R}$ with $\bar{s}^* a_r > \bar{s}^*\bar{v}^*$.
\STATE Set $\bar{\mathcal{R}} := \bar{\mathcal{R}} \cup \{r\}$.
\STATE Re-Solve Restricted Master Problem, optimal solution $\bar{p}^*, \bar{s}^*, \bar{v}^*$.
\ENDWHILE
\STATE Restricted Master Solution $\bar{p}^*, \bar{s}^*, \bar{v}^*$ is the optimal solution $p^*, s^*, v^*$ to the Complete Master Problem.
\end{algorithmic}
\end{algorithm}
This approach allows us to solve (\ref{QP Original Start})-(\ref{QP Original End}) with only a fraction of the rational choice types identified, as we will show in the application.
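For concreteness, a minimal Python sketch of Algorithm \ref{QP Algorithm} is given below. It is an illustration only (our implementation is in C++, see Section \ref{sec:Results}); the callbacks \texttt{solve\_restricted\_master} and \texttt{solve\_pricing} are hypothetical stand-ins for a QP solver applied to the restricted master and for the pricing routines of Section \ref{sec:Pricing Problem}.
\begin{verbatim}
import numpy as np

def column_generation(pi_hat, initial_types,
                      solve_restricted_master, solve_pricing, tol=1e-9):
    """Project pi_hat on the cone of rational choice types.

    solve_restricted_master(pi_hat, types) -> (p, s, v): solves
        min ||s||^2  s.t.  sum_r p_r a_r + s = pi_hat,  p >= 0
      over the given subset of types, with v = sum_r p_r a_r.
    solve_pricing(s) -> a choice type a_r maximizing s . a_r.
    """
    types = list(initial_types)
    while True:
        p, s, v = solve_restricted_master(pi_hat, types)
        a_new = solve_pricing(s)              # pricing problem
        # no type strictly on pi_hat's side of the separating hyperplane:
        if np.dot(s, a_new) <= np.dot(s, v) + tol:
            return p, s, v                    # projection on the full cone
        types.append(a_new)                   # add the column and re-solve
\end{verbatim}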
\section{Solving Tightened Problems} \label{Sec:Tight}
In problems of the form (\ref{QP Tight Start})-(\ref{QP Tight End}), there is the additional complication of a strictly positive lower bound on $p_r$ for all $r \in \mathcal{R}$. This is incompatible with the column generation algorithm described in the previous section, which only uses a subset of these variables. We work around this problem in two steps. First, we show that for every problem of the form (\ref{QP Tight Start})-(\ref{QP Tight End}), with strictly positive lower bounds, there exists an equivalent problem with zero lower bounds which can be solved using the column generation algorithm. If all rational choice types have a strictly positive lower bound, stating this equivalent problem still requires knowledge of all rational choice types. However, we also show that the tightening can be achieved by setting strictly positive lower bounds for only a subset of the rational choice types.
\begin{lemma}
The problem
\begin{align}\label{QP Tight Eq Start}
\text{Minimize} & & J_N = N \sum_{t = 1}^{T} \sum_{x_i \in \mathcal{X}} s^2_{t,i} \\
\text{Subject to} & & \nonumber \\
& & \sum_{r \in\mathcal{R}_{t,i}} p_r + s_{t,i} & = \hat{\pi}_{t,i} - \sum_{r \in\mathcal{R}_{t,i}} \tau_N / |\mathcal{R}| & & \forall x_{t,i} \in \mathcal{X} \\
& & p_r & \geq 0& & \forall r \in\mathcal{R} \label{QP Tight Eq End}
\end{align}
is equivalent to problem (\ref{QP Tight Start})-(\ref{QP Tight End}).
\end{lemma}
\begin{proof}
Given a feasible solution $(s_{t,i}, p_r)$ to (\ref{QP Tight Start})-(\ref{QP Tight End}), $(s_{t,i}, p'_r = p_r - \tau_N / |\mathcal{R}|)$ is a feasible solution to (\ref{QP Tight Eq Start})-(\ref{QP Tight Eq End}). Since both problems have the same objective function, and the $s_{t,i}$ variables have the same value in both feasible solutions, a solution to (\ref{QP Tight Start})-(\ref{QP Tight End}) implies the existence of a solution to (\ref{QP Tight Eq Start})-(\ref{QP Tight Eq End}) with the same objective value. Likewise, given a feasible solution $(s'_{t,i}, p'_r)$ to (\ref{QP Tight Eq Start})-(\ref{QP Tight Eq End}), $(s'_{t,i}, p_r = p'_r + \tau_N / |\mathcal{R}|)$ is a feasible solution to (\ref{QP Tight Start})-(\ref{QP Tight End}), again with the same objective value. Thus, the optimal solution to both problems will have the same value.
\end{proof}
Stating the equivalent problem still requires knowledge of all rational choice types to adjust the right-hand sides of the constraints. We therefore propose a tightening based on only a subset of the rational choice types.\\
Consider a subset of the rational choice types $\mathcal{R}' \subset \mathcal{R}$, such that for each hyperplane $h \in \mathcal{H}^\leq$, there exists at least one $r \in \mathcal{R}'$ such that $\sum_{t=1}^{T}\sum_{i=1}^{I_t} b_{h,t,i} a_{r,t,i} < 0$. The proof of Lemma 4.1 in KS can be applied to prove the following Lemma.
\begin{lemma}
Define
\begin{align*}
\mathcal{C} = \{\sum_{r \in \mathcal{R}} \lambda_r a_r ~|~ \lambda_r \geq 0,~\forall r \in \mathcal{R}\}
\end{align*}
and let
\begin{align*}
\mathcal{C} = \left\{c \left\vert
\begin{array}{l}
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}~ c_{t,i} \leq 0, \forall h \in \mathcal{H}^\leq \\
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}~ c_{t,i} = 0, \forall h \in \mathcal{H}^=
\end{array}
\right.
\right\}
\end{align*}
be its $H$-representation. For $\tau > 0$, define
\begin{align*}
\mathcal{C}_{\tau} = \left\{\sum_{r \in \mathcal{R}} \lambda_r a_r \left\vert
\begin{array}{ll}
\lambda_r \geq 0 ,&\forall r \in \mathcal{R}\backslash \mathcal{R}' \\
\lambda_r \geq \tau/|\mathcal{R}'|, &\forall r \in \mathcal{R}'
\end{array}
\right.
\right\}.
\end{align*}
Then one also has
\begin{align*}
\mathcal{C}_{\tau} = \left\{c \left\vert
\begin{array}{ll}
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}~ c_{t,i} \leq -\tau\phi_h,&\forall h \in \mathcal{H}^\leq \\
\sum_{t=1}^{T} \sum_{i=1}^{I_t} b_{h,t,i}~ c_{t,i} = 0,&\forall h \in \mathcal{H}^=
\end{array}
\right.
\right\}
\end{align*}
with $\phi_h > 0$ for all $h \in \mathcal{H}^\leq$.
\end{lemma}
This result follows immediately from the proof of Lemma 4.1 in KS.\\
\subsection{Bounds}\label{Sec:Bounds}
Note that identifying the exact distribution of $J_N^{*(m)}(\tau_N)$ is unnecessary: to decide whether $J_N$ falls above or below the critical value, only the fraction of bootstrap test statistics exceeding $J_N$ is needed. This can be exploited by using bounds on the bootstrap test statistics $J_N^{*(m)}(\tau_N)$ to determine the $p$-value more quickly. Specifically, if at any point in the column generation algorithm we can determine, for a given bootstrap repetition, that $J_N^{*(m)}(\tau_N)$ is either strictly larger or strictly smaller than $J_N$, we terminate the algorithm and record the lower or upper bound, respectively. This saves time, since the bootstrap test statistics need not be computed exactly, while the resulting $p$-values remain unchanged. \\
Calculating an upper bound on the bootstrap test statistic is straightforward. The objective value of the restricted master problem is immediately an upper bound on the objective value of the complete master problem, as any solution to the restricted master is also feasible for the complete master. Since a restricted master is already solved in every iteration of the column generation algorithm, no additional work is required to obtain this upper bound.\\
Lower bounds on the bootstrap test statistic can be obtained from (optimal) solutions to the pricing problem as follows. Consider a pricing problem (\ref{con: Price Assign})-(\ref{con: Pricing End}) with objective function $\sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s_{t,i}\alpha_{t,i}$, and let the optimal solution value of this pricing problem be $z^*$. In this case, for each $r \in \mathcal{R}$, we have $\sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s_{t,i}a_{r,t,i} \leq z^*$. Since the cone $\mathcal{C}$ is generated by the vectors $a_r$, this in turn implies that
\begin{align}
\sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s_{t,i}c_{t,i} \leq z^*, \forall c \in \mathcal{C}.
\end{align}
By solving the following, relatively simple, quadratic optimization problem we thus obtain a lower bound on the bootstrap test statistic.
\begin{align}\label{QP 2 Start}
\text{Minimize} & &\sum_{t = 1}^{T} \sum_{i = 1}^{I_t} v^2_{t,i} \\
\text{Subject to} & & \nonumber \\
& & c_{t,i} + v_{t,i} & = \hat{\pi}^{*(m)}_{t,i} & & \forall x_{t,i} \in \mathcal{X} \\
& & \sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s_{t,i}c_{t,i} & \leq z^* & \label{QP 2 End}
\end{align}
Note that obtaining this lower bound requires an optimal solution to the pricing problem. This leads to the following trade-off: using heuristics makes the column generation algorithm more efficient, with less time spent per iteration, while solving the pricing problem to optimality yields a lower bound on the test statistic, which may end the column generation algorithm outright. In our implementation, we only compute the lower bound in iterations for which the pricing problem was solved exactly.
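The resulting control flow for a single bootstrap repetition can be summarised in the following Python-style sketch. The names \texttt{master}, \texttt{pricing} and \texttt{lower\_bound\_qp} are hypothetical wrappers around the restricted master QP, the (heuristic and exact) pricing routines and the relaxation (\ref{QP 2 Start})-(\ref{QP 2 End}); ties with $J_N$ are ignored for brevity.
\begin{verbatim}
def side_of_J_N(pi_star, J_N, master, pricing, lower_bound_qp):
    """Return +1 if J^{*(m)}(tau_N) > J_N and -1 otherwise,
    stopping as soon as a bound decides the comparison."""
    while True:
        p, s, v = master.solve(pi_star)      # restricted master
        if master.objective() < J_N:         # upper bound: more columns
            return -1                        # can only lower the objective
        a_new, exact = pricing(s)            # heuristic first, exact fallback;
                                             # a_new is None if no type with
                                             # s.a_r > s.v was found (exact)
        if exact and lower_bound_qp(pi_star, s) > J_N:
            return +1                        # lower bound already above J_N
        if a_new is None:                    # column generation finished
            return +1 if master.objective() > J_N else -1
        master.add_column(a_new)
\end{verbatim}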
\section{Empirical Application} \label{Sec:Application}
For our empirical application, we use our improved algorithms to replicate the tests performed by \cite{deb2017revealed}. We first briefly summarize the model tested by these authors in Section \ref{sec:GAPP}. We focus on those aspects that are important to the computational problem of calculating the test statistic and critical value. To apply the general column generation approach described in previous sections to the setting of the application, some customization is required. This is described in Section \ref{sec:Pricing Problem}. \\
\subsection{Generalized Axiom of Revealed Price Preference} \label{sec:GAPP}
Consider a dataset $\mathcal{D} = \{(\mathbf{p}_t,\mathbf{q}_t)\}_{t=1}^T$, with $\mathbf{q}_t \in \mathbb{R}^L_+$ a bundle of $L$ goods bought at price vector $\mathbf{p}_t \in \mathbb{R}^L_{++}$. We are interested in the preference of the consumer over prices. Suppose that for observations $t$ and $t'$ we have $\mathbf{p}_{t'}\mathbf{q}_t < \mathbf{p}_{t}\mathbf{q}_t$. In this case, the consumer would prefer prices $\mathbf{p}_{t'}$ over prices $\mathbf{p}_t$, since the former allow the consumer to purchase the same bundle of goods and have money left over to spend in other ways. Formally, we denote $\mathbf{p}_{t'}\mathbf{q}_t <(\leq) \mathbf{p}_{t}\mathbf{q}_t$ by $\mathbf{p}_{t'} \succ_p(\succeq_p) \mathbf{p}_t$. Furthermore, we write $\mathbf{p}_{t'} \succeq_p^* \mathbf{p}_t$ if there exists a chain of price vectors such that $\mathbf{p}_{t'} \succeq_p \ldots \succeq_p \mathbf{p}_t$, and $\mathbf{p}_{t'} \succ_p^* \mathbf{p}_t$ if such a chain exists with at least one $\succ_p$ relation included.
\begin{definition}
The dataset $\mathcal{D} = \{(\mathbf{p}_t, \mathbf{q}_t)\}_{t=1}^T$ satisfies the Generalized Axiom of Revealed Price Preference (GAPP) if there do not exist two observations $t, t' \in T$ such that $\mathbf{p}_{t'} \succeq_p^* \mathbf{p}_t$ and $\mathbf{p}_{t} \succ_p^* \mathbf{p}_{t'}$.
\end{definition}
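As an aside, verifying GAPP for a given dataset is computationally easy. The Python sketch below is a minimal implementation (assuming exact arithmetic and NumPy arrays for the data): it builds the relations $\succeq_p$ and $\succ_p$, computes their transitive closure while tracking whether a chain contains a strict relation, and checks the condition of the definition.
\begin{verbatim}
import numpy as np
from itertools import product

def satisfies_gapp(P, Q):
    """P, Q: arrays of shape (T, L) holding prices p_t and bundles q_t."""
    T = len(P)
    cost = P @ Q.T                  # cost[u, v] = p_u . q_v
    own = np.diag(cost).copy()      # own[v]     = p_v . q_v
    weak = cost <= own              # weak[u, v]:   p_u >=_p p_v
    strict = cost < own             # strict[u, v]: p_u >_p  p_v
    np.fill_diagonal(weak, True)
    # transitive closure (Floyd-Warshall), tracking strict links in chains
    for k, u, v in product(range(T), repeat=3):
        if weak[u, k] and weak[k, v]:
            strict[u, v] |= strict[u, k] or strict[k, v]
            weak[u, v] = True
    # GAPP: no pair with p_u >=*_p p_v and p_v >*_p p_u
    return not np.any(weak & strict.T)
\end{verbatim}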
Now consider an augmented utility function $u(\mathbf{q},-\mathbf{pq})$. Note that the utility depends both on the bundle of goods and on the amount of money expended. DKSQ prove the following theorem.
\begin{theorem}
Given a dataset $\mathcal{D} = \{(\mathbf{p}_t,\mathbf{q}_t)\}_{t=1}^T$, the following are equivalent:
\begin{enumerate}
\item $\mathcal{D}$ can be rationalized by an augmented utility function.
\item $\mathcal{D}$ satisfies GAPP.
\item $\mathcal{D}$ can be rationalized by an augmented utility function that is strictly increasing, continuous and concave. Moreover, $u$ is such that $\max_{\mathbf{q} \in \mathbb{R}^L_+} u(\mathbf{q},-\mathbf{pq})$ has a solution for all $\mathbf{p} \in \mathbb{R}^L_{++}$.
\end{enumerate}
\end{theorem}
This model can be discretized, which is necessary to employ the algorithms discussed previously. For each period $t= 1,\ldots,T$, the set of possible choices $\mathbb{R}^L_+$ is partitioned into subsets $x_{t,1}, \ldots, x_{t,I_t}$; we use the word ``patch'' to refer to these elements. This partitioning is such that (1) $\mathbb{R}^L_+ = \bigcup_{i=1}^{I_t} x_{t,i}$, (2) for all bundles $\mathbf{q}, \mathbf{q}' \in x_{t,i}$ and each other period $t'$, $\mathbf{q}$ and $\mathbf{q}'$ induce the same revealed preference relations, and (3) the partition is of minimal size. Analogous to KS and DKSQ, we only consider patches corresponding to strict price preference relations. $\mathcal{X}_t$ denotes the set of patches of period $t$, while $\mathcal{X}$ is the set of all patches. Note that for each time period, the number of patches, and thus of possible choices we must account for, is bounded from above by $2^T$.\\
Instead of all possible utility functions, we only consider rational choice types. We encode a choice type $r$ as $\mathbf{a}_r = (a_{r,1,1}, \ldots, a_{r,T,I_T})$, with $a_{r,t,i} = 1$ if patch $x_{t,i}$ is chosen at time $t$ by type $r$ and $a_{r,t,i} = 0$ otherwise. The set of rational choice types $\mathcal{R}$ is the set of all types $r$ for which the chosen patches induce price preference relations satisfying GAPP. We furthermore define the sets $\mathcal{R}_{t,i} := \{r \in\mathcal{R}| a_{r,t,i} = 1\}$. Since the number of patches is finite, the number of rational choice types to be considered is also finite.
\subsection{Setting Specific Pricing Problem}\label{sec:Pricing Problem}
In the previous sections, we laid out how the column generation approach can be used to calculate the test statistic and how to handle the tightening procedure. This description is given in a general way, without reference to a specific setting. For the master problem, no customization is necessary: formulation (\ref{QP Original Start})-(\ref{QP Original End}) can be used for any discrete choice setting. However, the set of rational choice types $\mathcal{R}$ is determined by the setting, and the pricing problem (\ref{pricing problem}) must thus be tailored to it. In this section, we formulate a pricing problem to test GAPP and discuss ways to solve it efficiently.\\
The binary variable $\alpha_{t,i}$ indicates which patch is chosen on each budget. $\alpha_{t,i} = 1$ if patch $x_{t,i}$ is chosen from $\mathcal{X}_t$, and $\alpha_{t,i} = 0$ otherwise. The binary variables $\rho_{t,j}$ represent the preference relations between $\mathbf{p}_t$ and $\mathbf{p}_j$. If the patch chosen in $\mathcal{X}_t$ induces $\mathbf{p}_t \succ_p\mathbf{p}_j$, then $\rho_{t,j} = 1$, otherwise $\rho_{t,j} = 0$. $X_{t,i,j}$ is a parameter indicating the price preferences induced by the choice of patch $x_{t,i}$, with $X_{t,i,j} = 1$ if $x_{t,i}$ induces $\mathbf{p}_j \succ_p\mathbf{p}_t$ and $X_{t,i,j} = 0$ otherwise.
\begin{align}
\text{Maximize} & &\sum_{t = 1}^{T} \sum_{i = 1}^{I_t} s_{t,i}\alpha_{t,i} \label{Pricing Start}\\
\text{Subject to} & & \nonumber \\
& & \sum_{i=1}^{I_t} \alpha_{t,i} & = 1 & & \forall t = 1, \ldots, T \label{con: Price Assign}\\
& & \sum_{i = 1}^{I_t}\alpha_{t,i}X_{t,i,j} - \rho_{j,t} & \leq 0& & \forall j,t = 1, \ldots, T \label{con: Price Direct}\\
& & \rho_{j,t} + \rho_{t,k} - \rho_{j,k} & \leq 1 & & \forall k,j,t = 1, \ldots, T \label{con: Price Trans}\\
& & \rho_{j,t} + \rho_{t,j} & \leq 1 & & \forall j,t = 1, \ldots, T \label{con: Price SARP}\\
& & \rho_{j,t} & \in \{0,1\} & & \forall j,t,= 1, \ldots, T \label{con: Price Bin 1}\\
& & \alpha_{t,i} & \in \{0,1\} & & \forall t = 1, \ldots, T, i = 1, \ldots, I_t \label{con: Pricing End}
\end{align}
Constraint (\ref{con: Price Assign}) ensures exactly one patch is chosen on each budget. Constraints (\ref{con: Price Direct})-(\ref{con: Price SARP}) ensure GAPP is satisfied for the chosen patches. First, Constraint (\ref{con: Price Direct}) ensures that if a chosen patch induces a price preference relation ($X_{t,i,j} = 1$), $\rho_{j,t}$ must also be set to one. Constraint (\ref{con: Price Trans}) makes sure that the $\rho$-variables also reflect the transitivity of the preference relations. Finally, Constraint (\ref{con: Price SARP}) rules out two-cycles; together with transitivity, this enforces that the preference relation is acyclic. Together, these constraints ensure that the $\alpha_{t,i}$ variables encode a valid choice pattern that is consistent with GAPP. An optimal solution to this integer program shows whether or not a rational choice type exists that can be added to the master problem; if so, the $\alpha_{t,i}$ variables encode one such type.\\
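In practice this integer program is handed to a MIP solver. Purely to illustrate the feasibility condition, the Python sketch below solves the pricing problem by brute-force enumeration, which is viable only for very small instances; since only patches inducing strict relations are considered, GAPP here reduces to acyclicity of the induced strict relation.
\begin{verbatim}
from itertools import product

def pricing_brute_force(s, X):
    """s[t][i]    : objective weight of patch x_{t,i};
    X[t][i][j]    : 1 if patch x_{t,i} induces p_j >_p p_t.
    Returns (best value, patch chosen per period) over rational types."""
    T = len(s)
    best_val, best_choice = float("-inf"), None
    for choice in product(*(range(len(s[t])) for t in range(T))):
        # strict price preferences induced by this candidate pattern
        adj = [[X[t][choice[t]][j] == 1 for j in range(T)] for t in range(T)]
        reach = [row[:] for row in adj]       # transitive closure
        for k in range(T):
            for u in range(T):
                for v in range(T):
                    reach[u][v] = reach[u][v] or (reach[u][k] and reach[k][v])
        if any(reach[t][t] for t in range(T)):
            continue                          # cycle: pattern violates GAPP
        val = sum(s[t][choice[t]] for t in range(T))
        if val > best_val:
            best_val, best_choice = val, list(choice)
    return best_val, best_choice
\end{verbatim}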
As mentioned earlier, an optimal solution to the pricing problem is not necessary to advance the column generation algorithm. Any rational choice type for which $\bar{s}^* a_r > \bar{s}^* \bar{v}^*$ can be added to the restricted master problem to obtain a better solution. Since solving the pricing problem to optimality is often computationally costly, we propose to solve the pricing problem using heuristics, which are usually much faster. Only if we cannot identify new choice types to add to the restricted master using the heuristic algorithms do we resort to exact algorithms. Algorithm \ref{Alg: Pricing} shows how the heuristic and exact approaches work together. In the implementation, we use a {\it best insertion algorithm} \citep{LOP} adapted to this particular problem. A detailed description of the implemented heuristic can be found in Appendix B.\\
\begin{algorithm} \caption{Solving the Pricing Problem}
\label{Alg: Pricing}
\begin{algorithmic}[1]
\STATE Solve the pricing problem using heuristic algorithms.
\IF {The best solution has a value $\leq \bar{s}^*\bar{v}^*$}
\STATE Solve the pricing problem using exact algorithms.
\ENDIF
\end{algorithmic}
\end{algorithm}
The tightening procedure requires a subset of the rational choice types to be identified a priori. This set $\mathcal{R}'$ can be generated by randomly drawing choice types and keeping only those that are rational. If the probability that a randomly drawn choice type is rational is low, this approach can be time consuming. To speed up the process, we opted for a semi-random method: we first randomly generate choice types, and then make small changes to these choice types to remove violations of rationality. In the application, we set the size of the subset to 1,000 rational choice types. A detailed description of the procedure can be found in Appendix C.
\subsection{Results} \label{sec:Results}
The column generation algorithm described in the previous sections is implemented in C++, and CPLEX 12.8 is used to solve both the quadratic master problems and the exact pricing problems. Computational experiments were run on a computer with a quad-core 2.6~GHz processor and 16~GB of RAM. For the first bootstrap iteration, we initialize the set $\bar{\mathcal{R}}$ as an empty set. At the end of each bootstrap iteration, the set $\bar{\mathcal{R}}$ is saved and used as the starting set for the next bootstrap iteration. This approach generally speeds up computation, as good solutions for different bootstrap iterations usually have rational choice types in common, which then do not need to be re-generated.\footnote{The set $\bar{\mathcal{R}}$ can become large over time, slowing down computation. If this is the case, it can be beneficial to record how often variables are used in the optimal solution and to periodically remove rarely used variables.}\\
In this section, we discuss the speed-ups that are achieved through the use of various techniques discussed above. Specifically, we iteratively compare the following configurations:
\begin{enumerate}
\item All pricing problems solved exactly, no use of bounds.
\item Heuristic \& exact algorithms for the pricing problem, no use of bounds.
\item Heuristic \& exact algorithms for the pricing problem, Upper Bound used.
\item Heuristic \& exact algorithms for the pricing problem, Upper \& Lower Bound used.
\end{enumerate}
These algorithms are applied to the U.K. Family Expenditure Survey. Table \ref{Table:3Goods} contains the minimum, maximum and average computation time for these configurations over the different instances for a given number of periods. Computation times were capped at 1 hour (3600 seconds) for each instance.
\begin{table}[ht!]
\centering
\begin{tabular}{l|r|r|r|r|r|r|r|r|r|r|r|r|}
&\multicolumn{3}{c}{Exact}&\multicolumn{3}{|c}{Heur. - No Bounds}&\multicolumn{3}{|c}{Heur.- UB}&\multicolumn {3}{|c|}{Heur.- All Bounds} \\
&{Min}&{Avg}&{Max}&{Min}&{Avg}&{Max}&{Min}&{Avg}&{Max}&{Min}&{Avg}&{Max}\\ \hline
6 Periods& 1 & 7 & 13 & 1 & 6 & 11 & 2 & 4 & 10 & 1 & 4 & 9 \\
10 Periods& 11 & 103 & 335 & 9 & 42 & 118 & 5 & 29 & 82 & 4 & 27 & 76 \\
15 Periods& 249 & NA & $>$ 3600 & 81 & 1372 & 3557 & 44 & 643 & 1559 & 42 & 565 & 1327 \\
\end{tabular}
\caption{Minimum, Maximum and Average computation times for 3 goods.}
\label{Table:3Goods}
\end{table}
As DKSQ report that their current techniques do not allow the testing of more than 8 periods, it is clear that even in the simplest configuration, the column generation algorithm allows the testing of much larger datasets than is possible using the approach of KS and DKSQ. Table \ref{Table:3Goods} furthermore shows the large impact that the use of heuristics for the pricing problem and the use of bounds have on total computation time. While the influence is limited for the smaller instances, the addition of heuristics lowers average computation time by almost 60\% for the 10-period instances. For 15 periods, this decrease is nearly 75\% for the instances which finished in both configurations. Likewise, the use of bounds to terminate computation early speeds up computation considerably, with a decrease in computation time of 35\% for 10 periods and nearly 60\% for 15 periods. Most of this speed-up is due to the upper bound, though for the 15-period instances the addition of the lower bound lowered computation times by a further 12\%. While no instances of 20 periods finished within 1 hour, 142 bootstrap iterations were completed for the hardest instance, suggesting all instances could be finished within about 7 hours. For the full 25-period dataset, 4.25 hours were necessary to complete 100 bootstrap iterations. \\
In the instances we tested, increasing the number of goods generally increased the computational difficulty of the problem. A higher number of goods leads to a higher number of patches, which in turn increases the number of (rational) choice types; Table \ref{table:rationalchoicetypes} clearly shows this. The increased difficulty is also clearly noticeable in the computation times. Whereas the 10-period instances for 3 goods are solved in 27 seconds on average, this figure is a lower bound for the 4-good, 10-period instances. For 4 goods, 1 out of 16 instances was not finished within 1 hour; the other 15 took less than 8 minutes on average. Five-good instances have slightly higher, but comparable, computation times. Due to the higher difficulty of the 4- and 5-good instances, larger instances still take significant amounts of time. For 15 periods, only 6 bootstrap repetitions were completed for the hardest instance, implying about 6 days of total computation time for 1000 bootstraps.
\section{Discussion}
In this paper, we have shown that while the approach to testing random utility models developed by KS and DKSQ is computationally challenging, advanced algorithms allow for tests of far larger datasets. A main ingredient is to avoid complete enumeration of the rational choice types and to generate them only as needed. Applying these algorithms to a model of consumption developed by DKSQ and empirical data from the U.K., we show that the model is supported by the data even over longer periods of time.
\bibliographystyle{plainnat}
\section{Introduction}
Phenomena in spin glasses represent one of the central topics of modern
solid state physics; they are of fundamental interest and also have a
variety of possible applications \cite{Mydosh93}. The spin-glass state
is realized in intermetallic alloys, for instance, when ions of a
magnetic metal (like Fe, Mn) are introduced in small amounts into the
matrix of non-magnetic noble metals (like Au, Ag, Cu, Pt). The local
magnetic moments interact co-operatively with each other via the
conduction electrons by the agency of the indirect
Ruderman-Kittel-Kasuya-Yosida (RKKY) exchange interaction
\cite{Freeman72}. Magnitude and sign of the interaction depend on the
distance between impurities. Combined with the spatial disorder this
provides conditions for a spin-glass state.
Among the exceptional properties of spin glasses compared to other
magnetic materials is the temperature behavior of their magnetic
susceptibility, which reveals a kink at a certain temperature $T_f$
(the freezing temperature) whose shape and position depend on the
magnitude and alternation frequency of the probing field
\cite{Binder86}. Spin glasses possess magnetic memory: the magnitude of
magnetization created by an external magnetic field below $T_f$ depends
on the pre-history of the system. Typical for a spin-glass state are
relaxational phenomena with characteristic times which at low
temperatures can by far exceed the duration of the experiment. In spite
of the large number of theoretical and experimental investigations,
there is still no generally accepted consensus on the nature of the
spin-glass state and the majority of properties of spin glasses remain
not fully understood \cite{Mydosh93,Binder86,Fischer91,Mezard87}.
Since the RKKY interaction plays a fundamental role in the physics of
spin glasses, the behavior of the subsystem of free electrons should be
intimately linked to the formation and stabilization of the spin-glass
phase. The magnitude of the RKKY interaction depends on the electronic
mean free path, as was first shown by de Gennes \cite{deGennes62}.
Thus, investigating the characteristics of conduction electrons gives
insight into the peculiar physics of spin glasses. The most direct way
to study the properties of delocalized electrons is provided by
electrical transport experiments. Immediately following the first works
on spin glasses, the electrical resistance of ``classical'' systems
like AuFe, CuMn, AuMn, and AuCr has been investigated in a detailed and
systematic way as a function of temperature, magnetic field and
concentration of magnetic centers
\cite{Loram70,Ford70,Mydosh74,Ford76,Campbell82}. It was shown that the
magnetic contribution to the electrical resistivity $\rho(T)$ reveals a
$T^{3/2}$ temperature dependence at the lowest temperatures and a $T^2$
dependence close to $T_f$; at elevated temperatures $T>T_f$ there is a
broad maximum in $\rho(T)$ which is due to a competition between Kondo
and RKKY interactions in the subsystems of electrons and magnetic
moments. Existing theories encounter serious difficulties to reproduce
the temperature behavior of the resistivity in broad intervals of
temperatures and impurity concentrations \cite{Binder86}. Certain
difficulties are also caused by deviations from Matthiessen's rule at
elevated temperatures.
Fundamental information on the properties of the electronic subsystem
can be obtained by optical spectroscopy, which for instance allows one
to extract such characteristics of free carriers as mechanisms of
scattering and relaxation, energy gaps and pseudogaps in the density of
states, localization and hopping parameters, size and granularity
effects in thin conducting films \cite{DresselGruner02}. However, to
our knowledge, there are no data published on optical spectroscopy of
spin glasses. The reason may be purely technical: since these materials
are highly conducting, almost like regular metals, it is practically
impossible to measure their electrodynamic properties by standard
spectroscopical techniques, especially in the far-infrared range and at
even lower frequencies where effects of interactions of mobile
electrons with localized spins and between these spins should reveal
themselves. Here we present the first measurements of the
electrodynamic response of the spin-glass compound AuFe in a broad
range of frequencies with an emphasis on the THz range corresponding to
energies of the radiation quanta which are close to the RKKY binding
energy.
\section{Experimental Techniques}
For the measurements we have chosen the well-studied spin-glass
compound AuFe. A set of films with different thicknesses and Fe
concentrations was prepared. The high purity metals were co-sputtered
onto a high-resistive Si substrate (size $10\times 10$~mm$^2$,
thickness about 0.5~mm). Before the argon sputter gas was admitted the
equipment was pumped down to UHV conditions ($10^{-7}$~torr) to prevent
oxidation of the films during fabrication. The films were analyzed
using Rutherford backscattering and electron microprobe analysis; the
thickness and composition was homogeneous. In this paper we concentrate
on the results obtained for an AuFe film with 6 at.\%\ of Fe and about
50 nm thickness. We also measured a pure Au film of the same thickness
prepared under the same conditions.
For the THz investigations a coherent source spectrometer
\cite{Kozlov98} was used which operates in the frequency range from 30
GHz up to 1.5 THz (1 - 50~cm$^{-1}$). This range is covered by a set of
backward-wave oscillators as powerful sources of radiation whose
frequency can be continuously tuned within certain limits. In a
quasioptical arrangement the complex (amplitude and phase) transmission
and reflection coefficients can be measured at temperatures from 2~K to
1000~K and in a magnetic field up to 8~Tesla if required. Dynamical
conductivity of Au and AuFe films was directly determined from THz
transmissivity and reflectivity spectra in a way we have used for
measurements other conducting films, like heavy fermions
\cite{Dressel02} or superconductors \cite{Pronin96}.
In order to complete our overall picture, the samples were optically
characterized up to the ultraviolet. The room temperature experiments
were conducted on the same Au and AuFe films as the THz investigations.
In the infrared spectral range ($600- 7000$~cm$^{-1}$), optical reflectivity
R$(\omega)$ measurements were performed using an infrared microscope
connected to a Bruker IFS 66v Fourier transform spectrometer. An
aluminum mirror served as reference, whose reflectivity was corrected
by the literature data \cite{Palik85}. A Woollam vertical variable
angle spectroscopic ellipsometer (VASE) equipped with a Berek
compensator was utilized to measure in the energy range between
5000~cm$^{-1}$\ and 33\,000~cm$^{-1}$\ with a resolution of 200~cm$^{-1}$\ under multiple
angles of incidence between 65$^{\circ}$ and 85$^{\circ}$. From the
ellipsometric measurements we obtain the real and imaginary parts of
the refractive index which then allow us to directly evaluate any
optical parameter like the reflectivity $R(\omega)$ or the conductivity
$\sigma(\omega)$.
\section{Experimental Results and Discussion}
\begin{figure}
\centering\resizebox{1\columnwidth}{!}{\includegraphics{Fig1f.eps}}
\caption{(a) Room temperature reflectivity and (b) conductivity of Au and AuFe (6 at.\%\ Fe) films.
The full circles below 50~cm$^{-1}$\ are from THz transmission and reflection measurements.
The solid curves correspond to the IR reflection measurements.
The reflectivity calculated from the ellipsometric measurements is given by grey open circles and lines.
The thick lines in the conductivity spectra between 5000~cm$^{-1}$\ and 33\,000~cm$^{-1}$\ are directly
calculated from ellipsometric data.
The dotted lines are the fits by a simple Drude-Lorentz model.
The dashed lines indicate a combined fit of reflectivity and conductivity by an advanced
Drude-Lorentz model based on the variational dielectric function
introduced in \protect\cite{Kuzmenko05}.}\label{fig:opticalspectra}
\end{figure}
To analyze the frequency dependent transport, we first consider the
room temperature results displayed in Fig.~\ref{fig:opticalspectra}. It
is seen that the spectra for both Au and AuFe are metallic
\cite{DresselGruner02}: the reflectivity reveals a characteristic
plasma edge around 20000~cm$^{-1}$ and the conductivity $\sigma(\omega)$ is
only weekly frequency dependent at low frequencies and quickly drops
between $10^3$~cm$^{-1}$\ and $10^4$~cm$^{-1}$. The increase of the conductivity at
even higher frequencies (above $2\cdot 10^4$~cm$^{-1}$) is caused by
electronic interband transitions. In order to extract the microscopic
characteristics of charge carriers, we fitted the spectra by the Drude
model of conductivity \cite{DresselGruner02}:
$\hat{\sigma}(\omega)=\sigma_{\rm dc}/(1-i\omega\tau)$, where
$\sigma_{\rm dc}$ denotes the dc conductivity, $\tau=1/(2\pi
c\gamma)$ the relaxation time, and $\gamma$ the relaxation rate of the
charge carriers ($c$ is the speed of light). The higher-frequency
interband transitions were roughly modelled by additional Lorentz
oscillators. The results of this fit are presented by the dotted lines in
Fig.~\ref{fig:opticalspectra}; the parameters are summarized in
Table~\ref{tab:1}. As indicated by the dashed lines, both the
reflectivity and conductivity spectra can be perfectly reproduced by a
more advanced procedure based on the variational analysis of the
optical reflectivity and conductivity spectra introduced by Kuzmenko
\cite{Kuzmenko05}. The low-frequency conductivity is smaller and the
scattering rate of carriers larger (more than ten times) in AuFe
compared to Au. Obviously, the differences should be ascribed to
additional magnetic scattering of electrons in AuFe. Although the
plasma frequency is not affected when Fe is diluted in Au, the
distribution of spectral weight (as measured by the center of gravity,
for instance) is shifted to higher energies.
\begin{table}
\caption{Drude parameters of the free charge carriers in Au and AuFe (6
at.\%\ Fe) films obtained by a Drude-Lorentz fit to the room
temperature spectra of Fig.~\ref{fig:opticalspectra} with the dc
($\omega\rightarrow 0$) conductivity $\sigma_{\rm dc}$, the scattering
rate $\gamma$, scattering time $\tau$, and the plasma frequency
$\omega_p=\sqrt{4\pi ne^2/m}$ with $n$ and $e$ being the concentration
and the charge of the carriers, and $m$ their effective mass.}
\label{tab:1}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
Film & $\sigma_{\rm dc}$ ($\rm\Omega^{-1}$cm$^{-1}$)& $\gamma$ (cm$^{-1}$) & $\tau$ (s)& $\omega_p$ (cm$^{-1}$) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Au & 350\,500&~~236 & $2.25\times 10^{-14}$ & 70\,450 \\
AuFe & ~~33\,600 & 2445 & $2.17\times 10^{-15}$ & 70\,100 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
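As a simple numerical consistency check of Table \ref{tab:1}, the Gaussian-units Drude relation $\sigma_{\rm dc}=\omega_p^2\tau/4\pi$ can be evaluated directly. The Python sketch below reproduces the tabulated values of $\sigma_{\rm dc}$ and $\tau$ from $\omega_p$ and $\gamma$; the unit conversion $1~\Omega^{-1}{\rm cm}^{-1}\approx 8.99\times 10^{11}~{\rm s}^{-1}$ is assumed.
\begin{verbatim}
import math

C = 2.998e10              # speed of light (cm/s)
GAUSS_PER_SI = 8.988e11   # 1 Ohm^-1 cm^-1 = 8.988e11 s^-1 (Gaussian)

def drude_dc(omega_p_cm, gamma_cm):
    """sigma_dc (Ohm^-1 cm^-1) and tau (s) from the plasma frequency
    and scattering rate, both given in cm^-1."""
    tau = 1.0 / (2 * math.pi * C * gamma_cm)       # relaxation time
    omega_p = 2 * math.pi * C * omega_p_cm         # rad/s
    return omega_p**2 * tau / (4 * math.pi) / GAUSS_PER_SI, tau

print(drude_dc(70450, 236))    # Au  : ~3.5e5 Ohm^-1 cm^-1, 2.25e-14 s
print(drude_dc(70100, 2445))   # AuFe: ~3.4e4 Ohm^-1 cm^-1, 2.17e-15 s
\end{verbatim}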
\begin{figure}
\centering\resizebox{0.5\columnwidth}{!}{\includegraphics{Fig2f.eps}}
\caption{(a) Temperature dependent ac and dc resistivities of Au and AuFe.
The solid dots correspond to ac resistivity $\rho(\omega)$ = 1/$\sigma(\omega)$ of AuFe film (6 at.\%\ Fe)
measured at 35~cm$^{-1}$\ (1.05~THz).
The dashed line refers to the dc resistivity obtained for bulk AuFe with 5 at.\%\ of Fe \cite{Mydosh74}.
The solid line indicates the dc resistivity of bulk gold \protect\cite{Mydosh74}.
The square shows the ac resistivity of Au film at 35 cm$^{-1}$ (1.05 THz).
(b) The temperature dependence of the magnetic contribution to the ac
resistivity evaluated by $\Delta\rho(T)=\rho_{\rm AuFe}(T)-\rho_{\rm
Au}(T)$ is shown by the open circles; for comparison the ac resistivity
of the AuFe film is re-plotted on a linear scale (solid
dots).}\label{fig:temperaturedependence}
\end{figure}
The temperature dependence of the transport characteristics is
presented in Fig.~\ref{fig:temperaturedependence}. The upper panel
compares the ac resistivity $\rho(\omega)$ = 1/$\sigma(\omega)$ of the
AuFe (6 at.\%\ Fe) and Au films to the dc resistivity of a bulk AuFe
(with slightly different Fe concentration of 5 at.\%) and of pure bulk
Au samples (data from Ref.~\cite{Mydosh74}).
First, it is obvious that at all temperatures the resistivity of our
AuFe film is very close to that of the bulk material: for example, at
room temperature $\rho_{\rm AuFe}(\rm film) \approx 30~\mu\Omega$cm and
$\rho_{\rm AuFe}(\rm bulk)\approx 40~\mu\Omega$cm. The same holds for
the pure Au samples: $\rho_{\rm Au}(\rm film) \approx 3~\mu\Omega$cm
and $\rho_{\rm Au}(\rm bulk)\approx 2~\mu\Omega$cm. This agreement
indicates the very good quality of our thin films and that there are
basically no effects on their ac electrical properties connected with a
possible granular structure. The same conclusion is also drawn from the
measurements of the freezing temperatures $T_f\approx 25$~K of our AuFe
film, which appears to be basically the same as that for bulk samples.
Furthermore, it is seen from Fig.~\ref{fig:temperaturedependence} that
at all temperatures the resistivity of AuFe is much larger than the
resistivity of Au; the difference increases when cooling down. This is
a consequence of scattering of the charge carriers on magnetic
impurities that prevails over phonon scattering in the entire
temperature range. The resistivity $\rho(T)$ reveals a broad feature
around $100-150$~K which is ascribed to the interplay of Kondo and RKKY
regimes \cite{Ford70,Mydosh74,Ford76,Campbell82}. At high temperatures
thermal excitations exceed the RKKY energy of interacting impurities
and a Kondo-like scattering of electrons on independent magnetic
moments dominates. This leads to a weak increase of the magnetic
contribution to the resistivity $\rho_{\rm mag}$ upon cooling as
demonstrated by the open circles in
Fig.~\ref{fig:temperaturedependence}b where the difference $\Delta\rho
=\rho_{\rm AuFe} -\rho_{\rm Au}=\rho_{\rm mag}$ is plotted. At low
temperatures the RKKY interaction between magnetic moments starts to
surmount and causes a noticeable suppression of the magnetic
contribution to the resistivity. Assuming Matthiessen's rule
\cite{Bass72}, the scattering rate of electrons due to magnetic
interaction in AuFe at $T=300$~K can be calculated as $\gamma_{\rm mag}
= \gamma_{\rm AuFe} - \gamma_{\rm Au} \approx 2210$~cm$^{-1}$.
\begin{figure}[b]
\centering\resizebox{0.5\columnwidth}{!}{\includegraphics{Fig3f.eps}}
\caption{Terahertz conductivity spectra of the AuFe film (6 at.\%\ Fe)
at various temperatures. For $T<100$~K a positive dispersion indicates localization effects.
The lines are guides to the eye. The inset shows schematically how the Drude-like conductivity spectrum of free electrons (solid line) is modified by a gap-like structure (dashed line) due to
the RKKY interaction between Fe magnetic moments mediated by these electrons.}\label{fig:THzspectra}
\end{figure}
In the low temperature regime, also the frequency dependent
conductivity of AuFe is distinct from the room temperature behavior as
demonstrated in Fig.~\ref{fig:THzspectra} where the THz spectra of
$\sigma(\omega)$ for AuFe are presented. Above approximately 100~K,
$\sigma(\omega)$ is basically frequency independent in accordance with
the Drude model which predicts a constant conductivity for frequencies
much smaller than the scattering rate \cite{DresselGruner02}, as
sketched in the inset of Fig.~\ref{fig:THzspectra} by the solid line;
the scattering rate for AuFe equals 2445~cm$^{-1}$\ at 300~K (Table
\ref{tab:1}), i.e., far above the range of frequencies presented in
Fig.~\ref{fig:THzspectra}. This changes drastically for $T<100$~K, when
the conductivity $\sigma(\omega)$ increases towards high frequencies.
We associate this conductivity dispersion with a pseudogap which
appears in the free electron excitations when the RKKY interaction
between Fe centers mediated by the conduction electrons sets in. In a
simple picture the electrons which participate in the RKKY interaction
can be regarded as being to some extent bound to (or localized between)
the corresponding pairs of magnetic moments, as long as the thermal
energy $k_BT$ does not exceed this ``binding energy'' which should be
of order of the RKKY interaction $\Delta_{\rm RKKY}$. This will lead to
a corresponding reduction of the dc conductivity and also of the ac
conductivity for frequencies below $\Delta_{\rm RKKY}/\hbar$. At higher
frequencies, $\omega>\Delta_{\rm RKKY}/\hbar$, the electrons will no
longer be affected (and localized) by the RKKY interaction and hence
the conductivity $\sigma(\omega)$ should increase around $\Delta_{\rm
RKKY}/\hbar$ to approach the unperturbed value. In other words, one
would expect a gap-like feature to appear in the conductivity spectrum,
as depicted by the dashed line in the inset of
Fig.~\ref{fig:THzspectra}. The RKKY energy is approximately given by
the freezing temperature $T_f$ \cite{Schilling76}, which for AuFe (6
at.\%\ of Fe) is about 25 K \cite{Mydosh74,Canella72}, yielding
$\Delta_{\rm RKKY}\approx k_BT_f\approx 2.2$~meV. For the
characteristic frequency we then obtain $\Delta_{\rm
RKKY}/\hbar\approx17$~cm$^{-1}$\ (510 GHz). This falls just in the range
where the dispersion of the conductivity of AuFe in the spin-glass
state is observed. The effect amounts to approximately 10\%, meaning
that about one tenth of the conduction electrons participate in the
RKKY interaction.
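For reference, the estimate of the preceding paragraph amounts to a few lines of arithmetic (a Python sketch with rounded constants):
\begin{verbatim}
KB = 8.617e-5            # Boltzmann constant (eV/K)
MEV_TO_CM1 = 8.066       # 1 meV = 8.066 cm^-1
CM1_TO_GHZ = 29.98       # 1 cm^-1 = 29.98 GHz

T_f = 25.0                              # freezing temperature (K)
delta_meV = KB * T_f * 1e3              # Delta_RKKY ~ 2.2 meV
delta_cm1 = delta_meV * MEV_TO_CM1      # ~17 cm^-1
print(delta_meV, delta_cm1, delta_cm1 * CM1_TO_GHZ)   # ~2.2, ~17, ~520
\end{verbatim}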
According to our picture of spin-glass systems, the conduction
electrons experience two effects from the RKKY interaction mediated by
these electrons. On one hand, a decrease of the resistivity is commonly
observed while cooling below $T_f$ because the RKKY correlations
between magnetic moments progressively suppress the Kondo-type
scattering. On the other hand, a certain fraction of carriers is
increasingly bound to the magnetic moments by participating in the RKKY
interaction and is thus taken out of the conduction channel. The
competing character of the two effects is clearly seen in
Fig.~\ref{fig:THzspectra}: while cooling down, the gap-like feature
appears on top of a background conductivity which increases basically
at all shown frequencies. In order to verify our assumptions, a
comprehensive study is required on various spin-glass materials with
different freezing temperature and consequently different $\Delta_{\rm
RKKY}$.
\section{Conclusions}
The optical spectra of pure gold and spin glass AuFe (6 at.\%\ Fe)
films have been investigated in a broad frequency range from 10~cm$^{-1}$\ up
to 33\,000~cm$^{-1}$\ using three different spectroscopic techniques. At
ambient temperature the microscopic charge-carrier parameters in pure
gold and in AuFe are determined. For the spin glass AuFe the scattering
rate of the carriers is significantly enlarged due to their interaction
with localized magnetic moments. At reduced temperatures ($T<100$~K)
when the RKKY interaction gains importance as the spin-glass state is
formed, a pseudogap feature in the optical conductivity spectrum is
detected of a magnitude close to the RKKY energy in AuFe. We associate
the origin of the pseudogap with partial localization of those
electrons which are involved in the RKKY interaction between magnetic
moments.
\section{Acknowledgements} The work was supported by the Russian Foundation
for Basic Research, grant No.~06-02-16010-a, and the Deutsche
For\-schungs\-ge\-mein\-schaft (DFG). We also thank the Foundation for
Fundamental Research of Matter (FOM).
\section{Analysis}
\label{sect:analysis}
We follow the same approach in formally proving two of our main results, Theorem \ref{thm-homo-convergence} and Theorem \ref{thm:convergence-hetero}. The approach comprises two main steps.
Firstly, we show that the evolution of the market share $\boldsymbol{\phi}$ can be cast as a stochastic RMA dynamic.
Secondly, we show the convergence of such RMA dynamics by utilising their equivalence to mirror descent on convex functions, and by bounding the gap from the optimum.
\subsection{Influence Dynamic as RMA}
\begin{defn}(\cite{benaim1999dynamics,robbins1951stochastic})
A Robbins-Monro algorithm (RMA) is a discrete-time stochastic process $\mathbf{z}^t$ whose general structure is specified by
\[
\mathbf{z}^{t} - \mathbf{z}^{t-1} ~=~ \gamma^{t} \cdot \left( F(\mathbf{z}^{t-1}) + U^t \right),
\]
where $\mathbf{z}^t\in \mathbb{R}^n$ for some $n\ge 1$, $F:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is a deterministic continuous vector field, $\gamma^t$ is deterministic and satisfies $\gamma^t > 0$, $\sum_{t\ge 1} \gamma^t = +\infty$ and $\lim_{t\rightarrow \infty} \gamma^t = 0$, and $\mathbb{E}[U^t | \mathcal{F}^{t-1}] = 0$ where
$\mathcal{F}^{t-1}$ is the natural filtration on the entire process.
The corresponding ordinary differential equation (ODE) system of the RMA is $\dot{\mathbf{z}} = F(\mathbf{z})$.
\end{defn}
Note that the market share $\boldsymbol{\phi}$ changes only when a purchase occurs. Thus, Maldonado et al.~\cite{maldonado2018popularity} modify the time schedule to count only those times at which a purchase occurs,
and show the following lemma.
\begin{lem}
In the stochastic T-O market, the update of market share follows the following RMA w.r.t.~the modified time schedule:
\[
\boldsymbol{\phi}^t - \boldsymbol{\phi}^{t-1} ~=~ \frac{1}{t} \cdot \left( (\mathbf{p}(\boldsymbol{\phi}^{t-1}) - \boldsymbol{\phi}^{t-1}) + U^t \right),
\]
where $U^t$ is the random variable defined as below. Let $\mathbf{e}^t$ denote the random unit vector whose
$j$-th entry is $1$ if item $j$ is purchased at time $t$. Then $U^t = \mathbf{e}^t - \mathbb{E}[\mathbf{e}^t | \mathcal{F}^{t-1}]$. (Recall that $e^t_j = 1$ with probability $p_j(\boldsymbol{\phi}^{t-1})$.)
\end{lem}
The proof of this lemma can be found in \cite{maldonado2018popularity} (for the homogeneous setting) and in \cite{supp3065}, Section \ref{sec:supp-RMA} (for the heterogeneous setting). With the above lemma, we can apply the seminal results of Bena{\"\i}m~\cite{benaim1999dynamics}
to show that the RMA trajectory is an asymptotic pseudo-trajectory of the mirror descent update \eqref{deterministic update rule homogeneous}. By using the
convergence theorem established in \cite{CT93,cheung2018dynamics}, we show that both dynamics converge to the global minimisers of \eqref{Efficiency Entropy Problem}. This yields a new proof of \Cref{thm-homo-convergence} in~\cite{maldonado2018popularity}, as shown below.
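As a simple illustration (not part of the formal argument), the homogeneous dynamic can be simulated directly. The Python sketch below assumes the purchase-time schedule of the lemma above and the homogeneous choice rule $p_j(\boldsymbol{\phi}) \propto \bar{q}_j \phi_j^r$; a small offset in the step size keeps the simulated shares in the interior of the simplex. Its output approaches the equilibrium $\phi^*_j \propto \bar{q}_j^{1/(1-r)}$ of Corollary \ref{Cor1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(q_bar, r, T=200_000):
    n = len(q_bar)
    phi = np.full(n, 1.0 / n)              # uniform initial market share
    for t in range(1, T + 1):
        p = q_bar * phi**r
        p /= p.sum()                       # purchase probabilities p(phi)
        e = rng.multinomial(1, p)          # e^t: the purchased item
        phi += (e - phi) / (t + 1)         # RMA update with 1/t-type steps
    return phi

q_bar, r = np.array([0.5, 0.3, 0.2]), 0.5
phi_star = q_bar**(1 / (1 - r)); phi_star /= phi_star.sum()
print(simulate(q_bar, r), phi_star)        # both ~ [0.66, 0.24, 0.11]
\end{verbatim}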
\subsection{Convergence of Mirror Descent}
Let $d_h(\cdot,\cdot)$ denote the Bregman divergence in \Cref{defn::Bregmann divergence}. We assume that the function $h$ is strictly convex. Consequently, $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, and $d_h(\mathbf{x},\mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$.
\begin{defn}
A function $f$ is $L$-Bregman-convex with respect to the Bregman divergence $d_h$ if
for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\[
f(\mathbf{x}) ~\leq~ f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\]
\end{defn}
Given an $L$-Bregman-convex function $f$ with respect to the Bregman divergence $d_h$,
the mirror descent rule with respect to the Bregman divergence $d_h$ is given by $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$, where
\begin{equation}
g (\mathbf{x}^t) = \argmin_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle +L \cdot d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MDMinimizer}
\end{equation}
The update in \eqref{MDMinimizer} is the same as that of a general mirror descent \eqref{general MD rule}, except that the step size is determined by $L$. This enables us to use the following theorem to bound the gap to the optimum.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman-convex function with respect to $d_h$, and $\mathbf{x}^t$ is the point reached after $t$ applications of the mirror descent update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$. Then for all $t\ge 1$,
\[
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{t}.
\]
\end{thm}
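For intuition, when $h$ is the negative entropy on the simplex, $d_h$ is the Kullback-Leibler divergence and the minimiser in \eqref{MDMinimizer} has the closed form $x^{t+1}_j \propto x^t_j \exp(-\nabla_j f(\mathbf{x}^t)/L)$. The Python sketch below implements this entropic special case on a toy objective (not the market objective of \eqref{Efficiency Entropy Problem}); with $f(\mathbf{x}) = \frac{1}{2}\|\mathbf{x}-\mathbf{c}\|^2$ and $L=1$, $f$ is $L$-Bregman-convex with respect to the KL divergence by Pinsker's inequality.
\begin{verbatim}
import numpy as np

def entropic_mirror_descent(grad_f, x0, L, iters=2000):
    """Entropic mirror descent: x^{t+1} proportional to
    x^t * exp(-grad_f(x^t)/L), renormalised onto the simplex."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-grad_f(x) / L)
        x /= x.sum()                  # closed-form argmin over the simplex
    return x

c = np.array([0.7, 0.2, 0.1])         # minimiser of f over the simplex
x = entropic_mirror_descent(lambda x: x - c, np.full(3, 1/3), L=1.0)
print(x)                              # -> close to c
\end{verbatim}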
Conveniently, the objective functions of the mirror descent updates in both the homogeneous and heterogeneous cases are Bregman convex; the proof can be found in \cite{supp3065}, Section \ref{Calculation of objective function}.
Lastly, we apply the theorem below to complete the proof. Note that what we have just shown is the convergence of the \emph{discrete-time} mirror descent updates $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$,
whereas the theorem requires conditions that guarantee convergence of the ODE system $\dot{\mathbf{x}} = g(\mathbf{x}) - \mathbf{x}$. We therefore need to convert the discrete-time convergence to its ODE analogue, which is simple; the conversion can be found in \cite{supp3065}, Section \ref{sec:supp-shorter-proof}.
\begin{thm}
Consider an ODE $\dot{\mathbf{x}} = h(\mathbf{x})$.
Suppose there is a continuously differentiable function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ such that
(i) $\lim_{\|\mathbf{x}\|\rightarrow \infty} f(\mathbf{x}) = +\infty$;
(ii) the set of minimum points of $f$, $X^*$, is non-empty; and
(iii) $\innerProd{f'(\mathbf{x})}{h(\mathbf{x})} \le 0$ for all $\mathbf{x}$, with equality holds if and only if $\mathbf{x}\in X^*$.
Then almost surely, the Robbins-Monro algorithm of the ODE converges to a non-empty subset of $X^*$.
\end{thm}
\section{Existence of TOME}
\label{supp:TOME_exist}
In this section, we present the proof of Theorem \ref{thm:exist-TOME}.
By the definition of $y_j(\boldsymbol{\phi})$ in \eqref{choice TO hetero},
the following inequality holds for any $\boldsymbol{\phi} \in \Delta$ and $j\in \mathcal{I}$:
\[
\begin{aligned}
y_j(\boldsymbol{\phi}) &~=~ \sum_{i=1}^{|\mathcal{U}|} w_i q_{ij} \cdot \frac{v_{ij} (\phi_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi_k)^{r_i}}\\
&~\ge~ \underbrace{\frac{w_{i^*(j)} v_{i^*(j),j} q_{i^*(j),j}}{\sum_{k=1}^{|\mathcal{I}|} v_{i^*(j),k}}}_{c_j}\cdot (\phi_j)^{r_{i^*(j)}},
\end{aligned}
\]
for some type $i^*(j)$ with $v_{i^*(j),j} q_{i^*(j),j} > 0$, which exists due to condition (ii).
Note that if $\phi_j \ge (c_j)^{1/(1-r_{i^*(j)})}$, then
\begin{equation}\label{y fixed point}
y_j(\boldsymbol{\phi}) \ge c_j \cdot (c_j)^{r_{i^*(j)}/(1-r_{i^*(j)})} = (c_j)^{1/(1-r_{i^*(j)})}.
\end{equation}
On the other hand, recall that $p_j(\boldsymbol{\phi}) = y_j(\boldsymbol{\phi}) / (\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}))$.
Since $\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}) \le 1$,
\begin{equation}\label{p fixed point}
p_j(\boldsymbol{\phi}) \ge y_j(\boldsymbol{\phi}).
\end{equation}
Let $S$ denote the set $\left\{\boldsymbol{\phi}\in \Delta : \forall j\in \mathcal{I},~1\ge \phi_j\ge (c_j)^{1/(1-r_{i^*(j)})}\right\}$.
Since $0 < c_j \le 1$, $1/(1-r_{i^*(j)}) \ge 1$ and $0\le q_{ij}\le 1$,
\[
\begin{aligned}
\sum_{j=1}^{|\mathcal{I}|}(c_j)^{1/(1-r_{i^*(j)})} &~\le~ \sum_{j=1}^{|\mathcal{I}|} c_j \\
&~=~ \sum_{i=1}^{|\mathcal{U}|} \sum_{j:i^*(j)=i} c_j ~\le~ \sum_{i=1}^{|\mathcal{U}|} w_i ~=~ 1,
\end{aligned}
\]
hence $S$ is non-empty. Moreover, $S$ is compact and convex.
Due to \eqref{y fixed point} and \eqref{p fixed point}, $\mathbf{p}$ is a continuous function that maps $S$ to a subset of $S$.
By Brouwer's fixed point theorem, $\mathbf{p}$ has a fixed point in $S$, which is a TOME of the market.
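Although the argument above is non-constructive (it rests on Brouwer's fixed point theorem), a TOME can often be located numerically. The Python sketch below is a heuristic damped fixed-point iteration on $\mathbf{p}(\cdot)$, with no convergence guarantee; shapes follow the definitions above ($w, r$ of length $|\mathcal{U}|$, and $q, v$ of shape $|\mathcal{U}|\times|\mathcal{I}|$, all NumPy arrays).
\begin{verbatim}
import numpy as np

def tome_iteration(w, q, v, r, iters=5000, damp=0.5):
    """Damped iteration phi <- (1-damp)*phi + damp*p(phi)."""
    U, I = q.shape
    phi = np.full(I, 1.0 / I)
    for _ in range(iters):
        score = v * phi[None, :] ** r[:, None]        # v_ik phi_k^{r_i}
        share = score / score.sum(axis=1, keepdims=True)
        y = (w[:, None] * q * share).sum(axis=0)      # y_j(phi)
        phi = (1 - damp) * phi + damp * y / y.sum()   # p(phi) = y / sum(y)
    return phi
\end{verbatim}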
\section{Objective Function for Homogeneous Case}
\label{supp:social-welfare}
We present the proof of Theorem \ref{thm:social-welfare} below.
\begin{proof}
For the case $r>1$, since $\phi_j^r \leq \phi_j$ for any $\phi_j \in [0,1]$, we have
\begin{equation}
\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r \leq \sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j \leq \sum_{j=1}^{|\mathcal{I}|} \bar{q}_{j^*}\phi_j = \bar{q}_{j^*},
\end{equation}
where $j^* \in \arg\max_j\{\bar{q}_j: j\in \mathcal{I}\}$. The maximum is attained only if $\phi_{j^*} =1$, and this is one of the equilibria of the market.\\
When $r\in (0,1]$, we first transform the maximisation problem into a minimisation problem for simplicity. The problem is given by
\begin{equation}
\begin{aligned}
\min \quad -\sum_j^{|\mathcal{I}|} \bar{q}_j\phi_j^r, \\
\text{subject to } \boldsymbol{\phi}\in\Delta.
\end{aligned}
\end{equation} The problem is now convex, so we apply the KKT conditions. The Lagrangian is given by
\begin{equation}
\mathcal{L}_1 = -\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^{r} + \lambda\Big(\sum_{i= 1}^{|\mathcal{I}|} \phi_i - 1\Big) - \boldsymbol{\eta}^T\boldsymbol{\phi}.
\end{equation}
Then, we know that
\begin{equation}
\frac{\partial\mathcal{L}_1 }{\partial \phi_j} = -r\bar{q}_j\phi_j^{r-1} + \lambda - \eta_j =0
\end{equation}
for any $j\in \mathcal{I}$, where $\eta_j$ is the $j$-th component of $\boldsymbol{\eta}$. Moreover, $\phi_j = 0$ satisfies the $j$-th component of the equilibrium equation \eqref{choice TO}. Considering the indices $j\in \mathcal{I}$ with $\phi_j \neq 0$ (which implies $\eta_j = 0$ by complementary slackness), we have
\begin{equation}
\frac{\partial\mathcal{L}_1 }{\partial \phi_j} = -r\bar{q}_j\phi_j^{r-1} + \lambda =0.
\end{equation}
Multiplying both sides of the above equation by $\phi_j$, we have
\begin{equation}
-r\bar{q}_j\phi_j^r + \lambda\phi_j =0. \label{52}
\end{equation}
Hence,
\begin{equation}
\sum_{j=1}^{|\mathcal{I}|} -r\bar{q}_j\phi_j^r + \sum_{j=1}^{|\mathcal{I}|}\lambda\phi_j = \sum_{j=1}^{|\mathcal{I}|} -r\bar{q}_j\phi_j^r + \lambda =0,
\end{equation}
which implies
\begin{equation}
\lambda =\sum_{j=1}^{|\mathcal{I}|} r\bar{q}_j\phi_j^r.
\end{equation}
Therefore, by \eqref{52}, we obtain
\begin{equation}
\phi_j =\frac{r\bar{q}_j\phi_j^r}{\lambda} = \frac{\bar{q}_j\phi_j^r}{\sum_{k=1}^{|\mathcal{I}|} \bar{q}_k\phi_k^r},
\end{equation}
which is exactly the equilibrium equation \eqref{choice TO}. This concludes the proof.
\end{proof}
It remains to check whether, given a trial-offer market model, the equilibrium to which the dynamic converges is exactly the maximiser of \eqref{TotalUtilityProblem}. The following corollary verifies that this is indeed the case.
\begin{cor}
If $r<1$, the unique interior equilibrium $\boldsymbol{\phi}^* \in \widetilde{\Delta}$ is the unique maximiser of \eqref{TotalUtilityProblem}.
If $r=1$ and $\bar{q}_1 > \bar{q}_2 >\ldots > \bar{q}_{|\mathcal{I}|}$, the unique equilibrium $\boldsymbol{\phi}^* \in \widetilde{\Delta}$ is the unique maximiser of \eqref{TotalUtilityProblem}. \label{Cor1}
\end{cor}
\begin{proof}
If $r<1$, let $Q = \{i\in \mathcal{I} : \phi_i \neq 0\}$; by Theorem 5.1 of \cite{maldonado2018popularity}, the equilibrium is given by
\begin{equation}
\phi^*_j = \frac{\bar{q}_j^{\frac{1}{1-r}}}{\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}}}, \quad j\in Q.
\end{equation}
Then, the value of the objective function evaluated at this point is
\begin{equation}
\sum_{j\in Q} \frac{\bar{q}_j^{\frac{1}{1-r}}}{(\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}})^r} = \Big(\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}}\Big)^{1-r} = \Vert \bar{q}_Q \Vert_{\frac{1}{1-r}},
\end{equation}
where $\bar{q}_Q$ is the vector concatenating all $\bar{q}_j$ with $j\in Q$, and the last equality holds because $\bar{q}$ has no zero components. Since the $\ell_{\frac{1}{1-r}}$-norm can only increase when positive components are added, the function value is maximised when $Q = \mathcal{I}$, which implies that the maximum of \eqref{TotalUtilityProblem} is attained at the unique interior equilibrium.\\
If $r=1$, it is clear that the maximum of the objective function is $\bar{q}_1$, which is attained by taking $\boldsymbol{\phi} = \mathbf{e}_1$.
\end{proof}
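As a quick numerical sanity check of the closed form above, the following Python sketch (with hypothetical values for $\bar{q}$ and $r$) verifies that $\phi^*_j \propto \bar{q}_j^{1/(1-r)}$ is a fixed point of the equilibrium map:
\begin{verbatim}
import numpy as np

qbar = np.array([0.9, 0.5, 0.3])   # hypothetical values, no zeros
r = 0.6                            # feedback strength r < 1
phi = qbar ** (1 / (1 - r))
phi /= phi.sum()                   # closed-form interior equilibrium
rhs = qbar * phi ** r
rhs /= rhs.sum()                   # one application of the map
print(np.allclose(phi, rhs))       # True: phi is a fixed point
\end{verbatim}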
\section{Objective function for heterogeneous case}\label{Objective function for heterogeneous case}
We present the proof of \Cref{prop:Nash-SW} below.
\begin{proof}
Let $f$ be the logarithm of the objective function; then the partial derivatives are given by
\begin{equation}
\frac{\partial f}{\partial \phi_j} = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i-1}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}}.
\end{equation}
We note that, if $\phi_j =0$ for some $j\in \mathcal{I}$, then this component clearly satisfies the equilibrium equation. Hence, we consider the set $Q \subseteq \mathcal{I}$ of all item indices $j$ with $\phi_j \neq 0$, so that $\sum_{j\in Q} \phi_j= 1$. By the KKT conditions, we have
\begin{equation}
\sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i-1}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda = 0, \label{KKT_multi}
\end{equation}
for all $j\in Q$. Multiplying \eqref{KKT_multi} by $\phi_j$ and summing over $j\in Q$, we get
\begin{equation}
\begin{aligned}
0 & = \sum_{j\in Q}\sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda\sum_{j\in Q} \phi_j\\
& = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{\sum_{j\in Q}q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda\\
& = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^* + \lambda,\\
\end{aligned}
\end{equation}
where the last equality holds because $\phi_k = 0$ for all $k\notin Q$. Plugging $\lambda = -\sum_{i=1}^{|\mathcal{U}|} w_ia_i^*$ back into \eqref{KKT_multi} and multiplying by $\phi_j$, we get
\begin{equation}
\phi_j = \sum_{i=1}^{|\mathcal{U}|} \frac{w_ia_i^*}{\sum_{h=1}^{|\mathcal{U}|}w_ha_h^*} \cdot \frac{q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}},
\end{equation}
which is exactly the equilibrium equation.
\end{proof}
\section{Special case: $v_{ij} = v_j$ for heterogeneous setup}\label{Special case: $v_{ij} = v_j$ for heterogeneous setup}
\begin{prop}
Suppose $v_{ij} = v_j$ for all $i\in \mathcal{U}$ and $j \in \mathcal{I}$, then the market dynamics converge and the equilibrium can be written as
\begin{equation}
\phi^*_j = \frac{(\bar{q}_jv_j)^{\frac{1}{1-r}}}{\sum_{k=1}^{|\mathcal{I}|}(\bar{q}_kv_k)^{\frac{1}{1-r}}},
\end{equation}
where $\bar{q}_j = \sum_{i=1}^{|\mathcal{U}|}w_iq_{ij}$.
\end{prop}
\begin{proof}
For any $j\in \mathcal{I}$, the market dynamics can be written as
\begin{equation}
\begin{aligned}
p^{mult}_j(\mathbf{s}^t) &= \sum_{i=1}^{|\mathcal{U}|} \frac{w_ia_i^t}{\sum_{h=1}^{|\mathcal{U}|}w_ha_h^t}\frac{q_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{k}(\phi_k^t)^{r}}\\
& = \sum_{i=1}^{|\mathcal{U}|} \frac{w_i(\sum_{l=1}^{|\mathcal{I}|}q_{il}v_l(\phi_l^t)^r)}{\sum_{h=1}^{|\mathcal{U}|}w_h(\sum_{l=1}^{|\mathcal{I}|}q_{hl}v_l(\phi_l^t)^r)}\frac{q_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{k}(\phi_k^t)^{r}}\\
& = \sum_{i=1}^{|\mathcal{U}|} \frac{w_iq_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{h=1}^{|\mathcal{U}|}w_h(\sum_{l=1}^{|\mathcal{I}|}q_{hl}v_l(\phi_l^t)^r)}\\
& = \frac{\bar{q}_jv_{j}(\phi_j^t)^{r}}{\sum_{l=1}^{|\mathcal{I}|}\bar{q}_lv_l(\phi_l^t)^r},
\end{aligned}
\end{equation}
which is in the same form as the single user case, hence the convergence result follows.
\end{proof}
\section{Results on Mirror Descent} \label{Results on Mirror Descent}
To analyse this special case, we use a similar approach to \cite{cheung2018dynamics}, which seeks an objective function such that the market dynamics can be related to mirror descent. Before analysing this special case, we first present some preliminary results on mirror descent.
\begin{defn}
Let $C$ be a compact and convex set. Given a differentiable convex function $h$ with domain $C$, the Bregman divergence generated by $h$, denoted by $d_h$, is defined as
\begin{equation}
d_h(\mathbf{x},\mathbf{y}) = h(\mathbf{x}) - h(\mathbf{y}) - \langle\nabla h(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle,
\end{equation}
where $\mathbf{x} \in C$, $\mathbf{y} \in \text{rint}(C) = \{\mathbf{y}\in C: \text{for all } \mathbf{z}\in C\text{, there exists some }\lambda>1 \text{ such that } \lambda \mathbf{y}+ (1-\lambda)\mathbf{z} \in C\}$.
\end{defn}
\begin{defn}
The function $f$ is $(\sigma,L)$-Bregman strongly convex with respect to the Bregman divergence $d_h$ if for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\begin{equation}
\begin{aligned}
f(\mathbf{y}) + &\langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + \sigma \cdot d_h(\mathbf{x},\mathbf{y}) \\
&\leq f(\mathbf{x}) \leq f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\end{aligned}
\end{equation}
\end{defn}
The mirror descent rule with respect to the Bregman divergence $d_h$ is given by the following,
\begin{equation}
g (\mathbf{x}^t) = \arg\min_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle + \frac{1}{\alpha_t}d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MirrorDescentMinimiser}
\end{equation}
\begin{equation}
\mathbf{x}^{t+1}_{\text{standard}} = g(\mathbf{x}^t). \label{MirrorDescentStandard}
\end{equation}
In addition, we also define the \emph{weighted} mirror descent rule as follows,
\begin{equation}
\mathbf{x}^{t+1}_{\text{weighted}} = \beta_tg(\mathbf{x}^t) + (1-\beta_t)\mathbf{x}^t.\label{MirrorDescentWeighted}
\end{equation}
First, we give the convergence result for standard mirror descent.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.1]
Suppose that $f$ is $(\sigma,L)$-strongly Bregman convex with respect to $d_h$. With the update rule \eqref{MirrorDescentStandard} and step size $\alpha_t = 1/L$, for all $t\geq1$,
\begin{equation}
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{\sigma}{(\frac{L}{L-\sigma})^t -1}\cdot d_h(\mathbf{x}^*, \mathbf{x}^0).
\end{equation}
\label{Mirror_descent_convergence_thm_standard}
\end{thm}
For general Bregman convex functions, we also have the following result.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman convex function with respect to $d_h$, and $\mathbf{x}^T$ is the point reached after $T$ applications of the mirror descent update rule \eqref{MirrorDescentStandard}. Then,
\begin{equation}
f(\mathbf{x}^T) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{T}.
\end{equation}
\end{thm}
Next, going beyond the existing results for the standard case, we analyse the weighted update. It turns out that the weighted update rule also converges to the global minimum of $f$ under certain assumptions.
\begin{thm}
Suppose that $f$ is $(\sigma,L)$-strongly Bregman convex with respect to $d_h$, and that $d_h(\mathbf{x},\mathbf{y})$ is convex in both variables. Under the conditions $\sum_{t=0}^{\infty}\beta_t = \infty$ and $0<\beta_t\leq1$ for all integers $t\geq0$, the update rule \eqref{MirrorDescentWeighted} with step size $\alpha_t = 1$ converges to the global minimum of $f$; that is,
\begin{equation}
\lim_{t\rightarrow \infty} \mathbf{x}_{\text{weighted}}^t = \mathbf{x}^*.
\end{equation} \label{theorem_weighted_converge}
\end{thm}
The proof will rely on the following lemmas.
\begin{lem}
Suppose that $f$ is $(\sigma,L)$-strongly Bregman convex with respect to $d_h$, and that $d_h(\mathbf{x},\mathbf{y})$ is convex in both variables.
With the update rule \eqref{MirrorDescentWeighted},
\begin{equation}
f(\mathbf{x}^{t+1}) \leq f(\mathbf{x}^t).
\end{equation}
\end{lem}
\begin{proof}
By Bregman strong convexity, we have
\begin{equation}
\begin{aligned}
f(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) &\leq \innerProd{\nabla f(\mathbf{x}^t)}{\mathbf{x}^{t+1} - \mathbf{x}^t} + L\cdot d_h(\mathbf{x}^{t+1}, \mathbf{x}^t)\\
&\leq \beta_t \innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} + \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t).
\end{aligned}
\end{equation}
where the second inequality uses $\mathbf{x}^{t+1} - \mathbf{x}^t = \beta_t(g(\mathbf{x}^t) - \mathbf{x}^t)$ and the convexity of $d_h$ in its first variable. Since $g(\mathbf{x}^t)$ minimises the objective in \eqref{MirrorDescentMinimiser}, its value is at most the value at $\mathbf{x} = \mathbf{x}^t$, i.e.\ $\innerProd{\nabla f(\mathbf{x}^t)}{g(\mathbf{x}^t) - \mathbf{x}^t} + L\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t) \le 0$, and the result follows.
\end{proof}
\begin{lem}[\cite{birnbaum2011distributed}, Lemma 9]
If $x^+$ is the optimal solution to the following optimisation problem
\begin{equation}
\begin{aligned}
\min \ \varphi(x) + d_h(x,y),\\
\text{subject to } x\in C,
\end{aligned}
\end{equation}
where $\varphi(x)$ is a convex function and $C$ is a compact convex set. Then,
\begin{equation}
\varphi(x)+ d_h(x,y) \geq \varphi(x^+)+ d_h(x^+,y) + d_h(x,x^+).
\end{equation} \label{lemma 17}
\end{lem}
\begin{proof}[Proof of \Cref{theorem_weighted_converge}]
By Bregman strong convexity, we have
\begin{equation}
\begin{aligned}
f(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) \leq &\beta_t \innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} \\
&+ \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t).
\end{aligned}
\end{equation}
With the above lemma, taking $x^+ = g(\mathbf{x}^{t}),\ y = \mathbf{x}^t,\ x= \mathbf{x}^*$, we have
\begin{equation}
\begin{aligned}
\beta_t&\innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} + \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t)\\
&\leq \beta_t\innerProd{\nabla f(\mathbf{x}^t)}{ \mathbf{x}^* - \mathbf{x}^t} + \beta_tL\cdot (d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
Using Bregman strong convexity, we also have
\begin{equation}
\innerProd{\nabla f(\mathbf{x}^t)}{ \mathbf{x}^* - \mathbf{x}^t} \leq f(\mathbf{x}^*) - f(\mathbf{x}^t) - \sigma d_h(\mathbf{x}^*,\mathbf{x}^t).
\end{equation}
Combining the above inequalities, we obtain
\begin{equation}
\begin{aligned}
f&(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) \leq \beta_t(f(\mathbf{x}^*) - f(\mathbf{x}^t) - \sigma d_h(\mathbf{x}^*,\mathbf{x}^t)) \\
&+ \beta_tL\cdot (d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
Adding $f(\mathbf{x}^t) - f(\mathbf{x}^*)$ on both sides,
\begin{equation}
\begin{aligned}
f&(\mathbf{x}^{t+1}) - f(\mathbf{x}^*) \leq (1-\beta_t)(f(\mathbf{x}^t) - f(\mathbf{x}^*))\\
&+ \beta_t\cdot ((L-\sigma)d_h(\mathbf{x}^*, \mathbf{x}^t) - Ld_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
Since $f(\mathbf{x}^{t+1}) \leq f(\mathbf{x}^{t})$ and $0 \leq 1-\beta_t \leq 1$, rearranging the above inequality gives
\begin{equation}
\beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq (f(\mathbf{x}^t) - f(\mathbf{x}^{t+1})) + \beta_t\cdot ((L-\sigma)d_h(\mathbf{x}^*, \mathbf{x}^t) - Ld_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{equation}
Since $d_h$ is convex in its second variable, we also have
\begin{equation}
\begin{aligned}
d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*,\mathbf{x}^{t+1}) &\geq d_h(\mathbf{x}^*, \mathbf{x}^t)- (1-\beta_t)d_h(\mathbf{x}^*,\mathbf{x}^{t})\\
&\quad - \beta_td_h(\mathbf{x}^*,g(\mathbf{x}^{t})) \\
&= \beta_t(d_h(\mathbf{x}^*, \mathbf{x}^t)- d_h(\mathbf{x}^*,g(\mathbf{x}^{t}))).
\end{aligned}
\end{equation}
Hence,
\begin{equation}
\beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq (f(\mathbf{x}^t) - f(\mathbf{x}^{t+1})) + L(d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*,\mathbf{x}^{t+1})).
\end{equation}
The RHS is a telescoping sum, therefore
\begin{equation}
\sum_{t=0}^{N} \beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq (f(\mathbf{x}^0) - f(\mathbf{x}^{N+1})) + Ld_h(\mathbf{x}^*,\mathbf{x}^0) \leq (f(\mathbf{x}^0) - f(\mathbf{x}^*)) + Ld_h(\mathbf{x}^*,\mathbf{x}^0),
\end{equation}
which yields
\begin{equation}
\lim_{N\rightarrow \infty}\sum_{t=0}^{N} \beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) < \infty.
\end{equation}
By the fact that $\sum_{t=0}^{\infty}\beta_t = \infty$, we have,
\begin{equation}
\liminf _{t\rightarrow \infty}(f(\mathbf{x}^{t}) - f(\mathbf{x}^*)) = 0.
\end{equation}
Finally, since $f(\mathbf{x}^{t})$ is non-increasing, the $\liminf$ is in fact a limit, i.e.\ $f(\mathbf{x}^{t}) \rightarrow f(\mathbf{x}^*)$, which concludes the proof.
\end{proof}
\section{The Robbins-Monro Algorithm}\label{sec:supp-RMA}
\subsection{Preliminaries on the Robbins-Monro Algorithm}
In this part, we summarise the existing results on Robbins-Monro algorithms that we will need.
\begin{defn}[\cite{benaim1999dynamics}, section 4.2]
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\{\mathcal{F}_n\}_{n\in \mathbb{N}} \subset \mathcal{F}$ a filtration. A stochastic process $\{z_n\}_{n\in \mathbb{N}} \subset \{\mathbb{R}^m\}^{\mathbb{N}}$ is a Robbins-Monro algorithm if it is of the form
\begin{equation}
z_{n+1} = z_n + \gamma_{n+1}(F(z_n) + U_{n+1}),
\end{equation}
where $F: \mathbb{R}^m \rightarrow \mathbb{R}^m$ is continuous. Furthermore, it should satisfy the following conditions:
\begin{itemize}
\item[(i)] $\{\gamma_n\}_{n\in\mathbb{N}}$ is a deterministic sequence.
\item[(ii)] $\{U_n\}_{n\in \mathbb{N}}$ is adapted, and $\mathbb{E}[U_{n+1} |\mathcal{F}_n] = 0$.
\end{itemize}
\end{defn}
Then we need the notion of asymptotic pseudotrajectories. Formulating this notion requires the concept of a semiflow, which is a flow restricted to times $t\geq 0$.
\begin{defn}[\cite{benaim1999dynamics}, section 3]
A semiflow $\Phi$ on a metric space $(M,d)$ is a continuous map
\begin{equation}
\begin{aligned}
\Phi: \mathbb{R}_+ \times M \rightarrow M, \\
\Phi(t,x) = \Phi_t(x)
\end{aligned}
\end{equation}
such that
\begin{equation}
\Phi_0 = id, \ \Phi_{t+s} = \Phi_{t} \circ \Phi_{s}
\end{equation}
for all $(t,s) \in \mathbb{R}_+ \times \mathbb{R}_+$.
\end{defn}
Next, for a (semi)flow (potentially induced by a vector field), we define the notion of an asymptotic pseudotrajectory. Roughly, it is a trajectory that stays arbitrarily close to the flow over time windows of any fixed length as $t$ tends to infinity.
\begin{defn}[\cite{benaim1999dynamics}, section 3]
A continuous function $X: \mathbb{R}_+ \rightarrow M$ is an asymptotic pseudotrajectory for $\Phi$ if
\begin{equation}
\lim_{t \rightarrow \infty} \sup_{0\leq h \leq T} d(X(t+h) , \Phi_h(X(t))) = 0
\end{equation}
for any $T>0$.
\end{defn}
Back to our main problem, our ultimate goal is to study the trajectory of a discrete-time process $\{z_n\}_{n\in\mathbb{N}} \subset \mathbb{R}^m$ adapted to the filtration $\{\mathcal{F}_n\}_{n\in\mathbb{N}}$ which could be written as
\begin{equation}
z_{n+1} - z_n = \gamma_{n+1}(F(z_n) + U_{n+1}), \label{RM1}
\end{equation}
where $F \in C^\infty(\mathbb{R}^m,\mathbb{R}^m)$ is a smooth map, $\{\gamma_n\}_{n\in\mathbb{N}}$ are the step sizes satisfying \begin{equation}
\sum_{n=1}^{\infty}\gamma_n = \infty, \lim_{n\rightarrow \infty} \gamma_n = 0, \label{RM2}
\end{equation}
and $U_n \in \mathbb{R}^m $ are random perturbations satisfying $\mathbb{E}[U_{k+1}| \mathcal{F}_k] = 0$. For such a process, the ``trajectory'' can be defined as the interpolated curve connecting the sequence in $\mathbb{R}^m$ generated by the stochastic process. Set
\begin{equation}
\tau_0 = 0, \tau_n = \sum_{i=1}^{n}\gamma_i,
\end{equation}
where $n\geq 1$, and define the continuous-time affine interpolated process $Z(t)$ as
\begin{equation}
Z(\tau_n+s) = z_n +s\frac{z_{n+1} - z_n}{\tau_{n+1} - \tau_n}, \quad s\in[0,\gamma_{n+1}).
\end{equation}
Finally, we have the main tool for analysing such Robbins-Monro algorithms, summarised in the following theorem.
\begin{thm}[\cite{benaim1999dynamics}, section 4]
Let $F$ be a smooth vector field on $M$ whose integral curve around any point $p$ is unique. Assume that
\begin{enumerate}
\item $\sup_n \mathbb{E}[\Vert U_{n+1}\Vert^q] < \infty$ and $\sum_{n\in\mathbb{N}} \gamma_n^{1+\frac{q}{2}} < \infty$ for some $q\geq2$.
\item Either \begin{equation}
\sup_n{\Vert z_n\Vert} < \infty
\end{equation}
or $F$ is Lipschitz and bounded on a neighbourhood of $\{z_n: n\geq 0\}$.
\end{enumerate}
Then the interpolated process $Z$ is an asymptotic pseudotrajectory of the flow $\Phi$ induced by $F$ almost surely. \label{RM-converge-thm}
\end{thm}
\subsection{Formulation of the Robbins-Monro Algorithm}\label{Formulation of the Robbin-Monro Algorithm}
Define $D_{ij}^t$ as the number of purchases of item $j \in \mathcal{I}$ made by user group $i\in \mathcal{U}$ up to round $t\in \mathbb{N}$. Then let $\mathbf{D}^t \in \mathbb{R}^{|\mathcal{I}| \times |\mathcal{U}| + 1}$ be defined as $\mathbf{D}^t= [D_{\text{dum}}^t, D_{ij}^t]_{i\in \mathcal{U}, j \in \mathcal{I}}$, where $D_{\text{dum}}^t$ records the number of rounds with no purchase up to time $t$. Different from \cite{maldonado2018popularity}, we now consider all rounds, including those in which no purchase happens. Then the evolution of $\mathbf{D}$ can be written as
\begin{equation}
\mathbf{D}^{t+1} = \mathbf{D}^t + \mathbf{E}^{t+1},
\end{equation}
where $\mathbf{E}^{t+1} = [e^{t+1}_{\text{dum}},e^{t+1}_{ij}]_{i\in \mathcal{U}, j\in \mathcal{I}}$ contains $|\mathcal{I}| \times |\mathcal{U}| + 1$ random variables with $\mathbb{P}(e^{t+1}_{ij} = 1) =w_iq_{ij} \frac{v_{ij}(\sum_h{D_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{D_{hk}^t})^r}$ and $\mathbb{P}(e^{t+1}_{\text{dum}} = 1) = 1- \sum_{i,j}w_iq_{ij} \frac{v_{ij}(\sum_h{D_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{D_{hk}^t})^r}$. Note that these variables are not independent: $\Vert\mathbf{E}^{t+1}\Vert_1 =1$ for every $t$, since exactly one outcome (a purchase or no purchase) occurs at time $t+1$.
Assume $\Vert \mathbf{D}^0\Vert_1 \neq 0$, and define $\alpha_{ij}^t= \frac{D^t_{ij}}{\sum_{ij}D^t_{ij} + D_{\text{dum}}^t}$ and $\alpha_{\text{dum}}^t= \frac{D^t_{\text{dum}}}{\sum_{ij}D^t_{ij} + D_{\text{dum}}^t}$. We note that if $\lim_{t\rightarrow\infty}\alpha^t_{ij}$ exists for all $i,j$, then the limit of the market share exists, because the market share can be calculated as
\begin{equation}
\phi_j^t = \frac{\sum_{i}\alpha^{t}_{ij}}{\sum_{ij}\alpha^{t}_{ij}}.
\end{equation}
Let $\Pi^t = \sum_{ij}D^t_{ij} + D_{\text{dum}}^t$ and $\mathbf{A}^t = [\alpha_{\text{dum}}^t, \alpha_{ij}^t]_{i\in \mathcal{U}, j \in \mathcal{I}}$; then we have
\begin{equation}
(\Pi^t + 1)\mathbf{A}^{t+1} = \Pi^t\mathbf{A}^t + \mathbf{E}^{t+1}.
\end{equation}
Now we change variables by defining $x_{ij}^t = \frac{\alpha_{ij}^t}{q_{ij}}$. Similarly, define the concatenation $\mathbf{X}^t = [x^t_{ij}]_{i\in \mathcal{U}, j\in \mathcal{I}}$, and define the map $\varLambda$ elementwise by $\varLambda(\mathbf{X}^t)_{ij} = w_i \frac{v_{ij}(\sum_h{q_{hj}x_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{q_{hk}x_{hk}^t})^r}$. Elementwise, the preceding equation can be expressed as
\begin{equation}
\begin{aligned}
x^{t+1}_{ij} &= \frac{ \Pi^t}{\Pi^t + 1}x^t_{ij}+ \frac{ 1}{\Pi^t + 1}\frac{e_{ij}^{t+1}}{q_{ij}}\\
& = x^t_{ij} + \frac{ 1}{\Pi^t + 1}(\varLambda(\mathbf{X}^t)_{ij} - x^t_{ij}) + \frac{ 1}{\Pi^t + 1}\Big(\frac{e_{ij}^{t+1}}{q_{ij}} - \varLambda(\mathbf{X}^t)_{ij}\Big). \label{Robbin-Monro x}
\end{aligned}
\end{equation}
Let $\{\mathcal{F}_t\}_{t\in \mathbb{N}}$ be the filtration generated by $\{\mathbf{E}^s\}_{s\leq t}$. We note that
\begin{equation}
\begin{aligned}
\mathbb{E}[\frac{e_{ij}^{t+1}}{q_{ij}} | \mathcal{F}_t] &= w_i \frac{v_{ij}(\sum_h{D_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{D_{hk}^t})^r} \\
&=w_i \frac{v_{ij}(\sum_h{\alpha_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{\alpha_{hk}^t})^r} \\
&= w_i \frac{v_{ij}(\sum_h{q_{hj}x_{hj}^t})^r}{\sum_{k}v_{ik}(\sum_h{q_{hk}x_{hk}^t})^r} = \varLambda(\mathbf{X}^t)_{ij}.
\end{aligned}
\end{equation}
Then it is clear that \eqref{Robbin-Monro x} is a Robbins-Monro algorithm, and we can claim that the process \eqref{Robbin-Monro x} has the same limiting behaviour as the weighted mirror descent dynamic that we want to consider.
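The following Python sketch simulates the stochastic purchase process on a small hypothetical instance (all parameter values are illustrative), tracking the counts $D^t_{ij}$ together with the dummy count, and prints the resulting empirical market shares whose limiting behaviour is addressed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.6, 0.4]); r = 0.7          # hypothetical parameters
v = np.array([[1.0, 0.5, 0.2], [0.3, 1.0, 0.7]])
q = np.array([[0.9, 0.4, 0.6], [0.5, 0.8, 0.3]])

D = np.ones((2, 3))        # purchase counts D_ij, seeded at 1
D_dum = 0.0                # rounds with no purchase

for t in range(50000):
    share = D.sum(axis=0) ** r             # (sum_h D_hj)^r
    trial = v * share / (v * share).sum(axis=1, keepdims=True)
    probs = (w[:, None] * q * trial).ravel()   # P(e_ij = 1)
    p_dum = 1.0 - probs.sum()                  # P(no purchase)
    k = rng.choice(len(probs) + 1, p=np.append(probs, p_dum))
    if k < len(probs):
        D[np.unravel_index(k, D.shape)] += 1
    else:
        D_dum += 1

print(D.sum(axis=0) / D.sum())   # empirical market shares phi_j
\end{verbatim}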
\begin{prop}
Consider the following deterministic process
\begin{equation}
\widehat{\mathbf{X}}^{t+1} = \frac{ \Pi^t}{\Pi^t + 1}\widehat{\mathbf{X}}^t+ \frac{ 1}{\Pi^t + 1}\varLambda(\widehat{\mathbf{X}}^t) \label{deterministic RM}
\end{equation}
with $\widehat{\mathbf{X}}^0 = \mathbf{X}^0$. If the process $\{\widehat{\mathbf{X}}^t\}_{t\in \mathbb{N}}$ converges to $\mathbf{X}_\infty$, then
\begin{equation}
\lim_{t\rightarrow \infty} \mathbf{X}^t = \mathbf{X}_\infty
\end{equation} almost surely. \label{Deter-Sto-equi}
\end{prop}
\begin{proof}
We work with the Euclidean norm $\Vert \cdot \Vert_2$. By definition, \eqref{deterministic RM} is also a Robbins-Monro algorithm (with zero noise), and $\widehat{\mathbf{X}}^t$ lies in a compact set for every $t$ (the image of $\Delta$ under a linear transformation). Hence, by \Cref{RM-converge-thm}, it is an asymptotic pseudotrajectory of $\Phi_t$, where $\Phi_t$ is the flow generated by the ODE
\begin{equation}
\frac{d\mathbf{X}}{dt} = \varLambda(\mathbf{X}) - \mathbf{X}.
\end{equation}
The interpolated stochastic process $\mathbf{X}^t$ is likewise an asymptotic pseudotrajectory of $\Phi_t$, again by \Cref{RM-converge-thm}. Then, by the triangle inequality,
\begin{equation}
\Vert \widehat{\mathbf{X}}^t - \mathbf{X}^t\Vert_2 \leq \Vert \widehat{\mathbf{X}}^t - \Phi_t(\mathbf{X}_0) \Vert_2 + \Vert \mathbf{X}^t - \Phi_t(\mathbf{X}_0) \Vert_2.
\end{equation}
Taking $t\rightarrow\infty$ on both sides yields the result.
\end{proof}
\section{Calculation of objective function} \label{Calculation of objective function}
In this section, we show that the map $\varLambda(\mathbf{X})$ is the standard mirror descent update for the following optimisation problem in the special case $q_{ij} = q_j$ for all $i\in \mathcal{U}$.
\begin{equation}
\begin{aligned}
&\min \quad \Psi(\mathbf{X}) = \sum_{ij}x_{ij}\log(\frac{x_{ij}}{v_{ij}}) + (r-1)\sum_{ij}x_{ij} \\ &\quad\quad\quad\quad\quad- r\sum_{j}(\sum_{i} x_{ij})\log(\sum_i(q_jx_{ij})), \\
&\text{subject to } \sum_j x_{ij} = w_i, \ \sum_iw_i =1 \label{OBJ-MirrorDesecnt} \text{ and } x_{ij}\geq 0 \text{ for all $i,j$}.
\end{aligned}
\end{equation}
The following lemma illustrates the calculation of the mirror descent update rule for the above problem.
\begin{lem}
The dynamic $$\mathbf{X}^{t+1} = \varLambda(\mathbf{X}^t)$$ is the standard mirror descent update \eqref{MirrorDescentStandard} of the problem \eqref{OBJ-MirrorDesecnt} with step size $\alpha_t =1$ for all $t\in \mathbb{N}$.
\end{lem}
\begin{proof}
It is straightforward to see that
\begin{equation}
\frac{\partial \Psi}{\partial x_{ij}^t} = -\log v_{ij} - r\log(\sum_h q_jx_{hj}^t) + \log x_{ij}^t. \label{70}
\end{equation}
In our case, let $F$ denote the objective function in \eqref{MirrorDescentMinimiser}, then
\begin{equation}
\frac{\partial F}{\partial x_{ij}} = \frac{\partial \Psi}{\partial x_{ij}^t} + \log{x_{ij}} + 1 -\log{x_{ij}^t}.
\end{equation}
Using the Lagrange multiplier, we have
\begin{equation}
\frac{\partial \mathcal{L}}{\partial x_{ij}} = \frac{\partial \Psi}{\partial x_{ij}^t} + \log{x_{ij}} + 1 -\log{x_{ij}^t} + \lambda = 0.
\end{equation}
Solving for $x_{ij}$ yields
\begin{equation}
x_{ij} = \exp\Big(C - \frac{\partial \Psi}{\partial x_{ij}^t}\Big)x_{ij}^t, \label{73}
\end{equation}
where $C = -1-\lambda$. Noting the feasibility condition, we have
\begin{equation}
w_i = \sum_j x_{ij} = \sum_j \exp\Big(C - \frac{\partial \Psi}{\partial x_{ij}^t}\Big)x_{ij}^t.
\end{equation}
Therefore,
\begin{equation}
\exp(C) = \frac{w_i}{\sum_j\exp\Big( - \frac{\partial \Psi}{\partial x_{ij}^t}\Big)x_{ij}^t}.
\end{equation}
Plugging this into \eqref{73} and using \eqref{70}, the result follows directly.
\end{proof}
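The identity established in the lemma can also be checked numerically. The Python sketch below (with hypothetical random parameters and $q_{ij} = q_j$) evaluates both the map $\varLambda$ and one entropic mirror-descent step on $\Psi$ with $\alpha_t = 1$, enforcing the constraints $\sum_j x_{ij} = w_i$ by row normalisation, and confirms that the two coincide.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
w = np.array([0.6, 0.4]); r = 0.7            # hypothetical values
v = rng.uniform(0.1, 1.0, (2, 3))
qj = rng.uniform(0.1, 1.0, 3)                # q_ij = q_j case

X = rng.uniform(0.1, 1.0, (2, 3))
X = w[:, None] * X / X.sum(axis=1, keepdims=True)  # feasible point

def Lam(X):
    col = (qj * X).sum(axis=0) ** r          # (sum_h q_j x_hj)^r
    return w[:, None] * v * col / (v * col).sum(axis=1, keepdims=True)

def md_step(X):
    grad = -np.log(v) - r * np.log((qj * X).sum(axis=0)) + np.log(X)
    Y = X * np.exp(-grad)                    # entropic step, alpha = 1
    return w[:, None] * Y / Y.sum(axis=1, keepdims=True)

print(np.allclose(Lam(X), md_step(X)))      # True
\end{verbatim}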
It remains to show that the function $\Psi$ is $(\sigma,L)$-Bregman strongly convex for some $\sigma, L\geq 0$. The following lemma proves that the objective function indeed possesses this property.
\begin{lem}
The function $\Psi$ is $(1-r,1)$-Bregman strongly convex with respect to the KL divergence.
\end{lem}
\begin{proof}
We only need to consider the following expression
\begin{equation}
\delta = \Psi(\mathbf{X}) - \Psi(\mathbf{Y}) - \innerProd{\nabla \Psi(\mathbf{Y})}{\mathbf{X}- \mathbf{Y}}.
\end{equation}
By standard calculations, we have
\begin{equation}
\delta = KL\infdivx{\mathbf{X}}{\mathbf{Y}} - r\sum_j\Big(\sum_i x_{ij}\Big)\log\frac{\sum_ix_{ij}}{\sum_i y_{ij}}.
\end{equation}
By the log-sum inequality, $\sum_j(\sum_i x_{ij})\log\frac{\sum_i x_{ij}}{\sum_i y_{ij}} \leq KL\infdivx{\mathbf{X}}{\mathbf{Y}}$, hence $\delta \geq (1-r)KL\infdivx{\mathbf{X}}{\mathbf{Y}}$. Moreover, the subtracted term is itself a KL divergence between the aggregated (column-sum) distributions and is therefore non-negative, so $\delta \leq KL\infdivx{\mathbf{X}}{\mathbf{Y}}$, which concludes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:convergence-hetero}}\label{Proof of theorem 13}
For the case $v_{ij} = v_j$, the proof can be found in
\Cref{Special case: $v_{ij} = v_j$ for heterogeneous setup}.
For the case $q_{ij} = q_j$, by a change of variables, it suffices to
show that the RMA dynamic \eqref{Robbin-Monro x} converges to a unique limit (see
\Cref{Formulation of the Robbin-Monro Algorithm}). We then establish the equivalence between \eqref{Robbin-Monro x}
and the deterministic dynamic \eqref{deterministic RM} (see Proposition \ref{Deter-Sto-equi}). By
\Cref{Continuous_dynamics_as_mirror_descent}, the deterministic process can be regarded as the weighted mirror
descent described in \eqref{MirrorDescentWeighted} (see \Cref{Results on Mirror Descent}). Then \Cref{theorem_weighted_converge} yields the convergence result.
\section{Proof of Theorem \ref{thm-homo-convergence}}\label{Proof of theorem 10}
For the homogeneous case, we can reformulate the market share evolution as:
\begin{equation}
\begin{aligned}
\boldsymbol{\phi}^{t+1} &= \frac{t}{t+1}\boldsymbol{\phi}^t + \frac{1}{t+1}\mathbf{e}^{t+1}\\
&=\boldsymbol{\phi}^t + \frac{1}{t+1}(\mathbf{p}(\boldsymbol{\phi}^t) - \boldsymbol{\phi}^t) + \frac{1}{t+1}(\mathbf{e}^{t+1}-\mathbf{p}(\boldsymbol{\phi}^t)). \label{RMA-homo}
\end{aligned}
\end{equation}
This is an RMA. By considering its deterministic analog
\begin{equation}
\boldsymbol{\phi}^{t+1} =\boldsymbol{\phi}^t + \frac{1}{t+1}(\mathbf{p}(\boldsymbol{\phi}^t) - \boldsymbol{\phi}^t)
\end{equation}
which is equivalent to \eqref{RMA-homo} by the same arguments as in Proposition \ref{Deter-Sto-equi}. Hence, by considering the objective function \eqref{Efficiency Entropy Problem} (which is $(1-r,1)$-Bregman strongly convex), we conclude that this is a weighted mirror descent of the form \eqref{MirrorDescentWeighted_main}. Therefore, Theorem \ref{theorem_weighted_converge} yields the convergence.
\section{Additional simulation results}
\begin{figure*}[h]
\centering
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/hetero_error_bar.pdf}%
\end{minipage}%
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/homo_error_bar.pdf}%
\end{minipage}%
\caption{The evolution of market efficiency under different ranking strategies, for the heterogeneous (left) and homogeneous (right) settings. Each simulation is run for 300,000 new users; a measurement is taken every 1000 time steps. The solid lines denote the median, and the shaded areas represent the 25th to the 75th percentile of 50 repeated simulations.}
\label{fig:baseline}
\end{figure*}
\section{A Shorter Proof of Theorem \ref{thm-homo-convergence} and Theorem \ref{thm:convergence-hetero}}\label{sec:supp-shorter-proof}
\begin{defn}
Let $C$ be a compact and convex set. Given a differentiable convex function $h$ with convex domain $C$,
the Bregman divergence generated by $h$, denoted by $d_h$, is defined as
\[
d_h(\mathbf{x},\mathbf{y}) = h(\mathbf{x}) - h(\mathbf{y}) - \langle\nabla h(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle,
\]
where $\mathbf{x} \in C$, $\mathbf{y} \in \text{rint}(C) = \{\mathbf{y}\in C: \text{for all } \mathbf{z}\in C\text{, there exists some }\lambda>1 \text{ such that } \lambda \mathbf{y}+ (1-\lambda)\mathbf{z} \in C\}$.
\end{defn}
We assume that the function $h$ is strictly convex. Consequently, $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, and $d_h(\mathbf{x},\mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$.
\begin{defn}
A function $f$ is $L$-Bregman-convex with respect to the Bregman divergence $d_h$ if
for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\[
f(\mathbf{x}) ~\leq~ f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\]
\end{defn}
Given an $L$-Bregman-convex function $f$ with respect to the Bregman divergence $d_h$,
the mirror descent rule with respect to the Bregman divergence $d_h$ is given by $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$, where
\begin{equation}
g (\mathbf{x}^t) = \argmin_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle +L \cdot d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MDMinimizer}
\end{equation}
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman-convex function with respect to $d_h$, and $\mathbf{x}^t$ is the point reached after $t$ applications of the mirror descent update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$. Then for all $t\ge 1$,
\[
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{t}.
\]
\end{thm}
\begin{lem}\label{lem:strict-decrease}
Suppose $X^*$ is the set of minimum points of $f$ in $C$. If $\mathbf{x}\in C$ and $\mathbf{x} \notin X^*$, then $f(g(\mathbf{x})) < f(\mathbf{x})$.
\end{lem}
\begin{proof}
First, we show that $g(\mathbf{x}) \neq \mathbf{x}$. Suppose the contrary.
Then $\mathbf{x}$ is a fixed point of the update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$.
Setting $\mathbf{x}^0 = \mathbf{x}$, the above theorem gives, for any $\mathbf{x}^*\in X^*$,
$f(\mathbf{x}) - f(\mathbf{x}^*) \le \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x} )}{t}$ for any $t\ge 1$.
Since the LHS does not depend on $t$, this is possible only when $f(\mathbf{x}) - f(\mathbf{x}^*) = 0$
and hence $\mathbf{x} \in X^*$, a contradiction.
Let $\hat{f}(\mathbf{y}) = f(\mathbf{x}) + \langle\nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}\rangle +L \cdot d_h(\mathbf{y},\mathbf{x})$.
By the definition of $g$, we have $\hat{f}(g(\mathbf{x})) \le \hat{f}(\mathbf{x})$.
Since $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, $\hat{f}$ is strictly convex.
Since $g(\mathbf{x}) \neq \mathbf{x}$, for any $\mathbf{x}'$ on the line segment between $\mathbf{x}$ and $g(\mathbf{x})$, $\hat{f}(\mathbf{x}') < \hat{f}(\mathbf{x})$.
Then by the definition of $g$ again, we have $\hat{f}(g(\mathbf{x})) \le \hat{f}(\mathbf{x}') < \hat{f}(\mathbf{x}) = f(\mathbf{x})$.
Finally, since $f$ is $L$-Bregman-convex with respect to $d_h$, $f(g(\mathbf{x})) \le \hat{f}(g(\mathbf{x}))$.
This completes the proof of $f(g(\mathbf{x})) < f(\mathbf{x})$.
\end{proof}
To use the established theorems regarding Robbins-Monro algorithms, we consider the following ODE:
\[
\dot{\mathbf{x}} ~=~ g(\mathbf{x}) - \mathbf{x}.
\]
Due to \cref{lem:strict-decrease} and the convexity of $f$ (which gives $\innerProd{f'(\mathbf{x})}{g(\mathbf{x}) - \mathbf{x}} \le f(g(\mathbf{x})) - f(\mathbf{x})$),
we have $\innerProd{f'(\mathbf{x})}{g(\mathbf{x}) - \mathbf{x}} < 0$ for any $\mathbf{x} \notin X^*$.
Thus, the following theorem from \cite[Corollary 3]{Borkar2008} is applicable.
\begin{thm}
Consider an ODE $\dot{\mathbf{x}} = h(\mathbf{x})$.
Suppose there is a continuously differentiable function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ such that
(i) $\lim_{\|\mathbf{x}\|\rightarrow \infty} f(\mathbf{x}) = +\infty$;
(ii) the set of minimum points of $f$, $X^*$, is non-empty; and
(iii) $\innerProd{f'(\mathbf{x})}{h(\mathbf{x})} \le 0$ for all $\mathbf{x}$, with equality if and only if $\mathbf{x}\in X^*$.
Then almost surely, the Robbins-Monro algorithm of the ODE converges to a non-empty subset of $X^*$.
\end{thm}
\section{Conclusion}
\label{sec:conclusion}
This paper views the dynamics of cultural markets under an optimisation lens.
We identify new objective functions for trial-offer markets, and establish robust connections between social feedback signals and optimisation processes.
Our results narrow the gap between the theory and practice of recommender systems. In particular, they make the analysis of recommender systems more versatile by incorporating user-specific preferences, and offer a holistic view of market stability and efficiency beyond individual clicks and views.
Simulations using real-world user preferences confirm that markets with heterogeneous preferences are more stable and more efficient.
Our work leads to several open research questions, such as convergence rates of the stochastic T-O markets, analysis of general heterogeneous T-O settings, fairness properties of market equilibria, and describing markets that are also learning a recommender system in the loop \cite{mladenov2021recsim}. More generally, we hope the current work opens up new ways of asking and answering research questions at the intersection of classical markets and online attention.
\section{Empirical observations}\label{sect:experiments}
We simulate cultural markets using real-world preferences from the well-known MovieLens dataset~\cite{harper2015movielens}, in order to explore the efficiency and diversity of the market in homogeneous and heterogeneous settings, and under different ranking strategies\footnote{Code and data that reproduce results in this section is at \url{https://github.com/haiqingzhu543/Stability-and-Efficiency-of-Personalised-Cultural-Markets}}.
The simulations aim to answer key questions such as whether the heterogeneous T-O market is more efficient, whether the T-O market is stable as predicted in \Cref{sec:results}, and whether stability sacrifices diversity.
\begin{figure*}[bt]
\centering
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/eff_hetero_homo_modified.pdf}%
\end{minipage}%
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/entropy_vs_eff_modified.pdf}%
\end{minipage}%
\caption{Simulation results on Movielens dataset.
Each simulation is run for 300,000 time steps (each introducing a new user), a measurement is taken after every 1000 time steps. (Left) Market efficiency over time comparing the homogeneous vs heterogenous settings under three ranking strategies (random/popularity/quality). The lines denote the median of 50 simulations with different random initialisations. (Right) Efficiency over the entropy of market shares in heterogeneous setting. The lines denote the median of efficiency over 50 simulations with different random initialisations, markers denote iteration 1000, 100,000, 200,000 and 300,000, \lx{and error bars represent the 25th to 75th percentile range in both efficiency and entropy}. }
\label{fig:empiricalobs}
\end{figure*}
We set up the simulation using the MovieLens-100k dataset \cite{harper2015movielens}. This dataset consists of 100,000 ratings (valued 1-5) from 943 users on 1682 movies, where each user has rated at least 20 movies.
We performed matrix completion using incomplete SVD
\cite{bell2007lessons} via the {\tt Surprise} python package\footnote{\url{https://surpriselib.com}}, yielding a preference matrix $\Gamma = [\gamma_{ij}] \in \mathbb{R}^{943\times1682}$,
with $\gamma_{ij} \in [0,1]$ denoting the normalised preference of user $i\in \mathcal{U}$ for item (movie) $j \in \mathcal{I}$.
Denote $\mathcal{O}$ as the set consisting of all indices $(i,j)$ with $\gamma_{i,j}$ observed (in the MovieLens-100k dataset), and $\mathcal{\bar O}$ as the set of unobserved indices (entries estimated with incomplete SVD).\\
To simulate heterogeneous preference types, we then divide the users into $M$ non-overlapping subgroups based on user attributes.
Let $\bigcup_{i=1}^{M}\mathcal{U}_i= \mathcal{U}$, where $\mathcal{U}_i$ denotes the set of users in group $i$, and hence the weights are $w_i = |\mathcal{U}_i|/ | \mathcal{U}|$. We first calculate the {\em visibility} factor $\hat v_{ij}$ by averaging over the set of observed entries generated by user group $i$ using \eqref{eq:hatVij}.
We also calculate the {\em quality} factor $q_{ij}$ by averaging over the set of {\it unobserved} entries generated by user group $i$ using \eqref{eq:Qij}.
This choice reflects the intuition that, in a T-O market, the probability of purchase depends on a quality factor that is often unknown to the platform a priori, before a user tries an item. Other strategies for estimating $q_{ij}$ and $\hat v_{ij}$ are left for future work.
\begin{eqnarray}
\hat{v}_{ij} = \frac{1}{|\{(k,j)\in \mathcal{O} : k \in \mathcal{U}_i\}|} \sum_{(k,j)\in \mathcal{O}, k \in \mathcal{U}_i} \gamma_{kj}, \label{eq:hatVij}\\
q_{ij} = \frac{1}{|\{(k,j) \in \mathcal{\bar O} : k \in \mathcal{U}_i\}| }\sum_{(k,j) \in \mathcal{\bar O}, k \in \mathcal{U}_i} \gamma_{kj}. \label{eq:Qij}
\end{eqnarray}
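A minimal Python sketch of this computation is given below. Here \texttt{Gamma}, the boolean mask \texttt{observed} (entries in $\mathcal{O}$) and the user-to-group assignment are hypothetical inputs, and items with no observed (or no unobserved) entries in a group are zero-filled, a choice made purely for illustration.
\begin{verbatim}
import numpy as np

def group_factors(Gamma, observed, group, M):
    # Gamma: (n_users, n_items) normalised preferences.
    # observed: boolean mask of entries in O; group: user -> {0..M-1}.
    n_items = Gamma.shape[1]
    v_hat = np.zeros((M, n_items))
    q = np.zeros((M, n_items))
    for i in range(M):
        rows = group == i
        obs = observed[rows]
        with np.errstate(divide="ignore", invalid="ignore"):
            v_hat[i] = np.where(obs, Gamma[rows], 0).sum(0) / obs.sum(0)
            q[i] = np.where(~obs, Gamma[rows], 0).sum(0) / (~obs).sum(0)
    # Items without observed/unobserved entries in a group yield nan.
    return np.nan_to_num(v_hat), np.nan_to_num(q)
\end{verbatim}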
We cluster users into 100 groups using K-means on the rows of $\Gamma$.
This grouping is used to compare market efficiency and diversity in the homogeneous (all users have the same preferences) and heterogeneous (100 user groups) settings.
Results for other group compositions are in the supplement~\cite{supp3065}.
We also account for position bias in ranked lists~\cite{craswell2008experimental} in order to construct the actual visibility factor $v_{ij}$, which can vary over time, denoted as $v_{ij}^t$.
The prior {\sc MusicLab} model included position-bias parameters~\cite{krumme2012QuantifyingSI,maldonado2018popularity}
for the top-50 items, with zero visibility assigned to all other items.
This results in a list of fixed weights $\iota_k$, $k=1,\ldots,1682$, where $\iota_{1},\ldots,\iota_{50} > 0$ and $\iota_{51},\ldots,\iota_{1682} = 0$.
We adopt the separable click-through rate (CTR) model commonly used in modelling auctions \cite{aggarwal2006truthful}, which simply multiplies the inherent visibility by a ranking factor $\eta_{ij}^t$ of item $j$ presented to user $i$ at time $t$. Different ranking strategies produce different $\eta_{ij}^t$ to modulate the visibility term $\hat{v}_{ij}$, resulting in different probability distributions in the trial phase (equation \eqref{eq:mnlchoice}).
\begin{equation}
v_{ij}^t = \hat{v}_{ij}\eta_{ij}^t.
\end{equation}
We introduce the following three ranking strategies that define the relationship between $\iota_k$ and $\eta_{ij}^t$; a code sketch of these mappings follows the list.
\begin{itemize}[leftmargin=*]
\item Random-ranking. Upon each simulation round, the ranking of items is randomly and uniformly drawn from a permutation of indexes $\{1, 2, \ldots,1682\}$. That is, the visibility term $v_{ij}$ is set to the product of $\hat v_{ij}$ and a random element of $\iota$. This ranking changes for each simulation step. One expects such a strategy to promote diversity while preserving some information on item appeal through $\hat v_{ij}$.
\item Popularity-ranking. This strategy sorts the items by descending market share $\boldsymbol{\phi}^t$, and set $\eta_{ij}^t = \iota_k$, with $k = \arg \text{sort}_{desc} \{\boldsymbol{\phi}^t\} [j]$. This ranking will change over the simulation steps, and is analogous to the original {\sc MusicLab} experimental setting~\cite{salganik2006ExperimentalSO}. One expects this strategy to be unstable due to the randomness early in the simulation, since it could accidentally promote items that users do not like to the top, resulting in the high quality items being buried.
\item Quality-ranking. Denote the descending sorting rank of item $j$ among user group $i$ as $k = \arg \text{sort}_{desc} \{q_{i, :}\} [j]$, where $q_{i, :}$ is the one-dimensional array of quality factors in group $i$, and $[j]$ denotes array indexing. Then the position bias factor is set to the corresponding item, and $\eta_{ij}^t = \iota_k$. This ranking does not change over the simulation steps, since both $q_{ij}$ and its sorted order remain fixed. One expects this strategy to best align visibility with the underlying quality metrics (unobserved before trying), since it has {\em oracle} access to $q_{ij}$, and should yield high efficiency.
\end{itemize}
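The code sketch below makes the three strategies concrete as mappings from the position-bias weights $\iota$ to $\eta^t_{ij}$; the particular $1/k$ decay used to fill $\iota$ is an assumption for illustration, not the calibrated weights used in our experiments.
\begin{verbatim}
import numpy as np

def position_weights(n_items, top=50):
    # Positive weights for the top positions, zero elsewhere;
    # the 1/k decay shape is an illustrative assumption.
    iota = np.zeros(n_items)
    iota[:top] = 1.0 / np.arange(1, top + 1)
    return iota

def eta_random(iota, rng):
    # Random-ranking: permute the position weights each round.
    return iota[rng.permutation(len(iota))]

def eta_popularity(iota, phi):
    # Popularity-ranking: the item with the k-th largest market
    # share receives weight iota_k.
    ranks = np.argsort(np.argsort(-phi))
    return iota[ranks]

def eta_quality(iota, q_i):
    # Quality-ranking (oracle): sort by the group's quality factors.
    ranks = np.argsort(np.argsort(-q_i))
    return iota[ranks]
\end{verbatim}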
In each round of the simulation, one new user arrives at the market and chooses an item for a \emph{trial} according to the multinomial logit (equation \eqref{eq:mnlchoice}). Then the user chooses whether to \emph{purchase} this particular item by flipping a biased coin parameterized by $q_{ij}$.
Note that these new users are {\it generalisations} of the $M$ groups of user populations in MovieLens via attributes $q_{ij}$ and $\hat v_{ij}$, rather than being subsets (or samples) from the original 943 users.
This setting is consistent with other theoretical and simulation studies of cultural market and recommender systems~\cite{krumme2012QuantifyingSI,jiang2019degenerate}. We report {\em market efficiency} as the empirical version of \Cref{defn:efficiency}, namely, the fraction of users who made a purchase. We also measure diversity among the set of items, by computing the entropy of market share $\boldsymbol{\phi}$ (\Cref{eq:entropy}) at each time step. We explore the relationship between these two metrics.
\Cref{fig:empiricalobs} summarises the trends of efficiency and diversity over the two settings -- homogeneous/heterogeneous -- and three ranking strategies -- random/popularity/quality. In the left panel, the quality-ranking oracle has the highest efficiency among the three ranking strategies, followed by popularity ranking, while random ranking has the lowest efficiency once there is a sufficient number of users.
\lx{Taking into account heterogeneous user preferences improves efficiency in both quality and popularity ranking settings, despite the gap being smaller than that between the two ranking strategies.}
\Cref{fig:empiricalobs} (right) compares both efficiency and diversity (measured by the entropy of market shares) at different iterations for different ranking strategies in the heterogeneous setting.
\lx{For popularity and quality rankings, item diversity is decreasing over time (curves moving left) while efficiency increases (curve moves slightly up).
Comparing the two, quality ranking yields more diversity across the items (higher entropy) and is more stable (smaller spread on both dimensions).
This observation corroborates \Cref{prop:Nash-SW} that the heterogeneous \textsc{MusicLab} objective is ex-post concave.
Popularity ranking results in larger variations in both efficiency and entropy, confirming observations in the original \textsc{MusicLab} experiment~\cite{salganik2006ExperimentalSO} -- that market allocation is unstable under random initialisations, resulting in market dominance by a few popular items.}
\lx{As a control group, random ranking yields the lowest efficiency and no apparent differences between homogeneous and heterogeneous user preferences. In this setting, efficiency improves slightly due to the joint effect of both visibility and quality terms. But item diversity stays close to the theoretical maximum of entropy ($\ln(1682) \approx 7.4$ nats) over time, indicating that the random ranking with cut-off at the top 50 items plays a larger role in user choice than the signals present in the visibility and quality terms.}
\section{Introduction}
Online content platforms are major sources for everyday entertainment, and the attention allocation within a platform provides a market setting of interest.
There are strong social and economic motivations for the stakeholders to understand these markets.
The platform providers ask how they can improve user experience and raise profits.
They may also want to enforce diversity of the cultural products, so as to achieve a sustainable business model.
The content producers are interested in strategies to improve their products, so as to gain more popularity and raise revenues.
Regulatory bodies and the user population seek to understand the market dynamic and its drivers, with the goal of making attention markets more transparent, accountable and fair.
However, understanding such markets is challenging, since their outcomes are susceptible to intricate social influence dynamics among customers,
which in turn are affected by the recommender systems used by the platform providers.
While various analyses have been done for the classical markets (e.g., Arrow-Debreu/exchange markets, Fisher markets),
the online cultural markets (or more generally, \emph{attention economies}) have several key aspects different from the classical markets,
so it is not clear if
known analyses directly apply.
In classical markets, goods are scarce. Their prices act as the coordination signals to balance demand and supply.
Typically, there exist moderate prices that lead to equilibrium. In online cultural markets, however,
the digital goods can be reproduced with essentially no cost, so their supplies are unlimited.
Users' attention is the scarce commodity that the producers compete for.
Typically, the influence dynamics and recommender systems tend to \emph{cascade}, i.e.~to promote goods that have already received high attention.
This is known to
lead to polarizing and unpredictable outcomes.
Since the seminal empirical work by Salganik et al.~\cite{salganik2006ExperimentalSO}, now dubbed \textsc{MusicLab},
several mathematical models have been proposed to describe it~\cite{krumme2012QuantifyingSI,abeliuk2015benefits,maldonado2018popularity}, but all of them assume that user preferences are homogeneous. This is in stark contrast to the rich literature and wide-spread practice of recommender systems that are focused on estimating and catering to heterogeneous preferences in user populations.
On the other hand, recent work on classical markets (especially Fisher markets) offers a range of results for understanding equilibria from an algorithmic and optimization perspective~\cite{Zhang2011PR,birnbaum2011distributed,cheung2018dynamics}.
One may wonder: can recent results on Fisher markets be extended to describe the implicit computations in cultural markets?
Specifically for the \textsc{MusicLab} model~\cite{krumme2012QuantifyingSI}, customers' behaviors are characterized by a two-step trial-offer (T-O) process: first,
they select a product to try; second, they decide to purchase or not.
The first step is a stochastic process where the randomness depends on the intrinsic \emph{appeal} of the products,
and also the history of past purchases by customers, creating a feedback loop.
The second step is random depending on the intrinsic \emph{quality} to the customer.
\begin{figure}[bt]
\centering
\includegraphics[width=0.46\textwidth]{figs/teaser_new.pdf}%
\vspace{-0.3cm}
\caption{A core contribution of this paper is to provide an optimisation view (Top) of cultural markets (Bottom), which affords new results on stability, efficiency and equilibrium behavior. (Bottom) An illustration of a cultural market with several types of users interacting with a few items (color similarity between users and items indicates differing matches in preferences). Users allocate attention to the items based on a proportional-response-esque dynamic \eqref{choice TO hetero}. }
\label{fig:bigalpha}
\vspace{-0.2in}
\end{figure}
\subsection* {Our Contributions}
The main theme of this work is to establish several robust connections between stochastic T-O markets and optimization, and to use these connections to \marco{rigorously} show that the influence dynamics in these markets are stable.
For the homogeneous markets, we discover two objective functions for which the equilibrium of a T-O market
is a maximizer. The first objective is the ``total utility'' of the market.
The second objective is of particular interest due to its natural interpretation.
It is a weighted sum of the efficiency and the diversity of the market shares in the market, as measured by the Shannon entropy.
While efficiency is a natural benchmark, diversity in cultural market is also important for the healthy development of the platform.
Diversity not only broadens the customer base, but also provides financial support that keeps the less popular producers in the cultural industry.
Thus, it is in the platform providers' interest to strike a balance between efficiency and diversity.
Interestingly, we show that the influence dynamic is indeed equivalent to stochastic mirror descent on the second objective. This suggests the dynamic is implicitly optimizing \marco{the} natural objective in the market.
A significant consequence is that this allows us to present a new proof of a result of Maldonado et al.~\cite{maldonado2018popularity}: the dynamic converges to an equilibrium of the market almost surely.
For heterogeneous markets, we show that the equilibrium optimizes an \emph{ex-post} version of Nash social welfare. In classical Fisher markets, Nash social welfare is the product of users' utilities, where each utility is raised to the power of the user's budget. In our case, the power is the budget times the \emph{efficiency} for that user at the equilibrium. Then we turn our focus to two interesting sub-classes of heterogeneous markets, namely (i) the users have the same appeals on the items, but they perceive the qualities of the items differently; (ii) the users perceive the same qualities of the items, but they have different appeals on the items. For (i), we show that it is equivalent to a homogeneous market. For (ii), we design a new objective function for which the influence dynamic is equivalent to stochastic mirror descent on the objective. Again, this allows us to show the dynamic converges to an equilibrium almost surely.
The robust connection between the dynamics and optimization processes echoes \emph{the self-reinforced efficiency} of some economic systems,
for which there exist natural dynamics or algorithms that can attain equilibrium, while the equilibrium optimizes popular efficiency measure like social welfare. See Related Work below for more relevant discussions.
\lx{We perform simulations using user preferences from the well-known MovieLens-100K dataset. We observe that accounting for heterogeneous user preferences improves efficiency in cultural markets while preserving stability. We examine the (user-centric) efficiency and (item- or producer-centric) diversity measures across three ranking strategies: random, quality-driven, and popularity-driven. Results confirm that quality ranking is more efficient and more diverse than popularity ranking, which was implemented in \textsc{MusicLab}~\cite{salganik2006ExperimentalSO} and is known to be unstable}.
\subsection*{Related Work}
As early as 1971, Simon~\cite{simon1971designing} pointed out that in an information-rich world, attention becomes the scarce resource that information consumes.
Examples of attention economies include entertainment such as music, film and television~\citep{salganik2006ExperimentalSO,bell2007lessons}, political campaigns and votes~\cite{BondFJKMSF2012}, scientific publications and researchers~\cite{Fortunato2018TheSO}.
Since Simon's visionary statement, the research community has formulated economic questions about attention in a number of different ways, such as articulating the phenomenon of attention scarcity in corporate life~\cite{davenport2001attention}, diagnostic criteria for attention scarcity and solving it as (one-off) allocation problems~\cite{falkinger2007attention,falkinger2008limited}, or connecting attention allocation to advertising revenue~\cite{evans2020economics}.
A recent study by Vosoughi et al.~\cite{VosoughiFalseNews2018} showed that false news spreads faster online, suggesting that besides quality,
appeal (e.g., novelty of the false news and the emotion it stimulates) of a digital good is crucial in social influence.
More broadly, the web research community have measured attention to items by individual users~\cite{tong2020brain}, a large set of users~\cite{wu2018beyond}, and attention among a network of items~\cite{Wu2019EstimatingAF}.
The concept of self-reinforced efficiency of economic systems can be traced back to the ``invisible hand'' metaphor of Adam Smith.
One of the first analytical confirmations of the concept is the famous First Welfare Theorem,
which states that a market equilibrium of any complete market is Pareto efficient~\cite{Arrow1951,Debreu1959}.
Furthermore, in a broad class of markets called Eisenberg-Gale Fisher markets,
market equilibrium optimizes a popular efficiency measure called Nash social welfare \cite{EG59,Eisenberg61,JainVazirani-EG2010}.
On the other hand, in combinatorial auctions, any Walrasian equilibrium (if one exists) optimizes the social welfare \cite{BM1997}.
In many of these economic systems, there are natural adaptive price/bidding dynamics (e.g., t\^atonnement~\cite{walras,ABH59,CMV05,CF08,CCR12,CheungCD2020,CheungHN19} and proportional response~\cite{WZ2007,LLSB08,Zhang2011PR,birnbaum2011distributed,cheung2018dynamics,BMN2018,GaoKroer2020,BranzeiDR2021,CheungLP2021,CheungLSP2022,CheungHN19}) or
auction algorithms (e.g., ascending price auction~\cite{KelsoCrawford1982,NS2006}) that can attain the efficient equilibria. As we shall see, the influence dynamics we study are indeed a stochastic version of proportional response.
\subsection*{Paper Organization}
After describing the models in \Cref{sec:model}, we present our main results formally in \Cref{sec:results}.
In \Cref{sect:analysis}, we provide an overview of the techniques we employ for proving the main results.
This is followed by a discussion of empirical observations in \Cref{sect:experiments}.
\section{Model: The Trial-Offer Market with Heterogeneous User Types}
\label{sec:model}
First, we describe the stochastic trial-offer (T-O) market, in which users come to the platform one by one to try and purchase items.
We introduce measures of efficiency and diversity.
Then we describe a continuous and deterministic analogue of the stochastic model which will be useful for analysis.
In this work, we still use {\em purchase} to denote a user completing a transaction on an item, where the resource a user {\em spends} is attention. One can think of it as a unit amount of time. Without loss of generality, we assume that each user has the same {\em budget} of attention, and that each item {\em costs} one unit of attention.
This model generalises cultural markets specified
by Krumme et al.~\cite{krumme2012QuantifyingSI} and Maldonado et al.~\cite{maldonado2018popularity} to
heterogeneous types of users.
\paragraph{Stochastic Trial-offer (T-O) Market.}
Let $\mathcal{U}$ denote the types of users and $\mathcal{I}$ denote the set of items.
The fraction of Type-$i$ users is denoted by $w_i$; note that $\sum_{i=1}^{|\mathcal{U}|} w_i = 1$.
If $|\mathcal{U}|=1$, we say the market is \emph{homogeneous}, otherwise it is \emph{heterogeneous}.
The dynamic starts at time $t=0$. At each time $t\ge 1$, a random user comes to the platform and tries an item,
and then she decides to purchase the item or not.
Let $d_j^t$ denote the number of purchases of item $j$ up to time $t$.
To ensure that all items have a positive probability to be tried in the initial rounds,
we assume that each item was purchased at least once before the dynamic starts, i.e., $d_j^0 \ge 1$ for every item $j$.
The \emph{market share} of item $j$ at time $t$ is simply the fraction of all purchases that go to item $j$:
\[
\phi^t_j ~:=~ \frac{d^t_j}{\sum_{k=1}^{|\mathcal{I}|} d^t_k}.
\]
The possible market shares lie on a simplex, denoted by $\Delta$:
\begin{equation}
\Delta = \left\{\boldsymbol{\phi}\in \mathbb{R}^{|\mathcal{I}|} : \sum_{j=1}^{|\mathcal{I}|} \phi_j = 1, \phi_j\geq 0 \text{ for any item $j$}\right\}. \label{eq: Simplex}
\end{equation}
If the user at round $t$ is of Type-$i$, the probability that she tries item $j$ is modelled as a multinomial logit, a common type of discrete choice model~\cite{greene2017econometric}:
\begin{equation}
\tilde y_{ij}^t = \frac{v_{ij} (\phi^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i}}~,\label{eq:mnlchoice}
\end{equation}
Here $v_{ij} \ge 0$ is a parameter that captures the \emph{visibility} of item $j$ to Type-$i$ users, which depends on the \emph{appeal} of the item itself, and also on how the item is promoted or ranked with respect to other items.
$r_i > 0$ is a parameter that depicts the strength of feedback signal for Type-$i$ users.\footnote{Here $r_i=0$ denotes no {\em social} feedback signal from the current market share, whereas $r_i \rightarrow \infty$ means only the most popular item will be chosen in the next round. If the denominator $\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i} = 0$, then this probability is defined as $0$.}
After a Type-$i$ user tries an item $j$, she purchases the item with probability $q_{ij} \in [0,1]$, which intuitively reflects the {\it quality} of the item, possibly unknown before it is tried.
In a homogeneous market there is only one user type, so we drop all indices $i$ from the notation,
resulting in $v_j, q_j, r$,
and clearly $w_1=1$.
This distributed dynamic is the result of two interacting factors. The first is the user-specific visibility and quality factors $v_{ij}$ and $q_{ij}$, which generalise recent models of homogeneous attention markets with feedback loops \cite{krumme2012QuantifyingSI,jiang2019degenerate}.
The second is a social feedback signal $(\phi_j^{t-1})^{r_i}$ based on the overall popularity of the item, such as the one implemented by the original \textsc{MusicLab} experiment~\cite{salganik2006ExperimentalSO}, or the number of downloads and likes on myriads of internet platforms. This feedback dynamic is also similar to proportional response in Fisher markets~\cite{cheung2018dynamics}, which we will exploit for obtaining key results in \Cref{sec:results}.
A ranked list is one of the most popular forms of presenting a set of items to users, and a salient factor affecting the visibility of an item is its position in such a list~\cite{craswell2008experimental}.
If the positions are fixed throughout the attention dynamic, $v_{ij}$ remains constant.
Our theoretical results focus on this case.
In \Cref{sect:experiments}, we empirically explore how strategies of dynamically positioning the items by the platform affect the outcome.
Combining the trial and purchase probabilities, we compute the probability of item $j$ being the next purchase.
\begin{lem}
In a stochastic T-O market defined above,
the probability that the next purchase is for item $j$, denoted by $p_j(\boldsymbol{\phi})$,
is a function of the current market share $\boldsymbol{\phi}$,
given by $p_j(\boldsymbol{\phi}) = y_j(\boldsymbol{\phi}) / (\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}))$, where
\begin{equation}
y_j(\boldsymbol{\phi}) ~:=~ \sum_{i=1}^{|\mathcal{U}|} w_i q_{ij} \cdot \frac{v_{ij} (\phi_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi_k)^{r_i}}.\label{choice TO hetero}
\end{equation}
$y_j(\boldsymbol{\phi})$ represents the probability that item $j$ is tried and then purchased by any user group.
In the homogeneous case, the probability that the next purchase is for item $j$ is
\begin{equation}
\frac{v_j q_j (\phi_j)^{r}}{\sum_{k=1}^{|\mathcal{I}|} v_k q_k (\phi_k)^{r}}.\label{choice TO}
\end{equation}
\label{lem:market-eff}
\end{lem}
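To make the lemma concrete, the following minimal Python sketch computes $y_j(\boldsymbol{\phi})$ and $p_j(\boldsymbol{\phi})$ directly from \eqref{choice TO hetero}; all function and variable names are our own illustration, not part of the model specification.
\begin{verbatim}
import numpy as np

def purchase_probs(phi, w, q, v, r):
    # phi: (n_items,) market shares; w: (n_types,) type fractions
    # q, v: (n_types, n_items) purchase probs q_ij, visibilities v_ij
    # r: (n_types,) feedback strengths r_i
    logits = v * phi[None, :] ** r[:, None]             # v_ij * phi_j^{r_i}
    trial = logits / logits.sum(axis=1, keepdims=True)  # multinomial logit
    y = (w[:, None] * q * trial).sum(axis=0)            # y_j(phi)
    return y / y.sum()                                  # p_j(phi)
\end{verbatim}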
For $\boldsymbol{\phi}$ to be a stationary point in this stochastic process, it must satisfy $p_j(\boldsymbol{\phi}) = \phi_j$
for all items $j$. This motivates the following equilibrium notion.
\begin{defn}\label{defn:TOME}
For any T-O market, we say a market share $\boldsymbol{\phi}$ is a trial-offer market equilibrium (TOME) if $\mathbf{p}(\boldsymbol{\phi}) = \boldsymbol{\phi}$.
\end{defn}
We show that a TOME exists under mild conditions (\cite{supp3065}\ref{supp:TOME_exist}) using Brouwer's fixed-point theorem.
\begin{thm}\label{thm:exist-TOME}
If (i) $0 < r_i < 1$ and $w_i > 0$ for any user type $i$, and
(ii) for each item $j$ there exists type $i$ such that $v_{ij} q_{ij} > 0$,
then the market must have a TOME $\boldsymbol{\phi}^*$, in which $\phi_j^* > 0$ for all items $j$.
\end{thm}
In the homogeneous case, we can explicitly compute the unique TOME $\boldsymbol{\phi}^*$ if $0<r<1$:
\begin{equation}
\phi^*_j ~=~ \frac{(v_j q_j)^{1/(1-r)}}{\sum_{k=1}^{|\mathcal{I}|} (v_k q_k)^{1/(1-r)}},\label{eq:TOMEhomo}
\end{equation}
provided that $v_k q_k > 0$ for some item $k$.
It is easy to verify that $\boldsymbol{\phi}^*$ is a stationary point by plugging it into \Cref{defn:TOME}. \Cref{sec:results} specifies how to obtain $\boldsymbol{\phi}^*$ and argues for its uniqueness in the interior of the simplex $\Delta$.
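As a quick numerical sanity check of \eqref{eq:TOMEhomo} (a sketch with arbitrary illustrative parameter values, assuming $0<r<1$):
\begin{verbatim}
import numpy as np

v = np.array([1.0, 2.0, 0.5]); q = np.array([0.9, 0.4, 0.7]); r = 0.5
s = (v * q) ** (1 / (1 - r))
phi_star = s / s.sum()           # closed-form homogeneous TOME
p = v * q * phi_star ** r
p /= p.sum()                     # homogeneous purchase map at phi_star
assert np.allclose(p, phi_star)  # phi_star is a fixed point
\end{verbatim}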
\paragraph{Efficiency and Diversity Measures.}
An online platform may be interested in maximizing $\sum_{j=1}^{|\mathcal{I}|} y_j(\boldsymbol{\phi})$, the probability that the next trial results in a purchase of some item.
\begin{defn}\label{defn:efficiency}
Given a market share $\boldsymbol{\phi}$, define the T-O market efficiency as $E(\boldsymbol{\phi}) := \sum_{j=1}^{|\mathcal{I}|} y_j(\boldsymbol{\phi})$.
\end{defn}
A platform may also be interested in promoting diversity among items.
A natural measure of diversity is
the \emph{Shannon entropy}, the standard measure of uncertainty of a probability distribution in information theory \cite{Cover2006}.
Given market share $\boldsymbol{\phi}$, its Shannon entropy is
\begin{equation}
H(\boldsymbol{\phi}) = - \sum_{j=1}^{|\mathcal{I}|} \phi_j \log \phi_j.
\label{eq:entropy}
\end{equation}
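Both measures are straightforward to evaluate; a small sketch with our own names, paralleling the trial computation above:
\begin{verbatim}
import numpy as np

def efficiency(phi, w, q, v, r):
    logits = v * phi[None, :] ** r[:, None]
    trial = logits / logits.sum(axis=1, keepdims=True)
    return float((w[:, None] * q * trial).sum())  # E(phi) = sum_j y_j(phi)

def shannon_entropy(phi):
    p = phi[phi > 0]                              # 0 log 0 = 0 convention
    return float(-(p * np.log(p)).sum())
\end{verbatim}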
\paragraph{The Deterministic T-O Market Dynamic.}
This model is analogous to the stochastic T-O model.
There is one user of each type $i$, whose budget is $w_i$.
The budget $w_i$ corresponds to the maximum amount of attention
the buyer can afford in the platform. At each time $t\ge 0$,
each buyer $i$ spends an amount of $b_{ij}^t$ for item $j$.
Subject to budget constraints, $\sum_{j=1}^{|\mathcal{I}|} b_{ij}^t \le w_i$.
Let the total spending on each item be $b_j^t := \sum_{i=1}^{|\mathcal{U}|} b_{ij}^t$, then the market share of item $j$ at time $t$ is $\phi_j^t := b_j^t / (\sum_{k=1}^{|\mathcal{I}|} b_k^t)$.
For $t\ge 1$, the update rule is
\begin{equation}
b_{ij}^t = w_i q_{ij} \cdot \frac{v_{ij} (\phi^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i}}
~=~ w_i q_{ij} \cdot \frac{v_{ij} (b^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (b^{t-1}_k)^{r_i}}. \label{cont dynamic} \end{equation}
Note that the middle term in \eqref{cont dynamic} is the same as the term appearing in \eqref{choice TO hetero},
so $b_j^t = \sum_{i=1}^{|\mathcal{U}|} b_{ij}^t$ equals $y_j(\boldsymbol{\phi}^{t-1})$ in \eqref{choice TO hetero}.
With this, it is not hard to see that $\boldsymbol{\phi}^*$ is a TOME of a stochastic T-O market if and only if
$\mathbf{b}^*$, where each $b^*_{ij}$ is computed by the middle term of \eqref{cont dynamic} by replacing $\boldsymbol{\phi}^{t-1}$ with $\boldsymbol{\phi}^*$, is a fixed point of the dynamic \eqref{cont dynamic} of the analogous continuous and deterministic market.
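For intuition, one step of the deterministic dynamic \eqref{cont dynamic} can be sketched as follows (an illustrative implementation under our notation, not an artefact of the model):
\begin{verbatim}
import numpy as np

def deterministic_step(b, w, q, v, r):
    # b: (n_types, n_items) spendings b_ij; returns updated spendings
    phi = b.sum(axis=0) / b.sum()                     # market shares
    logits = v * phi[None, :] ** r[:, None]
    trial = logits / logits.sum(axis=1, keepdims=True)
    return w[:, None] * q * trial                     # b_ij = w_i q_ij * trial
\end{verbatim}
Iterating \texttt{deterministic\_step} from an interior starting point is expected to converge to the fixed point $\mathbf{b}^*$ by the results of \Cref{sec:results}.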
\paragraph{Comparison with Classical Fisher Market and Proportional Response.}
The continuous and deterministic market and the dynamic \eqref{cont dynamic} are reminiscent of
the classical Fisher market and the proportional response (PR) dynamic in \cite{cheung2018dynamics}.
However, there is one crucial difference.
The PR dynamic in a Fisher market is the same as \eqref{cont dynamic}
but with $b_k^{t-1}$ on the RHS replaced by $b_{ik}^{t-1}/b_k^{t-1}$.
The term $b_j^{t-1}$ is viewed as the \emph{price} of item $j$,
and a higher price in PR drives down spending on that item from the buyers.
In contrast, a higher value of $b_j^{t-1}$ in \eqref{cont dynamic}, which corresponds to receiving more attention in the T-O market, leads to more spending on that item.
This reflects the tendency of cascading in online attention dynamics.
\section{Results}
\label{sec:results}
This section provides an overview of our key new results in four parts. \Cref{ssec:maxutil} establishes a novel connection between TOME in homogeneous markets and two convex objectives. \Cref{ssec:mirror_descent} shows that update steps in deterministic T-O markets are mirror descent steps for these objectives.
\Cref{ssec:heteroRMA} presents the objective functions for heterogeneous markets and two of their special cases.
\subsection{TOME maximises regularised utilities}
\label{ssec:maxutil}
First, we establish a robust connection between TOME of {\it homogeneous} T-O market and optimization. For notational simplicity, let $\bar{q}_j = q_jv_j$, noting that the quality and visibility factors $q_jv_j$ are coupled in both \eqref{choice TO} and \eqref{eq:TOMEhomo}.
We consider the following constrained optimization problem:
\begin{equation}
\begin{aligned}
&\max \quad \sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r, \\
&\text{subject to } \boldsymbol{\phi}\in\Delta. \label{TotalUtilityProblem}
\end{aligned}
\end{equation}
We also discover another optimization problem which captures the same TOME in the homogeneous market.
\begin{equation}
\begin{aligned}
&\max \quad \Psi(\boldsymbol{\phi}) = \sum_{j=1}^{|\mathcal{I}|} \left(\phi_j \log \bar{q}_j - (1-r) \phi_j \log \phi_j\right), \\
&\text{subject to } \boldsymbol{\phi}\in\Delta. \label{Efficiency Entropy Problem}
\end{aligned}
\end{equation}
We establish the equivalence between the equilibria and the maximisers of the above problems in the following theorem. The proofs of both claims simply invoke Lagrange multipliers, and can be found in \cite{supp3065} Section \ref{supp:social-welfare}.
\begin{thm}\label{thm:social-welfare}
If $\boldsymbol{\phi}^*$ is a maximiser of problem \eqref{TotalUtilityProblem} or problem \eqref{Efficiency Entropy Problem}, then it is a TOME.
\end{thm}
We can view the objective function \eqref{TotalUtilityProblem} as the ``total utility'' since the choice probability of item $j$ is proportional to the ``utility'' associated with it, which is $\bar{q}_j\phi_j^r$.
The objective function in \eqref{Efficiency Entropy Problem} can be decomposed into two parts, namely
$\sum_{j=1}^{|\mathcal{I}|} \phi_j \cdot \log \bar{q}_j$ and $(1-r) \sum_{j=1}^{|\mathcal{I}|} -\phi_j \log \phi_j$.
The first summation can be viewed as an alternative measure of
total utility, with the utility of item $j$ being $\log \bar q_j$ weighted by its market share $\phi_j$.
The second summation is $(1-r)H(\boldsymbol{\phi})$ -- the entropy of market share.
When $r=1$, the entropy term disappears, so the optimization problem \eqref{Efficiency Entropy Problem}
becomes trivial: the optimal solution sets $\phi_j = 1$ for the highest-utility item $j = \arg\max_k \bar{q}_k$.
As $r$ decreases from $1$, i.e., as the strength of the feedback signal weakens, the entropy term becomes more significant,
which encourages diversity in the optimal solution.
For $0<r<1$, \eqref{TotalUtilityProblem} and \eqref{Efficiency Entropy Problem} are both strictly concave in $\boldsymbol{\phi}$, and therefore each has a unique maximiser.
A crucial advantage of \eqref{Efficiency Entropy Problem} over \eqref{TotalUtilityProblem} is that
mirror descent on \eqref{Efficiency Entropy Problem} provides insight into the convergence
of the stochastic influence dynamics in T-O market as specified in \eqref{eq:mnlchoice} and \eqref{choice TO}.
To formally describe this discovery,
we need several concepts in optimisation theory, which are discussed next.
\subsection{T-O update as mirror descent, and TOME convergence for homogeneous markets}
\label{ssec:mirror_descent}
\paragraph{Background: Bregman Divergence and Mirror Descent.}
Consider a general constrained convex optimization problem of minimizing a smooth convex function $f(x)$, subject to the constraint $x\in C$ for some convex set $C$.
\begin{defn}\label{defn::Bregmann divergence}
Let $C$ be a compact and convex set. Let $h$ be a differentiable convex function on $C$.
The \emph{Bregman divergence} w.r.t.~$h$, denoted by $d_h$, is defined as
\[
d_h(x,y) = h(x) - h(y) - \langle \nabla h(y)~,~ x-y \rangle,
\]
for any $x\in C$ and $y\in \textsf{rint}(C)$.
\end{defn}
The widely used Kullback–Leibler (KL) divergence is a special case of Bregman divergence,
generated by the function $h(x) = \sum_j (x_j \log x_j - x_j)$.
Given a Bregman divergence $d_h$, the corresponding mirror descent update rule is
\begin{equation}\label{general MD rule}
x^{t} ~=~ \argmin_{x\in C} \left\{ \langle \nabla f(x^{t-1})~,~x-x^{t-1} \rangle + \frac{1}{\alpha} \cdot d_h(x,x^{t-1}) \right\},
\end{equation}
where $\alpha$ is the step size of the update rule, which may in general depend on $t$.
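For the KL divergence on the simplex, \eqref{general MD rule} admits a closed form, the familiar multiplicative-weights update; a minimal sketch (names are ours):
\begin{verbatim}
import numpy as np

def kl_mirror_descent_step(x, grad, alpha=1.0):
    # argmin over the simplex of <grad, x'> + (1/alpha) * KL(x' || x)
    y = x * np.exp(-alpha * grad)
    return y / y.sum()
\end{verbatim}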
\paragraph{New Result: T-O update as Mirror Descent.} A key conceptual message of this paper is the equivalence of the influence dynamic and mirror descent. To illuminate this, we first focus on the continuous, deterministic and homogeneous T-O market.
\begin{lem}
Let $\boldsymbol{\phi}^t$ be the market share over time in a homogeneous T-O market, and let the function $\mathbf{p}(\boldsymbol{\phi})$ be as defined in \eqref{choice TO}. The update rule
\begin{equation}\label{deterministic update rule homogeneous}
\boldsymbol{\phi}^{t} = \mathbf{p}(\boldsymbol{\phi}^{t-1})
\end{equation}
is the same as the mirror descent update rule \eqref{general MD rule}
for the optimisation problem \eqref{Efficiency Entropy Problem}, in which $d_h$ is taken as the KL divergence,
and $\alpha = 1$.
\end{lem}
Once the equivalence is established, the convergence to TOME in the continuous and deterministic dynamic becomes intuitive; we provide the formal argument in \Cref{sect:analysis}. To show that the convergence extends to the stochastic setting,
we use the methodology of Maldonado et al.~\cite{maldonado2018popularity} to rewrite the stochastic influence dynamic as a Robbins-Monro algorithm (RMA) of the corresponding continuous and deterministic dynamic.
Then we apply results from stochastic approximation~\cite{benaim1999dynamics} to formally establish the convergence of the stochastic dynamic. We summarize our main result for homogeneous markets in the theorem below, and leave the discussion of RMA and stochastic approximation to \Cref{sect:analysis}.
\begin{thm}[\cite{maldonado2018popularity}, Theorem 5.3]\label{thm-homo-convergence}
With the homogeneous setup, if $\boldsymbol{\phi}^0 >0$ and $0\leq r < 1$, then with probability 1,
\begin{equation}
\lim_{t\rightarrow \infty} \boldsymbol{\phi}^t = \boldsymbol{\phi}^*,
\end{equation}
where $\boldsymbol{\phi}^*$ is the unique
maximiser of the convex optimisation problem \eqref{Efficiency Entropy Problem}. And for any $r\in[0,1]$,\begin{equation}
\lim_{t\rightarrow \infty} \Psi(\boldsymbol{\phi}^t) = \Psi^*
\end{equation}
with probability 1, where $\Psi$ is defined in \eqref{Efficiency Entropy Problem}, and $\Psi^*$ is the global maximum of problem \eqref{Efficiency Entropy Problem}.
\end{thm}
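A Monte-Carlo illustration of \Cref{thm-homo-convergence} (a sketch with arbitrary illustrative parameters): simulate the stochastic homogeneous T-O market and compare the empirical market share with the closed-form TOME \eqref{eq:TOMEhomo}.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

v = np.array([1.0, 2.0, 0.5]); q = np.array([0.9, 0.4, 0.7]); r = 0.5
d = np.ones(3)                    # d_j^0 >= 1: each item pre-purchased once
for _ in range(200_000):
    phi = d / d.sum()
    trial = v * phi ** r
    j = rng.choice(3, p=trial / trial.sum())   # user tries an item
    if rng.random() < q[j]:                    # ...and purchases w.p. q_j
        d[j] += 1

tome = (v * q) ** (1 / (1 - r)); tome /= tome.sum()
print(np.round(d / d.sum(), 3), np.round(tome, 3))  # should be close
\end{verbatim}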
\subsection{TOME for heterogeneous markets}
\label{ssec:heteroRMA}
For the heterogeneous case, we first show the following proposition, which states that a TOME optimises an ex-post version of a convex objective.
\begin{prop}\label{prop:Nash-SW}
Given a heterogeneous T-O market with a TOME $\boldsymbol{\phi}^*$,
the TOME $\boldsymbol{\phi}^*$ is the optimal solution of the following problem:
\begin{equation}
\begin{aligned}
\max_{\{\phi_j\}_{j=1}^{|\mathcal{I}|}} \prod_{i=1}^{|\mathcal{U}|}(\sum_{j=1}^{|\mathcal{I}|} q_{ij}v_{ij}\phi_j^{r})^{w_ia_i^*},\\
\text{subject to: }\ \sum_{j=1}^{|\mathcal{I}|} \phi_j = 1
\end{aligned}\label{eq:objhetero}
\end{equation}
where $a_i^* = \frac{\sum_{j=1}^{|\mathcal{I}|} q_{ij}v_{ij}\phi_j^*} {\sum_{k=1}^{|\mathcal{I}|}v_{ik}\phi_k^*} $ for any $i\in\mathcal{U}$.
\end{prop}
A proof of \Cref{prop:Nash-SW} is provided in the online supplement \cite{supp3065}\ref{Objective function for heterogeneous case}.
Problem \eqref{eq:objhetero} takes the form of a product of utilities (or a sum of log-utilities). Known as Nash social welfare~\cite{kaneko1979nash}, this objective has been found to strike a good balance between fairness and efficiency
in the resulting allocations~\cite{bertsimas2011price,CaragiannisKMPS19}. A proper exposition of this connection is outside the scope of this paper. Once the $a_i^*$ are known (hence \emph{ex-post}), Problem \eqref{eq:objhetero} is convex in $\boldsymbol{\phi}$.
However, if $a_i^*$ is unknown, the optimisation problem is non-convex.
We leave the properties of its optimal solution (e.g., is the optimal solution still a TOME?) as an open problem.
Then we turn our focus to two interesting special cases of the heterogeneous market, where $r_i = r$ for all types $i$, and:
\begin{itemize}
\item the trial randomness is the same across all user types (i.e., $v_{ij}$ are the same for all types $i$),
but the purchase randomness can be different (i.e., $q_{ij}$ can be different for various types $i$);
\item the trial randomness can be different across all types (i.e., $v_{ij}$ can be different for various types $i$), but the purchase randomness is the same across all types (i.e., $q_{ij}$ are the same for all types $i$).
\end{itemize}
For the first case, we show that it can be reduced to the homogeneous case.
For the second case, we design a new optimization problem and show that \eqref{cont dynamic} is indeed mirror descent for the problem.
The driving variables of the problem
are the spendings $b_{ij}$ rather than the market shares $\phi_j$; recall that $b_{ij}$ are the driving variables in the dynamic \eqref{cont dynamic}.
We let $b_j = \sum_{i=1}^{|\mathcal{U}|} b_{ij}$, and set $q_j$ to be the common value of $q_{ij}$ for all $i$.
The optimisation problem is
\begin{equation}
\begin{aligned}
&\max \quad \sum_{j=1}^{|\mathcal{I}|} -\frac{r}{q_j} (b_j \log b_j - b_j) \\
&~~~~~~~~~~~~~ + \sum_{i=1}^{|\mathcal{U}|} \sum_{j=1}^{|\mathcal{I}|}\left(\frac{1}{q_j}(b_{ij} \log b_{ij} - b_{ij}) - \frac{b_{ij}}{q_j}\log q_{j}v_{ij}\right)\\
&\text{subject to } \sum_{j=1}^{|\mathcal{I}|} \frac{b_{ij}}{q_{j}} ~=~ w_i,~\forall i\in \mathcal{U}. \label{hetero obj function}
\end{aligned}
\end{equation}
By performing a variable transformation $x_{ij} = b_{ij} / q_j$ to the above problem,
we obtain an equivalent transformed optimization problem where $x_{ij}$ are the driving variables,
which is needed for the key lemma below. The proof is in \cite{supp3065}\ref{Calculation of objective function}.
\begin{lem}
The update rule \eqref{cont dynamic} is equivalent to mirror descent w.r.t.~KL divergence on the transformed optimization problem. \label{Continuous_dynamics_as_mirror_descent}
\end{lem}
With the above lemma in hand, we again use RMA and Bena{\"\i}m's results to establish convergence
of the stochastic influence dynamics in heterogeneous markets. The proof can be found in supplement \cite{supp3065}\ref{Proof of theorem 13}.
\begin{thm}\label{thm:convergence-hetero}
With the heterogeneous setup where $r_i =r < 1$ for all $i\in \mathcal{U}$, suppose that $\boldsymbol{\phi}^0 >0$ and that one of the following conditions is satisfied:
\begin{itemize}
\item[1.] $v_{ij} = v_j$ for all $i\in \mathcal{U}, j \in \mathcal{I}$;
\item[2.] $q_{ij} = q_j$ for all $i\in \mathcal{U}, j \in \mathcal{I}$.
\end{itemize}
Then with probability 1,
\begin{equation}
\lim_{t\rightarrow \infty} \boldsymbol{\phi}^t = \boldsymbol{\phi}^*,
\end{equation}
where $\boldsymbol{\phi}^*$ is the unique maximiser of the convex program \eqref{hetero obj function}. And when $r\in [0,1]$,
\begin{equation}
\lim_{t\rightarrow \infty}\Gamma(\boldsymbol{\phi}^t) = \Gamma^*,
\end{equation}
with probability 1, where $\Gamma$ is the objective function of problem \eqref{hetero obj function}, and $\Gamma^*$ is the global maximum of problem \eqref{hetero obj function}.
\end{thm}
By \Cref{defn:TOME}, multiple TOMEs indeed exist in the simplex $\Delta$. However, there is only one TOME that lies in the relative interior of $\Delta$. Whether the limit point of the dynamic is unique depends on the choice of initial point and the social influence parameter $r$:
\begin{itemize}[leftmargin=*]
\item When $r < 1$ and the initial market share $\boldsymbol{\phi}^0$ is in the relative interior of $\Delta$,
then the limit point of the dynamic is unique, which is the interior TOME, for both homogeneous and heterogeneous cases according to Theorems \ref{thm-homo-convergence} and \ref{thm:convergence-hetero}.
\item When $\boldsymbol{\phi}^0$ contains zero initial market shares for some items, it is equivalent to consider a market with those items eliminated: items with zero initial market share never gain a positive share in subsequent iterations. The limit point of the dynamic is still unique in these cases.
\item When $r=1$, the objective functions of the optimisation problems \eqref{Efficiency Entropy Problem} and \eqref{eq:objhetero} are not strictly concave. There is a level set containing multiple equilibria, and the dynamic converges to the level set that optimises these objective functions.
\end{itemize}
\section{Analysis}
\label{sect:analysis}
We follow the same approach in formally proving two of our main results,
Theorem \ref{thm-homo-convergence} and Theorem \ref{thm:convergence-hetero}.
The approach comprises two main steps.
Firstly, we show that the evolution of market share $\boldsymbol{\phi}$ can be cast as a stochastic RMA dynamic.
Secondly, we show the convergence of such RMA dynamics by utilising their equivalence to mirror descent on convex functions, and by bounding the gap from the optimum.
\subsection{Influence Dynamic as RMA}
\begin{defn}(\cite{benaim1999dynamics,robbins1951stochastic})
A Robbins-Monro algorithm (RMA) is a discrete-time stochastic process $z^t$ whose general structure is specified by
\[
\mathbf{z}^{t} - \mathbf{z}^{t-1} ~=~ \gamma^{t} \cdot \left( F(\mathbf{z}^{t-1}) + U^t \right),
\]
where $\mathbf{z}^t\in \mathbb{R}^n$ for some $n\ge 1$, $F:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is a deterministic continuous vector field, $\gamma^t$ is deterministic and satisfies $\gamma^t > 0$, $\sum_{t\ge 1} \gamma^t = +\infty$ and $\lim_{t\rightarrow \infty} \gamma^t = 0$, and $\mathbb{E}[U^t | \mathcal{F}^{t-1}] = 0$ where
$\mathcal{F}^{t-1}$ is the natural filtration on the entire process.
The corresponding ordinary differential equation (ODE) system of the RMA is $\dot{\mathbf{z}} = F(\mathbf{z})$.
\end{defn}
Note that the market share $\boldsymbol{\phi}$ changes only when there is a purchase. Thus, Maldonado et al.~\cite{maldonado2018popularity} modify the time schedule to count only those times at which a purchase occurs,
and show the following lemma.
\begin{lem}
In the stochastic T-O market, the update of market share follows the following RMA w.r.t.~the modified time schedule:
\[
\boldsymbol{\phi}^t - \boldsymbol{\phi}^{t-1} ~=~ \frac{1}{t} \cdot \left( (\mathbf{p}(\boldsymbol{\phi}^{t-1}) - \boldsymbol{\phi}^{t-1}) + U^t \right),
\]
where $U^t$ is the random variable defined as below. Let $\mathbf{e}^t$ denote the random unit vector whose
$j$-th entry is $1$ if item $j$ is purchased at time $t$. Then $U^t = \mathbf{e}^t - \mathbb{E}[\mathbf{e}^t | \mathcal{F}^{t-1}]$. (Recall that $e^t_j = 1$ with probability $p_j(\boldsymbol{\phi}^{t-1})$.)
\end{lem}
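In code, one RMA step of the market-share process from the lemma above reads as follows (a sketch; \texttt{p\_func} stands for the purchase map $\mathbf{p}$ of the market at hand, and all names are ours):
\begin{verbatim}
import numpy as np

def rma_step(phi, t, p_func, rng):
    p = p_func(phi)
    e = np.zeros_like(phi)
    e[rng.choice(len(phi), p=p)] = 1.0   # purchased item this round
    U = e - p                            # zero-mean noise U^t
    return phi + (1.0 / t) * ((p - phi) + U)
\end{verbatim}
Note that the update simplifies to $\boldsymbol{\phi}^t = ((t-1)\boldsymbol{\phi}^{t-1} + \mathbf{e}^t)/t$, i.e., exactly the running average of purchase counts.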
The proof of the lemma can be found in \cite{maldonado2018popularity} (for the homogeneous setting) and \cite{supp3065}\ref{sec:supp-RMA} (for the heterogeneous setting). With the above lemma, we can apply the seminal results of Bena{\"\i}m~\cite{benaim1999dynamics}
to show that the RMA trajectory is an asymptotic pseudotrajectory of the mirror descent update \eqref{deterministic update rule homogeneous}. By using the
convergence theorem established in \cite{CT93,cheung2018dynamics}, we show that both dynamics converge to the global minimisers of \eqref{Efficiency Entropy Problem}. This allows us to present a new proof of \Cref{thm-homo-convergence} in~\cite{maldonado2018popularity}, as shown below.
\subsection{Convergence of Mirror Descent}
Let $d_h(\cdot,\cdot)$ denote the Bregman divergence in \Cref{defn::Bregmann divergence}. We assume that the function $h$ is strictly convex. Consequently, $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, and $d_h(\mathbf{x},\mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$.
\begin{defn}
A function $f$ is $L$-Bregman-convex with respect to the Bregman divergence $d_h$ if
for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\[
f(\mathbf{x}) ~\leq~ f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\]
\end{defn}
Given an $L$-Bregman-convex function $f$ with respect to the Bregman divergence $d_h$,
the mirror descent rule with respect to the Bregman divergence $d_h$ is given by $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$, where
\begin{equation}
g (\mathbf{x}^t) = \argmin_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle +L \cdot d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MDMinimizer}
\end{equation}
The update in \eqref{MDMinimizer} is the same as that of general mirror descent \eqref{general MD rule}, except with the step size defined by $L$. It enables us to use the following theorem to bound the difference to the optimum.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman-convex function with respect to $d_h$, and $\mathbf{x}^t$ is the point reached after $t$ applications of the mirror descent update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$. Then for all $t\ge 1$,
\[
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{t}.
\]
\end{thm}
Thankfully,
the objective functions of the mirror descent updates for both the homogeneous and heterogeneous cases are
Bregman convex; the proof can be found in \cite{supp3065} Section \ref{Calculation of objective function}.
Lastly, we apply the theorem below to complete the proof. Note that what we have just shown is the convergence of the \emph{discrete-time} mirror descent updates $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$,
but the theorem requires conditions that guarantee convergence of the ODE system $\dot{\mathbf{x}} = g(\mathbf{x}) - \mathbf{x}$. We need to convert the discrete-time convergence to its ODE analogue, which is simple; the conversion can be found in \cite{supp3065} Section \ref{sec:supp-shorter-proof}.
\begin{thm}
Consider an ODE $\dot{\mathbf{x}} = h(\mathbf{x})$.
Suppose there is a continuously differentiable function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ such that
(i) $\lim_{\|\mathbf{x}\|\rightarrow \infty} f(\mathbf{x}) = +\infty$;
(ii) the set of minimum points of $f$, $X^*$, is non-empty; and
(iii) $\innerProd{f'(\mathbf{x})}{h(\mathbf{x})} \le 0$ for all $\mathbf{x}$, with equality holds if and only if $\mathbf{x}\in X^*$.
Then almost surely, the Robbins-Monro algorithm of the ODE converges to a non-empty subset of $X^*$.
\end{thm}
\section{Existence of TOME}
\label{supp:TOME_exist}
In this section, we prove Theorem \ref{thm:exist-TOME}.
By the definition of $y_j(\boldsymbol{\phi})$ in \eqref{choice TO hetero},
the following inequality holds for any $\boldsymbol{\phi} \in \Delta$ and $j\in \mathcal{I}$:
\[
\begin{aligned}
y_j(\boldsymbol{\phi}) &~=~ \sum_{i=1}^{|\mathcal{U}|} w_i q_{ij} \cdot \frac{v_{ij} (\phi_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi_k)^{r_i}}\\
&~\ge~ \underbrace{\frac{w_{i^*(j)} v_{i^*(j),j} q_{i^*(j),j}}{\sum_{k=1}^{|\mathcal{I}|} v_{i^*(j),k}}}_{c_j}\cdot (\phi_j)^{r_{i^*(j)}},
\end{aligned}
\]
for some type $i^*(j)$ with $v_{i^*(j),j} q_{i^*(j),j} > 0$, which exists due to condition (ii).
Note that if $\phi_j \ge (c_j)^{1/(1-r_{i^*(j)})}$, then
\begin{equation}\label{y fixed point}
y_j(\boldsymbol{\phi}) \ge c_j \cdot (c_j)^{r_{i^*(j)}/(1-r_{i^*(j)})} = (c_j)^{1/(1-r_{i^*(j)})}.
\end{equation}
On the other hand, recall that $p_j(\boldsymbol{\phi}) = y_j(\boldsymbol{\phi}) / (\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}))$.
Since $\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}) \le 1$,
\begin{equation}\label{p fixed point}
p_j(\boldsymbol{\phi}) \ge y_j(\boldsymbol{\phi}).
\end{equation}
Let $S$ denote the set $\left\{\boldsymbol{\phi}\in \Delta : \forall j\in \mathcal{I},~1\ge \phi_j\ge (c_j)^{1/(1-r_{i^*(j)})}\right\}$.
Since $0 < c_j \le 1$, $1/(1-r_{i^*(j)}) \ge 1$ and $0\le q_{ij}\le 1$,
\[
\begin{aligned}
\sum_{j=1}^{|\mathcal{I}|}(c_j)^{1/(1-r_{i^*(j)})} &~\le~ \sum_{j=1}^{|\mathcal{I}|} c_j \\
&~=~ \sum_{i=1}^{|\mathcal{U}|} \sum_{j:i^*(j)=i} c_j ~\le~ \sum_{i=1}^{|\mathcal{U}|} w_i ~=~ 1,
\end{aligned}
\]
hence $S$ is non-empty. Clearly, $S$ is also compact and convex.
Due to \eqref{y fixed point} and \eqref{p fixed point}, $\mathbf{p}$ is a continuous function that maps $S$ to a subset of $S$.
By Brouwer's fixed point theorem, $\mathbf{p}$ has a fixed point in $S$, which is a TOME of the market.
\section{Objective Function for Homogeneous Case}
\label{supp:social-welfare}
We present the proof of Theorem \ref{thm:social-welfare} below.
\begin{proof}
For the case $r>1$, since $\phi_j^r \leq \phi_j$ for any $\phi_j \in [0,1]$, we have
\begin{equation}
\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r \leq \sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j \leq \sum_{j=1}^{|\mathcal{I}|} \bar{q}_{j^*}\phi_j = \bar{q}_{j^*},
\end{equation}
where $j^* \in \arg\max_j\{\bar{q}_j: j\in \mathcal{I}\}$. The maximum is attained only if $\phi_{j^*} =1$, which is one of the equilibria of the market.\\
When $r\in (0,1]$, we first transform the maximisation problem into an equivalent minimisation problem:
\begin{equation}
\begin{aligned}
\min \quad -\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r, \\
\text{subject to } \boldsymbol{\phi}\in\Delta.
\end{aligned}
\end{equation} This problem is convex, so we use the KKT conditions. Firstly, the Lagrangian is given by
\begin{equation}
\mathcal{L}_1 = -\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^{r} + \lambda\Big(\sum_{j= 1}^{|\mathcal{I}|} \phi_j - 1\Big) - \boldsymbol{\eta}^T\boldsymbol{\phi}.
\end{equation}
Then, we know that
\begin{equation}
\frac{\partial\mathcal{L}_1 }{\partial \phi_j} = -r\bar{q}_j\phi_j^{r-1} + \lambda - \eta_j =0
\end{equation}
for any $j\in \mathcal{I}$, where $\eta_j$ is the $j$-th component of $\boldsymbol{\eta}$. Moreover, $\phi_j = 0$ satisfies the $j$-th component of the equilibrium equation \eqref{choice TO}. Considering the indices $j\in \mathcal{I}$ with $\phi_j \neq 0$ (which implies $\eta_j = 0$ by complementary slackness), we have
\begin{equation}
\frac{\partial\mathcal{L}_1 }{\partial \phi_j} = -r\bar{q}_j\phi_j^{r-1} + \lambda =0.
\end{equation}
Multiplying both sides of the above equation by $\phi_j$, we have
\begin{equation}
-r\bar{q}_j\phi_j^r + \lambda\phi_j =0. \label{52}
\end{equation}
Hence,
\begin{equation}
\sum_{j=1}^{|\mathcal{I}|} -r\bar{q}_j\phi_j^r + \sum_{j=1}^{|\mathcal{I}|}\lambda\phi_j = -\sum_{j=1}^{|\mathcal{I}|} r\bar{q}_j\phi_j^r + \lambda =0,
\end{equation}
which implies
\begin{equation}
\lambda =\sum_{j=1}^{|\mathcal{I}|} r\bar{q}_j\phi_j^r.
\end{equation}
Therefore, by \eqref{52}, we get
\begin{equation}
\phi_j =\frac{r\bar{q}_j\phi_j^r}{\lambda} = \frac{\bar{q}_j\phi_j^r}{\sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r},
\end{equation}
which is exactly the equilibrium equation \eqref{choice TO}. This concludes the proof.
\end{proof}
It remains to check whether the equilibrium to which the dynamic converges is exactly the maximiser of \eqref{TotalUtilityProblem}. The following corollary verifies that it is indeed true.
\begin{cor}
If $r<1$, the unique interior equilibrium $\boldsymbol{\phi}^* \in \widetilde{\Delta}$ is the unique maximiser of \eqref{TotalUtilityProblem}.
If $r=1$ and $\bar{q}_1 > \bar{q}_2 >\ldots > \bar{q}_{|\mathcal{I}|}$, the unique equilibrium $\boldsymbol{\phi}^* \in \widetilde{\Delta}$ is the unique maximiser of \eqref{TotalUtilityProblem}. \label{Cor1}
\end{cor}
\begin{proof}
If $r<1$, with $Q = \{i\in \mathcal{I} : \phi_i \neq 0\}$, by Theorem 5.1 of \cite{maldonado2018popularity}, the equilibrium is given by
\begin{equation}
\phi^*_j = \frac{\bar{q}_j^{\frac{1}{1-r}}}{\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}}}, \quad j\in Q
\end{equation}
Then, the value of the objective function evaluated at this point is
\begin{equation}
\sum_{j\in Q} \frac{\bar{q}_j^{\frac{1}{1-r}}}{(\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}})^r} = \Big(\sum_{i\in Q}\bar{q}_i^{\frac{1}{1-r}}\Big)^{1-r} = \Vert \bar{q}_Q \Vert_{\frac{1}{1-r}},
\end{equation}
where $\bar{q}_Q$ is the vector concatenating all $\bar{q}_j$ with $j\in Q$, and the last equality holds because $\bar{q}$ has no zero components. Hence, the function value is maximised when $Q = \mathcal{I}$, so the maximum is attained at the unique interior maximiser of \eqref{TotalUtilityProblem}.\\
If $r=1$, it is clear that the maximum of the objective function is $\bar{q}_1$, which is attained by taking $\boldsymbol{\phi} = \mathbf{e}_1$.
\end{proof}
\section{Objective function for heterogeneous case}\label{Objective function for heterogeneous case}
We present the proof of \Cref{prop:Nash-SW} below.
\begin{proof}
Let $f$ be the logarithm of the objective function; its partial derivatives are given by
\begin{equation}
\frac{\partial f}{\partial \phi_j} = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i-1}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}}.
\end{equation}
We note that, if $\phi_j =0$ for some $j\in \mathcal{I}$, then this component clearly satisfies the equilibrium equation. Therefore, we consider the set $Q \subset \mathcal{I}$ which includes all item indices $j$ with $\phi_j^* \neq 0$. Therefore, we have $\sum_{j\in Q} \phi_j= 1$. By the KKT theorem, we have
\begin{equation}
\sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i-1}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda = 0, \label{KKT_multi}
\end{equation}
for all $j\in Q$. Hence,
\begin{equation}
\begin{aligned}
0 & = \sum_{j\in Q}\sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda\sum_{j\in Q} \phi_j\\
& = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^*\frac{\sum_{j\in Q}q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}} + \lambda\\
& = \sum_{i=1}^{|\mathcal{U}|} w_ia_i^* + \lambda.\\
\end{aligned}
\end{equation}
Plugging this into \eqref{KKT_multi}, we get
\begin{equation}
\phi_j = \sum_{i=1}^{|\mathcal{U}|} \frac{w_ia_i^*}{\sum_{h=1}^{|\mathcal{U}|}w_ha_h^*} \cdot \frac{q_{ij}v_{ij}\phi_j^{r_i}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{ik}\phi_k^{r_i}},
\end{equation}
which is exactly the equilibrium equation.
\end{proof}
\section{Special case: $v_{ij} = v_j$ for heterogeneous setup}\label{Special case: $v_{ij} = v_j$ for heterogeneous setup}
\begin{prop}
Suppose $v_{ij} = v_j$ for all $i\in \mathcal{U}$ and $j \in \mathcal{I}$; then the market dynamic converges and the equilibrium can be written as
\begin{equation}
\phi^*_j = \frac{(\bar{q}_jv_j)^{\frac{1}{1-r}}}{\sum_{j=1}^{|\mathcal{I}|}(\bar{q}_jv_j)^{\frac{1}{1-r}}},
\end{equation}
where $\bar{q}_j = \sum_{i=1}^{|\mathcal{U}|}w_iq_{ij}$.
\end{prop}
\begin{proof}
For any $j\in \mathcal{I}$, the purchase probability can be written as
\begin{equation}
\begin{aligned}
p_j(\boldsymbol{\phi}^t) &= \sum_{i=1}^{|\mathcal{U}|} \frac{w_ia_i^t}{\sum_{h=1}^{|\mathcal{U}|}w_ha_h^t}\frac{q_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{k}(\phi_k^t)^{r}}\\
& = \sum_{i=1}^{|\mathcal{U}|} \frac{w_i(\sum_{l=1}^{|\mathcal{I}|}q_{il}v_l(\phi_l^t)^r)}{\sum_{h=1}^{|\mathcal{U}|}w_h(\sum_{l=1}^{|\mathcal{I}|}q_{hl}v_l(\phi_l^t)^r)}\frac{q_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{k=1}^{|\mathcal{I}|}q_{ik}v_{k}(\phi_k^t)^{r}}\\
& = \sum_{i=1}^{|\mathcal{U}|} \frac{w_iq_{ij}v_{j}(\phi_j^t)^{r}}{\sum_{h=1}^{|\mathcal{U}|}w_h(\sum_{l=1}^{|\mathcal{I}|}q_{hl}v_l(\phi_l^t)^r)}\\
& = \frac{\bar{q}_jv_{j}(\phi_j^t)^{r}}{\sum_{l=1}^{|\mathcal{I}|}\bar{q}_lv_l(\phi_l^t)^r},
\end{aligned}
\end{equation}
which has the same form as in the single-user-type case, hence the convergence result follows.
\end{proof}
\section{Results on Mirror Descent} \label{Results on Mirror Descent}
To analyse this special case, we use a similar approach to \cite{cheung2018dynamics}, which seeks an objective function such that the market dynamic can be related to mirror descent. Before analysing this special case, we first present some preliminary results on mirror descent.
\begin{defn}
Let $C$ be a compact and convex set. Given a differentiable convex function $h$ with domain $C$, the Bregman divergence generated by $h$, denoted by $d_h$, is defined as
\begin{equation}
d_h(\mathbf{x},\mathbf{y}) = h(\mathbf{x}) - h(\mathbf{y}) - \langle\nabla h(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle.
\end{equation}
where $\mathbf{x} \in C$, $\mathbf{y} \in \text{rint}(C) = \{\mathbf{y}\in C: \text{for all } \mathbf{z}\in C\text{, there exists some }\lambda>1 \text{ such that } \lambda \mathbf{y}+ (1-\lambda)\mathbf{z} \in C\}$.
\end{defn}
\begin{defn}
A function $f$ is $(\sigma,L)$-Bregman strongly convex with respect to the Bregman divergence $d_h$ if for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\begin{equation}
\begin{aligned}
f(\mathbf{y}) + &\langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + \sigma \cdot d_h(\mathbf{x},\mathbf{y}) \\
&\leq f(\mathbf{x}) \leq f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\end{aligned}
\end{equation}
\end{defn}
The mirror descent rule with respect to the Bregman divergence $d_h$ is given by
\begin{equation}
g (\mathbf{x}^t) = \arg\min_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle + \frac{1}{\alpha_t}d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MirrorDescentMinimiser}
\end{equation}
The standard update is then
\begin{equation}
\mathbf{x}^{t+1}_{\text{standard}} = g(\mathbf{x}^t). \label{MirrorDescentStandard}
\end{equation}
In addition, we also define the \emph{weighted} mirror descent rule as follows,
\begin{equation}
\mathbf{x}^{t+1}_{\text{weighted}} = \beta_tg(\mathbf{x}^t) + (1-\beta_t)\mathbf{x}^t.\label{MirrorDescentWeighted}
\end{equation}
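A sketch of the weighted update with the KL divergence (the map $g$ is the standard KL mirror-descent step; all names are ours):
\begin{verbatim}
import numpy as np

def weighted_md_step(x, grad, beta, alpha=1.0):
    g = x * np.exp(-alpha * grad)   # standard KL mirror-descent step g(x)
    g /= g.sum()
    return beta * g + (1.0 - beta) * x
\end{verbatim}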
Firstly, we give the convergence result for standard mirror descent.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.1]
Suppose that $f$ is $(\sigma,L)$-Bregman strongly convex with respect to $d_h$. With the update rule \eqref{MirrorDescentStandard} and $\alpha_t = 1/L$, for all $t\geq1$,
\begin{equation}
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{\sigma}{(\frac{L}{L-\sigma})^t -1}\cdot d_h(\mathbf{x}^*, \mathbf{x}^0).
\end{equation}
\label{Mirror_descent_convergence_thm_standard}
\end{thm}
For general convex functions, we also have the following result.
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman-convex function with respect to $d_h$, and $\mathbf{x}^T$ is the point reached after $T$ applications of the mirror descent update rule \eqref{MirrorDescentStandard}. Then,
\begin{equation}
f(\mathbf{x}^T) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{T}.
\end{equation}
\end{thm}
Next, going beyond the existing results for the standard update, we analyse the weighted update. It turns out that the weighted update rule also converges to the global minimum of $f$ under certain assumptions.
\begin{thm}
Suppose that $f$ is $(\sigma,L)$-Bregman strongly convex with respect to $d_h$, and that $d_h(\mathbf{x},\mathbf{y})$ is convex in both variables. Under the conditions $\sum_{t=0}^{\infty}\beta_t = \infty$ and $0<\beta_t\leq1$ for all integers $t\geq0$, the update rule \eqref{MirrorDescentWeighted} with step size $\alpha_t = 1$ converges to the global minimum of $f$:
\begin{equation}
\lim_{t\rightarrow \infty} \mathbf{x}_{\text{weighted}}^t = \mathbf{x}^*.
\end{equation}
\label{theorem_weighted_converge}
\end{thm}
The proof will rely on the following lemmas.
\begin{lem}
Suppose that $f$ is $(\sigma,L)$-Bregman strongly convex with respect to $d_h$, and that $d_h(\mathbf{x},\mathbf{y})$ is convex in both variables.
With the update rule \eqref{MirrorDescentWeighted},
\begin{equation}
f(\mathbf{x}^{t+1}) \leq f(\mathbf{x}^t)
\end{equation}
\end{lem}
\begin{proof}
By Bregman strong convexity, we have
\begin{equation}
\begin{aligned}
f(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) &\leq \innerProd{\nabla f(\mathbf{x}^t)}{\mathbf{x}^{t+1} - \mathbf{x}^t} + L\cdot d_h(\mathbf{x}^{t+1}, \mathbf{x}^t)\\
&\leq \beta_t \innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} + \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t).
\end{aligned}
\end{equation}
Since $g(\mathbf{x}^t)$ minimises the objective in \eqref{MirrorDescentMinimiser}, substituting the feasible point $\mathbf{x}^t$ shows that the RHS is at most $0$, and the result follows.
\end{proof}
\begin{lem}[\cite{birnbaum2011distributed}, Lemma 9]
If $x^+$ is the optimal solution to the following optimisation problem
\begin{equation}
\begin{aligned}
\min \ \varphi(x) + d_h(x,y),\\
\text{subject to } x\in C,
\end{aligned}
\end{equation}
where $\varphi(x)$ is a convex function and $C$ is a compact convex set. Then,
\begin{equation}
\varphi(x)+ d_h(x,y) \geq \varphi(x^+)+ d_h(x^+,y) + d_h(x,x^+).
\end{equation} \label{lemma 17}
\end{lem}
\begin{proof}[Proof of \Cref{theorem_weighted_converge}]
By Bregman strong convexity, we have
\begin{equation}
\begin{aligned}
f(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) \leq &\beta_t \innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} \\
&+ \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t).
\end{aligned}
\end{equation}
With the above lemma, taking $x^+ = g(\mathbf{x}^{t}),\ y = \mathbf{x}^t,\ x= \mathbf{x}^*$, we have
\begin{equation}
\begin{aligned}
\beta_t&\innerProd{\nabla f(\mathbf{x}^t)}{ g(\mathbf{x}^t) - \mathbf{x}^t} + \beta_tL\cdot d_h(g(\mathbf{x}^t), \mathbf{x}^t)\\
&\leq \beta_t\innerProd{\nabla f(\mathbf{x}^t)}{ \mathbf{x}^* - \mathbf{x}^t} + \beta_tL\cdot (d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
Using Bregman strong convexity, we also have
\begin{equation}
\innerProd{\nabla f(\mathbf{x}^t)}{ \mathbf{x}^* - \mathbf{x}^t} \leq f(\mathbf{x}^*) - f(\mathbf{x}^t) - \sigma d_h(\mathbf{x}^*,\mathbf{x}^t).
\end{equation}
Combining the above inequalities, we obtain
\begin{equation}
\begin{aligned}
f&(\mathbf{x}^{t+1}) - f(\mathbf{x}^t) \leq \beta_t(f(\mathbf{x}^*) - f(\mathbf{x}^t) - \sigma d_h(\mathbf{x}^*,\mathbf{x}^t)) \\
&+ \beta_tL\cdot (d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
Adding $f(\mathbf{x}^t) - f(\mathbf{x}^*)$ on both sides,
\begin{equation}
\begin{aligned}
f&(\mathbf{x}^{t+1}) - f(\mathbf{x}^*) \leq (1-\beta_t)(f(\mathbf{x}^t) - f(\mathbf{x}^*))\\
&+ \beta_t\cdot ((L-\sigma)d_h(\mathbf{x}^*, \mathbf{x}^t) - Ld_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{aligned}
\end{equation}
As $f(\mathbf{x}^{t}) \geq f(\mathbf{x}^{t+1})$, subtracting $(1-\beta_t)(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*))$ from both sides gives
\begin{equation}
\beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq (1-\beta_t)(f(\mathbf{x}^{t}) - f(\mathbf{x}^{t+1})) + \beta_t\cdot ((L-\sigma)d_h(\mathbf{x}^*, \mathbf{x}^t) - Ld_h(\mathbf{x}^*, g(\mathbf{x}^t))).
\end{equation}
Since $d_h$ is convex in its second variable, we also have
\begin{equation}
\begin{aligned}
d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*,\mathbf{x}^{t+1}) &\geq d_h(\mathbf{x}^*, \mathbf{x}^t)- (1-\beta_t)d_h(\mathbf{x}^*,\mathbf{x}^{t})\\
&\quad - \beta_td_h(\mathbf{x}^*,g(\mathbf{x}^{t})) \\
&= \beta_t(d_h(\mathbf{x}^*, \mathbf{x}^t)- d_h(\mathbf{x}^*,g(\mathbf{x}^{t}))).
\end{aligned}
\end{equation}
Hence, since $1-\beta_t \le 1$ and $f(\mathbf{x}^{t}) \geq f(\mathbf{x}^{t+1})$,
\begin{equation}
\beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq (f(\mathbf{x}^{t}) - f(\mathbf{x}^{t+1})) + L(d_h(\mathbf{x}^*, \mathbf{x}^t) - d_h(\mathbf{x}^*,\mathbf{x}^{t+1})).
\end{equation}
Both terms on the RHS telescope when summed over $t$, therefore
\begin{equation}
\sum_{t=0}^{N} \beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) \leq f(\mathbf{x}^{0}) - f(\mathbf{x}^{*}) + Ld_h(\mathbf{x}^*,\mathbf{x}^0),
\end{equation}
which yields
\begin{equation}
\lim_{N\rightarrow \infty}\sum_{t=0}^{N} \beta_t(f(\mathbf{x}^{t+1}) - f(\mathbf{x}^*)) < \infty.
\end{equation}
By the fact that $\sum_{t=0}^{\infty}\beta_t = \infty$, we have,
\begin{equation}
\liminf _{t\rightarrow \infty}(f(\mathbf{x}^{t}) - f(\mathbf{x}^*)) = 0.
\end{equation}
Finally, we conclude the proof by the monotonicity of $f(\mathbf{x}^{t})$.
\end{proof}
\section{The Robbins-Monro Algorithm}\label{sec:supp-RMA}
\subsection{Preliminaries on the Robbins-Monro Algorithm}
In this part, we summarise the existing results on Robbins-Monro algorithms that we need.
\begin{defn}[\cite{benaim1999dynamics}, section 4.2]
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\{\mathcal{F}_n\}_{n\in \mathbb{N}} \subset \mathcal{F}$ a filtration. A stochastic process $\{z_n\}_{n\in \mathbb{N}} \subset (\mathbb{R}^m)^{\mathbb{N}}$ is a Robbins-Monro algorithm if it is of the form
\begin{equation}
z_{n+1} = z_n + \gamma_{n+1}(F(z_n) + U_{n+1}),
\end{equation}
where $F: \mathbb{R}^m \rightarrow \mathbb{R}^m$ is continuous. Furthermore, it should satisfy the following conditions
\begin{itemize}
\item[(i)] $\{\gamma_n\}_{n\in\mathbb{N}}$ is a deterministic sequence.
\item[(ii)] $\{U_n\}_{n\in \mathbb{N}}$ is adapted, and $\mathbb{E}[U_{n+1} |\mathcal{F}_n] = 0$.
\end{itemize}
\end{defn}
Then we need the notion of asymptotic pseudotrajectories; formulating it requires the concept of a semiflow, i.e., a flow defined only for $t\geq 0$.
\begin{defn}[\cite{benaim1999dynamics}, section 3]
A semiflow $\Phi$ on a metric space $(M,d)$ is a continuous map
\begin{equation}
\begin{aligned}
\Phi: \mathbb{R}_+ \times M \rightarrow M, \\
\Phi(t,x) = \Phi_t(x)
\end{aligned}
\end{equation}
such that
\begin{equation}
\Phi_0 = id, \ \Phi_{t+s} = \Phi_{t} \circ \Phi_{s}
\end{equation}
for all $(t,s) \in \mathbb{R}_+ \times \mathbb{R}_+$.
\end{defn}
Next, for a (semi)flow (potentially induced by a vector field), we define the notion of an asymptotic pseudotrajectory. Roughly, it means that the trajectory stays arbitrarily close to the integral curve (or flow) over windows of any fixed length as $t$ tends to infinity.
\begin{defn}[\cite{benaim1999dynamics}, section 3]
A continuous function $X: \mathbb{R}_+ \rightarrow M$ is an asymptotic pseudotrajectory for $\Phi$ if
\begin{equation}
\lim_{t \rightarrow \infty} \sup_{0\leq h \leq T} d(X(t+h) , \Phi_h(X(t))) = 0
\end{equation}
for any $T>0$.
\end{defn}
Returning to our main problem, our ultimate goal is to study the trajectory of a discrete-time process $\{z_n\}_{n\in\mathbb{N}} \subset \mathbb{R}^m$ adapted to the filtration $\{\mathcal{F}_n\}_{n\in\mathbb{N}}$, which can be written as
\begin{equation}
z_{n+1} - z_n = \gamma_{n+1}(F(z_n) + U_{n+1}), \label{RM1}
\end{equation}
where $F \in C^\infty(\mathbb{R}^m,\mathbb{R}^m)$ is a smooth map, $\{\gamma_n\}_{n\in\mathbb{N}}$ are the step sizes satisfying \begin{equation}
\sum_{n=1}^{\infty}\gamma_n = \infty, \lim_{n\rightarrow \infty} \gamma_n = 0, \label{RM2}
\end{equation}
and $U_n \in \mathbb{R}^m$ are random perturbations satisfying $\mathbb{E}[U_{k+1}| \mathcal{F}_k] = 0$. For such a process, the ``trajectory'' can be defined as the interpolated curve connecting the sequence in $\mathbb{R}^m$ generated by the stochastic process. Set
\begin{equation}
\tau_0 = 0, \tau_n = \sum_{i=1}^{n}\gamma_i,
\end{equation}
where $n\geq 1$, and define the continuous-time affine interpolated process $Z(t)$ as
\begin{equation}
Z(\tau_n+s) = z_n +s\frac{z_{n+1} - z_n}{\tau_{n+1} - \tau_n}.
\end{equation}
Finally, we have the main tool for analysing such Robbins-Monro algorithms, summarised in the following theorem.
\begin{thm}[\cite{benaim1999dynamics}, section 4]
Let $F$ be a smooth vector field on $M$ that has unique integral curves around every point. Assume that
\begin{enumerate}
\item $\sup_n \mathbb{E}[\Vert U_{n+1}\Vert^q] < \infty$ and $\sum_{n\in\mathbb{N}} \gamma_n^{1+\frac{q}{2}} < \infty$ for some $q\geq2$.
\item Either
\begin{equation}
\sup_n{\Vert z_n\Vert} < \infty,
\end{equation}
or $F$ is Lipschitz and bounded on a neighbourhood of $\{z_n: n\geq 0\}$.
\end{enumerate}
Then the interpolated process $Z$ is almost surely an asymptotic pseudotrajectory of the flow $\Phi$ induced by $F$. \label{RM-converge-thm}
\end{thm}
\subsection{Formulation of the Robbins-Monro Algorithm}\label{Formulation of the Robbin-Monro Algorithm}
Define $D_{ij}^t$ as the number of purchases made by user group $i\in \mathcal{U}$ of item $j \in \mathcal{I}$ up to round $t\in \mathbb{N}$. Then let $\mathbf{D}^t \in \mathbb{R}^{|\mathcal{I}| \times |\mathcal{U}| + 1}$ be defined as $\mathbf{D}^t= [D_{\text{dum}}^t, D_{i,j}^t]_{i\in \mathcal{U}, j \in \mathcal{I}}$, where $D_{\text{dum}}^t$ records the number of rounds with no purchase up to time $t$. Different from \cite{maldonado2018popularity}, we now consider all rounds, including those in which no purchase happens. The evolution of $\mathbf{D}$ can be written as
\begin{equation}
\mathbf{D}^{t+1} = \mathbf{D}^t + \mathbf{E}^{t+1},
\end{equation}
where $\mathbf{E}^{t+1} = [e_{\text{dum}},e_{ij}]_{i\in \mathcal{U}, j\in \mathcal{I}}$ contains $|\mathcal{I}| \times |\mathcal{U}| + 1$ random variables with $\mathbb{P}(e^{t+1}_{ij} = 1) =w_iq_{ij} \frac{v_{ij}(\sum_i{D_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{D_{ij}^t})^r}$ and $\mathbb{P}(e^{t+1}_{\text{dum}} = 1) = 1- \sum_{ij}w_iq_{ij} \frac{v_{ij}(\sum_i{D_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{D_{ij}^t})^r}$. Note that these variables are not independent, since $\Vert\mathbf{E}^{t+1}\Vert_1 =1$ for any $t$: the vector encodes the single event occurring at time $t+1$.
Assume $\Vert \mathbf{D}^0\Vert_1 \neq 0$, and define $\alpha_{ij}^t= \frac{D^t_{ij}}{\sum_{ij}D^t_{ij} + D_{\text{dum}}^t}$ and $\alpha_{\text{dum}}^t= \frac{D^t_{\text{dum}}}{\sum_{ij}D^t_{ij} + D_{\text{dum}}^t}$. We note that if $\lim_{t\rightarrow\infty}\alpha^t_{ij}$ exists for all $i,j$, then the limit of the market share exists, because the market share can be calculated as
\begin{equation}
\phi_j^t = \frac{\sum_{i}\alpha^{t}_{ij}}{\sum_{ij}\alpha^{t}_{ij}}.
\end{equation}
Let $\Pi^t = \sum_{ij}D^t_{ij} + D_{\text{dum}}^t$, $\mathbf{A}^t = [\alpha_{\text{dum}}^t, \alpha_{ij}^t]_{i\in \mathcal{U}, j \in \mathcal{I}}$ then we have
\begin{equation}
(\Pi^t + 1)\mathbf{A}^{t+1} = \Pi^t\mathbf{A}^t + \mathbf{E}^{t+1}.
\end{equation}
Now we change variables by defining $x_{ij}^t = \frac{\alpha_{ij}^t}{q_{ij}}$, and similarly define the concatenation $\mathbf{X}^t = [x_{ij}]_{i\in \mathcal{U}, j\in \mathcal{I}}$. Define the function $\varLambda$ elementwise by $\varLambda(\mathbf{X}^t)_{ij} = w_i \frac{v_{ij}(\sum_i{q_{ij}x_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{q_{ij}x_{ij}^t})^r}$. Elementwise, the preceding equation can be expressed as
\begin{equation}
\begin{aligned}
x^{t+1}_{ij} &= \frac{ \Pi^t}{\Pi^t + 1}x^t_{ij}+ \frac{ 1}{\Pi^t + 1}\frac{e_{ij}^{t+1}}{q_{ij}}\\
& = x^t_{ij} + \frac{ 1}{\Pi^t + 1}(\varLambda(\mathbf{X}^t)_{ij} - x^t_{ij}) + \frac{ 1}{\Pi^t + 1}\Big(\frac{e_{ij}^{t+1}}{q_{ij}} - \varLambda(\mathbf{X}^t)_{ij}\Big). \label{Robbin-Monro x}
\end{aligned}
\end{equation}
Let $\{\mathcal{F}_t\}_{t\in \mathbb{N}}$ be the filtration generated by $\{\sigma(\mathbf{E}^s)\}_{s\leq t}$; then we notice that
\begin{equation}
\begin{aligned}
\mathbb{E}[\frac{e_{ij}^{t+1}}{q_{ij}} | \mathcal{F}_t] &= w_i \frac{v_{ij}(\sum_i{D_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{D_{ij}^t})^r} \\
&=w_i \frac{v_{ij}(\sum_i{\alpha_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{\alpha_{ij}^t})^r} \\
&= w_i \frac{v_{ij}(\sum_i{q_{ij}x_{ij}^t})^r}{\sum_jv_{ij}(\sum_i{q_{ij}x_{ij}^t})^r}.
\end{aligned}
\end{equation}
It is then clear that \eqref{Robbin-Monro x} is a Robbins-Monro algorithm, and we claim that the process \eqref{Robbin-Monro x} has the same limiting behaviour as the weighted mirror descent dynamic that we wish to analyse.
\begin{prop}
Consider the following deterministic process
\begin{equation}
\widehat{\mathbf{X}}^{t+1} = \frac{ \Pi^t}{\Pi^t + 1}\widehat{\mathbf{X}}^t+ \frac{ 1}{\Pi^t + 1}\varLambda(\widehat{\mathbf{X}}^t) \label{deterministic RM}
\end{equation}
with $\widehat{\mathbf{X}}^0 = \mathbf{X}^0$. If the process $\{\widehat{\mathbf{X}}^t\}_{t\in \mathbb{N}}$ converges to $\mathbf{X}_\infty$, then
\begin{equation}
\lim_{t\rightarrow \infty} \mathbf{X}^t = \mathbf{X}_\infty
\end{equation} almost surely. \label{Deter-Sto-equi}
\end{prop}
\begin{proof}
We work with the Euclidean norm $\Vert \cdot \Vert_2$. By definition, \eqref{deterministic RM} is also a Robbins-Monro algorithm, and $\widehat{\mathbf{X}}^t$ lies in a compact set for every $t$ (the set can be regarded as a linear transformation of $\Delta$). Then, by \Cref{RM-converge-thm}, it is an asymptotic pseudotrajectory of $\Phi_t$, where $\Phi_t$ is the flow generated by the ODE
\begin{equation}
\frac{d\mathbf{X}}{dt} = \varLambda(\mathbf{X}) - \mathbf{X}.
\end{equation}
Then, by the triangle inequality,
\begin{equation}
\Vert \widehat{\mathbf{X}}^t - \mathbf{X}^t\Vert_2 \leq \Vert \widehat{\mathbf{X}}^t - \Phi_t(\mathbf{X}_0) \Vert_2 + \Vert \mathbf{X}^t - \Phi_t(\mathbf{X}_0) \Vert_2.
\end{equation}
Taking ${t\rightarrow\infty}$ on both sides yields the result.
\end{proof}
\section{Calculation of objective function} \label{Calculation of objective function}
In this section, we show that $\varLambda(\mathbf{X})$ is the standard mirror descent update for the following optimisation problem, in the special case $q_{ij} = q_j$ for all $i\in \mathcal{U}$.
\begin{equation}
\begin{aligned}
&\min \quad \Psi(\mathbf{X}) = \sum_{ij}x_{ij}\log(\frac{x_{ij}}{v_{ij}}) + (r-1)\sum_{ij}x_{ij} \\ &\quad\quad\quad\quad\quad- r\sum_{j}(\sum_{i} x_{ij})\log(\sum_i(q_jx_{ij})), \\
&\text{subject to } \sum_j x_{ij} = w_i, \ \sum_iw_i =1 \label{OBJ-MirrorDesecnt} \text{ and } x_{ij}\geq 0 \text{ for all $i,j$}.
\end{aligned}
\end{equation}
The following lemma gives the calculation of the mirror descent update rule for the above problem.
\begin{lem}
The dynamic $$\mathbf{X}^{t+1} = \varLambda(\mathbf{X}^t)$$ is the standard mirror descent update \eqref{MirrorDescentStandard} of the problem \eqref{OBJ-MirrorDesecnt} with step size $\alpha_t =1$ for all $t\in \mathbb{N}$.
\end{lem}
\begin{proof}
It is straightforward to see that
\begin{equation}
\frac{\partial \Psi}{\partial x_{ij}^t} = -\log v_{ij} - r\log(\sum_i q_jx_{ij}^t) + \log x_{ij}^t. \label{70}
\end{equation}
In our case, let $F$ denote the objective function in \eqref{MirrorDescentMinimiser}, then
\begin{equation}
\frac{\partial F}{\partial x_{ij}} = \frac{\partial \Psi}{\partial x_{ij}^t} + \log{x_{ij}} + 1 -\log{x_{ij}^t}.
\end{equation}
Using the Lagrange multiplier, we have
\begin{equation}
\frac{\partial \mathcal{L}}{\partial x_{ij}} = \frac{\partial \Psi}{\partial x_{ij}^t} + \log{x_{ij}} + 1 -\log{x_{ij}^t} + \lambda = 0.
\end{equation}
Then, we can deduce that
\begin{equation}
x_{ij} = \exp\Big(C - \frac{\partial \Psi}{\partial x_{ij}^t}\Big)x_{ij}^t. \label{73}
\end{equation}
Imposing the feasibility condition, we have
\begin{equation}
w_i = \sum_j x_{ij} = \sum_j \exp\Big(C - \frac{\partial \Psi}{\partial x_{ij}^t}\Big)x_{ij}^t.
\end{equation}
Therefore,
\begin{equation}
\exp(C) = \frac{w_i}{\sum_j\exp\big( - \frac{\partial \Psi}{\partial x_{ij}^t}\big)x_{ij}^t}.
\end{equation}
Plugging this into \eqref{73} and using \eqref{70}, the result follows directly.
\end{proof}
It remains to show that the function $\Psi$ is $(\sigma,L)$-Bregman strongly convex for some $\sigma, L\geq 0$. The following lemma shows that the objective function indeed possesses this property.
\begin{lem}
The function $\Psi$ is $(1-r,1)$-Bregman strongly convex w.r.t the KL divergence.
\end{lem}
\begin{proof}
We only need to consider the following expression
\begin{equation}
\delta = \Psi(\mathbf{X}) - \Psi(\mathbf{Y}) - \innerProd{\nabla \Psi(\mathbf{Y})}{\mathbf{X}- \mathbf{Y}}.
\end{equation}
By standard calculations, we have
\begin{equation}
\delta = r\sum_j\Big((\sum_i x_{ij})\log(\sum_i y_{ij}) - (\sum_i x_{ij})\log(\sum_i x_{ij})\Big) + KL\infdivx{\mathbf{X}}{\mathbf{Y}}.
\end{equation}
The first term equals $-r\sum_j(\sum_i x_{ij})\log\frac{\sum_i x_{ij}}{\sum_i y_{ij}}$, and by the log-sum inequality $0 \leq \sum_j(\sum_i x_{ij})\log\frac{\sum_i x_{ij}}{\sum_i y_{ij}} \leq KL\infdivx{\mathbf{X}}{\mathbf{Y}}$. The upper bound gives $\delta \geq (1-r)KL\infdivx{\mathbf{X}}{\mathbf{Y}}$, and the lower bound gives $\delta \leq KL\infdivx{\mathbf{X}}{\mathbf{Y}}$, which concludes the proof.
\end{proof}
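The two bounds can also be verified numerically. The sketch below (arbitrary problem sizes) samples feasible pairs $\mathbf{X},\mathbf{Y}$ and checks $(1-r)\,KL\infdivx{\mathbf{X}}{\mathbf{Y}} \leq \delta \leq KL\infdivx{\mathbf{X}}{\mathbf{Y}}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
U, I, r = 3, 4, 0.7
v = rng.uniform(0.1, 1, (U, I))
q = rng.uniform(0.1, 1, I)
w = rng.dirichlet(np.ones(U))

def feasible():
    return w[:, None] * rng.dirichlet(np.ones(I), U)

def Psi(x):
    s = x.sum(axis=0)
    return ((x * np.log(x / v)).sum() + (r - 1) * x.sum()
            - r * (s * np.log(q * s)).sum())

def grad(x):                                    # eq. (70)
    return -np.log(v) - r * np.log(q * x.sum(axis=0)) + np.log(x)

def KL(x, y):                                   # x, y carry equal total mass
    return (x * np.log(x / y)).sum()

for _ in range(1000):
    X, Y = feasible(), feasible()
    d = Psi(X) - Psi(Y) - (grad(Y) * (X - Y)).sum()
    assert (1 - r) * KL(X, Y) - 1e-9 <= d <= KL(X, Y) + 1e-9
print("Bregman bounds hold on 1000 random feasible pairs")
\end{verbatim}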
\section{Proof of Theorem \ref{thm:convergence-hetero}}\label{Proof of theorem 13}
For the case $v_{ij} = v_j$, the proof can be
found in \Cref{Special case: $v_{ij} = v_j$ for heterogeneous setup}.
For the case $q_{ij} = q_j$, firstly, by a change of variable, it suffices to
show that the RMA dynamic \eqref{Robbin-Monro x} converges to a unique limit (see
\Cref{Formulation of the Robbin-Monro Algorithm}). Then, we establish the equivalence between \eqref{Robbin-Monro x}
and the deterministic dynamic \eqref{deterministic RM} (see Proposition \ref{Deter-Sto-equi}). By
\Cref{Continuous_dynamics_as_mirror_descent}, the deterministic process can be regarded as a weighted mirror
descent described in \eqref{MirrorDescentWeighted} (see \Cref{Results on Mirror Descent}). Then \Cref{theorem_weighted_converge} yields the convergence result.
\section{Proof of Theorem \ref{thm-homo-convergence}}\label{Proof of theorem 10}
For the homogeneous case, we can reformulate the market share evolution as:
\begin{equation}
\begin{aligned}
\boldsymbol{\phi}^{t+1} &= \frac{t}{t+1}\boldsymbol{\phi}^t + \frac{1}{t+1}\mathbf{e}^{t+1}\\
&=\boldsymbol{\phi}^t + \frac{1}{t+1}(\mathbf{p}(\boldsymbol{\phi}^t) - \boldsymbol{\phi}^t) + \frac{1}{t+1}(\mathbf{e}^{t+1}-\mathbf{p}(\boldsymbol{\phi}^t)). \label{RMA-homo}
\end{aligned}
\end{equation}
This is an RMA. By considering its deterministic analog
\begin{equation}
\boldsymbol{\phi}^{t+1} =\boldsymbol{\phi}^t + \frac{1}{t+1}(\mathbf{p}(\boldsymbol{\phi}^t) - \boldsymbol{\phi}^t)
\end{equation}
which is equivalent to \eqref{RMA-homo} by Theorem 34 (using the same arguments as Proposition \ref{Deter-Sto-equi}). Hence, considering the objective function \eqref{Efficiency Entropy Problem} (which is $(1-r,1)$-Bregman strongly convex), this is a weighted mirror descent as described in \eqref{MirrorDescentWeighted_main}. Therefore, by Theorem \ref{theorem_weighted_converge}, the convergence follows.
\section{Additional simulation results}
\begin{figure*}[h]
\centering
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/hetero_error_bar.pdf}%
\end{minipage}%
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/homo_error_bar.pdf}%
\end{minipage}%
\caption{The evolution of market efficiency under different ranking strategies, for the heterogeneous (left) and homogeneous (right) settings. Each simulation is run for 300,000 new users; a measurement is taken every 1000 time steps. The solid lines denote the median, and the shaded areas represent the 25th to 75th percentile of 50 repeated simulations.}
\label{fig:baseline}
\end{figure*}
\section{A Shorter Proof of Theorem \ref{thm-homo-convergence} and Theorem \ref{thm:convergence-hetero}}\label{sec:supp-shorter-proof}
\begin{defn}
Let $C$ be a compact and convex set, given a differentiable convex function $h$ with convex domain $C$,
the Bregman divergence generated by $h$, which is denoted by $d_h$ is defined as
\[
d_h(\mathbf{x},\mathbf{y}) = h(\mathbf{x}) - h(\mathbf{y}) - \langle\nabla h(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle,
\]
where $\mathbf{x} \in C$, $\mathbf{y} \in \text{rint}(C) = \{\mathbf{y}\in C: \text{for all } \mathbf{z}\in C\text{, there exists some }\lambda>1 \text{ such that } \lambda \mathbf{y}+ (1-\lambda)\mathbf{z} \in C\}$.
\end{defn}
We assume that the function $h$ is strictly convex. Consequently, $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, and $d_h(\mathbf{x},\mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$.
\begin{defn}
A function $f$ is $L$-Bregman-convex with respect to the Bregman divergence $d_h$ if
for any $\mathbf{y} \in \text{rint}(C)$ and $\mathbf{x}\in C$,
\[
f(\mathbf{x}) ~\leq~ f(\mathbf{y}) + \langle\nabla f(\mathbf{y}), \mathbf{x}-\mathbf{y}\rangle + L \cdot d_h(\mathbf{x},\mathbf{y}).
\]
\end{defn}
Given an $L$-Bregman-convex function $f$ with respect to the Bregman divergence $d_h$,
the mirror descent rule with respect to the Bregman divergence $d_h$ is given by $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$, where
\begin{equation}
g (\mathbf{x}^t) = \argmin_{\mathbf{x}\in C} \left\{f(\mathbf{x}^t) + \langle\nabla f(\mathbf{x}^t), \mathbf{x} - \mathbf{x}^t\rangle +L \cdot d_h(\mathbf{x},\mathbf{x}^t) \right\}. \label{MDMinimizer}
\end{equation}
\begin{thm}[\cite{cheung2018dynamics}, Theorem 3.2]
Suppose $f$ is an $L$-Bregman-convex function with respect to $d_h$, and $\mathbf{x}^t$ is the point reached after $t$ applications of the mirror descent update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$. Then for all $t\ge 1$,
\[
f(\mathbf{x}^t) - f(\mathbf{x}^*) \leq \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x}^0 )}{t}.
\]
\end{thm}
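The theorem can be illustrated on a small instance: $f(\mathbf{x}) = \frac{1}{2}\Vert \mathbf{x}-\mathbf{p}\Vert_2^2$ over the simplex is $1$-Bregman-convex w.r.t.~the KL divergence by Pinsker's inequality, and the $L\cdot d_h(\mathbf{x}^*,\mathbf{x}^0)/t$ bound can be checked along the iterates (the choice of $f$ and $\mathbf{p}$ here is arbitrary).
\begin{verbatim}
import numpy as np

p = np.array([0.5, 0.3, 0.2])              # the minimiser of f on the simplex
f = lambda x: 0.5 * ((x - p) ** 2).sum()
grad = lambda x: x - p
L = 1.0                                    # by Pinsker, f is 1-Bregman-convex

x = np.ones(3) / 3
d0 = (p * np.log(p / x)).sum()             # d_h(x*, x^0) = KL(p || x^0)
for t in range(1, 51):
    x = x * np.exp(-grad(x) / L)           # KL mirror descent step
    x /= x.sum()
    assert f(x) - f(p) <= L * d0 / t + 1e-12
print("f-gap obeys the L * d_h(x*, x0) / t bound for 50 steps")
\end{verbatim}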
\begin{lem}\label{lem:strict-decrease}
Suppose $X^*$ is the set of minimum points of $f$ in $C$. If $\mathbf{x}\in C$ and $\mathbf{x} \notin X^*$, then $f(g(\mathbf{x})) < f(\mathbf{x})$.
\end{lem}
\begin{proof}
First, we show that $g(\mathbf{x}) \neq \mathbf{x}$. Suppose the contrary.
Then $\mathbf{x}$ is a fixed point of the update rule $\mathbf{x}^{t+1} = g(\mathbf{x}^t)$.
By setting $\mathbf{x}^0 = \mathbf{x}$, by the above theorem, for any $\mathbf{x}^*\in X^*$,
we have $f(\mathbf{x}) - f(\mathbf{x}^*) \le \frac{L\cdot d_h(\mathbf{x}^*, \mathbf{x} )}{t}$ for any $t\ge 1$.
Since the LHS does not depend on $t$, this is possible only when $f(\mathbf{x}) - f(\mathbf{x}^*) = 0$
and hence $\mathbf{x} \in X^*$, a contradiction.
Let $\hat{f}(\mathbf{y}) = f(\mathbf{x}) + \langle\nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}\rangle +L \cdot d_h(\mathbf{y},\mathbf{x})$.
By the definition of $g$, we have $\hat{f}(g(\mathbf{x})) \le \hat{f}(\mathbf{x})$.
Since $d_h(\cdot,\cdot)$ is strictly convex in its first parameter, $\hat{f}$ is strictly convex.
Since $g(\mathbf{x}) \neq \mathbf{x}$, for any $\mathbf{x}'$ on the line segment between $\mathbf{x}$ and $g(\mathbf{x})$, $\hat{f}(\mathbf{x}') < \hat{f}(\mathbf{x})$.
Then by the definition of $g$ again, we have $\hat{f}(g(\mathbf{x})) \le \hat{f}(\mathbf{x}') < \hat{f}(\mathbf{x}) = f(\mathbf{x})$.
Finally, since $f$ is $L$-Bregman-convex with respect to $d_h$, $f(g(\mathbf{x})) \le \hat{f}(g(\mathbf{x}))$.
This completes the proof of $f(g(\mathbf{x})) < f(\mathbf{x})$.
\end{proof}
To use the established theorems regarding the Robbins-Monro algorithm, we consider the following ODE:
\[
\dot{\mathbf{x}} ~=~ g(\mathbf{x}) - \mathbf{x}.
\]
Due to \cref{lem:strict-decrease} and the assumption that $f$ is convex,
we have $\innerProd{f'(\mathbf{x})}{g(\mathbf{x}) - \mathbf{x}} < 0$ for any $\mathbf{x} \notin X^*$.
Thus, the following theorem from \cite[Corollary 3]{Borkar2008} is applicable.
\begin{thm}
Consider an ODE $\dot{\mathbf{x}} = h(\mathbf{x})$.
Suppose there is a continuously differentiable function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ such that
(i) $\lim_{\|\mathbf{x}\|\rightarrow \infty} f(\mathbf{x}) = +\infty$;
(ii) the set of minimum points of $f$, $X^*$, is non-empty; and
(iii) $\innerProd{f'(\mathbf{x})}{h(\mathbf{x})} \le 0$ for all $\mathbf{x}$, with equality if and only if $\mathbf{x}\in X^*$.
Then almost surely, the Robbins-Monro algorithm of the ODE converges to a non-empty subset of $X^*$.
\end{thm}
\section{Conclusion}
\label{sec:conclusion}
This paper views the dynamics of cultural markets under an optimisation lens.
We identify new objective functions for trial-offer markets, and establish robust connections between social feedback signals and optimisation processes.
Our results narrow the gap between the theory and practice of recommender systems. In particular, they make the analysis of recommender systems more versatile by incorporating user-specific preferences, and offer a holistic view of market stability and efficiency beyond individual clicks and views.
Simulations using real-world user preferences confirm that markets with heterogeneous preferences are more stable and more efficient.
Our work leads to several open research questions, such as convergence rates of the stochastic T-O markets, analysis of general heterogeneous T-O settings, fairness properties of market equilibria, and describing markets that also learn a recommender system in the loop \cite{mladenov2021recsim}. More generally, we hope the current work opens up new ways of asking and answering research questions at the intersection of classical markets and online attention.
\section{Empirical observations}\label{sect:experiments}
We simulate cultural markets using real-world preferences from the well-known MovieLens dataset~\cite{harper2015movielens}, in order to explore the efficiency and diversity of the market in homogeneous and heterogeneous settings, and under different ranking strategies\footnote{Code and data that reproduce results in this section is at \url{https://github.com/haiqingzhu543/Stability-and-Efficiency-of-Personalised-Cultural-Markets}}.
The simulations aim to answer key questions such as whether the heterogeneous T-O market is more efficient, whether the T-O market is stable as prescribed in \Cref{sec:results}, and whether stability sacrifices diversity.
\begin{figure*}[bt]
\centering
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/eff_hetero_homo_modified.pdf}%
\end{minipage}%
\begin{minipage}{0.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{figs/entropy_vs_eff_modified.pdf}%
\end{minipage}%
\caption{Simulation results on the MovieLens dataset.
Each simulation is run for 300,000 time steps (each introducing a new user); a measurement is taken every 1000 time steps. (Left) Market efficiency over time, comparing the homogeneous vs heterogeneous settings under three ranking strategies (random/popularity/quality). The lines denote the median of 50 simulations with different random initialisations. (Right) Efficiency over the entropy of market shares in the heterogeneous setting. The lines denote the median of efficiency over 50 simulations with different random initialisations; markers denote iterations 1000, 100,000, 200,000 and 300,000, \lx{and error bars represent the 25th to 75th percentile range in both efficiency and entropy}. }
\label{fig:empiricalobs}
\end{figure*}
We set up the simulation using the MovieLens-100k dataset \cite{harper2015movielens}. This dataset consists of 100,000 ratings (valued 1-5) from 943 users on 1682 movies, where each user has rated at least 20 movies.
We performed matrix completion using incomplete SVD
\cite{bell2007lessons} via the {\tt Surprise} python package\footnote{\url{https://surpriselib.com}}, yielding a preference matrix $\Gamma = [\gamma_{ij}] \in \mathbb{R}^{943\times1682}$ for each (user, movie) pair.
Here $\gamma_{ij} \in [0,1]$ denotes the normalised preference of user $i\in \mathcal{U}$ for item (movie) $j \in \mathcal{I}$.
Denote by $\mathcal{O}$ the set of all indices $(i,j)$ with $\gamma_{ij}$ observed (in the MovieLens-100k dataset), and by $\mathcal{\bar O}$ the set of unobserved indices (entries estimated with incomplete SVD).\\
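A sketch of this preprocessing step is given below; the hyper-parameters of the {\tt Surprise} SVD and the convention of keeping raw (normalised) ratings on observed entries are our assumptions rather than a prescription.
\begin{verbatim}
import numpy as np
from surprise import SVD, Dataset

data = Dataset.load_builtin("ml-100k")        # 943 users, 1682 movies
trainset = data.build_full_trainset()
algo = SVD(random_state=0)                    # incomplete-SVD factorisation
algo.fit(trainset)

Gamma = np.empty((943, 1682))                 # normalised preference matrix
observed = np.zeros((943, 1682), dtype=bool)  # the set O of observed entries
for u, i, rating in trainset.all_ratings():   # inner ids -> raw MovieLens ids
    ru, ri = int(trainset.to_raw_uid(u)), int(trainset.to_raw_iid(i))
    observed[ru - 1, ri - 1] = True
    Gamma[ru - 1, ri - 1] = (rating - 1.0) / 4.0   # map ratings [1,5] -> [0,1]

for ru in range(1, 944):                      # fill the unobserved set \bar O
    for ri in range(1, 1683):
        if not observed[ru - 1, ri - 1]:
            est = algo.predict(str(ru), str(ri)).est
            Gamma[ru - 1, ri - 1] = (est - 1.0) / 4.0
\end{verbatim}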
To simulate heterogeneous preference types, we then divide the users into $M$ non-overlapping subgroups based on user attributes.
Let $\bigcup_{i=1}^{M}\mathcal{U}_i= \mathcal{U}$, where $\mathcal{U}_i$ denotes the set of users in group $i$, and hence the weights $w_i = |\mathcal{U}_i|/ | \mathcal{U}|$. We first calculate the {\em visibility} factor $\hat v_{ij}$ by averaging over the set of observed entries generated by user group $i$ using \eqref{eq:hatVij}.
We also calculate the {\em quality} factor $q_{ij}$ by averaging over the set of {\it unobserved} entries generated by user group $i$ using \eqref{eq:Qij}.
This choice reflects the intuition that in a T-O market, the probability of purchase depends on a quality factor that is often unknown to the platform a priori, before a user tries an item. Other strategies for estimating $q_{ij}$ and $\hat v_{ij}$ are left for future work.
\begin{eqnarray}
\hat{v}_{ij} = \frac{1}{|\{(k,j)\in \mathcal{O} : k \in \mathcal{U}_i\}|} \sum_{(k,j)\in \mathcal{O}, k \in \mathcal{U}_i} \gamma_{kj}, \label{eq:hatVij}\\
q_{ij} = \frac{1}{|\{(k,j) \in \mathcal{\bar O} : k \in \mathcal{U}_i\}|}\sum_{(k,j) \in \mathcal{\bar O}, k \in \mathcal{U}_i} \gamma_{kj}. \label{eq:Qij}
\end{eqnarray}
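In code, the group-averaged factors \eqref{eq:hatVij} and \eqref{eq:Qij} can be computed as below; the synthetic $\Gamma$, observation mask and group labels are placeholders for the outputs of the matrix-completion and clustering steps.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Gamma = rng.uniform(size=(943, 1682))             # stand-in: completed matrix
observed = rng.uniform(size=Gamma.shape) < 0.06   # stand-in: observed mask
groups = rng.integers(0, 100, size=943)           # stand-in: K-means labels

M = 100
v_hat = np.zeros((M, Gamma.shape[1]))
q = np.zeros((M, Gamma.shape[1]))
for g in range(M):
    rows, obs = Gamma[groups == g], observed[groups == g]
    n_obs = np.maximum(obs.sum(axis=0), 1)        # guard empty averages
    n_unobs = np.maximum((~obs).sum(axis=0), 1)
    v_hat[g] = (rows * obs).sum(axis=0) / n_obs   # observed mean, eq. (hatVij)
    q[g] = (rows * ~obs).sum(axis=0) / n_unobs    # unobserved mean, eq. (Qij)

w = np.bincount(groups, minlength=M) / len(groups)   # group weights w_i
\end{verbatim}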
We cluster users into 100 groups using K-means on the rows of $\Gamma$.
This grouping is used to compare market efficiency and diversity in homogenous (all users have the same preferences) and heterogenous (100 user groups) settings.
Results for other group compositions are in the supplement~\cite{supp3065}.
We also account for position bias in ranked lists~\cite{craswell2008experimental} in order to construct the actual visibility factor $v_{ij}$, which can vary over time, denoted as $v_{ij}^t$.
Prior {\sc MusicLab} models have included position-bias parameters~\cite{krumme2012QuantifyingSI,maldonado2018popularity}
for the top-50 items, with zero visibility assigned to all other items.
This results in a list of fixed weights $\iota_k$, $k=1,\ldots,1682$, where $\iota_{1},\ldots,\iota_{50} > 0$, and $\iota_{51},\ldots,\iota_{1682} = 0$.
We adopt the separable click-through rate (CTR) model commonly used in modelling auctions \cite{aggarwal2006truthful}, which simply multiplies the inherent visibility by a ranking factor $\eta_{ij}^t$ of item $j$ presented to user $i$ at time $t$. Different ranking strategies produce different $\eta_{ij}^t$ to modulate the visibility term $\hat{v}_{ij}$, which in turn results in different probability distributions in the trial phase (equation \eqref{eq:mnlchoice}).
\begin{equation}
v_{ij}^t = \hat{v}_{ij}\eta_{ij}^t
\end{equation}
We introduce the following three ranking strategies that define the relationship between $\iota_k$ and $\eta_{ij}^t$; a code sketch implementing them follows the list.
\begin{itemize}[leftmargin=*]
\item Random-ranking. Upon each simulation round, the ranking of items is drawn uniformly at random from the permutations of indices $\{1, 2, \ldots,1682\}$. That is, the visibility term $v_{ij}$ is set to the product of $\hat v_{ij}$ and a random element of $\iota$. This ranking changes at each simulation step. One expects such a strategy to promote diversity while preserving some information on item appeal through $\hat v_{ij}$.
\item Popularity-ranking. This strategy sorts the items by descending market share $\boldsymbol{\phi}^t$, and set $\eta_{ij}^t = \iota_k$, with $k = \arg \text{sort}_{desc} \{\boldsymbol{\phi}^t\} [j]$. This ranking will change over the simulation steps, and is analogous to the original {\sc MusicLab} experimental setting~\cite{salganik2006ExperimentalSO}. One expects this strategy to be unstable due to the randomness early in the simulation, since it could accidentally promote items that users do not like to the top, resulting in the high quality items being buried.
\item Quality-ranking. Denote the descending sorting rank of item $j$ among user group $i$ as $k = \arg \text{sort}_{desc} \{q_{i, :}\} [j]$, where $q_{i, :}$ is the one-dimensional array of quality factors in group $i$, and $[j]$ denotes array indexing. Then the position-bias factor of the corresponding rank is used, i.e. $\eta_{ij}^t = \iota_k$. This ranking does not change over the simulation steps, since both $q_{ij}$ and its sorted order remain fixed. One expects this strategy to best align visibility with the underlying quality metrics (unobserved before trying), since it has {\em oracle} access to $q_{ij}$, and should yield high efficiency.
\end{itemize}
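The sketch below implements the three strategies; the position-bias weights $\iota_k$ in practice come from the cited literature, so the $1/k$ decay used here is merely a placeholder assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_items = 1682
iota = np.zeros(n_items)               # fixed position-bias weights
iota[:50] = 1.0 / np.arange(1, 51)     # placeholder decay; only top-50 visible

def eta(strategy, phi=None, q_row=None):
    """Position-bias factor of every item under one ranking strategy."""
    out = np.empty(n_items)
    if strategy == "random":
        order = rng.permutation(n_items)    # fresh permutation each round
    elif strategy == "popularity":
        order = np.argsort(-phi)            # descending market share
    elif strategy == "quality":
        order = np.argsort(-q_row)          # descending (oracle) quality
    out[order] = iota                       # item ranked k-th gets iota_k
    return out

# v_ij^t = hat_v_ij * eta_ij^t, e.g. for user group i under popularity ranking:
# v_t = v_hat[i] * eta("popularity", phi=phi_t)
\end{verbatim}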
In each round of the simulation, one new user arrives at the market and chooses an item for a \emph{trial} according to the multinomial logit (equation \ref{eq:mnlchoice}). Then the user chooses whether to \emph{purchase} this particular item by flipping a biased coin parameterized by $q_{ij}$.
Note that these new users are {\it generalisations} of the $M$ groups of user populations in MovieLens via attributes $q_{ij}$ and $\hat v_{ij}$, rather than being subsets (or samples) from the original 943 users.
This setting is consistent with other theoretical and simulation studies of cultural market and recommender systems~\cite{krumme2012QuantifyingSI,jiang2019degenerate}. We report {\em market efficiency} as the empirical version of \Cref{defn:efficiency}, namely, the fraction of users who made a purchase. We also measure diversity among the set of items, by computing the entropy of market share $\boldsymbol{\phi}$ (\Cref{eq:entropy}) at each time step. We explore the relationship between these two metrics.
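One simulation round can then be sketched as follows (toy parameters; in the experiments $v^t_{ij}$ is the ranked visibility $\hat v_{ij}\eta^t_{ij}$ and $q_{ij}$ the group quality factor).
\begin{verbatim}
import numpy as np

def step(d, v_t, q, w, r, rng):
    """One T-O round: a random-type user tries one item, then maybe buys it."""
    phi = d / d.sum()                    # market shares from purchase counts
    i = rng.choice(len(w), p=w)          # draw the arriving user's type
    score = v_t[i] * phi ** r            # multinomial-logit weights
    j = rng.choice(len(phi), p=score / score.sum())   # trial
    if rng.random() < q[i, j]:           # purchase with probability q_ij
        d[j] += 1
    return d

rng = np.random.default_rng(0)
w, r = np.array([0.5, 0.3, 0.2]), 0.6
v_t = rng.uniform(0.1, 1, (3, 5))
q = rng.uniform(0.1, 1, (3, 5))
d = np.ones(5)                           # every item pre-seeded with a purchase
for _ in range(100000):
    d = step(d, v_t, q, w, r, rng)
print(d / d.sum())                       # empirical market shares
\end{verbatim}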
\Cref{fig:empiricalobs} summarises the trends of efficiency and diversity over the two settings -- homogeneous/heterogeneous -- and three ranking strategies -- random/popularity/quality. In the left panel, the quality-ranking oracle has the highest efficiency among the three ranking strategies, followed by popularity ranking, with random ranking having the lowest efficiency once there is a sufficient number of users.
\lx{Taking into account heterogeneous user preferences improves efficiency in both quality and popularity ranking settings, despite the gap being smaller than that between the two ranking strategies.}
\Cref{fig:empiricalobs} (right) compares both efficiency and diversity (measured by the entropy of market shares) at different iterations for different ranking strategies in the heterogeneous setting.
\lx{For popularity and quality rankings, item diversity decreases over time (curves moving left) while efficiency increases (curves moving slightly up).
Comparing the two, quality ranking yields more diversity across the items (higher entropy) and is more stable (smaller spread on both dimensions).
This observation corroborates \Cref{prop:Nash-SW} that the heterogeneous \textsc{MusicLab} objective is ex-post concave.
Popularity ranking results in larger variations in both efficiency and entropy, confirming observations in the original \textsc{MusicLab} experiment~\cite{salganik2006ExperimentalSO} -- that market allocation is unstable under random initialisations, resulting in market dominance by a few popular items.}
\lx{As a control group, random ranking yields the lowest efficiency and no apparent differences between homogeneous and heterogeneous user preferences. In this setting, efficiency improves slightly due to the joint effect of both visibility and quality terms. But item diversity stays close to the theoretical maximum in entropy ($\ln (1682) \approx 7.4$ nats) over time, indicating that the random ranking with cut-off at the top 50 items plays a larger role in user choice than the signals present in the visibility and quality terms.}
\section{Introduction}
Online content platforms are major sources for everyday entertainment, and the attention allocation within a platform provides a market setting of interest.
There are strong social and economic motivations for the stakeholders to understand these markets.
The platform providers ask how they can improve user experience and raise profits.
They may also want to enforce diversity of the cultural products, so as to achieve a sustainable business model.
The content producers are interested in strategies to improve their products, so as to gain more popularity and raise revenues.
Regulatory bodies and the user population seek to understand the market dynamic and its drivers, with the goal of making attention markets more transparent, accountable and fair.
However, understanding such markets is challenging, since their outcomes are susceptible to intricate social influence dynamics among customers,
which in turn are affected by the recommender systems used by the platform providers.
While various analyses have been done for the classical markets (e.g., Arrow-Debreu/exchange markets, Fisher markets),
the online cultural markets (or more generally, \emph{attention economies}) have several key aspects different from the classical markets,
so it is not clear if
known analyses directly apply.
In classical markets, goods are scarce. Their prices act as the coordination signals to balance demand and supply.
Typically, there exist moderate prices that lead to equilibrium. In online cultural markets, however,
the digital goods can be reproduced with essentially no cost, so their supplies are unlimited.
Users' attention is the scarce commodity that the producers compete for.
Typically, the influence dynamics and recommender systems tend to \emph{cascade}, i.e.~to promote goods which have already received high attention.
This is known to
lead to polarizing and unpredictable outcomes.
Since the seminal empirical work by Salganik et al.~\cite{salganik2006ExperimentalSO}, now dubbed \textsc{MusicLab},
several mathematical models have been proposed to describe it~\cite{krumme2012QuantifyingSI,abeliuk2015benefits,maldonado2018popularity}, but all of them assume that user preferences are homogeneous. This is in stark contrast to the rich literature and wide-spread practice of recommender systems, which focus on estimating and catering to heterogeneous preferences in user populations.
On the other hand, recent work on classical markets (especially Fisher markets) offers a range of results to understand equilibria from an algorithmic and optimization perspective~\cite{Zhang2011PR,birnbaum2011distributed,cheung2018dynamics}.
One may wonder: can recent results on Fisher markets be extended to describe the implicit computations in cultural markets?
Specifically for the \textsc{MusicLab} model~\cite{krumme2012QuantifyingSI}, customers' behaviors are characterized by a two-step trial-offer (T-O) process: first,
they select a product to try; second, they decide to purchase or not.
The first step is a stochastic process where the randomness depends on the intrinsic \emph{appeal} of the products,
and also the history of past purchases by customers, creating a feedback loop.
The second step is random depending on the intrinsic \emph{quality} to the customer.
\begin{figure}[bt]
\centering
\includegraphics[width=0.46\textwidth]{figs/teaser_new.pdf}%
\vspace{-0.3cm}
\caption{A core contribution of this paper is to provide an optimisation view (Top) of cultural markets (Bottom), which affords new results on stability, efficiency and equilibrium behavior. (Bottom) An illustration of a cultural market with several types of users interacting with a few items (color similarity between users and items indicates differing matches in preferences). Users allocate attention to the items based on the proportional-response-esque dynamic \Cref{choice TO hetero}. }
\label{fig:bigalpha}
\vspace{-0.2in}
\end{figure}
\subsection* {Our Contributions}
The main themes of this work are to establish several robust connections between stochastic T-O markets and optimization, and to use these connections to \marco{rigorously} show that the influence dynamics in these markets are stable.
For homogeneous markets, we discover two objective functions for which the equilibrium of a T-O market
is a maximizer. The first objective is the ``total utility'' of the market.
The second objective is of particular interest due to its natural interpretation.
It is a weighted sum of the efficiency and the diversity of the market shares, the latter measured by the Shannon entropy.
While efficiency is a natural benchmark, diversity in a cultural market is also important for the healthy development of the platform.
Diversity not only broadens the customer base, it also provides financial support to the less popular producers, keeping them in the cultural industry.
Thus, it is in the platform providers' interest to strike a balance between efficiency and diversity.
Interestingly, we show that the influence dynamic is indeed equivalent to stochastic mirror descent on the second objective. This suggests the dynamic is implicitly optimizing \marco{the} natural objective in the market.
A significant consequence is that this allows us to present a new proof of a result of Maldonado et al.~\cite{maldonado2018popularity} that the dynamic converges to an equilibrium of the market almost surely.
For heterogeneous markets, we show that the equilibrium optimizes an \emph{ex-post} version of Nash social welfare. In classical Fisher markets, Nash social welfare is the product of users' utilities, where each utility is raised to the power of the user's budget. In our case, the power is the budget times the \emph{efficiency} for that user at the equilibrium. Then we turn our focus to two interesting sub-classes of heterogeneous markets, namely (i) the users have the same appeals on the items, but they perceive the qualities of the items differently; (ii) the users perceive the same qualities of the items, but they have different appeals on the items. For (i), we show that it is equivalent to a homogeneous market. For (ii), we design a new objective function, for which the influence dynamic is equivalent to stochastic mirror descent on the objective. Again, this allows us to show the dynamic converges to an equilibrium almost surely.
The robust connection between the dynamics and optimization processes echoes \emph{the self-reinforced efficiency} of some economic systems,
for which there exist natural dynamics or algorithms that attain equilibrium, while the equilibrium optimizes a popular efficiency measure such as social welfare. See Related Work below for more relevant discussion.
\lx{We perform simulations using user preferences from the well-known MovieLens-100K dataset. We observe that accounting for heterogeneous user preferences improves efficiency in cultural markets while preserving stability. We examine the (user-centric) efficiency and (item- or producer-centric) diversity measures across three ranking strategies: random, quality-driven, and popularity-driven. Results confirm that quality ranking is more efficient and more diverse than popularity ranking, which was implemented in \textsc{MusicLab}~\cite{salganik2006ExperimentalSO} and known to be unstable}.
\subsection*{Related Work}
As early as 1971, Simon~\cite{simon1971designing} pointed out that in an information-rich world, attention becomes the new scarce resource that information consumes.
Examples of attention economies include entertainment such as music, film and television~\citep{salganik2006ExperimentalSO,bell2007lessons}, political campaigns and votes~\cite{BondFJKMSF2012}, scientific publications and researchers~\cite{Fortunato2018TheSO}.
Since Simon's visionary statement, the research community has formulated economic questions about attention in a number of different ways, such as articulating the phenomenon of attention scarcity in corporate life~\cite{davenport2001attention}, diagnostic criteria for attention scarcity and solving it as (one-off) allocation problems~\cite{falkinger2007attention,falkinger2008limited}, or connecting attention allocation to advertising revenue~\cite{evans2020economics}.
A recent study by Vosoughi et al.~\cite{VosoughiFalseNews2018} showed that false news spreads faster online, suggesting that besides quality,
the appeal (e.g., novelty of the false news and the emotion it stimulates) of a digital good is crucial in social influence.
More broadly, the web research community have measured attention to items by individual users~\cite{tong2020brain}, a large set of users~\cite{wu2018beyond}, and attention among a network of items~\cite{Wu2019EstimatingAF}.
The concept of self-reinforced efficiency of economic systems can be traced back to the ``invisible hand'' metaphor of Adam Smith.
One of the first analytical confirmations of the concept is the famous First Welfare Theorem,
which states that a market equilibrium of any complete market is Pareto efficient~\cite{Arrow1951,Debreu1959}.
Furthermore, in a broad class of markets called Eisenberg-Gale Fisher markets,
market equilibrium optimizes a popular efficiency measure called Nash social welfare \cite{EG59,Eisenberg61,JainVazirani-EG2010}.
On the other hand, in combinatorial auctions, any Walrasian equilibrium (if it exists) optimizes the social welfare \cite{BM1997}.
In many of these economic systems, there are natural adaptive price/bidding dynamics (e.g., t\^atonnement~\cite{walras,ABH59,CMV05,CF08,CCR12,CheungCD2020,CheungHN19} and proportional response~\cite{WZ2007,LLSB08,Zhang2011PR,birnbaum2011distributed,cheung2018dynamics,BMN2018,GaoKroer2020,BranzeiDR2021,CheungLP2021,CheungLSP2022,CheungHN19}) or
auction algorithms (e.g., ascending price auction~\cite{KelsoCrawford1982,NS2006}) that can attain the efficient equilibria. As we shall see, the influence dynamics we study are indeed a stochastic version of proportional response.
\vspace{-0.mm}
\subsection*{Paper Organization}
After describing the models in \Cref{sec:model}, we present our main results formally in \Cref{sec:results}.
In \Cref{sect:analysis}, we provide an overview of the techniques we employ for proving the main results.
This is followed by a discussion of empirical observations in \Cref{sect:experiments}.
\section{Model: The Trial-Offer Market with Heterogeneous User Types}
\label{sec:model}
First, we describe the stochastic trial-offer (T-O) market, in which users come to the platform one-by-one to try and purchase the items.
We introduce measures of efficiency and diversity.
Then we describe a continuous and deterministic analogue of the stochastic model which will be useful for analysis.
In this work, we still use {\em purchase} to denote a user completing a transaction on an item, where the resource a user {\em spends} is attention. One could think of it as a unit amount of time. Without loss of generality, we assume that each user has the same {\em budget} of attention, and that each item {\em costs} one unit of attention.
This model generalises cultural markets specified
by Krumme et al.~\cite{krumme2012QuantifyingSI} and Maldonado et al.~\cite{maldonado2018popularity} to
heterogeneous types of users.
\paragraph{Stochastic Trial-offer (T-O) Market.}
Let $\mathcal{U}$ denote the types of users and $\mathcal{I}$ denote the set of items.
The fraction of Type-$i$ users is denoted by $w_i$; note that $\sum_{i=1}^{|\mathcal{U}|} w_i = 1$.
If $|\mathcal{U}|=1$, we say the market is \emph{homogeneous}, otherwise it is \emph{heterogeneous}.
The dynamic starts at time $t=0$. At each time $t\ge 1$, a random user comes to the platform and tries an item,
and then she decides to purchase the item or not.
Let $d_j^t$ denote the number of purchases of item $j$ up to time $t$.
To ensure that all items have a positive probability to be tried in the initial rounds,
we assume that each item was purchased \lx{at least once} before the dynamic starts, i.e. $d_j^0 \ge 1$ for every item $j$.
The \emph{market share} of item $j$ at time $t$ is \lx{simply the fraction of all purchases that goes to item $j$.}
\[
\phi^t_j ~:=~ \frac{d^t_j}{\sum_{k=1}^{|\mathcal{I}|} d^t_k}.
\]
\lx{The possible market shares lies on a simplex, denoted as $\Delta$:}
\begin{equation}
\Delta = \left\{\boldsymbol{\phi}\in \mathbb{R}^{|\mathcal{I}|} : \sum_{j=1}^{|\mathcal{I}|} \phi_j = 1, \phi_j\geq 0 \text{ for any item $j$}\right\}. \label{eq: Simplex}
\end{equation}
If the user at round $t$ is of Type-$i$, the probability that she will try item $j$ is modelled as a multinomial logit, \lx{a common type of discrete choice model~\cite{greene2017econometric}}.
\begin{equation}
\tilde y_{ij}^t = \frac{v_{ij} (\phi^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i}}~,\label{eq:mnlchoice}
\end{equation}
Here $v_{ij} \ge 0$ is a parameter that captures the \emph{visibility} of item $j$ to Type-$i$ users, which depends on the \emph{appeal} of the item itself, and also on how the item is promoted or ranked with respect to other items;
$r_i > 0$ is a parameter that captures the strength of the feedback signal for Type-$i$ users.\footnote{Here $r_i=0$ denotes no {\em social} feedback signal from the current market share, whereas $r_i \rightarrow \infty$ means only the most popular item will be chosen in the next round. If the denominator $\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i} = 0$, then this probability is defined as $0$.}
After a Type-$i$ user tries an item $j$, they purchase the item with probability $q_{ij} \in [0,1]$, \lx{which intuitively reflects the {\it quality} of an item that may be unknown before it is tried}.
In a homogeneous market, there is only one user type, so we drop all indices $i$ from the notation, resulting in $v_j, q_j, r$, and clearly $w_1=1$.
This distributed dynamic is the result of two interacting factors. The first is the pair of user-specific visibility and quality factors $v_{ij}$ and $q_{ij}$, which generalises recent models that analyse homogeneous attention markets with a feedback loop \cite{krumme2012QuantifyingSI,jiang2019degenerate}.
The second is a social feedback signal $(\phi_j^{t-1})^{r_i}$ based on the overall popularity of the item, such as the one implemented by the original \textsc{MusicLab} experiment~\cite{salganik2006ExperimentalSO}, or the number of downloads and likes on myriads of internet platforms. This feedback dynamic is also similar to proportional response in Fisher markets~\cite{cheung2018dynamics}, which we will exploit for obtaining key results in \Cref{sec:results}.
Ranked lists are one of the most popular forms of presenting a set of items to users, and a salient factor affecting the visibility of an item is its position in such a list~\cite{craswell2008experimental}.
If the positions are fixed throughout the attention dynamic, $v_{ij}$ remains constant.
Our theoretical results focus on this case.
\lx{In \Cref{sect:experiments}, we empirically explore how strategies of dynamically} positioning the items by the platform will affect the outcome.
We compute the probability of item $j$ being the next purchase by combining the trial and purchase probabilities.
\begin{lem}
In a stochastic T-O market defined above,
the probability that the next purchase is for item $j$, denoted by $p_j(\boldsymbol{\phi})$,
is a function of the current market share $\boldsymbol{\phi}$,
given by $p_j(\boldsymbol{\phi}) = y_j(\boldsymbol{\phi}) / (\sum_{k=1}^{|\mathcal{I}|} y_k(\boldsymbol{\phi}))$, where
\begin{equation}
y_j(\boldsymbol{\phi}) ~:=~ \sum_{i=1}^{|\mathcal{U}|} w_i q_{ij} \cdot \frac{v_{ij} (\phi_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi_k)^{r_i}}.\label{choice TO hetero}
\end{equation}
$y_j(\boldsymbol{\phi})$ represents the probability that item $j$ is tried and then purchased by any user group.
In the homogeneous case, the probability that the next purchase is for item $j$ is
\begin{equation}
\frac{v_j q_j (\phi_j)^{r}}{\sum_{k=1}^{|\mathcal{I}|} v_k q_k (\phi_k)^{r}}.\label{choice TO}
\end{equation}
\label{lem:market-eff}
\end{lem}
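For concreteness, $y_j(\boldsymbol{\phi})$ and $p_j(\boldsymbol{\phi})$ can be computed as in the sketch below (toy parameters); the total $\sum_j y_j(\boldsymbol{\phi})$ is the probability that the next arriving user purchases at all.
\begin{verbatim}
import numpy as np

def y(phi, v, q, w, r):
    """Probability item j is tried then purchased, eq. (choice TO hetero)."""
    trial = v * phi ** r[:, None]
    trial /= trial.sum(axis=1, keepdims=True)   # type-wise trial probabilities
    return (w[:, None] * q * trial).sum(axis=0)

def p(phi, v, q, w, r):
    """Distribution of the next purchase."""
    yj = y(phi, v, q, w, r)
    return yj / yj.sum()

rng = np.random.default_rng(0)
v = rng.uniform(0.1, 1, (3, 5))
q = rng.uniform(0.1, 1, (3, 5))
w = np.array([0.5, 0.3, 0.2])
r = np.array([0.6, 0.6, 0.6])
phi = np.ones(5) / 5
print(p(phi, v, q, w, r), y(phi, v, q, w, r).sum())
\end{verbatim}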
For $\boldsymbol{\phi}$ to be a stationary point in this stochastic process, it must satisfy $p_j(\boldsymbol{\phi}) = \phi_j$
for all items $j$. This motivates the following equilibrium notion.
\begin{defn}\label{defn:TOME}
For any T-O market, we say a market share $\boldsymbol{\phi}$ is a trial-offer market equilibrium (TOME) if $\mathbf{p}(\boldsymbol{\phi}) = \boldsymbol{\phi}$.
\end{defn}
We show that a TOME exists under mild conditions (\cite{supp3065}\ref{supp:TOME_exist}) using Brouwer's fixed-point theorem.
\begin{thm}\label{thm:exist-TOME}
If (i) $0 < r_i < 1$ and $w_i > 0$ for any user type $i$, and
(ii) for each item $j$ there exists type $i$ such that $v_{ij} q_{ij} > 0$,
then the market must have a TOME $\boldsymbol{\phi}^*$, in which $\phi_j^* > 0$ for all items $j$.
\end{thm}
In the homogeneous case, we can explicitly compute the unique TOME $\boldsymbol{\phi}^*$ if $0<r<1$:
\begin{equation}
\phi^*_j ~=~ \frac{(v_j q_j)^{1/(1-r)}}{\sum_{k=1}^{|\mathcal{I}|} (v_k q_k)^{1/(1-r)}},\label{eq:TOMEhomo}
\end{equation}
provided that $v_k q_k > 0$ for some item $k$.
It is easy to verify that $\boldsymbol{\phi}^*$ is a stationary point by plugging it into \Cref{defn:TOME}. \Cref{sec:results} specifies how to obtain $\boldsymbol{\phi}^*$ and argues for its uniqueness \lx{in the interior of the simplex $\Delta$}.
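The sketch below simulates the stochastic homogeneous market (recording purchases only, which leaves the law of $d^t$ unchanged) and compares the long-run empirical shares with \eqref{eq:TOMEhomo}; the parameters are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(0.1, 1, 6)
q = rng.uniform(0.1, 1, 6)
r = 0.5

star = (v * q) ** (1 / (1 - r))          # closed-form TOME, eq. (TOMEhomo)
star /= star.sum()

d = np.ones(6)                           # d_j^0 = 1 for every item
for t in range(500000):
    phi = d / d.sum()
    score = v * q * phi ** r             # next-purchase law, eq. (choice TO)
    j = rng.choice(6, p=score / score.sum())
    d[j] += 1                            # one simulated purchase per round
print(np.abs(d / d.sum() - star).max())  # small: shares drift to the TOME
\end{verbatim}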
\paragraph{Efficiency and Diversity Measures.}
An online platform may be interested in maximizing $\sum_{j=1}^{|\mathcal{I}|} y_j(\boldsymbol{\phi})$, the probability of a successful transaction among all items.
\begin{defn}\label{defn:efficiency}
\lx{Given} market share $\boldsymbol{\phi}$, define the T-O market efficiency as $E(\boldsymbol{\phi}) := \sum_{j=1}^{|\mathcal{I}|} y_j(\boldsymbol{\phi})$.
\end{defn}
A platform may also be interested in promoting diversity among items.
A natural measure of diversity is
the \emph{Shannon entropy}, the standard measure of uncertainty of a probability distribution in information theory \cite{Cover2006}.
Given market share $\boldsymbol{\phi}$, its Shannon entropy is
\begin{equation}
H(\boldsymbol{\phi}) = - \sum_{j=1}^{|\mathcal{I}|} \phi_j \log \phi_j.
\label{eq:entropy}
\end{equation}
\paragraph{The Deterministic T-O Market Dynamic.}
This model is analogous to the stochastic T-O model.
There is one user of each type $i$, whose budget is $w_i$.
The budget $w_i$ corresponds to the maximum amount of attention
the buyer can afford in the platform. At each time $t\ge 0$,
each buyer $i$ spends an amount $b_{ij}^t$ on item $j$,
subject to the budget constraint $\sum_{j=1}^{|\mathcal{I}|} b_{ij}^t \le w_i$.
Let the total spending on each item be $b_j^t := \sum_{i=1}^{|\mathcal{U}|} b_{ij}^t$, then the market share of item $j$ at time $t$ is $\phi_j^t := b_j^t / (\sum_{k=1}^{|\mathcal{I}|} b_k^t)$.
For $t\ge 1$, the update rule is
\begin{equation}
b_{ij}^t = w_i q_{ij} \cdot \frac{v_{ij} (\phi^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (\phi^{t-1}_k)^{r_i}}
~=~ w_i q_{ij} \cdot \frac{v_{ij} (b^{t-1}_j)^{r_i}}{\sum_{k=1}^{|\mathcal{I}|} v_{ik} (b^{t-1}_k)^{r_i}}. \label{cont dynamic} \end{equation}
Note that the middle term in \eqref{cont dynamic} is the same as the term appearing in \eqref{choice TO hetero},
so $b_j^t = \sum_{i=1}^{|\mathcal{U}|} b_{ij}^t$ is the same as $y_j(\boldsymbol{\phi}^{t-1})$ in \eqref{choice TO hetero}.
With this, it is not hard to see that $\boldsymbol{\phi}^*$ is a TOME of a stochastic T-O market if and only if
$\mathbf{b}^*$, where each $b^*_{ij}$ is computed by the middle term of \eqref{cont dynamic} by replacing $\boldsymbol{\phi}^{t-1}$ with $\boldsymbol{\phi}^*$, is a fixed point of the dynamic \eqref{cont dynamic} of the analogous continuous and deterministic market.
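The deterministic dynamic \eqref{cont dynamic} is straightforward to iterate. The sketch below (toy parameters) runs it until the spending matrix stops changing, i.e.~until a fixed point of the deterministic market is reached.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
U, I = 3, 5
v = rng.uniform(0.1, 1, (U, I))
q = rng.uniform(0.1, 1, (U, I))
w = rng.dirichlet(np.ones(U))
r = np.full(U, 0.6)

b = w[:, None] * np.ones(I) / I          # initial spending: row i sums to w_i
for _ in range(5000):
    phi = b.sum(axis=0) / b.sum()        # market shares from total spending
    trial = v * phi ** r[:, None]
    trial /= trial.sum(axis=1, keepdims=True)
    b_new = w[:, None] * q * trial       # update rule, eq. (cont dynamic)
    if np.allclose(b_new, b, atol=1e-14):
        break
    b = b_new
print(b.sum(axis=0) / b.sum())           # market share at the fixed point
\end{verbatim}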
\paragraph{Comparison with Classical Fisher Market and Proportional Response.}
The continuous and deterministic market and the dynamic \eqref{cont dynamic} are reminiscent of
the classical Fisher market and the proportional response (PR) dynamic in \cite{cheung2018dynamics}.
However, there is one crucial difference.
The PR dynamic in Fisher markets is the same as \eqref{cont dynamic}
but with $b_k^{t-1}$ on the RHS replaced by $b_{ik}^{t-1}/b_k^{t-1}$.
There, the term $b_j^{t-1}$ is viewed as the \emph{price} of item $j$,
and a higher price in PR drives down spending on that item from the buyers.
In contrast, a higher value of $b_j^{t-1}$ in \eqref{cont dynamic}, which corresponds to receiving more attention in the T-O market, will lead to more spending on that item.
This reflects the tendency of cascading in \lx{online attention dynamics}.
\section{Results}
\label{sec:results}
This section provides an overview of our key new results in four parts. \Cref{ssec:maxutil} establishes a novel connection between TOME in homogeneous markets and two convex objectives. \Cref{ssec:mirror_descent} shows that update steps in deterministic T-O markets are mirror descent steps for these objectives.
\Cref{ssec:heteroRMA} presents the objective functions for heterogeneous markets and two special cases.
\vspace{-0.mm}
\subsection{TOME maximises regularised utilities}
\label{ssec:maxutil}
First, we establish a robust connection between TOME of {\it homogeneous} T-O market and optimization. For notational simplicity, let $\bar{q}_j = q_jv_j$, noting that the quality and visibility factors $q_jv_j$ are coupled in both \eqref{choice TO} and \eqref{eq:TOMEhomo}.
We consider the following constrained optimization problem:
\begin{equation}
\begin{aligned}
&\max \quad \sum_{j=1}^{|\mathcal{I}|} \bar{q}_j\phi_j^r, \\
&\text{subject to } \boldsymbol{\phi}\in\Delta. \label{TotalUtilityProblem}
\end{aligned}
\end{equation}
We also discover another optimization problem which captures the \lx{same} TOME in the homogeneous market.
\begin{equation}
\begin{aligned}
&\max \quad \Psi(\boldsymbol{\phi}) = \sum_{j=1}^{|\mathcal{I}|} \left(\phi_j \log \bar{q}_j - (1-r) \phi_j \log \phi_j\right), \\
&\text{subject to } \boldsymbol{\phi}\in\Delta. \label{Efficiency Entropy Problem}
\end{aligned}
\end{equation}
We establish the equivalence between the equilibria and the maximisers of the above problems in the following theorem. \lx{The proofs for both simply invoke Lagrange multipliers, and are found in \cite{supp3065} Section \ref{supp:social-welfare}.}
\begin{thm}\label{thm:social-welfare}
If $\boldsymbol{\phi}^*$ is a maximiser of problem \eqref{TotalUtilityProblem} or problem \eqref{Efficiency Entropy Problem}, then it is a TOME.
\end{thm}
We can view the objective function \eqref{TotalUtilityProblem} as the ``total utility'' since the choice probability of item $j$ is proportional to the ``utility'' associated with it, which is $\bar{q}_j\phi_j^r$.
The objective function in \eqref{Efficiency Entropy Problem} can be decomposed into two parts, namely
$\sum_{j=1}^{|\mathcal{I}|} \phi_j \cdot \log \bar{q}_j$ and $(1-r) \sum_{j=1}^{|\mathcal{I}|} -\phi_j \log \phi_j$.
The first summation can be viewed as an alternative measure of
total utility, with the utility of item $j$ being $\log \bar q_j$ weighted by its market share $\phi_j$.
The second summation is $(1-r)H(\boldsymbol{\phi})$ -- the entropy of market share.
When $r=1$, the entropy term disappears, so the optimization problem \eqref{Efficiency Entropy Problem}
becomes trivial: the optimal solution is obtained by setting $\phi_j = 1$ for the highest-utility item $j = \arg\max_k \bar{q}_k$.
As $r$ decreases from $1$, i.e. the strength of feedback signal reduces, the entropy term becomes more significant,
which encourages diversity in the optimal solution.
For $0<r<1$, \eqref{TotalUtilityProblem} and \eqref{Efficiency Entropy Problem} are both \marco{strictly} concave in $\boldsymbol{\phi}$, and therefore have a unique maximiser.
A crucial advantage of \eqref{Efficiency Entropy Problem} over \eqref{TotalUtilityProblem} is that
mirror descent on \eqref{Efficiency Entropy Problem} provides insight into the convergence
of the stochastic influence dynamics in T-O market as specified in \eqref{eq:mnlchoice} and \eqref{choice TO}.
To formally describe this discovery,
we need several concepts in optimisation theory, which are discussed next.
\vspace{-0.mm}
\subsection{T-O update as mirror descent\marco{, and TOME convergence for homogeneous markets}}
\label{ssec:mirror_descent}
\paragraph{Background: Bregman Divergence and Mirror Descent}
Consider a general constrained convex optimization problem of minimizing a smooth convex function $f(x)$, subject to the constraint $x\in C$ for some convex set $C$.
\begin{defn}\label{defn::Bregmann divergence}
Let $C$ be a compact and convex set. Let $h$ be a differentiable convex function on $C$.
The \emph{Bregman divergence} w.r.t.~$h$, denoted by $d_h$, is defined as
\[
d_h(x,y) = h(x) - h(y) - \langle \nabla h(y)~,~ x-y \rangle,
\]
for any $x\in C$ and $y\in \textsf{rint}(C)$.
\end{defn}
The widely used Kullback–Leibler (KL) divergence is a special case of Bregman divergence,
generated by the function $h(x) = \sum_j (x_j \log x_j - x_j)$.
Given a Bregman divergence $d_h$, the corresponding mirror descent update rule is
\begin{equation}\label{general MD rule}
x^{t} ~=~ \argmin_{x\in C} \left\{ \langle \nabla f(x^{t-1})~,~x-x^{t-1} \rangle + \frac{1}{\alpha} \cdot d_h(x,x^{t-1}) \right\},
\end{equation}
where $\alpha$ is the step size of the update rule, which may depend on $t$ in general.
\paragraph{New Result: T-O update as Mirror Descent} \marco{A key conceptual message of this paper is the equivalence of influence dynamic and mirror descent. To illuminate this, we first focus on the continuous, deterministic and homogeneous T-O market.}
\begin{lem}
Let $\boldsymbol{\phi}^t$ be the market share over time \lx{in a homogeneous T-O market}, and let the function $\mathbf{p}(\boldsymbol{\phi})$ be as defined in \marco{\eqref{choice TO}}. The update rule
\begin{equation}\label{deterministic update rule homogeneous}
\boldsymbol{\phi}^{t} = \mathbf{p}(\boldsymbol{\phi}^{t-1})
\end{equation}
is the same as the mirror descent update rule \eqref{general MD rule}
for the optimisation problem \eqref{Efficiency Entropy Problem}, in which $d_h$ is taken as the KL divergence,
and $\alpha = 1$.
\end{lem}
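The lemma can be verified numerically: one KL mirror descent step on $-\Psi$ with $\alpha=1$, started from an arbitrary interior $\boldsymbol{\phi}$, reproduces $\mathbf{p}(\boldsymbol{\phi})$ exactly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
qbar = rng.uniform(0.1, 1, 6)           # qbar_j = v_j * q_j
r = 0.5
phi = rng.dirichlet(np.ones(6))

p_next = qbar * phi ** r                # update rule p(phi), eq. (choice TO)
p_next /= p_next.sum()

g = -(np.log(qbar) - (1 - r) * (np.log(phi) + 1))   # gradient of -Psi
md = phi * np.exp(-g)                   # KL mirror descent step, alpha = 1
md /= md.sum()
print(np.allclose(md, p_next))          # True
\end{verbatim}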
\marco{Once the equivalence is established, the convergence to TOME in the continuous and deterministic dynamic becomes intuitive; we will provide the formal argument in \Cref{sect:analysis}. To show that the convergence extends to the stochastic setting,}
we use the methodology in Maldonado et al.~\cite{maldonado2018popularity} to rewrite the stochastic influence dynamic as a Robbins-Monro algorithm (RMA) \marco{of the corresponding continuous and deterministic dynamic}.
Then we can apply the results in stochastic approximation~\cite{benaim1999dynamics} to formally establish the convergence of the stochastic dynamic. \marco{We summarize our main result for homogeneous market in the theorem below, and leave the discussions of RMA and stochastic approximation to \Cref{sect:analysis}.}
\begin{thm}[\cite{maldonado2018popularity}, Theorem 5.3]\label{thm-homo-convergence}
With the homogeneous setup, if $\boldsymbol{\phi}^0 >0$ and $0\leq r < 1$, then with probability 1,
\begin{equation}
\lim_{t\rightarrow \infty} \boldsymbol{\phi}^t = \boldsymbol{\phi}^*,
\end{equation}
where $\boldsymbol{\phi}^*$ is the unique
maximiser of the convex optimisation problem \eqref{Efficiency Entropy Problem}. And for any $r\in[0,1]$,\begin{equation}
\lim_{t\rightarrow \infty} \Psi(\boldsymbol{\phi}^t) = \Psi^*
\end{equation}
with probability 1, where $\Psi$ is defined in \eqref{Efficiency Entropy Problem}, and $\Psi^*$ is the global maximum of problem \eqref{Efficiency Entropy Problem}.
\end{thm}
\vspace{-0.mm}
\subsection{TOME for heterogeneous markets}
\label{ssec:heteroRMA}
For the heterogeneous case, we first present the following proposition, which shows that the TOME optimises an ex-post version of a convex objective.
\begin{prop}\label{prop:Nash-SW}
\marco{Given a heterogeneous T-O market with a TOME $\boldsymbol{\phi}^*$},
the TOME $\boldsymbol{\phi}^*$ is the optimal solution of the following problem:
\begin{equation}
\begin{aligned}
\max_{\{\phi_j\}_{j=1}^{|\mathcal{I}|}} \prod_{i=1}^{|\mathcal{U}|}(\sum_{j=1}^{|\mathcal{I}|} q_{ij}v_{ij}\phi_j^{r})^{w_ia_i^*},\\
\text{subject to: }\ \sum_{j=1}^{|\mathcal{I}|} \phi_j = 1
\end{aligned}\label{eq:objhetero}
\end{equation}
where $a_i^* = \frac{\sum_{j=1}^{|\mathcal{I}|} q_{ij}v_{ij}\phi_j^*} {\sum_{k=1}^{|\mathcal{I}|}v_{ik}\phi_k^*} $ for any $i\in\mathcal{U}$.
\end{prop}
A proof of \Cref{prop:Nash-SW} is provided in the online supplement \cite{supp3065}\ref{Objective function for heterogeneous case}.
Problem \eqref{eq:objhetero} takes the form of a product-of-utilities (or sum-of-log-utilities). Known as Nash social welfare~\cite{kaneko1979nash}, \marco{this objective was found to strike a good balance between fairness and efficiency}
in the resulting allocations~\cite{bertsimas2011price,CaragiannisKMPS19}. A proper exposition of this connection is outside the scope of this paper. Once $a_i^*$ are known (hence \emph{ex-post}), Problem \eqref{eq:objhetero} is convex in $\boldsymbol{\phi}$.
However, if $a_i^*$ is unknown, the optimisation problem is non-convex.
We raise the properties of its optimal solution \marco{(e.g., is the optimal solution still a TOME?)} as an open problem.
Then we turn our focus to two interesting \lx{special} cases of the heterogeneous market, where $r_i = r$ for all types $i$, and:
\begin{itemize}
\item the trial randomness is the same across all user types (i.e., $v_{ij}$ are the same for all types $i$),
but the purchase randomness can be different (i.e., $q_{ij}$ can be different for various types $i$);
\item the trial randomness can be different across all types (i.e., $v_{ij}$ can be different for various types $i$), but the purchase randomness is the same across all types (i.e., $q_{ij}$ are the same for all types $i$).
\end{itemize}
For the first case, we show that it can be reduced to the homogeneous case.
For the second case, we design a new optimization problem and show that \eqref{cont dynamic} is indeed mirror descent for the problem.
The driving variables of the problem are $b_{ij}$ rather than $\phi_j$; these are also the driving variables in the dynamic \eqref{cont dynamic}.
We let $b_j = \sum_{i=1}^{|\mathcal{U}|} b_{ij}$, and set $q_j$ to be the common value of $q_{ij}$ for all $i$.
The optimisation problem is
\begin{equation}
\begin{aligned}
&\max \quad \sum_{j=1}^{|\mathcal{I}|} -\frac{r}{q_j} (b_j \log b_j - b_j) \\
&~~~~~~~~~~~~~ + \sum_{i=1}^{|\mathcal{U}|} \sum_{j=1}^{|\mathcal{I}|}\left(\frac{1}{q_j}(b_{ij} \log b_{ij} - b_{ij}) - \frac{b_{ij}}{q_j}\log q_{j}v_{ij}\right)\\
&\text{subject to } \sum_{j=1}^{|\mathcal{I}|} \frac{b_{ij}}{q_{j}} ~=~ w_i,~\forall i\in \mathcal{U}. \label{hetero obj function}
\end{aligned}
\end{equation}
By performing a variable transformation $x_{ij} = b_{ij} / q_j$ to the above problem,
we obtain an equivalent transformed optimization problem where $x_{ij}$ are the driving variables,
which is needed for the key lemma below. The proof is in \cite{supp3065}\ref{Calculation of objective function}.
\begin{lem}
The update rule \eqref{cont dynamic} is equivalent to mirror descent w.r.t.~KL divergence on the transformed optimization problem. \label{Continuous_dynamics_as_mirror_descent}
\end{lem}
With the above lemma in hand, we again use RMA and Bena{\"\i}m's results to establish convergence
of the stochastic influence dynamics in heterogeneous markets. The proof can be found in supplement \cite{supp3065}\ref{Proof of theorem 13}.
\begin{thm}\label{thm:convergence-hetero}
With the heterogeneous setup when $r_i =r < 1$ for all $i\in \mathcal{U}$, suppose that one of the following conditions is satisfied:
\begin{itemize}
\item[1.] $v_{ij} = v_j$ for all $i\in \mathcal{U}, j \in \mathcal{I}$;
\item[2.] $q_{ij} = q_j$ for all $i\in \mathcal{U}, j \in \mathcal{I}$;
\end{itemize}
and that, in addition, $\boldsymbol{\phi}^0 >0$.
Then with probability 1,
\begin{equation}
\lim_{t\rightarrow \infty} \boldsymbol{\phi}^t = \boldsymbol{\phi}^*,
\end{equation}
where $\boldsymbol{\phi}^*$ is the unique maximiser of the convex program \eqref{hetero obj function}. And when $r\in [0,1]$,
\begin{equation}
\lim_{t\rightarrow \infty}\Gamma(\boldsymbol{\phi}^t) = \Gamma^*,
\end{equation}
with probability 1, where $\Gamma$ is the objective function of problem \eqref{hetero obj function}, and $\Gamma^*$ is the global maximum of problem \eqref{hetero obj function}.
\end{thm}
By \Cref{defn:TOME}, there indeed exist multiple TOMEs in the simplex $\Delta$. However, there is only one TOME that lies in the relative interior of the simplex $\Delta$. The uniqueness of the limit point of the dynamic depends on the choice of initial point and the social influence parameter $r$:
\begin{itemize}[leftmargin=*]
\item When $r < 1$ and the initial market share $\boldsymbol{\phi}^0$ is in the relative interior of $\Delta$,
then the limit point of the dynamic is unique, which is the interior TOME, for both homogeneous and heterogeneous cases according to Theorems \ref{thm-homo-convergence} and \ref{thm:convergence-hetero}.
\item When $\boldsymbol{\phi}^0$ contains zero initial market shares for some items, it is equivalent to consider a market with those items eliminated. In other words, those items will not gain non-zero market share in subsequent iterations from zero initial market shares. The limit point of the dynamic is still unique in these cases.
\item When $r=1$, the objective functions in optimisation problems \eqref{Efficiency Entropy Problem} and \eqref{eq:objhetero} are not strictly concave. There is a level set containing multiple equilibria. The dynamic will converge to the level set which optimises these objective functions.
\end{itemize} |
\section{Introduction}
Recent HST proper motion measurements of the Magellanic Clouds
by Kallivayalil et al. (2006a,b) and \cite{Piatek07} indicate that they
are presently moving at velocities substantially higher (by almost 100 km s$^{-1}$) than those provided by previous
observational studies (van der Marel et al. 2002, Kroupa \& Bastian 1997).
Such high velocities ($v=378$ km s$^{-1}$ and $v=302$ km s$^{-1}$ for
the LMC and SMC, respectively)
are close to the escape velocity of the Milky Way
and consistent with the hypothesis
of a first passage about the Galaxy (Besla et al. 2007).
A single perigalactic passage has serious
implications for the origin of the Magellanic Stream.
It definitely rules out the tidal stripping hypothesis (Ruzicka, Theis \&
Palous 2008) since in this scenario
the loss of mass is primarily induced by tidal shocks suffered by satellites at the
pericenters (Mayer et al. 2006) and the Stream would not have time to form before the
present time. Indeed kinematical data suggest that the Clouds are now just after a
perigalactic passage.
On the other hand, ram-pressure scales as $v^2$, where $v$ is
the relative velocity between satellites and the ambient medium.
The high velocities of the Clouds could therefore compensate for the effect of
the reduced interaction time with the hot halo of the MW, and
hydrodynamical forces would play a determining role in forming the
Stream.\\
In \cite{Mastropietro05} (hereafter M05) we have performed high resolution N-body/SPH simulations to
study the hydrodynamical and gravitational interaction between the LMC
and the MW using orbital constraints by \cite{vanderMareletal02} and a
present time satellite velocity of 250 km s$^{-1}$.
We found that, after two perigalactic passages, the combined effect of tidal forces and
ram-pressure stripping can account for the majority of the LMC's
internal features and for the formation of the MS.
In more detail, ram-pressure stripping of cold gas from the LMC's disk
produces a Stream with morphology and kinematics similar to the
observed ones, while tidal stripping has
longer time-scales and is not efficient in forming stellar debris,
consistent with the lack of stars observed in the Stream.
Nevertheless, at each pericentric passage the LMC suffers tidal
heating which perturbs the overall structure of the satellite
reducing the gravitational restoring force and therefore indirectly contributing
to the loss of gas.\\
The main objection to this model, in light of the new proper
motion measurements of the LMC, is that
the time spent by the LMC within the hot halo of the MW would be too short to cover the full
extension of the Stream (more than 100 degrees) by ram-pressure mechanisms (Besla et al. 2007).
Moreover, hydrodynamical forces would act on a galaxy only weakly perturbed by the gravitational interaction, and stripping would be more difficult.
In this work I present the results of N-body/SPH simulations
where the interaction between the MW and the LMC is modeled according
to the new proper motion measurements of \cite{Kallivayaliletal06a}.
\section{Galaxy models}
The initial conditions of the simulations are constructed using the
technique described by \cite{Hernquist93}. Both the MW and the LMC are
multi-component galaxy models with a stellar and gaseous disk embedded
in a spherical dark matter halo. The density profile of the NFW halo
is adiabatically contracted due to baryonic cooling. Stars and cold
gas in the disks follow the same exponential surface density profile. We also
explore the possibility of an extended LMC gaseous disk. In this model
the gaseous disk is characterized by an additional
constant density layer which extends up to
eight times the scale length of the exponential disk.
The MW model comprises also a small stellar bulge and an extended low
density ($n = 2 \times 10^{-5}$ cm$^{-3}$ within 150 kpc from the Galactic center and $n = 8.5 \times 10^{-5}$ cm$^{-3}$ at 50 kpc) hot ($T=10^6$ K) halo in hydrostatic equilibrium inside the
Galactic potential (M05).
The MW model, with virial mass $10^{12} M_{\odot}$ and concentration $c=11$, is similar to model A1 of \cite{Klypin02} while the
structural parameters of the LMC are chosen in such a way that the resulting rotation curve
resembles that of a typical bulgeless late-type disk galaxy.
In detail, the satellite has virial mass $2.6 \times 10^{10}M_{\odot}$, concentration $c=9.5$ and the same amount of mass in the stellar and gaseous disk components ($\sim 10^9 M_{\odot}$).
The Toomre stability criterion is always satisfied, with the parameter $Q$ set equal to 1.5 and 2.0 at the disk scale radius in the different LMC models.\\
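For reference, the sketch below evaluates the standard Toomre parameter of a stellar disk, $Q=\sigma_R \kappa/(3.36\,G\Sigma)$; the input numbers are purely illustrative placeholders, since the models are specified here only through the quoted values of $Q$.
\begin{verbatim}
# Toomre stability parameter for a stellar disk (Toomre 1964).
G = 4.301e-6   # gravitational constant [kpc (km/s)^2 / Msun]

def toomre_q(sigma_r, kappa, sigma_disk):
    """sigma_r [km/s], kappa [km/s/kpc], sigma_disk [Msun/kpc^2]."""
    return sigma_r * kappa / (3.36 * G * sigma_disk)

# Hypothetical numbers of the right order for a small late-type disk:
print(toomre_q(sigma_r=15.0, kappa=40.0, sigma_disk=2.0e7))  # ~2
\end{verbatim}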
\section{Simulations}
I performed adiabatic simulations using GASOLINE, a parallel tree-code
with multi-stepping (Wadsley et al. 2004).
High resolution runs have $2.46 \times 10^6$ particles, of which
$3.5 \times 10^5$ are used for the disks and $5 \times 10^5$ for the
hot halo of the MW. The gravitational spline softening is set equal to
0.5 kpc for the dark and gaseous halos, and to 0.1 kpc for stars and
gas in the disk and bulge components.
\begin{figure}[h!]
\includegraphics[width=2.7in]{orbit.eps}
\includegraphics[width=2.7in]{velocity.eps}
\caption{Orbit (left panel) and orbital velocity (right) of the LMC.
Vertical lines represent the present time, just after the
perigalacticon. Present values and values at the perigalacticon are indicated. }
\label{orbit}
\end{figure}
In my best model the LMC approaches the MW on an unbound orbit
(Fig. \ref{orbit}) with initial Galactocentric distance of 400 kpc and velocity $\sim 190$ km s$^{-1}$.
After
the perigalactic passage (at $\sim 40$ kpc) the velocity decreases faster than for a
ballistic orbit as a result of dynamical friction.
The escape velocity at a given LMC position is indicated by a red curve in the right panel and calculated assuming a
spherical unperturbed host potential.
Due to the effects of dynamical friction, at late times
the satellite lies on a nearly parabolic orbit.
At the present time ($t \sim 1.78 $ Gyr, vertical lines in the plots)
it reaches a velocity of $378$ km s$^{-1}$ at $49$ kpc from the Galactic
center, in good agreement with the new proper motion measurements.\\
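The red escape-velocity curve can be approximated with the sketch below, which assumes a pure NFW halo with the virial mass and concentration of Section 2 and an assumed virial radius of 260 kpc; adiabatic contraction and the disk and bulge contributions are neglected, so the estimate is a lower limit.
\begin{verbatim}
import numpy as np

# v_esc(r) = sqrt(2 |Phi(r)|) for an NFW halo, with
# Phi(r) = -(G Mvir / g(c)) ln(1 + r/rs) / r,  g(c) = ln(1+c) - c/(1+c).
G, M_VIR, C = 4.301e-6, 1.0e12, 11.0  # [kpc (km/s)^2/Msun], [Msun], -
R_S = 260.0 / C                       # scale radius; Rvir = 260 kpc assumed

def v_esc(r_kpc):
    g_c = np.log(1.0 + C) - C / (1.0 + C)
    phi = -(G * M_VIR / g_c) * np.log(1.0 + r_kpc / R_S) / r_kpc
    return np.sqrt(2.0 * abs(phi))

print(v_esc(49.0))   # ~350 km/s, to be compared with 378 km/s
\end{verbatim}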
In choosing the initial inclination of the LMC I made the
approximation that it does not change during the interaction due to
the effects of precession or nutation of the disk plane.
At the beginning of the simulation the disk moves almost face-on through the external medium,
with ram-pressure affecting the whole disk perpendicularly. In proximity to the perigalactic passage the velocity vector changes rapidly and the angle between the satellite's disk and the direction of motion becomes close to zero.
At the present time the simulated disk has an inclination of about 30 degrees
with respect to the orbital motion (Kallivayalil et al. 2006a) and is indeed moving nearly edge-on through the external hot gas, with ram-pressure compressing its eastern side.
\begin{figure}[h!]
\vspace*{ 1.8 cm}
\begin{center}
\hspace*{-1.0 cm}
\includegraphics[%
scale=0.8]{aitoff2.eps}
\vspace*{ 1.5 cm}
\caption{Present time distribution of stars (top) and gas (bottom)
from the LMC disk in Galactic coordinates.}
\label{aitoff}
\end{center}
\end{figure}
\begin{figure}[h!]
\vspace*{0.5 cm}
\begin{center}
\hspace*{3.5 cm}
\includegraphics[%
scale=0.6]{aitoffcircle2lr.eps}
\hspace*{3.5 cm}
\includegraphics[%
scale=0.6] {aitoffcircle3lr.eps}
\vspace*{0.5 cm}
\caption{Polar projection of the simulated stream in Galactic
coordinates. Both the pure exponential LMC model (top panel) and
the model with extended disk (bottom) are shown.
}
\label{aitoffcirc}
\end{center}
\end{figure}
Fig. \ref{aitoff} illustrates the present time distribution of stars and
gas originating from the LMC's disk. The stellar disk becomes elongated while
tidal debris starts forming after the perigalactic passage, but all stars
remain bound to the satellite.
Tidal heating does not significantly perturb the vertical structure
of the disk, which remains thin and unwarped, unlike
what was observed in M05.
A bar instability develops at the perigalacticon only in the case of $Q=1.5$.\\
Ram-pressure strips nearly $2 \times 10^{8}
M_{\odot}$ of gas from the LMC's disk forming a continuous
Stream that lies in a thin plane perpendicular to the disk of the MW and
extending up to $\sim 140 $ degrees from the LMC
(Fig. \ref{aitoffcirc}).
The location of the Stream in the Southern Galactic hemisphere is
comparable to the values of $b$ and $l$ provided by observations.
Contrary to M05 there is no LMC's gas above the
Galactic plane. In M05 the material lying in the
Northern hemisphere is stripped from the satellite during the
orbital period preceding the present one and the Stream forms a great polar circle
around the Galaxy.
The lack of gas at $b>0$ in Fig. \ref{aitoffcirc} is not due to
inefficient ram-pressure at early times, but to
the fact that, in order to reproduce the current location and velocity
of the LMC, the satellite enters the MW halo exactly at $b=0$.
The morphology of the Stream does not change significantly when the
extended gaseous disk model is adopted (bottom panel of Fig. \ref{aitoffcirc}, to be
compared with the pure exponential model in the top panel), except for the region at
the head of the Stream, which appears broader.
\section{Conclusions}
I carried out high resolution gravitational/hydrodynamical simulations of the interaction between the LMC and the MW using the orbital parameters suggested by the new HST proper motion measurements.
I find that ram-pressure stripping exerted by a tenuous MW hot halo during a single perigalactic passage forms a Stream whose extension and location in the Sky are comparable to the observed ones.
The stellar structure of the satellite is only marginally affected by tidal forces.\\
The numerical simulations were performed on the SGI Altix 3700 Bx2 at the University Observatory in Munich. This work was partly supported by the DFG Sonderforschungsbereich 375 ``Astro-Teilchenphysik''.
|
2,877,628,091,203 | arxiv | \section{Introduction and context}
Visibility is reduced in fog because the tiny droplets of water suspended in air cause random multiple scattering of light, thereby degrading the image-bearing capabilities of photons. This is detrimental to many imaging applications of optics in open air, based either on passive imaging of scenes immersed in fog, or on active detection of beacons. This latter situation is of particular relevance when series of beacons are installed along runways to guide aircraft for landing and takeoff. It becomes impossible for the pilot to observe these beacons during thick fogs, and there is no other alternative in the case of airfields or aircraft not equipped with radio-frequency instrument landing systems. Similar problems exist in maritime navigation, railway transportation, and even for motor transport on highways. Such examples illustrate the need for a simple, cheap, and compact technique aimed at improving the visibility of optical beacons in foggy weather conditions, and if possible also viewing objects that do not themselves emit light.
One class of ``fog-removal'' technique is purely computational, where image processing algorithms are used on single or multiple images to remove the effect of fog (e.g., \cite{Anwar2017} and references therein).
The other class of techniques exploits the physics of the problem and discriminates between different types of photon trajectories. Photons transiting a scattering medium are usually classified as (a) ballistic photons, that are forward scattered and retain their original direction of propagation, (b) snake photons, that are near forward scattered, and have paths that are not far from the ballistic, and (c) diffusive photons that are scattered through random angles over all directions and whose paths are scrambled such that the original direction of propagation is lost. The
various approaches that have been used either select the small amount of ballistic (and snake) photons from among the huge amount of multiply scattered light that reaches the detector or the camera, or exploit the diffusive light itself to gain imaging capabilities (see, for example, reviews \cite{Ramachandran1999,Dunsby2003}). As illustrations of the first strategy, the ballistic photons may be temporally discriminated using a pulsed source of light in conjunction with time-gated detection \cite{David2006} or time gated holography \cite{Kanaev2018}. An alternative technique also based on pulsed laser illumination exploits the different statistics of backscattered and reflected photons \cite{Satat2018}. Many recent works have aimed at imaging or focussing light through strongly scattering media, mainly for bio-medical imaging through live tissues \cite{Popoff2010, Katz2012, Bertolotti2012}. These techniques, that characterise the scattering properties of the turbid medium, have limited applicability to fog, as the scatterers in fog are constantly moving at high speed.
In the context of imaging through fog, another simpler and cheaper approach consists in using a modulated continuous-wave source of light and relies on the fact that the intensity variation of the ballistic photons retains a phase relationship with the intensity modulation of the source, while that of the multiply scattered photons does not.
This technique requires a demodulation of the detected signal at the modulation frequency. High modulation frequencies are required for true ballistic filtering \cite{Panigrahi2016}, especially when the scatterers have a large anisotropy factor with significant forward scattering, as is the case with Mie scatterers. Rayleigh scatterers, due to their almost isotropic scattering, should in principle permit discrimination of the diffusive from the ballistic low scattering order components at lower modulation frequencies \cite{Panigrahi2016}, however at the cost of a reduced number of ballistic or snake-like photons. Thus, the modulation-demodulation technique has proven to be efficient in enhancing source visibility, by discriminating against the background contributed by ambient lighting, sources modulated at different frequencies, and to varying extents, the diffusive photons. The demodulation may be performed using a bucket detector followed by lock-in electronics. This requires a step scan of the detector to build a two-dimensional image \cite{Fabien}, and is thus time-consuming. Demodulation may also be performed numerically using a Fourier transform over a time-series of images \cite{HemaOC}. Though many optimised algorithms are available for the fast Fourier transform, the technique has its drawbacks \cite{Sudarsanam2016}: it requires large memories to store the time series (1K frames or more) of images (each megapixels or more), on-camera buffer sizes are not large enough, and storage and read-out of images on the computer takes time. An alternative technique that was recently demonstrated consists in performing this demodulation instantaneously by optical means, with promising perspectives of high-frequency operation \cite{Panigrahi2020}. This, however, comes with increased complexity and cost of the optical elements. A different and simpler approach, suited for moderate frequencies, has also been recently demonstrated \cite{Sudarsanam2016}. It consists in performing quadrature lock-in discrimination (QLD) \cite{Mullen2007} computationally to obtain real-time demodulation of images. By a multiply-and-accumulate operation on each image as it is acquired, the need for storing a series of images is eliminated. By multiplying by the two quadratures of the modulation, the need for phase matching between the source and receiver is obviated. Exploiting the task and data parallelization capabilities of present-day desktop computers, this technique has been shown to lead to real-time display of $600\times 600$ pixel images with low latency and at rates faster than the eye bandwidth. However, until now, this technique has been applied only in table-top experiments where suspensions of microspheres were used as the scattering medium. The aim of the present paper is to test this technique in actual field conditions in real fog.
\section{Principle of QLD}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{Fig01.pdf}
\caption{Principle of quadrature lock-in discrimination.}
\label{Fig01}
\end{figure}
The principle of QLD is schematized in Fig.\,\ref{Fig01}. We start from a source whose intensity is modulated at angular frequency $\Omega$. After propagation in the scattering medium, a series of frames are acquired by the camera at a rate much larger than $\Omega/2\pi$ and transferred in real time to a computer. The computer multiplies two copies of each frame by two sinusoidal signals at frequency $\Omega$, which are in quadrature with each other. After time averaging and addition, this allows the retrieval of the amplitude of the modulated part of the intensity that falls on each pixel and eliminates the DC background that blurs the original raw frames. Thanks to the fact that the modulation at $\Omega$ is imprinted only on the light emitted by the modulated beacon, this technique permits the reduction of the noise in the acquired frames that comes from background light. However, the modulation frequency is too low to filter out the multiply scattered part of the modulated light \cite{Mullen2009}. Thus, this technique does not perform a true selection of ballistic photons.
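The multiply-and-accumulate operation can be summarized by the minimal Python sketch below; the function name and the frame-source interface are illustrative and do not correspond to the actual acquisition code.
\begin{verbatim}
import numpy as np

def qld(frames, f_mod, f_cam):
    """Demodulate a stream of 2-D frames (frame rate f_cam, in Hz)
    at the modulation frequency f_mod (Hz)."""
    acc_i = acc_q = 0.0
    n = 0
    for k, frame in enumerate(frames):
        phase = 2.0 * np.pi * f_mod * k / f_cam
        acc_i = acc_i + frame * np.cos(phase)  # in-phase accumulator
        acc_q = acc_q + frame * np.sin(phase)  # quadrature accumulator
        n += 1
    # Amplitude of the modulated intensity on each pixel; using both
    # quadratures makes the result independent of the source phase.
    return 2.0 * np.hypot(acc_i, acc_q) / n
\end{verbatim}
No frame needs to be stored: each one is folded into the two accumulators as soon as it is acquired, which is what makes real-time operation possible.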
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{Fig02.pdf}
\caption{(a) Image to be modulated at 13~Hz. (b) Image to be modulated at 17~Hz. (c) One of the frames of the mixed image after addition of noise. Original images are not visible at all. The retrieval of original image is done from 1105 such frames. (d) Image obtained upon demodulation at 13~Hz. (e) Image obtained upon demodulation at 17~Hz. (f) Image obtained upon demodulation at 14~Hz.}
\label{Fig02}
\end{figure}
We give a simulation of this principle. We begin with two different 8-bit images (see Figs.\,\ref{Fig02}(a,b)). We sinusoidally modulate the first image at $\Omega/2\pi=13\,\mathrm{Hz}$ and the second image at $\Omega/2\pi=17\,\mathrm{Hz}$. To obtain whole numbers of periods for both modulations, we need at least $13\times17=221$ frames. We then repeat this block five times, leading to a total of $5\times13\times17=1105$ frames for each image. An offset of 255 is added to every frame pixel in order to make all the values positive. The frames of the two series are then added. Finally, we mimic the noise by adding to every pixel of every frame a random number uniformly distributed between 0 and $5\times 255= 1275$. This represents the raw frames that a camera would record when simultaneously viewing the two differently intensity modulated sources through a randomly scattering medium. One such simulated raw frame is shown in Fig.\,\ref{Fig02}(c). Clearly, neither of the original images of Figs.\,\ref{Fig02}(a) and \ref{Fig02}(b) is visible. We then apply QLD in order to retrieve the original images from the series of raw frames. The results of processing at 13~Hz, 17~Hz and 14~Hz are shown in Figs.\,\ref{Fig02}(d-f), respectively. The original images are retrieved, with the added noise being filtered out when processed at the ``correct'' frequency. On the other hand, QLD performed at a ``wrong'' frequency reveals neither image.
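A compact version of this test, reusing the \texttt{qld} sketch above, could read as follows; the frame rate is an assumed value, chosen so that all three processing frequencies complete whole numbers of periods over the 1105 frames.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

f_cam, n_frames = 221.0, 5 * 13 * 17   # assumed rate; 1105 frames
img_a = rng.integers(0, 256, (64, 64)).astype(float)  # stands for Fig. 2(a)
img_b = rng.integers(0, 256, (64, 64)).astype(float)  # stands for Fig. 2(b)

def make_frame(k):
    t = k / f_cam
    return (img_a * np.sin(2 * np.pi * 13 * t)
            + img_b * np.sin(2 * np.pi * 17 * t)
            + 255                                    # constant offset
            + rng.uniform(0, 5 * 255, img_a.shape))  # uniform noise

rec_a = qld((make_frame(k) for k in range(n_frames)), 13.0, f_cam)  # ~img_a
rec_b = qld((make_frame(k) for k in range(n_frames)), 17.0, f_cam)  # ~img_b
rec_0 = qld((make_frame(k) for k in range(n_frames)), 14.0, f_cam)  # ~noise
\end{verbatim}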
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{Fig03revised2.pdf}
\caption{{\color{black}(a) Schematic of the experiment performed. The red curved arrows indicate the location of the camera in the building in the foreground, and the LED panel fixed on the building in the background. The distance between the two is 150~m; the orange full line indicates the clear line-of-sight between them. (b) An image of a portion of the building housing the LED panel, as captured at daybreak, in the absence of fog. The portion circled in yellow is enlarged and shown in (c); this is further enlarged and shown in (d). The processed images appearing in later figures may be compared with (d).}}
\label{Fig03}
\end{figure}
\section{Field experiments}
\subsection{Application of QLD to a modulated beacon}
To test the efficiency of QLD in real fog, we performed field experiments as schematized in Fig.\,\ref{Fig03}, over a period of two months during peak winter at Shiv Nadar University, Uttar Pradesh. A LED panel, consisting of 10 uncollimated LEDs connected in parallel on a 10\,cm $\times$ 16\,cm standard printed circuit board, emitting typically 1~Watt each in the red (around 640~nm), was used as the source of light. Several factors, apart from the ease of availability of LEDs, contributed to this choice of wavelength. Red light is conventionally used to signify danger, and most warning lights are in this colour. Further, the Rayleigh scattering of light is proportional to the inverse fourth power of wavelength, and therefore the red part of the spectrum should be preferred over the blue. Finally, most cameras have the highest sensitivity in this part of the spectrum. The LEDs were so closely spaced that they could not be individually resolved at the camera, and thus appeared as a single bright source. The current through the LEDs was modulated (peak-to-peak modulation amplitude equal to 30\,\% of the average current) so that the intensity of the emitted light could be varied sinusoidally at any frequency in the range 13--17~Hz. The detector used was a 16-bit Andor Neo sCMOS camera, controlled by Andor Solis software, with an 8--48\,mm F/1-1.2 zoom lens from Ernitec. In the conditions of our acquisitions, the actual pixel dynamic of the camera is 13.4~bits, obtained from the ratio of the pixel well depth (30000 electrons) to the RMS read noise (2.8 electrons according to the manufacturer). Series of frames of the scene at the desired frame rates were acquired and transferred to a desktop computer where they were processed for the extraction of images using the QLD technique. The distance between the source and the detector was 150 meters. We used natural features of the scene to evaluate visibilities, which, during our acquisitions, ranged between 30 and 150~m.
In Fig.\,\ref{Fig04} we describe a set of recordings made when the source was modulated at 13~Hz. Each data set was based on the acquisition of a total of 10,140 frames collected at rates of 260 or 390~Hz, and with exposure times per frame ranging from 0.5~ms to 5~ms depending on the weather condition. A raw frame recorded in one of the experiments is shown in false color in Fig.\,\ref{Fig04}(a). Although this frame was acquired at day-break, the LED panel, located at a distance of 150~m, is not visible because of the heavy fog conditions with a visibility of 40~m. This frame is re-plotted in Fig.\,\ref{Fig04}(b) with a different color scale corresponding to a scale enhancement by a factor 850. The LED panel is still almost impossible to distinguish. After QLD processing of 10,140 such raw frames, we obtain the result reproduced in Fig.\,\ref{Fig04}(c), where, in contrast to Figs.\,\ref{Fig04}(a,b), one can now clearly see the LED panel at the centre. Figure\,\ref{Fig04}(d) shows the image obtained upon averaging the 10,140 raw frames, without QLD processing. The source is not visible in this case.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.5\columnwidth]{Fig04-new.pdf}
\caption{Examples of raw and QLD-processed images acquired at day-break with a visibility of 40~m. (a) One full-scale raw frame ($\mathrm{CNR}=2.3$, 1~ms exposure time). (b) Same as (a) for a reduced color scale. (c) Corresponding processed image obtained from 10,140 raw frames acquired at 390 frames per second. The CNR is now equal to 11.0. (d) The image obtained by averaging all 10,140 raw frames. The source is not seen; the CNR is 2.5. {\color{black} The red and the white squares in the figures represent the ``object'' and the ``background'' regions defined after Eq.\,(\ref{eqCNR}).} }
\label{Fig04}
\end{figure*}
In order to gain a more quantitative picture of the improvement of the image quality obtained using QLD, we measure the Contrast-to-Noise Ratio (CNR), defined as:
\begin{equation}
\mathrm{CNR}=\frac{\langle I_{\mathrm{obj}}\rangle-\langle I_{\mathrm{back}}\rangle}{\left\langle\left(I_{jk}-\langle I_{\mathrm{back}}\rangle\right)^2\right\rangle_{\mathrm{back}}^{1/2}}\ .\label{eqCNR}
\end{equation}
In this expression, $\langle I_{\mathrm{obj}}\rangle$ is the average value of the signal over the pixels covering the modulated source. In the case of Fig.\,\ref{Fig04}, it corresponds to the $3\times3$ pixels surrounded by the red rectangle. The quantity $\langle I_{\mathrm{back}}\rangle$ is the average of the signal recorded in the background surrounding the object. In Fig.\,\ref{Fig04}, it corresponds to the 8 blocks of $3\times3$ pixels contained in the white rectangle. In the denominator, the average is taken over the pixels $j,k$ belonging to the surrounding background, so that the denominator is the square root of the variance of the background noise.
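For completeness, a direct transcription of Eq.\,(\ref{eqCNR}) into Python might look as follows; the pixel coordinates are placeholders rather than those of Fig.\,\ref{Fig04}.
\begin{verbatim}
import numpy as np

def cnr(image, obj_slice, back_slices):
    """Mean object signal minus mean background, divided by the
    RMS fluctuation of the background, as in the CNR definition above."""
    i_obj = image[obj_slice].mean()
    back = np.concatenate([image[s].ravel() for s in back_slices])
    return (i_obj - back.mean()) / back.std()

# Example: a 3x3 object block and two of the surrounding 3x3 blocks
# (illustrative coordinates):
# cnr(img, np.s_[100:103, 200:203],
#     [np.s_[97:100, 200:203], np.s_[103:106, 200:203]])
\end{verbatim}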
\begin{figure}[]
\centering
\includegraphics[width=1.0\columnwidth]{Fig05-new.pdf}
\caption{Evolution of the CNR as a function of the number of processed modulation cycles. $\mathrm{Modulation\ frequency}=13\,\mathrm{Hz}$; 390 frames per second. Full circles: 40~m visibility; 1~ms exposure time. Full squares: 40~m visibility; 1~ms exposure time. Full diamonds: 50~m visibility; 2~ms exposure time. Open circles: 60~m visibility; 1~ms exposure time. Open squares: 60~m visibility; 0.5~ms exposure time.}
\label{Fig05}
\end{figure}
For the data of Fig.\,\ref{Fig04} (visibility 40~m) QLD applied to images acquired at a distance of 150~m from the source increases the CNR from 2.3 to 11. In Fig.\,\ref{Fig05}, we plot the evolution of the CNR as a function of the number of cycles over which QLD averaging is performed. The results in this figure, obtained in five sets of experiments performed at day-break for visibilities ranging from 40 to 60~m,
show that the CNR can be significantly increased using the technique of QLD. In four of them the CNR reaches values larger than 8, typically with the number of periods needed to optimize the CNR being of the order of 100. While the general trend for all data sets is the same, variations exist. For example, the curves shown in full circles and in full squares, both of which are for a visibility of 40~m, are different. This may be attributed to the somewhat different nature of the fog on the two days the data were taken, or to possible variations of the ambient illumination. It is well known that fog can have droplets with sizes ranging from sub-micrometer to several micrometers \cite{NASA}, and thus the scattering can vary from being isotropic to significantly anisotropic (forward scattering), leading to a difference in the efficiency of the QLD technique. This effect is more pronounced in the curve with open circles in Fig.\,\ref{Fig05}, which is quite different from the other four: the CNR is initially close to zero, and increases very slowly as a function of the number of modulation cycles over which QLD is performed, reaching a modest value of 1.9 even when all the available data are processed. Though this set of data was not obtained for a visibility significantly lower than the other sets of data, the nature of the fog varied during observation. The wind was relatively strong then, and fog trails could be seen passing across the scene, indicating that the density and diameter of the water droplets were changing with time during the acquisition, thus leading to the different results for seemingly identical conditions. It is also quite possible that these variations in the fog during acquisition explain why, in the case of the data represented as full diamonds, the CNR slowly decreases when the number of cycles is increased above 100.
\begin{figure*}[]
\centering
\includegraphics[width=1.5\columnwidth]{Fig06-new.pdf}
\caption{Examples of raw and QLD-processed images acquired at day-break with a visibility of 30~m. The red rectangle shows the position of the piece of cardboard that is illuminated by the light of the modulated LED panel. (a) One full-scale raw frame (5~ms exposure time). (b) Corresponding processed image obtained from 10,400 raw frames acquired at 160 frames per second. The modulation frequency is 16~Hz.}
\label{Fig06}
\end{figure*}
\subsection{Application of QLD to an illuminated object}
We have, so far, focused on imaging light beacons through fog using the technique of QLD. We now investigate whether this method can be used under foggy conditions to enhance the visibility of an object that is illuminated by the modulated source of light, rather than the source of light itself. The data presented in Fig.\,\ref{Fig06} were obtained at day-break by using the modulated LED panel to illuminate a piece of cardboard located on the right of the picture. The distance between the LED panel and the illuminated cardboard was about 20\,cm. The distance between this object and the camera was 75~m while the visibility through fog was estimated at about 30~m. In the raw frame of Fig.\,\ref{Fig06}(a) the object cannot be distinguished; the associated CNR, equal to 0.3, is indeed quite low. However, after QLD processing of 10,400 such frames acquired at a rate of 160 images per second, the image of Fig.\,\ref{Fig06}(b) was obtained, where the object illuminated by the LED panel can be clearly distinguished from the noisy background, with a CNR of 1.9. Notice that in the present experiment the QLD technique works at low modulation frequencies because the modulated light source is close to the illuminated object. Making it work with the modulated light source close to the camera and thus far from the object would probably require much higher frequencies or even pulsed illumination.
\subsection{QLD imaging in daylight condition}
The preceding results of Figs.\,\ref{Fig04}, \ref{Fig05}, and \ref{Fig06} were obtained at day-break, when the LED panel was much brighter than the ambient light. We now show that QLD can also be useful in distinguishing a modulated beacon or an object illuminated by a modulated source, from surrounding sources of light, especially during day time when many objects can reflect the sunlight and blind the observer. Figure \ref{Fig07} illustrates this capability, using a setup similar to that of Fig.\,\ref{Fig06}, but obtained during day time fog. Here, only a cardboard piece is illuminated by the modulated light provided by the LED panel. Close to it, but shielded from the modulated source, is a polystyrene block that strongly scatters sunlight towards the camera. Both objects, the cardboard and the polystyrene block, are 150~m from the camera, and are viewed through fog during daylight. In Fig.\,\ref{Fig07}(a), the parasitic sunlight reflection is clearly visible, while the illuminated cardboard is impossible to distinguish, as confirmed by a CNR measured to be equal to $-0.3$ for this image. This image also exhibits vague shapes in the foreground due to intervening trees. After QLD processing (see Fig.\,\ref{Fig07}(b)), all these spurious shining objects disappear, and we are left with a clearly visible image of the piece of cardboard illuminated by the LED panel, with a CNR which is now equal to 2.4.
\begin{figure*}[]
\centering
\vspace{-0.2cm}
\includegraphics[width=1.5\columnwidth]{Fig07.pdf}
\caption{Examples of raw and QLD-processed images acquired during day time. The red rectangle shows the position of the piece of cardboard that is illuminated by the modulated LED panel. (a) One full-scale raw frame (5~ms exposure time), showing reflection of sunlight from a polystyrene object located close to the LED panel (b) Corresponding processed image obtained from 10,400 raw frames acquired at 160 frames per second. The modulation frequency is 16~Hz.}
\label{Fig07}
\end{figure*}
\section{Conclusion}
In conclusion, we have shown that computational QLD processing of images obtained using a modulated LED source is a powerful tool, compatible with real-time processing, which could be very useful for many applications. The fact that such imaging can be performed by illuminating with simple LEDs and processing on an ordinary computer shows that it can potentially be implemented at low cost, which further paves the way to a broad range of applications. In particular, this technique has been proven to be efficient in improving the visibility of beacons under heavy fog conditions, particularly at night, a situation that is commonly encountered during plane landing and takeoff. Moreover, we have shown that it is also capable of imaging a reflecting or diffusive object which is illuminated by the modulated source of light. Thus, in the context of aircraft navigation, unlike modern instrument landing systems that merely guide an aircraft using radio waves, the QLD technique can provide the pilot with a visual image of the scene that lies ahead, and in particular a realistic representation of the runway beacons. In motor, rail or maritime navigation, apart from showing the path by means of beacons, the technique may be used to reveal obstacles in the path that are otherwise hidden by fog. The technique is particularly interesting if one wants to be able to steer the direction of emission of the modulated light, as in the case of a lighthouse, whose range could thus be extended in heavy fog conditions. Finally, we have shown that source modulation and QLD also prove to be interesting in the presence of daylight, because they permit one to distinguish the beacon or object of interest from any surrounding source of light that could dazzle the observer.
Finally, it is worth mentioning that an important perspective of the present work, in the context of the above-mentioned applications, is to assess its validity in the case of moving targets.
\medskip
\noindent\textbf{Funding.} Indo-French Center for the Promotion of Advanced Research (CEFIPRA/IFCPAR 4406). Department of Science and Technology (DST), India. International Emerging Action project "HINDI-BIO" (CNRS), France.
\section*{Acknowledgments}
The authors are happy to thank Prof. Rupamanjari Ghosh for having made the experiments possible at Shiv Nadar University. FB acknowledges the hospitality of Raman Research Institute.
|
2,877,628,091,204 | arxiv | \section{Introduction}
There are several models for describing CP violation by scalar
dynamics. Spontaneous CP violation~\cite{W,B} is one of the
interesting schemes, especially where CP is broken simultaneously with
SU(2)$_L\times$U(1)$_Y$. Models of this kind have been widely
studied (see {\it e.g.} Refs.~\cite{S,BS,CGN}).
There are other models in which heavy quarks and scalars are
introduced and CP violation originates in the heavy scalar sector.
The CP violation is transported to the ordinary quark sector through
the Yukawa interactions among heavy quark, ordinary quark and heavy
scalar. At the same time, an attempt is made to resolve the Strong CP
problem.
These models may be divided into two classes by the existence of the
tree level flavor changing neutral current (FCNC).
There are two typical models without tree level FCNC.
In one class of models only right-handed quarks have Yukawa
interactions with heavy quarks and scalars~\cite{BCK}, while in
another class only left-handed quarks have the Yukawa
interaction~\cite{GG}. In both models CP is violated in the heavy
mass terms softly or spontaneously.
A typical model with the tree level FCNC is the Aspon
model~\cite{Aspon}.
This model is widely studied (see, {\it e.g.} Refs.~\cite{FN,AFKL}).
In this model one vector-like SU(2)$_L$ doublet of quarks is
introduced.
Those quarks have the same charges as the up- and down-type ordinary
quarks.
which break CP spontaneously.
Another model with tree level FCNC is given in \cite{BBP},
where, unlike the models considered here, no additional U(1) symmetry
occurs.
In this paper we study constraints and predictions of the above two
models of soft CP breaking comparing with those for the Aspon model.
These considerations are timely because experiments are underway
to measure both Re$(\epsilon'/\epsilon)$ and the CP asymmetries
in $B^0$ decays. In fact, there are two experiments each to measure
both effects. The former is being measured by the NA-48 experiment
at CERN, and by the E799/E832 (KTEV) experiment at FermiLab. The
latter is being studied by the BaBar detector of the PEP-II experiment
at SLAC and by the BELLE experiment at the KEK B-Factory.
The layout of the paper is as follows: In Section(\ref{Models})
we describe in detail the different models we shall analyze.
In Section(\ref{epsilon}) the constraints arising from
$\epsilon_K$ are derived. The predictions for
$\epsilon^{\prime}/\epsilon$ are given in Section(\ref{prime}).
In Section(\ref{B}) the constraint of $B-\bar{B}$ mixing
is discussed. Section(\ref{decays}) covers the CP asymmetry
predictions for neutral $B$ decay. In Section(\ref{theta}) the
lower limits on $\bar{\Theta}$ are calculated, together with the
corresponding lower limits on the neutron electric dipole moment.
Finally in Section(\ref{summary}) the different predictions
are summarized.
\section{Models} \label{Models}
Here we shall list four different models for CP violation
which exemplify all of the ideas we are considering.
At the end of the paper we shall summarize the similarities and
differences
of the experimental predictions. Thus the hurried reader could read
just this section and that summary to sample the main points: the
intervening sections provide technical details.
\subsection{Standard Model}
The first model is just the standard model (SM)
with the KM mechanism~\cite{KM}
of explicit CP violation. Principally, we are interested in
models which also solve strong CP (as all the other three will). The
standard model requires an additional mechanism ({\it e.g.} the
Peccei-Quinn mechanism~\cite{PQ} or a massless up quark (see, {\it e.g.}
Ref.~\cite{P})) to accomplish this. Nevertheless, it
forms an essential comparison for all the other cases.
\subsection{Two Models (Types L and R) of Soft CP Breaking}
The class of models we consider for soft
CP violation is constructed by adding two SU(2)$_L$ singlet
scalars $\chi_I$ ($I=1$, $2$) with hyper-charge $Y_\chi$
and one non-chiral quark $Q$ to the standard model (SM).
These $Q$ and $\chi_I$ carry opposite charges under
an extra U(1)$_{\rm new}$ symmetry.
The hypercharge of $Q$ is determined in such a way that the Yukawa
interactions among $\chi_I$, $Q$ and the ordinary quark $q$
are allowed: $Y_Q = Y_\chi + Y_q$.
The Yukawa interactions in the models can be written as
\begin{equation}
{\cal L}_Y = - \sum_{I=1}^2 \sum_{i=1}^3 h^I_i
\left[
\overline{Q} q_{i} \chi_I + \overline{q}_{i} Q \chi^\ast_I
\right]
\ ,
\end{equation}
where $h^I_i$ is a real Yukawa coupling.
CP is softly broken by the mass term of
$\chi_I$.
The models in this category are divided into two types by
the chirality of the ordinary quark $q$ which couples to $Q$ and
$\chi_I$:
in the first type (Type R), $q$ is a right-handed down-type quark,
$Y_q = -1/3$~\cite{BCK};
in the second type (Type L), $q$ is a left-handed SU(2)$_L$ doublet
quark, $Y_q = 1/6$~\cite{GG}.
Let us explain details of these models for soft CP violation.
The scalar potential for $\chi_I$ is given by
\begin{equation}
{\cal L}_{\chi} =
\sum_{I,J,K,L=1}^2 \overline{\lambda}_{IJKL}
\chi_I^\ast \chi_J \chi_K^\ast
\chi_L + \sum_{I,J=1}^2 \overline{M}_{IJ} \chi_I^\ast \chi_J
\ ,
\label{L chi}
\end{equation}
where $\overline{\lambda}_{IJKL}$ and
$\overline{M}_{II} = \overline{M}_{II}^{*}$ are real quantities
and $\overline{M}_{12}=\overline{M}^\ast_{21}$ is a
complex quantity.
The interaction between $\chi_I$ and the ordinary
SU(2)$_L$ doublet Higgs scalar $H$ is given by
\begin{equation}
{\cal L}_{\chi H} =
\left( H^{\dag} H -\frac{ v^2}{2} \right) \,
\sum_{I,J=1}^2 \lambda_{IJ}
\chi^{\ast}_I \chi_J
\ ,
\label{L chi H}
\end{equation}
where $\lambda_{IJ}$ is real.
The mass eigenstate $\chi'_I$ is given by a unitary rotation:
\begin{equation}
\chi'_I = \sum_{J=1}^2 U_{IJ} \chi_J \ ,
\label{rotation}
\end{equation}
where $U$ is a suitable unitary matrix.
After rotating $\chi$ to the mass eigenstate $\chi'$ as in
Eq.~(\ref{rotation}), the Yukawa interactions become
\begin{equation}
{\cal L}_Y = - \sum_{I=1}^2 \sum_{i=1}^3
\left[
f_{Ii} \overline{Q} q_i \chi'_I
+ f_{Ii}^\ast \overline{q}_i Q \chi^{\prime\ast}_I
\right]
\ ,
\end{equation}
where $f_{Ii} = \sum_{J=1}^2 U^\ast_{IJ} h^J_i$ is a complex Yukawa
coupling.
An important combination for CP measurement is
$X^I_{ij} \equiv f^\ast_{Ii}f_{Ij}$.
The fact that the original Yukawa coupling $h^I_i$ is real leads to
\begin{equation}
\mbox{Im} \left( X^{I=2}_{ij} \right)
= - \mbox{Im} \left( X^{I=1}_{ij} \right)
\ .
\label{cond:Imf}
\end{equation}
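This relation follows from the unitarity of $U$: $\sum_I X^I_{ij} = \sum_J h^J_i h^J_j$ is real, so the imaginary parts of $X^{I=1}_{ij}$ and $X^{I=2}_{ij}$ must cancel. A short numerical check with a random real coupling matrix and a random unitary rotation, sketched below in Python, confirms it.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

h = rng.normal(size=(2, 3))            # real Yukawa couplings h^J_i
theta, phi = rng.uniform(0, 2 * np.pi, 2)
u = np.array([[np.cos(theta), np.sin(theta) * np.exp(1j * phi)],
              [-np.sin(theta) * np.exp(-1j * phi), np.cos(theta)]])

f = u.conj() @ h                           # f_{Ii} = sum_J U*_{IJ} h^J_i
x = np.einsum('Ii,Ij->Iij', f.conj(), f)   # X^I_{ij} = f*_{Ii} f_{Ij}
print(np.allclose(x[0].imag, -x[1].imag))  # True
\end{verbatim}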
\subsection{Aspon Model}
Here the complex scalar $\chi_I$ has a Vacuum Expectation Value (VEV)
which spontaneously breaks CP (Aspon model~\cite{Aspon}).
In the Aspon model the U(1)$_{\rm new}$ is gauged,
and the gauge boson acquires its mass from the VEV of $\chi_I$.
$Q$ and $q$ have the same charges ($Y_\chi=0$, $Y_Q=Y_q$),
and they have
complex mass mixing. Accordingly, there exist tree-level flavor
changing neutral currents (FCNC) mediated by the Aspon gauge boson and
$\chi_I$.
Let us briefly review the relevant part of the
Aspon model.
In the Aspon model $q$ can be the left-handed doublet
quarks, or the right-handed down-type quarks, in the simplest versions.
In the present analysis we fix $q$ to be the left-handed
doublet quarks for definiteness.\footnote{%
In the concluding section VIII we mention the difference
in predictions for an R-type Aspon model.}
All the couplings in the scalar potentials in
Eqs.~(\ref{L chi}) and (\ref{L chi H}) are real, and CP
is spontaneously broken by the VEV of $\chi_I$:
\begin{equation}
\left\langle \chi_1 \right\rangle = \frac{1}{\sqrt{2}}
\kappa_1 e^{i\theta} \ , \quad
\left\langle \chi_2 \right\rangle = \frac{1}{\sqrt{2}}
\kappa_2 \ .
\end{equation}
As a result the light quarks $q$ mix with the non-chiral
heavy quark $Q$. The mass matrix, in the weak basis where
$3\times3$ submatrix for down sector is diagonal, is given
by~\cite{FN}
\begin{equation}
{\cal M}_d =
\left( \begin{array}{cccc}
m_d & 0 & 0 & F_1 \\
0 & m_s & 0 & F_2 \\
0 & 0 & m_b & F_3 \\
0 & 0 & 0 & M_Q
\end{array} \right) \ ,
\end{equation}
where
\begin{equation}
F_i = h^1_i \left\langle \chi_1 \right\rangle
+ h^2_i \left\langle \chi_2 \right\rangle \ .
\end{equation}
This mass matrix is diagonalized by a biunitary
transformation, $K_L^{\dag} {\cal M}_d K_R$.
The approximate form of the transformation matrices are given
by~\cite{FN}
\begin{eqnarray}
K_L &=&
\left( \begin{array}{cccc}
1 - \frac{1}{2} \left\vert x_1 \right\vert^2 &
x_1 x_2^\ast \frac{m_s^2}{m_d^2-m_s^2} &
x_1 x_3^\ast \frac{m_b^2}{m_d^2-m_b^2} &
x_1
\\
x_2 x_1^\ast \frac{m_d^2}{m_s^2-m_d^2} &
1 - \frac{1}{2} \left\vert x_2 \right\vert^2 &
x_2 x_3^\ast \frac{m_b^2}{m_s^2-m_b^2} &
x_2
\\
x_3 x_1^\ast \frac{m_d^2}{m_b^2-m_d^2} &
x_3 x_2^\ast \frac{m_s^2}{m_b^2-m_s^2} &
1 - \frac{1}{2} \left\vert x_3 \right\vert^2 &
x_3
\\
- x_1^\ast & - x_2^\ast & - x_3^\ast &
1 - \frac{1}{2} \sum_{j=1}^3 \left\vert x_j \right\vert^2
\end{array}
\right)
\ ,
\nonumber\\
K_R &=&
\left( \begin{array}{cccc}
1 &
x_1 x_2^\ast \frac{m_d m_s}{m_d^2-m_s^2} &
x_1 x_3^\ast \frac{m_d m_b}{m_d^2-m_b^2} &
\frac{m_d}{M_Q} x_1
\\
x_2 x_1^\ast \frac{m_s m_d}{m_s^2-m_d^2} &
1 &
x_2 x_3^\ast \frac{m_s m_b}{m_s^2-m_b^2} &
\frac{m_s}{M_Q} x_2
\\
x_3 x_1^\ast \frac{m_b m_d}{m_b^2-m_d^2} &
x_3 x_2^\ast \frac{m_b m_s}{m_b^2-m_s^2} &
1 &
\frac{m_b}{M_Q} x_3
\\
- \frac{m_d}{M_Q} x_1^\ast &
- \frac{m_s}{M_Q} x_2^\ast &
- \frac{m_b}{M_Q} x_3^\ast &
1
\end{array}
\right)
\ ,
\end{eqnarray}
where
\begin{equation}
x_i \equiv F_i/M_Q \ .
\label{def x}
\end{equation}
In the weak basis the Aspon gauge boson does not
couple to light quarks. However, due to the mixing
with the heavy quark $Q$, the light-quark mass
eigenstates couple to the Aspon gauge boson.
This induces FCNC's:
\begin{equation}
{\cal L}_A^{\rm FCNC} (\mbox{down}) = - g_A
\alpha_{ij} \bar{d}^{\prime i}_L \gamma_\mu d^{\prime j}_L
A^\mu \ ,
\end{equation}
where
\begin{eqnarray}
&& \alpha_{ij} \simeq x_i x_j^\ast \ ,
\qquad \mbox{($i,j= 1,2,3$)} \ ,
\nonumber\\
&& \alpha_{4i} = \alpha_{i4}^\ast \simeq - x_i^\ast \ ,
\qquad \mbox{($i= 1,2,3$)} \ ,
\nonumber\\
&& \alpha_{44} \simeq 1 - \sum_{i=1}^{3}
\left\vert x_i \right\vert^2
\ ,
\end{eqnarray}
with $A^\mu$ being the Aspon gauge boson and $g_A$
the gauge coupling.
In addition to the above FCNC's in the left-handed sector
there exist FCNC's in the
right-handed sector.
However, the coupling is suppressed by the mass ratio
$m_i/M_Q$, where $m_i=(m_d,\,m_s,\,m_b)$.
Similarly, flavor changing couplings to $\chi_I$ are suppressed
by $m_i/M_Q$.
So we will neglect these couplings below.
\section{Constraint from $\epsilon_K$}
\label{epsilon}
In the SM, $\epsilon_K$ arises from the $W^+W^-$ exchange
box diagram, and is proportional to a combination of CKM
angles and to $\sin\delta$ where $\delta$ is the KM phase, and
therefore gives a constraint between these SM parameters.
Now we study the other models defined in Section(\ref{Models}).
The parameter $\epsilon_K$ is given by
\begin{equation}
\epsilon_K =
\frac{e^{i\pi/4}}{2\sqrt{2}}
\left[ \frac{\mbox{Im}M_{12}}{\mbox{Re}M_{12}} + 2
\frac{\mbox{Im}A_0}{\mbox{Re}A_0}
\right]
\ .
\end{equation}
The second term is related to $\epsilon'/\epsilon$, and much smaller
than the first term as we shall see below.
\begin{figure}[thbp]
\begin{center}
\ \epsfbox{fig1.eps}
\end{center}
\caption[]{Box diagram contributions to $K^0$--$\bar{K}^0$ mixing in
the models of soft CP breaking.
}\label{fig:box}
\end{figure}
\begin{figure}[thbp]
\begin{center}
\ \epsfbox{fig2.eps}
\end{center}
\caption[]{Tree level Aspon gauge boson exchange contributions to
$K^0$--$\bar{K}^0$ mixing in the Aspon model.
}\label{fig:aspon}
\end{figure}
The dominant contribution to $\mbox{Im}M_{12}$ is given by the
scalar-heavy quark exchange box diagram shown in Fig.~\ref{fig:box}
for the models of soft CP
breaking, and by the Aspon gauge boson exchange tree diagram
shown in Fig.~\ref{fig:aspon} for the
Aspon model.
The effective $\Delta S = 2$ Hamiltonian derived from the contribution,
for Type R soft breaking, is given by
\begin{equation}
{\cal H}_{\Delta S =2}^{\rm (new)}
= \frac{1}{v^2} C_{sd}^{(R)}
\left( \bar{s}_R \gamma^\mu d_R \right)
\left( \bar{s}_R \gamma_\mu d_R \right) \ ,
\label{new H}
\end{equation}
where
\begin{equation}
C_{sd}^{(R)} = \frac{1}{6(4\pi)^2}
\frac{v^2}{M_Q^2} \sum_{I,J=1}^{2} X^I_{sd} X^J_{sd}
\, F\left( r_I , r_J \right) \ , \label{Csd}
\end{equation}
with $r_I = M_I^2/M_Q^2$.
The function $F(r_I,r_J)$ is defined by
\begin{equation}
F\left( r_I, r_J \right) =
\frac{3}{ (1-r_I) (1-r_J)} -
\frac{3r_J^2}{(1-r_J)^2 (r_I-r_J)} \ln r_J +
\frac{3r_I^2}{(1-r_I)^2 (r_I-r_J)} \ln r_I
\ ,
\end{equation}
where the normalization of $F(r_I,r_J)$ is taken as
$F(1,1)=1$.
For Type L soft breaking, and for the Aspon model, the effective
coupling is the same as Eq.~(\ref{new H}) with the helicities switched
from R to L, and with the coefficient $C_{sd}^{(R)}$
replaced by $C_{sd}^{(L)}$, and
$C_{sd}^{(A)}$, respectively. The formula for $C_{sd}^{(L)}$
is exactly as for $C_{sd}^{(R)}$ in Eq.(\ref{Csd}).
We will give the formula for $C_{sd}^{(A)}$ later.
As in the SM, $\mbox{Re}M_{12}$ is dominated by the contribution
from the $W$-charm exchange box diagram.
This is given by
\begin{equation}
{\cal H}_{\Delta S =2}^{\rm (KM)}
= \frac{1}{v^2} C_{sd}^{\rm (KM)}
\left( \bar{s}_L \gamma^\mu d_L \right)
\left( \bar{s}_L \gamma_\mu d_L \right) \ ,
\label{KM H}
\end{equation}
where
\begin{equation}
C_{sd}^{\rm (KM)} = \frac{1}{8\pi^2}
\frac{M_W^2}{v^2}
\left( V_{cs}^\ast\, V_{cd} \right)^2
S\left( \frac{m_c^2}{M_W^2} \right) \ .
\end{equation}
The function $S(x)$ is the so-called Inami-Lim function~\cite{IL} and
$S(m_c^2/M_W^2) \simeq 3.48 \times 10^{-4}$.
$V_{cs}$ and $V_{cd}$ are corresponding elements of the quark mixing
matrix, $\left\vert V_{cs}^\ast V_{cd} \right\vert \simeq 0.22 $.
Note that the mixing matrix for the models of soft CP
breaking is real and orthogonal, and the $3 \times 3$ submatrix
of it in the Aspon model is also real and orthogonal to a
very good approximation\cite{FG}.
Since QCD respects parity invariance, it may be enough to assume that
the two operators in Eqs.~(\ref{new H}) and (\ref{KM H}) give the same
hadron matrix elements.
Then $\left\vert \epsilon_K \right\vert$ can be expressed as
\begin{equation}
\left\vert \epsilon_K \right\vert \simeq
\frac{1}{2\sqrt{2}}
\left\vert \frac{\mbox{Im}C_{sd}^{(R,L,A)}}{C_{sd}^{\rm (KM)}}
\right\vert \ .
\end{equation}
The experimental value $\left\vert \epsilon_K \right\vert = 2.26
\times 10^{-3}$ gives a constraint to $\left\vert \mbox{Im}C_{sd}
\right\vert$:
\begin{equation}
\left\vert \mbox{Im}C_{sd}^{(R,L,A)}\right\vert \simeq
1.4 \times 10^{-10} \ . \label{eps}
\end{equation}
This smallness of $\left\vert \mbox{Im}C_{sd}^{(R,L,A)} \right\vert$
is easily understood as a consequence of the small Yukawa couplings $f_{Ii}$.
To estimate the size of the Yukawa couplings, we can assume that their
real and imaginary parts are comparable, equate $M_Q$ and $M_I$
and arrive, from Eq.(\ref{Csd}), at:
\begin{equation}
\frac{v}{M_Q} \bar{X}_{sd}^{(R,L)} \simeq 3 \times 10^{-4} \ ,
\label{X}
\end{equation}
where we have defined $(\bar{X}_{sd})^{2} = \frac{1}{2}
|\mbox{Im}X_{sd}^{I=1}
\sum_{I=1}^{2}
\mbox{Re}X_{sd}^{I}|$ using the fact that $\mbox{Im}X_{sd}^{I=2} =
- \mbox{Im}X_{sd}^{I=1}$.
Of course, the corresponding Yukawa couplings involving the third
family, {\it e.g.} $\bar{X}_{bd}$, $\bar{X}_{bs}$ are not constrained
by $\epsilon_K$.
It seems natural to say that $M_Q$ is bigger than the weak scale, and
then Eq.~(\ref{X}) gives the lower bound
$\bar{X}_{sd}^{(R,L)} \gtrsim 3 \times 10^{-4}$.
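The numerical chain behind Eqs.~(\ref{eps}) and (\ref{X}) can be checked with the short sketch below; the electroweak inputs are assumed standard values, and the last step inverts Eq.~(\ref{Csd}) only up to the $O(1)$ factors coming from $F(r_I,r_J)$.
\begin{verbatim}
import numpy as np

M_W, V_EV = 80.4, 246.0          # GeV (assumed standard values)
VCS_VCD, S_C = 0.22, 3.48e-4     # |V_cs* V_cd| and S(m_c^2/M_W^2)
EPS_K = 2.26e-3

c_km = (M_W**2 / V_EV**2) * VCS_VCD**2 * S_C / (8 * np.pi**2)
im_c = 2 * np.sqrt(2) * EPS_K * c_km
print(c_km, im_c)                # ~2.3e-8 and ~1.4e-10

# Im C_sd ~ 2 * Xbar^2 * (v/M_Q)^2 / (6 (4 pi)^2), up to O(1) factors:
print(np.sqrt(im_c * 6 * (4 * np.pi)**2 / 2))   # (v/M_Q)*Xbar ~ 3e-4
\end{verbatim}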
In the Aspon model
\begin{equation}
C_{sd}^{(A)} =
2 \left( \frac{v}{\kappa} \right)^{2} (x_1^*x_2)^2 \label{CA}
\end{equation}
where $\kappa$ is the scale of U(1)$_{\rm new}$ breaking. The
combination of Eq.~(\ref{eps}) and Eq.~(\ref{CA}), as is
well-known~\cite{FN,FH}, gives a
constraint on $\kappa$, using information from $\bar{\Theta}$
(see Section(\ref{theta})).
The parameter $x_3$ is not constrained by $\epsilon_K$.
\section{Predictions for Real Part of $(\epsilon^\prime/\epsilon)$}
\label{prime}
In the standard model, an enormous effort has gone into
calculating direct CP violation, characterized by the quantity
$\mbox{Re}\left( \epsilon' / \epsilon \right)$
(see, {\it e.g.} Refs.~\cite{SM,BJL,BBH,BBL}).
There remains some uncertainties
in the prediction due to the quark masses, especially $m_s$,
the QCD scale $\Lambda_{QCD}$, and certain hadronic
matrix elements. One quoted range is\cite{BJL}:
\begin{equation}
\mbox{Re}\left( \frac{\epsilon^{'}}{\epsilon} \right)
= (3.6 \pm 3.4) \times 10^{-4}
\label{ep}
\end{equation}
In particular, a vanishing value would result from an accidental cancellation
(rather than from a symmetry).
The parameter $\epsilon'$ is given by
\begin{equation}
\epsilon' = -
\frac{ e^{i\left(\pi/2 + \delta_2 - \delta_0\right)} }{\sqrt{2}}
\frac{\mbox{Re}A_2}{\mbox{Re}A_0}
\left[ \frac{\mbox{Im}A_0}{\mbox{Re}A_0} -
\frac{\mbox{Im}A_2}{\mbox{Re}A_2} \right]
\ ,
\end{equation}
where $A_I$ are the isospin amplitudes in $K\rightarrow\pi\pi$ decays
and $\delta_I$ are the corresponding final state interaction phases.
\begin{figure}[bthp]
\begin{center}
\ \epsfbox{fig3.eps}
\end{center}
\caption[]{
Gluon penguin diagram contribution to the imaginary part of the
$K\rightarrow\pi\pi$ decay for the models of soft CP breaking.
}\label{penguin}
\end{figure}
To estimate the contributions to the imaginary part of the
$K\rightarrow\pi\pi$ decay for the models of soft CP breaking,
let us consider the gluon penguin diagram shown in Fig.~\ref{penguin}.
For Type R soft CP breaking model
the chiralities of the $s$ and $d$ quarks in the external lines differ
from those for the $W$-exchange contribution. Then it is
convenient to define the following operators:
\begin{eqnarray}
Q'_3 &=& 4 \left( \bar{s}_R \gamma_\mu d_R \right)
\sum_{q=u,d,s} \left( \bar{q}_R \gamma^\mu q_R \right) \ ,
\nonumber\\
Q'_4 &=& 4 \sum_{q=u,d,s} \left( \bar{s}_R \gamma_\mu q_R \right)
\left( \bar{q}_R \gamma^\mu d_R \right) \ ,
\nonumber\\
Q'_5 &=& 4 \left( \bar{s}_R \gamma_\mu d_R \right)
\sum_{q=u,d,s} \left( \bar{q}_L \gamma^\mu q_L \right) \ ,
\nonumber\\
Q'_6 &=& - 8 \sum_{q=u,d,s} \left( \bar{s}_R q_L \right)
\left( \bar{q}_L d_R \right) \ .
\end{eqnarray}
By using these operators,
the $\Delta S = 1$ effective Hamiltonian
for Type R soft CP breaking model
is given by
\begin{equation}
{\cal H}_{\Delta S=1}^{\rm(new)} =
\frac{1}{v^2} \overline{C}_{sd}^{(R)}
\sum_{i=3}^6 \nu_i(M_{\rm new}) Q'_i(M_{\rm new})
\ ,
\label{S1}
\end{equation}
where $M_{\rm new}$ is a scale around the masses of the new particles, and
\begin{equation}
-\frac{1}{3}\nu_3(M_{\rm new}) =
\nu_4(M_{\rm new}) =
-\frac{1}{3}\nu_5(M_{\rm new}) =
\nu_6(M_{\rm new}) =
\frac{\alpha_s(M_{\rm new})}{256\pi}
\ .
\end{equation}
Here $\overline{C}_{sd}^{(R)}$ is expressed as
\begin{equation}
\overline{C}_{sd}^{(R)} = \frac{v^2}{M_Q^2}
\sum_{I=1}^2 X^I_{sd} \,
\tilde{F} \left( \frac{M_I^2}{M_Q^2} \right)
\ ,
\end{equation}
where
\begin{equation}
\tilde{F} \left( r_I \right) =
\frac{4}{3(1-r_I)}
\left[
\frac{7-29r_I+16r_I^2}{6(1-r_I)^2}
- \frac{r_I(3-2r_I)}{(1-r_I)^3} \ln r_I
\right]
\ .
\end{equation}
The effective Hamiltonian for the Type L soft CP breaking is obtained
by switching the helicities R to L and L to R in the above
expressions, and $\overline{C}_{sd}^{(L)} = \overline{C}_{sd}^{(R)}$.
To obtain the amplitudes for $K\rightarrow \pi\pi$ we need to study
the renormalization group evolution of the coefficients.
This is done by using the method described in, {\it e.g.}
Refs.~\cite{BBH,BBL}.
The resultant coefficients are
$\left(\nu_3(m_c),\nu_4(m_c),\nu_5(m_c),\nu_6(m_c)\right) =
\left( -1.2 , 1.5 , 0.8 , 4.7 \right)\times 10^{-4}$,
where we have taken $M_{\rm new}=M_W$ for simplicity.
As is well known, the gluon penguin diagram contributes only to the
isospin-zero channel.
We use the values in Ref.~\cite{BBL} for the hadron matrix elements:
$\left( \langle Q'_3(m_c) \rangle_0 , \langle Q'_4(m_c) \rangle_0 ,
\langle Q'_5(m_c) \rangle_0 , \langle Q'_6(m_c) \rangle_0 \right) =
\left( -0.01 , -0.19 , 0.09 , 0.28 \right)$\,$(\mbox{GeV})^3$.
By using the experimental values
$\mbox{Re}A_0=3.33\times10^{-7}$\,GeV and
$\mbox{Re}A_2=1.50\times10^{-8}$\,GeV with
$\left\vert \epsilon_K \right\vert = 2.26 \times 10^{-3}$,
$\mbox{Re} \left(\epsilon'/\epsilon \right)$
from the gluon-penguin diagram is given by
\begin{equation}
\left\vert \mbox{Re} \left( \frac{\epsilon'}{\epsilon}
\right) \right\vert
\simeq
\left( 7.7 \times 10^{-2} \right)
\left\vert \mbox{Im} \overline{C}_{sd}^{(R,L)} \right\vert
\ .
\label{prediction}
\end{equation}
Assuming that the real and imaginary parts of the Yukawa coupling are
comparable, and using the value in Eq.~(\ref{X}) estimated from
$\epsilon_K$, we obtain
\begin{equation}
\left\vert \mbox{Re} \left( \frac{\epsilon'}{\epsilon}
\right) \right\vert
\simeq 2 \times 10^{-5}
\frac{v}{M_Q} \lesssim 2 \times 10^{-5}
\end{equation}
for the models of soft CP breaking.
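The prefactor in Eq.~(\ref{prediction}) can be verified by combining the Wilson coefficients, hadronic matrix elements and experimental amplitudes quoted above, as in the following sketch.
\begin{verbatim}
import numpy as np

NU = np.array([-1.2, 1.5, 0.8, 4.7]) * 1e-4   # nu_3..nu_6 at m_c
Q0 = np.array([-0.01, -0.19, 0.09, 0.28])     # <Q'_i(m_c)>_0 [GeV^3]
RE_A0, RE_A2 = 3.33e-7, 1.50e-8               # [GeV]
EPS_K, V_EV = 2.26e-3, 246.0

im_a0 = (NU @ Q0) / V_EV**2                   # Im A_0 per unit Im Cbar_sd
pref = (RE_A2 / RE_A0) * im_a0 / (np.sqrt(2) * RE_A0 * EPS_K)
print(pref)   # ~7.7e-2, reproducing the quoted prefactor up to rounding
\end{verbatim}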
Note that $\mbox{Re}(\epsilon^\prime/\epsilon)$ can be of order
$10^{-4}$ if we allow the imaginary part of $X_{sd}^I$ to be bigger than the real
part, $\mbox{Im}X_{sd}^I \sim 10 \times
\mbox{Re}X_{sd}^I$.
The prediction in Eq.(\ref{prediction})
is more reliable than the corresponding
prediction in the standard model because there is no expectation
of delicate cancellation between diagrams.
In the Aspon model the dominant contribution is given by the Aspon
gauge boson--heavy quark exchange penguin diagram, and
$\mbox{Re} \left( \epsilon' / \epsilon \right)$ is estimated
as~\cite{FH1}
$\mbox{Re} \left( \epsilon' / \epsilon \right) \lesssim 10^{-5}$.
\section{$B^0-\bar{B}^0$ Mixing}
\label{B}
In addition to the $W$-exchange box diagram contribution
the scalar-heavy quark exchange box diagram contributes to
$B_d$--$\bar{B}_d$ mixing in the models of soft CP violation.
The effective Hamiltonian derived from the
new contribution for the Type R soft CP breaking model
takes the same form as that for $\Delta S=1$ effective Hamiltonian
given in Eq.~(\ref{new H}) with $s$ replaced by $b$, and similarly for
the Type L soft CP breaking model and the Aspon model.
This should be compared with the $W$-top exchange diagram
contribution, which takes the same form as that in Eq.~(\ref{KM H})
with $s$ and $c$ replaced by $b$ and $t$.
Again it may be enough to assume that the two operators with different
chiralities give the same hadron matrix elements.
Then let us compare $C_{bd}$ with $C_{bd}^{\rm(KM)}$.
The experimental value of the top quark mass, $m_t=175$\,GeV, gives
$C_{bd}^{\rm (KM)} \simeq \left(3.46\times 10^{-3} \right)
\left( V_{tb}^\ast V_{td} \right)^2$.
In the models of soft CP breaking
the quark mixing matrix is real and orthogonal,
and the unitarity triangle is flat.
In the Aspon model the imaginary parts of the mixing matrix arise from the
imaginary parts of the small quantities $x_i$, and the $3\times3$
submatrix is real and orthogonal in good approximation.
So the current experimental value
$\left\vert \left(V_{ud}^\ast V_{ub}\right)
/ \left( V_{cd}^\ast V_{cb} \right) \right\vert \simeq 0.35$ leads to
$\left\vert \left(V_{td}^\ast V_{tb}\right)
/ \left( V_{cd}^\ast V_{cb} \right) \right\vert \simeq 0.65$, and
$\left\vert V_{td}^\ast V_{tb} \right\vert \simeq 5.9 \times 10^{-3}$.
This implies $C_{bd}^{\rm (KM)} \simeq 1.2 \times 10^{-7}$.
On the other hand, when we assume that the Yukawa couplings are
independent of the generation in the models for soft or
spontaneous CP violation considered in this paper,
$C_{bd}$ is roughly of the same order as $C_{sd}$;
$\left\vert C_{bd}^{(R,L,A)} \right\vert \sim
\left\vert C_{sd}^{(R,L,A)} \right\vert \sim 10^{-10}$.
This value is much smaller than $C_{bd}^{\rm (KM)}$, and negligible.
This situation is similar to $\eta=0$ in the standard model,
which is not excluded by the experiment~\cite{FG,BHSW,M}.
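The value $C_{bd}^{\rm (KM)} \simeq 1.2\times 10^{-7}$ quoted above can be checked as follows; the value of the Inami-Lim function at $m_t=175$\,GeV is an assumed standard input.
\begin{verbatim}
import numpy as np

M_W, V_EV = 80.4, 246.0
S_T = 2.58                 # S(m_t^2/M_W^2) for m_t = 175 GeV (assumed)
V_TD_V_TB = 5.9e-3         # |V_td* V_tb| from the flat-triangle estimate

pref = (M_W**2 / V_EV**2) * S_T / (8 * np.pi**2)
print(pref)                      # ~3.5e-3, cf. 3.46e-3 in the text
print(pref * V_TD_V_TB**2)       # ~1.2e-7
\end{verbatim}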
In the case of generation-{\it independent} Yukawa couplings, CP
violation in $B_d$--$\bar{B}_d$ mixing is much smaller for the
soft and spontaneous CP breaking models than that for the SM.
On the other hand, we can admit generation-{\it dependent} Yukawa
couplings, and expect that $C_{bd}$ is larger and roughly of the same
order as $C_{bd}^{\rm(KM)}$;
$\left\vert C_{bd}^{(R,L,A)} \right\vert \simeq 10^{-7}$.
For the models of soft CP breaking this corresponds to:
\begin{equation}
\frac{v}{M_Q} \left\vert X_{bd} \right\vert \simeq 7 \times
10^{-3} \ ,
\label{Xbd}
\end{equation}
where $X_{bd}$ is the average value of $X_{bd}^{I=1}$ and
$X_{bd}^{I=2}$.
[Note that $\mbox{Im} X_{bd} = \mbox{Im} X_{bd}^{I=1} =
- \mbox{Im} X_{bd}^{I=2}$.]
For the Aspon model $\left\vert C_{bd}^{(A)} \right\vert
\simeq 10^{-7}$ leads to
\begin{equation}
\frac{v}{\kappa} \left\vert x_1^\ast x_3 \right\vert
\simeq 2 \times 10^{-4} \ .
\label{x13}
\end{equation}
For the Type R soft breaking model
$\mbox{Im}X_{bd}$ is strongly constrained by $\bar{\Theta}$,
$\left\vert \mbox{Im}X_{bd} \right\vert \lesssim
2 \times 10^{-4}$ (see Eq.~(\ref{thetaR})).
So the above constraint (\ref{Xbd})
for $\left\vert X_{bd} \right\vert$ implies that
$\left\vert \mbox{Re} X_{bd}
\right\vert$ is much bigger than $\left\vert \mbox{Im}
X_{bd} \right\vert$. This implies that the CP violation
in $B_d$--$\bar{B}_d$ mixing in the Type R soft breaking
model is much smaller than that in the SM even if we introduce the
generation-dependent Yukawa coupling.
For the Type L soft breaking model and the Aspon model,
however, the constraint from $\bar{\Theta}$ is not strong,
so that the CP violation in the $B_d$--$\bar{B}_d$ mixing
can be as big as in the SM.
Similarly, for $B_s$--$\bar{B}_s$ mixing,
we may expect that $C_{bs}$ is as large as $C_{bs}^{\rm (KM)}$.
In such a case,
the CP violation in the $B_s$--$\bar{B}_s$ mixing
for the Type L soft breaking model and the Aspon model
can be as large as in the SM.
On the other hand, due to the constraint
from $\bar{\Theta}$,
for the Type R soft breaking model it is much smaller
than that in the SM.
\section{Neutral $B$ Decays and CP Asymmetries.}
\label{decays}
The CP violation in
the neutral $B$ meson decays is expressed by the product of the two
quantities measuring the indirect and direct CP violations,
respectively:
\begin{equation}
\lambda(B_q \rightarrow X) =
\left( \frac{q}{p} \right)_{B_q}
\frac{\bar{A}(\bar{B_q} \rightarrow \bar{X})}%
{A(B_q \rightarrow X)}
\ ,
\label{def:lambda}
\end{equation}
where $B_q$ is $B_d$ or $B_s$.
In the SM this quantity measures the angles of the unitarity triangle,
whose sides are the three terms in the unitarity relation:
\begin{equation}
V_{ub}^{\ast} V_{ud} +
V_{cb}^{\ast} V_{cd} +
V_{tb}^{\ast} V_{td} = 0.
\end{equation}
The angles between the 1st \& 2nd, 2nd \& 3rd, and 3rd \& 1st terms are
called $\gamma$,
$\alpha$, and $\beta$, respectively. The KM model
predicts a sizeable area of the triangle, implying,
{\it e.g.}, $\sin2\beta > 0.65$~\cite{M}.
To study the quantity $\lambda$ in Eq.~(\ref{def:lambda}) in
the soft and spontaneously broken models,
let us
consider four cases for the coefficients of the 4-fermi
operator as in Eqs.~(\ref{new H}) and (\ref{KM H}). (An alternative
analysis of new physics and the quantity $\lambda$ is in \cite{SW}.)
The first case
corresponds to generation-independent Yukawa couplings. The
other three cases involve generation dependence, in
particular where the third generation couples more strongly
than the second (case 2), the first (case 3) or both (case 4);
these lead, in general, to a deviation from pure superweak
phenomenology. The four cases are explicitly:
\begin{enumerate}
\item
$\left\vert \mbox{Im} C_{bd} \right\vert \sim
\left\vert \mbox{Im} C_{bs} \right\vert \sim
\left\vert \mbox{Im} C_{sd} \right\vert$;
\item
$\left\vert \mbox{Im} C_{bs} \right\vert \sim
\left\vert \mbox{Im} C_{sd} \right\vert$ and
$\left\vert \mbox{Im} C_{bd} \right\vert \sim
\left\vert C_{bd}^{\rm (KM)} \right\vert$;
\item
$\left\vert \mbox{Im} C_{bd} \right\vert \sim
\left\vert \mbox{Im} C_{sd} \right\vert$ and
$\left\vert \mbox{Im} C_{bs} \right\vert \sim
\left\vert C_{bs}^{\rm (KM)} \right\vert$;
\item
$\left\vert \mbox{Im} C_{bd} \right\vert \sim
\left\vert C_{bd}^{\rm(KM)} \right\vert$ and
$\left\vert \mbox{Im} C_{bs} \right\vert \sim
\left\vert C_{bs}^{\rm(KM)} \right\vert$.
\end{enumerate}
The first factor
$(q/p)_{B_q}$ in Eq.~(\ref{def:lambda}) measures the indirect CP
violation. In the present models, up to corrections of order
$10^{-2}$, it is given by the quantity with modulus one,
\begin{equation}
\left( \frac{q}{p} \right)_{B_q} \simeq
\frac{C_{bq}^{\rm(KM)} + C_{bq}}%
{\left\vert C_{bq}^{\rm(KM)} + C_{bq}\right\vert}
\label{qpval}
\ .
\end{equation}
Then for $\mbox{Im} C_{bq} \sim \mbox{Im} C_{sd}$ we find
$\mbox{Im}\left( q/p \right)_{B_q} \lesssim 10^{-2}$.
Note that any non-vanishing value for
$\mbox{Im}\left( q/p \right)_{B_q}$ comes from the approximation
involved in Eq.~(\ref{qpval}).
On the other hand, for $\left\vert \mbox{Im} C_{bq} \right\vert
\sim \left\vert C_{bq}^{\rm(KM)} \right\vert$ as in cases 2, 3 and 4 (as in
the SM), which is possible
for the Aspon model and the Type L soft breaking model,
$\mbox{Im} C_{bq}$ is no longer negligible, and so
it is convenient to define
\begin{equation}
\left( \frac{q}{p} \right)_{B_q} \simeq
e^{i 2 \tilde{\beta}_q} \ .
\end{equation}
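To make Eq.~(\ref{qpval}) concrete, the effective phase $\tilde{\beta}_q$
can be extracted numerically; the short Python sketch below is purely
illustrative, with placeholder values for $C_{bq}^{\rm(KM)}$ and $C_{bq}$
that are not taken from any fit:
\begin{verbatim}
# Schematic sketch of (q/p) = (C_KM + C_new)/|C_KM + C_new|.
# All numerical inputs are illustrative placeholders.
import cmath

C_KM  = 1.2e-7 + 0j                    # real KM contribution in these models
C_new = 1.0e-7 * cmath.exp(1j * 0.8)   # hypothetical new-physics contribution

qp = (C_KM + C_new) / abs(C_KM + C_new)  # modulus one by construction
beta_eff = cmath.phase(qp) / 2           # (q/p) ~ exp(2 i beta_tilde)
print(abs(qp), beta_eff)
\end{verbatim}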
The second factor
$\left(\bar{A}/A\right)$ in Eq.~(\ref{def:lambda}) measures direct
CP violation in neutral $B$ meson decays.
Neutral $B$ meson decays are described by $\bar{b}\rightarrow q'
\bar{q'} q''$ at the quark level.
In this case the ratio of the
$W$-exchange penguin contribution to the tree contribution is
roughly~\cite{Neubert}
\begin{equation}
\frac{A_{\rm penguin}^{\rm(KM)}}{A_{\rm tree}} \sim
\left( \mbox{4-10\%} \right)
\frac{V_{tb}^* V_{tq''}}{V_{q'b}^* V_{q'q''}} \ .
\label{ratio KM}
\end{equation}
In addition, there is a contribution from the scalar-heavy quark
exchange penguin diagram in the soft breaking models, and
a contribution from the Aspon gauge boson-heavy quark
exchange penguin diagram in the Aspon model.
The ratio of the new penguin contribution to
the $W$-top penguin contribution is given by
\begin{equation}
\frac{A_{\rm penguin}^{\rm(new)}}{A_{\rm penguin}^{\rm(KM)}}
\, \sim \,
\frac{\overline{C}_{bq''}}{\overline{C}_{bq''}^{\rm (KM)}} \ ,
\label{ratio new}
\end{equation}
where $\overline{C}_{bq''}$ and $\overline{C}_{bq''}^{\rm (KM)}$
are analogues of $\overline{C}_{sd}$ in Eq.~(\ref{S1}).
This ratio is estimated by the ratio of the couplings:
\renewcommand{\arraystretch}{1.5}
\begin{equation}
\frac{\overline{C}_{bq''}}{\overline{C}_{bq''}^{\rm (KM)}}
\, \sim \,
\left\{
\begin{array}{ll}
\displaystyle
\left( \frac{v}{M_Q} \right)^2
\frac{X_{bq''}}{V_{tb}^\ast V_{tq''}} \ , \qquad
& \mbox{for the soft breaking models}\ ,
\\
\displaystyle
\left( \frac{v}{\kappa} \right)^2
\frac{x_3 x_{q''}^\ast}{V_{tb}^\ast V_{tq''}} \ , \qquad
& \mbox{for the Aspon model}\ ,
\end{array}
\right.
\end{equation}
\renewcommand{\arraystretch}{1}
where $x_{d,s} = x_{1,2}$.
For $\mbox{Im} C_{bq''} \sim \mbox{Im} C_{sd}$,
the imaginary part of
this ratio is very small, and the new contribution
is negligible compared with the KM-penguin
contribution.
When $\left\vert \mbox{Im} C_{bq''} \right\vert
\sim \left\vert C_{bq''}^{\rm (KM)} \right\vert$,
the imaginary part of this ratio
can be of order one in the Type L soft breaking model,
while it is small, $\lesssim 10^{-1}$, in the Aspon model.
Then if
\begin{equation}
\left\vert \frac{V_{tb} V_{tq''}}{V_{q'b} V_{q'q''}}
\right\vert \leq 1 \ ,
\end{equation}
the tree diagram dominates over the penguin
diagram~\cite{Neubert}, and the direct CP violation
in the $B$ system is small.
This corresponds to the processes $b\rightarrow c\bar{c}s$,
$b\rightarrow c\bar{c}d$ and $b\rightarrow u\bar{u}d$.
On the other hand, if tree diagrams are forbidden,
the penguin diagram dominates, and
\begin{equation}
\frac{\bar{A}}{A} \sim
\frac{
\overline{C}_{bq''}^{\rm (KM)} + \overline{C}_{bq''}^\ast
}{
\overline{C}_{bq''}^{\rm (KM)} + \overline{C}_{bq''}
}
\ .
\end{equation}
This is for $q'=d$ or $s$.
When $\left\vert \mbox{Im} C_{bq''} \right\vert \sim
\left\vert C_{bq''}^{\rm (KM)} \right\vert$
in this case, it is convenient to parameterize
\begin{equation}
\frac{\bar{A}}{A} \simeq e^{i2\tilde{\alpha}_{q''}} \ ,
\end{equation}
where $\tilde{\alpha}_{q''}$ is of order one in the
Type L soft breaking model, $\lesssim 10^{-1}$ in the
Aspon model, and very small in the Type R soft
breaking model.
\begin{table}[htbp]
\begin{tabular}{ccccccc}
& & (1) & (2) & (3) & (4) & SM \\
\hline
$b \rightarrow c\bar{c}s$
& $B_d \rightarrow \psi K_S$ & 0 & $\sin2\tilde{\beta}_d$
& 0 & $\sin2\tilde{\beta}_d$ & $-\sin 2 \beta $ \\
& $B_s \rightarrow D^+_s D^-_s$ & 0 & 0 & $\sin2\tilde{\beta}_s$
& $\sin2\tilde{\beta}_s$ & $-\sin 2 \beta' $ \\
\hline
$b \rightarrow c\bar{c}d$
& $B_d \rightarrow D^+ D^-$ & 0 & $\sin2\tilde{\beta}_d$
& 0 & $\sin2\tilde{\beta}_d$ & $-\sin 2 \beta $ \\
& $B_s \rightarrow \psi K_S$ & 0 & 0 & $\sin2\tilde{\beta}_s$
& $\sin2\tilde{\beta}_s$ & $-\sin 2 \beta' $ \\
\hline
$b \rightarrow u\bar{u}d$
& $B_d \rightarrow \pi^+\pi^-$ & 0 & $\sin2\tilde{\beta}_d$
& 0 & $\sin2\tilde{\beta}_d$ & $\sin 2 \alpha $ \\
& $B_s \rightarrow \rho K_S$ & 0 & 0 & $\sin2\tilde{\beta}_s$
& $\sin2\tilde{\beta}_s$ & $-\sin 2 (\gamma+\beta') $ \\
\hline
$b \rightarrow s\bar{s}s$
& $B_d \rightarrow \phi K_S$ & 0 & $\sin2\tilde{\beta}_d$
& $\sin2\tilde{\alpha}_s$
& $\sin2\left(\tilde{\beta}_d+\tilde{\alpha}_s\right)$
& $-\sin 2 (\beta-\beta') $ \\
& $B_s \rightarrow \eta' \eta'$ & 0 & 0
& $\sin2\left(\tilde{\beta}_s+\tilde{\alpha}_s\right)$
& $\sin2\left(\tilde{\beta}_s+\tilde{\alpha}_s\right)$
& 0 \\
\hline
$b \rightarrow s\bar{s}d$
& $B_d \rightarrow K_S K_S$ & 0
& $\sin2\left(\tilde{\beta}_d+\tilde{\alpha}_d\right)$
& 0
& $\sin2\left(\tilde{\beta}_d+\tilde{\alpha}_d\right)$
& 0 \\
& $B_s \rightarrow \phi K_S$ & 0 & $\sin2\tilde{\alpha}_d$
& $\sin2\tilde{\beta}_s$
& $\sin2\left(\tilde{\beta}_s+\tilde{\alpha}_d\right)$
& $\sin 2 (\beta-\beta')$ \\
\end{tabular}
\caption[]{
Values of $\mbox{Im}\lambda\left(B_q\rightarrow X\right)$
for the examples of the neutral $B$ meson decay modes.
(1)--(4) correspond to four cases discussed in text.
A zero indicates that the value is small,
$\lesssim{\cal O}(10^{-2})$.
The column indicated by ``SM'' shows the predictions in
the SM~\cite{Neubert}.
}\label{tab:1}
\end{table}
In Table~\ref{tab:1} we show examples of neutral $B$
meson decay modes with values of $\mbox{Im}\lambda
\left(B_q \rightarrow X\right)$ for the four cases
discussed above.
One can read specific features of the present models
from Table~\ref{tab:1}.
For example, if the CP asymmetry in $B_d \rightarrow K_S K_S$ were large,
it would indicate a clear deviation from the Standard
Model, and the asymmetries for the tree-dominated decay modes would all be equal;
$\mbox{Im}\lambda\left(B_d \rightarrow \psi K_S\right)
\simeq \mbox{Im}\lambda\left(B_d \rightarrow D^+ D^-\right)
\simeq \mbox{Im}\lambda\left(B_d \rightarrow \pi^+ \pi^-\right)$.
On the other hand, if it were small, all CP violations
in $B_d$ decays would be small.
If we focus just on the ``gold-plated'' decay mode
$B \rightarrow \psi K_S$ (top row of Table~\ref{tab:1}),
where the SM predicts an unmistakably large CP asymmetry,
then in the Type R soft breaking model one must have
condition (1) and hence a very small $\beta$ ($\beta < 10^{-2}$);
in the Type L soft breaking model or the Aspon model one
{\it can} admit conditions (2) and (4) and hence a large
effective $\beta$.
However, if we impose that the Yukawa couplings are
generation-independent, all except the SM
predict a CP asymmetry in this mode too
small to be detected.
\section{Compatibility with Upper Bound on $\bar{\Theta}$;
{\it Lower} Bounds on Electric Dipole Moments.}
\label{theta}
It is interesting to estimate the {\it lower} bound
on $\bar{\Theta}$ and hence on the neutron electric dipole
moment $d_n$, for the different models.
First recall that in the standard model the strong CP
problem is unresolved, requiring an
additional mechanism such
as the Peccei-Quinn symmetry~\cite{PQ} or a massless up quark
(see, {\it e.g.}, Ref.~\cite{P});
there is no such lower limit
because there is no reason for $\bar{\Theta}$ to be
small. If one simply sets the bare $\bar{\Theta}$
equal to zero (without motivation), then, as
pointed out by Ellis and Gaillard~\cite{EG},
there is a finite correction at two loops of $\sim 10^{-16}$
and an infinite renormalization at seven loops
which is even smaller, $\sim 10^{-32}$, if one arbitrarily
puts in a cut-off equal to the Planck mass. But these are not really
predictions for
a lower bound, because fine-tuning is required unless there is an additional
mechanism.
The value of $\bar{\Theta}$ is strongly constrained by
the neutron electric dipole moment experiments:
$\bar{\Theta}\leq 10^{-10}$.
In the models considered in this paper the determinants of
mass matrices of quarks are real, and
the resultant $\bar{\Theta}$ is zero at tree level.
However, it is generated at some loop level through
corrections to the mass matrix;
$\bar{\Theta} =
\mbox{Im} \left\{
\mbox{tr}\left[ M^{-1} \delta M \right] \right\}$,
where $M$ is the tree level mass matrix.
In the Aspon model a contribution to $\bar{\Theta}$
appears at one-loop level due to the mixing between the
heavy scalar $\chi_I$ and the ordinary Higgs boson $H$
given in Eq.~(\ref{L chi H})~\cite{FN}.
This contribution is estimated as~\cite{FG}
\begin{equation}
\bar{\Theta} = \frac{\lambda x^2}{16\pi^2} \ ,
\end{equation}
where $\lambda$ is an average value of $\lambda_{IJ}$
in Eq.~(\ref{L chi H}) and $x$ is an average value of
$\left\vert x_i \right\vert$ in Eq.~(\ref{def x}).
{}From a one-loop correction from the
quark box diagram,
the lowest values of $\lambda$ and hence of
$\bar{\Theta}$ are estimated as~\cite{FH}
\begin{equation}
\lambda \gtrsim \frac{x^2}{16\pi^2}\ ,
\qquad
\bar{\Theta} \gtrsim \frac{x^4}{\left(16\pi^2\right)^2} \ .
\label{ThetaA}
\end{equation}
Combined with the upper bound on $\bar{\Theta}$,
this implies $x^2 \lesssim 10^{-3}$.
When the Yukawa couplings are generation-independent,
we obtain $\kappa \lesssim 3 \times 10^4$\,GeV by combining
this with the constraint (\ref{eps}) from $\epsilon_K$.
The assumption $\kappa > v$ gives the lower bound $x^2 \gtrsim
10^{-5}$, which leads to $\bar{\Theta} \gtrsim 4 \times 10^{-15}$, and
hence $d_n \gtrsim 4 \times 10^{-30}$e.cm.
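The chain of numbers above follows from elementary arithmetic; the short
cross-check below (ours, using the rough conversion
$d_n \sim \bar{\Theta}\times 10^{-15}$\,e.cm implicit in the text)
reproduces the quoted bounds from Eq.~(\ref{ThetaA}):
\begin{verbatim}
# Cross-check of the Aspon-model bounds from Eq. (ThetaA), Theta_bar <= 1e-10.
import math

loop = 16 * math.pi**2            # one-loop factor, ~ 158

x2_max = math.sqrt(1e-10) * loop  # from Theta_bar ~ x^4/loop^2 <= 1e-10
print("x^2 <", x2_max)            # ~ 1.6e-3, i.e. x^2 <~ 1e-3

x2_min = 1e-5                     # from kappa > v, generation-independent case
print("Theta_bar >", x2_min**2 / loop**2)  # ~ 4e-15 -> d_n >~ 4e-30 e.cm
\end{verbatim}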
As discussed in Section~\ref{B}, one can allow
$\left\vert C_{bd}^{(A)} \right\vert$ to be as large as
$\left\vert C_{bd}^{\rm(KM)}\right\vert$ by using
generation-dependent Yukawa couplings. In such a case, combining
Eq.~(\ref{ThetaA}) with
the constraint (\ref{x13}) from $B_d$--$\bar{B}_d$
mixing, we obtain $\kappa \lesssim 10^3$\,GeV (rather than $\kappa
\lesssim 3 \times 10^4$\,GeV).
Equation~(\ref{x13}) with the assumption $\kappa > v$ gives
the lower bound $\left\vert x_1^\ast x_3 \right \vert \gtrsim 2 \times
10^{-4}$, which combined with Eq.~(\ref{ThetaA}) leads to $\bar{\Theta}
\gtrsim 2 \times 10^{-12}$, and hence $d_n \gtrsim 2 \times
10^{-27}$e.cm.
\begin{figure}[bthp]
\begin{center}
\ \epsfbox{fig4.eps}
\end{center}
\caption[]{%
Three loop diagram which gives a correction
to the imaginary part of the $d$ mass matrix
in the model for Type R soft CP breaking.
}\label{fig:theta R d}
\end{figure}
In the model for Type R soft CP breaking
the corrections to the imaginary part of the mass matrix of the down
sector first arise at the two-loop level, as pointed out in Ref.~\cite{BCK}.
We estimate this as $\bar{\Theta} \simeq \lambda f^2 /
\left(16\pi^2\right)^2$, where $\lambda$ is an average value of
$\lambda_{IJ}$ in Eq.~(\ref{L chi H}) and $f$ is an average value of
the Yukawa couplings. Unlike the case of the Aspon model
(where the top quark contributes), the
lowest value of $\lambda$ is here estimated from a one-loop correction from
the box diagrams of {\it down-type} quarks.
The resultant lowest value of $\bar{\Theta}$ is estimated as
$\bar{\Theta} \gtrsim \left( f^4/(16\pi^2)^3 \right) \cdot
\left( m_b / v \right)^2 $, which leads only to $f^2\lesssim1$.
This constraint is not strong. However, a stronger constraint comes
from the three-loop diagram shown in Fig.~\ref{fig:theta R d}.
The correction from the diagram to the imaginary part of the mass
matrix of down sector is estimated as
\begin{equation}
\delta {\cal M}_d \sim \frac{\alpha_s}{4\pi}
\frac{1}{16\pi^2 v^2} \frac{1}{16\pi^2}
V^T \left( {\cal M}_u \right)^2 V {\cal M}_d X^I
\ ,
\end{equation}
where $\alpha_s \simeq 0.12$ is the QCD coupling
and $V$ is a real orthogonal KM matrix.
The contribution to
$\bar{\Theta}$ is calculated by multiplying the above
$\delta {\cal M}_d$ by $\left( {\cal M}_d \right)^{-1}$
and taking the trace. Since ${\cal M}_d$ is sandwiched between
two hermitian matrices in $\delta {\cal M}_d$, enhancement
factors arise from $\left( {\cal M}_d \right)^{-1}$.
The resultant correction to $\bar{\Theta}$ is given by
\begin{eqnarray}
\bar{\Theta}(\mbox{down}) &\sim&
\frac{\alpha_s}{4\pi}
\frac{m_t^2}{ \left(16\pi^2\right)^2 v^2 }
\left[
\frac{m_s}{m_d} V_{ts} V_{td}\, \mbox{Im}X^I_{12}
+ \frac{m_b}{m_d} V_{tb} V_{td}\, \mbox{Im}X^I_{13}
+ \frac{m_b}{m_s} V_{tb} V_{ts}\, \mbox{Im}X^I_{23}
\right] \ .
\end{eqnarray}
By using $(m_d,\,m_s,\,m_b) \simeq
(8,\,150,\,4800)$\,MeV and $(V_{td},\,V_{ts},\,V_{tb})
\simeq (5.9\times10^{-3},\,4.3\times10^{-2},\,1)$,
the above expression becomes
\begin{equation}
\bar{\Theta}(\mbox{down}) \sim
\left[
9.0\times 10^{-3} \mbox{Im} X^I_{sd}
+ 6.7 \mbox{Im} X^I_{bd} + 2.6 \mbox{Im} X^I_{bs}
\right] \times 10^{-7}
\ .
\end{equation}
Then the constraint $\bar{\Theta} \lesssim 10^{-10}$ gives
\begin{eqnarray}
&& \left\vert \mbox{Im} X^I_{sd} \right\vert
\lesssim 0.1 \ ,\nonumber\\
&& \left\vert \mbox{Im} X^I_{bd} \right\vert
\lesssim 2\times10^{-4} \ ,
\nonumber\\
&& \left\vert \mbox{Im} X^I_{bs} \right\vert
\lesssim 4\times10^{-4} \ .
\label{thetaR}
\end{eqnarray}
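The numerical coefficients and bounds above can be reproduced directly;
the sketch below is our own cross-check, adopting $v \simeq 246$\,GeV in
the prefactor (which reproduces the quoted coefficients) together with the
quark masses and CKM elements quoted in the text:
\begin{verbatim}
# Cross-check of the Theta_bar(down) coefficients and the bounds (thetaR).
import math

alpha_s, v, mt = 0.12, 246.0, 175.0
loop = 16 * math.pi**2
pref = (alpha_s / (4 * math.pi)) * mt**2 / (loop**2 * v**2)

md, ms, mb = 8e-3, 0.150, 4.8            # quark masses in GeV
Vtd, Vts, Vtb = 5.9e-3, 4.3e-2, 1.0

coeffs = [pref * (ms / md) * Vts * Vtd,  # ~ 9.0e-3 x 1e-7
          pref * (mb / md) * Vtb * Vtd,  # ~ 6.7    x 1e-7
          pref * (mb / ms) * Vtb * Vts]  # ~ 2.6    x 1e-7
print(coeffs)

# Bounds on Im X^I from Theta_bar <~ 1e-10:
print([1e-10 / c for c in coeffs])       # ~ [0.1, 2e-4, 4e-4]
\end{verbatim}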
When we demand $\left\vert C_{bd}^{(R)}\right\vert \simeq
\left\vert C_{bd}^{\rm(KM)} \right\vert$ consistently with the above
upper bound,
we need to require
$\left\vert \mbox{Re}X^I_{bd} \right\vert \gg
\left\vert \mbox{Im}X^I_{bd} \right\vert$.
Then the bounds for $M_Q$ and $\bar{\Theta}$ are the same as those for the
generation-independent Yukawa couplings.
The combination of the upper bound (\ref{thetaR}) with the constraint
obtained from $\epsilon_K$ (Eq.~(\ref{X})) gives the upper bound for
$M_Q$: $M_Q^{(R)} \lesssim 8 \times 10^4$\,GeV.
The lower bound for $\bar{\Theta}$ may be obtained from the lower
bound for $\bar{X}_{sd}$ ($\bar{X}_{sd} \gtrsim 3 \times 10^{-4}$)
derived from $\epsilon_K$. The result is
$\bar{\Theta} \gtrsim 3 \times 10^{-13}$, and hence
$d_n \gtrsim 3 \times 10^{-28}$e.cm.
In the model of Type L soft CP breaking, a contribution to
$\delta {\cal M}$ arises at two-loop level\footnote%
{This two-loop contribution is due to Sheldon Glashow.}
from the diagram similar to
the one for the Type R soft breaking model,
while the three-loop diagram similar to the one in
Fig.~\ref{fig:theta R d} does not contribute to
$\bar{\Theta}$. So the dominant contribution to $\bar{\Theta}$ is
estimated as $\bar{\Theta} \simeq \lambda f^2/ (16\pi^2)^2$.
Similarly to the Aspon model, a lowest value of $\lambda$ is estimated
from a one-loop correction from the box diagram of {\it both} up-type
{\it and} down-type quarks. The resultant value of $\bar{\Theta}$ is
thus estimated as $\bar{\Theta} \gtrsim f^4/(16\pi^2)^3$, which leads
to $f^2\lesssim 0.02$.
For the case of generation-independent Yukawa couplings the
combination of this upper bound with the constraint from $\epsilon_K$
gives an upper bound for $M_Q$: $M_Q \lesssim 2 \times 10^5$\,GeV.
The bound $\bar{X}_{sd} \gtrsim 3 \times 10^{-4}$ leads to
$\bar{\Theta} \gtrsim 2 \times 10^{-14}$, and hence $d_n \gtrsim 2
\times 10^{-29}$e.cm.
On the other hand, when we require $\left\vert C_{bd}^{(L)}
\right\vert \simeq \left\vert C_{bd}^{\rm(KM)} \right\vert$, the
combination of $f^2 \lesssim 0.02$ with the constraint
obtained from $B_d$--$\bar{B}_d$ mixing gives an
upper bound for $M_Q$: $M_Q \lesssim 7\times 10^2$\,GeV.
A lower bound for $\bar{X}_{bd}$ can be derived
from the $B_d$--$\bar{B}_d$ mass
difference (Section(\ref{B})):
$\bar{X}_{bd} \gtrsim 7 \times 10^{-3}$, which leads to $f^2\gtrsim
7\times 10^{-3}$. {}From this,
the lower bound for $\bar{\Theta}$
is estimated as $\bar{\Theta} \gtrsim 10^{-11}$, and hence
$d_n \gtrsim 10^{-26}$e.cm, quite close to the experimental limit.
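Again the quoted numbers follow from straightforward arithmetic, as in this
short cross-check of ours:
\begin{verbatim}
# Cross-check of the Type L bounds: Theta_bar >~ f^4/(16 pi^2)^3.
import math

loop = 16 * math.pi**2
f2_max = math.sqrt(1e-10 * loop**3)   # from Theta_bar <= 1e-10
print("f^2 <", f2_max)                # ~ 0.02

f2_min = 7e-3                         # from Xbar_bd >~ 7e-3 (B_d mixing)
print("Theta_bar >", f2_min**2 / loop**3)  # ~ 1e-11 -> d_n >~ 1e-26 e.cm
\end{verbatim}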
\section{Summary of Predictions.}
\label{summary}
The predictions of the different models we have
studied are collected in Table~\ref{tab:2}.\footnote{%
In Table~\ref{tab:2} the Aspon model is ``L-type spontaneous'', meaning
that light left-handed quarks couple to the new quarks. If
we replace this by an Aspon model in which right-handed
down-type quarks couple to the new quarks, the lower limits on $\lambda$
and $\bar{\Theta}$ in Eq.~(\ref{ThetaA})
are each reduced by a factor $(m_b/v)^2 \sim 10^{-3}$.}
\begin{table}[ht]
\begin{tabular}{||c||c|c|c|c||}
& KM & R-type soft & L-type soft & Aspon \\
\hline
\hline
$\left(\frac{\epsilon^{'}}{\epsilon}\right)$ & few $10^{-4}$ &
? & ? & $<10^{-5}$ \\
$\bigtriangleup$ & big & flat & ? & ? \\
$\bar{\Theta}$ & axion? & $>10^{-13}$ ($10^{-13}$)
& $>10^{-11}$ ($10^{-14}$) & $>10^{-12}$ ($10^{-15}$)\\
$d_n$ & $10^{-32}$ & $>10^{-28}$ ($10^{-28}$)
& $>10^{-26}$ ($10^{-29}$) & $>10^{-27}$ ($10^{-30}$) \\
\end{tabular}
\caption[]{
Summary of results for the three CP violation models
compared to the SM.
$\bigtriangleup$ denotes the unitarity triangle determined from
neutral $B$ meson decays.
A query ? denotes not {\it necessarily} pure superweak (essentially
zero $\epsilon'/\epsilon$ and a flat $\bigtriangleup$),
but becomes so if the Yukawa couplings are generation-independent. In
that case, the first two rows in the last three columns become
indistinguishable.
Values in parentheses denote weaker bounds for the case of
generation-independent Yukawa couplings, to be
compared to generation-dependent ones.}
\label{tab:2}
\end{table}
{}From this Table we see that the predictions of the different
models diverge widely. Therefore, once the quantity
$(\epsilon^{'}/\epsilon)$ is measured with an accuracy
of $10^{-4}$, and the CP asymmetry in $B\rightarrow \psi K_S$
is measured to determine whether or not $\sin 2\beta > 10^{-2}$,
we will be able to exclude models. As mentioned in the Introduction, we
expect that both of these measurements will be completed
within perhaps 2 or 3 years.
\bigskip
\section*{Acknowledgement}
This work was supported in part by the
US Department of Energy under Grant No.
DE-FG05-85ER-40219.
\section{Introduction}
\label{sec:intro}
The identification of the particle dark matter (DM) is one of the most challenging tasks in theoretical and experimental particle physics. Although the extensive searches to date yield null results, tremendous progress has been made in recent years in the underground direct searches \cite{dama,cogent,cresst,Agnese:2013rvf,Felizardo:2011uw,Archambault:2012pm,Behnke:2012ys,xenonall,Akerib:2013tjd,Agnese:2014aze}, in the indirect astrophysical searches \cite{astroDM,fermi_dwarfs,planckwmap}, and at colliders \cite{LEPDM,TevatronDM,LHCDM}.
From the theoretical point of view, the weakly interacting massive particle (WIMP) remains a highly motivated candidate (for a recent review, see, {\it e.g.}, Ref.~\cite{Bertone:2004pz}).
To reproduce the correct relic abundance in the current epoch, the WIMP mass is roughly bounded as
\begin{eqnarray}
M_{\rm WIMP} \lsim {g^2\over 0.3}\ 1.8~{\rm TeV}.
\label{eq:mwimp}
\end{eqnarray}
The upper bound miraculously coincides with the new physics scale expected based on the ``naturalness'' argument for electroweak physics. There is thus a high hope that the search for a WIMP dark matter may be intimately related to the discovery of TeV scale new physics.
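For orientation, Eq.~(\ref{eq:mwimp}) can be evaluated for a few
representative coupling strengths; this is an illustrative sketch of ours,
taking the quoted scaling at face value:
\begin{verbatim}
# Illustrative evaluation of the bound M_WIMP <~ (g^2/0.3) * 1.8 TeV.
def m_wimp_max_TeV(g2):
    """Upper bound on the WIMP mass for an effective coupling g^2."""
    return (g2 / 0.3) * 1.8

for g2 in (0.1, 0.3, 0.65**2):   # illustrative weak-scale-like choices
    print(g2, m_wimp_max_TeV(g2), "TeV")
\end{verbatim}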
However, the precise value of the WIMP mass and the exact relic abundance heavily depend on the dynamics in a specific model.
It is interesting to understand the viable WIMP mass range under current experimental constraints.
While the dark matter direct detection experiments probe dark matter masses around a few hundred GeV with high sensitivity, the sensitivity drops significantly for light dark matter, given the limitation from the energy threshold of a given experiment.
Light WIMP dark matter and its related sector, on the other hand, typically receive strong experimental constraints from various dark matter related searches, especially direct searches at lepton colliders. These factors make a viable light WIMP DM candidate in a given model very restricted, often relying on specific kinematics and dynamics. A comprehensive examination of light DM candidates in the low mass range is therefore in demand.
Indeed, there have been interesting excesses in the annual modulation reported by the DAMA collaboration \cite{dama}, and in direct measurements by the CoGeNT~\cite{cogent}\footnote{For a recent independent analysis, see Ref.~\cite{Davis:2014bla}.}, CRESST~\cite{cresst} and CDMS~\cite{Agnese:2013rvf} experiments, which could be interpreted as signals of a low mass dark matter.
The tantalizing features in the gamma-ray spectrum from the Galactic Center in the Fermi-LAT data~\cite{gammaray} could also be attributed to contributions from low mass dark matter annihilation \cite{Daylan:2014rsa}.
To convincingly establish a WIMP DM candidate in the low mass region, it is ultimately important to reach consistent observations among the direct detection, indirect detection and collider searches for the common underlying physics such as mass, spin and coupling strength.
Supersymmetric (SUSY) theories are well motivated to understand the large hierarchy between the electroweak scale and the Planck scale. The lightest supersymmetric particle (LSP) can serve as a viable DM candidate. In the Minimal Supersymmetric Standard Model (MSSM), the lightest neutralino is the best DM candidate (for a review, see, {\it e.g.}, Ref.~\cite{Jungman:1995df}). The absence of a DM signal in the underground direct detection experiments, as well as in the missing energy searches at colliders, however, has significantly constrained the theory parameter space. The relic abundance consideration leads to a few favorable scenarios for a (sub) TeV DM, namely the $Z/h/A$ funnels and LSP-sfermion coannihilation. For heavier gauginos, the ``well-tempered'' spectrum \cite{ArkaniHamed:2006mb} may still be valid.
For some recent related works on SUSY DM after the Higgs boson discovery, see Refs.~\cite{workgeneral,workmssm,worknmssm,worklight,workpmssm,Roszkowski:2012uf,Baer:2012uy,Han:2013gba,tstau,Cao:2013mqa,Christensen:2013dra}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
&Models & DM ($<40~\,{\rm GeV}$)& Annihilation \\ \hline
Funnels& NMSSM & Bino/Singlino & $\tilde \chi^0_1\ch10\rightarrow A_1, H_1\rightarrow SM$ \\ \hline
Co-ann.&MSSM~\&~NMSSM&Bino/Singlino & $\tilde \chi^0_1\ch10 \rightarrow f\bar f$; $\tilde \chi^0_1\tilde f\rightarrow Vf$; $\tilde f \tilde f^\prime\rightarrow f f^\prime$ \\
\hline
\end{tabular}
\caption[]{Possible solutions for light ($<40~\,{\rm GeV}$) neutralino DM in MSSM and NMSSM.}
\label{tab:scenarios}
\end{table}
In this paper, we explore the implications of a low mass neutralino LSP dark matter in the mass window $2-40$ GeV in the framework of the Next-to-Minimal-Supersymmetric-Standard-Model (NMSSM, see Ref.~\cite{Ellwanger:2009dp} for a recent review).
The robust bounds on the chargino mass from the LEP experiments
disfavor Wino-like and Higgsino-like neutralinos, and force a light LSP to be largely Bino-like or Singlino-like, or an admixture of these two.
However, those states do not annihilate efficiently to the SM particles in the early universe. Guided by the necessary efficient annihilation to avoid overclosing the universe, we tabulate in Table \ref{tab:scenarios} the potentially effective processes, where the first row indicates the funnel processes near the light Higgs resonances, and the second row lists the coannihilation among the light SUSY states. There is another possibility of combined contributions from the $s$-channel $Z$-boson and SM-like Higgs boson, as well as the $t$-channel light stau ($\sim100~\,{\rm GeV}$). For more details, see Refs.~\cite{Han:2013gba,tstau}.
With a comprehensive scanning procedure, we confirm three types of viable light DM solutions consistent with the direct/indirect searches as well as the relic abundance considerations: $(i)$ $A_1,\ H_1$-funnels, $(ii)$ LSP-stau coannihilation and $(iii)$ LSP-sbottom coannihilation. Type-$(i)$ may take place in any theory with a light scalar (or pseudo-scalar) near the LSP pair threshold; while Type-$(ii)$ and $(iii)$ could occur in the framework of MSSM as well.
These possible solutions all have very distinctive features from the perspective of DM astrophysics and collider phenomenology. We present a comprehensive study of the properties of these solutions and focus on their observational aspects at colliders, including new phenomena in Higgs physics, missing energy searches and light sfermion searches.
The decays of the SM-like Higgs boson may be modified appreciably and the new decay channels to the light SUSY particles may be sizable. The new light CP-even and CP-odd Higgs bosons will decay to a pair of LSPs as well as other observable final states, leading to rich new Higgs phenomenology at colliders.
For the light sfermion searches, the signals would be very difficult to observe at the CERN Large Hadron Collider (LHC) when the LSP mass is nearly degenerate with the parent. However, a lepton collider, such as the International Linear Collider (ILC), would be able to uncover these scenarios, benefiting from its high energy, high luminosity, and clean experimental environment.
This paper is organized as follows. In Sec.~\ref{sec:overview}, we first define the LSP dark matter in the NMSSM, and outline its interactions with the SM particles. We list the relevant model parameters with broad ranges, and compile the current bounds from the collider experiments on them. We then search for the viable solutions in the low mass region by scanning a large volume of parameters. Having shown the existence of these interesting solutions, we comment on the connection to the existing and upcoming experiments for the direct and indirect searches of the WIMP DM.
Focused on the light DM solutions, we study the potential signals of the unique new Higgs physics and of the light sbottom and stau at the LHC in Sec.~\ref{sec:collider} and at the ILC in Sec.~\ref{sec:leptoncollider}.
We summarize our results and conclude in Sec.~\ref{sec:conclude}.
\section{Light Neutralino Dark Matter}
\label{sec:overview}
\subsection{Neutralino Sector in the NMSSM}
\label{sec:theory}
In the NMSSM, the neutralino DM candidate is the lightest eigenstate of the neutralino mass matrix \cite{Ellwanger:2009dp}, which can be written as
\begin{equation}
M_{\tilde N^0} = \left (
\begin{array}{ccccc}
M_1 & 0 & -g_1 \frac {v_d} {\sqrt {2}} & g_1 \frac {v_u} {\sqrt {2}} & 0 \\
& M_2 & g_2 \frac {v_d} {\sqrt {2}} & -g_2 \frac {v_u} {\sqrt {2}} & 0 \\
& & 0 & -\mu & -\lambda v_u\\
& *& & 0 & -\lambda v_d \\
& & & & 2\frac \kappa \lambda \mu \\
\end{array}
\right )
\end{equation}
in the gauge interaction basis of Bino $\tilde B$, Wino $\tilde W^0$, Higgsinos $\tilde H_d^0$ and $\tilde H_u^0$, and Singlino $\tilde S$.
Here $\lambda,\ \kappa$ are the singlet-doublet mixing and the singlet cubic interaction couplings, respectively~\cite{Ellwanger:2009dp}, and we have adopted the convention of $v_d^2+v_u^2=(174~\,{\rm GeV})^2$. The light neutralino, assumed to be the LSP DM candidate, can then be expressed as
\begin{equation}
\tilde \chi^0_1=N_{11}\tilde B+N_{12}\tilde W^0+N_{13}\tilde H_d^0+N_{14}\tilde H_u^0+N_{15}\tilde S,
\end{equation}
where $N_{ij}$ are elements of matrix $N$ that diagonalize neutralino mass matrix $M_{\tilde N_0}$:
\begin{equation}
N^*M_{\tilde N_0}N^{-1}={\rm Diag}\{m_{\tilde \chi^0_1},m_{\tilde \chi_2^0},m_{\tilde \chi_3^0},m_{\tilde \chi_4^0},m_{\tilde \chi_5^0}\},
\end{equation}
with increasing mass ordering for $m_{\tilde\chi_i^0}$.
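For orientation, the mass matrix can be diagonalized numerically. The
following sketch is an illustration of ours, with arbitrary benchmark inputs
rather than a scan point of this paper; it extracts the LSP mass and its
composition $N_{1j}$:
\begin{verbatim}
# Numerical diagonalization of the tree-level neutralino mass matrix.
# Benchmark inputs are illustrative only. Signed eigenvalues correspond
# to Majorana masses up to a field redefinition; we sort by |mass|.
import numpy as np

g1, g2 = 0.36, 0.65               # U(1)_Y and SU(2)_L gauge couplings
v, tanb = 174.0, 10.0             # v_d^2 + v_u^2 = (174 GeV)^2 convention
beta = np.arctan(tanb)
vu, vd = v * np.sin(beta), v * np.cos(beta)
M1, M2, mu = 30.0, 1000.0, 300.0  # soft masses and mu in GeV (illustrative)
lam, kap = 0.10, 0.01             # PQ-like limit: small kappa

r2 = np.sqrt(2.0)
M = np.array([
    [M1, 0.0, -g1 * vd / r2,  g1 * vu / r2, 0.0],
    [0.0, M2,  g2 * vd / r2, -g2 * vu / r2, 0.0],
    [-g1 * vd / r2,  g2 * vd / r2, 0.0, -mu, -lam * vu],
    [ g1 * vu / r2, -g2 * vu / r2, -mu, 0.0, -lam * vd],
    [0.0, 0.0, -lam * vu, -lam * vd, 2.0 * kap * mu / lam]])

m, N = np.linalg.eigh(M)          # real symmetric matrix
idx = np.argsort(np.abs(m))       # order states by |mass|
print("LSP mass [GeV]:", abs(m[idx[0]]))
print("LSP composition (B, W, Hd, Hu, S):", np.round(N[:, idx[0]], 3))
\end{verbatim}
For the inputs above the Singlino diagonal entry is $2\kappa\mu/\lambda = 60$\,GeV, so the LSP comes out Bino-like with a mass close to $M_1$.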
Given the current chargino constraints, a favorable SUSY DM candidate could be either Bino-like, Singlino-like, or a Bino-Singlino mixture. In most cases, the DM follows the properties of the lightest (in absolute value) diagonal entry. Similar to Bino-Wino mixing via Higgsinos, the Bino and the Singlino do not mix directly: they mix through the Higgsinos. A simple matrix argument shows that the mixing is maximal when $M_1 \sim 2\kappa\mu/\lambda$. Given the LEP bounds, this Bino-Singlino mixing is the only large mixing allowed for a light DM candidate.
A particularly interesting case is the Peccei-Quinn limit \cite{pqlimit,Miller:2003ay}, when the singlet cubic coupling is small: $\kappa \rightarrow 0$, and both the singlet-like (CP-odd) Higgs boson and the Singlino can be light.
Under the limit of either a Bino-like LSP $N_{11}\approx 1$ or a Singlino-like LSP $N_{15}\approx 1$,
the couplings of the physical Higgs bosons and the LSP are
\begin{eqnarray}
H_i \tilde \chi^0_1 \tilde \chi^0_1\ (i=1,2,3):
&&{g_1} N_{11} \left[\xi_i^{h_v}(c_\beta N_{13} - s_\beta N_{14})-\xi_i^{H_v}(s_\beta N_{13} + c_\beta N_{14})\right] \nonumber \\
&+& {\sqrt {2}} \lambda N_{15} \left[\xi_i^{h_v}(s_\beta N_{13} + c_\beta N_{14}) + \xi_i^{H_v} (c_\beta N_{13} - s_\beta N_{14})\right]-\sqrt 2 \kappa \xi_i^{S}N_{15}^2 ,
\label{eq:approxhchichi}\\
A_i \tilde \chi^0_1 \tilde \chi^0_1\ (i=1,2) :&&-i {g_1} N_{11} \xi_i^{A} \left[s_\beta N_{13}-c_\beta N_{14}\right] \nonumber \\
&& -i {\sqrt {2}} \lambda N_{15} \xi_i^{A} \left[c_\beta N_{13}+s_\beta N_{14}\right]-i\sqrt 2 \kappa \xi_i^{A_S} N_{15}^2,
\label{eq:approxachichi}
\end{eqnarray}
where $\xi_{i}$ are the mixing matrix elements for the Higgs fields with
\begin{equation}
H_i=\xi_i^{h_v} h_v+\xi_i^{H_v}H_v+\xi_i^{S}S, \ \ \ \ A_i=\xi_i^A A + \xi_i^{A_S} A_S,
\label{eq:frac}
\end{equation}
in the basis of $(h_v, H_v, S)$ for the CP-even Higgs sector and $(A, A_S)$ for the CP-odd Higgs sector.\footnote{In the basis of $(h_v, H_v, S)$, $h_v= \sqrt{2} [\cos\beta\ {\rm Re}(H_d^0) + \sin\beta\ {\rm Re}(H_u^0) ]$ couples to the SM particles with exactly the SM coupling strength; while $H_v= \sqrt{2} [-\sin\beta\ {\rm Re}(H_d^0) + \cos\beta\ {\rm Re}(H_u^0)] $ does not couple to the SM $W$ and $Z$.
Similarly, $A$ and $A_S$ are the CP-odd MSSM Higgs and singlet Higgs, respectively~\cite{Miller:2003ay}. }
In the limit of a decoupling MSSM Higgs sector plus a singlet, the singlet-like Higgs has $\xi^{S}\approx 1$ and the SM-like Higgs has $\xi^{h_v}\approx 1$.
Specifically, in the Bino-like LSP scenario,
\begin{eqnarray}
&&N_{11}\approx 1,\ \ N_{15}\approx 0,\ \ N_{13}\approx \frac{m_Z s_W}{\mu} s_\beta,\ \ N_{14}\approx -\frac{m_Z s_W}{\mu} c_\beta,
\label{eq:Bino_limit} \\
&&H_i \tilde \chi^0_1\ch10: {g_1} N_{11} \frac{m_Z s_W}{\mu} \left[\xi_i^{h_v}s_{2\beta}+\xi_i^{H_v} c_{2 \beta} \right]-\sqrt 2 \kappa \xi_i^{S}N_{15}^2,
\label{eq:Bino_Hchichi_limit}\\
&&A_i \tilde \chi^0_1\ch10: -i g_1 N_{11}\frac{m_Z s_W}{\mu} \xi_i^A-i\sqrt 2 \kappa \xi_i^{A_S} N_{15}^2.
\label{eq:Bino_Achichi_limit}
\end{eqnarray}
The couplings to the SM-like or MSSM-like Higgs bosons are proportional to the Bino-Higgsino mixing of the order ${\cal O}(m_Z s_W/{\mu})$.
The coupling to the SM-like Higgs with $\xi_i^{h_v}\approx 1,\ \xi_i^{H_v} \ll 1$ is roughly $ s_{2\beta}+\xi_i^{H_v} c_{2 \beta}$, and is typically suppressed for $\tan\beta >1$. The coupling to the MSSM-like Higgs with $\xi_i^{H_v}\approx 1,\ \xi_i^{h_v} \ll 1$, on the other hand, is unsuppressed.
The couplings to the singlet-like (CP-even and CP-odd) Higgs bosons are suppressed by $N_{15}^2$.
In the Singlino-like LSP scenario,
\begin{eqnarray}
&& N_{11}\approx 0,\ \ N_{15}\approx 1,\ \ N_{13}\approx -\frac{\lambda v}{\mu} c_\beta,\ \ N_{14}\approx -\frac{\lambda v}{\mu} s_\beta,
\label{eq:Singlino_limit} \\
&& H_i \tilde \chi^0_1\ch10:
-{\sqrt {2}} \lambda N_{15}\frac{\lambda v}{\mu} \left[\xi_i^{h_v}s_{2\beta}+\xi_i^{H_v} c_{2 \beta} \right] -\sqrt 2 \kappa \xi_i^{S} N_{15}^2,
\label{eq:Singlino_Hchichi_limit}\\
&& A_i \tilde \chi^0_1\ch10: i \sqrt{2} N_{15}\frac{\lambda v}{\mu} \xi_i^{A} - i \sqrt 2 \kappa \xi_i^{A_S} N_{15}^2.
\label{eq:Singlino_Achichi_limit}
\end{eqnarray}
The couplings to the SM-like or MSSM-like Higgs bosons are proportional to the Singlino-Higgsino mixing of the order ${\cal O}(\lambda v/{\mu})$. The contributions from the $h_v$ and $H_v$ components follow the same relation as in the Bino-like LSP case above. The coupling to the singlet-like Higgs can be approximated as $-\sqrt 2 \kappa N_{15}^2$, proportional to the Singlino component and the PQ symmetry-breaking parameter $\kappa$.
Neutralinos couple to fermion-sfermion pairs through their Bino, Wino and Higgsino components, proportional to the corresponding ${\rm U}(1)_Y$ hypercharge, ${\rm SU}(2)_L$ charge and $\tan\beta$-modified Yukawa couplings. For the Bino-like LSP, the coupling is dominated by the ${\rm U}(1)_Y$ hypercharge. For the Singlino-like LSP, the couplings to the SM fermions are more complex, as the leading contributions depend on the mixing with the gauginos and Higgsinos.
\subsection{Parameters and Constraints}
\label{sec:constrains}
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
&General&\multicolumn{3}{|c|}{Scenario-dedicated Scan} \\ \cline{3-5}
& Scan & Sbottom & Stau & $H_1$, $A_1$-funnels \\ \hline
$m_{A_{\rm tree}}$ & [0,3000] & $\ldots$ & $\ldots$ & $\ldots$\\ \hline
$\tan\beta$ & [1,55] & $\ldots$ & $\ldots$ &$\ldots$ \\ \hline
$\mu$ & [100,500] & $\ldots$ & $\ldots$ &$\ldots$ \\ \hline
$|A_\kappa|$ & [0,1000] &$\ldots$ & $\ldots$ & $\ldots$\\ \hline
$\lambda$ & [0,1] &$\ldots$ & $\ldots$& [0.01,0.6] \\ \hline
$\kappa$ & [0,1] & \multicolumn{3}{|c|}{either $ \kappa\in[2,30]\lambda/(2\mu)$} \\ \cline{1-2}
$|M_1|$ & [0,500] & \multicolumn{3}{|c|}{or $M_1\in[2,30]$, or both} \\ \hline
$M_{Q3},~M_{U3}$ & [0,3000] & $\ldots$ & $\ldots$ & $\ldots$\\ \hline
$|A_{t}|$ & [0,4000] & $\ldots$ &$\ldots$ &$\ldots$\\ \hline \hline
$M_{D3}$ & [0,3000] & [0,80] & \multicolumn{2}{|c|}{3000} \\ \hline
$|A_{b}|$ & [0,4000] & $\ldots$ & \multicolumn{2}{|c|}{0} \\ \hline \hline
$M_{L3}, M_{E3}$ & [0,3000] & 3000 & [0,500] & 3000\\ \hline
$|A_{\tau}|$ & [0,4000] & 0 & [0,2000] & 0 \\ \hline
\end{tabular}%
\caption{The parameters and ranges considered. The symbols ``$\ldots$'' indicate scanning ranges identical to those in the general scan.
}
\label{tab:range}
%
\end{table}%
There are 15 parameters relevant to our low-mass DM consideration. In the Higgs sector with a doublet and a singlet, the tree-level parameters are $m_{A_{tree}}$,\footnote{$m_{A_{tree}}$ is the tree-level MSSM CP-odd Higgs mass parameter, defined as $m^2_{A_{tree}}=\frac {2\mu} {\sin2\beta} (A_\lambda + \frac \kappa \lambda \mu)$~\cite{Christensen:2013dra,Ellwanger:2009dp}.} $\tan\beta$, $\mu$, $\lambda$, $\kappa$ and $A_\kappa$, and loop-level correction parameters on the stop sector $M_{Q3}$, $M_{U3}$ and $A_{t}$. These parameters also determine the Higgsino masses, Singlino mass and make strong connections between these particle sectors. The soft SUSY breaking gaugino mass $M_1$ governs the Bino mass.
To explore sfermion coannihilation with the LSP, we choose the third-generation stau and sbottom as benchmarks by including $M_{L3}$, $M_{E3}$ and $A_{\tau}$ for the stau, and $M_{D3}$ and $A_{b}$ for the sbottom. The third-generation sfermion sectors are expected to have potentially large mixing and small masses from the theoretical point of view, and are also the least constrained sectors from the phenomenological perspective.
We decouple the other squarks and sleptons by setting their masses to $3~\,{\rm TeV}$ and the other trilinear terms to zero.
The range of the $\mu$ parameter is mainly motivated by the LEP lower bounds on the chargino mass. The upper bounds on the superparticle mass parameters and the $\mu$ parameter are motivated by the naturalness argument \cite{Baer:2012uy,naturalness}.
In the rest of the study, we employ a comprehensive random scan over these 15 parameters, which are summarized in Table \ref{tab:range}.
The second column presents the parameter ranges for our general scan. To effectively look for possible solutions, we also devise several scenario-dedicated scans, as listed in the other columns: the sbottom scan, the stau scan and the $A_1,\ H_1$-funnels scan, with certain relationships enforced and simplified parameters for the different scenarios. The combinations for $\kappa$ and $M_1$ are motivated by focusing on the Bino-like and Singlino-like LSP. In addition, we also choose several benchmarks as seeds and vary the DM mass parameters accordingly. This helps us to examine the possibility of a Bino-Singlino mixture as well as solutions with fixed sfermion masses.
Focusing on the light DM scenarios motivated in Table \ref{tab:scenarios}, and guided by the relevant collider bounds to be discussed in the next section, we adopt the following theoretical and experimental constraints for the rest of the study:
\begin{itemize}
\item 2$\sigma$ window of the SM-like Higgs boson mass: $122.7 - 128.7$ GeV, with an estimated theoretical uncertainty of $\pm 2~\,{\rm GeV}$ added linearly.
\item 2$\sigma$ windows of the SM-like Higgs boson cross sections for $\gamma\gamma$, $ZZ$, $W^+W^-$, $\tau^+\tau^-$ and $b\bar b$ final states with different production modes.
\item Bounds on the other Higgs searches from LEP, the Tevatron and the LHC.
\item LEP, Tevatron and LHC constraints on searches for supersymmetric particles, such as charginos, sleptons and squarks.
\item Bounds on the $Z$ boson invisible width and hadronic width.
\item $B$-physics constraints, including $b\rightarrow s\gamma$, $B_s\rightarrow \mu^+\mu^-$, $B\rightarrow X_s \mu^+ \mu^-$ and $B^+\rightarrow \tau^+ \nu_\tau$, as well as $\Delta m_s$, $\Delta m_d$, $m_{\eta_b(1S)}$ and $\Upsilon(1S)\rightarrow a \gamma, h\gamma$.
\item Theoretical constraints, such as a physical global minimum and the absence of tachyonic solutions.
\end{itemize}
We use a modified version of NMSSMTools 4.2.1~\cite{NMSSMTools} to search for viable DM solutions that satisfy the above conditions.
\subsection{Highlights from Experimental Bounds}
\label{sec:collider_constraints}
The absence of deviations from the SM predictions for precision observables, as well as the null results of direct searches for new physics, puts strong bounds on the parameters. We take them into account to guide our DM study. In this subsection, we highlight some specific collider constraints that are most relevant to our light neutralino DM study.
\subsubsection{Bounds on Light Neutralino LSP}
Precision measurements of the $Z$ boson invisible width put a strong constraint on a light neutralino LSP. The 95\% C.L. upper limit on the $Z$ boson invisible width is~\cite{ALEPH:2005ab}
\begin{equation}
\Delta\Gamma_{\rm inv} < 2.0~\,{\rm MeV}.
\label{eq:Zinv}
\end{equation}
The $Z$ boson coupling to a pair of neutralino LSPs is proportional to $N_{14}^2-N_{13}^2$
and vanishes when $\tan\beta=1$. This coupling is also small when the LSP is ``decoupled'' from the Higgsinos, {\it e.g.}, for a Bino-like LSP with $|\mu| \gg |M_1|, g_1 v_{u,d}$ or a Singlino-like LSP with $|\mu| \gg 2 |\kappa\mu/\lambda|, |\lambda| v_{u,d}$.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.238]{Delta_Z}}
\subfigure{
\includegraphics[scale=0.25]{sfmix}}
\caption[]{
$Z$ boson partial decay widths (left panel) and coupling parameters $|N_{13}^2-N_{14}^2|$ , $\cos^2\theta_{\tilde{b}}$, $\cos^2\theta_{\tilde{\tau}}$ (right panel) to the pairs $\tilde{\chi}_1^0\tilde{\chi}_1^0$ (red), $\tilde{b}_1\tilde{b}_1$ (green) and $\tilde{\tau}_1\tilde{\tau}_1$ (blue) versus the neutralino and sfermion masses.
Constraints on $\Delta\Gamma_{\rm inv}$ in Eq.~(\ref{eq:Zinv}) and $\Delta\Gamma_{\rm tot}$ in Eq.~(\ref{eq:Ztotal}) are imposed.
}
\label{fig:Zwidth}
\end{figure}
We show the impact of Eq.~(\ref{eq:Zinv}) on the relevant mass and coupling parameters in Fig.~\ref{fig:Zwidth}.
The left panel shows in red the scanning results of $\Gamma(Z \rightarrow \tilde{\chi}_1^0\tilde{\chi}_1^0)$ as a function of $m_{\tilde \chi^0_1}$.
The resulting $|N_{13}^2-N_{14}^2|$, which governs the $Z\tilde{\chi}_1^0\tilde{\chi}_1^0$ coupling, is shown in the right panel (red). Its typical value is near 0.1. The increase in the allowed range for larger $m_{\tilde \chi^0_1}$ is due to the extra phase space suppression near the $Z$ decay threshold. For $\tan\beta>1$ and negligible $Z$ decay phase space suppression, this requires $\mu \gsim 140~\,{\rm GeV}$ for the Bino limit shown in Eq.~(\ref{eq:Bino_limit}) and $\mu/\lambda \gtrsim 540~\,{\rm GeV}$ for the Singlino limit shown in Eq.~(\ref{eq:Singlino_limit}).
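The two quoted lower bounds follow from the limiting expressions in
Eqs.~(\ref{eq:Bino_limit}) and (\ref{eq:Singlino_limit}); as a quick
cross-check of ours, taking $|N_{13}^2-N_{14}^2| \lesssim 0.1$ read off
from the figure:
\begin{verbatim}
# Cross-check of the mu bounds implied by |N13^2 - N14^2| <~ 0.1.
# Bino limit:     N13^2 - N14^2 = -(m_Z s_W / mu)^2 cos(2 beta)
# Singlino limit: N13^2 - N14^2 =  (lam v / mu)^2  cos(2 beta)
import math

coupling_max = 0.1
mZ, sW, v = 91.19, 0.48, 174.0     # GeV; sin(theta_W) ~ 0.48

mu_min_bino = mZ * sW / math.sqrt(coupling_max)   # at |cos 2beta| -> 1
mu_over_lam_min = v / math.sqrt(coupling_max)
print(mu_min_bino, mu_over_lam_min)   # ~ 140 GeV and ~ 550 (~540 quoted)
\end{verbatim}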
The property of the neutralino LSP is also constrained by the invisible decay branching fraction of the observed 126 GeV Higgs, with a 95\% C.L.~upper limit on ${\rm Br}_{\rm inv}$ of about $56\%$~\cite{Belanger:2013kya} from an indirect fit to the currently observed production and decay rates. Current direct searches for invisible Higgs decays in $ZH$ associated production and VBF set limits of ${\rm Br}_{\rm inv}<75\%$~\cite{Aad:2014iia} and ${\rm Br}_{\rm inv}<58\%$~\cite{Chatrchyan:2014tja}, respectively. Limits from other search channels, such as mono-jet and $WH$ associated production, can also contribute (see, {\it e.g.},~Ref.~\cite{exotic}) but are relatively weak as well.
\subsubsection{Bounds on Light Sfermions}
Superpartners of light quarks and leptons are in general excluded up to a few hundred GeV for arbitrary mass splittings~\cite{Beringer:1900zz} and are not suitable NLSP candidates to coannihilate with a light neutralino LSP. The stop has been excluded up to 63 GeV at LEP~\cite{Heister:2002hp} for arbitrary mixing angles and splittings. A sneutrino is in general unlikely to coannihilate with the light Bino-like LSP, because the $Z$-boson invisible width searches forbid a light sneutrino. Only the sbottom and the stau could coannihilate with the light neutralino LSP.
A light sbottom or stau also contributes to the $Z$ hadronic width.
The current experimental measurement of the $Z$ boson total width is $2.4952\pm0.0023~\,{\rm GeV}$~\cite{ALEPH:2005ab}, leading to
\begin{equation}
\Delta\Gamma_{\rm tot}<4.7~\,{\rm MeV} \ {\rm at\ 95\%\ C.L.},
\label{eq:Ztotal}
\end{equation}
which includes a theoretical uncertainty of $\sim0.5~\,{\rm MeV}$ based on a complete calculation with electroweak two-loop corrections~\cite{Freitas:2013dpa}.
The couplings of the $Z$ to the sfermions depend on the sfermion mixing angles, which originate from the left-right mixing in the sfermion mass matrices.
We adopt the mixing angle $\theta_{\tilde{f}}$ convention in which the lighter sfermion mass eigenstate is $\tilde f_1=\cos\theta_{\tilde{f}} \tilde f_L + \sin\theta_{\tilde{f}} \tilde f_R$. The $Z$ boson coupling to the sfermions can then be expressed as
\begin{equation}
Z\tilde f_1 \tilde f_1: g_f^L \cos^2\theta_{\tilde{f}}+g_f^R \sin^2\theta_{\tilde{f}},
\end{equation}
with $g_f^{L} = - (T_{3f} - Q_f \sin^2\theta_{\rm w})$ and $g_f^{R} = Q_f \sin^2\theta_{\rm w}$ being the left-handed and right-handed chiral couplings of the corresponding SM fermions. To minimize the $Z\tilde{f}_1\tilde{f}_1$ coupling, and thus suppress the contribution to $\Gamma_{\rm tot}$, $\theta_{\tilde{f}}$ needs to be near the $Z$-decoupling value $\tan^2\theta_{\tilde{f}}^{min}=-g_f^L/g_f^R$. For a sbottom (down-type squark), $\tan^2\theta_{\tilde{f}}^{min}=5.49$, preferring the lighter sbottom to be right-handed. For a stau (slepton), $\tan^2\theta_{\tilde{f}}^{min}=1.16$, preferring the lighter stau to be a nearly even mixture of $\tilde \tau_L$ and $\tilde \tau_R$.
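The two decoupling values quoted above follow directly from the SM chiral
charges, as in this cross-check of ours with
$\sin^2\theta_{\rm w} \simeq 0.231$:
\begin{verbatim}
# Z-decoupling sfermion mixing angle: tan^2(theta_min) = -g_L/g_R.
sw2 = 0.231                         # sin^2(theta_W)

def tan2_theta_min(T3, Q):
    gL = -(T3 - Q * sw2)            # left-handed chiral coupling
    gR = Q * sw2                    # right-handed chiral coupling
    return -gL / gR

print("sbottom:", tan2_theta_min(-0.5, -1.0 / 3.0))  # ~ 5.49
print("stau   :", tan2_theta_min(-0.5, -1.0))        # ~ 1.16
\end{verbatim}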
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\tilde f$ & $m_{min} (\,{\rm GeV})$ & Ref. & Condition \\ \hline \hline
& 76 & DELPHI~\cite{Abdallah:2003xe} & $\tilde b \rightarrow b \tilde \chi^0$, all $\theta_{\tilde{b}}$, $\Delta m>7~\,{\rm GeV}$ \\ \cline {2-4}
$\tilde b$ & 89 & ALEPH~\cite{Heister:2002hp} & $\tilde b \rightarrow b \tilde \chi^0 $, all $\theta_{\tilde{b}}$, $\Delta m>10~\,{\rm GeV}$ \\ \cline{2-4}
& $645$ & ATLAS~\cite{Aad:2011cw,Aad:2013ija} & $\tilde b \rightarrow b \tilde \chi^0_1 $, $m_{\tilde \chi^0_1} < 100~\,{\rm GeV}$, for $m_{\tilde b}>100~\,{\rm GeV}$\\ \hline\hline
\multirow{1}[0]{*}{$\tilde \tau$} & $26.3\ (81.9)$ & DELPHI~\cite{Abdallah:2003xe} & $\tilde \tau \rightarrow \tau \tilde \chi^0_1, \Delta m>m_\tau\ (15~\,{\rm GeV}$), all $\theta_{\tilde{\tau}}$ \\ \cline{2-4}
\hline
\end{tabular}
\caption[]{Collider constraints on the sbottom and stau. Some of the above constraints are taken from the Review of Particle Physics \cite{Beringer:1900zz}.
}
\label{tab:sfermion}
\end{table}
The left panel of Fig.~\ref{fig:Zwidth} shows the scanning results of $\Gamma(Z \rightarrow \tilde{b}_1\tilde{b}_1,\tilde{\tau}_1\tilde{\tau}_1 )$ as a function of $m_{\tilde{b}_1}$, $m_{\tilde{\tau}_1}$ after imposing $\Delta\Gamma_{\rm tot}<4.7$ MeV. The resulting mixing parameters $\cos^2\theta_{\tilde{f}}$ are shown in the right panel. The light sbottom is almost completely right-handed, with $\cos\theta_{\tilde{b}}\approx 0$ and $m_{\tilde{b}_1} \gtrsim 16$ GeV.
For the light stau, a wide range of $\cos^2\theta_{\tilde{\tau}} \lesssim 0.25 $ can be accommodated with $m_{\tilde{\tau}_1} \gtrsim 32$ GeV, especially for large $m_{\tilde\tau_1}$ when there is extra kinematic suppression in phase space.
Light sbottom and light stau are also constrained by many other collider searches, as summarized in Table~\ref{tab:sfermion}.
The LEP constraints on sfermion pair production exclude sbottoms and staus with masses $\lesssim 80-90$ GeV for relatively large mass splittings $\Delta m = m_{\tilde{b}, \tilde\tau}-m_{\tilde \chi^0_1} \gtrsim 5$ GeV, independent of the sfermion mixing angles.
Once $\Delta m$ becomes small ($\lesssim 5$ GeV), the LEP constraints are relaxed.
Mono-photon searches at LEP~\cite{LEPDM} could constrain the extremely degenerate LSP-NLSP sfermion case. The limits, however, do not apply for GeV-level mass splittings, due to the hadronic activity veto applied in the analysis.
There are currently no LHC bounds on the stau. The existing analyses for sbottom searches at the LHC are optimized for heavy ($>$ 100 GeV) sbottoms and larger mass splittings. These bounds are applicable to the heavier sbottom $\tilde b_2$
after taking into account the branching fraction modifications of the $\tilde b_2$ decay.
To summarize, as a result of stringent collider constraints, the coannihilator sfermions considered in this paper are in the rather narrow ranges
\begin{eqnarray}
{\rm Stau:}&~m_{\tilde \tau_1} = (32-45)~\,{\rm GeV}~&{\rm with } ~\Delta m= m_{\tilde\tau}-m_{\tilde \chi^0_1} < (3-5)~\,{\rm GeV},
\label{eq:dmst}\\
{\rm Sbottom:}&~m_{\tilde b_1} = (16-45)~\,{\rm GeV}~&{\rm with } ~\Delta m= m_{\tilde{b}}-m_{\tilde \chi^0_1} < 7~\,{\rm GeV} .
\label{eq:dmsb}
\end{eqnarray}
\subsubsection{Bounds on Light Higgs Bosons}
Current measurements of the Higgs properties at the LHC, in particular in the discovery modes $H\rightarrow\gamma\gamma$ and $H\rightarrow ZZ^*$, point to the 126 GeV Higgs being very SM-like.
For the NMSSM, it is conceivable to have light Higgs bosons from the singlet Higgs fields, especially in the approximate PQ-symmetry limit of the NMSSM. These light Higgs bosons could be either CP-even or CP-odd.
A light CP-even Higgs boson also appears in the non-decoupling solution of the MSSM~\cite{Christensen:2012ei}.
They could give rise to new decay channels of the SM-like Higgs boson observed at the LHC and thus would be constrained by the current observations~\cite{LHCHAA,exotic}.
If the light Higgs bosons are present in the main annihilation channels for the DM, such as in the case of $A_1,\ H_1$-funnels, slight mixing with the MSSM Higgs sector is required to ensure large enough cross sections for $\tilde \chi^0_1\ch10\rightarrow A_1/H_1 \rightarrow$ SM particles in the early universe.
If a sizable spin-independent direct detection rate is desired and is mainly mediated by the singlet-like light CP-even Higgs boson, a sizable mixing with the MSSM CP-even sector is required as well. The LEP experiments have made dedicated searches for light Higgs bosons and set tight constraints on the MSSM components of the light Higgs, $\xi_1^{h_v}$ and $\xi_1^{H_v}$.
NMSSMTools~\cite{NMSSMTools} incorporates all these constraints on the light Higgs bosons. Hadron collider searches for light CP-odd Higgs bosons are also included.
\subsubsection{Relic Abundance Considerations}
\label{sec:relic}
In the multi-variable parameter space of the NMSSM, the collider constraints presented in the previous sections serve as the starting point for viable solutions. In connection with the direct and indirect searches, the DM related observables, such as the Spin-Independent~(SI) cross sections $\sigma_{p,n}^{\rm SI}$, the Spin-Dependent~(SD) cross sections $\sigma_{p,n}^{\rm SD}$, the indirect search rate $\langle\sigma v\rangle$ and the relic density $\Omega h^2$, are calculated with MicrOmegas 2.2~\cite{Belanger:2008sj} integrated with NMSSMTools. Furthermore, we choose the LSP to be the neutralino and consider its contribution to the current relic abundance. As a rather tight requirement, we demand that the calculated relic density fall within the 2$\sigma$ window of the observed relic density~\cite{planckwmap}, enlarged by a $10\%$ theoretical uncertainty~\cite{Roszkowski:2012uf,Akcay:2012db}. To be conservative, we also consider a loose requirement in which the neutralino LSP provides only part of the DM relic abundance, leaving room for other non-standard scenarios such as multiple-DM scenarios~\cite{multidm}.
We thus choose the tight (loose) relic density requirement as
\begin{equation}
0.0947~(0.001) < \Omega_{\tilde \chi^0_1} h^2 < 0.142 .
\label{eq:relic}
\end{equation}
\subsection{DM Properties}
\label{sec:DMproperties}
With a comprehensive scanning procedure over the 15 parameters as listed in Table \ref{tab:range}, we now present the interesting features of the viable LSP DM solutions and discuss their implications and consequences.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{mdm_rel.png}
\label{fig:relic}}
\subfigure{
\includegraphics[scale=0.255]{mdm_direct.png}
\label{fig:direct}}
\caption[]{
Relic density (left panel) and scaled spin-independent cross section $\sigma_p^{\rm SI}$ (right panel) versus neutralino DM mass. All points pass constraints described in Sec.~\ref{sec:constrains}. The $A_1,\ H_1$-funnels, sbottom coannihilation, stau coannihilation solutions are shown in red, green and blue dots, respectively. Left panel: All points pass the LUX~\cite{Akerib:2013tjd} and superCDMS~\cite{Agnese:2014aze} direct detection constraints. The grey shaded region shows the sbottom coannihilation solutions that are excluded by direct detection. The horizontal line is the lower limit for the tight relic requirement. Right panel: The color points (shaded regions) are the viable solutions that pass tight (loose) relic abundance constraints specified in Eq.~(\ref{eq:relic}). Also shown are the 68\% and 95\% C.L. signal contours from CDMS II~\cite{Agnese:2013rvf} (dotted black enclosed region), 95\% C.L. exclusion and projected exclusion limits from superCDMS (solid and dashed black) and LUX/LZ (solid and dashed magenta). The grey shaded region at the bottom is for the coherent neutrino-nucleus scattering backgrounds \cite{Billard:2013qya}.
}
\label{fig:relicdirect}
\end{figure}
We show the DM relic density $\Omega h^2$ (left panel) and the scaled\footnote{DM direct detection observables are scaled with the ratio of the LSP relic density over the measured value.} spin-independent cross section $\sigma_p^{\rm SI}$ (right panel) versus the neutralino DM mass in Fig.~\ref{fig:relicdirect}. The red, green, and blue dots are the points in the $A_1,\ H_1$-funnels, sbottom, and stau coannihilation regions, respectively, which satisfy all constraints described in Sec.~\ref{sec:constrains},
as well as the direct detection limits from LUX~\cite{Akerib:2013tjd} and superCDMS~\cite{Agnese:2014aze}. The grey shaded region shows the sbottom coannihilation solutions that are excluded by direct detection. The horizontal line marks the lower limit for the tight relic abundance requirement.
In the right panel, the color points (shaded regions) are the viable solutions that pass the tight (loose) relic abundance constraints specified in Eq.~(\ref{eq:relic}). To gain some perspective, also shown are the 68\% and 95\% C.L.~signal contours from CDMS II~\cite{Agnese:2013rvf}, the current 95\% C.L.~exclusion and the projected future exclusion limit from superCDMS, and the current LUX result and future LZ expectation. The grey shaded region at the bottom indicates the coherent neutrino-nucleus scattering background~\cite{Billard:2013qya}, below which the signal extraction would be considerably harder.
As seen from the left panel of Fig.~\ref{fig:relicdirect}, all three scenarios listed in Table \ref{tab:scenarios}
could provide the right amount of relic cold dark matter within the 2$\sigma$ Planck region.
However, results from the DM direct detection have led to important constraints, cutting deep into the regions consistent with the relic density considerations, in particular, for the sbottom coannihilation case.
The direct detection in the sbottom coannihilation scenario receives a large contribution from the light sbottom exchange, typically of the order of $10^{-8}$--$10^{-5}$ pb, which is severely constrained by the current searches from LUX and superCDMS.
The large grey shaded region of sbottom coannihilation solutions in the left panel of Fig.~\ref{fig:relicdirect} is excluded by the direct detection constraints. This is also seen in the right panel of Fig.~\ref{fig:relicdirect}, where the green dots are mostly excluded by the direct detection.
There is, however, a narrow dip region for $ m_{\tilde{b}_1}- m_{\tilde \chi^0_1} < 3~\,{\rm GeV}$ where the direct detection rate can be suppressed below the current limit (for example, see~\cite{Gondolo:2013wwa}). Such small mass splittings imply late freeze-out of the coannihilator, resulting in a low relic density for the DM. For $m_{\tilde{b}_1}-m_{\tilde \chi^0_1} > m_b$, the direct detection rate decreases slowly as the splitting increases, and the collider searches from LEP exclude large mass splittings. Consequently, to survive the direct detection, loose relic density, and collider constraints, the mass splitting typically needs either to lie between 2 GeV and $m_b$, or to be as large as the LEP searches allow.
On the other hand, the $A_1,\ H_1$-funnel and stau coannihilation cases are not much affected by the direct detection constraints; only a small fraction of their solutions is excluded. For the $A_1,\ H_1$-funnel region, $m_{\tilde \chi^0_1}$ spans the whole range of 2$-$40 GeV. For the sbottom (stau) coannihilation, only $ m_{\tilde \chi^0_1} \gtrsim 10$ (30) GeV is viable due to the tight LEP constraints.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{deltam_rel}}
\subfigure{
\includegraphics[scale=0.24]{mn1msf}}
\caption[]{Left panel: relic density versus the mass splitting $|m_{A_1,H_1}-2m_{\tilde \chi^0_1}|/m_{A_1,H_1}$ for the $A_1$-funnel (red) and $H_1$-funnel (blue). Grey points represent solutions with non-negligible $s$-channel $Z$-boson contributions.
Right panel: the sfermion masses versus the neutralino LSP mass for the coannihilation regions. The shaded/dotted regions are those passing the loose/tight relic density requirement for the sbottom coannihilation (green) and stau coannihilation (blue). The diagonal lines indicate mass splittings of 0, 1.7 ($m_\tau$), 4.2 ($m_b$), and 7 GeV for reference.
}
\label{fig:masses}
\end{figure}
There are several recent studies of possible ``blind spots'' for direct detection, where a large accidental cancellation in the neutralino-Higgs couplings occurs~\cite{Cheung:2012qy,Han:2013gba,Huang:2014xua}. Ref.~\cite{Huang:2014xua} specifically pointed out the non-negligible cancellation between the direct detection amplitudes mediated by the light and heavy CP-even Higgs bosons for a negative $\mu$ parameter. Such constructions could further reduce the direct detection rate for our $A_1,\ H_1$-funnel and stau coannihilation solutions.
The left panel of Fig.~\ref{fig:masses} shows the relic density versus the mass splitting $|m_{A_1,H_1}-2m_{\tilde \chi^0_1}|/m_{A_1,H_1}$ for the $A_1,\ H_1$-funnel region. The deviation from the pole mass is typically less than 15\% to satisfy the relic density constraints, with $|m_{A_1,H_1}-2m_{\tilde \chi^0_1}| \lesssim 12$ GeV. The relic density is determined by the interplay among the LSP's couplings to the resonant Higgs mediator, the Higgs couplings to SM particles, and the resonance enhancement in the early universe. For larger deviations from the resonance region, there are non-negligible $Z$-mediated contributions (indicated by grey points in Fig.~\ref{fig:masses}), as emphasized in Refs.~\cite{Han:2013gba,zfunnel}.
The right panel of Fig.~\ref{fig:masses} shows the sbottom/stau mass versus the neutralino LSP mass for the sbottom/stau coannihilation regions. For the sbottom, imposing the loose relic density requirement and collider constraints yields 2 GeV $< m_{\tilde{b}_1} -m_{\tilde \chi^0_1} < 7$ GeV. Most points that satisfy the direct detection fall in the region of 2 GeV $< m_{\tilde{b}_1} -m_{\tilde \chi^0_1} < m_b$, which typically has a suppressed relic density. Only very few points survive both the dark matter direct detection and the tight relic density requirement, with $m_{\tilde \chi^0_1} \sim 20$ GeV and $m_{\tilde{b}_1} -m_{\tilde \chi^0_1} \sim 6$ GeV.
For the stau, imposing the direct detection bound does not restrict the mass regions further, while imposing the tight relic density requirement favors slightly larger stau masses.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{mdm_n1j1_a1h1_tight_decomp_b}}
\subfigure{
\includegraphics[scale=0.25]{mdm_n1j1_a1h1_tight_decomp_s}}
\caption[]{The LSP DM candidate components $N^2_{1j}$ as a function of its mass in the $A_1,\ H_1$-funnel region with tight relic constraints. The left panel is for the Bino-like LSP ($N_{11}^2>0.5$) and the right panel is for Singlino-like LSP ($N_{15}^2>0.5$).
}
\label{fig:components_funnel}
\end{figure}
It is informative to understand the nature of the DM LSP in terms of its gaugino, Higgsino and Singlino components $N_{1j}^2$. These are shown in Figs.~\ref{fig:components_funnel} and \ref{fig:components_coann} as a function of the LSP mass, for the $A_1,\ H_1$-funnel region and the stau and sbottom coannihilation regions, respectively.
As seen in Fig.~\ref{fig:components_funnel}, for the $A_1,\ H_1$-funnel case, the dark matter could either be Bino (dark black dots) or Singlino (light black dots) dominated, or as a mixture of these two.
For a Bino-like LSP (left panel), the $\tilde{H}_d$ component is typically larger, about 0.5\%$-$5\%, while the $\tilde{H}_u$ component is suppressed, $\lesssim$ 0.1\%. For a Singlino-like LSP (right panel), the $\tilde{H}_u$ component is larger, around 1\%$-$10\%, while the $\tilde{H}_d$ fraction is much more suppressed. These features are direct results of the mixing matrix as shown in Eqs.~(\ref{eq:Bino_limit}) and (\ref{eq:Singlino_limit}).
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.17]{mdm_n1j2_loose_b}}
\subfigure{
\includegraphics[scale=0.17]{mdm_n1j2_loose_s}}
\subfigure{
\includegraphics[scale=0.17]{mdm_n1j3}}
\caption[]{The LSP DM candidate components $N^2_{1j}$ as a function of its mass for stau coannihilation with the Bino-like LSP (left panel) and the Singlino-like LSP (middle panel) and sbottom coannihilation (right panel) with the loose relic density constraint.
}
\label{fig:components_coann}
\end{figure}
As seen in Fig.~\ref{fig:components_coann}, the stau coannihilation case can have the LSP being dominantly Bino-like (left panel) with a Higgsino fraction up to about $5\%$ (mostly $\tilde{H}_d$),
or dominantly Singlino-like (middle panel) with a Higgsino fraction up to about 20\% (mostly $\tilde{H}_u$). The Singlino-like LSP case usually has a larger relic density due to its suppressed coupling to the stau coannihilator. The sbottom coannihilation case (right panel) has a much smaller Higgsino fraction, $0.5\%$ or less, with the LSP being mostly Bino-like.
Finally, we comment on the degree of mass degeneracy required for these solutions. For the funnel case, the requirement is mostly for the LSP pair to hit the resonance. In terms of the measure $|m_{H_1/A_1}-2 m_{\tilde \chi^0_1}|/m_{H_1/A_1}$, a mass splitting of about $10\%$ between the neutralino pair and the singlet-like Higgs is more than sufficient to provide viable solutions, as shown in Fig.~\ref{fig:masses}.
For the sfermion coannihilation, several requirements need to be satisfied simultaneously. One is the near degeneracy of the coannihilator and the LSP, enforced by the LEP constraints and effective coannihilation. The other is the appropriate amount of L-R mixing while keeping the heavier eigenstate above several hundred GeV, as enforced by the $Z$-boson width constraint, collider searches for sfermions, and the decays of the SM-like Higgs boson. This tuning leads to the lack of solutions with $Z$-decoupled sfermions, as shown in Fig.~\ref{fig:Zwidth}. Overall, light neutralino solutions require a certain level of tuning, and future searches are likely either to lead to a discovery or to push the solutions into much narrower and more fine-tuned regions.
\subsection{Direct and Indirect Detection}
As already discussed in the last section, the spin-independent (SI) direct detection rates of all three scenarios with the loose relic density constraint vary over a wide range.
The SI scattering is typically mediated by the CP-even Higgs bosons via $t$-channel exchange. The partons in the nucleon couple directly to the MSSM doublet Higgs bosons (or $h_v$ and $H_v$). The dark matter candidate, which is Bino-like or Singlino-like, couples to the doublet Higgs bosons only through its Higgsino components, as shown in Eqs.~(\ref{eq:Bino_Hchichi_limit}) and (\ref{eq:Singlino_Hchichi_limit}). The direct detection rate is thus usually suppressed, because the singlet-like Higgs couples to the SM fermions only weakly, and the doublet Higgs bosons do not couple much to the LSP pair. The signal rate can extend well below the coherent neutrino background. Certain tuned scenarios could yield larger SI direct detection rates, for example, a very light CP-even Higgs with a sizable doublet Higgs fraction~\cite{Draper:2010ew,Cao:2013mqa}.
The detection rate for the sbottom coannihilation scenario, on the other hand, is naturally high, coming from the additional contribution through the sbottom exchange.\footnote{A recent study shows that the pole region resides at $m_{\tilde b}=m_{\tilde \chi^0_1}-m_b$ instead of $m_{\tilde b}=m_b + m_{\tilde \chi^0_1}$~\cite{Gondolo:2013wwa}. Given that the sbottom mass is always larger than the corresponding LSP mass, we are away from this pole region. In our analyses, we correct the direct detection cross sections calculated by MicrOMEGAs~\cite{Belanger:2008sj} by replacing the values for points near the fake pole of $m_{\tilde b}=m_b + m_{\tilde \chi^0_1}$ with points of the same sbottom mass away from the pole, which well approximates the results in Ref.~\cite{Gondolo:2013wwa} in the relevant regions. }
The next generation of direct detection experiments, such as LZ and superCDMS, with sensitivities increased by several orders of magnitude, would provide valuable insights into a very large portion of the allowed parameter space.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{mdm_direct_sd}}
\subfigure{
\includegraphics[scale=0.25]{mdm_indirect}}
\caption[]{The scaled proton spin-dependent direct detection rate (left panel) and the indirect detection rate (right panel) versus the neutralino DM mass. Red, green and blue dots are for the solutions in $A_1,\ H_1$-funnels, sbottom and stau coannihilation scenarios, respectively. The solid lines on the left panel correspond to exclusions on $\sigma_p^{\rm SD}$ from SIMPLE~\cite{Felizardo:2011uw}, PICASSO~\cite{Archambault:2012pm}, COUPP~\cite{Behnke:2012ys}, and XENON100~\cite{Aprile:2013doa}.\footnotemark[5] ~~The solid (dashed) line on the right panel corresponds to exclusion on indirect detection rate from Fermi-LAT~\cite{fermi_dwarfs} with $b\bar b$ ($\tau^+\tau^-$) annihilation mode.
The shaded region is the preferred low velocity annihilation cross section to account for the gamma-ray excess with a 35 GeV Majorana DM annihilating into $b\bar b$~\cite{Berlin:2014tja}.}
\label{fig:indi}
\end{figure}
\footnotetext[5]{Indirect searches for neutrinos from specific DM annihilation modes in the Sun by IceCube~\cite{ICECUBESD} and the Baksan Underground Scintillator Telescope~\cite{BUSTSD} can be translated into model-dependent bounds on the SD direct detection rate, yielding more stringent bounds for neutrino-rich annihilation modes.}
We show the scaled proton spin-dependent (SD) cross section in the left panel of Fig.~\ref{fig:indi}. All the viable solutions have spin-dependent cross sections of order $10^{-4}$ pb or smaller, below the current limits from the various dark matter direct detection experiments. For these solutions through the funnels and coannihilations, the usual connection among annihilation, direct detection, indirect detection and collider searches through crossing diagrams is not always valid; it needs to be examined in a scenario- and model-specific manner. Due to the Majorana nature of the neutralino LSP, only the CP-even Higgs bosons can mediate the SI direct detection, and only the axial vector current through the $Z$ boson contributes to the SD direct detection. In addition, there are squark contributions to the direct detection, which lead to a large SI direct detection rate in the sbottom coannihilation scenario, as discussed in the previous sections. As a result, the SD direct detection provides a complementary probe of the neutralino LSP's couplings to the $Z$ boson, even in some of the ``blind spot'' scenarios.
In the right panel of Fig.~\ref{fig:indi} we show the low velocity DM annihilation rate in the current epoch for the different light DM scenarios, together with the 95\% C.L.~exclusions on the indirect detection rate from Fermi-LAT~\cite{fermi_dwarfs}. The majority of our solutions satisfy the indirect detection constraints. Note that the low-velocity DM annihilation rate can be either larger or smaller than the usual WIMP thermal relic preferred value of $\sim2\times 10^{-26}~ {\rm cm}^3 {\rm s}^{-1}$ (assuming $s$-wave dominance). This is because the DM annihilation rate at low velocity does not necessarily correspond to the thermally averaged annihilation rate $\langle \sigma v \rangle$ around the time of dark matter freeze-out. Far away from the resonance, the $s$-channel CP-odd (CP-even) Higgs exchange corresponds to $s$-wave ($p$-wave) annihilation. While the low velocity rate for $s$-wave annihilation is similar to the freeze-out rate due to its velocity independence, the $p$-wave annihilation rate today is much lower than in the early universe due to the velocity suppression. Furthermore, this simple connection between the mediator CP property and the partial wave no longer holds near the resonance region, where the full kinematics needs to be taken into account in numerical studies.
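To make the size of the $p$-wave suppression explicit, a rough numerical estimate follows (the benchmark velocities, a freeze-out value $v \sim 0.3c$ and a Galactic halo value $v \sim 10^{-3}c$, are our assumptions):
\begin{verbatim}
# p-wave annihilation: sigma*v ~ b*v^2, so today's rate over the
# freeze-out rate scales roughly as (v_now / v_fo)^2.
v_fo  = 0.3     # typical relative velocity at freeze-out, in units of c (assumed)
v_now = 1.0e-3  # typical Galactic halo velocity, in units of c (assumed)

print((v_now / v_fo) ** 2)   # ~1.1e-05
\end{verbatim}
A purely $p$-wave channel is therefore suppressed today by roughly five orders of magnitude relative to the early universe, which explains why many solutions lie far below the canonical thermal value.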
In particular, for the funnel region with $2 m_{\tilde \chi^0_1} > m_{A_1, H_1}$ ($2 m_{\tilde \chi^0_1} < m_{A_1, H_1}$), the low velocity rate should be higher (lower) than the freeze-out annihilation rate due to the increase (decrease) of the resonant enhancement.
The bulk of our funnel region solutions corresponds to the $2 m_{\tilde \chi^0_1} < m_{A_1, H_1}$ case, as a result of the combined constraints imposed.
Interestingly, our results indicate that viable solutions exist in the regions preferred by the GeV gamma-ray excess from the Galactic Center~\cite{gammaray}, indicated by the grey region in the right panel of Fig.~\ref{fig:indi}. While the astrophysical interpretation of the excess could be subtle, with different subtraction schemes resulting in different shapes of the excess, or even no excess at all, this observation has stimulated several interesting discussions recently~\cite{Berlin:2014tja,gammafollow}.
As shown in later sections, the dominant decay of the funnel mediators is to $b\bar b$, which serves as a good candidate for the gamma-ray source. For the stau and sbottom coannihilations, the main annihilation channels of the LSP pairs are $\tau^+\tau^-$ and $b\bar b$, respectively, with the former yielding a different gamma-ray spectrum.
The predicted gamma-ray excess spectra could thus vary in shape in many ways within a given model such as the (N)MSSM, due to the various compositions of the annihilation products.
With more data collected and analyzed, a confirmation of the gamma-ray excess and a robust extraction of its shape would help pin down the source and shed light on the underlying theory. The three light neutralino LSP DM scenarios provide an important framework in this respect, with their different annihilation modes yielding a range of soft to hard gamma-ray spectra to confront the potential excess data.
\section{LHC Observables}
\label{sec:collider}
Collider experiments provide a crucial testing ground for WIMP light dark matter scenarios. In the NMSSM, guided by the light $A_1$ and $H_1$ in the funnel region and the light sbottom and stau in the coannihilation regions, we discuss the collider implications of the three light dark matter solutions for observables related to the SM-like Higgs boson, searches for light scalars, and missing transverse energy (MET) signals.
\subsection{Modifications to the SM-like Higgs Boson Properties}
The observation of a SM-like Higgs boson imposes strong constraints on extensions of the SM Higgs sector. In particular, one of the CP-even Higgs bosons in the NMSSM is required to have properties very similar to the SM Higgs boson. As a result, any deviation of this SM-like Higgs boson from the $h_v$ state is tightly constrained. Moreover, decays of the SM-like Higgs boson to the newly accessible states $\tilde \chi^0_1\ch10$, $A_1A_1$, $H_1H_1$, $\tilde \tau_1^+\tilde \tau_1^-$ and $\tilde b_1 \tilde {b}^*_1$ would reduce the Higgs branching fractions to the SM particles, which are constrained by the current experimental results as well. Furthermore, new light charged sparticles such as the sbottom and stau could modify the loop-induced Higgs couplings, such as the Higgs coupling to diphotons.
We examine the cross sections of the dominant channels for the SM-like Higgs boson search, as well as the Higgs decay branching fractions to the new light states. In Fig.~\ref{fig:SMratio}, we show the ratios of the cross sections to the SM values, $\sigma/\sigma_{\rm SM}$, for $gg\rightarrow H_{SM} \rightarrow WW/ZZ$ versus $gg\rightarrow H_{SM} \rightarrow \gamma\gamma$ for the 126 GeV SM-like Higgs. The $\gamma\gamma$ channel remains correlated with the $WW/ZZ$ channel, with the cross section ratios varying between $0.7$ and $1.2$.
Since the $W$-loop dominates the Higgs-to-diphoton coupling, deviations from the diagonal line come from the variation of the other loop contributions, such as the (s)fermion loops. Notably, although the new light charged states such as the sbottom and stau could modify the Higgs-to-diphoton coupling, no large deviations appear. Their limited contributions result from the indirect constraints imposed on the Higgs boson decays to these light sfermion pairs.
Beyond the mass range of our current interest, a dedicated scan for a stau around $100~\,{\rm GeV}$ may still give a very large enhancement in the diphoton rate, as discussed in detail in Ref.~\cite{Carena:2012gp}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.24]{SMRates}
\caption{The cross section ratios $\sigma(gg \rightarrow H_{\rm SM} \rightarrow WW/ZZ)/\sigma_{\rm SM}$ versus $\sigma(gg \rightarrow H_{\rm SM} \rightarrow \gamma\gamma)/\sigma_{\rm SM}$ for the SM-like Higgs. The $A_1,\ H_1$-funnels, sbottom coannihilation, stau coannihilation solutions are in red, green and blue dots, respectively. A black dashed line with slope 1 is shown as a reference.}
\label{fig:SMratio}
\end{figure}
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{BRSM1}}
\subfigure{
\includegraphics[scale=0.25]{ZWBRSM}}
\caption[]{Left panel: branching fractions of the SM-like Higgs boson decaying to the new light Higgs channels $A_1 A_1$ (magenta and orange) and $H_1 H_1$ (black) versus the $h_v$ fraction $(\xi_{{\rm SM}}^{h_v})^2$ of the SM-like Higgs boson. Right panel: branching fractions of the SM-like Higgs boson decaying to $\tilde \chi^0_1\ch10$ (red), $\tilde b_1 \tilde b_1^*$ (green) and $\tilde \tau_1^+ \tilde \tau_1^-$ (blue) versus the corresponding $Z$-boson partial widths.
}
\label{fig:brsm}
\end{figure}
We show the decay branching fractions of the SM-like Higgs boson to the new states in Fig.~\ref{fig:brsm}. The left panel shows the branching fractions of $H_{SM}\rightarrow A_1 A_1, H_1 H_1$. We see that the exotic decays can be as large as $40\%$ and still consistent with the current Higgs measurements. Given the possible decay final states of $A_1$ and $H_1$ to $\tau\tau$, $b\bar b$ or $\gamma\gamma$, dedicated searches for these exotic multi-body decays of the SM-like Higgs could be fruitful in studying these solutions~\cite{LHCHAA,exotic}.
A generic 7-parameter fit with extrapolation shows that the 14 TeV LHC could bound the exotic decay branching fraction of the Higgs boson to $14-18\%$ ($7-11\%$) with 330 (3000) ${\rm fb}^{-1}$ of integrated luminosity~\cite{Dawson:2013bba}, assuming the couplings of the Higgs boson to $W$ and $Z$ do not exceed the SM values \cite{Dobrescu:2012td}.
The right panel in Fig.~\ref{fig:brsm} shows the branching fractions of $H_{SM}\rightarrow \tilde \chi^0_1\ch10$, $\tilde \tau_1^+ \tilde \tau_1^-$ and $\tilde b_1 \tilde b_1^*$ versus contributions to the $Z$-boson width.
The invisible decay channel $\tilde \chi^0_1\ch10$ (red) shows some correlation between the $Z$ and $H_{\rm SM}$ decays, because both are mediated through the Higgsino components.
The invisible branching fraction of the SM-like Higgs boson could be quite sizable, reaching $30\% - 40\%$. While the current LHC limits on the invisible Higgs decay via the $ZH$ and VBF channels are relatively weak \cite{Aad:2014iia,Chatrchyan:2014tja,Belanger:2013kya}, future measurements will certainly improve the sensitivity to further probe this important missing energy channel~\cite{invifuture}.
The Higgs boson couplings to sfermions receive contributions from the D-terms, F-terms and trilinear soft SUSY breaking terms, resulting in decay branching fractions to $\tilde b_1 \tilde b_1^*$ (green) and $\tilde \tau_1^+ \tilde \tau_1^-$ (blue) that are generally uncorrelated with the corresponding decays of the $Z$ boson.
%
These decay branching fractions could be as large as $30\%$. However, given the small mass splitting between the sbottom/stau and the LSP,
all the SM decay products would be too soft to be identifiable in the LHC environment. In practice, these channels could be counted as invisible modes.
\subsection{Non-SM Light Higgs Bosons}
Non-SM light Higgs bosons are particularly important for the $A_1,\ H_1$-funnel solutions and may also exist for the sbottom and stau coannihilation solutions. They are well motivated in the PQ-limit NMSSM. These light scalars are usually singlet-dominated, but they have non-negligible mixings with the MSSM doublet Higgs bosons in the case of the $A_1,\ H_1$-funnel solutions.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.252]{compa1}}
\subfigure{
\includegraphics[scale=0.25]{Bra1}}
\subfigure{
\includegraphics[scale=0.252]{comph1}}
\subfigure{
\includegraphics[scale=0.25]{Brh1}}
\caption[]{Left panels: squared normalized couplings of the light CP-even, CP-odd Higgs bosons to up-type quarks (brown), down-type quarks (blue), gluon pair (pink) and weak boson pairs (orange) versus their doublet fraction: $(\xi_1^A)^2$ for $A_1$ (upper panel) and $(\xi_1^{h_v})^2+(\xi_1^{H_v})^2$ for $H_1$ (lower panel) in the funnel regions. Right panels: branching fractions of light Higgs bosons $A_1,~H_1$ to $\tilde \chi^0_1\ch10$ (red), $b\bar b$ (green) and $\tau^+\tau^-$ (blue) and $A_1 A_1$ (brown) final states for the funnel regions.
}
\label{fig:phi}
\end{figure}
The two panels on the left of Fig.~\ref{fig:phi} show the couplings of $A_1$ and $H_1$ to quarks, gluons and gauge bosons, normalized to the SM values, versus the doublet fractions as defined in Eq.~(\ref{eq:frac}).
For $A_1$, the squared couplings roughly scale with the MSSM CP-odd Higgs fraction $(\xi_1^A)^2$. The couplings to the up-type quarks are further suppressed by $1/\tan\beta$ while the couplings to the down-type quarks are enhanced by $\tan\beta$, so that $|g_d/g_d^{\rm SM} |^2$ could reach $\sim 0.1$ despite the small $(\xi_1^A)^2$. The loop-induced $A_1$ coupling to gluons is dominated by the bottom loop, and is therefore roughly of the same order as the normalized $A_1d\bar{d}$ coupling.
The $H_1$ couplings to SM particles arise through its $h_v$ and $H_v$ components. $h_v$ couples in the same way as the SM Higgs, while $H_v$ couples to the up- and down-type quarks with factors of $1/\tan\beta$ and $\tan\beta$ of the corresponding SM Higgs couplings, respectively, and does not couple to $W$ and $Z$ at all. The squared $H_1 d\bar d$ and $H_1 gg$ couplings span a wide range for a given $(\xi_1^{h_v})^2+(\xi_1^{H_v})^2$, while the $H_1 u\bar u$ and $H_1 VV$ couplings scale almost linearly with $(\xi_1^{h_v})^2+(\xi_1^{H_v})^2$.
We show the leading decay branching fractions of the light Higgs bosons for the $A_1,\ H_1$-funnel cases in the two right panels of Fig.~\ref{fig:phi}. The decays of both the CP-even and CP-odd Higgs bosons show clear $\tau^+\tau^-$ dominance at lower masses and $b\bar b$ dominance above the $b\bar b$ threshold. It is interesting to note that the invisible mode $A_1 \rightarrow \tilde \chi^0_1\ch10$ is competitive with $\tau^+\tau^-$ below the $b\bar b$ threshold, and becomes increasingly important relative to the $b\bar b$ mode for larger $m_{A_1}$. This is because the higher the DM mass, the larger the annihilation contribution through the $Z$ boson (for example, the $Z$-funnel emphasized in Ref.~\cite{Han:2013gba}) can be, allowing both a larger deviation of the dark matter mass from the $A_1$ pole and a larger branching fraction of $A_1$ to the LSP pair.
For the $H_1$ decays, on the other hand, the invisible mode $H_1\rightarrow \tilde \chi^0_1\ch10$ is less competitive and typically below $30\%$. A new interesting channel, $H_1 \rightarrow A_1 A_1$, opens up when kinematically allowed and could reach as much as 80\%.
These light Higgs bosons can be produced either indirectly from the decay of heavier Higgs bosons or
directly from the SM-like processes through their suppressed MSSM doublet Higgs components. The former indirect production has many unique features. One of the important cases has been discussed in the previous section as $H_{SM} \rightarrow A_{1}A_{1}$. Many other interesting channels have also been discussed in Refs.~\cite{exotic,lowhiggs}.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{Siga1}}
\subfigure{
\includegraphics[scale=0.25]{Sigh1}}
\caption[]{Total cross sections at the 14 TeV LHC for the light $A_1$ (left panel) and $H_1$ (right panel), from the gluon fusion (ggF), Vector Boson Fusion (VBF), Vector boson associated production ($VH_1$), and $b\bar b$, $t\bar t$ associated production.
}
\label{fig:A1H1_pro}
\end{figure}
The direct production cross sections at the LHC could still be quite sizable, benefiting from the large phase space and the high parton luminosity at low $x$. We calculate the cross sections of these light Higgs bosons by extrapolating the SM Higgs cross sections~\cite{Dittmaier:2011ti} to the low mass region and scaling them by the corresponding squared couplings. The production cross sections for the various channels are shown in Fig.~\ref{fig:A1H1_pro}. Gluon fusion remains the leading production mode and is typically of the order of a pb. For the light $A_1$, because its coupling to the top quark is suppressed by $\tan\beta$, the $t\bar t A_1$ cross section is as low as tens of ab, while the $b\bar bA_1$ cross section could reach the pb level. The light $H_1$ usually mixes more with $h_v$, resulting in sub-pb $t\bar t H_1$ and $b\bar b H_1$ cross sections. The light CP-even Higgs boson also couples to the weak bosons: the VBF and $Z/W H_1$ associated production rates range from sub-fb to sub-pb.
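The scaling step itself can be made explicit. A minimal sketch follows (the reference cross-section grid below consists of illustrative placeholder numbers, not the tabulated values of Ref.~\cite{Dittmaier:2011ti}):
\begin{verbatim}
import numpy as np

# Illustrative placeholder grid for a reference ggF cross section at 14 TeV
# (mass in GeV, sigma in pb); NOT the actual tables of Ref. [Dittmaier et al.].
m_grid     = np.array([10.0, 30.0, 60.0, 90.0, 125.0])
sigma_grid = np.array([600.0, 250.0, 110.0, 70.0, 50.0])

def sigma_light_higgs(mass, g2_eff):
    """Interpolate the reference cross section to a light Higgs mass and
    rescale it by the squared effective coupling |g/g_SM|^2."""
    return g2_eff * np.interp(mass, m_grid, sigma_grid)

# Example: a 40 GeV A1 with an effective squared ggA1 coupling of 0.01
print(sigma_light_higgs(40.0, 0.01))   # ~2 pb, i.e. "order of pb"
\end{verbatim}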
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.25]{darkportal_low}}
\subfigure{
\includegraphics[scale=0.25]{darkportal_high}}
\caption[]{Neutralino DM production from Higgs decays at the 14 TeV LHC as a function of Higgs boson mass. Left panel is for $ZH, WH$ and VBF production, and right panel is for $t\bar t H/A,\ b\bar b H/A$ associated production.
}
\label{fig:portal}
\end{figure}
As discussed in the last section, one of the promising channels to search at the LHC is the Higgs boson to invisible mode \cite{Aad:2014iia,Chatrchyan:2014tja}. This study can be naturally carried out with the Higgs bosons other than the SM-like one.
In Fig.~\ref{fig:portal} we show the cross sections for the Higgs bosons produced in channels of $t\bar tH/A$, $b\bar bH/A$, $WH/ZH$,
as well as VBF, with the subsequent decay of the Higgs bosons into a neutralino LSP pair as the invisible mode. For $VH$ and VBF, the cross section could range from 10 fb to 1 pb for production via a relatively light Higgs, reaching a maximum near $m_{H_{SM}} \approx 125$ GeV. This is because the $VVH$ coupling is maximized for the SM-like Higgs.
We note that since the SM-like Higgs boson must take up a large portion of $h_v$ in the doublet, such associated production is correspondingly suppressed for the other Higgs bosons.
On the other hand, the $b\bar b H/A$ and $t\bar t H/A$ production cross sections reach their maximal allowed value around $80$ GeV and fall below 1 fb for $m_{H/A} \gtrsim 600$ GeV.
We have also included contributions from the solutions of both the $A_1,\ H_1$-funnels and the coannihilations. In principle, the coannihilation regions do not necessarily contain light Higgs bosons, nor need their Higgs bosons have large branching fractions to DM pairs. Nevertheless, the Higgs bosons could help enhance the DM signals, especially for the Singlino-like LSP.
These processes can be triggered on in the LHC experiments via large MET plus the accompanying SM particles. Besides the typical searches for $\ell\ell\ {\rm or}\ {\ell\nu}+\ensuremath{\displaystyle{\not}E_T}$ and VBF jets $+\ensuremath{\displaystyle{\not}E_T}$, other possible search channels include the heavy quark associated production $t\bar t+\ensuremath{\displaystyle{\not}E_T}$ and $b\bar b+\ensuremath{\displaystyle{\not}E_T}$.
It is also known that one can take advantage of the initial state radiation (ISR) of a photon or a jet for DM pair production. Such searches have been carried out in terms of effective operators~\cite{Goodman:2010ku} at the LHC as mono-photon and mono-jet searches~\cite{LHCDM}. These searches should be interpreted carefully in our Higgs-portal case, due to the existence of relatively light particles in the spectrum (see, {\it e.g.}, Ref.~\cite{Busoni:2014sya}).
\subsection{Light Sfermions}
\label{sec:sbottom}
It is of intrinsic interest to study the viability of the light sfermions at the LHC.
The usual sfermion searches at the LHC tag the energetic visible part of the sfermion decay, and thus require a sizable mass gap between the sfermion and the neutralino LSP. In this section, we discuss the LHC implications for these light sfermions with compressed spectra.
The light sbottom has to be very degenerate with the LSP to avoid the LEP constraints as shown in Eq.~(\ref{eq:dmsb}): $\Delta m=m_{\tilde{b}_1}-m_{\tilde \chi^0_1} \lesssim 7$ GeV. This very special requirement has important kinematical and dynamical consequences and it leads to two distinctive regimes for the sbottom search at the LHC.
For $\Delta m>m_b$, the prompt decay $\tilde b_1 \rightarrow b \tilde \chi^0_1$ would result in a $2b+\slash{\hspace{-2.5mm}E}_T$ final state for sbottom pair production. Given the softness of the $b$ jets, with energies of a few GeV, these events have to be triggered by demanding large $\slash{\hspace{-2.5mm}E}_T$ or a very energetic jet from initial or final state radiation. As a result, the $b$ jet from the sbottom, though soft in the sbottom rest frame, can be boosted and can even be triggered on. However, the signal cross section is reduced by orders of magnitude by the requirement of large $\slash{\hspace{-2.5mm}E}_T$ or an energetic jet.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c|}{SRA} & SRB \\ \hline
$m_{\rm CT}$ & $\ge 250~\,{\rm GeV}$ & $\ge 300~\,{\rm GeV}$ & $\ge 350~\,{\rm GeV}$ & \\ \hline
$95\%$ C.L. upper limit & \multirow{2}[0]{*}{0.45} & \multirow{2}[0]{*}{0.37} & \multirow{2}[0]{*}{0.26} & \multirow{2}[0]{*}{1.3} \\
$ \sigma_{vis}$ (fb) & & & & \\ \hline
$ \sigma_{sig}$ (fb) & 0.20 & 0.19 & 0.17 & 137 \\
\hline
\end{tabular}
\caption[]{Summary of the ATLAS sbottom search results on the upper bound of signal cross section $\sigma_{vis}$~\cite{Aad:2013ija}, and the sbottom signal cross section $\sigma_{sig}$ after selection cuts for the benchmark point of $m_{\tilde{b}_1}=20$ GeV and $m_{\tilde \chi^0_1}=14$ GeV from our study, in the two signal regions SRA and SRB. }
\label{tab:sbsb_search}
\end{table}
ATLAS has performed sbottom searches in the $2b+\slash{\hspace{-2.5mm}E}_T$ and $b\bar bj+\slash{\hspace{-2.5mm}E}_T$ final states~\cite{Aad:2013ija} at the 8 TeV LHC with 20 ${\rm fb}^{-1}$ of integrated luminosity, and a similar CMS analysis used the 7 TeV data with $H_T$ and the variable $\alpha_T$ to reject backgrounds with 0, 1, 2 and 3 $b$-jets~\cite{Chatrchyan:2012wa}. While the current studies focus on sbottom masses
between 100 $-$ 700 GeV with $\Delta m \geq 15$ GeV, we adopt the same cuts used in their analyses to put bounds on the light sbottom in the sbottom coannihilation scenario.
For illustration, we choose the sbottom mass to be $20~\,{\rm GeV}$ and the neutralino LSP mass to be $14~\,{\rm GeV}$. We generate the events at parton level using MadGraph5~\cite{Alwall:2011uj}. In Table~\ref{tab:sbsb_search}, we list the 95\% C.L. upper limit on $\sigma_{vis}$ from the ATLAS analysis~\cite{Aad:2013ija} for the two signal regions: SRA, mostly sensitive to the $b\bar b+\ensuremath{\displaystyle{\not}E_T}$ final state, and SRB, mostly sensitive to the $b\bar bj+\ensuremath{\displaystyle{\not}E_T}$ final state. This search mainly relies on large MET with two $b$-tagged jets, and requires an additional hard jet in SRB~\cite{Aad:2013ija}. The last row of Table~\ref{tab:sbsb_search} gives the signal cross sections after all cuts, $\sigma_{sig}$, for the chosen benchmark point in the sbottom coannihilation region. We see that the $b\bar b+\slash{\hspace{-2.5mm}E}_T$ search does not provide a meaningful bound for the light sbottom case, which can be attributed to the inefficiency of the acceptance cuts, optimized for sbottom masses of hundreds of GeV.
The $b\bar bj+\slash{\hspace{-2.5mm}E}_T$ search in SRB, on the other hand, provides a far more stringent bound that rules out the promptly decaying light sbottom case with $\Delta m = m_{\tilde{b}}-m_{\tilde \chi^0_1} > m_b$. Varying the light sbottom and neutralino LSP masses does not alter the results much, since the triggers and cuts are at the scale of a hundred GeV.
For $\Delta m < m_b$, the tree-level 2-body decay is kinematically inaccessible, and the sbottom decay lifetime, most likely of order ($10^{-12} - 10^{-13}$) seconds, is much longer than the QCD hadronization time scale.
The sbottom would therefore first hadronize into an ``R-hadron'' \cite{Farrar:1978xj}.
If the R-hadron subsequently decays in the detector, the small mass difference would lead to very soft decay products with little MET, which thus escape detection at the LHC. These events may have to be triggered on by demanding a highly energetic jet from initial or final state radiation, recoiling against large MET. The requirement of large MET or a leading jet of hundreds of GeV reduces the signal cross section by several orders of magnitude, and the overwhelming hadronic backgrounds in the LHC environment would render this weak signal unobservable.
%
If the R-hadron decays within the detector with favorable displacement, an interesting possibility of displaced vertex search at the LHC with high $p_T$ jet recoiling against sbottom pairs may be sensitive to
such a scenario, see Ref.~\cite{displaced}.
If the R-hadron, on the other hand, is quasi-stable and charged (a CHArged Massive Particle, CHAMP), it could lead to a soft charged track in the detector.
Searching for such signals is interesting, but typically challenging at the LHC~\cite{Batell:2013psa}.
In any case, such a light and long-lived charged R-hadron has already been excluded by the CHAMP searches at LEP~\cite{LEPCHAMP}.
\label{sec:stau}
In the stau coannihilation scenario, there is typically a light stau with mass between 32 and 45 GeV, nearly degenerate with the neutralino LSP with a small mass splitting of less than 3 $-$ 5 GeV.
It is known that searching for slepton signals at the LHC is extremely challenging because of the low signal rate and large SM backgrounds \cite{LHCstau}.
The direct pair production for stau at the LHC is via the $s$-channel $\gamma/Z$ exchanges. The electroweak coupling and $p$-wave behavior render the production rate characteristically small. With the leading decay of stau to tau plus LSP, the final state signal $\tilde\tau^{+} \tilde\tau^{-} \rightarrow \tau^+\tau^-+\ensuremath{\displaystyle{\not}E_T}$ encounters the overwhelming SM backgrounds such as $W^{+} W^{-} \rightarrow \tau^+\tau^-+\ensuremath{\displaystyle{\not}E_T}$.
Furthermore, the nearly degenerate masses of our favored DM solutions further reduce the missing energy, making the signal even more difficult to identify above the SM backgrounds.
For stau pair production in association with an additional energetic jet or photon, the extra jet/photon momentum kicks the stau pair and could result in larger missing energy. However, the $W^+W^-+{\rm n}j$ background would still be overwhelming, making stau detection very challenging at the LHC. For some related studies, see Ref.~\cite{Arnowitt:2006jq}.
The existing LHC searches for neutralinos/charginos with cascade decays via staus can be viewed as stau searches; the analyses rely on two tagged taus with an MT2 cut~\cite{LHCstau}. The minimal MT2 cut of $90-110$ GeV makes these searches insensitive to our light stau solutions, which typically have a much smaller MT2.
\section{Rescuing the Coannihilation DM Signals at the ILC}
\label{sec:leptoncollider}
An electron-positron collider is a much cleaner machine than a hadron collider. The designed center of mass (c.m.) energy of the ILC is well above the light DM threshold of our current interest, and is thus sufficient to produce the DM pair with substantial kinetic energy. This can help overcome the difficulties encountered at the LHC for the signals of the sfermion coannihilation scenarios. In this section, we study this class of signals at the International Linear Collider (ILC) with c.m.~energies $\sqrt s= 250$ GeV and 500 GeV.
We focus on the signal sensitivity to the stau and the sbottom that are nearly degenerate in mass with the neutralino LSP.
Motivated by our earlier discussion of the DM solutions and by the $Z$ decay width constraints, we set the sbottom and stau to decouple from the $Z$ boson for simplicity. With this conservative choice, the pair production of sbottoms and staus is mainly mediated by an $s$-channel photon with the standard QED vector-like coupling.
The unpolarized electromagnetic production of a pair of charged scalars has the canonical cross section formula
$$\sigma = {\pi \alpha^{2} \over 3 s} K_{c}\ Q^2\ \beta^{3}\ ,\quad \beta = \sqrt{1-{4m^2\over s}}\ ,$$
where $K_{c}=3\ (1)$ is the color factor for a color triplet (singlet), $\beta$ is the velocity of the scalar of mass $m$, and $Q$ is its electromagnetic charge. As expected, above threshold the sbottom pair production cross section is about one third of the stau pair one, due to its electromagnetic charge and color factor. For a light sbottom and stau, the cross sections at the 250 GeV ILC are about 130 fb and 400 fb, respectively, while the cross sections at the 500 GeV ILC are about a factor of four smaller.
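These numbers are easy to cross-check from the formula above. A minimal numerical sketch (our choices: $\alpha \simeq 1/128$ at the collider scale and representative masses; the conversion $1~{\rm GeV}^{-2} = 3.894\times 10^{11}$~fb is standard):
\begin{verbatim}
import math

ALPHA = 1.0 / 128.0        # electromagnetic coupling near m_Z (our choice)
GEV2_TO_FB = 3.894e11      # conversion: 1 GeV^-2 = 3.894e11 fb

def sigma_pair_fb(sqrt_s, m, Q, Kc):
    """sigma = (pi alpha^2 / 3s) * Kc * Q^2 * beta^3 for scalar pairs."""
    s = sqrt_s ** 2
    beta = math.sqrt(1.0 - 4.0 * m ** 2 / s)
    return math.pi * ALPHA ** 2 / (3.0 * s) * Kc * Q ** 2 * beta ** 3 * GEV2_TO_FB

# Stau (Q = 1, color singlet) and sbottom (Q = 1/3, color triplet):
print(sigma_pair_fb(250.0, 35.0, 1.0, 1))        # ~3.5e2 fb (text: ~400 fb)
print(sigma_pair_fb(250.0, 20.0, 1.0 / 3.0, 3))  # ~1.3e2 fb (text: ~130 fb)
print(sigma_pair_fb(500.0, 35.0, 1.0, 1))        # roughly a factor of 4 smaller
\end{verbatim}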
For the stau pair production, the mass splitting $\Delta m\geq m_\tau$ would yield the prompt decay of $\tilde{\tau}\rightarrow\tau\tilde \chi^0_1$ with a typical lifetime of $10^{-22} - 10^{-19}$ second \cite{Jittoh:2005pq}.
The parent stau momentum and the energy range of the decay product $\tau$ are, respectively,
$$p_{\tilde\tau} = {\sqrt s\over 2} \beta_{\tilde\tau}, \qquad
\frac {\sqrt s} {4} \frac {\Delta m} {m_{\tilde \tau}} (1-\beta_{\tilde \tau}) \lesssim E_\tau \lesssim \frac {\sqrt s} {4} \frac {\Delta m} {m_{\tilde \tau}} (1+\beta_{\tilde \tau}).$$
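For orientation, this window can be evaluated numerically; a short sketch (the benchmark values of $\sqrt s$, $m_{\tilde\tau}$ and $\Delta m$ are our choices, in the massless-tau approximation):
\begin{verbatim}
import math

def tau_energy_window(sqrt_s, m_stau, dm):
    """Kinematic E_tau window for stau -> tau + LSP (massless-tau limit)."""
    beta = math.sqrt(1.0 - 4.0 * m_stau ** 2 / sqrt_s ** 2)
    pref = (sqrt_s / 4.0) * (dm / m_stau)
    return pref * (1.0 - beta), pref * (1.0 + beta)

# Example: sqrt(s) = 250 GeV, m_stau = 40 GeV, Delta m = 3 GeV
lo, hi = tau_energy_window(250.0, 40.0, 3.0)
print(lo, hi)   # ~0.25 GeV to ~9.1 GeV: the taus are very soft
\end{verbatim}
The smallness of the upper endpoint for few-GeV splittings is what drives the tau-tagging difficulties discussed below.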
LEP analyses of such a decay mode are sensitive to mass splittings of about $4~\,{\rm GeV}$ or above, given the available integrated luminosity.\footnote{For details, see Ref.~\cite{Abdallah:2003xe}.
Tagged tau leptons are required to be acoplanar and in the central region, etc., to reduce the main $\gamma\gamma\rightarrow \tau^{+}\tau^{-}$ background and the $W^+W^-$ background. The energy of the tau leptons is required to be harder than in the $\gamma\gamma$ events but softer than in the $W^+W^-$ events.} The selection efficiency decreases quickly as the mass splitting decreases, ranging from $5\%$ to $1\%$ for stau masses of $30-45~\,{\rm GeV}$ with $\Delta m \approx m_\tau$~\cite{Abdallah:2003xe}.
The background for such an optimized analysis is very low, and the search is essentially statistically limited. At the ILC at 250/500 GeV, a very similar search could be conducted, and the sensitivity would be significantly improved. The decay products in the final state will be rather energetic, especially at the 500 GeV ILC. With the high luminosity of the ILC design, even for a percent-level signal selection efficiency, more than a thousand signal events are expected for our stau coannihilation scenario at the 250 GeV ILC with the designed integrated luminosity of $250~\,{\rm fb}^{-1}$. As for the backgrounds, taus from $W^+W^-$ will typically be harder and can be separated from the signal. The $\gamma\gamma \rightarrow \tau^{+}\tau^{-}$ background, on the other hand, will increase, but can be reduced by adjusting the tau tagging energy threshold and the acoplanarity. Therefore, this region can be fully explored by the ILC through stau pair production with tagged tau leptons. Further kinematical features, including the DM mass determination at the ILC, have been recently studied in detail~\cite{Christensen:2014yya}.
For the case $\Delta m < m_\tau$, which typically corresponds to the stau coannihilation solutions with reduced relic density, the decay proceeds through a virtual tau, dominated by the kinematically accessible modes $\tau^{*} \rightarrow \nu_\tau \pi$ and then
$\tau^{*} \rightarrow \nu_{\tau} \bar\nu_{\mu} \mu,\ \nu_{\tau} \bar\nu_{e} e$.
The stau lifetime thus varies over a large range of $10^{-7} - 100$
seconds\footnote{The upper bound comes from the requirement of not spoiling Big Bang Nucleosynthesis.}
\cite{Jittoh:2005pq}.
Generically, the stau is then stable on the scale of the detectors and thus behaves like a highly-ionizing charged track. The CHAMP searches at LEP have already excluded this scenario in the mass region of current interest \cite{LEPCHAMP}.
For the sbottom pair production, the very stringent constraint from the LHC already ruled out the prompt decay channel $\tilde b \rightarrow b \tilde \chi^0_1$ for $\Delta m > m_b$, as presented in Sec.~\ref{sec:sbottom}.
For $\Delta m < m_b$, the sbottom would lead to long-lived R-hadrons, or to R-hadrons decaying within the detector with or without displaced vertices. As discussed earlier, the CHAMP search at LEP excluded the long-lived charged R-hadron case~\cite{LEPCHAMP}.
While the searches for promptly decaying R-hadrons at the LHC are very challenging, as discussed earlier, the signal sensitivity at the ILC would be significantly improved, covering the full mass range of current interest. In particular, at the 500 GeV ILC the sbottom decay products could be energetic enough that a series of kinematic cuts could separate the SM backgrounds, similarly to the discussion for the stau case.
With the well-determined kinematics at a lepton collider, the ILC could be utilized to measure the masses of the sfermion and the LSP, particularly suitable for the two-step decays \cite{Christensen:2014yya}.
\section{Summary and Conclusions}
\label{sec:conclude}
Identifying particle dark matter is of fundamental importance in particle physics. The search for a light dark matter particle is strongly motivated by the interplay among the complementary detection methods: underground direct searches, indirect searches with astro-particle probes, and collider studies. Ultimately, the identification of a WIMP dark matter particle must pass the consistency checks among all three detection methods.
In this paper, we discussed the phenomenology of the light ($<40~\,{\rm GeV}$) neutralino DM candidates in the framework of the NMSSM. We performed a comprehensive scan over 15 parameters as shown in Table \ref{tab:range}. We implemented the current constraints from the collider searches at LEP, the Tevatron and the LHC,
the direct detection bounds, and the relic abundance considerations.
We illustrated the qualitative nature of the neutralino dark matter solutions in Table \ref{tab:scenarios}. We provided extensive discussions for the complementarity among the underground direct detection, astro-physical indirect detection, and the searches at the LHC and ILC.
Our detailed results are summarized as follows.
\begin{itemize}
\item
{\it Viable light DM solutions:}
We found solutions characterized by three scenarios:
$(i)$ $A_1,\ H_1$-funnels, $(ii)$ stau coannihilation and $(iii)$ sbottom coannihilation, as listed in Table \ref{tab:scenarios}.
The $A_1,\ H_1$-funnels and stau coannihilation could readily provide the right amount of dark matter abundance within the 2$\sigma$ Planck region (Figs.~2 and 3). The sbottom coannihilation solutions typically result in a much lower relic density. This under-abundance could also occur for $A_1,\ H_1$-funnel solutions if $m_{A_1/H_1} \approx 2m_{\tilde\chi_1^0}$, and for stau coannihilation solutions if the LSP is Bino-like.
\item
{\it Features of the light DM solutions:}
The neutralino LSP could either be Bino-like, Singlino-like or an admixture (Figs.~\ref{fig:components_funnel} and \ref{fig:components_coann}).
For the $A_1,\ H_1$-funnels, the light Higgs bosons $A_1/H_1$ are very singlet-like. They serve as the nearly resonant mediators for the DM annihilation.
For the stau coannihilation, the stau usually needs large L-R mixing or $Z$ decay kinematic suppression to avoid the $Z$ boson total width constraint, and it could be as light as 32 GeV. For the sbottom coannihilation, the sbottom is mostly right handed and could be as light as 16 GeV given the $Z$ total width consideration as well as other collider constraints (Fig.~\ref{fig:Zwidth}).
\item
{\it Direct detection: }
The direct detection rates for the three types of solutions vary in a large range.
For the sbottom coannihilation with the right amount of DM relic abundance, the SI direct detection rate is usually high, due to the effective bottom content in the nuclei. The SD direct detection provides complementary probes to the DM axial-vector couplings to $Z$ boson and light squark exchanges. The three kinds of solutions could have very low SI direct detection rate, some extend into the regime of the coherent neutrino-nucleus scattering background. The next generation of direct detection such as LZ, SuperCDMS and SNOLAB experiments would provide us valuable insights into very large portion of the allowed parameter space (Figs.~\ref{fig:relicdirect} and \ref{fig:indi}).
\item
{\it Indirect detection: }
The low velocity annihilation cross sections for these solutions also vary over a large range, usually preferring a rate lower than the canonical value obtained under the $s$-wave dominance assumption. For the $A_1,\ H_1$-funnels, the resonance feature allows larger rates in the current epoch. Interestingly, this naturally provides a dark matter candidate for the GeV gamma-ray excess, with a $\sim 35~\,{\rm GeV}$ LSP pair annihilating mainly into $b\bar b$. For the sbottom and stau coannihilations, the corresponding annihilations are mainly into $b\bar b$ and $\tau^+\tau^-$, respectively, with the latter yielding a different gamma-ray spectrum (Fig.~\ref{fig:indi}).
\item
{\it SM Higgs physics:}
The decays of the SM-like Higgs boson may be modified appreciably (Fig.~\ref{fig:SMratio}),
and its new decay channels to the light SUSY particles, including the invisible mode to the LSP DM particle, may be sizable (Fig.~\ref{fig:brsm}).
\item
{\it New light Higgs physics:}
The new light CP-even and CP-odd Higgs bosons will decay to the LSP DM particle, as well as other observable final states (Fig.~\ref{fig:phi}), leading to interesting new Higgs phenomenology at colliders. The search for a light singlet-like Higgs boson is usually difficult at the LHC due to the low production rates (Fig.~\ref{fig:A1H1_pro}) and the large SM backgrounds. The searches for pair produced singlet-like Higgs bosons via the decay of the SM-like Higgs as in Fig.~\ref{fig:brsm} and production of LSP pairs through Higgs portals as in Fig.~\ref{fig:portal} may improve the signal sensitivity at the LHC.
\item
{\it Collider searches for the light sfermions:}
For the sbottom coannihilation, our recast of the current LHC searches for heavier sbottoms shows that the case of $\Delta m > m_b$ is ruled out by the analysis of sbottom pair production with a hard ISR jet. For the case of $\Delta m < m_b$, the long-lived charged R-hadron has been excluded by the LEP search; the only viable case left is a promptly decaying sbottom (or R-hadron) that escapes the LHC searches due to the softness of its decay products, but it will be covered at the ILC by searching for events with large missing energy plus charged tracks or displaced vertices.
For the stau coannihilation, searches at the LHC would be prohibitively difficult with the nearly degenerate masses. A lepton collider, however, comes to the rescue:
For the case of $\Delta m < m_\tau$, the stau is most likely long-lived and has been excluded by the LEP search.
For the case of $\Delta m > m_\tau$, the ILC will definitely be capable of covering this scenario.
\end{itemize}
Overall, a light WIMP DM candidate remains of great interest both experimentally and theoretically.
A light neutralino DM in the NMSSM may result in rich physics connecting all the current and the upcoming endeavors of the underground direct detection, astro-physical indirect searches, and collider signals related to the Higgs bosons and new light sfermions.
\acknowledgments{
We would like to thank Matt Buckley, Carlos Wagner, Lian-Tao Wang and Xerxes Tata for useful discussions. The work of T.H.~and Z.L.~was supported in part by the U.S.~Department of Energy under grant No.~DE-FG02-95ER40896, in part by PITT PACC. Z.L.~was also supported in part by an Andrew Mellon Predoctoral Fellowship and a PITT PACC Predoctoral Fellowship from the School of Arts and Sciences at the University of Pittsburgh. S.S.~was supported by the Department of Energy under Grant~DE-FG02-13ER41976.}
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:intro}
The exponential growth of the out-of-time-order correlator (OTOC) \cite{Larkin} has attracted considerable attention in recent years, motivated by possible relations between black hole systems and quantum mechanical systems through the AdS/CFT correspondence \cite{Maldacena:1997re}.
The ``chaos bound'' \cite{Maldacena:2015waa} for the Lyapunov exponent $\lambda_\text{OTOC}$ of thermal OTOCs in large $N$ quantum theories
at temperature $T$,
\begin{align}
\lambda_\text{OTOC}(T) \leq 2\pi T \, ,
\label{MSSbound}
\end{align}
is saturated when there exists a gravity dual in which the
Lyapunov exponent is interpreted as a red shift factor near the black hole horizon probed by shock waves \cite{Shenker:2013pqa,Shenker:2013yza,Leichenauer:2014nxa}. This indicator of the holographic principle has indeed led \cite{Kitaev-talk-KITP,Maldacena:2016hyu} to a surprising quantum mechanical model, the Sachdev-Ye-Kitaev (SYK) model \cite{Sachdev:1992fk,Kitaev-talk}, which admits a 2-dimensional dual gravity description.
With the OTOC as the novel indicator of quantum chaos, quantum chaotic few-body systems have been probed to see whether the OTOC grows exponentially in time.
A method to calculate microcanonical/thermal OTOCs in generic quantum mechanics was provided in \cite{Hashimoto:2017oit}, and
major examples of chaotic systems with exponentially growing OTOCs include
a kicked rotor \cite{Rozenbaum:2017zfo}, a stadium billiard \cite{Hashimoto:2017oit,Rozenbaum:2019kdl}, the Dicke model \cite{Chavez-Carlos:2018ijc}, bipartite systems \cite{Prakash:2020fsj,Prakash:2019kip}, and coupled harmonic oscillators \cite{Akutagawa:2020qbj}\footnote{See \cite{Zhuang:2019jyq} for the OTOC analysis for the Henon-Heiles system. Various kinds of OTOCs in quantum maps were also studied \cite{Garcia-Mata:2018slr,Lakshminarayan:2019xxd,Bergamasco:2019tfw,Fortes:2019frf}.
The cases with large $N$
are found in \cite{Shen:2016htm,Bohrdt:2016vhv,Bianchi:2017kgb,Lin:2018tce,Rammensee:2018pyk,Lin:2018luj,Wang:2019vjl,Hartmann:2019cxq,Dag:2019yqu,Borgonovi:2019mrk,Ghosh:2019yjh,Yan:2019wio}.}.
In particular, in the coupled harmonic oscillator system \cite{Akutagawa:2020qbj}
(which is reminiscent of Yang-Mills theory
\cite{Matinyan Savvidy Ter-Arutunian Savvidy (1981a), Matinyan Savvidy Ter-Arutunian Savvidy (1981b), Savvidy (1984)}),
the thermal OTOC is a better indicator of quantum chaos compared to the conventional energy level statistics.
An important observation was made in \cite{Pappalardi:2018frz,Hummel:2018brt,Pilatowsky-Cameo:2019qxt}: the exponential growth of OTOCs is possible in non-chaotic regular systems at low dimensions\footnote{See also related discussions in \cite{Rozenbaum:2019nwn,Li:2020zuj}.}. This growth is interpreted as being generated by a classical unstable maximum of the potential, at which, locally, an initial difference grows exponentially in time. The phenomenon is expected to be general, and
\cite{Xu:2019lhc} provided a general semiclassical inequality between the classical Lyapunov exponent $\lambda_\text{saddle}$ at the unstable maximum (or a saddle point) and the quantum Lyapunov exponent $\lambda_\text{OTOC}$ of the thermal OTOC at infinite temperature,
\begin{align}
\lambda_\text{OTOC}(T=\infty)\geq \lambda_\text{saddle} \, .
\label{boundsaddle}
\end{align}
Since a classical saddle or a local maximum of the potential does not necessarily mean classical chaos, this inequality \eqref{boundsaddle} suggests that the information scrambling is not only generated by chaos, and that the scrambling is possible in regular systems.
The two inequalities, \eqref{MSSbound} and \eqref{boundsaddle},
inevitably lead us to the following two questions: $\langle i \rangle$ {\it whether the general inequality \eqref{boundsaddle} applies to any quantum mechanics or not}, and $\langle ii \rangle$ {\it what is the relation between \eqref{MSSbound} and \eqref{boundsaddle}}. We are going to study these two questions in this paper.
Concerning the first question $\langle i \rangle$, we examine OTOCs for the system of a one-dimensional inverted harmonic oscillator. In one dimension, this is the most generic set-up which generates a non-zero positive classical Lyapunov exponent $\lambda_\text{saddle}$ at the local maximum. To make the system bounded from below, so that the temperature $T$ can be defined, we put potential walls away from the local maximum. This well-defined quantum system with a double-well potential is non-chaotic due to the Poincar\'e-Bendixson theorem. We numerically calculate the thermal OTOCs for various types of walls and at various values of $T$. We indeed find a nonzero $\lambda_\text{OTOC}$, confirming that in generic one-dimensional quantum mechanics with a local maximum in the potential, in spite of the non-chaoticity, the OTOCs grow exponentially in time. The observed $\lambda_\text{OTOC}$ is $T$-dependent, and its value is ${\cal O}(\lambda_\text{saddle})$,
thus naturally interpreted as being generated by the inverted harmonic potential. The infinite temperature limit of $\lambda_\text{OTOC}(T)$ slightly violates \eqref{boundsaddle}, which would be due to our fully quantum calculations away from the semiclassical limit.
As an answer to the second question $\langle ii \rangle$, we derive an inequality
\begin{align}
\lambda_\text{OTOC}(T) \leq c\, T \, , \quad c \simeq {\cal O}(1)
\label{bound}
\end{align}
for generic one-dimensional quantum mechanics. In fact, the structure of the inverted harmonic oscillator potential, together with the quantum resolution condition to discriminate the local maximum by wave functions, leads to this inequality. The similarity to the ``chaos bound'' \eqref{MSSbound} is striking. The bound \eqref{MSSbound} is for large $N$ theories while our inequality \eqref{bound} is for a single degree of freedom.
A possible route to relate \eqref{MSSbound} with \eqref{bound} relies on AdS/CFT set-ups. The renowned large-$N$ quantum mechanics with a dual gravity description is the BFSS matrix theory \cite{Banks:1996vh}\footnote{This was a motivation for the model of \cite{Akutagawa:2020qbj},
and string theory matrix models in similar spirit are found in \cite{Asano:2015eha, Hashimoto:2016wme, Berenstein:2016zgj, Akutagawa:2018yoe}.}. Separating one degree of freedom and integrating out the remaining as a black hole \cite{Iizuka:2001cw}, the system reduces to a quantum mechanics of a particle in one dimension\footnote{For related chaos analyses, see
\cite{Gur-Ari:2015rcq,Berkowitz:2016znt,Buividovich:2018scl}.}.
This particle feels the gravitational potential emergent from the integration.
It is known that there is a universal chaotic behavior near black hole horizons \cite{Hashimoto:2016dfz}\footnote{The potential
provides a way to explain Hawking radiation and other universal phenomena
\cite{Betzios:2016yaq, Morita:2018sen, Dalui:2018qqv, Hashimoto:2018fkb, Zhao:2018wkl, Morita:2019bfr}.} which is due to the inverted harmonic (gravitational) potential with the Lyapunov exponent $2\pi T$. Therefore, a proper integration of a large number of degrees of freedom, in a quantum mechanics with a gravity dual, may lead to an effective one-dimensional quantum mechanics with the inverted harmonic potential. Although this whole story is still far from our reach, it motivates the study given in this paper and the answers we provide to the two questions $\langle i \rangle$ $\langle ii \rangle$ described above.
This paper is organized as follows.
In Sec.~\ref{sec:2}, we calculate the thermal OTOC in the quantum mechanics of the simplest inverted harmonic oscillator (a double-well Higgs-like potential). We find the temperature-dependent quantum Lyapunov exponents, whose high temperature limit remains nonzero.
In Sec.~\ref{sec:3}, we study the universality of the exponential growth of
the thermal OTOC, by evaluating quantum models with a different shape of the potential walls.
In Sec.~\ref{sec:4}, we derive the inequality that the Lyapunov exponent of the thermal OTOC is bounded above by the temperature, in a generic one-dimensional quantum mechanics.
Sec.~\ref{sec:5} is for our summary and discussions.
Note added: while we were finishing our project, we noticed a related paper
\cite{Bhattacharyya:2020art} which studies an OTOC for a system with an
inverted harmonic oscillator.
\section{Exponential growth of OTOC in inverted harmonic oscillator}
\label{sec:2}
In this section we study the microcanonical and thermal OTOCs of the simplest quantum mechanical system including an inverted harmonic oscillator (IHO).
We employ a one-dimensional Hamiltonian system, which is hence classically non-chaotic (regular), while at the unstable maximum of the potential, a nonzero Lyapunov exponent $\lambda_\text{saddle}$ appears.
We numerically find the microcanonical/thermal OTOCs grow exponentially at early times. We study the temperature dependence of the observed quantum Lyapunov exponents $\lambda_\text{OTOC}$ of the thermal OTOCs and find that at the high temperature limit the Lyapunov exponent $\lambda_\text{OTOC}$ remains nonvanishing, whose value is ${\cal O}(\lambda_\text{saddle})$.
The simplest quantum mechanical system including the inverted harmonic oscillator is defined by the Hamiltonian
\begin{eqnarray}
\label{eq:IHO}
& H \equiv p^2 + V\,, \\
& V \equiv g \left(x^2 - \displaystyle\frac{\lambda^2}{8g}\right)^2
= - \frac{1}{4}\lambda^2 x^2 + g x^4 + \displaystyle\frac{\lambda^4}{64g}\,.
\label{HP}
\end{eqnarray}
Here $\lambda$ and $g$ are constant parameters. This is nothing but the Higgs potential in high-energy-physics terminology.
The $x^4$ term is included in order for the system to be bounded from below\footnote{The boundedness of the potential ensured by the ``soft wall'' $x^4$ term is necessary to define temperature which is crucial to our analyses. It was pointed out in \cite{Ali:2019zcj} that using an analytic continuation of the frequency parameter $\omega$, the thermal OTOC for the standard harmonic oscillator $c_n(t) = \cos^2 \omega t$ obtained in \cite{Hashimoto:2017oit} suggests $c_n(t) = \cosh^2 \omega t$ for
an inverted harmonic oscillator without the boundedness. This grows exponentially forever. However, a naive analytic continuation makes the energy pure imaginary, and the definitions of the microcanonical/thermal OTOCs become ambiguous. To define them properly with the temperature, we consider only the cases with bounded potentials by introducing the walls. See Sec.~\ref{sec:3} for the dependence on the choice of the walls. And in Sec.~\ref{sec:4}, the bounded bottom of the potential is indeed crucial for the derivation of the inequality \eqref{boundT}. }
Since this is a one-dimensional Hamiltonian system, the classical mechanics is regular. But this does not mean that the Lyapunov exponent vanishes. The system is unstable around $x=0$, so we have a non-vanishing classical Lyapunov exponent $\lambda_\text{saddle}$ there. This exponent is equal to the parameter $\lambda$ in the potential given above. Note that the parameter $\lambda$ determines the curvature of the unstable top of the hill.
In this section, we first choose $\lambda=2$ and $g=1/50$ for our numerical calculations of the OTOCs, and later we choose another set, $\lambda=2\sqrt{5}$ and $g=1/10$. The latter shares, with the former, the property that the location of the bottom of the potential is at $x=\pm 5$\footnote{By rescaling $x$ and $H$, we can tune a certain combination of $\lambda$ and $g$ to be an arbitrary value, for example, $\lambda/\sqrt{8g}=5$.}.
\begin{figure}
\centering
\subfigure[Potential shape and energy levels.]
{\includegraphics[width=6cm]{Poten_12p5.pdf} \label{fig1a}}
\subfigure[Energy eigenvalues]
{\includegraphics[width=6cm]{energy_eigenvalues.pdf} }
\caption{Inverted harmonic oscillator potential for $\lambda=2, g=1/50$ ($V(0)=12.5$) and its energy levels. The energy levels in red color play an important role for exponential growth of OTOCs. See Fig.~\ref{fig:microcanonical}. Note that the energy levels smaller than $n=11$ are almost degenerate, so the black lines below the top of the hill in (a) are double lines. }\label{fig:energy_eigenvalues}
\end{figure}
We are interested in the quantum analogue of the exponential behavior of the particle motion around the top of the hill, thus we choose the following
thermal OTOC defined in the Heisenberg picture\footnote{
In this paper we use only the OTOC with the commutator squared, to make the story parallel to the classical definition of chaos and the Lyapunov exponent. A more general OTOC without the commutator squared is
studied in Appendix \ref{App:B}.
}
by
\begin{equation}
\label{eq:thermal OTOC}
C_T(t) \equiv -\langle [x(t),p(0)]^2 \rangle,
\end{equation}
where $\langle O \rangle \equiv {\rm tr}[e^{-\beta H}O]/{\rm tr}[e^{-\beta H}]$ is the thermal average. Let $|n\rangle$ be the $n$-th energy eigenstate, $H|n\rangle=E_n|n\rangle$ ($n=1,2,3,\cdots$). We define the microcanonical OTOC for this energy eigenstate by
\begin{equation}
\label{eq:microcanonical OTOC}
c_n(t) \equiv -\langle n | [x(t),p(0)]^2 | n \rangle\, .
\end{equation}
Using the completeness relation of the energy eigenstates, the thermal OTOC can be written as the thermal average of the microcanonical OTOCs,
\begin{equation}
\label{eq:thermal averave}
C_T(t) = \frac{1}{Z}\sum_n e^{-\beta E_n}c_n(t)\, , \quad Z \equiv {\rm tr}[e^{-\beta H}]\, .
\end{equation}
We quantize the system and consider the time-independent Schr\"odinger equation
\begin{equation}
\label{eq:Schrodinger IHO}
-\frac{d^2}{dx^2}\psi_n(x) + \left[ - \frac{1}{4}\lambda^2 x^2 + g x^4 + \frac{\lambda^4}{64g} \right]\psi_n(x) = E_n\psi_n(x)\, ,
\end{equation}
where we take $\lambda=2, g=1/50$. We numerically solve this equation and obtain the energy eigenvalues $E_n$ and the wave functions $\psi_n(x)$. In Fig.~\ref{fig:energy_eigenvalues}, we show the obtained distribution of the energy eigenvalues.
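As a concrete illustration of this step, the following is a minimal numerical sketch based on a finite-difference discretization (the grid parameters and the discretization scheme are our assumptions; the text does not specify them). Note the convention $H=p^2+V$, so the kinetic operator is $-d^2/dx^2$ with $\hbar=1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

lam, g = 2.0, 1/50          # parameters used in this section
N, xmax = 4000, 20.0        # grid resolution and box size (assumed)
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]

V = -0.25*lam**2*x**2 + g*x**4 + lam**4/(64*g)

# H = p^2 + V  ->  -psi'' + V psi = E psi, by central differences
E, psi = eigh_tridiagonal(2.0/dx**2 + V, -np.ones(N - 1)/dx**2)
psi /= np.sqrt(dx)          # normalize: sum_i |psi_n(x_i)|^2 dx = 1

print(E[:14])   # pairs below V(0) = 12.5 are nearly degenerate
\end{verbatim}
The box size must be taken large enough that all eigenfunctions kept in later sums are negligible at the boundaries; otherwise the artificial hard cutoff distorts the spectrum.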
Following the general method for calculating the OTOCs numerically \cite{Hashimoto:2017oit}, we compute\footnote{In the evaluation, we include the energy eigenstates up to $n=192$.} the microcanonical OTOCs as functions of $t$ for each energy level $n$. In Fig.\ref{fig:microcanonical}, we show our numerical results. For lower/higher modes, the OTOCs do not show exponential growth. On the other hand, for intermediate modes ($n=9,10,11,12,13$), the OTOCs exponentially grow at early times. These intermediate energy eigenvalues are in the range $8<E<14$. Actually, the height of the unstable saddle from the bottom of the potential is $\frac{\lambda^4}{64g}=12.5$.
Note that the level $n=11$ (red in Fig.~\ref{fig:microcanonical}) is the closest\footnote{The energy levels below the top of the hill in Fig.~\ref{fig1a} are double lines.} to the top of the potential (Fig.~\ref{fig1a}) and shows the strongest exponential growth.
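In practice, the computation reduces to matrix algebra in the energy eigenbasis. Since $H=p^2+V$ (with $\hbar=1$), one has $p_{nm}=\frac{i}{2}E_{nm}x_{nm}$ with $x_{nm}=\langle n|x|m\rangle$ and $E_{nm}=E_n-E_m$, so that
\begin{align}
b_{nm}(t)\equiv -i\langle n|[x(t),p(0)]|m\rangle = \frac{1}{2}\sum_k x_{nk}x_{km}\left( E_{km}\, e^{iE_{nk}t} - E_{nk}\, e^{iE_{km}t} \right) , \qquad c_n(t) = \sum_m |b_{nm}(t)|^2 \, .
\end{align}
A sketch of this evaluation, continuing from the snippet above (the truncation at $K=192$ states follows the footnote; the implementation details are our own):
\begin{verbatim}
K = 192                                 # truncation of the sums
Ek = E[:K]
X = psi[:, :K].T @ (x[:, None]*psi[:, :K]) * dx   # x_nm = <n|x|m>
Enm = Ek[:, None] - Ek[None, :]                   # E_n - E_m

def c_micro(t):
    """Microcanonical OTOCs c_n(t) = sum_m |b_nm(t)|^2, for all n."""
    ph = np.exp(1j*Enm*t)               # e^{i E_nm t}
    b = 0.5*((X*ph) @ (X*Enm) - (X*Enm) @ (X*ph))
    return np.sum(np.abs(b)**2, axis=1)

def C_thermal(t, T):
    """Thermal OTOC: Boltzmann-weighted average of the c_n(t)."""
    w = np.exp(-(Ek - Ek[0])/T)         # shifted by E_1 for stability
    return w @ c_micro(t) / w.sum()
\end{verbatim}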
\begin{figure}
\centering
\includegraphics[width=90mm]{Log_cnt_soft_12p5_ver_2.pdf}
\caption{The microcanonical OTOCs for the IHO. We can see the strong exponential growth for intermediate modes (like $n=9\sim 13$), while lower modes and higher modes do not show initial exponential growth. The energy range of these intermediate modes corresponds to the local maximum of the potential (the red lines in Fig.~\ref{fig1a}).}
\label{fig:microcanonical}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=90mm]{Log_CTt_soft_12p5_ver_2.pdf}
\caption{The time dependence of the thermal OTOCs of the system \eqref{HP} for various values of the temperature $T$. The dashed part is non-linear and the solid part is linear (exponential in $t$). The time domains for the solid lines are determined for each temperature data set such that the linear fit provides the smallest confidence interval of the slope normalized by the slope itself. The obtained time domains are longer than twice ${\cal O}(1/\lambda_\text{OTOC})$, which certifies the exponential growth here.
}
\label{fig:thermal_IHO}
\end{figure}
The behavior of the microcanonical OTOCs is exactly what we expect from the structure of the IHO potential. When the energy is low, the wave function lives inside the well and does not reach the unstable saddle. If we raise the energy, the wave function begins to feel the hilltop of the potential. In addition, the wave function localizes around the unstable point\footnote{By the conservation of energy, the momentum of a particle is small when the potential is high. This means that the particle stays for a longer time around the turning point and the hilltop of the potential. Quantum mechanically, this means the wave function localizes around those points. We would like to thank Lea Ferreira dos Santos and Sa\'ul Pilatowsky for valuable discussions on localization of wave functions.}.
As a result, the corresponding microcanonical OTOC shows the exponential growth. When the energy is high enough, the effect of the unstable point on the wave function is buried, and the corresponding OTOCs do not show the exponential growth any more. In this IHO case, the origin of the exponential growth of the microcanonical OTOC is not chaoticity, but instability of the potential. Hence, the exponential growth of the OTOC does not necessarily indicate chaos.
By taking the thermal average of the microcanonical OTOCs, we compute the thermal OTOCs for various values of the temperature\footnote{The computation was done by discretizing the time coordinate by units of $0.1$. In Fig.~\ref{fig:thermal_IHO}, we connected those discrete points by smooth curves for a better visibility.}. The numerical results are shown in Fig.~\ref{fig:thermal_IHO}. We can find the exponential growth in the thermal OTOCs for high temperature\footnote{Interestingly, the Ehrenfest time looks almost the same (around $t=3$) for this region of $T$.}.
We fit the thermal OTOCs at early times by a function $a \exp[\lambda_{\rm OTOC}\, t]$ with $a$ and $\lambda_\text{OTOC}$ being adjustable constant parameters, to find the Lyapunov exponents $\lambda_{\rm OTOC}$.
In other words, the slope of the solid part in Fig.~\ref{fig:thermal_IHO} is the Lyapunov exponent at a given value of the temperature.
The temperature dependence of the Lyapunov exponents is shown in Fig.~\ref{fig:Lyapunov_IHO}. The bars represent the 95\% confidence interval for the Lyapunov exponent at a given value of the temperature.
Here, from the obtained Lyapunov exponents, we observe the following facts. First, the order of those measured exponents is equal to that of the classical Lyapunov exponent $\lambda_\text{saddle}=2$.
Thus, we find that indeed the classical instability of the unstable maximum of the IHO is detected by the thermal OTOC\footnote{The OTOC is a quantum counterpart of the square of the classical Poisson bracket. In view of this, in the comparison between $\lambda_\text{OTOC}$ and $\lambda_\text{saddle}$, a natural relation would be $\lambda_{\rm OTOC} = 2 \lambda_\text{saddle}$. However, considering a compression factor of the phase space for the dominantly growing mode \cite{Xu:2019lhc}, this factor $2$ may drop off.
See \cite{Xu:2019lhc} for the discussion.}.
\begin{figure}
\centering
\includegraphics[width=90mm]{Lyapunov_IHO.pdf}
\caption{The temperature dependence of the Lyapunov exponents $\lambda_\text{OTOC}$ of the thermal OTOCs. The bar represents the 95\% confidence interval for the exponent. The dashed curve is the fitting function for the Lyapunov exponents obtained in the range $20\leq T \leq 70$.}
\label{fig:Lyapunov_IHO}
\end{figure}
Second,
as the temperature goes up, the exponent slightly decreases monotonically. To check if the inequality \eqref{boundsaddle} is satisfied in our IHO system, we study the Lyapunov exponent in the high-temperature limit $T \to \infty$. To see this, we assume that $\lambda_{\rm OTOC}(T)$ is analytic around $T=\infty$, that is, it can be expanded as
\begin{equation}
\label{eq:expansion}
\lambda_{\rm OTOC}(T)=a_0+\frac{a_1}{T}+\frac{a_2}{T^2}+\frac{a_3}{T^3}+\cdots \, .
\end{equation}
Using this as a fitting function, we find
\begin{equation}
\label{eq:fitting}
\lambda_{\rm OTOC}(T) \sim 1.58+\frac{8.01}{T}
\end{equation}
as a reasonable fitting function for $\lambda_\text{OTOC}$ in the high temperature region.
In Fig.~\ref{fig:Lyapunov_IHO}, this fitting function is drawn as the dashed curve. In Appendix \ref{App:A}, we discuss the error analysis for the fitting. The fitting function \eqref{eq:fitting} is non-negative, which satisfies the general requirement that the Lyapunov exponent is non-negative by definition.
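The two-step fitting procedure (a linear fit of $\log C_T(t)$ over the growth window, then the $1/T$ fit) can be sketched as follows, reusing \texttt{C\_thermal} from the snippet above (the growth window and the temperature grid below are our simplified assumptions; in the actual analysis the windows are selected by the confidence-interval criterion described in the caption of Fig.~\ref{fig:thermal_IHO}):
\begin{verbatim}
from scipy.stats import linregress
from scipy.optimize import curve_fit

ts = np.arange(1.0, 3.0, 0.1)           # assumed growth window
Ts = np.array([20., 30., 40., 50., 60., 70.])

lams = []
for T in Ts:
    C = np.array([C_thermal(t, T) for t in ts])
    lams.append(linregress(ts, np.log(C)).slope)  # lambda_OTOC(T)

model = lambda T, a0, a1: a0 + a1/T     # leading terms of the 1/T expansion
(a0, a1), cov = curve_fit(model, Ts, np.array(lams))
print(a0, a1)   # to be compared with the fit quoted in the text
\end{verbatim}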
\begin{figure}
\centering
\subfigure[Potential shape and energy levels. The energy levels in red color play an important role for exponential growth of OTOCs.]
{\includegraphics[width=6cm]{Poten_62p5.pdf} \label{pot2a}} \ \ \
\subfigure[Quantum Lyapunov exponent vs temperature. The red fitting curve is $\lambda_{\rm OTOC}(T) \sim 4.01+35.4/T$.]
{\includegraphics[width=6cm]{Lyap_model_4_fitting.pdf} \label{pot2b}}
\caption{Inverted harmonic oscillator potential for $\lambda =2\sqrt{5}, g = 1/10, (V(0) = 62.5)$.}\label{pot2}
\end{figure}
We repeat the analysis for another potential with parameters $\lambda = 2\sqrt{5}$ and $g = 1/10$, which are chosen such that the potential minimum is located at $x=\pm 5$ for the comparison with the previous case. See Fig.~\ref{pot2a} for the potential shape, where the energy levels are displayed as horizontal lines. Similarly to the previous case in Fig.~\ref{fig:energy_eigenvalues}, the states with energy levels around the local maximum play an important role. Only the microcanonical OTOCs of those ``red'' states show the exponential growth at early times. Furthermore, these give the dominant contributions to the exponential growth of the thermal OTOCs.
Fig.~\ref{pot2b} shows the thermal Lyapunov exponents as a function of temperature. The high temperature region is fitted by the red dashed line:
\begin{equation}
\label{eq:fitting2}
\lambda_{\rm OTOC}(T) \sim 4.01+\frac{35.4}{T}\, .
\end{equation}
Importantly, the Lyapunov exponent does not vanish in the high-temperature limit.
The asymptotic values of \eqref{eq:fitting} and \eqref{eq:fitting2} at $T=\infty$ are smaller than the corresponding classical Lyapunov exponents ($\lambda_{\mathrm{saddle}}=2$ for the first case and $\lambda_{\mathrm{saddle}}=2\sqrt{5} \sim 4.47$ for the second case). This appears to slightly violate the proposed inequality \eqref{boundsaddle}.
Noting that the inequality \eqref{boundsaddle} was derived in the semiclassical limit \cite{Xu:2019lhc}, this slight violation would be due to quantum effects. In addition, the Hilbert space of our quantum mechanical system is infinite dimensional, and thus the infinite temperature limit is not well-defined.
These would be possible reasons for the slight violation of the inequality \eqref{boundsaddle}. Nevertheless, the observation that the quantum Lyapunov exponent $\lambda_\text{OTOC}$ asymptotes to a nonzero constant at $T=\infty$ is one of our important conclusions.
The measured $\lambda_\text{OTOC}$ is a monotonically decreasing function of $T$. This can be naturally understood as follows.
If we raise the temperature, the higher modes of the microcanonical OTOCs contribute to the thermal OTOC. In the IHO system, since the microcanonical OTOCs for the higher modes do not show any exponential growth, they do not contribute to the exponential behavior of the thermal OTOC but rather may smear it. Hence, $\lambda_{\rm OTOC}(T)$ is expected to be a monotonically decreasing function of $T$, $d\lambda_{\rm OTOC}/dT\leq0$. This is explicitly observed in our numerical evaluation of $\lambda_\text{OTOC}(T)$.
However, this is not a universal feature, because there are also cases where $d\lambda_{\rm OTOC}/dT\geq0$. It depends on the shape of the potential, as will be shown in the following section.
In this section, we have taken the simplest potentials which include the inverted harmonic oscillator, and have seen that the Lyapunov exponent of the thermal OTOCs is non-vanishing. This explicitly shows that the thermal OTOCs can grow exponentially even in non-chaotic systems. A possible concern would be that this result may be specific to the Higgs-type potential \eqref{HP}. To resolve the issue, in the next section we shall see the universality of the results.
\section{Universality of the growth}
\label{sec:3}
In this section,
we study the universality of the exponential growth phenomenon of the thermal OTOC in one-dimensional quantum mechanics. In the previous section, we have used the
potential of the Higgs-type \eqref{HP}. To study the universality, let us consider the following potential:
\begin{align}
V(x) = \left\{
\begin{array}{lll}
\left(x^2 - \displaystyle\frac{\lambda^2}{8g}\right)^2
= - \frac{1}{4}\lambda^2 x^2 + g x^4 + \displaystyle\frac{\lambda^4}{64g} &\quad & \left(|x| \leq \frac{\lambda}{\sqrt{8g}}\right)\\
\infty & \quad & \left(\frac{\lambda}{\sqrt{8g}} < |x|\right)
\end{array}
\right.
\label{Vhardd00}
\end{align}
This potential shares the same form as \eqref{HP} inside, but we put hard walls at $x = \pm\frac{\lambda}{\sqrt{8g}} $.
See, for example, Fig.~\ref{pothhh} for the shape of the potentials with the same values of the parameters as the ones in Sec.~\ref{sec:2}.
\begin{figure}
\centering
\subfigure[$\lambda =2\sqrt{5}, g = 1/10, V(0) = 62.5$]
{\includegraphics[width=6cm]{pot_62p5_gray.pdf} \label{pothhha}} \ \ \
\subfigure[$\lambda =2, g = 1/50, V(0)= 12.5$]
{\includegraphics[width=6cm]{pot_12p5_gray.pdf} \label{pothhhb}}
\caption{Potential energy with hard walls and energy levels. The levels below the top of the hill are double lines.}\label{pothhh}
\end{figure}
We investigate this hard-wall model for two reasons. First,
it shares the same potential inside as that of Sec.~\ref{sec:2}; thus, while the classical saddle effect is kept, the effect of the boundaries can be efficiently probed by a comparison to the results in Sec.~\ref{sec:2}.
Second, the hard-wall model may help us develop more analytic intuition because its eigenfunctions are basically trigonometric functions at high energy levels regardless of the potential hill inside the hard-wall potential. We will call the models in Sec.~\ref{sec:2} the ``soft-wall'' model to compare with the ``hard-wall'' model.
As a concrete example, we deal with the potential with $\lambda =2\sqrt{5}$ and $g = 1/10$, shown in Fig.~\ref{pothhha}. By the same procedures as Sec.~\ref{sec:2}, we compute the microcanonical OTOCs, some of which are shown in Fig.~\ref{MC123}.
Let us compare the features with those of Sec.~\ref{sec:2}.
There are two common features:
i) The level close to the top of the hill has the steepest slope, {\it i.e.} the largest (microcanonical) Lyapunov exponent. In Fig.~\ref{MC123} this steepest slope corresponds to $n=15$ (orange). ii) For small $n$ there is no exponential growth of the microcanonical OTOCs. In Fig.~\ref{MC123} it corresponds to $n=1$ (dashed black).
While we have these common features which are physically reasonable,
there is a big difference from the soft-wall case in Fig.~\ref{fig:microcanonical}. As the energy level $n$ increases above the height of the potential hill, the microcanonical OTOCs still show the exponential growth, while in Fig.~\ref{fig:microcanonical} in Sec.~\ref{sec:2} they are suppressed. The time range of the exponential growth decreases as $n$ increases.
\begin{figure}
\centering
{\includegraphics[width=9cm]{Log_cnt_hard_62p5.pdf} }
\caption{Time evolution of the microcanonical OTOCs for the model in Fig.~\ref{pothhha} ($\lambda =2\sqrt{5}, g = 1/10$). The dotted curves do not have ranges of exponential growth.}\label{MC123}
\end{figure}
By using the microcanonical OTOCs, we compute the thermal OTOCs, some of which at given values of the temperature are shown in Fig.~\ref{T123}. At low temperature, there is no exponential growth: see the dashed curve for the $T=1$ case, for example. By reading off the slopes of the linear part of the curves in Fig.~\ref{T123} we make a plot of the quantum Lyapunov exponents at several values of the temperature. See the blue dots in Fig.~\ref{QL123a}. In the infinite temperature limit, the quantum Lyapunov exponent saturates to approximately $3.05$. For comparison, in Fig.~\ref{QL123a} the results of the soft-wall case are displayed as red dots. We find that the Lyapunov exponent asymptotes to a nonzero constant, and the value is ${\cal O}(\lambda_\text{saddle})$.
These are common to what have been found in the previous section, and we find the universality.
\begin{figure}
\centering
{\includegraphics[width=9cm]{Log_CTt_hard_62p5.pdf} }
\caption{Time evolution of the thermal OTOCs for the model in Fig.~\ref{pothhha} ($\lambda =2\sqrt{5}, g = 1/10$). The dotted curves do not have ranges of exponential growth. }\label{T123}
\end{figure}
By doing the same analysis for the hard-wall model with $\lambda =2$ and $g = 1/50$ shown in Fig.~\ref{pothhhb}, we obtain the blue dots in Fig.~\ref{QL123b}. The red dots are for the soft-wall case in Sec.~\ref{sec:2}.
In this case, it is not clear if the quantum Lyapunov exponent saturates to a finite constant in the infinite temperature limit. The fitting with a $1/T$ expansion is a way to estimate the asymptotic value, which turns out to be finite. To find a more reliable asymptotic behavior, we would need to investigate the higher temperature regime with more accuracy, despite the numerical difficulties coming from the computational cost.
As seen in Fig.~\ref{QL123},
contrary to the soft-wall case in Sec.~\ref{sec:2},
the quantum Lyapunov exponents in the hard-wall models increase as the temperature increases,
$d\lambda_{\rm OTOC}/dT\geq0$. This can be understood from the fact that the microcanonical OTOCs are not suppressed as $n$ increases, as shown in Fig.~\ref{MC123}. Furthermore, as $n$ grows, $c_n(t)$ asymptotes to a function with a constant exponent, say $\bar{c}(t)$.
In the infinite temperature limit,
\begin{equation}
C_T(t) = \frac{1}{Z}\sum_n e^{-\beta E_n}c_n(t) \sim \bar{c}(t) \, .
\end{equation}
Thus, the quantum Lyapunov exponent of the thermal OTOC is equal to the microcanonical Lyapunov exponent in the large $n$ limit. Therefore, the exponential growth of the thermal OTOC is the accumulation effect of the microcanonical Lyapunov exponents of the {\it higher} modes rather than the strong effect of the microcanonical Lyapunov exponents of the {\it intermediate} levels near the saddle point.
For example, in Fig.~\ref{MC123}, we find that $c_n(t) \to \exp [3.05t]$ at large $n$, whose exponent is equal to the quantum Lyapunov exponent in
Fig.~\ref{T123}.
This feature is in strong contrast to that in the soft-wall models.
However, for both the hard-wall and the soft-wall cases the underlying physics comes from the saddle point.
In particular, for the hard-wall case, it seems that the effect of the unstable maximum propagates among energy levels quite effectively and spreads to the whole system.
This good efficiency may come from the commensurability of the energy levels and the simple trigonometric wave functions. So indeed the boundary walls of the potential affect the delicate behavior of the thermal OTOCs.
\begin{figure}
\centering
\subfigure[Model in Fig.~\ref{pothhha} ($\lambda =2\sqrt{5}, g = 1/10)$. ]
{\includegraphics[width=6cm]{Lyap_62p5_Total_ver_4.pdf} \label{QL123a}} \ \ \
\subfigure[Model in Fig.~\ref{pothhhb} ($\lambda =2, g = 1/50) $]
{\includegraphics[width=6cm]{Lyap_12p5_Total_ver_4.pdf} \label{QL123b}}
\caption{Quantum Lyapunov exponent vs temperature. Red dots: soft-wall, Blue dots: hard-wall.}\label{QL123}
\end{figure}
In spite of the difference in the high temperature behavior of the soft-wall and hard-wall models, the Lyapunov exponent asymptotes to finite values in the infinite temperature limit. The values slightly violate the semiclassical inequality \eqref{boundsaddle}, and again, this could be due to the quantum nature of the system.
\section{Lyapunov bound for quantum mechanics in one dimension}
\label{sec:4}
As described in the introduction, large $N$ quantum mechanical models may admit
an effective description with just a single degree of freedom, and in such a case the chaos bound
\eqref{MSSbound}
is expected also for a quantum mechanical model with just a single degree of freedom. Since such a quantum mechanics never has chaos, the only possibility is to have the unstable maximum in the potential generate a nonzero Lyapunov exponent, in the manner described in Sec.~\ref{sec:2} and Sec.~\ref{sec:3} of this paper. With this motivation, we shall look for a mechanism of why \eqref{MSSbound} can work even in one-dimensional quantum mechanics.
In fact, the results of Sec.~\ref{sec:2} and Sec.~\ref{sec:3} show that all Lyapunov exponents measured by the thermal OTOCs satisfy \eqref{MSSbound}. The bound
\eqref{MSSbound} would have been violated if the exponential growth were seen
at temperatures below $\lambda_\text{saddle}/2\pi$, but such
values are too low for the exponential growth to appear, as observed in Fig.~\ref{fig:thermal_IHO}. This suggests that there may exist some quantum mechanism which prohibits going to temperatures low enough to violate the bound \eqref{MSSbound}.
In this section, we provide an intuitive explanation of the chaos bound \eqref{MSSbound}
for generic quantum mechanics in one dimension. What we assume is
that the exponential growth of the thermal OTOC, with the Lyapunov exponent $2\lambda$, is caused by
a potential hill of the form of an inverted harmonic oscillator, whose classical Lyapunov exponent is $\lambda$.
Under this assumption, with generic quantum mechanical arguments, we can derive the bound \eqref{bound} for the Lyapunov exponent:
\begin{align}
\lambda \lesssim c\, T \, , \quad c \simeq {\cal O}(1) \, .
\end{align}
The principles which we use for our derivation of \eqref{bound} are the following natural facts which any quantum mechanical system is subject to.
For any quantum wave function of an energy eigenstate with energy $E$ to probe the local maximum, the following two conditions apply.
\begin{itemize}
\item {\bf Potential height condition.}
The quantum wave function can probe the local maximum only when its energy $E$
is larger than the height of the potential $V_\text{top}$,
\begin{align}
E \gtrsim V_\text{top} \, .
\label{cond1}
\end{align}
\item {\bf Quantum resolution condition.}
The quantum wave function can discriminate the local maximum only when a half of the wave length of the wave function is smaller than the effective width $\Delta x$ of the
hill-shaped potential. The wave length $l$ of a plane wave
and its energy $E$ are related as $E = \frac{(2\pi)^2}{l^2}$. Combining the two (as spelled out right after this list), the quantum resolution condition is
\begin{align}
E > \frac{\pi^2}{(\Delta x)^2} \, .
\label{cond2}
\end{align}
\end{itemize}
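For concreteness, the condition \eqref{cond2} follows from the resolution requirement in one line: demanding that a half wave length fit inside the width,
\begin{align}
\frac{l}{2} \lesssim \Delta x
\quad \Longrightarrow \quad
E = \frac{(2\pi)^2}{l^2} \gtrsim \frac{(2\pi)^2}{(2\Delta x)^2} = \frac{\pi^2}{(\Delta x)^2} \, .
\end{align}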
Since the thermal OTOC is a summation of microcanonical OTOCs with the thermal weight $\exp[-E/T]$,
a necessary
condition for the thermal OTOC at temperature $T$ to probe the local maximum is, according to
\eqref{cond1} and \eqref{cond2},
\begin{align}
T \gtrsim \text{max} \left\{ V_\text{top}, \, \frac{\pi^2}{(\Delta x)^2} \right\} \, .
\label{boundT}
\end{align}
We evaluate the right hand side of this inequality to derive \eqref{bound}.
\begin{figure}[t]
\centering
\includegraphics[width=130mm]{pot_fig.pdf}
\caption{A schematic picture of the potentials we use for evaluating the bound for the Lyapunov exponent.
Left: the potential \eqref{Vquart}. Right: the potential \eqref{Vhardd}.}
\label{fig:pot}
\end{figure}
To illustrate the generic statement, let us evaluate the right hand side of \eqref{boundT}
with a concrete potential as the first example:
\begin{align}
V(x) = -\frac14 \lambda^2 x^2 + g x^4 + \frac{\lambda^4}{64g} \, ,
\label{Vquart}
\end{align}
with $g>0$. See the left figure of Fig.~\ref{fig:pot}.
The last term is included so that the bottom of the potential is at $V=0$.
The total Hamiltonian is $H = p^2 + V(x)$. This potential includes our analysis in Sec.~\ref{sec:2}
for some chosen values of $\lambda$ and $g$. The potential has a local maximum $x=0$, at which the classical
Lyapunov exponent is $\lambda$.
In this case we find the height of the potential as
\begin{align}
V_\text{top} = \frac{\lambda^4}{64g}\, .
\end{align}
The natural choice for the effective width of the potential is the distance between the two minima of the potential,
\begin{align}
\Delta x = \frac{\lambda}{\sqrt{2g}} \, .
\end{align}
Using these, the inequality \eqref{boundT} is written as
\begin{align}
T > \text{max} \left\{ \frac{\lambda^4}{64g}, \frac{2\pi^2g}{\lambda^2} \right\} \, .
\end{align}
Our goal is to find the most effective way to saturate this bound. Hence we change the potential while fixing
$\lambda$ to find the minimum value of the temperature $T$. Since the first term in the right hand side decreases and the second increases monotonically with $g$, the maximum of the two is minimized when they coincide; varying $g$,
the result is
\begin{align}
T_\text{min} = \frac{\sqrt{2}\pi}{8} \lambda \,
\label{Tminl}
\end{align}
with the optimized potential parameter
\begin{align}
g= \frac{\lambda^3}{8\sqrt{2}\pi} \, .
\label{optg}
\end{align}
This equation \eqref{Tminl} is equivalent to the bound
\begin{align}
\lambda < \frac{4\sqrt{2}}{\pi} T
\label{boundquart}
\end{align}
which is nothing but \eqref{bound} which we wanted to show\footnote{The coefficient $4 \sqrt{2}/\pi$ is
${\cal O}(1)$ and is less than $2\pi$.}.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{Lyapunov_bound.pdf}
\caption{The time evolution of the thermal OTOC $C_T(t)$ of the system \eqref{Vquart}, with the optimized coupling $g$ \eqref{optg}, at the
temperature value $T$ saturating \eqref{boundquart}. For the numerical calculation we chose $\lambda=2$. Obviously there is no exponential growth seen, and the minimal value of the temperature is too low to detect the local maximum effectively.
}
\label{fig:boundT}
\end{figure}
It should be noted that this bound is a necessary condition, and for a given $\lambda$ the minimal value of
the temperature to observe the exponential growth of the thermal OTOC would be higher than the value
saturating the inequality \eqref{boundquart}. To see this concretely, we numerically calculate the thermal OTOC
of the system \eqref{Vquart} with $\lambda=2$ and the value of $g$ tuned to satisfy \eqref{optg}. At the
temperature value saturating the inequality \eqref{boundquart}, the thermal OTOC is plotted in Fig.~\ref{fig:boundT}.
The OTOC does not show any exponential growth at this value of the temperature. Therefore, we are just looking at
necessary conditions for the exponential growth to be seen in the thermal OTOCs.
The inequality \eqref{bound} can be shown in a more general setup of the potential. Consider the potential
\begin{align}
V(x) = \left\{
\begin{array}{lll}
-\frac14 \lambda^2x^2 + \frac14 \lambda^2a^2 &\quad & (|x| \leq a)\\
0 &\quad & (a \leq |x| \leq a') \\
\infty & \quad & (a' < |x|)
\end{array}
\right.
\label{Vhardd}
\end{align}
See the right panel of Fig.~\ref{fig:pot}.
There exists a potential hill whose local maximum is at $x=0$.
The classical Lyapunov exponent at $x=0$ is taken to be $\lambda$, as in the previous case.
The hard walls are located at $|x|=a'$.
The model is similar to the one used in Sec.~\ref{sec:3}, and now we allow
arbitrary location of the hard walls. In fact, in the following discussion, the
potential shape in the region $|x|>a$ does not matter.
Since the bottom of the potential is $V=0$, we find
\begin{align}
V_\text{top} = \frac14 \lambda^2 a^2 \, .
\end{align}
The effective width of the potential hill is obviously
\begin{align}
\Delta x = 2a \, .
\end{align}
Then the bound for the temperature of the thermal OTOC \eqref{boundT} is\footnote{
In the right hand side of \eqref{T>max}, the quantity $\frac{\pi^2}{4a^2}$
happens to be equal to the zero-point energy for the case of a single-well potential with the size $2a$.
In this sense, one may think that
the quantum resolution condition may be rephrased as the condition that the temperature is
larger than the order of the ground state energy.
But this condition can always be achieved by simply taking
$a' \to \infty$, while the quantum resolution condition in fact forbids this limit.
}
\begin{align}
T > \text{max} \left\{ \frac14\lambda^2a^2, \frac{\pi^2}{4a^2} \right\} \, .
\label{T>max}
\end{align}
The right hand side is minimized when the two terms coincide, at $a= \sqrt{\pi/\lambda}$, at which we find
\begin{align}
\lambda < \frac{4}{\pi} T \, .
\end{align}
This is again the inequality \eqref{bound} with the ${\cal O}(1)$ numerical coefficient\footnote{The coefficient $4/\pi$ is less than $2\pi$.}.
Note that this argument does not depend on $a'$.
Thus we can generally expect that the argument above will not
depend on the structure of the potential outside of the inverted harmonic oscillator part, and we have the generic bound \eqref{bound} for any bounded potential which includes the inverted harmonic potential.
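The two minimizations above can also be checked symbolically; the following is a short sketch using \texttt{sympy} (the variable names and the script itself are ours):
\begin{verbatim}
import sympy as sp

lam, g, a = sp.symbols('lambda g a', positive=True)

# Quartic walls: T > max(lam^4/(64 g), 2 pi^2 g/lam^2). The first term
# decreases and the second increases in g, so the optimum is at equality.
g_opt = sp.solve(sp.Eq(lam**4/(64*g), 2*sp.pi**2*g/lam**2), g)[0]
print(g_opt, sp.simplify(lam**4/(64*g_opt)))
# -> g = lambda^3/(8 sqrt(2) pi),  T_min = sqrt(2) pi lambda / 8

# Hard walls: T > max(lam^2 a^2/4, pi^2/(4 a^2)).
a_opt = sp.solve(sp.Eq(lam**2*a**2/4, sp.pi**2/(4*a**2)), a)[0]
print(a_opt, sp.simplify(lam**2*a_opt**2/4))
# -> a = sqrt(pi/lambda),  T_min = pi lambda / 4
\end{verbatim}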
In this section, we have provided a derivation of \eqref{bound} which is
of the same form as the chaos bound discovered in \cite{Maldacena:2015waa}.
The latter is the bound for chaotic large $N$ systems, while our bound \eqref{bound}
is for one-dimensional quantum mechanical systems which are classically non-chaotic.
Possible concrete relations between the two, if any along the direction described in the introduction, would be interesting.
\section{Summary and discussions}
\label{sec:5}
In this paper we have investigated Lyapunov exponents $\lambda_\text{OTOC}$ of the thermal OTOCs for one-dimensional quantum mechanical systems with an inverted harmonic oscillator potential. The systems are non-chaotic, and their classical counterparts are generic systems with a local maximum of the potential, which generates a classical Lyapunov exponent $\lambda_\text{saddle}$. We have numerically evaluated $\lambda_\text{OTOC}(T)$ for various values of the temperature. We have discovered that
at values of the temperature above a certain threshold the exponential growth is observed in the thermal OTOCs, and the measured $\lambda_\text{OTOC}(T)$ is of the same order as $\lambda_\text{saddle}$. As we extrapolate our numerical results to $T=\infty$, the Lyapunov exponent $\lambda_\text{OTOC}(T=\infty)$ is suggested to be non-vanishing. These features are shared by various quantum mechanical models and are universal, as we studied in detail in Sec.~\ref{sec:3}.
Our results of $\lambda_\text{OTOC}(T)$ for the Higgs-type potential case are summarized in Fig.~\ref{fig:Lyapunov_IHO} and Fig.~\ref{pot2b} in Sec.~\ref{sec:2}, and for the hard-wall potential case in Fig.~\ref{QL123} in Sec.~\ref{sec:3}.
Our findings have shown that the thermal OTOC can grow exponentially in time generically in one-dimensional quantum mechanical systems which are regular (non-chaotic). The temperature dependence of the OTOCs confirms that the origin of the exponential growth is a classical Lyapunov exponent at the saddle (the local maximum) of the potential. Since this is shown in our generic one-dimensional systems, it is natural to expect that finite-dimensional quantum systems follow the same behavior. If we equate the exponential growth of the thermal OTOC with the information scrambling at finite temperature, we are led to the conclusion that the information scrambling can happen in non-chaotic quantum systems.
At low temperature the exponential growth cannot be numerically identified, while the Lyapunov exponent $\lambda_\text{OTOC}$, if observed, needs to be ${\cal O}(\lambda)$, which is fixed by the curvature of the potential at the unstable maximum. This suggests that there exists a bound relating the Lyapunov exponent and the temperature, which is suggestive in view of the ``chaos bound'' \eqref{MSSbound} \cite{Maldacena:2015waa}.
In Sec.~\ref{sec:4} we have derived the bound \eqref{bound}, $\lambda_\text{OTOC}(T) \lesssim c \, T$ with $c \simeq {\cal O}(1)$, for generic one-dimensional quantum systems. This bound is simple and quite similar to the chaos bound \eqref{MSSbound}. The derivation is based on two facts which are satisfied generically in quantum mechanics in one dimension: first, the energy of the wave function probing the local maximum needs to be higher than the potential energy of the maximum, and second, the wave length needs to be shorter than the scale of the potential hill. It is surprising that such a simple bound of the form \eqref{bound} holds for a wide class of quantum systems.
Several discussions on our results are in order.
First, our $\lambda_\text{OTOC}$ evaluated at $T=\infty$ by a fitting does not satisfy the semiclassical inequality \eqref{boundsaddle}, as described in Sec.~\ref{sec:2} and Sec.~\ref{sec:3}. This could be due to the fact that our analyses are not semiclassical but fully quantum, and/or the fact that the Hilbert space of our system is infinite dimensional. Pinning down the reason would help us when we generalize the analyses to quantum field theories which have much bigger Hilbert spaces\footnote{See \cite{Stanford:2015owe} for an example of the evaluation of the thermal OTOC in a quantum field theory.}, in view of the holographic principle. Numerical investigation of the semiclassical limits of our system, and comparison to the general semiclassical analyses \cite{Jalabert:2018kvl}, may provide a path to a resolution.
Next, in our bound \eqref{bound}, the numerical coefficient $c$ depends on what kind of potential one chooses for the walls. A natural question is whether $c=2\pi$ or not, to compare \eqref{bound} with \eqref{MSSbound}. In fact, it is difficult to find the exact value of $c$ which can work for any system, because the principles we use for the derivation are difficult to quantify: the wave length needs to be small compared to the length scale of the potential hill, but here, the ``length scale'' is ambiguous. Therefore, to make a precise statement with some explicit number $c$, we may need to introduce a measure of the detectability of the curvature of the potential by wave functions.
Finally, as described in the introduction, finding any possible relation between the chaos bound \eqref{MSSbound} and our quantum mechanical bound \eqref{bound} would be interesting. It may open up a bridge between large-$N$ quantum mechanics and few-body quantum mechanics, through information scrambling, chaos and the holographic principle. We would like to revisit these issues in the future.
\acknowledgments
K.~H.~and R.~W.~would like to thank Lea Ferreira dos Santos and Sa\'ul Pilatowsky
for valuable discussions which motivated the present work. K.~H.~would like to thank Takeshi Morita
for valuable discussions.
K.~H.~was supported in part by JSPS KAKENHI Grant No.~JP17H06462.
K.-Y.~K.~and K.-B.~H.~were supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT $\&$ Future Planning (NRF-2017R1A2B4004810) and a GIST Research Institute (GRI) grant funded by GIST in 2020.
\begin{figure}[]
\centering
\includegraphics[width=75mm]{a.pdf}
\includegraphics[width=75mm]{ab.pdf}\\
\includegraphics[width=75mm]{abc.pdf}
\includegraphics[width=75mm]{abcd.pdf}
\caption{Blue curves (lines): the fitting function. Yellow region: 95\% confidence interval for the fitting.}
\label{fig:the temperature dependence}
\end{figure}
|
2,877,628,091,207 | arxiv | \section*{\small}}
\renewcommand{\bibfont}{\small}
\renewcommand{\bibsep}{2pt}
\usepackage[font=small,labelfont={color=black,bf},textfont={color=black},format=plain,indention=0cm,justification=centerlast,skip=0.1cm]{caption,subfig}
\makeatletter
\renewcommand{\fnum@figure}{\textbf{\small\mbox{Fig.~\thefigure}}}
\renewcommand{\fnum@table}{\textbf{\small\mbox{Table~\thetable}}}
\makeatother
\DeclareRobustCommand{\uppartial}{\text{\rotatebox[origin=t]{19}{\scalebox{0.99}[1]{$\partial$}}}\hspace{-1pt}}
\newcommand{\sech}[1]{\mathrm{sech}{#1}}
\newcommand{\rect}[1]{\mathrm{rect}{#1}}
\newcommand{\erf}[1]{\mathrm{erf}{#1}}
\newcommand{\erfc}[1]{\mathrm{erfc}{#1}}
\newcommand{\rule{0.09in}{0.09in}}{\rule{0.09in}{0.09in}}
\newcommand{\textbf{Gas diffusion in nanoporous thin films}}{\textbf{Gas diffusion in nanoporous thin films}}
\newcommand{\linefindoc}{\color{black}%
\vspace{0.5cm}
\centering\rule{0.7\linewidth}{1.2pt}\\%
\vspace{-0.39cm}
\rule{0.5\linewidth}{1.2pt}\\%
\vspace{-0.39cm}
\rule{0.3\linewidth}{1.2pt}%
\vspace{-0.5cm}}
\def\vprod{\boldsymbol{\cdot}}
\DeclareRobustCommand{\uppartial}{\text{\rotatebox[origin=t]{15}{\scalebox{0.95}[1]{$\partial$}}}\hspace{-1pt}}
\newcommand{\hyphen}[1]{---{#1}---}
\newenvironment{rcases}
{\left.\begin{aligned}}
{\end{aligned}\quad\right\rbrace}
\usepackage[colorlinks=true]{hyperref}
\hypersetup{
pdffitwindow=true,
pdftitle={\textbf{Gas diffusion in nanoporous thin films}},
pdfauthor={L. N. Acquaroli},
pdfsubject={},
pdfcreator={L.N. Acquaroli},
pdfproducer={L.N. Acquaroli},
pdfkeywords={},
colorlinks={true},
linkcolor={NeonBlue},
citecolor={Cinnabar},
filecolor={Cinnabar},
urlcolor={NeonBlue}
}
\usepackage{url}
\definecolor{NeonBlue}{rgb}{0.11,0.22,0.73}
\definecolor{Cinnabar}{rgb}{0.8078,0.0863,0.1255}
\usepackage{titlesec}
\titlelabel{\thetitle. }
\titleformat*{\section}{\centering\large\bfseries}
\titleformat*{\subsection}{\centering\bfseries}
\usepackage{titling}
\newcommand{\HorRule}{\color{crimson}%
\rule{\linewidth}{.5pt}%
}
\pretitle{\vspace{-40pt} \begin{center} \vspace{5pt} \fontsize{14}{14} \bfseries \color{black} \selectfont }
\title{\textbf{Gas diffusion in nanoporous thin films}}
\posttitle{\par\end{center}\vspace{-.15cm}}
\preauthor{\vspace{-13pt} \begin{center}
\bigskip \color{black}}
\author{\textbf{Leandro N. Acquaroli}}
\postauthor{\small \color{black}
\medskip
{Department of Engineering Physics, Ecole Polytechnique Montreal
P.O. Box 6079, Station Centre-Ville, Montreal (QC) H3C 3A7, Canada}
\vspace{-0.4cm}
\date{\small\today}
\par\end{center}}
\usepackage{fancyhdr}
\pagestyle{fancy}
\usepackage{lastpage}
\lhead{}
\chead{}
\rhead{\footnotesize \textit{L. N. Acquaroli. \textbf{Gas diffusion in nanoporous thin films}.}}
\lfoot{}
\cfoot{}
\rfoot{\footnotesize \thepage\ of \pageref{LastPage}}
\renewcommand{\headrulewidth}{0.0pt}
\renewcommand{\footrulewidth}{0.0pt}
\usepackage{abstract}
\setlength{\absleftindent}{-0cm}
\setlength{\absrightindent}{-0cm}
\raggedbottom
\usepackage{indentfirst}
\renewcommand{\thetable}{\Roman{table}}
\renewcommand{\thesection}{\Roman{section}}
\hyphenation{}
\begin{document}
\setlength{\belowdisplayskip}{5pt}\setlength{\belowdisplayshortskip}{5pt}
\setlength{\abovedisplayskip}{5pt}\setlength{\abovedisplayshortskip}{5pt}
\columnsep 0.6cm
\renewcommand{\abstractname}{}
\twocolumn[
\maketitle
\vspace{-1.7cm}
\begin{onecolabstract}
We analyze the Fickian diffusion of a gas inside porous nanomaterials through the one-dimensional diffusion equation in nanopores, for various cases of boundary conditions and for homogeneous and non-homogeneous problems. We study the diffusion problems starting from the case without adsorption of the gas inside the pores, and move to more complex situations with surface adsorption at the pore walls and at the pore tips. Different methods of solution are reviewed depending on the problem, such as similarity transformation, Laplace transform, separation of variables, the Danckwerts method and the Green's functions technique. The recovery step, when the source of diffusion is switched off after the steady state is reached, is presented as well for the different problems.
\end{onecolabstract}
\vspace{1cm}
]
\fancypagestyle{plain}{%
\fancyhf{}
\fancyfoot[R]{\footnotesize \thepage\ of \pageref{LastPage}}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}}
\section{Introduction}
In the last decades, research on sensors fabricated with nanomaterials has increased due to their broad range of applications, from optoelectronics to sensing~\cite{bisi1, theiss1, monsouri1}.
Porous silicon \hyphen{PS} is a nanomaterial obtained by electrochemical anodization of crystalline silicon \hyphen{c-Si} wafers in hydrofluoric acid \hyphen{HF} solutions containing a surfactant such as ethanol \hyphen{EtOH}. Under proper preparation conditions, a porous network grows inside the c-Si wafer with pore sizes varying from \SI{2}{\nano\meter} up to \SI{10}{\micro\meter}, and surface areas up to \SI{800}{\square\meter\per\gram}. These properties, together with its fast preparation and its diverse and tuneable optical, electrical and surface-chemical properties, have predestined PS for sensor and biosensor applications~\cite{adfm.200500218, adfm.200500899, ACQUAROLI2010189, 10408430903245369, Haidary, C4RA04184D, doi:10.1002/adtp.201800095}.
We present an analysis of the different approaches to the one-dimensional diffusion equation, considering nanoscale mass transport inside a porous material.
Consider a porous thin film on a substrate with an idealized cylindrical closed-end pore \hyphen{so that there is no net hydrodynamic flow into the pore, thus neglecting advective transport} with length $L_{\text{n}}$ and radius $R_{\text{n}}$ \hyphen{Fig.~\ref{f.fig1}}. The transport always occurs in the gas phase, since Kelvin condensation in the nanopores is neglected. Adsorption and desorption time scales vary depending upon substrate surface and analyte species, but they typically fall in the range $\approx$\SIrange{e-6}{e-3}{\second}, compared to diffusion times of the order of \SI{e-11}{\second}. Considering the length and time scales above, the justifications for the use of the continuum assumption are not applicable. The requirement that the characteristic system length scale is large when compared to the characteristic molecular length scale is equivalent to the requirement that the Knudsen number, $K_{\text{n}}\equiv \mu\sqrt{RT/M}/(P R_{\text{n}})$, be small. For instance, in PS nanopores, $K_{\text{n}}\approx 5$. Thus, it would appear that the continuum assumption should not be applied and that the governing equation for mass transport should be the more general Boltzmann transport equation. However, it has been demonstrated that a Fickian-like continuum model can still be used to describe mass transport in gases at moderate Knudsen numbers provided the diffusion coefficient is appropriately modified to account for rarefaction effects~\cite{Kottke2009}.
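As a quick numerical illustration of the quoted $K_{\text{n}}\approx 5$, the definition can be evaluated with representative values (the numbers below are our assumptions for an air-like gas at room temperature and atmospheric pressure in a \SI{10}{\nano\meter} pore; the text does not fix them):
\begin{verbatim}
import math

mu  = 1.8e-5     # gas viscosity [Pa s] (assumed, ~air at 300 K)
Rg  = 8.314      # universal gas constant [J/(mol K)]
T   = 300.0      # temperature [K]
M   = 0.029      # mole-average molecular weight [kg/mol] (assumed)
P   = 101325.0   # pressure [Pa]
R_n = 10e-9      # pore radius [m] (assumed, typical PS mesopore)

Kn = mu*math.sqrt(Rg*T/M)/(P*R_n)
print(round(Kn, 1))   # ~5, the order quoted above
\end{verbatim}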
We present the diffusion problems, from simple without adsorption to more complex situations with surface adsorption in the pore walls and at the pore tips. We review different methods of solution depending on the problem and the boundary conditions, such as similarity transformation, Laplace transform, separation of variables, Danckwerts method and the Green's functions technique. We study as well the recovery step, when the diffusion process stops and reached the steady-state, for the different problems.
\section{Diffusion problem at the nanoscale}
A simplified version of the problem is reached by assuming that the difference between the concentration at the pore wall and a cross-sectional-area-weighted average concentration is negligible. At least for short times, the fraction of active sites per unit area filled with the adsorbed species is negligible, and only adsorption \hyphen{no desorption} occurs at an appreciable rate. Thus, removing the radial coordinate, the general dimensionless governing problem reads as follows:
\begin{subequations}\label{eq.1}
\begin{align}
\text{(DE):}\quad & \uppartial_t c^* = \uppartial_{xx} c^* - \beta \alpha^2 c^*,\,\,\,\, 0<x^* <1,\,\, t^* >0, \label{eq.1a}\\
\text{(BC-1):}\quad & c^*(0, t^*) = 1,\,\,\,\, t^* >0, \label{eq.1b}\\
\text{(BC-2):}\quad & \uppartial_x c^*(1, t^*) + \gamma \alpha^2 c^*(1, t^*)=0,\,\,\,\, t^* >0, \label{eq.1c}\\
\text{(IC):}\quad & c^*(x^*, 0) = 0,\,\,\,\, 0<x^* <1, \label{eq.1d}
\end{align}
\end{subequations}
\noindent
where $c^*(x^*, t^*)=c(x,t)/c_0$ is the concentration normalized to the initial $c_0$, $x^* = x/L_{\text{n}}$ is the dimensionless space coordinate, $t^* = t\, D_{\text{Kn}}/L_{\text{n}}^2$ is the dimensionless time, and $D_{\text{Kn}} = R_{\text{n}} \sqrt{RT/M}$ is the Knudsen diffusion coefficient, where $R$ is the universal gas constant, $T$ is the temperature and $M$ is the mole-average molecular weight of the gas mixture. The parameter $\alpha = k_{\text{a}} L_{\text{n}} N/D_{\text{Kn}}$, where $k_{\text{a}}$ is the rate coefficient for adsorption and $N$ is the number of active sites per unit area of surface~\cite{Kottke2009}. The parameters $\beta$ and $\gamma$ serve to describe the general problem and will take values 0 or 1 depending on the different problems and boundary conditions considered.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figure_2.pdf}
\vspace{0.2cm}
\caption{Scheme of a nanoporous thin film surrounded by gas at the top and sitting on a solid substrate. The detail on the right shows an idealized cylindrical geometry used to analyze transient axisymmetric diffusion in a nanopore.\hfill{ }}
\label{f.fig1}
\end{center}
\end{figure}
\section{Recovery process from diffusion}
For sensing purposes, it is interesting to determine the mathematical solution to the recovery problem, once the steady state is reached and the source of diffusion is switched off. The steady state is calculated by taking the limit of the solutions of the diffusion problems discussed before when the time tends to infinity. Switching off the source of diffusion is done by setting the Dirichlet BC-1 \eqref{eq.1b} to zero, giving the general problem for the normalized concentration $u^*(x^*,t^*)$ as follows (a sketch of its generic solution is given right after the equations):
\begin{subequations}\label{eq.62}
\begin{align}
\text{(DE):}\quad & \uppartial_t u^* = \uppartial_{xx} u^*, \label{eq.62a}\\
\text{(BC-1):}\quad & u^*(0, t^*) = 0, \label{eq.62b}\\
\text{(BC-2):}\quad & \uppartial_x u^*(1, t^*) = 0, \label{eq.62c}\\
\text{(IC):}\quad & u^*(x^*, 0) = c^*(x^*,\infty). \label{eq.62d}
\end{align}
\end{subequations}
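For later reference, a sketch of the generic solution of problem \eqref{eq.62} by separation of variables (this expansion is our addition; the specific steady state $c^*(x^*,\infty)$ depends on each problem treated below): the eigenfunctions of \eqref{eq.62a} compatible with \eqref{eq.62b} and \eqref{eq.62c} are $\sin(k_m x^*)$ with $k_m=(m-\tfrac{1}{2})\pi$, so that
\begin{equation}
u^*(x^*,t^*) = \sum_{m=1}^{\infty} B_m\, \sin(k_m x^*)\, e^{-k_m^2 t^*}, \qquad B_m = 2\int_0^1 c^*(x^*,\infty)\, \sin(k_m x^*)\, \text{d}x^*,
\end{equation}
\noindent
where the coefficients $B_m$ encode the initial condition \eqref{eq.62d}.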
\section{Solutions to the diffusion problem without adsorption}
We will tackle different approaches to solve the system \eqref{eq.1} depending on the conditions established. First, we will consider the problem without adsorption, \text{i.e.}, setting $\beta=\gamma=0$ in \eqref{eq.1}, and solve the simplest diffusion equation with Dirichlet and Neumann boundary conditions using three different approaches.
\subsection{Similarity transformation}
Similarity transformations to PDEs are solutions which rely on treating independent variables in groups rather than separately. Setting $\beta=\gamma=0$ in problem~\eqref{eq.1} gives:
\begin{subequations}\label{eq.2}
\begin{align}
\text{(DE):}\quad & \uppartial_t c^* = \uppartial_{xx} c^*, \label{eq.2a}\\
\text{(BC-1):}\quad & c^*(0, t^*) = 1, \label{eq.2b}\\
\text{(BC-2):}\quad & \uppartial_x c^*(1, t^*) = 0, \label{eq.2c}\\
\text{(IC):}\quad & c^*(x^*, 0) = 0, \label{eq.2d}
\end{align}
\end{subequations}
\noindent
We seek a solution of the form\cite{Crank1975}
\begin{equation}\label{eq.3}
c^* = (t^*)^r\, g(\eta^*),\quad \eta^* = \frac{x^*}{\sqrt{t^*}},
\end{equation}
\noindent
where $r$ is a constant chosen so that the BCs can be satisfied. Then, we plug~\eqref{eq.3} into~\eqref{eq.2a}:
\begin{subequations}\label{eq.4}
\begin{align}
\uppartial_t c^* &= (t^*)^{r-1}\, rg - (t^*)^{r-1}\, \frac{\eta^*\, g'}{2}, \label{eq.4a}\\
\uppartial_{xx} c^* &= (t^*)^{r-1}\, g''.
\end{align}
\end{subequations}
\noindent
We set $r=1/2$ to satisfy the BCs in the final transformed ODE:
\begin{subequations}\label{eq.5}
\begin{align}
\text{(DE):}\quad & g'' + \frac{\eta^*}{2} g' - \frac{1}{2}g = 0, \label{eq.5a}\\
\text{(BC-1):}\quad & g(\eta^*\to 0) \to 1, \label{eq.5b}\\
\text{(BC-2):}\quad & g'(\eta^*\to\infty) \to 0. \label{eq.5c}
\end{align}
\end{subequations}
\noindent
Using the transformation $g(\eta^*)=\eta^* f(\eta^*)$ in \eqref{eq.5a} gives the differential equation: $\eta^* f''+[2+(\eta^*)^2/2] f'=0$. Solving for $f$ and inverting the transformation, we get:
\begin{equation}\label{eq.6}
g(\eta^*) = \exp\!{\left(-(\eta^*)^2 /4\right)} - \frac{\sqrt{\pi}}{2}\,\eta^*\, \erfc{\left(\eta^*/2\right)}.
\end{equation}
\noindent
Thus, combining Eqs.~\eqref{eq.3} and \eqref{eq.6}, the solution of \eqref{eq.2} results as follow:
\begin{equation}\label{eq.7}
c^*(x^*,t^*) = \sqrt{t^*}\,\exp\!{\left(-\frac{(x^*)^2}{4t^*}\right)} - \frac{\sqrt{\pi}}{2} x^* \erfc{\left(\frac{x^*}{2 \sqrt{t^*}}\right)}.
\end{equation}
\noindent
Integration of \eqref{eq.7} over $x^*$ yields the time solution of the concentration:
\begin{align}
c^*(t^*) &= \int_0^1 c^*(x^*,t^*)\, \text{d}x^* \label{eq.8}\\
&= \frac{1}{2}\, \sqrt{t^*}\, \exp\!{\left(-\frac{1}{4t^*}\right)} + \frac{\sqrt{\pi}}{4} \left[ (1 + 2t^*)\,\erf{\left(\frac{1}{2 \sqrt{t^*}}\right)} - 1 \right]. \label{eq.9}
\end{align}
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{similarity_solution.pdf}
\caption{Concentration profiles of the similarity solution: (top) Eq.~\eqref{eq.7} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.9}. \hfill{ }}
\label{f.similarity_solution}
\end{center}
\end{figure}
Equation~\eqref{eq.7} shows a decaying profile that strongly depends on $t^*$, while the time profile of Eq.~\eqref{eq.9} keeps building up as the species penetrate towards the end of the pore, without clear saturation \hyphen{Fig.~\ref{f.similarity_solution}}. The lack of saturation arises because the similarity variable treats the pore as semi-infinite, so the no-flux condition at $x^*=1$ is only met asymptotically and a steady state is never reached.
\subsection{Laplace transform}
The method of Laplace transform is widely used in the solution of time-dependent diffusion problems because the partial derivative with respect to the time variable is removed from the differential equation and replaced with a parameter, $s$, in the transformed field. This technique for solving PDEs is thus relatively straightforward; however, the inversion of the transformed solution is generally rather involved unless it is available in standard Laplace transform tables\cite{Haberman2012, Ozisik2012}. In fact, to simplify the inverse Laplace transform in our problem, we swap the limits of the BCs, as if we flipped the thin film in the spatial coordinate. Setting $\beta=\gamma=0$ in \eqref{eq.1}, we get:
\begin{subequations}\label{eq.10}
\begin{align}
\text{(DE):}\quad & \uppartial_t c^* = \uppartial_{xx} c^*, \label{eq.10a}\\
\text{(BC-1):}\quad & \uppartial_x c^*(0, t^*) = 0, \label{eq.10b}\\
\text{(BC-2):}\quad & c^*(1, t^*) = 1, \label{eq.10c}\\
\text{(IC):}\quad & c^*(x^*, 0) = 0, \label{eq.10d}
\end{align}
\end{subequations}
We start by taking the Laplace transform, $\mathcal{L}$, of each term of \eqref{eq.10} as follows:
\begin{subequations}\label{eq.11}
\begin{align}
\mathcal{L}[\uppartial_{xx} c^*] &= \hat{c}''(x^*,s), \label{eq.11a}\\
\mathcal{L}[\uppartial_t c^*] &= s\,\hat{c}(x^*,s) - c^*(x^*,0) = s\,\hat{c}(x^*,s), \label{eq.11b}\\
\mathcal{L}[\uppartial_x c^*(0, t^*)] &= \mathcal{L}[0] = \hat{c}'(0, s) = 0, \label{eq.11c}\\
\mathcal{L}[c^*(1, t^*)] &= \mathcal{L}[1] = \hat{c}(1, s) = \frac{1}{s},\label{eq.11d}
\end{align}
\end{subequations}
\noindent
where $\hat{c}=\hat{c}(x^*,s)$ represents the concentration in the transformed field and $s$ is a parameter, not a variable. Then, the PDE of problem \eqref{eq.10} is transformed into the following ODE:
\begin{subequations}\label{eq.12}
\begin{align}
\text{(DE):}\quad & \hat{c}'' - s \hat{c} = 0, \label{eq.12a}\\
\text{(BC-1):}\quad & \hat{c}'(0, s) = 0, \label{eq.12b}\\
\text{(BC-2):}\quad & \hat{c}(1, s) = \frac{1}{s}, \label{eq.12c}
\end{align}
\end{subequations}
\noindent
The general solution for problem \eqref{eq.12} has the form\cite{Ozisik2012}
\begin{subequations}\label{eq.13}
\begin{align}
\hat{c} &= A_1 \cosh(x^*\sqrt{s}) + A_2 \sinh(x^*\sqrt{s}),\label{eq.13a}\\
\hat{c}' &= A_1 \sqrt{s} \sinh(x^*\sqrt{s}) + A_2 \sqrt{s} \cosh(x^*\sqrt{s}).\label{eq.13b}
\end{align}
\end{subequations}
\noindent
Applying the BCs to the solution \eqref{eq.13} gives $A_1 = [s\cosh(\sqrt{s})]^{-1}$ and $A_2 = 0$. Plugging these constants into \eqref{eq.13a}, we get the solution in the transformed field:
\begin{equation}\label{eq.14}
\hat{c}(x^*,s) = \frac{\cosh(x^*\sqrt{s})}{s\cosh(\sqrt{s})}.
\end{equation}
\noindent
The inverse Laplace transform, $\mathcal{L}^{-1}$, of \eqref{eq.14} is found in tables and gives the following solution to \eqref{eq.10}\cite{Ozisik2012}:
\begin{align}
c^*(x^*,t^*) &= \mathcal{L}^{-1}[\hat{c}(x^*,s)] \label{eq.15} \\
&= 1 + 2 \sum_{n=1}^{\infty} \frac{(-1)^n}{\lambda_n}\cos(\lambda_n x^*) \exp(-\lambda_n^2 t^*),\label{eq.16}
\end{align}
\noindent
where $\lambda_n=(n-1/2)\pi$, $n\in\mathbb{Z}^+$. The time profile \hyphen{obtained as in Eq.~\eqref{eq.8}} is:
\begin{equation}\label{eq.17}
c^*(t^*) = 1 - 2 \sum_{n=1}^{\infty} \frac{1}{\lambda_n^2} \exp(-\lambda_n^2 t^*).
\end{equation}
Solutions obtained with Laplace transform are plotted in Fig.~\ref{f.laplace_solution}.
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{laplace_solution_no_ads.pdf}
\caption{Concentration profiles using the Laplace transform solution: (top) Eq.~\eqref{eq.16} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.17}. Note the inversion of the boundary conditions. \hfill{ }}
\label{f.laplace_solution}
\end{center}
\end{figure}
\subsection{Separation of variables}
The inverse Laplace transform is convenient when the anti-transformation is tabulated; otherwise, it involves further complicated steps depending on the solution in the transformed field. The method of separation of variables is an alternative. To use this method, we split the problem \eqref{eq.2} into an equilibrium steady-state part, $v(x^*)$, plus a non-equilibrium displacement part, $w(x^*,t^*)$\cite{Haberman2012}. By doing this, we send the non-homogeneity in BC-1 to the IC. Thus,
\begin{equation}\label{eq.18}
c^*(x^*,t^*) = v(x^*) + w(x^*,t^*),
\end{equation}
\noindent
The steady-state part problem is:
\begin{equation}\label{eq.19}
\begin{rcases}
\text{(DE):}\quad & v''(x^*) = 0\\%\label{eq.19a}\\
\text{(BC-1):}\quad & v(0) = 1\\%\label{eq.22b}\\
\text{(BC-2):}\quad & v'(1) = 0
\end{rcases}
\,\, v(x^*)=1.
\end{equation}
\noindent
The non-equilibrium problem is built from expression \eqref{eq.18}, $w(x^*,t^*) = c^*(x^*,t^*) - v(x^*)$, with homogeneous BCs as follows:
\begin{subequations}\label{eq.20}
\begin{align}
\text{(DE):}\quad & \uppartial_t w = \uppartial_{xx} w, \label{eq.20a}\\
\text{(BC-1):}\quad & w(0,t^*) = 0, \label{eq.20b}\\
\text{(BC-2):}\quad & \uppartial_x w(1,t^*) = 0, \label{eq.20c}\\
\text{(IC):}\quad & w(x^*,0) = c^*(x^*,0)-v(x^*)=-v(x^*), \label{eq.20d}
\end{align}
\end{subequations}
\noindent
where $c^*(x^*,0)=0$ in the IC results from \eqref{eq.2d}. Replacing $w(x^*,t^*)=p(x^*)\,q(t^*)$ in \eqref{eq.20a} gives:
\begin{equation}\label{eq.21}
\frac{p''}{p} = \frac{q'}{q} = -\lambda^2,
\end{equation}
\noindent
where $\lambda$ is an arbitrary separation constant. The last expression yields two ordinary differential equations, one in $x^*$ and one in $t^*$. The $x^*$-dependent part satisfies the eigenvalue problem with two homogeneous boundary conditions as follows:
\begin{subequations}\label{eq.22}
\begin{align}
\text{(DE):}\quad & p'' + \lambda^2 p = 0, \label{eq.22a}\\
\text{(BC-1):}\quad & p(0) = 0, \label{eq.22b}\\
\text{(BC-2):}\quad & p'(1) = 0. \label{eq.22c}
\end{align}
\end{subequations}
\noindent
The general solution of \eqref{eq.22} is
\begin{subequations}\label{eq.23}
\begin{align}
p(x^*) &= A_1 \cos(\lambda x^*) + A_2 \sin(\lambda x^*),\label{eq.23a}\\
p'(x^*) &= -A_1 \lambda \sin(\lambda x^*) + A_2 \lambda \cos(\lambda x^*).\label{eq.23b}
\end{align}
\end{subequations}
\noindent
Applying BC-1 gives $A_1=0$, while BC-2 requires $A_2\lambda\cos(\lambda)=0$; a nontrivial solution demands $\cos(\lambda)=0$. Since the cosine vanishes at every positive odd half-integer multiple of $\pi$, the eigenvalues are given by
\begin{equation}\label{eq.24}
\lambda_n = \left(n-\frac{1}{2}\right)\pi,\quad n\in\mathbb{Z}^+.
\end{equation}
\noindent
The eigenfunction corresponding to the eigenvalue $\lambda_n$ is
\begin{equation}\label{eq.25}
p_n(x^*) = A_2 \sin(\lambda_n x^*).
\end{equation}
The time-dependent ODE that results from \eqref{eq.21}
\begin{equation}\label{eq.26}
q'(t^*) + \lambda^2 q(t^*) = 0
\end{equation}
\noindent
has the following general solution:
\begin{equation}\label{eq.27}
q_n(t^*) = q(0)\,\exp(-\lambda^2_n t^*).
\end{equation}
\noindent
Therefore, the product solution of \eqref{eq.20} is
\begin{equation}\label{eq.28}
w_n(x^*,t^*) = B_n\,\sin(\lambda_n x^*)\,\exp(-\lambda^2_n t^*),
\end{equation}
\noindent
where $B_n=A_2\,q(0)$. By the principle of superposition, each $w_n$ with $n\in\mathbb{Z}^+$ solves the linear homogeneous problem \eqref{eq.20}, and so does any linear combination of these solutions. Thus, allowing $B_n$ to be different for each solution, we have
\begin{equation}\label{eq.29}
w(x^*,t^*) = \sum_{n=1}^{\infty} B_n \sin(\lambda_n x^*)\,\exp(-\lambda^2_n t^*).
\end{equation}
\noindent
The initial condition is satisfied if
\begin{equation}\label{eq.30}
w(x^*,0) = -v(x^*) = \sum_{n=1}^{\infty} B_n \sin(\lambda_n x^*).
\end{equation}
\noindent
We first multiply both sides of \eqref{eq.30} by $p_m(x^*)$ \hyphen{for a fixed positive integer $m$}, and integrate over $x^*$:
\begin{equation}\label{eq.31}
-\int_0^1 v(x^*) \sin(\lambda_m x^*)\, \text{d}x^*= \sum_{n=1}^{\infty} B_n \int_0^1 \sin(\lambda_n x^*) \sin(\lambda_m x^*)\, \text{d}x^*.
\end{equation}
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{sv_solution_no_ads.pdf}
\caption{Concentration profiles using the separation of variables solution: (top) Eq.~\eqref{eq.35} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.36}. \hfill{ }}
\label{f.sv_solution_no_ads}
\end{center}
\end{figure}
\noindent
The orthogonality property of the sine functions implies that each term of the sum is zero whenever $n \ne m$; thus, the only term that survives on the right-hand side is the one with $n = m$:
\begin{equation}\label{eq.32}
-\int_0^1 v(x^*) \sin(\lambda_n x^*)\, \text{d}x^*= B_n \int_0^1 \sin^2(\lambda_n x^*)\, \text{d}x^*.
\end{equation}
\noindent
Solving for $B_n$ we obtain:
\begin{equation}\label{eq.33}
B_n = -\frac{\int_0^1 v(x^*) \sin(\lambda_n x^*)\, \text{d}x^*}{\int_0^1 \sin^2(\lambda_n x^*)\, \text{d}x^*} = -2 \int_0^1 v(x^*) \sin(\lambda_n x^*)\, \text{d}x^* .
\end{equation}
\noindent
Replacing $v(x^*)=1$, results in
\begin{equation}\label{eq.34}
B_n = - 2 \int_0^1 \sin(\lambda_n x^*)\, \text{d}x^* = -\frac{2}{\lambda_n}.
\end{equation}
\noindent
Combining Eqs.~\eqref{eq.18}, \eqref{eq.29} and \eqref{eq.34}, the final solution to the problem is:
\begin{equation}\label{eq.35}
c^*(x^*,t^*) = 1 - \sum_{n=1}^{\infty} \frac{2}{\lambda_n} \sin(\lambda_n x^*)\exp(-\lambda^2_n t^*)
\end{equation}
\noindent
with the time profile defined as
\begin{equation}\label{eq.36}
c^*(t^*) = 1 - \sum_{n=1}^{\infty} \frac{2}{\lambda_n^2} \exp(-\lambda^2_n t^*),
\end{equation}
\noindent
and the eigenvalues given by \eqref{eq.24}. Plots of \eqref{eq.35} and \eqref{eq.36} are shown in Fig.~\ref{f.sv_solution_no_ads}.
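As a sanity check, Eqs.~\eqref{eq.16} and \eqref{eq.35} solve the same physical problem with the film flipped, so they must agree under $x^*\to 1-x^*$. The following Python sketch (our own truncated-series evaluators, assuming NumPy) verifies this:
\begin{verbatim}
import numpy as np

def eigenvalues(nmax):
    # Eq. (24): lambda_n = (n - 1/2) * pi
    return (np.arange(1, nmax + 1) - 0.5) * np.pi

def c_sv(x, t, nmax=200):
    # Separation-of-variables series, Eq. (35); Dirichlet side at x = 0
    ln = eigenvalues(nmax)
    terms = (2 / ln) * np.sin(np.outer(np.atleast_1d(x), ln)) \
            * np.exp(-ln**2 * t)
    return 1 - terms.sum(axis=1)

def c_laplace(x, t, nmax=200):
    # Laplace-transform series, Eq. (16); Dirichlet side at x = 1
    n = np.arange(1, nmax + 1)
    ln = eigenvalues(nmax)
    terms = (2 * (-1)**n / ln) * np.cos(np.outer(np.atleast_1d(x), ln)) \
            * np.exp(-ln**2 * t)
    return 1 + terms.sum(axis=1)

x, t = np.linspace(0, 1, 6), 0.2
print(np.allclose(c_sv(x, t), c_laplace(1 - x, t)))  # True
\end{verbatim}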
\section{Solutions to the diffusion problem with adsorption}
Now we will consider two approaches to solve the system \eqref{eq.1} with adsorption. Solutions to this problem exist in the literature for $\beta=1$ and $\gamma=0$, which accounts for adsorption on the pores' wall surfaces but not at the bottom of the pore. We review this problem and also present the solution for the Robin \hyphen{mixed} boundary condition, $\gamma=1$, in which adsorption at the pore tip is considered.
\subsection{Adsorption in the wall's surface of the pore}
Setting $\beta=1$ and $\gamma=0$ in \eqref{eq.1} yields a reaction-diffusion problem with Dirichlet and Neumann boundary conditions as follows:
\begin{subequations}\label{eq.37}
\begin{align}
\text{(DE):}\quad & \uppartial_t c^* = \uppartial_{xx} c^* - \alpha^2 c^*, \label{eq.37a}\\
\text{(BC-1):}\quad & c^*(0, t^*) = 1, \label{eq.37b}\\
\text{(BC-2):}\quad & \uppartial_{x} c^*(1, t^*) = 0, \label{eq.37c}\\
\text{(IC):}\quad & c^*(x^*, 0) = 0. \label{eq.37d}
\end{align}
\end{subequations}
\noindent
Solutions to this problem exist in the literature using eigenfunction expansion\cite{Kottke2009}, superposition of solutions\cite{MATSUNAGA2003226}, and the Laplace transform\cite{SELVARAJ2014885}. Here, taking advantage of the constant nature of the BCs, we solve this problem using the Danckwerts method\cite{Crank1975}, which consists in solving the homogeneous DE while keeping the BCs and IC,
\begin{subequations}\label{eq.38}
\begin{align}
\text{(DE):}\quad & \uppartial_t\hat{c} = \uppartial_{xx}\hat{c}, \label{eq.38a}\\
\text{(BC-1):}\quad & \hat{c}(0, t^*) = 1, \label{eq.38b}\\
\text{(BC-2):}\quad & \uppartial_x \hat{c}(1, t^*) = 0, \label{eq.38c}\\
\text{(IC):}\quad & \hat{c}(x^*, 0) = 0. \label{eq.38d}
\end{align}
\end{subequations}
\noindent
and then expressing the solution as follows:
\begin{equation}\label{eq.39}
c^*(x^*, t^*)=\alpha^2 \int_0^{t^*} \hat{c}(x^*,\tau)\, \exp{(-\alpha^2\tau)}\,\text{d}\tau + \hat{c}(x^*,t^*)\, \exp{(-\alpha^2 t^*)}.
\end{equation}
Problem \eqref{eq.38} is the same as \eqref{eq.2}, with solution given by Eq.~\eqref{eq.35}. Thus, plugging this solution into \eqref{eq.39}, we get:
\begin{equation}\label{eq.40}
c^*(x^*, t^*)= 1 - \sum_{n=1}^{\infty} \frac{2}{\lambda_n}\sin(\lambda_n x^*)\left\{\frac{\alpha^2+\lambda_n^2\,\exp{[-(\alpha^2+\lambda_n^2)t^*]}}{(\alpha^2+\lambda_n^2)}\right\},
\end{equation}
\noindent
with $\lambda_n=(n-1/2)\pi$, $n\in\mathbb{Z}^+$. The time profile is given by:
\begin{equation}\label{eq.41}
c^*(t^*)= 1 - \sum_{n=1}^{\infty} \frac{2}{\lambda_n^2}\left\{\frac{\alpha^2+\lambda_n^2\,\exp{[-(\alpha^2+\lambda_n^2)t^*]}}{(\alpha^2+\lambda_n^2)}\right\}.
\end{equation}
Solutions of \eqref{eq.40} are presented in Figs.~\ref{f.Kottke_solution_1} and \ref{f.Kottke_solution_2} for different values of $\alpha^2$. Using Duhamel's theorem\cite{Ozisik2012} gives the same results.
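The Danckwerts construction \eqref{eq.39} can also be verified numerically by feeding the homogeneous solution \eqref{eq.35} through the time integral and comparing with the closed form \eqref{eq.40}. A sketch (assuming NumPy; the $\tau$-integral is approximated by the trapezoidal rule, so agreement is up to truncation error):
\begin{verbatim}
import numpy as np

def c_hat(x, t, nmax=200):
    # Homogeneous solution, Eq. (35)
    ln = (np.arange(1, nmax + 1) - 0.5) * np.pi
    return 1 - np.sum((2 / ln) * np.sin(ln * x) * np.exp(-ln**2 * t))

def c_danckwerts(x, t, a2, ntau=2000):
    # Eq. (39), tau-integral by the trapezoidal rule
    tau = np.linspace(0.0, t, ntau)
    f = np.array([c_hat(x, s) for s in tau]) * np.exp(-a2 * tau)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))
    return a2 * integral + c_hat(x, t) * np.exp(-a2 * t)

def c_closed(x, t, a2, nmax=200):
    # Closed form, Eq. (40)
    ln = (np.arange(1, nmax + 1) - 0.5) * np.pi
    s = (2 / ln) * np.sin(ln * x) \
        * (a2 + ln**2 * np.exp(-(a2 + ln**2) * t)) / (a2 + ln**2)
    return 1 - np.sum(s)

x, t, a2 = 0.5, 0.3, 1.0
print(c_danckwerts(x, t, a2), c_closed(x, t, a2))  # values agree
\end{verbatim}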
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{Kottke_solution_alpha_0_Danckwert.pdf}
\caption{Concentration profiles using the Danckwerts method solution: (top) Eq.~\eqref{eq.40} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.41}. For all cases, $\alpha^2=0.01$. \hfill{ }}
\label{f.Kottke_solution_1}
\end{center}
\end{figure}
\subsection{Adsorption in the wall's surface and tip of the pore}
Setting $\beta=1$ and $\gamma=1$ in \eqref{eq.1} yields the following reaction-diffusion problem with Dirichlet and Robin boundary conditions:
\begin{subequations}\label{eq.50}
\begin{align}
\text{(DE):}\quad & \uppartial_t c^* = \uppartial_{xx} c^* - \alpha^2 c^*, \label{eq.50a}\\
\text{(BC-1):}\quad & c^*(0, t^*) = 1, \label{eq.50b}\\
\text{(BC-2):}\quad & \uppartial_x c^*(1, t^*) +\alpha^2 c^*(1, t^*)=0, \label{eq.50c}\\
\text{(IC):}\quad & c^*(x^*, 0) = 0. \label{eq.50d}
\end{align}
\end{subequations}
\noindent
We define the transformation $c^*=\exp(-\alpha^2 t^*)\,\hat{c}$ and plug it into \eqref{eq.50}, which gives:
\begin{subequations}\label{eq.51}
\begin{align}
\text{(DE):}\quad & \uppartial_t \hat{c} = \uppartial_{xx} \hat{c}, \label{eq.51a}\\
\text{(BC-1):}\quad & \hat{c}(0, t^*) = \exp(\alpha^2 t^*) = f_1(t^*), \label{eq.51b}\\
\text{(BC-2):}\quad & \uppartial_x \hat{c}(1, t^*) +\alpha^2 \hat{c}(1, t^*) = 0 = f_2(t^*), \label{eq.51c}\\
\text{(IC):}\quad & \hat{c}(x^*, 0) = 0 = F(x^*). \label{eq.51d}
\end{align}
\end{subequations}
\noindent
Notice that the transformation not only removes the reaction term, but also replaces the Dirichlet BC-1 with a time-dependent condition, $f_1(t^*)$. For convenience, we write $f_2(t^*)$ for the BC-2 and $F(x^*)$ for the IC, although they are homogeneous. We solve problem~\eqref{eq.51} using the Green's function method\cite{Ozisik2012}, where the solution expressed in terms of the Green's function, $G(x^*,t^*|x^\dagger,\tau)$, is as follows:
\begin{align}
\hat{c}(x^*, t^*) &= \int_0^1 G(x^*,t^*|x^\dagger,0)\,F(x^\dagger)\,(x^\dagger)^m\, \text{d}x^\dagger\nonumber\\
&\,\,\, + \int_0^{t^*} \int_0^1 G(x^*,t^*|x^\dagger,\tau)\,g(x^\dagger,\tau)\,(x^\dagger)^m\,\text{d}x^\dagger\,\text{d}\tau \nonumber\\
&\,\,\, + \sum_{i=1}^N \int_0^{t^*} x_i^m\,G(x^*,t^*|x_i,\tau)\, f_i(\tau)\,\text{d}\tau\label{eq.51-a}
\end{align}
\noindent
where $(x^\dagger)^m$ is the Sturm-Liouville weight function with $m=0$ for a rectangular spatial coordinate, $x_i$ is the $i$th boundary point of the total $N$ prescribed boundary conditions, $g(x^*,t^*)$ is the generation term \hyphen{eliminated here by the transformation}, $F(x^*)$ is the initial condition, and $f_i(t^*)$ are the non-homogeneous boundary condition functions. Notice that for a Dirichlet boundary condition, $G(x^*,t^*|x_i,\tau)$ must be replaced by $\uppartial_{x^\dagger}G(x^*,t^*|x_i,\tau)$.
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{Kottke_solution_alpha_1_Danckwert.pdf}
\caption{Concentration profiles using the Danckwerts method solution: (top) Eq.~\eqref{eq.40} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.41}. For all cases, $\alpha^2=1$. \hfill{ }}
\label{f.Kottke_solution_2}
\end{center}
\end{figure}
To find the Green's function that solves the problem, we first solve the associated homogeneous problem of~\eqref{eq.51}:
\begin{subequations}\label{eq.52}
\begin{align}
\text{(DE):}\quad & \uppartial_t \psi = \uppartial_{xx} \psi, \label{eq.52a}\\
\text{(BC-1):}\quad & \psi(0, t^*) = 0, \label{eq.52b}\\
\text{(BC-2):}\quad & \uppartial_x \psi(1, t^*) +\alpha^2 \psi(1, t^*) = 0, \label{eq.52c}\\
\text{(IC):}\quad & \psi(x^*, 0) = F(x^*). \label{eq.52d}
\end{align}
\end{subequations}
\noindent
Separation of variables, $\psi(x^*,t^*)=p(x^*)q(t^*)$, leads to the related boundary value problem of \eqref{eq.52},
\begin{subequations}\label{eq.53}
\begin{align}
\text{(DE):}\quad & p'' + \lambda^2 p= 0, \label{eq.53a}\\
\text{(BC-1):}\quad & p(0) = 0, \label{eq.53b}\\
\text{(BC-2):}\quad & p'(1) +\alpha^2 p(1) = 0, \label{eq.53c}
\end{align}
\end{subequations}
\noindent
with eigenfunctions $p_n(x^*)=\sin(\lambda_n x^*)$ and corresponding eigenvalues $\lambda_n$ defined as the roots of the transcendental equation $\lambda_n\cot\lambda_n+\alpha^2=0$. As $n$ tends to infinity, $\lambda_n$ approaches $(n-1/2)\pi$; thus, the eigenvalues can be approximated as $\lambda_n = (n-1/2)\pi+\alpha^2$, $n\in\mathbb{Z}^+$ \hyphen{a numerical check of this approximation is sketched after Eq.~\eqref{eq.54}}. The difference between the boundary value problems \eqref{eq.53} and \eqref{eq.22} lies in the eigenvalues, owing to the different BC-2 \hyphen{Robin versus Neumann}. The time-dependent solution is $q_n(t^*)=q(0)\exp(-\lambda_n^2 t^*)$, where the initial condition is obtained using the same procedure as in Eqs.~\eqref{eq.30}--\eqref{eq.33}:
\begin{equation}\label{eq.54}
q(0) = 2 \int_0^1 p_n(x^\dagger)\, F(x^\dagger)\, \mathrm{d}x^\dagger.
\end{equation}
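The quality of the approximation $\lambda_n\approx(n-1/2)\pi+\alpha^2$ can be assessed by computing the exact roots of $\lambda\cot\lambda+\alpha^2=0$ numerically. A sketch (assuming SciPy; we bracket the $n$th root in $((n-1/2)\pi,\,n\pi)$ and use $\lambda\cos\lambda+\alpha^2\sin\lambda$ to avoid the poles of the cotangent):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def robin_eigenvalues(a2, nmax=8):
    # Exact roots of lambda*cos(lambda) + a2*sin(lambda) = 0;
    # the n-th root lies in ((n - 1/2)*pi, n*pi)
    g = lambda lam: lam * np.cos(lam) + a2 * np.sin(lam)
    return np.array([brentq(g, (n - 0.5) * np.pi + 1e-12,
                               n * np.pi - 1e-12)
                     for n in range(1, nmax + 1)])

a2 = 1.0
exact = robin_eigenvalues(a2)
approx = (np.arange(1, 9) - 0.5) * np.pi + a2  # approximation in the text
print(np.column_stack([exact, approx]))
\end{verbatim}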
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{proposed_solution_alpha_0.pdf}
\caption{Concentration profiles using the Green's function solution: (top) Eq.~\eqref{eq.60} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.61}. For all cases, $\alpha^2=0.01$. \hfill{ }}
\label{f.proposed_solution}
\end{center}
\end{figure}
\noindent
The solution of \eqref{eq.52} is
\begin{align}
\psi(x^*,t^*) &= \sum_{n=1}^{\infty} p_n(x^*)\,q_n(t^*)\nonumber\\
&= \sum_{n=1}^{\infty} p_n(x^*)\,q(0)\,\exp(-\lambda_n^2 t^*)\nonumber\\
&= \int_0^1 \left[\sum_{n=1}^{\infty} 2 p_n(x^*) p_n(x^\dagger) \exp(-\lambda_n^2 t^*)\right]\, F(x^\dagger)\, \mathrm{d}x^\dagger.\label{eq.55}
\end{align}
\noindent
The last equation establishes the solution to the associated homogeneous problem. Rearranging Eq.~\eqref{eq.55} into the form
\begin{equation}\label{eq.56}
\psi(x^*,t^*) = \int_0^1 K(x^*,x^\dagger,t^*)\, F(x^\dagger)\, \mathrm{d}x^\dagger.
\end{equation}
\noindent
implies that all the terms in the solution, except the initial condition function, are lumped into a single term $K(x^*,x^\dagger,t^*)$, called the kernel of the integration. Considering the Green's function approach for the solution of the same problem, the Green's function evaluated at $\tau=0$ is equivalent to the kernel of integration: $G(x^*,t^*|x^\dagger,0)\equiv K(x^*,x^\dagger,t^*)$.
Hence, the kernel obtained by rearranging the homogeneous part of the transient problem into the form given by Eq.~\eqref{eq.56} represents the Green's function evaluated at $\tau=0$. The full Green's function, $G(x^*,t^*|x^\dagger,\tau)$, is obtained by replacing $t^*$ with $t^*-\tau$ in $G(x^*,t^*|x^\dagger,0)$\cite{Ozisik2012}. Thus, the full Green's function determined from Eq.~\eqref{eq.55} is given by
\begin{align}
G(x^*,t^*|x^\dagger,\tau) &= \sum_{n=1}^{\infty} 2 p_n(x^*) p_n(x^\dagger) \exp[-\lambda_n^2(t^*-\tau)]\nonumber\\
&= \sum_{n=1}^{\infty} 2 \sin(\lambda_n x^*) \sin(\lambda_n x^\dagger) \exp[-\lambda_n^2(t^*-\tau)].\label{eq.57}
\end{align}
\noindent
Now that we have calculated the Green's function, we proceed to solve problem \eqref{eq.51}, using \eqref{eq.51-a}, with $F(x^*)=0$, $g(x^*,t^*)=0$, and $f_2(t^*)=0$ in the BC-2. Also, since the BC-1 is of the first type, we replace $G$ by $\uppartial_{x^\dagger}G$:
\begin{equation}\label{eq.58}
\uppartial_{x^\dagger}G(x^*,t^*|x_i,\tau) = \sum_{n=1}^{\infty} 2 \lambda_n \sin(\lambda_n x^*) \cos(\lambda_n x^\dagger) \exp[-\lambda_n^2(t^*-\tau)].
\end{equation}
\noindent
Then, Eq.~\eqref{eq.51-a} reduces to the following:
\begin{equation}\label{eq.59}
\hat{c}(x^*,t^*) = \sum_{n=1}^{\infty} 2 \lambda_n \sin(\lambda_n x^*) \left[\frac{\exp(\alpha^2 t^*) - \exp(-\lambda_n^2 t^*)}{(\alpha^2+\lambda_n^2)}\right].
\end{equation}
\noindent
The final solution to problem~\eqref{eq.50} is recovered by taking the anti-transformation, $c^*=\hat{c}\,\exp(-\alpha^2 t^*)$:
\begin{equation}\label{eq.60}
c^*(x^*,t^*) = \sum_{n=1}^{\infty} 2 \lambda_n \sin(\lambda_n x^*) \left\{\frac{1 - \exp[-t^*(\alpha^2+\lambda_n^2)]}{(\alpha^2+\lambda_n^2)}\right\},
\end{equation}
\noindent
with $\lambda_n = (n-1/2)\pi+\alpha^2$, $n\in\mathbb{Z}^+$, and time profile as follows:
\begin{equation}\label{eq.61}
c^*(t^*) = \sum_{n=1}^{\infty} 2 [1-\cos(\lambda_n)] \left\{\frac{1 - \exp[-t^*(\alpha^2+\lambda_n^2)]}{(\alpha^2+\lambda_n^2)}\right\}.
\end{equation}
Figures~\ref{f.proposed_solution} and \ref{f.proposed_solution_2} show \eqref{eq.60} and \eqref{eq.61} for $\alpha^2=0.01$ and $\alpha^2=1$ for different times.
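The series \eqref{eq.60} and \eqref{eq.61} used for these figures are easy to evaluate; the following Python sketch (our own helpers, assuming NumPy) uses the approximate eigenvalues from the text:
\begin{verbatim}
import numpy as np

def robin_lam(a2, nmax):
    # Approximate eigenvalues: lambda_n = (n - 1/2)*pi + alpha^2
    return (np.arange(1, nmax + 1) - 0.5) * np.pi + a2

def c_robin(x, t, a2, nmax=200):
    # Eq. (60)
    ln = robin_lam(a2, nmax)
    s = 2 * ln * np.sin(np.outer(np.atleast_1d(x), ln)) \
        * (1 - np.exp(-t * (a2 + ln**2))) / (a2 + ln**2)
    return s.sum(axis=1)

def c_robin_avg(t, a2, nmax=200):
    # Eq. (61)
    ln = robin_lam(a2, nmax)
    return np.sum(2 * (1 - np.cos(ln))
                  * (1 - np.exp(-t * (a2 + ln**2))) / (a2 + ln**2))

print(c_robin(np.linspace(0, 1, 5), 0.5, 0.01))
print(c_robin_avg(0.5, 0.01))
\end{verbatim}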
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{proposed_solution_alpha_1.pdf}
\caption{Concentration profiles using the Green's function solution: (top) Eq.~\eqref{eq.60} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.61}. For all cases, $\alpha^2=1$. \hfill{ }}
\label{f.proposed_solution_2}
\end{center}
\end{figure}
\section{Solutions to the recovery process}
We will calculate the recovery process for three cases: no adsorption \hyphen{from the separation of variables solution}, adsorption with zero flux, and adsorption with non-zero flux at the end of the pores. Notice that for the similarity solution the steady state is never reached.
\subsection{Recovery step without adsorption}
The solution of the diffusion problem without adsorption is given by Eq.~\eqref{eq.35}, whose steady state is $c^*(x^*,\infty)=1$. Then, the problem to solve is as follows:
\begin{subequations}\label{eq.63}
\begin{align}
\text{(DE):}\quad & \uppartial_t u^* = \uppartial_{xx} u^*, \label{eq.63a}\\
\text{(BC-1):}\quad & u^*(0, t^*) = 0, \label{eq.63b}\\
\text{(BC-2):}\quad & \uppartial_x u^*(1, t^*) = 0, \label{eq.63c}\\
\text{(IC):}\quad & u^*(x^*, 0) = 1. \label{eq.63d}
\end{align}
\end{subequations}
\noindent
The solution of this problem is similar to that of \eqref{eq.20}, except for the sign of the IC. Following the same procedure, we find the solution by separation of variables to be
\begin{equation}\label{eq.64}
u^*(x^*,t^*) = \sum_{n=1}^{\infty} \frac{2}{\lambda_n} \sin(\lambda_n x^*) \exp(-\lambda_n^2 t^*),
\end{equation}
\noindent
with $\lambda_n = (n-1/2)\pi$, $n\in\mathbb{Z}^+$, and time profile
\begin{equation}\label{eq.65}
u^*(t^*) = \sum_{n=1}^{\infty} \frac{2}{\lambda_n^2} \exp(-\lambda_n^2 t^*).
\end{equation}
Figure~\ref{f.recovery_no_ads} shows the recovery solutions for the case without adsorption.
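As a consistency check, at $t^*=0$ the series in Eq.~\eqref{eq.65} must reproduce the uniform initial condition, i.e., $\sum_{n}2/\lambda_n^2=1$. A two-line numerical check (a sketch assuming NumPy; the partial sums converge like $1/n_{\max}$):
\begin{verbatim}
import numpy as np

ln = (np.arange(1, 100001) - 0.5) * np.pi
print(np.sum(2 / ln**2))                         # -> 1.0 up to truncation
print(np.sum(2 / ln**2 * np.exp(-ln**2 * 0.5)))  # decayed signal, t* = 0.5
\end{verbatim}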
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{recovery_no_ads.pdf}
\caption{Concentration profiles in the recovery process without adsorption: (top) Eq.~\eqref{eq.64} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.65}.\hfill{ }}
\label{f.recovery_no_ads}
\end{center}
\end{figure}
\subsection{Recovery step with surface adsorption}
The solution of the diffusion problem is given by Eq.~\eqref{eq.40} and the recovery process by
\begin{subequations}\label{eq.66}
\begin{align}
\text{(DE):}\quad & \uppartial_t u^* = \uppartial_{xx} u^* - \alpha^2 u^*, \label{eq.66a}\\
\text{(BC-1):}\quad & u^*(0, t^*) = 0, \label{eq.66b}\\
\text{(BC-2):}\quad & \uppartial_x u^*(1, t^*) = 0, \label{eq.66c}\\
\text{(IC):}\quad & u^*(x^*, 0) = 1-\sum_{n=1}^\infty \left(\frac{2}{\lambda_n}\right) \left(\frac{\alpha^2}{\alpha^2+\lambda_n^2}\right)\sin(\lambda_n x^*). \label{eq.66d}
\end{align}
\end{subequations}
\noindent
We solve the homogeneous DE using Danckwerts' method \hyphen{note that Eq.~\eqref{eq.39} strictly assumes a zero initial concentration\cite{Crank1975}}; the problem becomes
\begin{subequations}\label{eq.67}
\begin{align}
\text{(DE):}\quad & \uppartial_t \hat{u} = \uppartial_{xx} \hat{u}, \label{eq.67a}\\
\text{(BC-1):}\quad & \hat{u}(0, t^*) = 0, \label{eq.67b}\\
\text{(BC-2):}\quad & \uppartial_x \hat{u}(1, t^*) = 0, \label{eq.67c}\\
\text{(IC):}\quad & \hat{u}(x^*, 0) = 1-\sum_{n=1}^\infty \left(\frac{2}{\lambda_n}\right) \left(\frac{\alpha^2}{\alpha^2+\lambda_n^2}\right)\sin(\lambda_n x^*). \label{eq.67d}
\end{align}
\end{subequations}
\noindent
Separation of variables, $\hat{u}(x^*, t^*)=p(x^*)q(t^*)$, leads to the boundary value problem
\begin{subequations}\label{eq.67-a}
\begin{align}
\text{(DE):}\quad & p'' + \lambda^2 p = 0, \label{eq.67-aa}\\
\text{(BC-1):}\quad & p(0) = 0, \label{eq.67-ba}\\
\text{(BC-2):}\quad & p'(1) = 0, \label{eq.67-ca}
\end{align}
\end{subequations}
\noindent
with the same solution as that of \eqref{eq.22}, $p(x^*)=A_2 \sin(\lambda_n x^*)$, and $\lambda_n = (n-1/2)\pi$, $n\in\mathbb{Z}^+$. Setting $q(t^*)=q(0)\exp(-\lambda_n^2 t^*)$, we get
\begin{equation}\label{eq.68}
\hat{u}^*(x^*,t^*) = \sum_{n=1}^{\infty} B_n \sin(\lambda_n x^*)\exp(-\lambda_n^2 t^*),
\end{equation}
\noindent
where $B_n=A_2 q(0)$. Applying the IC, we calculate $B_n$ as follows:
\begin{align}
\hat{u}^*(x^*,0) &= \sum_{n=1}^{\infty} B_n \sin(\lambda_n x^*)\nonumber\\
\int_0^1 \hat{u}^*(x^*,0) \sin(\lambda_m x^*) \text{d}x^* &= \int_0^1 \sum_{n=1}^{\infty} B_n \sin(\lambda_n x^*) \sin(\lambda_m x^*)\, \text{d}x^*.\label{eq.69}
\end{align}
\noindent
Replacing $\hat{u}^*(x^*,0)$ by IC~\eqref{eq.67d} and using the orthogonality property of sines in Eq.~\eqref{eq.69}, we find
\begin{equation}\label{eq.70}
B_n = \frac{2\lambda_n}{\alpha^2+\lambda_n^2}.
\end{equation}
Then,
\begin{equation}\label{eq.71}
\hat{u}^*(x^*,t^*) = \sum_{n=1}^{\infty} \left(\frac{2\lambda_n}{\alpha^2+\lambda_n^2}\right) \sin(\lambda_n x^*) \exp(-\lambda_n^2 t^*).
\end{equation}
\noindent
Combining the last expression with \eqref{eq.39}, we find
\begin{equation}\label{eq.72}
u^*(x^*,t^*) = \sum_{n=1}^{\infty} 2 \sin(\lambda_n x^*) \left\{\frac{\lambda_n\left[\alpha^2+\lambda_n^2\,\exp[-t^*(\alpha^2+\lambda_n^2)]\right]}{(\alpha^2+\lambda_n^2)^2}\right\},
\end{equation}
\noindent
with $\lambda_n = (n-1/2)\pi$, $n\in\mathbb{Z}^+$. The time profile is given by
\begin{equation}\label{eq.73}
u^*(t^*) = \sum_{n=1}^{\infty} 2 \left\{\frac{\alpha^2+\lambda_n^2\,\exp[-t^*(\alpha^2+\lambda_n^2)]}{(\alpha^2+\lambda_n^2)^2}\right\}.
\end{equation}
Figures~\ref{f.recovery_kottke} and \ref{f.recovery_kottke_2} show the recovery solutions considering surface adsorption with zero flux at the end of the pores for two values of $\alpha$.
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{recovery_Kottke_alpha_0_Danckwert.pdf}
\caption{Concentration profiles using the separation of variables solution: (top) Eq.~\eqref{eq.72} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.73}. For all cases, $\alpha^2=0.01$. \hfill{ }}
\label{f.recovery_kottke}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{recovery_Kottke_alpha_1_Danckwert.pdf}
\caption{Concentration profiles using the separation of variables solution: (top) Eq.~\eqref{eq.72} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.73}. For all cases, $\alpha^2=1$. \hfill{ }}
\label{f.recovery_kottke_2}
\end{center}
\end{figure}
\subsection{Recovery step with surface and pore-end adsorption}
The problem is defined as follows:
\begin{subequations}\label{eq.74}
\begin{align}
\text{(DE):}\quad & \uppartial_t u^* = \uppartial_{xx} u^* - \alpha^2 u^*, \label{eq.74a}\\
\text{(BC-1):}\quad & u^*(0, t^*) = 0, \label{eq.74b}\\
\text{(BC-2):}\quad & \uppartial_x u^*(1, t^*) + \alpha^2 u^*(1, t^*) = 0, \label{eq.74c}\\
\text{(IC):}\quad & u^*(x^*, 0) = \sum_{n=1}^\infty \left(\frac{2\lambda_n}{\alpha^2+\lambda_n^2}\right)\sin(\lambda_n x^*). \label{eq.74d}
\end{align}
\end{subequations}
\noindent
where the IC is given by the steady state of Eq.~\eqref{eq.60}. Using the transformation $u^*=\hat{u}\,\exp(-\alpha^2 t^*)$, we have
\begin{subequations}\label{eq.75}
\begin{align}
\text{(DE):}\quad & \uppartial_t \hat{u} = \uppartial_{xx} \hat{u}, \label{eq.75a}\\
\text{(BC-1):}\quad & \hat{u}(0, t^*) = 0, \label{eq.75b}\\
\text{(BC-2):}\quad & \uppartial_x \hat{u}(1, t^*) + \alpha^2 \hat{u}(1, t^*) = 0, \label{eq.75c}\\
\text{(IC):}\quad & \hat{u}(x^*, 0) = \sum_{n=1}^\infty \left(\frac{2\lambda_n}{\alpha^2+\lambda_n^2}\right)\sin(\lambda_n x^*). \label{eq.75d}
\end{align}
\end{subequations}
\noindent
Separation of variables leads to the following boundary value problem:
\begin{subequations}\label{eq.76}
\begin{align}
\text{(DE):}\quad & p''+\lambda^2 p=0, \label{eq.76a}\\
\text{(BC-1):}\quad & p(0) = 0, \label{eq.76b}\\
\text{(BC-2):}\quad & p'(1) + \alpha^2 p(1) = 0, \label{eq.76c}
\end{align}
\end{subequations}
\noindent
Problem \eqref{eq.76} is the same as \eqref{eq.53}, with $p_n(x^*)=\sin(\lambda_n x^*)$ and $\lambda_n = (n-1/2)\pi+\alpha^2$, $n\in\mathbb{Z}^+$. We consider a time solution of the form $q(t^*)=q(0)\,\exp(-\lambda_n^2 t^*)$, with
\begin{equation}\label{eq.77}
q(0) = \frac{2\lambda_n}{\alpha^2+\lambda_n^2}
\end{equation}
\noindent
calculated as above. Then, the solution to \eqref{eq.75} is
\begin{equation}\label{eq.78}
\hat{u}(x^*, t^*) = \sum_{n=1}^{\infty} \left(\frac{2\lambda_n}{\alpha^2+\lambda_n^2}\right) \sin(\lambda_n x^*) \exp(-\lambda_n^2 t^*).
\end{equation}
\noindent
Using the anti-transformation, $u^*=\hat{u}\,\exp(-\alpha^2 t^*)$, we arrive at the solution of \eqref{eq.74}:
\begin{equation}\label{eq.79}
u^*(x^*, t^*) = \sum_{n=1}^{\infty} \left(\frac{2\lambda_n}{\alpha^2+\lambda_n^2}\right) \sin(\lambda_n x^*) \exp[- t^*(\alpha^2+\lambda_n^2)],
\end{equation}
\noindent
with time profile
\begin{equation}\label{eq.80}
u^*(t^*) = \sum_{n=1}^{\infty} 2 \left[\frac{1-\cos(\lambda_n)}{\alpha^2+\lambda_n^2}\right] \exp[- t^*(\alpha^2+\lambda_n^2)].
\end{equation}
Figures~\ref{f.recovery_proposed} and \ref{f.recovery_proposed_2} show the recovery solutions considering surface and pore-end adsorption \hyphen{non-zero flux at the end of the pores} for two values of $\alpha$.
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{recovery_proposed_alpha_0.pdf}
\caption{Concentration profiles using the separation of variables solution: (top) Eq.~\eqref{eq.79} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.80}. For all cases, $\alpha^2=0.01$. \hfill{ }}
\label{f.recovery_proposed}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\hspace{-0.1cm}\includegraphics[scale=0.425]{recovery_proposed_alpha_1.pdf}
\caption{Concentration profiles using the separation of variables solution: (top) Eq.~\eqref{eq.79} with legends indicating different $t^*$; (bottom) Eq.~\eqref{eq.80}. For all cases, $\alpha^2=1$. \hfill{ }}
\label{f.recovery_proposed_2}
\end{center}
\end{figure}
\bibliographystyle{unsrt}
\section{Introduction}
D-finiteness and operations preserving D-finiteness play an important role
in computer algebra. A function is called D-finite if it is annihilated by
a nonzero linear operator with polynomial coefficients. Annihilating operators
are used to represent D-finite functions, and to perform operations with them. It
is therefore of interest to know how big such operators are. The primary
interest is in the order of the operator. An operator of minimal order is
a generator of the ideal of all annihilating operators for a given D-finite function, and certain
facts about the function, for example its asymptotic behaviour, can be
most easily extracted from such an operator. A disadvantage of a minimal
order operator is that it may not be the smallest operator in terms of arithmetic size. The secondary
interest is in the degrees of the polynomial coefficients of the operator.
The arithmetic size is roughly the product of order and degree, and the
minimal arithmetic size is often not achieved by the operator of minimal
order. Instead, there is the possibility of trading order against degree:
allowing a slightly higher order of the
operator can lead to a substantial degree drop which altogether results in
an operator of smaller arithmetic size. This effect has been observed and
analyzed for many different operations related to linear operators
\cite{BCLSS2007,ChKa2012a,ChKa2012b,CJKS2013,Kaue2014}.
Typical bounds are formulated in terms of hyperbolas such that for every
point $(r,d)$ above the hyperbola, the function at hand has an
annihilating operator of order $r$ and degree~$d$. These hyperbolas are
known as order-degree curves.
Besides order and degree, the size of an operator depends on a third parameter,
called the height. It measures the size of the coefficients of the polynomial
coefficients of the operator. For example, for differential operators
over polynomials over the integers, it matters how long the integers
appearing in the operator are. If we define the height as the log of the largest integer
(in absolute value), the bitsize of an operator is roughly the product of its
order, its degree, and its height. Besides some bounds for
hypergeometric creative telescoping~\cite{KaYe2015} and D-finite closure properties~\cite{Kaue2014},
not much is known about the bitsize of operators that arise from computations with D-finite functions. Experience shows, however, that the point $(r,d)$ on the order-degree curve which
minimizes the arithmetic size does not also minimize the bitsize. But then,
which point on the curve does minimize the bitsize? Does the point of minimal
bitsize even sit on the curve, or can it happen that a further reduction of
the bitsize is possible by increasing both the order and the degree? These were
the motivating questions for the present paper.
Our goal is to extend the theory of order-degree curves to order-degree-height
surfaces. Instead of hyperbolas that describe the points $(r,d)$ such that a
given function has annihilating operators of order~$r$ and degree~$d$, we will
see surfaces defined by polynomials describing the points $(r,d,h)$
such that the function has annihilating operators of order~$r$, degree~$d$, and height~$h$.
Unfortunately, we do not have any such results for operators in $\set Z[x][\partial]$, with
the height defined by the longest integer. In order to be able to apply the
techniques from linear algebra that were used for deriving order-degree curves,
we instead consider operators in $C[y][x][\partial]$ where $C$ is a field, with
the height defined by the degree in~$y$. Our results in this setting suggest
that order and degree can indeed be traded against height, and experiments with
actual operators confirm this qualitative prediction, even though our bounds
quantitatively are not very tight. Some of our bounds are particularly bad for
large orders~$r$. While for a fixed degree and $r\to\infty$, we would usually
expect a bound on the height that converges to a constant, two of our bounds
grow quadratically in~$r$. This may be an indication that the linear algebra
approach which was used for deriving tight order-degree curves is not sufficient
for deriving sharp order-degree-height surfaces.
The remainder of the paper proceeds as follows. In Section~\ref{SEC:prelim},
we provide background and basic notions required in the paper. We then derive
order-degree-height surfaces for three different situations:
(a)~in Section~\ref{SEC:clm} for common left multiples of some given operators, extending the work of~\cite{Kaue2014},
(b)~in Section~\ref{SEC:ct} for hypergeometric creative telescoping, extending the work of~\cite{ChKa2012a}, and
(c)~in Section~\ref{SEC:desing} for contraction ideals of operators, extending the work of~\cite{CJKS2013}.
In all these cases, we derive polynomials $p$ such that for all $r,d,h$ with $p(r,d,h)\geq0$ there
exists an operator of order~$r$, degree~$d$, and height~$h$ with the desired feature.
The boundary of the region defined by this polynomial inequality is what we call the order-degree-height
surface.
\section{Notation and General Assumptions}\label{SEC:prelim}
The notation $R[\partial]$ already used in the introduction refers to an Ore
algebra over the commutative ring~$R$. Recall that such an Ore algebra is
defined in terms of an endomorphism $\sigma\colon R\to R$ and a $\sigma$-derivation
$\delta\colon R\to R$, i.e., a map satisfying $\delta(a+b)=\delta(a)+\delta(b)$
and $\delta(ab)=\delta(a)b+\sigma(a)\delta(b)$ for all $a,b\in R$.
The addition in $R[\partial]$ is defined as for polynomials, and the
multiplication in $R[\partial]$ extends the multiplication on $R$ through
the commutation rule $\partial a=\sigma(a)\partial+\delta(a)$ ($a\in R$).
An element $a$ of $R$ is called a constant if $\sigma(a)=a$ and $\delta(a)=0$.
Canonical examples for Ore algebras include differential operators, where
$\sigma=\operatorname{id}$ and $\delta$ is a derivation on~$R$, and difference operators,
where $\sigma$ is an automorphism on $R$ and $\delta=0$. Throughout this
paper, $R$ will be a ring of polynomials or rational functions in two variables
$x,y$ over a field~$C$ of characteristic zero, e.g. $R=C[x,y]$ or $R=C(x,y)$ or $R=C(x)[y]$, etc.
We assume throughout that the elements of $C[y]$ (and thus $C(y)$) are constants. We further
assume that $\sigma$ is an automorphism of $R$,
$\sigma$ and $\delta$ map polynomials to polynomials, and that
they do not increase degrees. More precisely, we assume
\begin{alignat*}1
&\deg_x(\sigma(f))=\deg_x(f),\quad\deg_x(\delta(f))\leq\deg_x(f),\\
&\deg_y(\sigma(f))=\deg_y(f),\quad\deg_y(\delta(f))\leq\deg_y(f)
\end{alignat*}
for all $f\in C[x,y]$.
This is true for example for differential operators with the derivation $\delta=\frac{d}{dx}$
and for difference operators with the shift $\sigma(x)=x+1$.
The order of an element $L\in R[\partial]$ is defined as $\deg_\partial(L)$.
The height of an element $p\in C[x,y]$ is defined as $\deg_y(p)$, and
$\deg_x(p)$ is called its degree. The height/degree of a rational
function $p/q$ is defined as $\deg(p)-\deg(q)$ (for $\deg=\deg_y$ and $\deg=\deg_x$,
respectively). If $\sigma=\operatorname{id}$ and $\delta=\frac{d}{dx}$, we write $D_x$ instead of~$\partial$,
and if $\sigma(x)=x+1$ and $\delta=0$, we write $S_x$ instead of~$\partial$.
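For concreteness, the commutation rule $\partial a=\sigma(a)\partial+\delta(a)$ can be implemented directly. The following Python sketch (using SymPy; our own toy encoding of operators as coefficient lists, not an established package) multiplies two elements of $C(x)[D_x]$ in the case $\sigma=\operatorname{id}$, $\delta=\frac{d}{dx}$, via the Leibniz rule $D_x^i\,f=\sum_j\binom{i}{j}f^{(j)}D_x^{i-j}$:
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x')

def mul_diff_ops(A, B):
    # A, B are coefficient lists [c_0, ..., c_r] encoding c_0 + c_1*D + ...
    # Product in C(x)[D] with sigma = id and delta = d/dx.
    res = [sp.S(0)] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for k, b in enumerate(B):
            for j in range(i + 1):
                res[i - j + k] += a * sp.binomial(i, j) * sp.diff(b, x, j)
    return [sp.simplify(c) for c in res]

# Example: D * x = x*D + 1
print(mul_diff_ops([sp.S(0), sp.S(1)], [x]))   # -> [1, x]
\end{verbatim}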
Elements of an Ore algebra $R[\partial]$ can be interpreted as operators.
A function space $F$ for $R[\partial]$ is defined as a left $R[\partial]$-module.
For a fixed $f\in F$, the set of all $L\in R[\partial]$ with $L\cdot f=0$ forms
a left ideal of $R[\partial]$, called the annihilator of~$f$. As we will only
consider left ideals in this paper, we will drop the attribute `left' from
now on. If $R$ is a field, then $R[\partial]$ is a left-Euclidean domain, so
every ideal is generated by a single element. In this case, any nonzero
ideal element of minimal order can serve as a generator. We call such elements
minimal. They are pairwise associate, i.e., any two minimal elements of an
ideal are left-$R$-multiples of each other.
\section{Common Left Multiples}\label{SEC:clm}
A common left multiple of some operators $L_1,\dots,L_n\in R[\partial]$ is
an operator $L\in R[\partial]$ such that there exist operators $M_1,\dots,M_n\in R[\partial]$
with $L = M_1L_1=\cdots=M_nL_n$.
If $L_1,\dots,L_n$ are annihilating operators of certain functions $f_1,\dots,f_n$,
then $L$ is an annihilator for every $C$-linear combination $\alpha_1f_1+\cdots+\alpha_nf_n$
of these functions.
Bounds on orders and degrees of common left multiples were given in~\cite{BCLS2012}.
An order-degree curve for this case has appeared in~\cite{Kaue2014}.
Continuing the discussion in Section~2.2 of~\cite{Kaue2014},
we provide an order-degree-height surface describing the shapes of common
left multiples of $L_1,\dots,L_n$.
\begin{thm}\label{thm:lclm}
Let $L_1,\dots,L_n\in C[x,y][\partial]$ and let
$r_\ell=\deg_{\partial}(L_\ell)$,
$d_\ell=\deg_x(L_\ell)$,
$h_\ell=\deg_y(L_\ell)$
for $\ell=1,\dots,n$.
Let $r,d,h\in \set N$ be such that
\begin{alignat*}1
&(r+1)(d+1)(h+1)\\
&- (r+1)(d+1)\sum_{\ell=1}^n h_\ell \ + \ (h+1)\sum_{\ell=1}^n r_\ell d_\ell\\
&- (r+1)(h+1)\sum_{\ell=1}^n d_\ell \ + \ (d+1)\sum_{\ell=1}^n r_\ell h_\ell\\
&- (d+1)(h+1)\sum_{\ell=1}^n r_\ell \ + \ (r+1)\sum_{\ell=1}^n d_\ell h_\ell
- \sum_{\ell=1}^n r_\ell d_\ell h_\ell > 0.
\end{alignat*}
Then there exists a common left multiple of $L_1,\dots,L_n$ in $C[x,y][\partial]$
of order at most~$r$, degree at most~$d$, and height at most~$h$.
\end{thm}
\begin{proof}
Make an ansatz for $n$ operators
\[
M_\ell = \sum_{i=0}^{r-r_\ell}\sum_{j=0}^{d-d_\ell}\sum_{k=0}^{h-h_\ell} m_{i,j,k,\ell} y^kx^j\partial^i
\]
with undetermined coefficients $m_{i,j,k,\ell}$ and equate the coefficients of $M_\ell L_\ell$
for $\ell=1,\dots,n$ to each other in order to obtain a common left multiple of the desired shape.
This leads to
\[
\sum_{\ell=1}^n (r-r_\ell+1)(d-d_\ell+1)(h-h_\ell+1)
\]
variables and $(n-1)(r+1)(d+1)(h+1)$ equations.
If $r,d,h$ are as in the statement of the theorem, the linear system has more variables than
equations and therefore a nonzero solution.
\end{proof}
In the proof of the theorem, not only the input and output operators are restricted to $C[x,y][\partial]$,
but also the multipliers $M_1,\dots,M_n$. In general, allowing the $M_1,\dots,M_n$ to be in $C(x,y)[\partial]$
can lead to common multiples of lower degree or height.
\begin{example}
We consider two randomly chosen operators $L_1,L_2\in\set Q[x,y][D_x]$ with
$\deg_{D_x}(L_1)=2$, $\deg_{D_x}(L_2)=1$, $\deg_{x}(L_1)=1$,
$\deg_{x}(L_2)=2$, $\deg_{y}(L_1)=1$, $\deg_{y}(L_2)=1$.
In the following table, we compare the actual sizes of common left multiples of $L_1,L_2$
with the sizes predicted by Thm.~\ref{thm:lclm}.
A table entry $u|v$ in column~$r$ and row~$d$
means that Thm.~\ref{thm:lclm} predicts a common multiple of order~$r$, degree~$d$ and any height $h\geq u$,
and for the specific operators $L_1,L_2$ we found a common multiple of order~$r$, degree~$d$, and height $h=v$.
The common multiples were computed in $C(x,y)[D_x]$.
A dot means that no operator of the respective order and degree was found or predicted.
In this experiment, Thm.~\ref{thm:lclm} predicts orders and degrees correctly but the bound on the height overshoots.
At the same time, it does at least qualitatively reflect the effect that increasing both $r$ and $d$
makes it possible to decrease~$h$.
We have repeated the experiment with operators $L_1,L_2$ of some other shapes and always found a similar conclusion.
Note that in this particular example, Thm.~\ref{thm:lclm} also rightly predicts the symmetry w.r.t.\ $r$ and~$d$.
\[\scriptscriptstyle
\begin{array}{@{}c|c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{}}
d \\
10& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {15|5} & {6|4} & {4|3} & {3|3} & {3|3} & {3|3} & {3|2} & {3|2} \\
9& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {21|5} & {6|5} & {4|4} & {4|3} & {3|3} & {3|3} & {3|3} & {3|2} \\
8& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {37|5} & {7|5} & {5|4} & {4|3} & {3|3} & {3|3} & {3|3} & {3|3} \\
7& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {9|6} & {5|4} & {4|3} & {4|3} & {3|3} & {3|3} & {3|3} \\
6& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {12|8} & {7|5} & {5|4} & {4|3} & {4|3} & {4|3} & {3|3} \\
5& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {31|19} & {10|7} & {7|5} & {5|4} & {5|4} & {4|4} & {4|3} \\
4& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {31|19} & {12|8} & {9|6} & {7|5} & {6|5} & {6|4} \\
3& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {37|5} & {21|5} & {15|5} \\
2& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} \\
1& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} \\
0& {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} & {\cdot|\cdot} \\\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & r
\end{array}
\]
\end{example}
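The inequality of Thm.~\ref{thm:lclm} is mechanical to evaluate. The following Python sketch (our own helper, written for the counting argument in the proof) returns the smallest predicted height for given $r$ and $d$; for the shapes of $L_1,L_2$ above it reproduces, e.g., the predicted entry $12$ at $(r,d)=(4,6)$ in the table:
\begin{verbatim}
def lclm_predicted(r, d, h, ops):
    # ops is a list of triples (r_l, d_l, h_l); compares the number of
    # unknowns in the ansatz with the number of linear equations
    if any(r < rl or d < dl or h < hl for (rl, dl, hl) in ops):
        return False
    n = len(ops)
    unknowns = sum((r - rl + 1) * (d - dl + 1) * (h - hl + 1)
                   for (rl, dl, hl) in ops)
    equations = (n - 1) * (r + 1) * (d + 1) * (h + 1)
    return unknowns > equations

def min_height(r, d, ops, hmax=200):
    return next((h for h in range(hmax + 1)
                 if lclm_predicted(r, d, h, ops)), None)

ops = [(2, 1, 1), (1, 2, 1)]   # shapes of L1, L2 from the example
print(min_height(4, 6, ops))   # -> 12
\end{verbatim}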
\section{Creative telescoping}\label{SEC:ct}
In this section, we consider bivariate proper hypergeometric terms
and aim at describing the relationships between orders, degrees and heights
for their telescopers. Bounds on these three quantities have been given in \cite{MoZe2005,ChKa2012a,KaYe2015}.
An order-degree curve for this case was also provided in \cite{ChKa2012a}.
We will closely follow the discussion in~\cite{ChKa2012a} and further extend the curve
to an order-degree-height surface.
In this respect, we need to generalize all related notions introduced
previously to the bivariate case. Let $k$ be a new variable distinct
from the two variables $x,y$ and let $C(x,y,k)$ be the field of
rational functions in $x,y,k$. Let $S_x$ and $S_k$ denote the
difference operators on $C(x,y,k)$ defined by
\[
S_x(f(x,y,k)) = f(x+1,y,k)
\ \ \text{and}\ \
S_k(f(x,y,k)) = f(x,y,k+1)
\]
for any rational function $f\in C(x,y,k)$. Clearly, $S_x$ and $S_k$
commute with each other and leave elements in $C(y)$ fixed.
We will be considering some extension field $E$ of $C(x,y,k)$ on which
difference operators $S_x$ and $S_k$ can be naturally extended. A
{\em hypergeometric term} with respect to $x,k$ is a nonzero element $H$ of
such an extension field $E$ with $S_x(H)/H \in C(x,y,k)$ and
$S_k(H)/H\in C(x,y,k)$.
We are particularly interested in proper hypergeometric terms, that is,
hypergeometric terms which can be rewritten in the following form
\begin{equation}\label{EQ:hyper}
H = p \alpha^x \beta^k \prod_{m=1}^M
\frac{\Gamma(a_mx+a_m'k+a_m'')\Gamma(b_mx-b_m'k+b_m'')}
{\Gamma(u_mx+u_m'k+u_m'')\Gamma(v_mx-v_m'k+v_m'')},
\end{equation}
where $p\in C[x,y,k]\setminus\{0\}$, $\alpha,\beta\in C[y]\setminus\{0\}$, $M\in \set N$,
$a_m,a_m'$, $b_m,b_m'$, $u_m,u_m'$, $v_m,v_m'\in \set N$,
$a_m'',b_m'',u_m'',v_m''\in C[y]$.
Let $H$ be a proper hypergeometric term of the form \eqref{EQ:hyper}.
The method of creative telescoping consists in finding polynomials
$c_0,c_1,\dots,c_r\in C[x,y]$, not all zero, and a rational function
$g\in C(x,y,k)$ such that
\[
c_0 H + c_1 S_x(H)+\cdots+c_rS_x^r(H) = S_k(gH) - gH.
\]
If $c_0,c_1,\dots,c_r$ and $g$ are as above, then we say that the
operator $L = c_0+c_1S_x+\cdots+c_rS_x^r\in C[x,y][S_x]$ is a
{\em telescoper} for $H$ and $g$ is a {\em certificate} for $L$ (and
$H$). According to the fundamental lemma in \cite{WiZe1992a}, the term
$H$ always admits a telescoper. All telescopers for $H$ form an
ideal in $C[x,y][S_x]$.
The main task of this section is to derive a surface in the
$(r,d,h)$-space which indicates that for every integer point $(r,d,h)$
above the surface, there is a telescoper for $H$ of order $r$, degree $d$ and height~$h$.
Following \cite{ChKa2012a}, we make a case
distinction between non-rational input and rational input.
\subsection{The non-rational case}
In this subsection, we assume that the following holds.
\begin{convention}\label{CON:nonrat}
Let $H$ be a proper hypergeometric term of the form \eqref{EQ:hyper},
and assume that $H$ cannot be split into $H = fH_0$ for $f\in
C(x,y,k)$ and another proper hypergeometric term $H_0$ with
$S_k(H_0)/H_0 = 1$.
\end{convention}
Informally speaking, the above assumption excludes those terms of the
form \eqref{EQ:hyper} in which $\beta = 1$ and every $\Gamma$-term
involving $k$ can be cancelled against another one up to some rational
function. Those exceptional terms are treated separately in
Section~\ref{SUBSEC:rat} below.
With Convention~\ref{CON:nonrat}, for any rational function $g\in
C(x,y,k)\setminus\{0\}$, we know that $gH$ cannot be split in the indicated way
either. In particular, $S_k(gH)/(gH)\neq 1$, that is, $S_k(gH) -
gH \neq 0$. As a consequence, whenever the pair $(L,g)$ with $L\in C[x,y][S_x]$
and $g\in C(x,y,k)\setminus\{0\}$ is such that $L(H) = S_k(gH) - gH$, we can
guarantee that $L$ is not the zero operator. Thus, in the sequel of
this subsection, we need not worry about this requirement any more.
The main idea we use for the analysis is the same as in \cite{MoZe2005,ChKa2012a}: we literally go through the steps of Zeilberger's algorithm \cite{Zeil1991} applied to the given proper hypergeometric term $H$, and then reduce the problem of computing a telescoper to solving a homogeneous system of linear equations over~$C$, which has a nontrivial solution whenever it is underdetermined.
With Convention~\ref{CON:nonrat}, for any integer $m$ with $1\leq
m\leq M$, define
\begin{align*}
&A_m = a_mx+a_m'k+a_m'',\quad
B_m = b_mx-b_m'k+b_m'',\\
&U_m = u_mx+u_m'k+u_m'',\quad
V_m = v_mx-v_m'k+v_m''.
\end{align*}
For $p\in C[x,y,k]$ and $m\in \set N$, let $p^{\overline{m}} =
p(p+1)\dots(p+m-1)$ with the conventions $p^{\overline{0}}=1$ and
$p^{\overline{1}}=p$.
For an operator $L = c_0 + c_1 S_x + \cdots + c_r S_x^r\in
C[x,y][S_x]$, we then have
\begin{align*}
&L(H) = \sum_{i=0}^rc_i \alpha^i\frac{S_x^i(p)}p\prod_{m=1}^M
\frac{A_m^{\overline{ia_m}}B_m^{\overline{ib_m}}}
{U_m^{\overline{iu_m}}V_m^{\overline{iv_m}}}H
\\[1ex]
&= \left(\sum_{i=0}^rc_i\alpha^iS_x^i(p)\prod_{m=1}^MP_{i,m}\right)
\alpha^x\beta^k\prod_{m=1}^M
\frac{\Gamma(A_m)\Gamma(B_m)}{\Gamma(U_m+ru_m)\Gamma(V_m+rv_m)},
\end{align*}
where
\[
P_{i,m}=A_m^{\overline{ia_m}}B_m^{\overline{ib_m}}
(U_m+iu_m)^{\overline{(r-i)u_m}}(V_m+iv_m)^{\overline{(r-i)v_m}}.
\]
We can write
\begin{equation}\label{EQ:gosperform}
\frac{S_k(L(H))}{L(H)} = \frac{S_k(P)}P\frac{Q}{S_k(R)},
\end{equation}
where
\begin{align}
P &= \sum_{i=0}^rc_i\alpha^iS_x^i(p)\prod_{m=1}^MP_{i,m},
\label{EQ:P}\\[1ex]
Q &= \beta\prod_{m=1}^MA_m^{\overline{a_m'}}(V_m+rv_m-v_m')^{\overline{v_m'}},
\label{EQ:Q}\\[1ex]
R &= \prod_{m=1}^M(U_m+ru_m-u_m')^{\overline{u_m'}}B_m^{\overline{b_m'}}.
\label{EQ:R}
\end{align}
A remarkable feature of this decomposition is that the undetermined coefficients $c_0,\dots,c_r$ appear only linearly in the polynomial $P$, and do not appear at all in the polynomials $Q,R$.
Depending on the actual values of the parameters appearing in $H$,
the decomposition \eqref{EQ:gosperform} may or may not satisfy the
condition $\gcd(Q,S_k^i(R))=1$ for all $i\in \set N$, and thus the
corresponding equation
\begin{equation}\label{EQ:gosper}
P = QS_k(Y) - R Y
\end{equation}
is not necessarily the true Gosper equation.
However, even in this case, it only means that some solutions may be overlooked,
and any nontrivial solution $Y\in C(x,y)[k]$ to this equation will still give rise to a
correct telescoper for $H$ and a certificate. Since our main interest lies in bounding
the size of telescopers, it is sufficient to investigate under which circumstances the equation
\eqref{EQ:gosper} with the above choices of $P,Q,R$ has a solution.
We proceed by performing a coefficient comparison between both sides
of \eqref{EQ:gosper} with respect to powers of $x,y$ and $k$, rather
than merely with respect to powers of $k$ as in~\cite{MoZe2005} or powers
of $x,k$ as in~\cite{ChKa2012a}, yielding a linear system over $C$. In
order to better express the number of variables and equations in this
system, we make the following notational convention.
\begin{convention}\label{CON:notation}
With Convention~\ref{CON:nonrat}, let
\begin{align*}
&\vartheta_x = \deg_x(p),\quad \vartheta_y = \deg_y(p),\quad \vartheta_k = \deg_k(p),\\
&\mu = \max\Big\{\sum_{m=1}^M(a_m+b_m),\sum_{m=1}^M(u_m+v_m)\Big\},\\
&\nu = \max\Big\{\sum_{m=1}^M(a_m'+v_m'),\sum_{m=1}^M(u_m'+b_m')\Big\},\\
&\xi = \max\Big\{\deg_y(\alpha)+\sum_{m=1}^M(a_m\deg_y(A_m)+b_m\deg_y(B_m)),\\
&\hphantom{=\max\Big(\Big)}\sum_{m=1}^M(u_m\deg_y(U_m)+v_m\deg_y(V_m))\Big\},\\
&\eta = \max\Big\{\deg_y(\beta)+\sum_{m=1}^M(a_m'\deg_y(A_m)+v_m'\deg_y(V_m)),\\
&\hphantom{=\max\Big(\Big)}\sum_{m=1}^M(u_m'\deg_y(U_m)+b_m'\deg_y(B_m))\Big\}.
\end{align*}
\end{convention}
Note that these parameters are all nonnegative integers which only
depend on $H$ but not on $r, d$ or $h$.
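These quantities are mechanical to compute from the data in \eqref{EQ:hyper}. The following Python sketch (our own encoding: each quadruple of $\Gamma$-factors is described by its integer parameters together with the $y$-degrees of $A_m,B_m,U_m,V_m$) evaluates Convention~\ref{CON:notation}:
\begin{verbatim}
def convention_parameters(degs_p, alpha_y, beta_y, factors):
    # degs_p = (theta_x, theta_y, theta_k); factors is a list of tuples
    # (a, a_, b, b_, u, u_, v, v_, dyA, dyB, dyU, dyV), one per m = 1..M
    mu  = max(sum(f[0] + f[2] for f in factors),
              sum(f[4] + f[6] for f in factors))
    nu  = max(sum(f[1] + f[7] for f in factors),
              sum(f[5] + f[3] for f in factors))
    xi  = max(alpha_y + sum(f[0]*f[8] + f[2]*f[9] for f in factors),
              sum(f[4]*f[10] + f[6]*f[11] for f in factors))
    eta = max(beta_y + sum(f[1]*f[8] + f[7]*f[11] for f in factors),
              sum(f[5]*f[10] + f[3]*f[9] for f in factors))
    return degs_p + (mu, nu, xi, eta)

# H = k * Gamma(x + k + y^2) / Gamma(x - k + y): one factor with
# a = a' = v = v' = 1, b = b' = u = u' = 0, deg_y(A) = 2, deg_y(V) = 1
print(convention_parameters((0, 0, 1), 0, 0,
                            [(1, 1, 0, 0, 0, 0, 1, 1, 2, 0, 0, 1)]))
# -> (0, 0, 1, 1, 2, 2, 3)
\end{verbatim}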
\begin{lemma}\label{LEM:Pdeg}
With Convention~\ref{CON:notation}, let $d$ and $h$ be the degree and
height of $L$, respectively. Then
\begin{align*}
&\deg_x(P) \leq d+\vartheta_x+r\mu,
\quad
\deg_y(P) \leq h+\vartheta_y+r\xi,\\
\text{and}\quad &\deg_k(P) \leq \vartheta_k+r\mu.
\end{align*}
As a consequence, $P$ contains at most
\begin{equation}\label{EQ:Pterm}
(d+\vartheta_x+r\mu+1)(h+\vartheta_y+r\xi+1)(\vartheta_k+r\mu+1)
\end{equation}
nonzero monomials in terms of $x,y,k$.
\end{lemma}
\begin{proof}
Observe that
\begin{align*}
&\deg_x(c_i) \leq d,\quad \deg_y(c_i) \leq h,\quad \deg_k(c_i) = 0,\\
&\deg_x(P_{i,m}) \leq ia_m+ib_m+(r-i)u_m+(r-i)v_m,\\
&\deg_y(P_{i,m}) \leq ia_m\deg_y(A_m)+ib_m\deg_y(B_m)\\
&\qquad\qquad\quad+(r-i)u_m\deg_y(U_m)+(r-i)v_m\deg_y(V_m),\\
&\deg_k(P_{i,m}) \leq ia_m+ib_m+(r-i)u_m+(r-i)v_m
\end{align*}
for all $i = 0,\dots, r$ and $m = 1,\dots, M$. The lemma follows
from \eqref{EQ:P}.
\end{proof}
As suggested in \cite{ChKa2012a}, the degrees of $Y$ in $x,y$ and $k$ will be chosen in such a
way that $QS_k(Y)-RY$ only contains monomials which are expected to
occur in $P$, so that no additional equations will appear.
\begin{lemma}\label{LEM:Ydeg}
With Convention~\ref{CON:notation}, assume that $Y \in C[x,y,k]$
satisfies
\begin{align*}
&\deg_x(Y) \leq \deg_x(P)-\nu,\quad \deg_y(Y) \leq \deg_y(P) - \eta,\\
\text{and}\quad &\deg_k(Y) \leq \deg_k(P) - \nu.
\end{align*}
Then the number of nonzero monomials in terms of $x,y,k$ appearing in
$P - (QS_k(Y)-RY)$ is bounded by the integer defined in~\eqref{EQ:Pterm}.
\end{lemma}
\begin{proof}
It follows from \eqref{EQ:Q} and \eqref{EQ:R} that
\begin{align*}
&\max\{\deg_x(Q),\deg_x(R)\} =
\max\{\deg_k(Q),\deg_k(R)\} = \nu,\\
\text{and}\quad
&\max\{\deg_y(Q),\deg_y(R)\} = \eta.
\end{align*}
Thus the degrees of $P- (QS_k(Y)-RY)$ in $x,y,k$ are bounded above by
those of $P$, yielding the assertion by Lemma~\ref{LEM:Pdeg}.
\end{proof}
Now we are ready to present the main result of this subsection,
which can be viewed as a natural generalization of the order-degree
curve from \cite[Theorem~5]{ChKa2012a}.
\begin{theorem}\label{THM:hyperodh}
With Convention~\ref{CON:notation}, let $r,d,h\in \set N$ be such
that
$d+\vartheta_x+r\mu-\nu\geq 0$,
$h+\vartheta_y+r\xi-\eta\geq 0$, $\vartheta_k+r\mu-\nu\geq 0$, and
\begin{align*}
&(r+1)(d+1)(h+1)-(d+1+\vartheta_x+r\mu)(\vartheta_k+r\mu+1)\eta\\
&-(d+2+\vartheta_x+\vartheta_k+2r\mu-\nu)(h+1+\vartheta_y+r\xi-\eta)\nu
>0.
\end{align*}
Then there exists a telescoper for $H$ of order at most~$r$, degree at most~$d$ and
height at most~$h$.
\end{theorem}
\begin{proof}
In order to prove the theorem, it is sufficient to show that, for the
given triple $(r,d,h)$ in the assumption, the corresponding equation
$P = QS_k(Y) - RY$ has a nontrivial solution $Y \in C[x,y,k]$.
Lemma~\ref{LEM:Ydeg}, along with Lemma~\ref{LEM:Pdeg}, suggests the ansatz
\[
Y = \sum_{i=0}^{d+\vartheta_x+r\mu-\nu}
\,\sum_{j=0}^{h+\vartheta_y+r\xi-\eta}
\,\sum_{\ell=0}^{\vartheta_k+r\mu-\nu}
y_{i,j,\ell}x^iy^jk^\ell
\]
with undetermined coefficients $y_{i,j,\ell}$.
Together with the unknown coefficients over $C$ of the $c_i$ in $P$ given by \eqref{EQ:P},
we obtain from the equation \eqref{EQ:gosper} a linear system over $C$ with
\begin{align*}
&(r+1)(d+1)(h+1)\\
&+(d+\vartheta_x+r\mu-\nu+1)(h+\vartheta_y+r\xi-\eta+1)(\vartheta_k+r\mu-\nu+1)
\end{align*}
variables and, by Lemma~\ref{LEM:Ydeg}, at most
\[
(d+\vartheta_x+r\mu+1)(h+\vartheta_y+r\xi+1)(\vartheta_k+r\mu+1)
\]
equations. By adding and removing $\eta$ in the second parenthesis of the number of equations,
along with a direct calculation, one readily sees that, for $r,d,h$ satisfying the
given constraints, the linear system has more
variables than equations, ensuring the existence of such a nontrivial
solution. The assertion follows.
\end{proof}
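As an aside, the region covered by Thm.~\ref{THM:hyperodh} is easy to evaluate mechanically.
The following Python sketch (a minimal illustration added for the reader's convenience; the
function name is ours and the argument names mirror Convention~\ref{CON:notation}) returns
whether a triple $(r,d,h)$ satisfies the hypotheses of the theorem:
\begin{verbatim}
def hyperodh_predicts(r, d, h, theta_x, theta_y, theta_k,
                      mu, nu, xi, eta):
    """Test the hypotheses of Thm. THM:hyperodh.

    All arguments are the nonnegative integers of the convention;
    returns True iff the theorem guarantees a telescoper of order
    at most r, degree at most d and height at most h.
    """
    if d + theta_x + r*mu - nu < 0:   # x-degree of ansatz for Y
        return False
    if h + theta_y + r*xi - eta < 0:  # y-degree of ansatz for Y
        return False
    if theta_k + r*mu - nu < 0:       # k-degree of ansatz for Y
        return False
    lhs = ((r+1)*(d+1)*(h+1)
           - (d+1+theta_x+r*mu)*(theta_k+r*mu+1)*eta
           - (d+2+theta_x+theta_k+2*r*mu-nu)
             *(h+1+theta_y+r*xi-eta)*nu)
    return lhs > 0
\end{verbatim}
Scanning a grid of pairs $(r,d)$ and recording, for each pair, the smallest $h$ for which the
predicate succeeds would reproduce the predicted entries of the table in the following example.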
\begin{example}\label{EX:nonrat}
For the hypergeometric term $H=k\Gamma(x+k+y^2)/\Gamma(x-k+y)$, the following table contains
the predicted and actual heights of telescopers for various orders and degrees.
An entry $u|v$ in cell $(r,d)$ means that Thm.~\ref{THM:hyperodh} predicts the existence of a telescoper
of order~$r$, degree~$d$ and height~$h\geq u$, and that there actually exists a telescoper of order~$r$,
degree~$d$ and height~$h=v$. A dot indicates that no operator with the corresponding order
and degree exists or is predicted.
\[\scriptscriptstyle
\begin{array}{@{}c@{\kern3pt}|@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{}}
\llap{$d$ }
12 & \cdot|\cdot & \cdot|\cdot & 42|9 & 25|6 & 22|6 & 21|5 & \x{22}|5 & 22|5 & 23|4 & 24|4 \\
11 & \cdot|\cdot & \cdot|\cdot & 50|9 & 27|6 & 24|6 & 23|5 & \x{24}|5 & 24|5 & 25|5 & 26|4 \\
10 & \cdot|\cdot & \cdot|\cdot & 62|9 & 31|7 & 27|6 & 26|5 & 26|5 & \x{27}|5 & 28|5 & 29|4 \\
9 & \cdot|\cdot & \cdot|\cdot & 86|9 & 36|7 & 30|6 & 29|5 & 30|5 & \x{30}|5 & 32|5 & 33|5 \\
8 & \cdot|\cdot & \cdot|\cdot & 158|9 & 45|7 & 36|6 & 35|5 & 35|5 & \x{36}|5 & 37|5 & 39|5 \\
7 & \cdot|\cdot & \cdot|\cdot & \cdot|9 & 62|7 & 47|6 & 43|6 & 43|5 & \x{44}|5 & 46|5 & 47|5 \\
6 & \cdot|\cdot & \cdot|\cdot & \cdot|9 & 114|7 & 69|6 & 61|6 & 59|5 & \x{60}|5 & 61|5 & 63|5 \\
5 & \cdot|\cdot & \cdot|\cdot & \cdot|9 & \cdot|7 & 160|7 & 113|6 & 102|5 & 98|5 & \x{99}|5 & 101|5 \\
4 & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|7 & \cdot|7 & \cdot|6 & 570|6 & 371|5 & 312|5 & 288|5 \\
3 & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|10 & \cdot|8 & \cdot|6 & \cdot|6 & \cdot|6 & \cdot|6 & \cdot|6 \\
2 & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|8 & \cdot|8 & \cdot|8 & \cdot|8 & \cdot|8 \\
1 & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot \\
0 & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot \\\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\rlap{ $r$} \\
\end{array}
\]
Thm.~\ref{THM:hyperodh} inherits from the order-degree curve of~\cite{ChKa2012a} that it correctly predicts
the orders but overshoots the degrees.
For example, with $r=2$, the theorem predicts a telescoper of order 2 and degree $d$
whenever $d\geq 8$, and yet we actually found telescopers of order 2 and degree 5, 6 or 7
by direct calculation.
The predicted heights also overshoot, but note
that the bound still supports trading order and degree against height; e.g.,
the height predicted for $(r,d)=(4,6)$ is lower than those for $(r,d)=(4,5)$ and $(r,d)=(3,6)$.
Note also that
the predicted heights are not weakly decreasing for fixed $d$ and growing~$r$, as they should be
(the leftmost violations are highlighted in bold).
This is a weakness of Thm.~\ref{THM:hyperodh} caused by the quadratic term in $r$ appearing in the
inequality.
\end{example}
\subsection{The rational case}\label{SUBSEC:rat}
In this subsection, we deal with the remaining case where the given
proper hypergeometric term $H$ of the form \eqref{EQ:hyper} can be
further written as $H = fH_0$ for $f\in C(x,y,k)$ and a proper
hypergeometric term $H_0$ with $S_k(H_0)/H_0=1$.
Let $a,b\in C[x,y,k]$ be such that $S_x(H_0)/H_0 = a/b$. Along the same lines
as in the proof of~\cite[Lemma~9]{ChKa2012a}, one can show that whenever
$f$ admits a telescoper of order $r$, degree $d$ and height $h$, there always
exists a telescoper for $H$ of order $r$, degree at most $d+r\max\{\deg_x(a),\deg_x(b)\}$
and height at most $h+r\max\{\deg_y(a),\deg_y(b)\}$. Based on this transformation,
we may assume without loss of
generality that $H_0 = 1$. In other words, we assume that $H$ is a
proper hypergeometric term of the form \eqref{EQ:hyper} and, at the
same time, is a rational function in $C(x,y,k)$, which is equivalent
to say that $H$ is a rational function whose denominator factors into
integer-linear factors.
Following \cite[\S~4]{ChKa2012a}, we employ the direct algorithm of Le
\cite{Le2003a}, instead of Zeilberger's algorithm, for analyzing sizes of
telescopers in the rational case so as to obtain a much sharper bound.
Let $H$ be a rational proper hypergeometric term. The algorithm of
Le \cite{Le2003a} first decomposes $H$ as
\begin{equation}\label{EQ:Hdecomp}
H = S_k(g) - g
+ \frac1u\sum_{i=1}^sV_i\Big(\frac1{(a_i x + a_i' k+a_i'')^{e_i}}\Big),
\end{equation}
where $g\in C(x,y,k)$, $u\in C[x,y]$, $V_i\in C[x,y][S_x]$,
$a_i,a_i',e_i\in \set Z$, $a_i''\in C[y]$ with $a_i'>0, e_i>0$,
$\gcd(a_i,a_i') = 1$ for all $i$, and
\[
\Big(\frac{a_i}{a_i'}-\frac{a_j}{a_j'}\Big)x +
\Big(\frac{a_i''}{a_i'}-\frac{a_j''}{a_j'}\Big)
\notin \set Z
\]
for all $i\neq j$ with $e_i=e_j$. Then for $i = 1,\dots,s$, it
computes an operator $L_i\in C(x,y)[S_x]$ such that $S_x^{a_i'}-1$ is
a right divisor of $L_i(\frac1uV_i)$. A common left multiple $L\in
C[x,y][S_x]$ of the operators $L_1,\dots,L_s$ finally leads to a
telescoper for $H$.
Note that the main computational work happens in the last two steps.
It is sensible to assume in the following degree analysis that we
already know the decomposition \eqref{EQ:Hdecomp} and to express the
bounds in terms of quantities from the last term in \eqref{EQ:Hdecomp},
rather than from $H$.
\begin{theorem}\label{THM:ratodh}
Let $H$ be a rational proper hypergeometric term admitting the
decomposition \eqref{EQ:Hdecomp}. For $i = 1,\dots,s$, assume that
$V_i$ has degree $\vartheta_i$ and height $\tau_i$. Let $r,d,h\in \set N$
be such that $d\geq \deg_x(u)$, $h\geq \deg_y(u)$, and
\begin{align*}
&(r+1)(d+1-\deg_x(u))(h+1-\deg_y(u))-\sum_{i=1}^sa_i'\vartheta_i\tau_i \\
&-(d+1-\deg_x(u))\sum_{i=1}^sa_i'\tau_i
-(h+1-\deg_y(u))\sum_{i=1}^sa_i'\vartheta_i\\
&-(d+1-\deg_x(u))(h+1-\deg_y(u))\sum_{i=1}^sa_i'
>0.
\end{align*}
Then there exists a telescoper $L$ for $H$ of order at most~$r$, degree at most~$d$
and height at most~$h$.
\end{theorem}
\begin{proof}
According to the algorithm of Le \cite{Le2003a}, it suffices to find
some $L\in C[x,y][S_x]$ and $R_i\in C(x,y)[S_x]$ with the property
that $L(\frac1uV_i) = R_i(S_x^{a_i'}-1)$ for all $i=1,\dots,s$.
Let $\rho_i$ be the order of $V_i$ for $i = 1,\dots,s$ and make an
ansatz $L = \tilde L u$ with
\[
\tilde L = \sum_{i=0}^r\sum_{j=0}^{d-\deg_x(u)}\sum_{\ell=0}^{h-\deg_y(u)}
c_{i,j,\ell}y^\ell x^jS_x^i.
\]
Then $L$ has order $r$, degree $d$, height $h$, and $L\frac1uV_i
= \tilde L V_i$ for $i = 1,\dots,s$. It thus amounts to constructing
operators $R_i\in C[x,y][S_x]$ such that $\tilde LV_i=R_i(S_x^{a_i'}-1)$.
Note that $\tilde LV_i$ has order $r+\rho_i$, degree
$d-\deg_x(u)+\vartheta_i$ and height $h-\deg_y(u)+\tau_i$. Also note
that $S_x^{a_i'}-1$ has order $a_i'$, degree 0 and height 0. So we
consider ansatzes for the $R_i$ of order $r+\rho_i-a_i'$, degree
$d-\deg_x(u)+\vartheta_i$ and height $h-\deg_y(u)+\tau_i$, respectively.
Comparing coefficients with respect to $x,y$ and $S_x$ in all the
required identities $\tilde LV_i = R_i(S_x^{a_i'}-1)$ leads to a
linear system with
\begin{align*}
&(r+1)(d-\deg_x(u)+1)(h-\deg_y(u)+1)\\
&+\sum_{i=1}^s(r+\rho_i-a_i'+1)(d-\deg_x(u)+\vartheta_i+1)(h-\deg_y(u)+\tau_i+1)
\end{align*}
variables and
\[
\sum_{i=1}^s(r+\rho_i+1)(d-\deg_x(u)+\vartheta_i+1)(h-\deg_y(u)+\tau_i+1)
\]
equations. Since $r,d,h$ satisfy the inequality given in the theorem,
the number of variables exceeds the number of equations, and thus the
resulting linear system has a nontrivial solution.
\end{proof}
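The counting in this proof can likewise be turned into an explicit test. A minimal Python
sketch (ours; the list arguments collect $a_i'$, $\vartheta_i$ and $\tau_i$ for $i=1,\dots,s$):
\begin{verbatim}
def ratodh_predicts(r, d, h, degx_u, degy_u, a, theta, tau):
    """Variables-vs-equations count from the proof of
    Thm. THM:ratodh; True iff a telescoper of order at most r,
    degree at most d and height at most h is guaranteed."""
    if d < degx_u or h < degy_u:
        return False
    D, H = d - degx_u, h - degy_u
    lhs = ((r+1)*(D+1)*(H+1)
           - sum(ai*ti*si for ai, ti, si in zip(a, theta, tau))
           - (D+1)*sum(ai*si for ai, si in zip(a, tau))
           - (H+1)*sum(ai*ti for ai, ti in zip(a, theta))
           - (D+1)*(H+1)*sum(a))
    return lhs > 0
\end{verbatim}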
\begin{example}\label{EX:rat}
For the rational function
\[
H=\frac1{x+y}(((y+1)^3+1)S_x-(x+y+y^3))(\frac1{x+3k+y}),
\]
the heights of telescopers predicted by Thm.~\ref{THM:ratodh} and
the heights of telescopers actually observed are as follows.
\[
\scriptscriptstyle
\begin{array}{@{}c|c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{}}
\llap{$d$ }
7& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 19|9 & 7|4 & 5|3 & 3|1 & 3|1 & 2|1 & 2|1 & 2|1 & 2|1 & 2|1 \\
6& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 22|9 & 8|4 & 5|3 & 4|1 & 3|1 & 2|1 & 2|1 & 2|1 & 2|1 & 2|1 \\
5&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 28|9 & 8|4 & 5|3 & 4|1 & 3|1 & 3|1 & 2|1 & 2|1 & 2|1 & 2|1 \\
4& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 46|9 & 10|4 & 6|3 & 4|1 & 3|1 & 3|1 & 2|1 & 2|1 & 2|1 & 2|1 \\
3& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 13|4 & 7|3 & 5|1 & 4|1 & 3|1 & 3|1 & 2|1 & 2|1 & 2|1 \\
2& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 28|4 & 10|3 & 6|1 & 4|1 & 4|1 & 3|1 & 3|1 & 2|1 & 2|1 \\
1& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & 19|1 & 10|1 &
7|1 & 5|1 & 4|1 & 4|1 & 3|1 \\
0& \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot &
\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot \\\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \rlap{ $r$}
\end{array}
\]
Thm.~\ref{THM:ratodh} predicts orders and degrees correctly but the bound on the height overshoots.
At least in this example, while the prediction supports trading order and degree against height, the actual
operators seem to only allow for trading order against height. This might be a general phenomenon when $s=1$.
\end{example}
\section{Contraction Ideals}\label{SEC:desing}
\def\con{\operatorname{Con}}%
In~\cite{CJKS2013}, it has been shown that order-degree curves are caused by so-called
removable singularities. This analysis is more general than the results obtained
in the previous sections in so far as it is not limited to operators obtained by
a certain operation, e.g., creative telescoping, but applies to any operator ideal.
At the same time, the ``general'' result below does not include the results stated
above as special cases, because it depends on different assumptions on what is known
about the operators at hand.
In the present section, we make an attempt at extending the analysis of \cite{CJKS2013}
to order-degree-height surfaces. Although there is some theory about the
removability of singularities in Ore algebras of the form $C[x,y][\partial]$~\cite{zhang16,zhang17,CKLZ2019},
we will formulate our result in slightly different terms. For an operator
$L\in C[x,y][\partial]$, let $\<L>$ denote the ideal it generates in
$C(x,y)[\partial]$. Then $\con\<L>:=\<L>\cap C[x,y][\partial]$ is an ideal
of $C[x,y][\partial]$, called the \emph{contraction ideal} of~$\<L>$.
We assume that some elements of the contraction ideal $\con\<L>$ are given
and study how they give rise to order-degree-height surfaces for
the elements of~$\con\<L>$.
The construction proposed in~\cite{CJKS2013} can be summarized as follows.
Suppose we know two operators $L\in C[x][\partial]$ and $L_1\in\con\<L>$.
The goal is an estimate relating the orders $r$ and the degrees $d$ of the elements of~$\con\<L>$.
Suppose that $\deg_\partial(L_1)>\deg_\partial(L)$ and let $p\in C[x]$ and $P\in C[x][\partial]$ be such that $pL_1=PL$.
Suppose further that $\deg_x(p)>\deg_x(\operatorname{lc}_\partial(P))$.
In order to search for elements of $\con\<L>$ of order~$r$ and degree~$d$, make an ansatz
$(Q_0+Q_1\frac1pP)L$ with some operators $Q_0,Q_1\in C[x][\partial]$ with
\begin{alignat*}1
\deg_\partial(Q_0) &\leq r - \deg_\partial(L),\\
\deg_x(Q_0) &\leq \deg_x(P) - 1,\\
\deg_\partial(Q_1) &\leq r - \deg_\partial(L_1),\\
\deg_x(Q_1) &\leq \deg_x(p) - \deg_x(\operatorname{lc}_\partial(P)) - 1.
\end{alignat*}
Then $\deg_x(Q_0+Q_1\frac1pP)\leq\deg_x(P)-1$, so if we equate the coefficients of
all terms of degree $>d-\deg_x(L)$ to zero, we obtain
\[
(r-\deg_\partial(L)+1)(\deg_x(P)-1-d+\deg_x(L))
\]
linear constraints on the
\begin{alignat*}1
&(r-\deg_\partial(L)+1)\deg_x(P)\\
&+ (r-\deg_\partial(L_1)+1)(\deg_x(p) - \deg_x(\operatorname{lc}_\partial(P)))
\end{alignat*}
undetermined coefficients of $Q_0$ and~$Q_1$. The linear system is underdetermined if
$r\geq\deg_\partial(L)$ and
\[
d\geq\deg_x(L) -
\Bigl(1 - \frac{\deg_\partial(L_1)-\deg_\partial(L)}{r+1-\deg_\partial(L)}\Bigr)
\bigl(\deg_x(p) - \deg_x(\operatorname{lc}_\partial(P))\bigr).
\]
This matches the formula in Thm.~9 of~\cite{CJKS2013} for the case $m=1$.
Observe that every nonzero solution of the linear system gives rise to a nonzero
operator $Q_0+Q_1\frac1pP$, because the degree restrictions imposed in the ansatz
imply that the leading coefficient of $Q_1\frac1pP$ is a proper rational function while
$Q_0$ has only polynomial coefficients, and then
$Q_0+Q_1\frac1pP=0\Rightarrow Q_0=Q_1=0$.
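For reference, the curve just derived is immediate to evaluate numerically. A minimal Python
sketch (ours; it returns the right-hand side of the displayed inequality, so every integer $d$
at least this value is admissible for the given order~$r$):
\begin{verbatim}
from fractions import Fraction

def degree_bound(r, ord_L, ord_L1, degx_L, gap):
    """ord_L, ord_L1: the orders deg_partial of L and L_1;
    gap = deg_x(p) - deg_x(lc(P)); requires r >= ord_L."""
    assert r >= ord_L
    excess = Fraction(ord_L1 - ord_L, r + 1 - ord_L)
    return degx_L - (1 - excess) * gap
\end{verbatim}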
For order-degree curves, the argument just sketched generalizes to the case where for
the given $L\in C[x][\partial]$ we know several operators $L_1,\dots,L_m\in\con\<L>$.
However, the general proof given in~\cite{CJKS2013} makes use of the fact that $C[x]$ is a
Euclidean domain. If we turn to operators in $C[x,y][\partial]$ we are faced with the
problem that $C[x,y]$ is not a Euclidean domain. This makes it less clear how the
ansatz must be shaped in order to ensure that a nonzero solution of the linear system
translates into a nonzero ideal element. Indeed, it seems impossible to formulate a
general ansatz whose shape only depends on the orders, degrees, and heights of
the operators involved. Instead, we must also take into account how many syzygies
there are between the leading coefficients of the operators. This quantity, for which
we introduce the following notation, will appear in our order-degree-height surface
formula.
\begin{defi}\label{def:syz}
For polynomials $u_1,\dots,u_m\in C[x,y]$, let
\begin{alignat*}1
&\operatorname{Syz}(u_1,\dots,u_m)\\
&:=\{(q_1,\dots,q_m)\in C[x,y]^m | q_1u_1 + \cdots + q_mu_m=0\}
\end{alignat*}
denote their syzygy module.
Let $p_1,\dots,p_m\in C[x,y]\setminus\{0\}$ and $P_1,\dots,P_m\in C[x,y][\partial]\setminus\{0\}$.
Suppose that $\deg_\partial(P_1)\leq\cdots\leq\deg_\partial(P_m)$.
For $n\in\set N$, let $m_n\in\{1,\dots,m\}$ be maximal such that $\deg_\partial(P_{m_n})\leq n$,
let $u_\ell:=\sigma^{n-\deg_\partial(P_\ell)}\Bigl(\frac{\operatorname{lc}_\partial(P_\ell)}{p_\ell}\Bigr)$
for $\ell=1,\dots,m_n$ and define
\begin{alignat*}1
V_n = \Bigl\{ &(q_1,\dots,q_{m_n})\in\operatorname{Syz}(u_1,\dots,u_{m_n}) : \\
& \deg_x q_\ell \leq\deg_x p_\ell - \deg_x\operatorname{lc}_\partial(P_\ell)\text{ and }\\
& \deg_y q_\ell \leq\deg_y p_\ell - \deg_y\operatorname{lc}_\partial(P_\ell)\text{ for all $\ell$} \Bigr\}.
\end{alignat*}
We define $c_n:=\dim_C V_n$.
\end{defi}
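The numbers $c_n$ can be computed by plain linear algebra: after clearing denominators in the
$u_\ell$ (which leaves the syzygy condition unchanged), the degree bounds turn $V_n$ into the
kernel of a finite linear system. A minimal Python/SymPy sketch (ours; it assumes the $u_\ell$
are already given as polynomials, with the shifts $\sigma$ applied and denominators cleared,
and that \texttt{bounds} holds the pairs
$(\deg_x p_\ell - \deg_x\operatorname{lc}_\partial(P_\ell),
\deg_y p_\ell - \deg_y\operatorname{lc}_\partial(P_\ell))$):
\begin{verbatim}
import sympy as sp

def dim_Vn(us, bounds, x, y):
    """dim_C of the space of tuples (q_1,...,q_m) with
    deg_x q_l <= bounds[l][0], deg_y q_l <= bounds[l][1]
    and sum_l q_l*us[l] == 0."""
    unknowns, expr = [], sp.Integer(0)
    for u, (bx, by) in zip(us, bounds):
        q = sp.Integer(0)
        for i in range(bx + 1):
            for j in range(by + 1):
                c = sp.Symbol('c%d' % len(unknowns))
                unknowns.append(c)
                q += c * x**i * y**j
        expr += q * u
    # one linear equation per monomial of the expanded sum
    eqs = sp.Poly(sp.expand(expr), x, y).coeffs()
    M = sp.Matrix([[sp.diff(e, c) for c in unknowns] for e in eqs])
    return len(unknowns) - M.rank()
\end{verbatim}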
By the following lemma, the sequence $c_0,c_1,\dots$ stabilizes at $n=\max_{\ell=1}^m\deg_\partial(P_\ell)$.
In particular, for every operator $L$ and every choice $p_1,\dots,p_m$ and $P_1,\dots,P_m$ as in Def.~\ref{def:syz},
there is a constant $c$ such that $\sum_{n=0}^{r - \deg_\partial(L)} c_n\leq (r-\deg_\partial(L)+1) c$.
\begin{lemma}
Let $p_1,\dots,p_m$ and $P_1,\dots,P_m$ be as in Def.~\ref{def:syz}.
If $n\in\set N$ is such that $m_n=m_{n+1}$, then $c_n=c_{n+1}$.
\end{lemma}
\begin{proof}
Since $\sigma$ is an automorphism, we have
\begin{alignat*}1
&(q_1,\dots,q_{m_n})\in\operatorname{Syz}(u_1,\dots,u_{m_n})\\
&\iff(\sigma(q_1),\dots,\sigma(q_{m_n}))\in\operatorname{Syz}(\sigma(u_1),\dots,\sigma(u_{m_n}))
\end{alignat*}
for all $q_1,\dots,q_{m_n}$. Since $\sigma$ is also assumed to preserve degrees,
\[
\deg q_\ell\leq\deg p_\ell-\deg\operatorname{lc}_\partial(P_\ell)
\iff
\deg\sigma(q_\ell)\leq\deg p_\ell-\deg\operatorname{lc}_\partial(P_\ell)
\]
for all $\ell$ and every choice of~$q_\ell$, both for $\deg=\deg_x$ and $\deg=\deg_y$.
Together with the assumption $m_n=m_{n+1}$, it follows that $\dim_C V_n=\dim_C V_{n+1}$.
\end{proof}
\begin{thm}\label{thm:contraction}
Let $L\in C[x,y][\partial]$ and let $L_1,\dots,L_m\in\con\<L>\subseteq C[x,y][\partial]$.
For all $\ell=1,\dots,m$, let $p_\ell\in C[x,y]\setminus\{0\}$ and $P_\ell\in C[x,y][\partial]$
be such that $p_\ell L_\ell=P_\ell L$, the $p_\ell$ are pairwise coprime,
$\deg_x p_\ell >\deg_x\operatorname{lc}_\partial(P_\ell)$ and $\deg_y p_\ell >\deg_y\operatorname{lc}_\partial(P_\ell)$.
Define $\lambda_{x,\ell}$ and $\lambda_{y,\ell}$ by
\begin{alignat*}1
\lambda_{x,\ell} &:= \deg_x p_\ell - \deg_x \operatorname{lc}_\partial(P_\ell),\\
\lambda_{y,\ell} &:= \deg_y p_\ell - \deg_y \operatorname{lc}_\partial(P_\ell)
\end{alignat*}
for all $\ell$, and let
\begin{alignat*}3
\eta_x&=\max_{\ell=1}^m(\deg_x P_\ell - \deg_x\operatorname{lc}_\partial(P_\ell)),&\quad\mu_x&=\sum_{\ell=1}^m\deg_x(p_\ell),\\
\eta_y&=\max_{\ell=1}^m(\deg_y P_\ell - \deg_y\operatorname{lc}_\partial(P_\ell)),&\quad\mu_y&=\sum_{\ell=1}^m\deg_y(p_\ell),\\
\xi_x&=\sum_{\ell=1}^m(\deg_\partial(L_\ell){-}1)\deg_x(p_\ell),&&\null\kern-3em
\xi_y =\sum_{\ell=1}^m(\deg_\partial(L_\ell){-}1)\deg_y(p_\ell).
\end{alignat*}
Let the numbers $c_0,c_1,\dots$ be as in Def.~\ref{def:syz}.
Let $r,d,h\in\set N$ be such that $r\geq\deg_\partial(L),\deg_\partial(L_1),\dots,\deg_\partial(L_m)$ and
\begin{alignat*}1
&(r-\deg_\partial(L)+1)\Bigl(\tilde\eta_x\tilde\eta_y+\sum_{\ell=1}^m \lambda_{x,\ell}\lambda_{y,\ell}\\
&\quad - (\tilde\eta_x+r\mu_x-\xi_x)(\tilde\eta_y+r\mu_y-\xi_y)\\
&\quad + (d - \deg_x(L) + 1+r\mu_x-\xi_x)(h - \deg_y(L) + 1+r\mu_y-\xi_y)\Bigr)\\
&> \sum_{\ell=1}^m \deg_\partial(P_\ell)\lambda_{x,\ell}\lambda_{y,\ell}+ \max_{n=0}^{r - \deg_\partial(L)} c_n,
\end{alignat*}
where $\tilde\eta_x=\max(\eta_x,d-\deg_x(L)+1)$ and $\tilde\eta_y=\max(\eta_y,h-\deg_y(L)+1)$.
Then there exists a $Q\in C(x,y)[\partial]$ such that $QL\in C[x,y][\partial]$ and
$QL\neq0$ and $\deg_\partial(QL)\leq r$ and $\deg_x(QL)\leq d$ and $\deg_y(QL)\leq h$.
\end{thm}
\begin{proof}
For $r,d,h\in\set N$ as in the theorem, consider an ansatz
\[
Q:=Q_0 + Q_1\frac1{p_1}P_1 + \cdots + Q_m\frac1{p_m}P_m
\]
for a left-multiplier $Q$ for~$L$.
Note that for every choice $Q_0,\dots,Q_m\in C[x,y][\partial]$ we have $QL\in C[x,y][\partial]$.
Setting $p_0=1$ and $P_0=1$, we can write $Q=\sum_{\ell=0}^m Q_\ell\frac1{p_\ell}P_\ell$.
For $\ell\geq1$, we choose the degree bounds $\deg_x(Q_\ell)<\deg_x(p_\ell) - \deg_x \operatorname{lc}_\partial(P_\ell)=\lambda_{x,\ell}$
and $\deg_y(Q_\ell)<\deg_y(p_\ell) - \deg_y \operatorname{lc}_\partial(P_\ell)=\lambda_{y,\ell}$,
and for $\ell=0$, we choose the degree bounds $\deg_x(Q_0)<\tilde\eta_x$ and $\deg_y(Q_0)<\tilde\eta_y$.
We also impose the bounds $\deg_\partial(Q_\ell)\leq r - \deg_\partial(P_\ell) - \deg_\partial(L)$ for all~$\ell$.
We then have $\deg_\partial(QL)\leq r$ as well as $\deg_x(Q)<\tilde\eta_x$ and $\deg_y(Q)<\tilde\eta_y$.
In order to ensure the desired shape of~$QL$, we equate undesired coefficients of $Q$ to zero.
Because $\deg_x(QL)=\deg_x(Q)+\deg_x(L)$ and $\deg_y(QL)=\deg_y(Q)+\deg_y(L)$, we want $Q$
to be such that $\deg_x(Q)\leq d-\deg_x(L)$ and $\deg_y(Q)\leq h-\deg_y(L)$.
Undesired coefficients are those which violate these degree conditions.
Taking into account that the coefficients of $Q$ are rational functions whose common denominator
divides the polynomial $\prod_{\ell=1}^m\prod_{i=0}^{\deg_\partial Q_\ell}\sigma^i(p_\ell)$, which has
$x$-degree at most $r\mu_x - \xi_x$ and
$y$-degree at most $r\mu_y - \xi_y$, we get
\begin{alignat*}1
&\underbrace{(r - \deg_\partial(L) + 1)}_{\vbox{\footnotesize\hbox{number of $\partial$-mono-\vphantom{$y$}}\hbox{mials in $Q$}}}
\Bigl(\underbrace{(\tilde\eta_x+r\mu_x-\xi_x)(\tilde\eta_y+r\mu_y-\xi_y)}_{\vbox{\footnotesize\hbox{number of $x$-$y$-monomials}\hbox{in the numerator of $Q$}}}\\
&\hphantom{\times\bigl(}{}-\underbrace{(d - \deg_x(L) + 1+r\mu_x-\xi_x)(h - \deg_y(L) + 1+r\mu_y-\xi_y)}_{\text{number of $x$-$y$-monomials
that are allowed to stay}}\Bigr)
\end{alignat*}
equations. The number of unknowns in the ansatz is
\begin{alignat*}1
&\sum_{\ell=0}^m (\deg_\partial(Q_\ell)+1)(\deg_x(Q_\ell)+1)(\deg_y(Q_\ell)+1)\\
&=(r{-}\deg_\partial(L){+}1)\tilde\eta_x\tilde\eta_y + \sum_{\ell=1}^m(r{-}\deg_\partial(P_\ell){-}\deg_\partial(L){+}1)\lambda_{x,\ell}\lambda_{y,\ell}\\
&=(r{-}\deg_\partial(L){+}1)\Bigl(\tilde\eta_x\tilde\eta_y+\sum_{\ell=1}^m \lambda_{x,\ell}\lambda_{y,\ell}\Bigr)
-\sum_{\ell=1}^m \deg_\partial(P_\ell)\lambda_{x,\ell}\lambda_{y,\ell}.
\end{alignat*}
If the number of variables exceeds the number of equations, the linear system will have nonzero solutions.
However, a nonzero solution $(Q_0,\dots,Q_m)$ need not translate into a nonzero multiplier
$Q=\sum_{\ell=0}^m Q_\ell\frac1{p_\ell}P_\ell$.
There is a danger of cancellations. By the restrictions on the degrees, no cancellation can happen between
$Q_0$ and $\sum_{\ell=1}^m Q_\ell\frac1{p_\ell}P_\ell$, because $Q_0$ has only polynomial coefficients while
$\sum_{\ell=1}^m Q_\ell\frac1{p_\ell}P_\ell$ is either zero or the leading coefficient is a proper rational
function. It remains to avoid that $\sum_{\ell=1}^m Q_\ell\frac1{p_\ell}P_\ell$ is identically zero.
If it is identically zero, there must in particular be a cancellation among the leading terms.
Let $n$ be maximal such that at least one of $[\partial^n] Q_\ell\frac1{p_\ell}P_\ell$ is nonzero,
where we write $[\partial^i]X$ for the coefficient of $\partial^i$ in $X\in C(x,y)[\partial]$.
Then
\[
([\partial^{n-\deg_\partial(P_1)}]Q_1, \dots, [\partial^{n-\deg_\partial(P_m)}]Q_m)
\]
belongs to the vector space~$V_n$ of Def.~\ref{def:syz} (coordinates $\ell$ with $n<\deg_\partial(P_\ell)$
are meant to be omitted).
For each $n$ there can be at most $c_n$ many linearly independent solutions for which a cancellation happens.
Therefore, if the number of variables exceeds the number of equations by more than $\max_{n=0}^{r - \deg_\partial(L)} c_n$,
there must be at least one solution that does not completely cancel.
Since this is the case if $r,d,h$ are chosen as specified in the theorem, we are done.
\end{proof}
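As with the earlier bounds, the inequality of Thm.~\ref{thm:contraction} is mechanical to
check. A minimal Python sketch (ours; the list arguments collect the per-$\ell$ quantities,
and \texttt{c\_max} stands for $\max_{n=0}^{r-\deg_\partial(L)}c_n$):
\begin{verbatim}
def contraction_predicts(r, d, h, ord_L, degx_L, degy_L,
                         lam_x, lam_y, ord_P, eta_x, eta_y,
                         mu_x, mu_y, xi_x, xi_y, c_max):
    # r >= ord(L_l) = ord(P_l) + ord(L), since p_l L_l = P_l L
    if r < ord_L or any(r < ord_L + p for p in ord_P):
        return False
    tx = max(eta_x, d - degx_L + 1)
    ty = max(eta_y, h - degy_L + 1)
    lhs = (r - ord_L + 1) * (
        tx*ty + sum(ax*ay for ax, ay in zip(lam_x, lam_y))
        - (tx + r*mu_x - xi_x)*(ty + r*mu_y - xi_y)
        + (d - degx_L + 1 + r*mu_x - xi_x)
          *(h - degy_L + 1 + r*mu_y - xi_y))
    rhs = sum(p*ax*ay
              for p, ax, ay in zip(ord_P, lam_x, lam_y)) + c_max
    return lhs > rhs
\end{verbatim}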
\begin{example}
Let $L\in C[x,y][S_x]$ be the minimal telescoper of the hypergeometric term $H=k\Gamma(x+k+y^2)/\Gamma(x-k+y)$
already considered in Example~\ref{EX:nonrat}.
It has order~2, degree~5, and height~9, and we have
\[
\operatorname{lc}_\partial(L) = (2 x+y^2+y)(x^2+x y^2+x y+y^3-1).
\]
There is an element $L_1$ of $\con\<L>$ of order~3, degree~8, and height~8, with
\[
\operatorname{lc}_\partial(L_1) = 6 x^2+6 x y^2+6 x y+6 x+y^4+4 y^3+4 y^2+3 y.
\]
Applying Thm.~\ref{thm:contraction} to $L$ and $L_1$ gives the following height predictions
for elements of $\con\<L>$ of various orders $r$ and degrees~$d$ (in comparison to the actual
smallest heights for elements of the respective shape):
\[
\scriptscriptstyle
\begin{array}{@{}c|c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{\kern5pt}c@{}}
\llap{$d$ }10&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 8|7 & 8|6 & 8|5 & 8|5 & 8|5 \\
9&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 8|7 & 8|6 & 8|5 & 8|5 & 8|5 \\
8&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 8|7 & 8|6 & 8|5 & 8|5 & 8|5 \\
7&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 10|7 & \x{12}|6 & 13|6 & 15|5 & 17|5 \\
6&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 13|7 & \x{18}|6 & 23|6 & 28|5 & 33|5 \\
5&\cdot|\cdot & \cdot|\cdot & \cdot|9 & 23|7 & \x{38}|7 & 53|6 & 68|5 & 83|5 \\
4&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|7 & \cdot|7 & \cdot|6 & \cdot|6 & \cdot|5 \\
3&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|10 & \cdot|8 & \cdot|6 & \cdot|6 & \cdot|6 \\
2&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|8 & \cdot|8 & \cdot|8 \\
1&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot \\
0&\cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot & \cdot|\cdot \\\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\rlap{ $r$}
\end{array}
\]
We see that Thm.~\ref{thm:contraction} overshoots but rightly indicates that order and degree
can be traded against height: for example, for $(r,d)=(3,8)$ the predicted height is~$8$, which
is less than the predicted height for $(r,d)=(3,7)$.
Also, similar to Thm.~\ref{THM:hyperodh}, for fixed $d$ and increasing $r$ the minimal predicted heights
are not weakly decreasing.
Like in Thm.~\ref{THM:hyperodh}, we blame the quadratic term in $r$ appearing in the inequality for this
behaviour.
Thm.~\ref{thm:contraction} leads to better estimates than Thm.~\ref{THM:hyperodh} because knowing $L$ and $L_1$
in advance is a much stronger assumption than knowing the hypergeometric term~$H$.
\end{example}
Thm.~\ref{thm:contraction} appears to be a natural generalization of the order-degree curve from~\cite{CJKS2013}.
While this order-degree curve turns out to make quite accurate degree predictions, Thm.~\ref{thm:contraction}
is not that sharp. As mentioned at the beginning, an order-degree curve for an operator~$L$
emerges if there are $L_1,P,p$ with $pL_1=PL$ and $\deg_x(p)>\deg_x(\operatorname{lc}_\partial(P))$. In the univariate
case, we can arrange this situation whenever $\operatorname{lc}_\partial(L)$ has a removable factor. It can
in fact always be arranged that $\operatorname{lc}_\partial(P)=1$. This is no longer true for operators in
$C[x,y][\partial]$, because $C[x,y]$ is not a Euclidean domain. As a consequence, it can happen
(and seems to be common) that a factor of $\operatorname{lc}_\partial(L)$ can only be removed at the cost of
introducing another factor into the leading coefficient.
Another interesting difference between Thm.~\ref{thm:contraction} and Thm.~9 of~\cite{CJKS2013} is
that $\deg_x(P)$ cancels out in the derivation of the order-degree curve (cf. the summary at the
beginning of this section), but it no longer cancels in the more general setting of Thm.~\ref{thm:contraction}.
This difference is also responsible for the disturbing quadratic dependence on~$r$, which has no
counterpart in the univariate case.
\section{Conclusion}
We have shown several situations in which order and degree can be traded against height.
The effect was observed experimentally with actual operators and is supported by the theoretical
bounds we have given.
We believe that our results will be useful for deriving refined complexity analyses for
operations involving operators that take heights into account.
Our results are limited to the case of operators in $C[x,y][\partial]$ with $y$-degree
as height, and it would be interesting to have analogous results for operators in $\set Z[x][\partial]$
with integer bitlength as height. We conclude the paper with an example indicating that
trading effects can also be expected in this setting.
\begin{example}
For $a_n=\sum_{k=0}^n(\binom{n}{2k}+\binom{2n}{k}^2)$ let $a(x)=\sum_{n=0}^\infty a_nx^n$.
Using LLL, we searched for differential operators of various orders~$r$ and degrees~$d$ that
annihilate the series $a(x)$ and involve short integers. The results are shown in the table
below; a number $h$ in cell $(r,d)$ of the table indicates that we found an operator of
size $(r,d)$ which only involves integers with at most $h$ decimal digits.
Note that since multiplying an operator from the left by a power of $x$ increases the
degree of the operator without changing any of its coefficients, the minimal heights
for a fixed $r$ must be weakly decreasing while moving along the positive direction of the $d$-axis. The height~9 we observe
for order~12 and degree~14 contradicts this expectation. We believe that this is due
to the fact that LLL is not guaranteed to find the shortest vector of a given lattice.
For a fixed $d$ and increasing~$r$, it is conceivable that the minimal height increases.
The increase we observe at order~14 and degree~7, however, also seems to be an artifact
of the LLL computation, as the next few heights for degree~7 are again~9.
The same remarks apply to some other cells, e.g., $(12,9)$, $(14,15)$, etc.
\[\scriptscriptstyle
\begin{array}{@{}c@{\kern3pt}|@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{\kern3pt}c@{}}
\llap{$d$ }20 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 9 & 9 & 9 & 10 & 11 & 11 & 11 & 11 & 12 \\
19 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 9 & 9 & 9 & 10 & 10 & 11 & 11 & 11 & 12 \\
18 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 9 & 9 & 9 & 9 & 10 & 10 & 11 & 11 & 11 & 11 \\
17 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 9 & 9 & 9 & 9 & 10 & 10 & 10 & 11 & 11 \\
16 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 9 & 9 & 10 & 9 & 10 & 10 & 10 & 11 \\
15 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 9 & 8 & 9 & 9 & 10 & 10 & 10 & 10 & 11 \\
14 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 9 & 8 & 8 & 9 & 8 & 10 & 10 & 10 & 11 \\
13 & \cdot & 10 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 10 & 10 & 10 \\
12 & \cdot & 10 & 10 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 10 & 8 \\
11 & \cdot & \cdot & 11 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\
10 & \cdot & \cdot & 12 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\
9 & \cdot & \cdot & 14 & 10 & 9 & 8 & 8 & 8 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\
8 & \cdot & \cdot & 27 & 11 & 10 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 \\
7 & \cdot & \cdot & \cdot & 17 & 12 & 10 & 10 & 9 & 9 & 9 & 10 & 9 & 9 & 9 & 9 & 9 & 9 \\
6 & \cdot & \cdot & \cdot & \cdot & 27 & 15 & 12 & 11 & 11 & 11 & 11 & 10 & 11 & 11 & 10 & 11 & 10 \\
5 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 \\
4 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \hline
& 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20\rlap{ $r$} \\
\end{array}
\]
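For the curious reader, one way to organize such a search is sketched below in Python (a
minimal illustration, not necessarily the code used for the table; it computes the exact
integer kernel of the truncated annihilation conditions with SymPy and then LLL-reduces a
basis of that kernel lattice via the fpylll bindings). The truncation order $N$ should be
taken comfortably larger than $(r+1)(d+1)$ to suppress spurious solutions, and the input must
contain at least $N+r$ series coefficients.
\begin{verbatim}
import sympy as sp
from fpylll import IntegerMatrix, LLL

def short_annihilators(coeffs, r, d, N):
    """LLL-reduced basis of integer operators
    sum c_{ij} x^i (d/dx)^j, 0<=i<=d, 0<=j<=r, annihilating
    the series with the given coefficients up to order N."""
    x = sp.Symbol('x')
    a = sum(c * x**n for n, c in enumerate(coeffs))
    cols = []
    for j in range(r + 1):
        da = sp.expand(sp.diff(a, x, j))
        for i in range(d + 1):
            p = sp.Poly(x**i * da, x)
            cols.append([p.coeff_monomial(x**n) for n in range(N)])
    M = sp.Matrix(cols).T        # annihilators = right kernel
    rows = []
    for v in M.nullspace():      # rational kernel basis
        den = sp.lcm([sp.fraction(e)[1] for e in v])
        rows.append([int(e * den) for e in v])
    B = IntegerMatrix.from_matrix(rows)  # assumes nonzero kernel
    LLL.reduction(B)             # short integer kernel vectors
    return B
\end{verbatim}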
\end{example}
\noindent\textbf{Acknowledgement.}
We thank the referees for their careful reading and their valuable suggestions.
\bibliographystyle{plain}
|
2,877,628,091,209 | arxiv | \section{Introduction}
Topological systems are characterized by edge excitations that are remarkably robust
to perturbations. They arise due to a bulk-boundary correspondence, where
geometric properties of the bulk band-structure
control the nature of
excitations at the edge when the system is placed in a confined geometry.
Thus perturbations that cannot affect the bulk topological properties cannot perturb the
edge states either. For an integer quantum Hall system for example, bulk bands have a non-zero
Chern number $C$, which also equals the number
of chiral edge modes~\cite{TKNN,Bellissard94,Avron94}. The topological nature of the system
is responsible for the highly precise quantization of the Hall conductance at $C e^2/h$~\cite{Klitzing80}.
Chern insulators are topological insulators (TIs) which show quantum Hall physics in
the absence of a magnetic field, where time-reversal symmetry is broken by introducing complex
hopping amplitudes~\cite{Haldane88}. This can be achieved by doping with magnetic impurities~\cite{Hasan12}.
Chern insulators can also be realized by the application of a circularly polarized
laser~\cite{Oka09,Inoue10,Kitagawa10,Lindner11}, where TIs arising out of such time-periodic
perturbations are referred to as Floquet TIs (FTIs)~\cite{Lindner11}.
The field of FTIs has grown in recent years because of
several experimental realizations ranging from periodically shaken lattices of cold-atomic gases~\cite{Esslinger14}, to
graphene~\cite{Karch10,Karch11}, Dirac fermions on the surface of 3D TIs~\cite{Gedik13} under external irradiation,
and photonic systems~\cite{Segev13,Hafezi13}. In fact FTIs are extremely rich, showing a variety of topological phases
as the amplitude, frequency, and polarization of the periodic drive are varied~\cite{Rudner13,Kundu14,Carpentier14,Dehghani15a}.
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig1.pdf}\\
\caption{
Sketch of the four topological phases studied, and labeled by $P_{1,2,3,4}$.
Laser frequency $\Omega$ is in units of the hopping strength $t_h$ with $6t_h$ being the
bandwidth of graphene. Laser amplitude $A_0$ is in units of the lattice spacing.
Solid lines are edge states from the center of the FBZ, while dashed lines are
edge states from the boundary of the FBZ. The arrows indicate the chirality of the edge states.
}
\label{fig1}
\end{figure}
The topological properties of time-periodic Hamiltonians are extracted by studying the
spectral properties of an effective time-independent Hamiltonian known as the
Floquet Hamiltonian,~\cite{Sambe73} which captures the time-evolution of the system over
one period. Since energy is conserved only up to integer multiples of the driving frequency, the eigen-energies
of the Floquet Hamiltonian are known as quasi-energies. Much like in spatially periodic systems,
here too the Floquet description leads to an over-counting, where
quasi-energies separated by integer multiples of the laser frequency represent identical
eigen-modes. Thus, in order to avoid this over-counting, one restricts the quasi-spectrum to a single frequency window of
the periodic drive, the so-called Floquet Brillouin zone (FBZ).
Analysis of the Floquet quasi-energies and quasi-modes shows that not only can
FTIs be used to realize conventional Chern insulators,~\cite{Oka09,Esslinger14} they also have
unique properties coming from the fact that the energy is not conserved by integer multiples of the periodic drive.
As a result of this, the Chern number of the Floquet bands simply inform us of the difference between the number
of chiral edge modes above and below the Floquet band. Effective (2+1) D topological invariants need to
be defined to account for the the periodicity in the additional temporal direction, and to uniquely
determine the number of edge modes at a given quasi-energy~\cite{Rudner13}. In particular, for FTIs it is possible to
have anomalous edge states appearing at the boundaries
of the FBZ. FTIs can therefore realize topological systems where even though the Chern number
of the band is zero, yet equal number of chiral edge modes appear above and below it.
When the laser frequency is larger than the band-width,
conventional Chern insulators are realized for moderate laser amplitudes, where by conventional we mean
that there are edge states only at the center of FBZ, and the Chern number equals the number
of chiral edge modes. The anomalous edge states at the Floquet zone boundaries
typically appear for resonant lasers where the resonance creates effective band-inversions~\cite{Lindner11},
with the anomalous edge states appearing at these band-inversion
points. Edge states at the FBZ boundaries oscillate in time at higher frequencies relative to those edge states at the center
of the FBZ~\cite{Erhai14,Erhai14b}. This leads to a situation where not all the edge states
contribute equally to dc transport when the samples are connected to leads~\cite{Torres14}.
Given all these unusual properties, one needs
a clear picture of how all the different edge modes, the ones appearing at the center of the FBZ and
the anomalous ones appearing at the boundaries of the FBZ, affect measurable quantities. Thus
we need to explore how these edge modes are occupied, and the current densities carried by them.
Remarkably, despite the intense activity in the field, this study has not been done, and we plan to undertake it here
for a closed system in the absence of external dissipation.
We present results for a FTI realized by irradiating
half-filled graphene in a cylindrical geometry with zig-zag edges, by
a circularly polarized laser. We assume the system is closed so that the occupation of all resulting Floquet quasi-energy
states is completely determined by the laser switch-on protocol.
We study the four different
topological phases summarized in Fig.~\ref{fig1}
and labeled as $P_{1,2,3,4}$. Of these four phases,
one of them corresponds to an off-resonant high frequency laser
($P_1$), and the remaining ($P_{2,3,4}$) correspond to resonant low frequency lasers.
Moreover, of these four cases, two ($P_{1,2}$)
are conventional Chern insulators in that edge states appear only at the center of the FBZ, while for the other two
phases ($P_{3,4}$), anomalous edge states appear at the boundaries of the FBZ.
For the above phases we determine the occupation probability of
the bulk and edge states following a laser quench. Moreover from the edge state population, we give simple Landauer based arguments to estimate the conductance
of the edge modes. In doing so we arrive at estimates that are consistent with a Kubo formalism computation of the dc Hall conductance of
a bulk system with no boundaries~\cite{Dehghani15a,Dehghani15b}. Thus even though the conductance is not $Ce^2/h$ for
resonant lasers due to nonequilibrium occupation of bands, we uncover a bulk-boundary correspondence that persists even in
the nonequilibrium system, where the Hall response for a spatially extended system without edges is of the same magnitude as the transport
via edge states populated in a nonequilibrium way for precisely the same system but now with spatial boundaries.
In addition to edge state occupation, we also study the average current density, a quantity that can be locally measured using
magnetometers such as SQUIDs.
We find that the nonequilibrium population following the laser quench
breaks inversion symmetry and creates a net sheet of circulating current flowing on the cylinder.
We show that since the individual eigenstates are chiral, such a current density profile
results in a charge imbalance between the top and bottom edges of the cylinder.
In order to understand the symmetries
of the current density following the laser quench, we explore the symmetries of the current density carried by individual Floquet eigenstates,
and in the process highlight
how even though the instantaneous Hamiltonian has no special symmetries other than particle-hole symmetry, the Floquet Hamiltonian, on
averaging over one laser cycle, shows some additional emergent symmetries such as inversion symmetry. We discuss the role of these symmetries
on the current and charge densities generated by the laser quench.
We now briefly discuss the relation between our work and existing literature.
Our study is in a regime complementary to Ref.~\onlinecite{Torres14}
where a small sample in contact with leads was studied, and where the role of the anomalous edge modes is
determined by how well they hybridize with lead states. In contrast our study is for larger systems and also closed
systems such as those realized in ultracold atomic gases~\cite{Esslinger14}. Our results are also relevant for
pump-probe spectroscopy of solid-state systems~\cite{Gedik13}, where time-resolved and angle resolved photoemission (ARPES) is a very effective way
of probing edge-state occupation probabilities.
Note that how edge states are occupied after a quench between two different static Hamiltonians
with different topological invariants has been studied~\cite{Bhaseen15,Galilo15}.
In our work we study quench dynamics for a case where the final Hamiltonian after the quench is not static but is periodic in time.
By virtue of this time-periodicity the
edge state structure is far richer than in conventional TIs, leading to richer dynamics.
Ref.~\onlinecite{Rigol14} studied dynamics in a system similar to
ours; however, they focused only on the high-frequency off-resonant case where the edge state structure is more
conventional. Here, in contrast, we study both off-resonant and resonant laser frequencies, thus highlighting
how the anomalous edge states are populated. In addition, even for the off-resonant laser, our results are qualitatively
different from Ref.~\onlinecite{Rigol14}, as our geometry, filling factor, and laser switch-on protocol result in a completely different
steady-state current density profile.
The paper is organized as follows. In Section~\ref{model}, we present the model and derive expressions for the
occupation probabilities and current densities. In Section~\ref{results} we present our results, while we
conclude in Section~\ref{conclu}, and give additional details in three appendices.
\section{Model} \label{model}
We consider graphene at half-filling in a cylindrical geometry with zig-zag edges that support edge states.
The graphene sheet is irradiated by a circularly polarized and spatially uniform laser of
amplitude $A_0$ and frequency $\Omega$.
Choosing $x$ to be the spatially uniform direction wrapping the cylinder, with $k_x$ being the momentum along
this direction, and labeling the sites along the
cylinder by $n_y=1 \ldots N_y$, where $N_y$ is even, the Hamiltonian of graphene without the laser is,
\begin{eqnarray}
&&H_G = -t_h\sum_{k_x,n_y=1\ldots N_y/2}\biggl[c^{\dagger}_{2n_y-1,k_x}c_{2n_y,k_x}\nonumber\\
&&\times \biggl(e^{-ik_x\delta_{1x}}+ e^{-ik_x\delta_{2x}}\biggr)
+h.c\biggr]\nonumber\\
&&+ \biggl[c^{\dagger}_{2n_y+1,k_x}c_{2n_y,k_x}+ h.c.\biggr]\biggl(1-\delta_{n_y=N_y/2}\biggr)\label{H1}.
\end{eqnarray}
Above, odd and even sites are the $A$ and $B$ sub-lattices, respectively, and the nearest-neighbor vectors measured from
the $B$ sub-lattice are,
\begin{eqnarray}
\vec{\delta}_1=\frac{a}{2}\left(\sqrt{3},-1\right);\,\, \vec{\delta}_2=\frac{a}{2}\left(-\sqrt{3},-1\right);\,\,
\vec{\delta}_3=a\left(0,1\right).
\end{eqnarray}
The laser enters through the replacement $c^{\dagger}_{\vec{r}'+\vec{r}}c_{\vec{r}'} \rightarrow c^{\dagger}_{\vec{r}'+\vec{r}}c_{\vec{r}'}
e^{-i\int_{\vec{r}'}^{\vec{r}'+\vec{r}}\vec{A}\cdot{d\vec{l}}}$. Thus
in the presence of a laser, the Hamiltonian gets modified to,
\begin{eqnarray}
&&H = -t_h\sum_{k_x,n_y=1\ldots N_y/2}
\biggl[c^{\dagger}_{2n_y-1,k_x}c_{2n_y,k_x}\nonumber\\
&&\times \biggl(e^{-ik_x\delta_{1x}-i \vec{A}\cdot\vec{\delta}_1}+
e^{-ik_x\delta_{2x}-i \vec{A}\cdot\vec{\delta}_{2}}\biggr)+h.c.\biggr]\nonumber\\
&&+ \biggl[c^{\dagger}_{2n_y+1,k_x}c_{2n_y,k_x}e^{-i\vec{A}\cdot\vec{\delta}_3}+ h.c.\biggr]\biggl(1-\delta_{n_y=N_y/2}\biggr),
\label{Hedge1}
\end{eqnarray}
where $\vec{A}= f(t)A_0\left[\cos(\Omega t),-\sin(\Omega t)\right]$ is the circularly polarized laser, and
$f(t)$ is a function that determines how the
laser was switched on. In this paper we will study the effect of a sudden quench which
corresponds to $f(t) =\Theta(t)$, $\Theta(x)$ being the Heaviside
function. Physically this corresponds to time-evolving the ground state of graphene by the Hamiltonian $H(t>0^+)$.
Before the laser is switched on, the wavefunction corresponds to the
half-filled ground-state of graphene $|\Psi_{\rm in}\rangle$ which in Fock-space
we write as,
\begin{eqnarray}
|\Psi_{\rm in}\rangle = \prod_{k_x,l= {\rm occ}}\epsilon^{\dagger}_{l,k_x}|0\rangle.
\end{eqnarray}
Above, $l$ labels the exact eigenstates of graphene; there are $N_y$ of them for each $k_x$, and $l={\rm occ}$
denotes the lowest $N_y/2$ occupied levels. These exact eigenstates can be expanded in the position
basis as,
\begin{eqnarray}
\epsilon_{l,k_x}^{\dagger}=\sum_{n_y=1\ldots N_y}a_{k_x,l,n_y}c_{n_y,k_x}^{\dagger},
\end{eqnarray}
where $a_{k_x,l,n_y}$ are complex coefficients.
In the Heisenberg representation, the switching on of the laser implies the time-evolution,
\begin{eqnarray}
&&\frac{d}{dt}c_{n_y,k_x} = i\left[H(t), c_{n_y,k_x}(t)\right]\nonumber\\
&&=-i\sum_{n_y'}\biggl[h_{k_x}(t)\biggr]_{n_y,n_y'}c_{n_y',k_x}(t),
\end{eqnarray}
where we have denoted the full Hamiltonian as $H=\sum_{k_x,n_y,n_y'}c_{n_y,k_x}^{\dagger}\left[h_{k_x}\right]_{n_y,n_y'}c_{n_y',k_x}$.
The solution of the above equation is
\begin{eqnarray}
&&c_{n_y,k_x}(t) = \sum_{n_y'}\biggl[U_{k_x}(t,0)\biggr]_{n_y,n_y'}c_{n_y',k_x}(0)\label{ctimev},
\end{eqnarray}
where $U_{k_x}$ is an $N_y\times N_y$ unitary matrix representing the time-evolution operator,
\begin{eqnarray}
i\frac{\partial}{\partial t}U_{k_x}(t,t')= H(t) U_{k_x}(t,t')\label{te1},
\end{eqnarray}
and obeys $U_{k_x}(t,t)=1$.
At times after the complete switch-on of the laser ($t,t'> 0^+$ for the quench),
\begin{eqnarray}
U_{k_x}(t,t') \!=\!\sum_{\alpha=1\ldots N_y}e^{-i\epsilon_{k_x\alpha}(t-t')}|\phi_{k_x,\alpha}(t)\rangle\langle \phi_{k_x,\alpha}(t')|,
\label{te3}
\end{eqnarray}
$\epsilon_{k_x\alpha}$ being the quasi-energies, while $|\phi_{k_x,\alpha}(t)\rangle$ are the time-periodic Floquet
quasi-modes~\cite{Sambe73}. In our representation these are $N_y$ component vectors whose components
we label as $\phi_{k_x,\alpha,n_y}$. Note the distinction between the time-periodic quasi-modes, and the
exact solution of the time-dependent Schr\"odinger equation $|\psi_{k_x,\alpha}(t)\rangle$, where
the latter is obtained from the former by multiplication by a time-dependent phase,
\begin{eqnarray}
|\psi_{k_x,\alpha}(t)\rangle= e^{-i\epsilon_{k_x\alpha}t}|\phi_{k_x,\alpha}(t)\rangle.
\end{eqnarray}
We obtain the quasi-energies and quasi-modes using standard methods~\cite{Sambe73}. The time-periodicity of the
Floquet modes allows an expansion in Fourier components,
\begin{eqnarray}
|\phi_{k_x,\alpha}(t)\rangle = \sum_m e^{i m\Omega t}|\phi_{k_x,\alpha}^m\rangle\label{fexp}.
\end{eqnarray}
Eq.~\eqref{te1} implies that the Fourier components $\phi_{k_x,\alpha,n_y}^m$ obey,
\begin{eqnarray}
&&\sum_{m}\biggl[H^{n,m}+m\Omega \delta_{m,n}\biggr]|\phi_{k_x,\alpha}^m\rangle=\epsilon_{k_x\alpha}|\phi^n_{k_x,\alpha}\rangle,\\
&&H^{n,m}= \frac{\Omega}{2\pi}\int_0^{2\pi/\Omega}dte^{-i(n-m)\Omega t}H(t).
\end{eqnarray}
Thus the time-dependent problem of Eq.~\eqref{te1}, has been traded for a time-independent problem, albeit in an expanded Hilbert
space due to the Fourier expansion.
In practice, how many harmonics $|\phi^m\rangle$ need to be kept depends on the laser
parameters. Denoting the range of harmonics retained as $m=-M\ldots M$, we effectively need to solve
for the eigen-system of an $N_y(2M+1)\times N_y(2M+1)$-dimensional Hamiltonian.
High frequency and low amplitudes usually require retaining fewer harmonics than low frequency and large amplitudes.
For the four cases studied by us, we find good numerical convergence for $M=6$ for phases $P_{1,2,3}$, and $M=12$ for the phase $P_4$.
Once the Fourier components $|\phi^m\rangle$ are known, the Floquet modes at any time can be obtained from
Eq.~\eqref{fexp}, and the corresponding time-evolution operator can be determined from Eq.~\eqref{te3}.
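To illustrate the construction, a minimal NumPy sketch (ours; \texttt{h\_fourier} is a
hypothetical callback returning the $N_y\times N_y$ Fourier block $H^{n,m}$, which depends
only on $q=n-m$ and vanishes for $|q|$ beyond the harmonics present in $H(t)$):
\begin{verbatim}
import numpy as np

def floquet_spectrum(h_fourier, Omega, M):
    """Quasi-energies and modes from the extended Hilbert space.

    Builds the blocks H^{n,m} + m*Omega*delta_{n,m} for
    n, m = -M..M at fixed k_x and diagonalizes the result."""
    Ny = h_fourier(0).shape[0]
    dim = (2*M + 1) * Ny
    K = np.zeros((dim, dim), dtype=complex)
    for n in range(-M, M + 1):
        for m in range(-M, M + 1):
            blk = np.array(h_fourier(n - m), dtype=complex)
            if n == m:
                blk = blk + m * Omega * np.eye(Ny)
            K[(n+M)*Ny:(n+M+1)*Ny, (m+M)*Ny:(m+M+1)*Ny] = blk
    quasi, modes = np.linalg.eigh(K)   # K is Hermitian
    return quasi, modes
\end{verbatim}
The columns of \texttt{modes} stack the Fourier components $|\phi^m\rangle$, and the
eigenvalues are subsequently folded into a single FBZ, e.g.\ $(-\Omega/2,\Omega/2]$, to remove
the over-counting discussed above.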
A key physically relevant quantity entering in the expectation value of observables
is the occupation probability $O_{\alpha}(k_x)$ of the Floquet eigenstates labeled by $k_x,\alpha$.
For the quench this is simply given by overlaps between the Floquet eigenstate at $t=0$, and the
half-filled ground-state of graphene. To see this consider the simple case where initially only
a single mode of graphene labeled by $l$ is occupied. Thus the initial wave-function is
$|\psi_{k_x,\rm in}(0)\rangle = \epsilon_{l,k_x}^{\dagger}|0\rangle$. The quench implies, that from $t>0$, the state is
\begin{eqnarray}
&&|\Psi_{k_x}(t)\rangle = U_{k_x}(t,0)|\psi_{k_x,\rm in}(0)\rangle\nonumber\\
&&=\sum_{\alpha}e^{-i\epsilon_{k_x\alpha}t}|\phi_{k_x,\alpha}(t)\rangle\langle \phi_{k_x,\alpha}(0)|\epsilon_{l,k_x}^{\dagger}|0\rangle.
\end{eqnarray}
where in the last line we have used Eq.~\eqref{te3} for the time-evolution operator. What the above expression
implies is that the amplitude for being in the exact eigenstate of the
time-periodic Hamiltonian $|\psi_{k_x,\alpha}(t)\rangle=e^{-i\epsilon_{k_x\alpha}t}|\phi_{k_x,\alpha}(t)\rangle$ (which is the Floquet mode multiplied
by a phase), is a time-independent quantity and simply given by the overlap of the initial state and the exact eigenstate
at the time when the laser was switched on. We chose this time to be $t=0$. Thus the probability of being in the
exact eigenstate $k_x,\alpha$ is $|\langle \phi_{k_x,\alpha}(0)|\epsilon_{l,k_x}^{\dagger}|0\rangle|^2$.
Accounting for the fact that initially not just one mode $l$, but many modes are occupied, the occupation
probability of the $\alpha$ quasi-energy level is simply obtained from summing over all the initially occupied states,
\begin{eqnarray}
&&O_{\alpha}(k_x) = \sum_{l={\rm occ}}|\langle \phi_{k_x,\alpha}(0)|\epsilon^{\dagger}_{l,k_x}|0\rangle|^2\nonumber\\
&&=\! \!\!\!\sum_{l={\rm occ}, n_y,n_y'}\!\!\!\biggl[\phi_{k_x,\alpha, n_y}^*(0)a_{k_x,l,n_y}\biggr]
\!\!\biggl[\phi_{k_x,\alpha, n_y'}(0)a^*_{k_x,l,n_y'}\biggr].
\label{Okxa}
\end{eqnarray}
One way to understand the meaning of these occupation probabilities is that since the final Hamiltonian is quadratic, it has
many conserved quantities, which by definition do not evolve in time. $O_{\alpha}(k_x)$ should be viewed as these conserved
quantities. As shown further below, these are also the natural quantities entering in physical observables.
It is also useful to study the momentum averaged occupation of the Floquet levels,
\begin{eqnarray}
O_{\alpha}= \frac{1}{N_x}\sum_{k_x}O_{\alpha}(k_x)\label{Oa},
\end{eqnarray}
where $N_x$ is the number of points in the $\hat{x}$ direction.
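Given the Floquet modes at $t=0$, assembled from their Fourier components via
Eq.~\eqref{fexp} as $\phi(0)=\sum_m\phi^m$, and the occupied graphene eigenvectors,
Eqs.~\eqref{Okxa} and \eqref{Oa} amount to a few lines of linear algebra. A minimal NumPy
sketch (ours; the array names are hypothetical):
\begin{verbatim}
import numpy as np

def occupations(phi0, a_occ):
    """O_alpha(k_x) of Eq. (Okxa) at fixed k_x.

    phi0:  (N_y, N_y) array, column alpha = phi_{k_x,alpha}(0);
    a_occ: (N_y, N_y/2) array, column l = occupied graphene
           eigenvector a_{k_x,l}."""
    overlaps = phi0.conj().T @ a_occ   # <phi_alpha(0)|eps_l^+|0>
    return (np.abs(overlaps)**2).sum(axis=1)

# Eq. (Oa): average occupations(...) over all momenta k_x.
\end{verbatim}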
We are interested in the current density operator as this directly measures the nature of the chiral eigenstates
of the periodically driven system. In order to define a current operator, we apply a weak vector potential $\vec{A}_{\rm pr}$,
and expand the Hamiltonian to leading order in it. Thus,
\begin{eqnarray}
&&H(t)\rightarrow H(t) -i\sum_{rr'ab}c_{r'+r,a}^{\dagger}h_{r'+r,r'}^{ab}(t)c_{r',b}\nonumber\\
&&\times \vec{r}\cdot \vec{A}_{\rm pr}(r'+\frac{r}{2}),
\end{eqnarray}
where $a,b$ is the graphene sublattice index.
For simplicity, let us say that the vector potential is spatially uniform and applied along the
$x$-direction, then $H(t) = H(A_{\rm pr}=0) + \hat{J}_x A_{\rm pr}$,
where $\hat{J}_x$ is the current operator in the $x$-direction,
\begin{eqnarray}
\hat{J}_x= -i\sum_{r'rab}r_x c_{r'+r,a}^{\dagger}h_{r'+r,r'}^{ab}(t)c_{r',b}.
\end{eqnarray}
Because the system is spatially uniform along the $x$-direction, we perform a Fourier transform and write,
\begin{eqnarray}
&&\hat{J}_x=- i\frac{1}{N_x}\sum_{k_{1x}k_{2x}ab,r,r'}c_{k_{1x},r_y'+r_y,a}^{\dagger}c_{k_{2x},r_y',b}r_x\nonumber\\
&&\times e^{i{k}_{2x}r'_x - ik_{1x}(r'_x+r_x)}h_{r'+r,r'}^{ab}(t).
\end{eqnarray}
Since $h^{ab}_{r'+r,r'}$ depends only on $r$, the sum on $r_x'$ gives $k_{1x}=k_{2x}$. Then,
\begin{eqnarray}
&&\hat{J}_x
=\frac{1}{N_x}\sum_{k_{x},a,b,r_y',r_y}c_{k_{x},r_y'+r_y,a}^{\dagger}c_{k_{x},r_y',b}\nonumber\\
&&\times \partial_{k_x}\sum_{r_x}e^{- ik_{x}r_x}h_{r_x,r_y}^{ab}(t),\nonumber\\
&&=\frac{1}{2}\sum_{n_y=1\ldots N_y/2}\biggl(\hat{J}_{2n_y-1} + \hat{J}_{2n_y}\biggr),
\end{eqnarray}
where $\hat{J}_{n_y}$ is the current density at site $n_y$. Note that the current density in a unit-cell is the average of the
current density from the $A$ ($J_{2n_y-1}$) and $B$ sub-lattice ($J_{2n_y}$).
Using Eq.~\eqref{Hedge1}, current densities from the $A$ and $B$ sub-lattice are equal, and given by
\begin{eqnarray}
&&\hat{J}_{2n_y-1}
=\frac{t_h a}{N_x}\sqrt{3}\sum_{k_x}\biggl[c^{\dagger}_{2n_y-1,k_x}c_{2n_y,k_x}e^{-i\frac{A_0a}{2}\sin(\Omega t)}\nonumber\\
&&\times \sin\biggl(\frac{\sqrt{3}a}{2}\biggl\{k_x+A_0\cos(\Omega t)\biggr\}\biggr)+h.c.\biggr]\nonumber\\
&&= \hat{J}_{2 n_y}\label{curr1}.
\end{eqnarray}
It is convenient to expand the time-periodic matrix elements of the current operator in the Fourier basis,
\begin{eqnarray}
&&e^{i\frac{A_0a}{2}\sin(\Omega t)}\sin\biggl[\frac{\sqrt{3}a}{2}
\biggl\{k_x+A_0\cos(\Omega t)\biggr\}\biggr]\nonumber\\
&&= \sum_{m}e^{-im\Omega t}\tilde{J}_{-m}\left(A_0a\right)\sin\biggl(\frac{\sqrt{3}k_x a}{2}-\frac{m\pi}{3}\biggr),
\end{eqnarray}
where $\tilde{J}_m$ are the Bessel functions.
Thus the current density operator is
\begin{eqnarray}
&&\hat{J}_{2n_y-1}= \nonumber\\
&&\sqrt{3}\frac{t_h a}{N_x}\sum_{k_x,m}\biggl[\tilde{J}_{-m}\left(A_0a\right)\sin\biggl(\frac{\sqrt{3}k_x a}{2}-\frac{m\pi}{3}\biggr)\biggr]
\nonumber\\
&&\times
\biggl[\biggl(e^{i m\Omega t}c^{\dagger}_{2n_y-1,k_x}c_{2n_y,k_x}+h.c\biggr)\biggr]\label{jac}.
\end{eqnarray}
It is interesting to note that if one retains only the $m=0$ harmonic, the current density operator is the same as that for
the undriven case, but with the effective hopping amplitude
$t_h$ renormalized to $t_h \tilde{J}_0(A_0a)$ by the laser. For non-zero $m$, the above expression for the current operator highlights that the
electron tunneling between neighboring sites can be accompanied by $m$-photon absorption or emission processes, with
$t_h \tilde{J}_m(A_0a)$ controlling the amplitude of such processes.
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig2.pdf}\\
\caption{
Spectrum and occupation probabilities due to a quench for the case $P_1$ where
$A_0a=0.5,\Omega=10t_h$, and the Chern number is $C=1$. The system
supports a pair of chiral edge modes at the center of the FBZ. The area of the circles are proportional to the occupation probability
$O_{\alpha}(k_x)$.
}
\label{fig2}
\end{figure}
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig3.pdf}\\
\caption{
Spectrum and occupation probabilities due to a quench for the case $P_2$ where $A_0a=1.5,\Omega=5t_h$,
and the Chern number is $C=1$. The system
supports a pair of chiral edge modes at the center of the FBZ. The area of the circles are proportional to the occupation probability
$O_{\alpha}(k_x)$.
}
\label{fig3}
\end{figure}
The expectation value of the current density operator at a time $t$ after the quench is
\begin{eqnarray}
&&J_{2n_y-1}(t)= \langle \Psi_{\rm in}|{\cal \tilde{T}}e^{i\int_0^t dt' H(t')}\hat{J}_{2n_y-1}{\cal T}e^{-i\int_0^t dt' H(t')}|\Psi_{\rm in}\rangle\nonumber\\
&&= \sqrt{3}\frac{t_h a}{N_x}\sum_{k_x,m}\biggl[\tilde{J}_{-m}\left(A_0a\right)\sin\biggl(\frac{\sqrt{3}k_x a}{2}-\frac{m\pi}{3}\biggr)\biggr]\nonumber\\
&&\times
\biggl[\langle \Psi_{\rm in}|\biggl(e^{i m\Omega t}c^{\dagger}_{2n_y-1,k_x}(t)c_{2n_y,k_x}(t)+h.c\biggr)|\Psi_{\rm in}\rangle\biggr].
\label{jac1}
\end{eqnarray}
Above ${\cal T}, {\cal \tilde{T}}$ are the time and anti-time ordering operators respectively, and the time-dependent behavior of $c_{2n_y,k_x}(t)$
is obtained from Eq.~\eqref{ctimev}.
In Eq.~\eqref{jac1} we need to evaluate expectation values of the
kind $n_{j_1,j_2,k_x}(t) = \langle \Psi_{\rm in}|c_{j_1,k_x}^{\dagger}(t)c_{j_2,k_x}(t)|\Psi_{\rm in}\rangle$,
which using the time-evolution operator may be written as,
\begin{eqnarray}
&&n_{j_1,j_2,k_x}(t) = \langle \Psi_{\rm in}|c_{j_1,k_x}^{\dagger}(t)c_{j_2,k_x}(t)|\Psi_{\rm in}\rangle\nonumber\\
&&=\sum_{j'j''}\biggl[U_{k_x}(t,0)\biggr]_{j_2j'}\biggl[U_{k_x}(0,t)\biggr]_{j''j_1} \nonumber\\
&&\times \langle \Psi_{\rm in}| c_{j'',k_x}^{\dagger}(0)
c_{j',k_x}(0)|\Psi_{\rm in}\rangle\nonumber\\
&&=\sum_{j'j'', l={\rm occ}}
\biggl[U_{k_x}(t,0)\biggr]_{j_2j'}\biggl[U_{k_x}(0,t)\biggr]_{j''j_1} a_{k_x,l,j'}a^*_{k_x,l,j''}\nonumber\\
&&= \sum_{j',j'',\alpha\beta,l={\rm occ}} e^{-i\epsilon_{k_x\alpha} t + i\epsilon_{k_x\beta} t}\phi_{k_x,\alpha,j_2}(t) \phi_{k_x,\alpha,j'}^*(0)\nonumber\\
&&\times \phi_{k_x ,\beta,j''}(0)\phi^*_{k_x,\beta,j_1}(t)a_{k_x,l,j'}a^*_{k_x,l,j''}\label{nij}.
\end{eqnarray}
At long times, we need only keep $\alpha=\beta$ terms, as the $\alpha \neq \beta $ terms oscillate in time with different frequencies
for the different momenta $k_x$. Hence, on summing over
$k_x$, the $\alpha\neq \beta $ terms vanish as a power law due to dephasing.
Thus at long times, after the dephasing has set in, the current density is given by the ``diagonal ensemble'' corresponding to keeping only $\alpha=\beta$,
\begin{eqnarray}
&&J_{2n_y-1}(t\rightarrow \infty)= \nonumber\\
&&\sqrt{3}\frac{t_h a}{N_x}\sum_{k_x,m,j',j'',\alpha,l={\rm occ}}
\biggl[e^{i m\Omega t}\phi_{k_x,\alpha,2n_y}(t)\phi^*_{k_x,\alpha,2n_y-1}(t)\nonumber\\
&&\times \phi_{k_x,\alpha, j'}^*(0)\phi_{k_x,\alpha, j''}(0)a_{k_x,l,j'}a^*_{k_x,l,j''}+h.c.\biggr]\nonumber\\
&&\times \biggl[\tilde{J}_{-m}\left(A_0a\right)\sin\biggl(\frac{\sqrt{3}k_x a}{2}-\frac{m\pi}{3}\biggr)\biggr].
\end{eqnarray}
This result is still oscillatory over the period of the laser on account of the time periodicity of the Floquet modes.
Expanding the Floquet modes in their Fourier basis $\phi(t)=\sum_me^{i m \Omega t}\phi^m$, and time averaging over one cycle of the
laser, we find the quench current density to be
\begin{eqnarray}
J_{n_y}(t\rightarrow \infty)= \frac{1}{N_x}\sum_{k_x,\alpha={1\ldots N_y} }O_{\alpha}(k_x)j_{\alpha,n_y}(k_x)\label{currq},
\end{eqnarray}
where $j_{\alpha,n_y}(k_x)$ is the current density carried by an individual Floquet eigenstate labeled by $\alpha, k_x$ and
time averaged over a laser cycle,
\begin{eqnarray}
&&j_{\alpha,2n_y-1}(k_x)=\nonumber\\
&&\sqrt{3}t_h a\sum_{m,n}\biggl[\tilde{J}_{-m}\left(A_0a\right)\sin\biggl(\frac{\sqrt{3}k_x a}{2}-\frac{m\pi}{3}\biggr)\biggr]
\nonumber\\
&&\times 2{\rm Re}\biggl[\phi_{k_x,\alpha,2n_y}^n\biggl(\phi_{k_x,\alpha,2n_y-1}^{n+m}\biggr)^* \biggr],\label{curres}\\
&& j_{\alpha,2n_y}(k_x)= j_{\alpha,2n_y-1}(k_x).\label{curres2a}
\end{eqnarray}
Above we see the key role played by the occupation $O_{\alpha}(k_x)$: the
current density at site $n_y$ is the sum of the current densities $j_{\alpha,n_y}(k_x)$
of all Floquet eigenstates $(\alpha,k_x)$, weighted by the occupations of these states.
In what follows we will discuss not only the quench current density defined in Eq.~\eqref{currq}, but also the current density of a single
Floquet eigenstate $\alpha$ when the occupation is the same for all $k_x$. This is defined as
\begin{eqnarray}
J_{\alpha,n_y}= \frac{1}{N_x}\sum_{k_x}j_{\alpha,n_y}(k_x), \label{curres2}
\end{eqnarray}
and is simply the momentum average of the current density operator of a Floquet eigenstate.
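As an illustrative sketch (not the implementation used for our results), Eqs.~\eqref{curres} and \eqref{curres2} can be evaluated once the Fourier components $\phi^n$ of a Floquet mode are known. Here we assume \texttt{phi} maps the Fourier index $n$ to the corresponding length-$2N_y$ component at fixed $k_x$, and we take $\tilde{J}$ to be the Bessel function of the first kind:
\begin{verbatim}
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def eigenstate_current(phi, kx, A0a, t_h=1.0, a=1.0, m_max=4):
    """Time-averaged current density of one Floquet eigenstate at fixed k_x.

    phi : dict {n: length-2*N_y complex array}, Fourier components phi^n of
          the Floquet mode, with sites ordered (2n_y - 1, 2n_y, ...).
    Returns j[n_y] on the odd sites; the even sites carry the same value.
    """
    n_y = len(next(iter(phi.values()))) // 2
    j = np.zeros(n_y)
    for m in range(-m_max, m_max + 1):
        geom = jv(-m, A0a) * np.sin(np.sqrt(3) * kx * a / 2 - m * np.pi / 3)
        for n in phi:
            if n + m not in phi:
                continue  # truncate the Fourier sum at the stored modes
            even = phi[n][1::2]      # amplitudes on sites 2*n_y
            odd = phi[n + m][0::2]   # amplitudes on sites 2*n_y - 1
            j += geom * 2.0 * np.real(even * odd.conj())
    return np.sqrt(3.0) * t_h * a * j

def momentum_averaged_current(phis, kxs, A0a):
    """J_{alpha, n_y} of Eq. (curres2): average of j over the k_x grid."""
    return np.mean([eigenstate_current(p, k, A0a)
                    for p, k in zip(phis, kxs)], axis=0)
\end{verbatim}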
Note that the current density is not what is directly measured in transport such as Hall response. For the latter,
a proper Kubo formula or Landauer formalism approach needs to be employed.
Employing Kubo formalism, one finds that the Hall current is determined by topological properties such as the time-averaged Berry
curvature, but now weighted by the occupation probabilities of the quasi-energy bands~\cite{Dehghani15a}.
The average current density on the other hand is far more sensitive to microscopic details, and can be probed using other methods
such as sensitive magnetometers like SQUIDs that respond to the local magnetization generated by
local currents~\cite{Bluhm09,Shibata15}.
\section{Results}\label{results}
The laser frequency and amplitude can be used to drive a series of topological phase transitions, and in this paper we focus
on the four topological phases summarized in Fig.~\ref{fig1}. Of these four phases, $P_1$ corresponds to a high-frequency
off-resonant laser where the laser frequency is larger than the bandwidth of graphene ($ \Omega > 6 t_h$). The
other three phases correspond to low-frequency resonant lasers ($ \Omega < 6 t_h$).
We first discuss the occupation probabilities for these four cases separately below, followed by a discussion of the
current densities.
\subsection{Occupation probability and bulk-boundary correspondence
in transport}
The phase $P_1$ corresponds to an off-resonant laser with parameters
$A_0a=0.5,\Omega = 10 t_h$ and Chern number $C=1$. The quasi-energies for this case are shown in Fig.~\ref{fig2}, and
include a pair of
chiral edge modes at the center of the FBZ. Thus for this case, the Chern number equals the number of chiral edge modes.
Key quantities are the occupation probabilities of the edge and bulk modes. For a quench switch-on protocol these
are given by Eq.~\eqref{Okxa}, and are quite simply determined by the
overlap of the Floquet modes at $t=0$ with the occupied states of
graphene.
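As a minimal numerical sketch of this overlap (with illustrative array names, not the code used in this work), the occupations at a fixed $k_x$ follow from the matrix of inner products between the Floquet modes at $t=0$ and the occupied graphene eigenstates:
\begin{verbatim}
import numpy as np

def occupation_probabilities(phi0, psi_occ):
    """Quench occupations O_alpha(k_x) of the Floquet levels.

    phi0    : (N_y, N_y) complex array; column alpha is phi_alpha(t=0).
    psi_occ : (N_y, N_occ) complex array; columns are the occupied
              graphene eigenstates at the same k_x.
    Returns O_alpha = sum_l |<phi_alpha(0)|psi_l>|^2.
    """
    overlaps = phi0.conj().T @ psi_occ
    return np.sum(np.abs(overlaps) ** 2, axis=1)

# Sanity check: an orthonormal Floquet basis and a half-filled set of
# occupied states give occupations that sum to N_y / 2 (here N_y = 8).
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))
assert np.isclose(occupation_probabilities(q, q[:, :4]).sum(), 4.0)
\end{verbatim}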
The occupation probabilities of the Floquet levels for $P_1$ are indicated by circles in Fig.~\ref{fig2},
with the area of the circles proportional to the occupation $O_{\alpha}(k_x)$. The effect of the quench on the
occupation can also be summarized by simply taking the momentum average as defined in Eq.~\eqref{Oa},
and as plotted in the top left panel of Fig.~\ref{fig6}.
These figures
show that for the off-resonant laser, to a good degree, only the lower Floquet band is occupied even though the
laser was switched on as a quench. Fig.~\ref{fig6} in fact shows that the distribution function for $P_1$ looks like a
zero temperature Fermi-function of a half-filled state.
For all the phases studied here, one finds the following symmetry between the occupation probabilities of quasi-levels
with quasi-energies of opposite sign,
\begin{eqnarray}
O_{\alpha}(k_x) + O_{N_{y} - \alpha + 1}(k_x)= 1.\label{DistFuncCompl}
\end{eqnarray}
We prove the above relation in Appendix~\ref{app1}, where we show that it arises as a
consequence of the particle-hole symmetry of the Hamiltonian before the quench and of the Floquet Hamiltonian.
The momentum average of the above expression is $O_{\alpha} + O_{N_{y} - \alpha + 1}= 1$, a behavior which is clearly reflected
in all the panels of Fig.~\ref{fig6}.
In Ref.~\onlinecite{Dehghani15a} the effect of a quench switch-on of the laser was studied on bulk graphene, where there are no edge modes.
Several topological phases were studied,
and the Hall conductance from a Kubo formula approach was computed.
For the $P_1$ phase, the Hall conductance was found to
show a value close to the maximum value of $e^2/h$. This result is consistent
with our observation here that for the off-resonant laser, the quench occupation is very close
to that of an ideal half-filled Floquet band.
The results of Ref.~\onlinecite{Dehghani15a} for an extended system with no boundaries,
and our results here for a finite system which hosts edge states show
signatures of the bulk-boundary correspondence of TIs, where the Hall response
of a bulk system can alternatively be described in terms of transport by chiral edge states when an
infinitesimal chemical potential difference is applied to them.
This is because we find here that for $P_1$, the edge states in
the center of the FBZ survive the quench, and are occupied with a very low effective temperature.
Thus this pair will contribute to a conductance of ${\cal O}(e^2/h)$ within a Landauer formalism
that assumes there is no inelastic scattering. Any deviations from this value
are due to the small, albeit non-zero, excitations of the bulk
states, which have the opposite chirality to the edge states (see further discussion below).
The Hall response in closed systems without leads
can be measured experimentally in cold-atomic gases along the lines of Ref.~\onlinecite{Esslinger14}
where an application of an external potential gradient leads to a transverse drift of atoms due to
a non-zero Chern number of the atomic bands.
We make similar observations for the phase $P_2$
which now corresponds to a resonant laser where
$A_0a=1.5, \Omega = 5 t_h, C=1$.
The spectrum is shown in Fig.~\ref{fig3}. This phase is thus similar to phase $P_1$
in being a conventional
Chern insulator, where the Chern number equals the number of chiral edge modes.
An interesting observation is the asymmetry in $k_x$, i.e., $O_{\alpha}(k_x)\neq O_{\alpha}(-k_x)$. This exists even for $P_1$,
but is less visible there.
The asymmetry in the occupation was also noticed in Ref.~\onlinecite{Dehghani14} where the effect of a
quench switch-on of the laser was studied on a bulk system (with no edge modes).
The asymmetry arises because the laser breaks
inversion symmetry. For our case in particular, at the switch-on time $t=0$, the laser is pointed entirely along the $x$-direction,
thus breaking the inversion symmetry in $\hat{x}$.
Both the $O_{\alpha}(k_x)$ in Fig.~\ref{fig3} and its momentum average
in the top right panel of Fig.~\ref{fig6} show that the $P_2$
case corresponds to a slightly higher effective temperature in comparison to
phase $P_1$, with both lower ($\epsilon<0$) and upper ($\epsilon>0$)
edge modes getting occupied, and a larger fraction of
the bulk states being occupied. Yet the bulk excitation density is still quite low, as in $P_1$.
A bulk Kubo formula computation for the dc Hall conductance in a spatially extended system for this case revealed~\cite{Dehghani15a} a
result of ${\cal O}(e^2/h)$, consistent with the
low excitation density of bulk states generated by the quench even for this phase. We would arrive at the same
conclusion if we were to alternatively attribute the entire Hall response to the chiral edge states studied here,
with the bulk excitations degrading the maximum possible value by a small amount.
While it is simple to understand why a pair of counter-propagating edge states at zero effective temperature (such
as those encountered so far) will give a linear response conductance of ${\cal O}(e^2/h)$,
we now briefly explain why a pair of counter-propagating edge states at infinite effective temperature will give
zero contribution to dc transport. If we think of the dc transport as a linear response to a small chemical potential
difference, a net current flows because the population of say the left mover is increased slightly over the right mover.
If instead the pair were such that all states were uniformly occupied, a small voltage bias would not change the
net occupation between left and right movers, leading to zero conductance.
This simple picture will come in handy when understanding the bulk-boundary correspondence in the phases $P_{3,4}$ below.
The phase $P_3$ is also a resonant laser corresponding to
$A_0a=0.5, \Omega = 5 t_h, C=3$, but it is very different from the resonant case $P_2$ discussed above.
The spectrum for $P_3$ is shown in Fig.~\ref{fig4}, and reveals the unusual properties of the
Floquet Chern insulator, where anomalous edge states
appear at the Floquet zone boundaries. The chiralities of these edge modes are shown schematically in Fig.~\ref{fig1},
and explicitly via the current densities in Fig.~\ref{fig8}.
Thus for this case the Chern number now equals the difference between the number of chiral
edge modes above and below the band, where there are two right movers above and one left mover below the
band on one of the two spatial boundaries. A clear signature of the laser
resonance is seen in both Fig.~\ref{fig4} and the lower left panel of Fig.~\ref{fig6}.
The resonance shows up as a selective depletion of the lower Floquet band, and the corresponding
selective occupation of the upper band.
Note that the points in $k_x$ where the occupation changes suddenly due to the resonance condition, are
also the points in $k_x$ at which anomalous edge states appear. This is because the laser resonance effectively produces a band
crossing at $|k_x|a\sim 0.5$ in Fig.~\ref{fig4}. This band crossing is accompanied by a change in the Chern number and a corresponding
change in the number of edge modes.
Thus phase $P_3$ is special in that in the same phase one has edge modes that arise due to
off-resonant and resonant processes.
The pair of edge modes located at
$\epsilon=0$ have the same origin as in the high frequency laser (phase $P_1$) as they arise due to off-resonant virtual processes,
while the two pairs of anomalous edge modes arise due to resonant processes. This is also reflected in the fact that the
edge modes at $\epsilon=0$ are occupied at a very low effective temperature (see the sharp step at $\alpha=N_y/2$ in
lower left panel of Fig.~\ref{fig6}), while the anomalous edge modes are at a much higher effective temperature.
Note that the anomalous edge modes are not clearly visible in Fig.~\ref{fig4}, as the quasi-energy gaps in which they
live are rather small. However, the corresponding current densities carried by them are shown in Fig.~\ref{fig8}, and are indeed
localized at the boundary.
Interestingly, for a quench in a bulk system, the case $P_3$ showed a Hall conductance of
approximately $e^2/h$ in Ref.~\onlinecite{Dehghani15a}. This is
far from the value of $3 e^2/h$ expected for the Hall conductance
if only the lower Floquet band were fully occupied.
This came about because in the bulk computation of Ref.~\onlinecite{Dehghani15a}, and as can also be seen here in Fig.~\ref{fig4},
the resonance significantly populated portions of the upper Floquet band. Since the upper Floquet band
has the opposite Berry curvature to the lower one, this reduced the Hall conductance to roughly
$1/3$ of its maximum value of $3 e^2/h$ in Ref.~\onlinecite{Dehghani15a}.
As a signature of the bulk-boundary correspondence
in topological systems, a dc Hall conductance of
$\sim 1 e^2/h$ is consistent with our observation here that for the phase $P_3$, of the 3 pairs of edge states likely
to participate in transport, 2 of them, namely the ones that reside at the boundaries of the FBZ, are at a much higher effective temperature,
as they arise due to resonant processes.
Thus these two pairs contribute relatively little to dc transport. Most of the dc transport in an edge state picture
comes from the off-resonant pair of edge states located at the center of the FBZ. Any further deviations from $1 e^2/h$
are due to residual bulk excitations that have the opposite chirality to the edge states.
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig4.pdf}\\
\caption{
Spectrum and occupation probabilities due to a quench for the case $P_3$ where
$A_0a=0.5,\Omega=5t_h$, and the Chern number is $C=3$. The system
supports a pair of chiral edge modes at the center of the FBZ, and two pairs of chiral edge modes on the
Floquet zone boundaries (see Fig.~\ref{fig1} and Fig.~\ref{fig8}).
The areas of the circles are proportional to the occupation probability
$O_{\alpha}(k_x)$.
}
\label{fig4}
\end{figure}
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig5.pdf}\\
\caption{
Spectrum and occupation probabilities due to a quench for the case $P_4$ where
$A_0a=10,\Omega=0.5t_h$, and the Chern number is $C=0$. The system
supports two pairs of chiral edge modes at the center of the FBZ, and two pairs of chiral edge modes on the
Floquet zone boundaries (see Fig.~\ref{fig1} and Fig.~\ref{fig8}).
The areas of the circles are proportional to the occupation probability $O_{\alpha}(k_x)$.
}
\label{fig5}
\end{figure}
The third resonant case corresponds to phase $P_4$ with laser
parameters $A_0a =10, \Omega=0.5 t_h,C=0 $. As the spectrum in Fig.~\ref{fig5} shows,
this case also highlights a peculiarity of Floquet Chern insulators
in that it is possible to have bands with zero Chern number, and yet
topological edge states appear above and below the quasi-band. For this case there are 4 pairs of edge states with the same chirality,
with two of these residing
above the Floquet band, and two residing below the Floquet band.
The lower right panel of Fig.~\ref{fig6} shows that $P_4$ is like an infinite-temperature state, as
the occupation probabilities
of all the levels are almost the same. This is not surprising given that the laser frequency is much smaller
than the bandwidth, leading to
many pockets of resonances. These pockets are not sharp as in phase $P_3$, but are smoothened out by the large laser
amplitude, which increases the matrix elements for multi-photon processes.
The dc and optical Hall conductance for a bulk system for the same laser parameters as $P_4$
was studied in Ref.~\onlinecite{Dehghani15b}. It was found that a small albeit non-zero Hall conductance is possible for a
laser quench, even though
an ideal occupation of the Floquet bands would lead to a zero Hall conductance. This non-zero conductance comes about because of the
nonequilibrium occupation of the Floquet bands, each of which has a net chirality (see further discussion below),
leading to a non-zero Hall
response. The magnitude of the Hall response is much smaller than $e^2/h$, and consistent with all 4 pairs of edge modes being
occupied at an infinite effective temperature.
This qualitative similarity between the results in this paper
and the bulk computations of Refs.~\onlinecite{Dehghani15a,Dehghani15b} based on
the Kubo formalism is a signature of the bulk-boundary correspondence that
exists in topological systems, and appears to persist even out of equilibrium.
\begin{figure}
\includegraphics[height=9cm,width=9cm,keepaspectratio]{fig6.pdf}\\
\caption{
Quench occupation probabilities of the Floquet quasi-energy levels averaged over the momenta $k_x$.
Clockwise from top left, phases $P_1$, $P_2$, $P_4$ and $P_3$ (see Fig.~\ref{fig1}).
$P_4$ has an almost infinite effective temperature, while $P_1$ has an almost zero effective temperature.
}
\label{fig6}
\end{figure}
\begin{figure}
\includegraphics[height=9cm,width=8.8cm,keepaspectratio]{fig7.pdf}\\
\caption{Upper panel: Case $P_1$ where $A_0a=0.5,\Omega=10t_h,C=1$. Lower-panel: Case $P_2$ where $A_0a=1.5,\Omega=5t_h,C=1$.
Both correspond to strip width $N_y=40$.
Current densities of three exact Floquet eigenstates: one from the Floquet band edge ($\alpha=1$), another from
the center of the Floquet band ($\alpha=N_y/4=10$) and a third from the edge state located at the center of the
FBZ ($\alpha=N_y/2=20$). The current
densities from the two bulk states $\alpha=1,10$ are much smaller than that from the edge state, and are also of the
opposite chirality to that of the edge state.
}
\label{fig7}
\end{figure}
\subsection{Current density}
Finally we turn to the current density, a key quantity that directly probes the chiral nature of the system.
$J_{\alpha,n_y}$,
defined in Eq.~\eqref{curres2}, is the current density of the $\alpha$ Floquet level given that all $k_x$ states are
equally occupied. In contrast, the quench current density given
in Eq.~\eqref{currq} is the current density of the Floquet eigenstates, but weighted by
the occupation probabilities $O_{\alpha}(k_x)$ of these states.
We first discuss the current densities of the Floquet eigenstates before we turn to the quench current density
which has contributions from all Floquet eigenstates.
It is first useful to make some general observations. For a fully occupied band,
the current density is zero. This manifests in many ways; for example, the two bulk bands have opposite
Chern numbers~\cite{Dehghani15a}, so
that when both bands are fully occupied, there is no Hall response. For our system with edges, this implies
\begin{eqnarray}
\sum_{\alpha=1\ldots N_y} J_{\alpha,n_y}=0.
\end{eqnarray}
An important point to note is that even though the instantaneous $H(t)$
has no particular symmetry other than particle-hole symmetry, the Floquet Hamiltonian shows additional symmetries such as inversion symmetry
when the Floquet modes are averaged over one cycle of the laser (see Appendices~\ref{app1},~\ref{app2}).
This is also seen by noting that in the high-frequency limit,
a Magnus expansion of the Floquet Hamiltonian yields the Haldane model with
particle-hole symmetry, inversion symmetry, but broken time-reversal symmetry~\cite{Oka09,Kitagawa10}.
As shown in Appendix~\ref{app3}, a consequence of these symmetries is that the current density carried
by a Floquet eigenstate time-averaged over a laser cycle is
exactly anti-symmetric in position,
\begin{eqnarray}
J_{\alpha,n_y}=-J_{\alpha, N_y-n_y+1}.\label{CurrentInv}
\end{eqnarray}
Furthermore, there exists an exact symmetry between current densities from lower
($1\leq \alpha \leq N_y/2$) and upper ($ 1+N_y/2\leq \alpha \leq N_y $) Floquet bands,
\begin{eqnarray}
J_{\alpha,n_y} = J_{N_y-\alpha+1,n_y}. \label{CurrentChiral}
\end{eqnarray}
\begin{figure}
\includegraphics[height=9cm,width=8.7cm,keepaspectratio]{fig8.pdf}\\
\caption{Upper panel: Case $P_3$ where $A_0a=0.5,\Omega=5t_h,C=3$ with strip width $N_y=200$.
Lower-panel: Case $P_4$ where $A_0a=10,\Omega=0.5t_h,C=0$
with strip width $N_y=100$. For each panel,
current densities of three exact Floquet eigenstates, one state from the Floquet band edge ($\alpha=1$), another from
the Floquet band center ($\alpha=N_y/4=50 (P_3), 25 (P_4)$) and a third from the edge state(s) located at the center of the
FBZ ($\alpha=N_y/2=100 (P_3), 50 (P_4)$) are shown.
The current density from the bulk state $\alpha=N_y/4$ is much smaller than that from the edge states
($\alpha=1,N_y/2$). The current density
is plotted over only half the strip, since on the other half it is anti-symmetric to the first half.
}
\label{fig8}
\end{figure}
The implication of this for the Floquet states is that for an exactly half-filled Floquet band, the current density
vanishes,
\begin{eqnarray}
\sum_{\alpha=1\ldots N_y/2} J_{\alpha,n_y}=0.
\end{eqnarray}
\begin{figure}
\includegraphics[height=9cm,width=8.8cm,keepaspectratio]{fig9.pdf}\\
\caption{
Current density flowing in the $\hat{x}$ direction, plotted as a function of $y\equiv n_y$
at long times after the laser quench.
Clockwise from top left:
$P_1\equiv \left[A_0a=0.5,\Omega = 10 t_h,C=1\right]$,
$P_2\equiv \left[A_0a=1.5, \Omega = 5 t_h,C=1\right]$, $P_4\equiv \left[A_0a=10, \Omega=0.5 t_h,C=0\right]$, and
$P_3 \equiv \left[A_0a=0.5,\Omega=5t_h, C=3\right]$. The quench current is symmetric in $n_y$, hence
for the lower two panels, the current density is plotted over only half the strip.
For a half-filled Floquet band, the current density is zero due to particle-hole symmetry.
}
\label{fig9}
\end{figure}
Figs.~\ref{fig7} and~\ref{fig8} show the current densities (all our results are time-averaged over
a laser cycle) for three Floquet eigenstates, one corresponding to the
lowest Floquet level $\alpha=1$,
the second a level from
the middle of the lower band $\alpha=N_y/4$, and the third being the edge mode at $\alpha=N_y/2$. While Fig.~\ref{fig7} is
for the two topological phases $P_{1,2}$ which correspond to a conventional Chern insulator,
Fig.~\ref{fig8} is for the cases $P_{3,4}$
where anomalous edge states appear at the
Floquet zone boundaries. For Fig.~\ref{fig8} therefore, $\alpha=1$ is also an edge state.
Note that since the edge-state localization length for phases $P_{3,4}$ is quite long, we had
to work with longer cylinders so as to prevent the edge states at the boundaries of the
FBZ from hybridizing.
As noted above, the current density of the exactly half-filled case is zero. The way this cancellation
comes about for the Chern insulators $P_{1,2}$ (Fig.~\ref{fig7})
with a single edge state corresponding to $\alpha=N_y/2$ is that the current density of
that edge state is opposite in sign to the current density of all the bulk states $\alpha = 1 \ldots (N_y/2-1)$.
Thus while each bulk state contributes a relatively small amount to the current density, all their contributions add up to
a net value such that it exactly cancels the current density from the edge state.
For the phase $P_3$ the edge states from the FBZ boundaries ($\alpha=1$) have the opposite
chirality to the edge state from the FBZ center ($\alpha = N_y/2$), and this can be clearly seen in
the top panel of Fig.~\ref{fig8}. This figure also shows that for $P_3$, the magnitude
of the edge-currents from the states in the zone-boundary are much smaller than those from the center.
For the phase $P_4$ the edge states from the FBZ boundaries ($\alpha=1$) have the same chirality
as the edge-state from the FBZ center ($\alpha = N_y/2$). This is reflected in the lower panel of
Fig.~\ref{fig8} where
the peak values of the current densities from the edge states at $\alpha=1$ and $\alpha=N_y/2$ are
indeed of the same sign, although they
may have opposite signs within a few lattice spacings of the boundary.
Note that even though the current densities carried by the anomalous edge states at the FBZ boundaries can be of the same magnitude as
those of the edge states at the center of the FBZ, this does not imply that they affect physical observables in the same way.
This is because physical observables, such as the quench current density, are obtained by averaging over
all the Floquet states, where each state gets weighted by their respective occupation probabilities.
Moreover, note that this current density is not what one measures in transport such as the Hall response; the latter is a
linear response to an external voltage difference, given by the Kubo or Landauer formalism.
As argued in the previous sub-section,
the high effective temperature of the anomalous edge states implies that they contribute relatively little to transport.
We now discuss the quench current density, i.e.,
the current density carried by the wavefunction at long times after the quench. This is shown in
the four panels in Fig.~\ref{fig9} for the four phases.
Even though the current
density of each exact eigenstate is anti-symmetric in position along the cylinder, the quench current
density is symmetric in position:
\begin{eqnarray}
J_{N_y-n_y+1}(t\rightarrow \infty) = J_{n_y}(t\rightarrow\infty).
\end{eqnarray}
Why this is so is explained in Appendix~\ref{app3}.
Here we give another quick way to understand this.
It is convenient to define
the time-averaged local density in the diagonal ensemble, which using Eq.~\eqref{nij} is,
\begin{eqnarray}
&&\rho_{n_y}= \frac{1}{N_x}\overline{\sum_{k_x} n_{n_y,n_y,k_x}(t)}\nonumber\\
&&= \frac{1}{N_x}\sum_{k_x,\alpha = 1 \ldots N_y}O_{\alpha}(k_x) \sum_m\biggl|\phi_{k_x,\alpha,n_y}^m\biggr|^2\label{rhodef}.
\end{eqnarray}
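A short numerical sketch of this quantity (illustrative only, with assumed array layouts) follows directly from the occupations and the Fourier components of the Floquet modes:
\begin{verbatim}
import numpy as np

def local_density(O, phi_m):
    """Time-averaged local density of Eq. (rhodef), diagonal ensemble.

    O     : (N_kx, N_levels) occupations O_alpha(k_x).
    phi_m : (N_kx, N_levels, N_modes, N_sites) Fourier components
            phi^m_{k_x, alpha, n_y} of the Floquet modes.
    """
    weight = np.sum(np.abs(phi_m) ** 2, axis=2)   # sum_m |phi^m|^2
    return np.einsum('ka,kay->y', O, weight) / O.shape[0]

# delta_rho = local_density(O, phi_m) - 0.5 then satisfies Eq. (cdev).
\end{verbatim}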
As shown in Appendix~\ref{app2}, the combination of being at half-filling, the particle-hole and inversion symmetry
of the Floquet Hamiltonian, and the fact that the quench breaks
inversion symmetry causing $O_{\alpha}(k_x) \neq O_{\alpha}(-k_x)$,
results in a local deviation of the density from half-filling, which is anti-symmetric in position.
In particular, defining, $\delta \rho_{n_y}=\rho_{n_y}-1/2$, we have the relation,
\begin{eqnarray}
\delta \rho_{n_y}= -\delta \rho_{N_y-n_y+1}\label{cdev}.
\end{eqnarray}
Semi-classically, the current density is the product $\delta \rho_{n_y} v_{n_y}$,
where $v_{n_y}$ is the velocity. Since the perfect chirality of the exact Floquet eigenstates implies that
$v_{n_y}$ is anti-symmetric in position, and $\delta \rho_{n_y}$ is also anti-symmetric in position,
the net quench current density is symmetric in position.
If the quench had preserved inversion symmetry by providing an occupation probability with
the symmetry $O_{\alpha}(k_x)=O_{\alpha}(-k_x)$, then we would have had $\delta\rho_{n_y}=0$, and consequently the
time-averaged quench current density would have been zero at half-filling.
To see simply how all this comes about from the breaking of inversion symmetry, note that at the time
of the laser quench at $t=0$, the vector potential points entirely along the $\hat{x}$ direction. Thus
inversion symmetry in $\hat{x}$ is broken, resulting in an unequal population of states at $+k_x$ and $-k_x$.
This implies that for every $|k_x|$, there is a net current flowing in the system due to periodic boundary
conditions in $\hat{x}$. Since individual eigenstates are exactly chiral along the $\hat{y}$ direction,
this current density also implies that some net charge is moved from one end of the cylinder to the other.
A slower quench will reduce the magnitude of this effect.
To generate this quench current density, all we needed was to break inversion symmetry.
If we had no particle-hole symmetry for example due to next-nearest-neighbor hopping, then we would have still
generated a quench current density, but this current density would not have been exactly symmetric in position.
Thus we find that the quench current density appears as a sheet of circulating current on the surface of the cylinder.
Such a current density profile will generate a
local magnetization, and is therefore detectable using sensitive magnetometers such as SQUIDs~\cite{Bluhm09,Shibata15}.
This result may also be useful for using
a fast quench as a tool to generate dissipationless current flow in a carbon nanotube.
We now briefly discuss how our results depend on the length of the cylinder. We have taken care to ensure that the
length is sufficiently long so as to clearly identify the bulk and edge states. As the length is further increased
from our chosen lengths, the edge states are not modified, while the bulk spectrum fills out more
as more states are being added. The physical observables we study are properly normalized to account for this effect, and therefore
do not depend sensitively on the length of the cylinder.
\section{Conclusions}\label{conclu}
One of the properties that make FTIs unique is a rich structure of edge states, with edge modes
appearing both at the center of the FBZ as well as the boundaries. For
a low amplitude laser, one can identify the former
with off-resonant, and the latter with resonant, processes. In this paper
we have highlighted how these qualitatively different edge modes behave, the current densities
carried by them, and how they are occupied
in a closed quantum system where the laser was switched on as a quench.
We find that for an off-resonant laser,
despite the laser quench, the Floquet level occupation is remarkably close to a half-filled zero temperature Fermi
function. This is consistent with a bulk Kubo-formula computation for the Hall conductance~\cite{Dehghani15a} which
found it to be quite close to $e^2/h$.
For the resonant laser, we find a selective depletion and occupation
of the Floquet modes, i.e., a laser induced population inversion for
selected regions in momentum space. As a consequence we find that in the same phase (phase $P_3$ for example)
the edge states at the center of the FBZ are occupied at a low effective temperature, while the
edge states at the boundaries of the FBZ, and arising due to resonant processes, are occupied with
a high effective temperature.
A slower quench will not qualitatively affect this result
because the laser resonance condition does not depend on the amplitude of the laser, but only on its frequency~\cite{Santoro15}.
By simply looking at how the edge states are occupied, and what fraction
of the bulk is excited, we can use the Landauer formalism to make a simple estimate for the conductance of the edge states and hence
the Hall response. We find this
estimate to be consistent with the
Hall response of a spatially uniform system, where
no edge modes exist, the entire Hall response is purely due to bulk states,
and the response was computed using the Kubo formalism~\cite{Dehghani15a,Dehghani15b}.
This is a signature of the bulk-boundary correspondence in FTIs where Hall response can be
captured by two complementary ways, one entirely involving bulk states in an infinite system,
and the second involving effective 1D transport along chiral edge states. Moreover this correspondence
and in particular the edge state picture
also explains why the Hall response for some phases like $P_3$, when accounting properly for the
nonequilibrium occupation is only $\sim 1/3$ of its maximum possible value of $C e^2/h$.
We also find that the
expectation value of the time-averaged quench current density shows some special symmetries; for
example, it is exactly symmetric along the length of the cylinder. We have shown that these
have to do with the underlying
particle-hole and inversion symmetry of the Floquet Hamiltonian and of graphene, where for the former the inversion
symmetry manifests itself only
after time-averaging over one cycle of the laser.
We also showed that the quench current density arises because the laser quench breaks inversion symmetry say in the $x$-direction, leading to
an asymmetric occupation of $+k_x$ and $-k_x$ states. Thus when periodic boundary conditions are imposed
in the $x$-direction, this leads to a circulating current on the surface of the cylinder. This
current density profile is markedly different from the naive expectation of having clockwise currents at the
top, and anti-clockwise currents at the bottom of the cylinder, where the latter would be the profile
only in an exact eigenstate of the system. Since each eigenstate is chiral, we also showed that the quench current density
profile leads to a transfer of charge from one end of the cylinder to the other. This unusual current density profile
can be detected using magnetometers such as SQUIDs. Moreover laser quenches can be used as a tool
for generating a net dissipationless current flow in carbon nanotubes.
{\sl Acknowledgments:}
The authors thank Y. Lemonik and M. Rudner for helpful discussions.
This work was supported by the US Department of Energy,
Office of Science, Basic Energy Sciences, under Award No.~DE-SC0010821.
Deep reinforcement learning has seen numerous successes in recent years~\citep{silver2017mastering,openai2018dexterous,vinyals2019grandmaster}, but still faces challenges in domains where data is limited or expensive.
One candidate solution to address these challenges and improve data efficiency is to impose hierarchical policy structures.
By dividing an agent into a combination of low-level and high-level controllers, the options framework \citep{sutton1999between,precup2000temporal} introduces a form of action abstraction, effectively reducing the high-level controller's task to choosing from a discrete set of reusable sub-policies.
The framework further enables temporal abstraction by explicitly modelling the temporal continuation of low-level behaviors.
Unfortunately, in practice, hierarchical control schemes often introduce technical challenges, including a tendency to learn degenerate solutions preventing the agent from using its full capacity \citep{ harb2018waiting}, undesirable trade-offs between learning efficiency and final performance \citep{harut2019termination}, or the increased variance of updates \citep{precup2000temporal}.
Additional challenges in off-policy learning for hierarchical approaches \citep{NIPS2005_2767} led to a focus of recent works on the on-policy setting, forgoing the considerable improvements in data efficiency often connected to off-policy methods.
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[trim=0 3mm 0 0,clip,width = 0.29\textwidth,valign=b]{figures/gmf.png} &
\includegraphics[trim=0 9mm 0 0,clip,width = 0.32\textwidth,valign=b]{figures/gm.png} &
\includegraphics[width = 0.29\textwidth,valign=b]{figures/gmmc3.png}
\end{tabular}
\vspace{-1mm}
\caption{Graphical model for flat policies (left), mixture policies (middle) - introducing a type of action abstraction, and option policies (right) - adding temporal abstraction via autoregressive options.
While the action $a$ is solely dependent on the state $s$ for flat policies, mixture policies introduce the additional component or option $o$ which affects the actions (following Equation \ref{eq:mixture}). Option policies do not change the direct dependencies for actions but instead affect the options themselves, which are now also dependent on the previous option and its potential termination $b$ (following Equation \ref{eq:optiontransitions}). }
\label{fig:2dimensional}
\vspace{-1mm}
\end{figure*}
We propose an approach to address these drawbacks, Hindsight Off-policy Options~(HO2): a method for data-efficient, robust, off-policy option learning.
The algorithm simultaneously learns a high-level controller and low-level options via a single end-to-end optimization procedure. It improves data efficiency by leveraging off-policy learning and inferring distributions over options for trajectories in hindsight to maximize the likelihood of good actions and options.
To facilitate off-policy learning, the algorithm does not condition on executed options but treats these as latent variables during optimization and marginalizes over all options to compute the exact likelihood.
HO2~backpropagates through the resulting dynamic programming inference graph (conceptually related to \citep{rabiner1989tutorial,shiarlis2018taco, smith2018inference}) to enable the training of all policy components from trajectories, independent of the executed option.
As an additional benefit, the formulation of the inference graph makes it possible to impose intuitive, hard constraints on the option termination frequency, thereby regularizing the learned solution (and encouraging temporally-extended behaviors) independently of the scale of the reward.
The policy update follows an expectation-maximization perspective and generates an intermediate, non-parametric policy, which is adapted to maximize agent performance. This enables the update of the parametric policy to rely on simple weighted maximum likelihood, without requiring further approximations such as Monte Carlo estimation or continuous relaxation~\citep{li2019sub}.
Finally, the updates are stabilized using adaptive trust-region constraints, demonstrating the importance of robust policy optimization for hierarchical reinforcement learning (HRL) in line with recent work on on-policy option learning \citep{zhang2019dac}.
We experimentally compare HO2~to prior option learning methods.
By treating options as latent variables in off-policy learning and enabling backpropagation through the inference procedure, HO2~proves more efficient than prior approaches such as the Option-Critic \citep{bacon2017option} or DAC \citep{zhang2019dac}. HO2 additionally outperforms IOPG \citep{smith2018inference}, which considers a similar perspective but still builds on on-policy training.
To better understand different abstractions in option learning, we compare with corresponding policy optimization methods for flat policies \citep{abdolmaleki2018relative} and mixture policies without temporal abstraction \citep{wulfmeier2019regularized},
thereby allowing us to isolate the benefits of both action and temporal abstraction.
Both properties demonstrate particular relevance in more demanding simulated robot manipulation tasks from raw pixel inputs.
We further perform extensive ablations to evaluate the impact of trust-region constraints, off-policyness, option decomposition, and the benefits of maximizing temporal abstraction when using pre-trained options versus learning from scratch.
Our main contributions include:
\begin{itemize}
\item A robust, efficient off-policy option learning algorithm enabled by a probabilistic inference perspective on HRL. The method outperforms existing option learning methods on common benchmarks and demonstrates benefits on pixel-based 3D robot manipulation tasks.
\item An intuitive technique to further encourage temporal abstraction beyond the core method, using the inference graph to constrain option switches without additional weighted loss terms.
\item A careful analysis to improve our understanding of the options framework by isolating the impact of action abstraction and temporal abstraction.
\item Further ablation and analysis of several algorithmic choices: trust-region constraints, off-policy versus on-policy data, option decomposition, and the importance of temporal abstraction with pre-trained options versus learning from scratch.
\end{itemize}
\section*{Acknowledgments}
The authors would like to thank Peter Humphreys, Satinder Baveja, Tobias Springenberg, and Yusuf Aytar for helpful discussion and relevant feedback which helped to shape the publication.
We additionally like to acknowledge the support of the DeepMind robotics lab for infrastructure and engineering support.
\clearpage
\section{Conclusions}\label{sec:conclusions}
We introduce a robust, efficient algorithm for off-policy training of option policies. The approach outperforms recent work in option learning on common benchmarks and is able to solve complex, simulated robot manipulation tasks from raw pixel inputs more reliably than competitive baselines.
HO2~takes a probabilistic inference perspective to option learning, infers option and action probabilities for trajectories in hindsight, and performs critic-weighted maximum-likelihood estimation by backpropagating through the inference step.
Being able to infer options for a given trajectory allows robust off-policy training and determination of updates for all options instead of only the executed ones. It also makes it possible to impose constraints on the termination frequency independently of an environment's reward scale.
We separately analyze the impact of action abstraction (via mixture policies), and temporal abstraction (via options). We find that each abstraction independently improves performance. Additional maximization of temporal consistency for option choices is beneficial when transferring pre-trained options but displays a limited effect when learning from scratch.
Furthermore, we investigate the consequences of the off-policyness of training data and demonstrate the benefits of trust-region constraints for option learning.
We examine the impact of different agent and environment properties (such as information asymmetry, tasks, and embodiments) with respect to task decomposition and option clustering; a direction which provides opportunities for further investigation in the future.
Finally, since our method is based on (weighted) maximum likelihood estimation, it can be adapted naturally to learn structured behavior representations in mixed data regimes, e.g. to learn from combinations of demonstrations, logged data, and online trajectories. This opens up promising directions for future work.
\section{Method}\label{sec:method}\label{sec:prelims}
We start by considering a reinforcement learning setting with an agent operating in a Markov Decision Process (MDP) consisting of the state space $\mathcal{S}$, the action space $\mathcal{A}$, and the transition probability $p(s_{t+1}|s_t,a_t)$ of reaching state $s_{t+1}$ from state $s_t$ when executing action $a_t$.
The agent's behavior is commonly described as a conditional distribution with actions $a_t$ drawn from the agent's policy $\pi(a_t | s_t)$.
Jointly, the transition dynamics and policy induce the marginal state visitation distribution $p(s_t)$. The discount factor $\gamma$ together with the reward $r_t=r\left(s_{t}, a_{t}\right)$ gives rise to the expected return, which the agent aims to maximize:
$J(\pi) = \mathbb{E}_{p\left(s_t\right),\pi(a_t|s_t)}\Big[\sum_{t=0}^{\infty} \gamma^{t} r_t \Big]$.
\subsection{Policy Types}
Option policies introduce temporal and action abstraction in comparison to commonly-used flat Gaussian policies.
Our goal in this work is not only to introduce this additional structure to improve data efficiency but to further understand the impact of the different abstractions.
For this purpose, we further study mixture distributions. They represent an intermediate case with only action abstraction, as described in Figure \ref{fig:2dimensional}.
We begin by covering both policy types in the following paragraphs. First, we focus on computing likelihoods of actions (and options) under a policy. Then, we describe the proposed critic-weighted maximum likelihood algorithm to train hierarchical policies.
\paragraph{Mixture Policies} This type extends flat policies $\pi(a_t | s_t)$ by introducing a high-level controller that samples from multiple options (low-level policies) independently at each timestep (Figure \ref{fig:2dimensional}).
The joint probability of actions and options
is given as:
\begin{align}\label{eq:mixture}
\pi_{\theta}(a_t,o_t | s_t) = \pi^L \left(a_t | s_t, o_t\right) \pi^H\left(o_t | s_t\right),
\end{align}
where $\pi^H$ and $\pi^L$ respectively represent high-level policy (which for the mixture is equal to a Categorical distribution $\pi^H\left(o_t | s_t\right) = \pi^C\left(o_t | s_t\right)$) and low-level policy (components of the resulting mixture distribution), and $o$ is the index of the sub-policy or mixture component.
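For a Gaussian mixture, the likelihood of an action follows by marginalizing over $o_t$; a minimal Python sketch (illustrative only, with placeholder callables \texttt{high\_level} and \texttt{low\_levels}) is:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def mixture_log_prob(a, s, high_level, low_levels):
    """log pi(a|s) = log sum_o pi^C(o|s) pi^L(a|s,o), cf. Eq. (mixture).

    high_level : callable s -> length-M array of probabilities pi^C(.|s).
    low_levels : list of M callables s -> (mean, cov) of Gaussian pi^L.
    """
    log_joint = np.log(high_level(s)) + np.array(
        [multivariate_normal.logpdf(a, *ll(s)) for ll in low_levels])
    m = log_joint.max()
    return m + np.log(np.exp(log_joint - m).sum())  # stable log-sum-exp
\end{verbatim}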
\paragraph{Option Policies} This type extends mixture policies by incorporating temporal abstraction.
We follow the semi-MDP and \textit{call-and-return} option model \citep{sutton1999between}, defining an option as a triple $(I(s_t,o_t),\pi^L(a_t|s_t,o_t),\beta(s_t,o_t))$. The initiation condition $I$ describes an option's probability to start in a state and is simplified to $I(s_t,o_t)=1 \forall s_t \in \mathcal{S}$ following \citep{bacon2017option,zhang2019dac}. The termination condition $b_t \sim \beta(s_t, o_t)$ denotes a Bernoulli distribution describing the option's probability to terminate in any given state, and the action distribution for a given option is modelled by $\pi^L(a_t|s_t,o_t)$.
Every time the agent observes a state, the current option's termination condition is sampled. If subsequently no option is active, a new option is sampled from the controller $\pi^C(o_{t}|s_{t})$. Finally, we sample from either the continued or new option to generate a new action.
The resulting transition probabilities between options are described by
\begin{align}
p\left(o_t | s_t, o_{t-1}\right) =
\begin{cases}
1-\beta(s_t,o_{t-1}) \left(1- \pi^C(o_t|s_t)\right) & \text{if } o_{t} = o_{t-1} \\
\beta(s_t,o_{t-1})\, \pi^C(o_t|s_t) & \text{otherwise}
\end{cases}
\label{eq:optiontransitions}
\end{align}
During interaction in an environment, an agent samples individual options. However, during learning HO2~takes a probabilistic inference perspective with options as latent variables and states and actions as observed variables.
This allows us to infer likely options over a whole trajectory in hindsight, leading to efficient intra-option learning \citep{precup2000temporal} for all options independently of the executed option. This is particularly relevant for off-policy learning, as options can change between data generation and learning.
Following the graphical model in Figure \ref{fig:2dimensional} and corresponding transition probabilities in Equation \ref{eq:optiontransitions}, the probability of being in option $o_t$ at timestep $t$ across a trajectory $h_t= \{s_t, a_{t-1}, s_{t-1},... s_0, a_0\}$ is determined in a recursive manner based on the previous timestep's option probabilities. For the first timestep, the probabilities are given by the high-level controller $\pi^{H}\left(o_0 | h_0\right) = \pi^C\left(o_0 | s_0\right)$. For all consecutive steps, they are computed as follows for $M$ options:
\begin{eqnarray}
\begin{aligned}
\tilde{\pi}^{H}\left(o_t | h_t\right) = \sum_{o_{t-1}=1}^M \big[ p\left(o_t|s_t, o_{t-1}\right) \pi^H\left(o_{t-1} | h_{t-1}\right) \label{eq:dynamic1} \\
{\pi^L\left(a_{t-1} | s_{t-1}, o_{t-1}\right)}\big]
\end{aligned}
\end{eqnarray}
The distribution is normalized at each timestep following $\pi^H\left(o_t | h_t\right) = {\tilde{\pi}^{H}\left(o_t | h_t\right)}/ {\sum_{o'_{t}=1}^M\tilde{\pi}^{H}\left(o'_t | h_t\right)}$.
Performing this exact marginalization at each timestep is much more efficient than computing independently over all possible sequences of options and reduces variance compared to sampling-based approximations.
Building on the option probabilities, Equation \ref{eq:option_probs} conceptualizes the connection between mixture and option policies.
\begin{align}\label{eq:option_probs}
\pi_{\theta}(a_t,o_t | h_{t}) = \pi^L\left(a_t | s_t, o_t\right) \pi^H\left(o_t | h_t\right)
\end{align}
In both cases, the low-level policies $\pi^L$ depend only on the current state.
However, whereas for mixtures the high-level probabilities $\pi^H$ depend only on the current state $s_t$, for options we can take into account compressed information about the history $h_{t}$, as facilitated by the previous timestep's distribution over options $\pi^H\left(o_{t-1} | h_{t-1}\right)$.
This dynamic programming formulation in Equation \ref{eq:dynamic1}
enables the exact computation of the likelihood of actions and options along off-policy trajectories.
We can use automatic differentiation in modern deep learning frameworks (e.g. \citep{abadi2016tensorflow}) to backpropagate through the graph and determine the gradient updates for all policy parameters.
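A minimal NumPy version of this forward pass is sketched below (illustrative only; in practice the recursion is expressed in an autodiff framework so that gradients flow through it):
\begin{verbatim}
import numpy as np

def option_forward_pass(pi_c, beta, action_ll):
    """Normalized option posteriors pi^H(o_t | h_t) via Eq. (dynamic1).

    pi_c      : (T, M) controller probabilities pi^C(o_t | s_t).
    beta      : (T, M) termination probabilities beta(s_t, o).
    action_ll : (T-1, M) likelihoods pi^L(a_t | s_t, o) of executed actions.
    """
    T, M = pi_c.shape
    post = np.zeros((T, M))
    post[0] = pi_c[0]                    # pi^H(o_0 | h_0) = pi^C(o_0 | s_0)
    for t in range(1, T):
        # transition matrix of Eq. (optiontransitions); rows index o_{t-1}
        P = beta[t][:, None] * pi_c[t][None, :]
        P[np.arange(M), np.arange(M)] += 1.0 - beta[t]
        msg = post[t - 1] * action_ll[t - 1]  # weight by executed action
        post[t] = msg @ P
        post[t] /= post[t].sum()              # per-timestep normalization
    return post
\end{verbatim}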
\subsection{Agent Updates}
We continue by describing the policy improvement algorithm, which uses the previously determined option probabilities.
The three main steps are: 1) update the critic (Eq. \ref{eq:objective_q_value}); 2) generate an intermediate, non-parametric policy based on the updated critic (Eq. \ref{eq:objective_q}); 3) update the parametric policy to align to the non-parametric improvement (Eq. \ref{eq:objective_pi}).
By handling the maximization of expected returns with a closed-form solution for a non-parametric intermediate policy, the update of the parametric policy can build on simple, weighted maximum likelihood.
In essence, we do not rely on differentiating an expectation over a distribution with respect to parameters of the distribution.
This enables training a broad range of distributions (including discrete ones) without further approximations such as required when the update relies on the reparametrization trick \citep{li2019sub}.
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.43\textwidth]{figures/2d.png}&
\includegraphics[width = 0.53\textwidth]{figures/3d.png}
\end{tabular}
\caption{Representation of the dynamic programming forward pass - bold arrows represent connections without switching. Left: example with two options. Right: extension of the graph to explicitly count the number of switches. Marginalization over the switch dimension recovers the component probabilities. By limiting which nodes are summed over at every timestep, the optimization can be targeted to fewer switches and more consistent option execution.}
\label{fig:example}
\end{figure*}
\paragraph{Policy Evaluation}
In comparison to prior work on training mixture policies~\citep{wulfmeier2019regularized}, the critic for option policies is a function of $s$, $a$, {and} $o$ since the current option
influences the likelihood of future actions and thus rewards. Note that even though we express the policy as a function of the history $h_t$, $Q$ is a function of $o_t, s_t, a_t$, since these are sufficient to render the future trajectory independent of the past (see the graphical model in Figure \ref{fig:2dimensional}).
We define the TD(0) objective as
\begin{align}
\label{eq:objective_q_value}
\min_\phi L(\phi) = \mathbb{E}_{s_t,a_t,o_t \sim \mathcal{D}} \Big[ \big( Q_{\text{T}} - {Q}_\phi(s_t, a_t, o_t))^2 \Big],
\end{align}
where the current states, actions, and options are sampled from the current replay buffer $\mathcal{D}$.
For the 1-step target $Q_{\text{T}}= r_t+\gamma \mathbb{E}_{s_{t+1},a_{t+1},o_{t+1}}[{{Q'}}(s_{t+1}, a_{t+1}, o_{t+1})]$,
the expectation over the next state is approximated with the sample $s_{t+1}$ from the replay buffer,
and we estimate the value by sampling actions and options according to $a_{t+1}, o_{t+1} \sim \pi'(\cdot|h_{t+1})$ following Equation \ref{eq:option_probs}. $\pi'$ and $Q'$ respectively represent target networks for policy and critic which are used to stabilize training.
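As a small illustrative sketch (not the exact implementation), the one-step target can be estimated by sampling next-step actions and options from the target policy; \texttt{q\_target} and \texttt{policy\_sample} are assumed callables, with \texttt{s\_next} standing in for the conditioning on $h_{t+1}$:
\begin{verbatim}
import numpy as np

def td_target(r, gamma, q_target, s_next, policy_sample, n_samples=16):
    """Q_T = r + gamma * E[Q'(s', a', o')] of Eq. (objective_q_value).

    q_target      : callable (s, a, o) -> scalar, target critic Q'.
    policy_sample : callable s -> (a, o), one sample from target policy pi'.
    """
    values = [q_target(s_next, *policy_sample(s_next))
              for _ in range(n_samples)]
    return r + gamma * np.mean(values)
\end{verbatim}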
\paragraph{Policy Improvement}
We follow an Expectation-Maximization procedure similar to \citep{wulfmeier2019regularized,abdolmaleki2018maximum}, which first computes an improved non-parametric policy and then updates the parametric policy to match this target.
In comparison to prior work, the policy does not only depend on the current state $s_t$ but also on compressed information about the rest of the previous trajectory $h_t$, building on Equation \ref{eq:dynamic1}.
Given the critic, all we require to optimize option policies is the ability to sample from the policy and determine the log-likelihood (gradient) of actions and options under the policy.
The first step provides us with a non-parametric policy $q(a_t,o_t|h_t)$.
\begin{eqnarray}
\begin{aligned}
\max_{q} J(q) =\ \mathbb{E}_{a_t,o_t \sim q, h_t \sim \mathcal{D}}\big[{Q_\phi}(s_t, a_t, o_t) \big], \\
\text{s.t. } \mathbb{E}_{h_t \sim \mathcal{D}}\Big[
\mathrm{KL}\big(q(\cdot|h_t) \| \pi_\theta(\cdot|h_t)\big)\Big] \le \epsilon_E,
\end{aligned}
\label{eq:objective_q}
\end{eqnarray}
where $\mathrm{KL}(\cdot \| \cdot)$ denotes the Kullback-Leibler divergence, and $\epsilon_E$ defines a bound on the KL.
We can find the solution to the constrained optimization problem in Equation \ref{eq:objective_q} in closed-form and obtain
\begin{equation} \label{eq:q_closed}
q(a_t,o_t|h_t) \propto \pi_{\theta}(a_t,o_t|h_t) \exp\left({{{Q_\phi}(s_t,a_t,o_t)}/{\eta}}\right).
\end{equation}
Practically speaking, this step computes samples from the previous policy and weights these based on the corresponding temperature-calibrated values of the critic. The temperature parameter $\eta$ is computed following the dual of the Lagrangian. The derivation and final form of the dual can be found in Appendix \ref{app:nonparam}, Equation \ref{eq:dual_eta}.
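In code, this E-step reduces to a temperature-calibrated softmax over sampled critic values together with a one-dimensional dual for $\eta$; a minimal sketch (illustrative, using per-state means rather than the exact constants of the appendix) is:
\begin{verbatim}
import numpy as np

def nonparametric_weights(q_values, eta):
    """Weights of Eq. (q_closed): softmax of Q / eta over the samples.

    q_values : (B, J) critic values Q(s_i, a_j, o_j), J samples per state.
    """
    z = q_values / eta
    z = z - z.max(axis=1, keepdims=True)   # numerical stabilization
    w = np.exp(z)
    return w / w.sum(axis=1, keepdims=True)

def dual(eta, q_values, eps):
    """Dual g(eta) whose minimizer sets the temperature, cf. Eq. (dual_eta)."""
    z = q_values / eta
    m = z.max(axis=1)
    log_mean_exp = m + np.log(np.exp(z - m[:, None]).mean(axis=1))
    return eta * eps + eta * log_mean_exp.mean()
\end{verbatim}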
To align the parametric to the improved non-parametric policy in the second step, we minimize their KL divergence.
\begin{eqnarray}
\begin{aligned}
\theta = \arg \min_{\theta} \mathbb{E}_{h_t \sim \mathcal{D}}\Big[ \mathrm{KL}\big( q(\cdot | h_t) \| \pi_{\theta}(\cdot | h_t) \big) \Big], \label{eq:objective_pi}\\
\text{s.t. } \mathbb{E}_{h_t \sim \mathcal{D}}\Big[
\mathcal{T}\big(\pi_{\theta_{+}}(\cdot|h_t) \| \pi_{\theta}(\cdot|h_t)\big)\Big] \le \epsilon_M
\end{aligned}
\end{eqnarray}
The distance function $\mathcal{T}$ in Equation \ref{eq:objective_pi} has a trust-region effect and stabilizes learning by constraining the change in the parametric policy. The computed option probabilities from Equation \ref{eq:dynamic1} are used in Equation \ref{eq:q_closed} to enable sampling of options as well as Equation \ref{eq:objective_pi} to determine and maximize the likelihood of samples under the policy.
We can apply Lagrangian relaxation again and solve the primal as detailed in Appendix \ref{app:param}.
Finally, we describe the complete pseudo-code for HO2~in Algorithm \ref{alg:learner}.
Note that both Gaussian and mixture policies have been trained in prior work via methods relying on critic-weighted maximum likelihood \citep{abdolmaleki2018relative, wulfmeier2019regularized}. By comparing with the extension towards option policies described above, we will make use of this connection to isolate the impact of action abstraction and temporal abstraction in the option framework in Section \ref{sec:multi}.
\begin{algorithm}[t]
\caption{Hindsight Off-policy Options}\label{alg:learner}
\begin{algorithmic}
\STATE \textbf{input:} initial parameters for $\theta,~\eta$ and $\phi$, KL regularization parameters $\epsilon$, set of trajectories $\tau$
\WHILE{not done}
\STATE sample trajectories $\tau$ from replay buffer
\STATE \color{gray}// forward pass along sampled trajectories
\STATE \color{black}determine component probabilities $\pi^H\left(o_t | h_t\right)$ (Eq. \ref{eq:dynamic1})
\STATE sample actions $a_j$ and options $o_j$ from $\pi_{\theta}(\cdot|h_t)$ (Eq. \ref{eq:option_probs}) to estimate expectations
\STATE \color{gray}// compute gradients over batch for policy, Lagrangian multipliers and Q-function
\STATE \color{black}$\delta_\theta \leftarrow -\nabla_\theta \sum_{h_t \in \tau} \sum_{j} [ \exp\left({Q_\phi(s_t,a_j,o_j)}/{\eta}\right)$ \\ \hfill $\log \pi_{\theta}(a_j, o_j | h_t) ] $ following Eq. \ref{eq:q_closed} and \ref{eq:objective_pi}
\STATE $\delta_{\eta} \leftarrow \nabla_\eta g(\eta) = \nabla_\eta \eta\epsilon+\eta \sum_{h_t \in \tau} \log \sum_{j}[$ \\ \hfill $\exp\left({Q_\phi(s_t,a_j,o_j)}/{\eta}\right) ]$ following Eq. \ref{eq:dual_eta}
\STATE $\delta_\phi \leftarrow \nabla_{\phi} \sum_{(s_t, a_t, o_t) \in \tau} \big( {Q}_\phi(s_t, a_t, o_t) -
Q_{\text{T}} \big)^2$ \\ \hfill following Eq. \ref{eq:objective_q_value}
\STATE \color{black}update $\theta, \eta, \phi$ \color{gray}~~~~~~// apply gradient updates \color{black}
\IF{number of iterations = target update}
\STATE \color{black}$\pi' = \pi_\theta$, $Q' = Q_\phi$ \color{gray}~~~~~~// update target networks for policy $\pi'$ and value function $Q'$ \color{black}
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{figure*}[b]
\centering
\begin{tabular}{c}
\includegraphics[trim=3mm 0 0 0, clip,width = 0.98\textwidth]{results/png/gym_wppo_main.png}
\end{tabular}
\caption{Results on OpenAI gym. Dashed black line represents DAC \citep{zhang2019dac}, dotted line represents Option-Critic \citep{bacon2017option}, solid line represents IOPG \citep{smith2018inference}, dash-dotted line represents PPO \citep{schulman2017proximal} (approximate results after $2 \times 10^{6}$ steps from \citep{zhang2019dac}). We limit the number of switches to 5 for HO2-limits. HO2~consistently performs better than or on par with existing option learning algorithms.}
\label{fig:openaigym}
\end{figure*}
\subsection{Maximizing Temporal Abstraction}
\label{sec:temporal_abstraction}
Persisting with each option over longer time periods can help to reduce the search space and simplify exploration \citep{sutton1999between,harb2018waiting}.
Previous approaches (e.g. \citep{harb2018waiting}) rely on additional weighted loss terms which penalize option transitions.
In addition to the main HO2~algorithm, we introduce an extension mechanism to explicitly limit the maximum number of switches between options along a trajectory to increase temporal abstraction.
In comparison to additional loss terms, a parameter for the maximum number of switches can be chosen independently of the reward scale of an environment and provides an intuitive semantic interpretation; both aspects simplify manual adaptation.
We extend the 2D graph for computing option probabilities (Figure \ref{fig:example}) with a third dimension representing the number of switches between options. Practically, this means that we are modelling $\pi^H(o_t,n_t|h_t)$ where $n_t$ represents the number of switches until timestep $t$.
We can now marginalize over \textit{all} numbers of switches to recover the option probabilities as before. Instead, to encourage option consistency across timesteps, we can sum over \textit{only a subset} of nodes, namely those with $n \leq N$ where $N$ is smaller than the maximal possible number of switches, leading to $\pi^H \left(o_t| h_t\right) = \sum_{n_t=0}^N \pi^H \left(o_t, n_t| h_t\right)$.
For the first timestep, only 0 switches are possible, such that $\pi^H\left(o_0 ,n_0=0| h_0\right) = \pi^C\left(o_0 | s_0\right)$ and $0$ for all other values of $n$.
For subsequent timesteps, all edges corresponding to option terminations $\beta$ contribute to the next step's option probabilities with an increased number of switches $n_{t+1} = n_t + 1$, while all edges representing the continuation of an option lead to $n_{t+1} = n_t$.
This results in the computation of the joint distribution for $t>0$:
\begin{eqnarray}
\begin{aligned}
\tilde{\pi}^{H}\left(o_t,n_t | h_t\right) = \sum_{\substack{o_{t-1}=1,\\ n_{t-1}=0}}^{M,N} p\left(o_t,n_t|s_t, o_{t-1},n_{t-1}\right) \\ \pi^H\left(o_{t-1},n_{t-1} | h_{t-1}\right) \label{eq:dynamic2}
{\pi^L\left(a_{t-1} | s_{t-1}, o_{t-1}\right)} %
\end{aligned}
\end{eqnarray}
\noindent which can then be normalized using $\pi^H\left(o_t,n_t | h_t\right) = {\tilde{\pi}^{H}\left(o_t,n_t | h_t\right)}/ {\sum_{o'_{t}=1}^M\sum_{n'_{t}=0}^{L}\tilde{\pi}^{H}\left(o'_t ,n'_t| h_t\right)}$.
The option and switch index transitions $p\left(o_t,n_t|s_t, o_{t-1},n_{t-1}\right)$ are further described in Equation \ref{eq:optionswitch_transitions} in the Appendix.
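To make the recursion concrete, the following is a minimal Python sketch of the forward pass of Equation \ref{eq:dynamic2} over options and switch counts; the array names and shapes are assumptions, and the low-level action terms $\pi^L$ are dropped for simplicity, following the heuristic evaluated in Section \ref{sec:ablations}.
\begin{verbatim}
import numpy as np

def forward_switch_marginals(pi_C, beta, N):
    """Sketch of the forward pass over options and switch counts.

    pi_C: [T, M] high-level probabilities pi^C(o | s_t)
    beta: [T, M] termination probabilities beta(s_t, o_{t-1})
    N:    maximum number of switches represented
    Returns alpha[t, o, n] ~ pi^H(o_t = o, n_t = n | h_t).
    """
    T, M = pi_C.shape
    alpha = np.zeros((T, M, N + 1))
    alpha[0, :, 0] = pi_C[0]                    # n_0 = 0
    for t in range(1, T):
        # Continuing the option keeps both o and n.
        stay = (1.0 - beta[t])[:, None] * alpha[t - 1]
        # Terminating moves probability mass from n-1 to n.
        switched = (beta[t][:, None] * alpha[t - 1]).sum(axis=0)
        alpha[t, :, 0] = stay[:, 0]
        for n in range(1, N + 1):
            alpha[t, :, n] = stay[:, n] + pi_C[t] * switched[n - 1]
        alpha[t] /= alpha[t].sum() + 1e-12      # normalization
    return alpha

# pi^H(o_t | h_t) with at most N switches is then the marginal
# alpha.sum(axis=2), matching the subset sum described above.
\end{verbatim}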
\section{Related Work}\label{sec:related}
Hierarchy has been investigated in many forms in reinforcement learning to improve both the data gathering and the data fitting aspects of learning.
Goal-based approaches commonly define a grounded interface between high- and low-level policies. The high level acts by providing goals to the low level, which is trained to achieve these goals \citep{dayan1993feudal,levy2017learning,nachum2018near,nachum2018data,vezhnevets2017feudal}, effectively generating separate objectives and improving exploration.
These methods have been able to overcome very sparse reward domains but commonly require domain knowledge to define the interface. In addition, a hand-crafted interface can limit expressiveness of achievable behaviors.
Non-crafted, emergent interfaces within policies have been investigated from an RL-as-inference perspective via policies with continuous latent variables \citep{haarnoja2018latent,hausman2018learning,heess2016learning,igl2019multitask,tirumala2019exploiting,teh2017distral}. Related to these approaches, we provide a probabilistic inference perspective to off-policy option learning and benefit from efficient dynamic programming inference procedures.
We furthermore build on the related idea of information asymmetry \citep{pinto2017asymmetric,galashov2018information,tirumala2019exploiting}, i.e. providing a part of the observations only to a part of the model. The asymmetry can lead to an information bottleneck affecting the properties of learned low-level policies. We build on this intuition and demonstrate in the ablations in Section \ref{sec:ablations} how option diversity can be affected.
At its core, our work builds on and investigates the option framework \citep{precup2000temporal,sutton1999between},
which can be seen as describing policies with an autoregressive, discrete latent space.
Option policies commonly use a high-level controller to choose from a set of options or skills. These options additionally include termination conditions to enable a skill to represent temporally extended behavior.
Without termination conditions, options can be seen as equivalent to components under a mixture distribution, and this simplified formulation has been applied successfully in different methods \citep{agostini2010reinforcement,daniel2016hierarchical,wulfmeier2019regularized}.
Recent work has also investigated temporally extended low-level behaviours of fixed length \citep{frans2018meta,li2019sub,nachum2018data}, which do not learn the option duration or termination condition.
By enabling the optimization of option durations within the option framework, HO2~provides additional flexibility and removes the engineering effort of choosing the right hyperparameters.
The option framework has been further extended and improved for more practical application \citep{bacon2017option,harb2018waiting,harut2019termination,NIPS2005_2767,riemer2018learning,smith2018inference}.
HO2~relies on off-policy training and treats options as latent variables. This enables backpropagation through the option inference procedure and yields considerable improvements over approaches that rely on on-policy updates and learn only from the options actually executed.
Relatedly, IOPG \citep{smith2018inference} also takes an inference perspective but only reports on-policy results, which naturally have poorer data efficiency.
Finally, the benefits of options and other modular policy styles have also been applied in the supervised case for learning from demonstration \citep{fox2017multi, krishnan2017ddco,shiarlis2018taco}.
One important step to increase the robustness of option learning has been taken in \citep{zhang2019dac} by building on robust (on-policy) policy optimization with PPO \citep{schulman2017proximal}. HO2~has similar robustness benefits, but additionally improves data-efficiency by building on off-policy learning, hindsight inference of options, and additional trust-region constraints \citep{abdolmaleki2018maximum,wulfmeier2019regularized}. Related inference procedures have also been investigated in imitation learning \citep{shiarlis2018taco} as well as on-policy RL \citep{smith2018inference}.
In addition to inferring options in hindsight, off-policy learning enables us to assign rewards for multiple tasks, which has been successfully applied with flat, non-hierarchical policies \citep{andrychowicz2017hindsight,riedmiller2018learning, cabi2017intentional} and goal-based hierarchical approaches \citep{levy2017learning,nachum2018data}.
\section{Experiments}\label{sec:experiments}
In this section, we aim to answer a set of questions to better understand the contribution of different aspects to the performance of option learning - in particular with respect to the proposed method, HO2.
To start, in Section \ref{sec:single} we explore two questions: (1) How well does HO2 perform in comparison to existing option learning methods? and (2) How important is off-policy training in this context?
We use a set of common OpenAI gym \citep{Brockman2016OpenAIG} benchmarks to answer these questions.
In Section \ref{sec:multi} we ask: (3) How do action abstraction in mixture policies and the additional temporal abstraction brought by option policies individually impact performance?
We use more complex, pixel-based 3D robotic manipulation experiments to investigate these two aspects and evaluate scalability with respect to higher dimensional input and state spaces.
We also explore: (4) How does increased temporal consistency impact performance, particularly with respect to sequential transfer with pre-trained options?
Finally, we perform additional ablations in Section \ref{sec:ablations} to investigate the challenges of robust off-policy option learning and improve understanding of how options decompose behavior based on various environment and algorithmic aspects.
Across domains, we use HO2~to train option policies, RHPO \citep{wulfmeier2019regularized} for the reduced case of mixture-of-Gaussians policies with sampling of options at every timestep and MPO \citep{abdolmaleki2018relative} to train individual flat Gaussian policies - all based on critic-weighted maximum likelihood estimation for policy optimization.
\subsection{Comparison of Option Learning Methods} \label{sec:single}
We compare HO2~(with and without limits on option switches) against competitive baselines for option learning in common, feature-based continuous action space domains. HO2~outperforms baselines including Double Actor-Critic (DAC) \citep{zhang2019dac}, Inferred Option Policy Gradient (IOPG) \citep{smith2018inference} and Option-Critic (OC) \citep{bacon2017option}.
With PPO~\citep{schulman2017proximal}, we include a commonly used on-policy method for flat policies which in addition serves as the foundation for the DAC algorithm.
As demonstrated in Figure \ref{fig:openaigym}, HO2~performs better than or commensurate to existing option learning algorithms such as DAC, IOPG and Option-Critic as well as PPO.
Training mixture policies (via RHPO \citep{wulfmeier2019regularized}) without temporal abstraction slightly reduces both performance and sample efficiency but still outperforms on-policy methods in many cases.
Here, enabling temporal abstraction (even without explicitly maximizing it) provides an inductive bias to reduce the search space for the high-level controller and can lead to more data-efficient learning, such that HO2~even without constraints performs better than RHPO.
Finally, while less data-efficient than HO2, even off-policy learning alone with flat Gaussian policies (here MPO~\citep{abdolmaleki2018maximum}) can outperform current on-policy option algorithms, for example in the HalfCheetah and Swimmer domains while otherwise at least performing on par.
This emphasizes the importance of a strong underlying policy optimization method.
Using the switch constraints for increasing temporal abstraction from Section \ref{sec:temporal_abstraction} can provide small benefits in some tasks but has overall only a minor effect on performance. We further investigate this setting in sequential transfer in Section \ref{sec:sequential}.
It should be noted that, given the comparative simplicity of these tasks, considerable performance gains are already achieved with pure off-policy training.
In the next section, we study more complex domains to isolate additional gains from action and temporal abstraction.
\subsection{Ablations: Action Abstraction and Temporal Abstraction}
\label{sec:multi}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[trim=1in 0.2in 0.5in 0, clip,height = 0.175\textwidth]{figures/env2.png}~
\includegraphics[height = 0.175\textwidth]{figures/env9.png}\\
\includegraphics[trim=0.in 0.8in 0.in 0.in, clip,height = 0.19\textwidth]{figures/env8.png}
\includegraphics[height = 0.19\textwidth]{figures/env3.png}
\end{tabular}
\caption{Ball-In-Cup (top) and Stacking (bottom). Left: Environments. Right: Example agent observations.}
\label{fig:evs}
\end{figure}
We next ablate different aspects of HO2~on more complex simulated 3D robot manipulation tasks - stacking and the more dynamic ball-in-cup (BIC) - as displayed in Figure \ref{fig:evs}, based on robot proprioception and raw pixel inputs ($64 \times 64$ pixels; 2 cameras for BIC and 3 for stacking).
Since the performance of HO2~for training from scratch is relatively independent of switch constraints (Figure \ref{fig:openaigym}), we will simplify our figures by focusing on the base method.
To reduce data requirements, we use a set of common techniques to improve data-efficiency and accelerate learning for all methods. We apply multi-task learning with a related set of tasks to provide a curriculum,
with details in Appendix \ref{app:additiona_exps}. Furthermore, we assign rewards for all tasks to any generated transition data in hindsight to improve data efficiency and exploration \citep{andrychowicz2017hindsight,riedmiller2018learning,wulfmeier2019regularized, cabi2017intentional}.
Across all tasks, except for simple positioning and reach tasks (see Appendix \ref{app:additiona_exps}), action abstraction improves performance (mixture policies via RHPO versus flat Gaussian policies via MPO).
In particular the more challenging stacking tasks shown in Figure \ref{fig:multitask} intuitively benefit from shared sub-behaviors with easier tasks.
Finally, the introduction of temporal abstraction (option policies via HO2~vs mixture policy via RHPO) further improves both performance and sample efficiency, especially on the more complex stacking tasks.
The ability to learn explicit termination conditions, each of which can be understood as a binary classifier (continue or terminate the current option), rather than relying on the high-level controller alone as a classifier over all options, can considerably simplify the learning problem.
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width = 0.24\textwidth]{results/png/bic/BIC_4.png}
\includegraphics[width = 0.24\textwidth]{results/png/bic/BIC_0.png}\\
\includegraphics[width = 0.24\textwidth]{results/png/stack/pile1_5.png}
\includegraphics[width = 0.24\textwidth]{results/png/stack/pile1_6.png}
\end{tabular}
\vspace{-3pt}
\caption{Results for option policies, and ablations via mixture policies, and single Gaussian policies (respectively HO2, RHPO, and MPO) with pixel-based ball-in-cup (left) and pixel-based block stacking (right). All four tasks displayed use sparse binary rewards, such that the obtained reward represents the number of timesteps where the corresponding condition - such as the ball is in the cup - is fulfilled.
See Appendix \ref{app:details} for details and additional tasks.}
\label{fig:multitask}
\vspace{-8pt}
\end{figure}
\paragraph{Optimizing for Temporal Abstraction}
\label{sec:sequential}
There is a difference between simplifying the representation of temporal abstraction for the agent and explicitly maximizing it.
The ability to represent temporally abstract behavior in HO2~via the use of explicit termination conditions consistently helps in prior experiments.
However, these experiments show limited benefit when increasing temporal consistency (by limiting the number of switches following Section \ref{sec:temporal_abstraction}) for training from scratch.
In this section, we further evaluate temporal abstraction for sequential transfer with pre-trained options.
We first train low-level options for all tasks except for the most complex task in each domain by applying HO2. Next, given a set of pre-trained options, we only train the final task and compare training with and without maximizing temporal abstraction.
We use the domains from Section \ref{sec:multi}, block stacking and BIC.
As shown in Figure \ref{fig:sequential}, we can see that more consistent options lead to increased performance in the transfer domain. Intuitively, increased temporal consistency and fewer switches lead to a smaller search space from the perspective of the high-level controller.
While the same mechanism should also apply for training from scratch, we hypothesize that the added complexity of simultaneously learning the low-level behaviors (while maximizing temporal consistency) outweighs the benefits. Finding a set of options which not only solve a task but also represent temporally consistent behavior can be harder, and require more data, than just solving the task.
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[trim=3mm 0 0 0,clip,height = 0.158\textwidth]{results/png/sequential/bic_sequential.png}
\includegraphics[height = 0.158\textwidth]{results/png/sequential/bic_sequential_switchrate.png}\\
\includegraphics[height = 0.158\textwidth]{results/png/sequential/pile_sequential.png}
\includegraphics[height = 0.158\textwidth]{results/png/sequential/pile_sequential_switchrate.png}
\end{tabular}
\caption{The sequential transfer experiments for temporal abstraction show considerable improvements for limited switches. Top: BIC. Bottom: Stack. In addition, we visualize the actual agent option switch rate in the environment to directly demonstrate the constraint's effect.}
\label{fig:sequential}
\end{figure}
\subsection{Ablations: Off-policy, Robustness, Decomposition} \label{sec:ablations}
In this section, we investigate different algorithmic aspects to get a better understanding of the method, properties of the learned options, and how to achieve robust training in the off-policy setting.
\paragraph{Off-policy Option Learning}
In off-policy hierarchical RL, the low-level policy underlying an option can change after trajectories are generated.
This results in a non-stationarity for training the high-level controller.
In addition, including previously executed actions in the forward computation for component probabilities can introduce additional variance into the objective.
In practice, we find that removing the conditioning on low-level probabilities (the $\pi^L$ terms in Equation \ref{eq:dynamic1}) improves performance and stability.
The effect is displayed in Figure \ref{fig:ac}, where the conditioning of high-level component probabilities on
low-level action probabilities (see Section \ref{sec:method}) is detrimental.
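For illustration, a minimal sketch of a single posterior recursion step with and without the action-conditioning term (names are hypothetical):
\begin{verbatim}
import numpy as np

def posterior_step(prev, trans, pi_L_prob=None):
    """One step of the option posterior recursion (sketch).

    prev:      [M] posterior pi^H(o_{t-1} | h_{t-1})
    trans:     [M, M] transition p(o_t | s_t, o_{t-1})
    pi_L_prob: [M] likelihood pi^L(a_{t-1} | s_{t-1}, o_{t-1})
               of the executed action, or None to drop it.
    """
    if pi_L_prob is not None:
        prev = prev * pi_L_prob   # conditioning that adds variance
    unnorm = trans.T @ prev       # marginalize over previous option
    return unnorm / unnorm.sum()
\end{verbatim}
Calling the step with \texttt{pi\_L\_prob=None} corresponds to the variant that trains more stably in Figure \ref{fig:ac}.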
\begin{figure}[h]
\centering
\setlength{\tabcolsep}{1pt}
\begin{tabular}{cc}
\includegraphics[trim=.1in 0 17.5in 0,clip,height = 0.165\textwidth]{results/png/gym_ac.png}
\includegraphics[trim=12in 0 5.7in 0,clip,height = 0.165\textwidth]{results/png/gym_ac.png}
\end{tabular}
\caption{Results on OpenAI gym with/without option probabilities being conditioned on past actions.
}
\label{fig:ac}
\end{figure}
We additionally evaluate this effect in the on-policy setting in Appendix~\ref{sec:off_policy_exps} and find its impact to be diminished, demonstrating the connection between the effect and an off-policy setting.
While we apply this simple heuristic for HO2, the problem has led to various off-policy corrections for goal-based HRL \citep{nachum2018data,levy2017learning}, which provide a valuable direction for future work.
\paragraph{Trust-regions and Robustness}
Previous work has shown the benefits of applying trust-region constraints for policy updates of non-hierarchical policies \citep{schulman2015trust, abdolmaleki2018maximum}.
In this section, we vary the strength of constraints on the option probability updates (both termination conditions $\beta$ and the high-level controller $\pi_C$).
As displayed in Figure~\ref{fig:trustregion}, the approach is robust across multiple orders of magnitude, but very
weak or strong constraints can considerably degrade performance.
Note that a high value is essentially equal to not using a constraint and causes very low performance. Therefore, option learning here relies strongly on trust-region constraints. Further experiments can be found in Appendix~\ref{sec:trust_region_exps}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[trim=3mm 3mm 0 0,clip,width=0.23\textwidth]{results/png/trustregion/pile1_tr_2.png}
\includegraphics[trim=3mm 3mm 0 0,clip,width=0.23\textwidth]{results/png/trustregion/pile1_tr_6.png}
\end{tabular}
\caption{Block stacking results for two tasks with different trust-region constraints. Note that the importance of constraints increases for more complex tasks.}
\label{fig:trustregion}
\end{figure}
\paragraph{Decomposition and Option Clustering}
To investigate how HO2~uses its capacity and decomposes behavior into options, we apply it to a variety of simple and interpretable locomotion tasks.
In these tasks, the agent body (``Ball'', ``Ant'', or ``Quadruped'') must go to one of three targets in a room, with the task specified by the target locations and a selected target index.
As shown for the ``Ant'' case in Figure~\ref{fig:predicate_decomp}, we find that option decomposition intuitively depends on both the task properties and algorithm settings. In particular \textit{information asymmetry (IA)}, achieved by providing task information only to the high-level controller, can address degenerate solutions and lead to increased diversity with respect to options (as shown by the histogram over options) and more specialized options (represented by the clearer clustering of samples in action space).
We can measure this quantitatively, using (1) the Silhouette score, a measure of clustering accuracy based on inter- and intra-cluster distances\footnotemark; and (2) entropy over the option histogram, to quantify diversity.
These metrics are reported in Table~\ref{table:predicates_quant_main} for all bodies, with and without information asymmetry.
The results show that for all cases, IA leads to greater option diversity and clearer separation of option clusters with respect to action space, state space, and task.
\footnotetext{The silhouette score is a value in $[-1, 1]$ with higher values indicating cluster separability. We note that the values obtained in this setting do not correspond to high \textit{absolute} separability, as multiple options can be used to model the same skill or behavior abstraction. We are instead interested in the \textit{relative} clustering score for different scenarios.}
More extensive experiments and discussion can be found in Appendix~\ref{sec:locomotion}, including additional quantitative and qualitative results for the other bodies and scenarios.
To summarize, the analyses yield a number of relevant observations, showing that (1) for simpler bodies like a ``Ball'', the options are interpretable (forward torque, and turning left/right at different rates); and (2) applying the switch constraint introduced in Section~\ref{sec:temporal_abstraction} leads to increased temporal abstraction without reducing the agent's ability to solve the task.
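The entropy and silhouette metrics used here can be computed with standard libraries; a minimal sketch under an assumed data layout:
\begin{verbatim}
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import silhouette_score

def option_metrics(actions, options, num_options):
    """actions: [N, action_dim] executed actions;
    options: [N] integer option ids. The silhouette score
    requires at least two distinct options to be present."""
    counts = np.bincount(options, minlength=num_options)
    diversity = entropy(counts / counts.sum())   # option entropy
    separability = silhouette_score(actions, options)
    return diversity, separability
\end{verbatim}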
\begin{figure}[h]
\includegraphics[scale = 0.19, trim=3mm 0 0 0, clip]{results/predicates/ant_hist_noIA_main.png}
\includegraphics[scale = 0.21, trim=0 0 0 3mm, clip]{results/predicates/ant_actions_noIA_main.png}
\includegraphics[scale = 0.19, trim=3mm 0 0 0, clip]{results/predicates/ant_hist_fullIA_main.png}
\includegraphics[scale = 0.21, trim=0 0 0 3mm, clip]{results/predicates/ant_actions_fullIA_main.png}
\caption{Analysis on Ant locomotion tasks, showing histogram over options, and t-SNE scatter plots in action space colored by option. Left: without IA. Right: with IA. Agents with IA use more components and show clearer option clustering in the action space.}
\label{fig:predicate_decomp}
\end{figure}
\begin{table}[!h]
\centering
\scalebox{0.65}{
\begin{tabular}{ c c | c c c c }
\hline \\ [-1.5ex]
\multicolumn{2}{c}{\bf{Scenario}} & \bf{Option entropy} & \bf{s (action)} & \bf{s (state)} & \bf{s (task)} \\
\hline \\ [-1.5ex]
\multirow{2}{*}{Ball} & No IA & $1.80 \pm 0.21$ & $-0.30 \pm 0.04$ & $-0.25 \pm 0.14$ & $-0.13 \pm 0.05$ \\
& With IA & $2.23 \pm 0.03$ & $-0.13 \pm 0.04$ & $-0.11 \pm 0.04$ & $-0.05 \pm 0.00$ \\
\cline{2-6} \\ [-1.5ex]
\multirow{2}{*}{Ant} & No IA & $1.60 \pm 0.08$ & $-0.12 \pm 0.02$ & $-0.15 \pm 0.07$ & $-0.08 \pm 0.03$ \\
& With IA & $2.22 \pm 0.04$ & $-0.05 \pm 0.01$ & $-0.05 \pm 0.01$ & $-0.05 \pm 0.01$ \\
\cline{2-6} \\ [-1.5ex]
\multirow{2}{*}{Quad} & No IA & $1.55 \pm 0.29$ & $-0.07 \pm 0.04$ & $-0.12 \pm 0.03$ & $-0.11 \pm 0.02$\\
& With IA & $2.23 \pm 0.04$ & $-0.03 \pm 0.03$ & $-0.03 \pm 0.00$ & $-0.05 \pm 0.01$ \\
\hline
\end{tabular}}
\vspace{1ex}
\caption{Quantitative results
indicating the diversity of options used (entropy), and clustering accuracy in action, state, and task space (silhouette score $s$), with and without information asymmetry (IA). Switching constraints are applied in all cases.
Higher values indicate greater separability by option / component.}
\label{table:predicates_quant_main}
\end{table}
\section{Additional Derivations \label{app:additiona_derivations}}
In this section we explain the derivations for training option policies with options parameterized as Gaussian distributions. Each policy improvement step is split into two parts: a non-parametric and a parametric update.
\subsection{Non-parametric Option Policy Update \label{app:nonparam}}
In order to obtain the non-parametric policy improvement, we solve the following constrained optimization problem:
\begin{equation*}
\begin{aligned}
& \max_q \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathbb{E}_{a_t,o_t \sim q} \big[Q_\phi(s_t,a_t,o_t)] \big] \\
& s.t. \mathbb{E}_{h_t \sim p(h_t)} \big[ \textrm{KL}(q(\cdot|h_t) \| \pi_\theta(\cdot|h_t) ) \big] < \epsilon_E\\
& s.t. \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathbb{E}_{q(a_t,o_t | h_t)} \big [1 \big] \big] = 1.
\end{aligned}
\end{equation*}
for each step $t$ of a trajectory, where $h_{t} = \{s_t, a_{t-1}, s_{t-1},... a_0,s_0\}$ represents the history of states and actions and $p(h_t)$ describes the distribution over histories for timestep $t$, which in practice are approximated via the use of a replay buffer $\mathcal{D}$. When sampling $h_t$, the state $s_t$ is the first element of the history. The inequality constraint describes the maximum allowed KL divergence between intermediate update and previous parametric policy, while the equality constraint simply ensures that the intermediate update represents a normalized distribution.
Subsequently, in order to render the following derivations more intuitive, we replace the expectations and explicitly use integrals.
The Lagrangian $L(q,\eta,\gamma)$ can now be formulated as
\begin{align}\label{eq:completelagrange1}
L(q,\eta,\gamma) = \iiint p(h_t) q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
\hfill\nonumber \diff o_t \diff a_t \diff h_t \\
\nonumber +\eta \bigg(\epsilon_E - \iiint p(h_t)q(a_t,o_t|h_t)
\log \frac{q(a_t,o_t|h_t)}{\pi_\theta(a_t,o_t|h_t)}\\
\nonumber \hfill \diff o_t \diff a_t \diff h_t \bigg)\\
\nonumber + \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align}
Next, to maximize the Lagrangian with respect to the primal variable $q$, we compute its derivative:
\begin{align*}
\frac{\partial L(q,\eta,\gamma)}{{\partial q}} = Q_\phi(s_t,a_t,o_t) - \eta\log q(a_t,o_t|h_t) \\
\nonumber+\eta\log \pi_\theta(a_t,o_t|h_t) - \eta - \gamma.
\end{align*}
In the next step, we can set this derivative to zero and rearrange terms to obtain
\begin{align*}
q(a_t,o_t|h_t) = \pi_\theta(a_t,o_t|h_t)\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\\
\exp\left(-\frac{\eta+\gamma}{\eta}\right).
\end{align*}
The last exponential term represents a normalization constant for $q$, which we can formulate as
\begin{eqnarray}
\begin{aligned}
\frac{\eta+\gamma}{\eta}= \log\bigg(&\iint \pi_\theta(a_t,o_t|h_t)\label{eq:norm}\\
&\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\diff o_t \diff a_t\bigg).
\end{aligned}
\end{eqnarray}
In order to obtain the dual function $g(\eta)$, we insert the solution for the primal variable into the Lagrangian in Equation \ref{eq:completelagrange1} which yields
\begin{align*}
L(q,\eta,\gamma) = \iiint p(h_t)q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
\hfill \diff o_t \diff a_t \diff h_t \\
+\eta\bigg(\epsilon_E - \iiint p(h_t) q(a_t,o_t|h_t)\\
\log {\scriptscriptstyle \frac{\pi_\theta(a_t,o_t|h_t)\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\exp\left(-\frac{\eta+\gamma}{\eta}\right)}{\pi_\theta(a_t,o_t|h_t)} } \diff o_t \diff a_t \diff h_t \bigg) \\
+ \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align*}
We expand the equation and rearrange to obtain
\begin{align*}
L(q,\eta,\gamma) &=\iiint p(h_t) q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
& \hfill ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \diff o_t \diff a_t \diff h_t \\
& - \eta\iiint p(h_t) q(a_t,o_t|h_t)\Big[\frac{Q_\phi(s_t,a_t,o_t)}{\eta} \\
&\hfill+ \log \pi_\theta(a_t,o_t|h_t) - \frac{\eta+\gamma}{\eta}\Big]\diff o_t \diff a_t \diff h_t \\& + \eta\epsilon_E
+\eta\iiint p(h_t) q(a_t,o_t|h_t)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\log \pi_\theta(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \\ & + \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align*}
In the next step, most of the terms cancel, and after additional rearranging we obtain
\begin{align*}
L(q,\eta,\gamma) = \eta\epsilon_E + \eta\int p(h_t)\frac{\eta+\gamma}{\eta}\diff h_t.
\end{align*}
We have already calculated the term inside the integral in Equation \ref{eq:norm}, which we now insert to obtain
\begin{align}
g(\eta) =& \min_q L(q,\eta,\gamma)\label{eq:dual_eta}\\
=& \eta\epsilon_E+\eta\int p(h_t)\log\bigg(\iint \pi_\theta(a_t,o_t|h_t)\nonumber\\
&~~~~~~~~~~~~~~~~~~~\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\diff o_t \diff a_t \bigg)\diff h_t \nonumber\\
=& \eta\epsilon_E+\eta \mathbb{E}_{h_t\sim p(h_t)} \Big[ \log\bigg(\mathbb{E}_{a_t,o_t \sim \pi_\theta}\Big[\nonumber\\
&~~~~~~~~~~~~~~~~~~~~ \exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\Big]\bigg)\Big]. \nonumber
\end{align}
The dual in Equation \ref{eq:dual_eta} can finally be minimized with respect to $\eta$ based on samples from the replay buffer and policy.
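Concretely, a sample-based estimate of the dual of Equation \ref{eq:dual_eta} can be written as follows; this is a sketch with assumed array names, and in practice $\eta$ is updated by gradient descent alongside the other parameters, as in Algorithm \ref{alg:learner}.
\begin{verbatim}
import numpy as np

def dual(eta, q_samples, epsilon):
    """g(eta); q_samples is a [B, J] array of Q_phi values
    for J action/option samples per history."""
    adv = q_samples / eta
    m = adv.max(axis=1)  # stabilized log-mean-exp over samples
    lme = m + np.log(np.exp(adv - m[:, None]).mean(axis=1))
    return eta * epsilon + eta * lme.mean()

# Alternatively, a bounded scalar minimization, e.g.:
# from scipy.optimize import minimize_scalar
# eta = minimize_scalar(dual, bounds=(1e-3, 1e3),
#                       args=(Q, eps), method="bounded").x
\end{verbatim}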
\subsection{Parametric Option Policy Update \label{app:param}}
After obtaining the non-parametric policy improvement, we can align the parametric option policy to the current non-parametric policy.
As the non-parametric policy is represented by a set of samples from the parametric policy with additional weighting, this step effectively employs a type of critic-weighted maximum likelihood estimation. In addition, we introduce regularization based on a distance function $\mathcal{T}$ which has a trust-region effect for the update and stabilizes learning.
\begin{eqnarray*}
\begin{aligned}
\theta_{new} =&\arg \min_{\theta} \mathbb{E}_{h_t \sim p(h_t)}\Big[ \mathrm{KL}\big( q(a_t,o_t | h_t) \| \pi_{\theta}(a_t,o_t | h_t) \big) \Big] \\
=&\arg \min_{\theta} \mathbb{E}_{h_t \sim p(h_t)}\Big[ \mathbb{E}_{a_t,o_t \sim q} \Big[ \log q(a_t,o_t | h_t) \\
\nonumber &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-\log \pi_{\theta}(a_t,o_t | h_t) \Big] \Big] \\
=& \argmax_{\theta}
\mathbb{E}_{h_t \sim p(h_t), a_t, o_t \sim q}\Big[ \log \pi_{\theta}(a_t, o_t |h_t) \Big], \quad \\&\textrm{s.t.}\:\mathbb{E}_{h_t \sim p(h_t)} \Big[ \mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t) | \pi_{\theta}(\cdot|h_t)) \Big] < \epsilon_M,
\end{aligned}
\end{eqnarray*}
where $h_t \sim p(h_t)$ is a trajectory segment, which in practice is sampled from the dataset $\mathcal{D}$, and $\mathcal{T}$ is an arbitrary distance function between the new and the previous policy. $\epsilon_M$ denotes the allowed change for the policy. We again employ Lagrangian relaxation to enable gradient-based optimization of the objective, yielding the following primal:
\begin{align}\label{eq:lagrange}
\max_\theta \min_{\alpha > 0} L(\theta,\alpha) = \mathbb{E}_{ h_t \sim p(h_t), a_t,o_t \sim q}\Big[ \log \pi_{\theta}(a_t,o_t|h_t) \Big] \nonumber\\
+\alpha\Big(\epsilon_M - \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t) , \pi_{\theta}(\cdot|h_t))\big]\Big).
\end{align}
We can solve for $\theta$ by iterating the inner and outer optimization programs independently. In practice we find that it is most efficient to update both in parallel.
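A minimal sketch of one such parallel primal-dual step, with scalar placeholders for the batch estimates of the weighted log-likelihood and of the distance $\mathcal{T}$ defined below (the exact implementation may differ):
\begin{verbatim}
def primal_dual_step(log_lik, distance, alpha, eps_M, lr=1e-3):
    """One parallel step on the Lagrangian of Eq. (lagrange).

    log_lik:  batch estimate of the weighted log-likelihood
    distance: batch estimate of E[T(pi_new, pi_old)]
    alpha:    current Lagrange multiplier (> 0)
    """
    # Policy objective: maximized w.r.t. theta, alpha held fixed.
    policy_objective = log_lik + alpha * (eps_M - distance)
    # Dual step: alpha grows when the trust-region constraint is
    # violated (distance > eps_M) and shrinks otherwise.
    alpha = max(1e-6, alpha - lr * (eps_M - distance))
    return policy_objective, alpha
\end{verbatim}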
We also define the following distance function between old and new option policies
\begin{align*}
\mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t),\pi_{\theta}(\cdot|h_t)) = \mathcal{T}_{H}(h_t) +\mathcal{T}_{T}(h_t) + \mathcal{T}_{L}(h_t)
\end{align*}
\begin{align*}
\mathcal{T}_{H}(h_t) = \textrm{KL}(\textrm{Cat}(\{\alpha_{\theta_{new}}^{j}(h_t)\}_{j=1...M}) \|\\
\textrm{Cat}(\{\alpha_{\theta}^{j}(h_t)\}_{j=1...M}))\\
\mathcal{T}_{T}(h_t) = \frac{1}{M}\sum_{i=1}^{M} \textrm{KL}(\textrm{Cat}(\{\beta_{\theta_{new}}^{ij}(h_t)\}_{j=1...2}) \| \\ \textrm{Cat}(\{\beta_{\theta}^{ij}(h_t)\}_{j=1...2}))\\
\mathcal{T}_{L}(h_t) = \frac{1}{M}\sum_{j=1}^{M}\textrm{KL}(\mathcal{N}(\mu^j_{\theta_{new}}(h_t),\Sigma^j_{\theta_{new}}(h_t)) \| \\ \mathcal{N}(\mu^j_{\theta}(h_t),\Sigma^j_{\theta}(h_t)))
\end{align*}
where $\mathcal{T}_{H}$ evaluates the KL between the categorical distributions of the high-level controller, $\mathcal{T}_{T}$ is the average KL between the categorical distributions of all termination conditions, and $\mathcal{T}_{L}$ corresponds to the average KL across Gaussian components. In practice, we can exert additional control over the convergence of model components by applying different $\epsilon_M$ to different model parts (high-level controller, termination conditions, options).
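For concreteness, a sketch of evaluating $\mathcal{T}$ for a single history, assuming categorical parameters $\alpha$, termination probabilities $\beta$ and diagonal Gaussian components (all shapes are assumptions):
\begin{verbatim}
import numpy as np

def kl_cat(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def kl_gauss_diag(mu1, var1, mu2, var2):
    # KL( N(mu1, diag var1) || N(mu2, diag var2) )
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def distance_T(a_new, a_old, b_new, b_old,
               mu_new, var_new, mu_old, var_old):
    """a_*: [M] high-level categorical; b_*: [M, 2] terminations;
    mu_*/var_*: [M, action_dim] Gaussian components."""
    M = a_new.shape[0]
    T_H = kl_cat(a_new, a_old)
    T_T = np.mean([kl_cat(b_new[i], b_old[i]) for i in range(M)])
    T_L = np.mean([kl_gauss_diag(mu_new[j], var_new[j],
                                 mu_old[j], var_old[j])
                   for j in range(M)])
    return T_H + T_T + T_L
\end{verbatim}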
\subsection{Transition Probabilities for Option and Switch Indices}
The transitions for option $o$ and switch index $n$ are given by:
\begin{align}
p(o_t,n_t|s_t,o_{t-1},n_{t-1}) = ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ &\nonumber\\
\begin{cases} \label{eq:optionswitch_transitions}
(1-\beta(s_t,o_{t-1})) & \text{if } n_{t} = n_{t-1}, o_t=o_{t-1} \\
\beta(s_t,o_{t-1}) \pi^C(o_t|s_t) & \text{if } n_{t} = n_{t-1}+1\\
0 & \text{otherwise}
\end{cases}
\end{align}
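A direct transcription of these transition probabilities of Equation \ref{eq:optionswitch_transitions} reads (sketch; array names assumed):
\begin{verbatim}
def option_switch_transition(beta_t, pi_C_t, o_prev, n_prev, o, n):
    """beta_t: [M] termination probabilities beta(s_t, o_prev)
    pi_C_t: [M] high-level probabilities pi^C(o | s_t)"""
    if n == n_prev and o == o_prev:   # option continues
        return 1.0 - beta_t[o_prev]
    if n == n_prev + 1:               # termination and re-selection
        return beta_t[o_prev] * pi_C_t[o]
    return 0.0
\end{verbatim}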
\section{Additional Experiments \label{app:additiona_exps}}
\subsection{Decomposition and Option Clustering}
\label{sec:locomotion}
We further deploy HO2~on a set of simple locomotion tasks, where the goal is for an agent to move to one of three randomized target locations in a square room.
These are specified as a set of target locations and a task index to select the target of interest.
The main research questions we aim to answer (both qualitatively and quantitatively) are: (1) How do the discovered options specialize and represent different behaviors?; and (2) How is this decomposition affected by variations in the task, embodiment, or
algorithmic properties of the agent?
To answer these questions, we investigate a number of variations:
\begin{itemize}
\item Three bodies: a quadruped with two (``Ant'') or three (``Quad'') torque-controlled joints on each leg, and a rolling ball (``Ball'') controlled by applying yaw and forward roll torques.
\item With or without \textit{information asymmetry} (IA) between high- and low-level controllers, where the task index and target positions are withheld from the options and only provided to the categorical option controller.
\item With or without a limited number of switches in the optimization.
\end{itemize}
Information asymmetry (IA), in particular, has recently been shown to be effective for learning general skills~\citep{galashov2018information}: by withholding task-information from low-level options, they can learn task-agnostic, temporally-consistent behaviors that can be composed by the option controller to solve a task.
This mirrors the setup in the aforementioned Sawyer tasks, where the task information is only fed to the high-level controller.
For each of the different cases, we qualitatively evaluate the trained agent over $100$ episodes, and generate histograms over the different options used, and scatter plots to indicate how options cluster the state/action spaces and task information.
We also present quantitative measures (over $5$ seeds) to accompany these plots, in the form of (1) Silhouette score, a measure of clustering accuracy based on inter- and intra-cluster distances\footnotemark; and (2) entropy over the option histogram, to quantify diversity.
The quantitative results are shown in Table~\ref{table:predicates_quant}, and the qualitative plots are shown in Figure~\ref{fig:predicates_qual}.
Details and images of the environment are in Section~\ref{sec:locomotion_details}.
\footnotetext{The silhouette score is a value in $[-1, 1]$ with higher values indicating cluster separability. We note that the values obtained in this setting do not correspond to high \textit{absolute} separability, as multiple options can be used to model the same skill or behavior abstraction. We are instead interested in the \textit{relative} clustering score for different scenarios.}
\begin{table*}[h!]
\centering
\scalebox{0.95}{
\begin{tabular}{ c | c | c c c}
& \multirow{2}{*}{\bf{Histogram over options}} & \multicolumn{3}{c}{\bf{t-SNE scatter plots}} \\
& & \bf{Actions} & \bf{States} & \bf{Task} \\
\hline \\ [-1.5ex]
\rotatebox{90}{\bf{Ball, no IA}} & \includegraphics[scale = 0.19]{results/predicates/ball_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_task_noIA.png} \\
\rotatebox{90}{\bf{Ball, with IA}} & \includegraphics[scale = 0.19]{results/predicates/ball_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_task_fullIA.png} \\
\rotatebox{90}{\bf{Ant, no IA}} & \includegraphics[scale = 0.19]{results/predicates/ant_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_task_noIA.png} \\
\rotatebox{90}{\bf{Ant, with IA}} & \includegraphics[scale = 0.19]{results/predicates/ant_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_task_fullIA.png} \\
\rotatebox{90}{\bf{Quad, no IA}} & \includegraphics[scale = 0.19]{results/predicates/quad_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_task_noIA.png} \\
\rotatebox{90}{\bf{Quad, with IA}} & \includegraphics[scale = 0.19]{results/predicates/quad_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_task_fullIA.png} \\
\vspace{1ex}
\end{tabular}}
\captionof{figure}{Qualitative results for the three bodies (Ball, Ant, Quad) without limited switches, both with and without IA, obtained over $100$ evaluation episodes. \textbf{Left}: the histogram over different options used by each agent; \textbf{Centre to right}: scatter plots of the action space, state space, and task information, colored by the corresponding option selected. Each of these spaces has been projected to $2D$ using t-SNE, except for the two-dimensional action space for Ball, which is plotted directly. For each case, care has been taken to choose a median / representative model out of $5$ seeds. }
\label{fig:predicates_qual}
\end{table*}
\begin{table*}[h]
\centering
\scalebox{0.7}{
\begin{tabular}{ c | c c | c c c c c }
\hline \\ [-1.5ex]
& \multicolumn{2}{c}{\bf{Scenario}} & \bf{Option entropy} & \bf{Switch rate} & \bf{Cluster score (actions)} & \bf{Cluster score (states)} & \bf{Cluster score (tasks)} \\
\hline \\ [-1.5ex]
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf{Regular}}} & \multirow{2}{*}{Ball} & No IA & $2.105 \pm 0.074$ & $0.196 \pm 0.010$ & $-0.269 \pm 0.058$ & $-0.110 \pm 0.025$ & $-0.056 \pm 0.011$ \\
& & With IA & $2.123 \pm 0.066$ & $0.346 \pm 0.024$ & $-0.056 \pm 0.024$ & $-0.164 \pm 0.051$ & $-0.057 \pm 0.008$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Ant} & No IA & $1.583 \pm 0.277$ & $0.268 \pm 0.043$ & $-0.148 \pm 0.034$ & $-0.182 \pm 0.068$ & $-0.075 \pm 0.011$ \\
& & With IA & $2.119 \pm 0.073$ & $0.303 \pm 0.019$ & $-0.053 \pm 0.021$ & $-0.066 \pm 0.024$ & $-0.052 \pm 0.006$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Quad} & No IA & $1.792 \pm 0.127$ & $0.336 \pm 0.019$ & $-0.078 \pm 0.064$ & $-0.113 \pm 0.035$ & $-0.089 \pm 0.050$ \\
& & With IA & $2.210 \pm 0.037$ & $0.403 \pm 0.014$ & $0.029 \pm 0.029$ & $-0.040 \pm 0.003$ & $-0.047 \pm 0.006$ \\
\hline \\ [-1.5ex]
\multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox{2cm}{\bf{Limited Switches}}}} & \multirow{2}{*}{Ball} & No IA & $1.804 \pm 0.214$ & $0.020 \pm 0.009$ & $-0.304 \pm 0.040$ & $-0.250 \pm 0.135$ & $-0.131 \pm 0.049$ \\
& & With IA & $2.233 \pm 0.027$ & $0.142 \pm 0.015$ & $-0.132 \pm 0.035$ & $-0.113 \pm 0.043$ & $-0.053 \pm 0.003$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Ant} & No IA & $1.600 \pm 0.076$ & $0.073 \pm 0.014$ & $-0.124 \pm 0.017$ & $-0.155 \pm 0.067$ & $-0.084 \pm 0.034$ \\
& & With IA & $2.222 \pm 0.043$ & $0.141 \pm 0.015$ & $-0.052 \pm 0.011$ & $-0.054 \pm 0.014$ & $-0.050 \pm 0.007$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Quad} & No IA & $1.549 \pm 0.293$ & $0.185 \pm 0.029$ & $-0.075 \pm 0.036$ & $-0.126 \pm 0.030$ & $-0.112 \pm 0.022$\\
& & With IA & $2.231 \pm 0.042$ & $0.167 \pm 0.025$ & $-0.029 \pm 0.029$ & $-0.032 \pm 0.004$ & $-0.053 \pm 0.009$ \\
\hline
\end{tabular}}
\vspace{1ex}
\caption{Quantitative results
indicating the diversity of options used (entropy), and clustering accuracy in action, state, and task space (silhouette score), with and without information asymmetry (IA), and with or without limited number of switches.
Higher values indicate greater separability by option / component.}
\label{table:predicates_quant}
\end{table*}
The results show a number of trends.
Firstly, the usage of IA leads to a greater diversity of options used, across all bodies.
Secondly, with IA, the options tend to lead to specialized actions, as demonstrated by the clearer option separation in action space.
In the case of the $2D$ action space of the ball, the options correspond to turning left or right (y-axis) at different forward torques (x-axis).
Thirdly, while the simple Ball can learn these high-level body-agnostic behaviors, the options for more complex bodies have greater switch rates, suggesting that the learned behaviors may relate to lower-level motor behaviors over a shorter timescale.
Lastly, limiting the number of switches during marginalization does indeed lead to a lower switch rate between options, without hampering the ability of the agent to complete the task.
\subsection{Action and Temporal Abstraction Experiments}
The complete results for all pixel and proprioception based multitask experiments for ball-in-cup and stacking can be respectively found in Figures \ref{fig:bic_all} and \ref{fig:stack_all}.
Both RHPO and HO2~outperform a simple Gaussian policy trained via MPO. HO2~additionally improves performance over mixture policies (RHPO) demonstrating that the ability to learn temporal abstraction proves beneficial in these domains. The difference grows as the task complexity increases and is particularly pronounced for the final stacking tasks.
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_1.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_2.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_3.png}\\
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_4.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_0.png}
\end{tabular}
\caption{Complete results on pixel-based ball-in-cup experiments.}
\label{fig:bic_all}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_0.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_1.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_2.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_3.png} \\
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_4.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_5.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_6.png}
\end{tabular}
\caption{Complete results on pixel-based stacking experiments.}
\label{fig:stack_all}
\end{figure*}
\subsection{Task-agnostic Terminations}
Viewing options as task-independent skills, with termination conditions being part of each skill, leads to termination conditions that are also task-independent. We show that, at least in this limited set of experiments, the alternative perspective of task-dependent termination conditions - i.e. with access to task information - which can be understood as part of the high-level mechanism for activating options, improves performance.
Intuitively, by removing task information from the termination conditions, we constrain the space of solutions, which initially accelerates training slightly but limits final performance. This additionally shows that while we benefit from sharing options across tasks, each task gains from controlling the length of these options independently.
Based on these results, the termination conditions across all other multi-task experiments are conditioned on the active task.
The complete results for all experiments with task-agnostic terminations can be found in Figure \ref{fig:terminations}.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_0.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_1.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_2.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_3.png}\\
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_4.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_5.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_6.png}
\end{tabular}
\caption{Complete results on multi-task block stacking with and without conditioning termination conditions on tasks.}
\label{fig:terminations}
\end{figure*}
\subsection{Off-Policy Option Learning}
\label{sec:off_policy_exps}
In order to train in a more on-policy regime, we reduce the size of the replay buffer by two orders of magnitude and increase the ratio between data generation (actor steps) and data fitting (learner steps) by one order of magnitude. The resulting algorithm is run without any additional hyperparameter tuning to provide an insight into the effect of conditioning on action probabilities under options in the inference procedure.
We can see that in the on-policy case the impact of this change is less pronounced. Across all cases, we were unable to generate significant performance gains by including action conditioning into the inference procedure.
The complete results for all experiments with and without the action-conditional inference procedure can be found in Figure \ref{fig:ac_onpol}.
\begin{figure*}[h]
\centering
\begin{tabular}{c}
\includegraphics[width = 0.98\textwidth]{results/png/gym_ac.png}\\
\includegraphics[width = 0.98\textwidth]{results/png/gym_ac_onpol.png}\\
\end{tabular}
\caption{Complete results on OpenAI gym with and without conditioning component probabilities on past executed actions. For the off-policy (top) and on-policy case (bottom). The on-policy approaches uses data considerably less efficiently and the x-axis is correspondingly adapted. }
\label{fig:ac_onpol}
\end{figure*}
\subsection{Trust-region Constraints}
\label{sec:trust_region_exps}
The complete results for all trust-region ablation experiments can be found in Figure \ref{fig:trustregions_all}.
With the exception of very high or very low constraints, the approach trains robustly, but performance drops considerably when we remove the constraint fully.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_0.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_1.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_2.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_3.png}\\
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_4.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_5.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_6.png}
\end{tabular}
\caption{Complete results on block stacking with varying trust-region constraints for both termination conditions $\beta$ and the high-level controller $\pi_C$. }
\label{fig:trustregions_all}
\end{figure*}
\subsection{Single Time-Step vs Multi Time-Step Inference}
\label{sec:sampling_exps}
To investigate the impact of probabilistic inference of posterior option distributions $\pi_H(o_t|h_t)$ along the whole sampled trajectory instead of using sampling-based approximations until the current timestep, we perform additional ablations displayed in Figure \ref{fig:sampling_options}. Note that we are required to perform probabilistic inference for at least one step to use backpropagation through the inference step to update our policy components. Any completely sampling-based approach would require a different policy optimizer (e.g. via likelihood ratio or reparametrization trick) which would introduce additional compounding effects.
We compare HO2~with an ablated version where we do not compute the option probabilities along the trajectory following Equation \ref{eq:dynamic1} but instead use an approximation with only concrete option samples propagating across timesteps for all steps until the current step.
To generate action samples, we therefore sample options for every timestep along a trajectory without keeping a complete distribution over options and sample actions only from the active option at every timestep.
To determine the likelihood of actions and options for every timestep, we rely on Equation \ref{eq:optiontransitions} based on the sampled options of the previous timestep.
By using samples and the critic-weighted update procedure from Equation \ref{eq:objective_pi}, we can only generate gradients for the policy at the current timestep instead of backpropagating through the whole inference procedure. We find that both variants - using samples of the executed options reloaded from the buffer as well as sampling new options during learning - can reduce performance depending on the domain. However, in the Hopper-v2 environment, sampling during learning performs slightly better than inferring options.
\begin{figure*}[h]
\centering
\begin{tabular}{c}
\includegraphics[width = .98\textwidth]{results/png/gym_samples.png}
\end{tabular}
\caption{Ablation results comparing inferred options with sampled options during learning (sampled) and during execution (executed). The ablation is run with five actors instead of a single one as used in the OpenAI gym experiments in order to generate results faster. }
\label{fig:sampling_options}
\end{figure*}
\section{Additional Experiment Details \label{app:details}}
\subsection{OpenAI Gym Experiments}
All experiments are run with asynchronous learner and actors; we use a single actor and report performance over the number of transitions generated. Following \citep{wulfmeier2019regularized}, both HO2~and RHPO use different biases for the initial mean of all options or mixture components - distributed between minimum and maximum action output. This provides a small but non-negligible benefit and supports specialization of individual options. In line with our baselines (DAC \citep{zhang2019dac}, IOPG \citep{smith2018inference}, Option Critic \citep{bacon2017option}), we use 4 options or mixture components for the OpenAI gym experiments. We run all experiments with 5 seeds and report mean and variance. The variant with limited switches allows 2 switches over a sequence length of 8; lower and higher values led to comparable results.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c||c|c|c}
Hyperparameters & HO2 & RHPO & MPO \\
\hline
Policy net & \multicolumn{3}{c}{256-256}\\
Number of actions samples& \multicolumn{3}{c}{20}\\
Q function net & \multicolumn{3}{c}{256-256}\\
Number of components & \multicolumn{2}{c}{4} & NA\\
$\epsilon$ &\multicolumn{3}{c}{0.1} \\
$\epsilon_{\mu}$ & \multicolumn{3}{c}{5e-4} \\
$\epsilon_{\Sigma}$ & \multicolumn{3}{c}{5e-5}\\
$\epsilon_{\alpha}$ & \multicolumn{2}{c}{1e-4} & NA\\
$\epsilon_{t}$ & 1e-4 & \multicolumn{2}{c}{NA}\\
Discount factor ($\gamma$) & \multicolumn{3}{c}{0.99} \\
Adam learning rate & \multicolumn{3}{c}{3e-4} \\
Replay buffer size & \multicolumn{3}{c}{2e6} \\
Target network update period & \multicolumn{3}{c}{200}\\
Batch size & \multicolumn{3}{c}{256}\\
Activation function & \multicolumn{3}{c}{elu}\\
Layer norm on first layer & \multicolumn{3}{c}{Yes}\\
Tanh on output of layer norm & \multicolumn{3}{c}{Yes}\\
Tanh on actions (Q-function) & \multicolumn{3}{c}{Yes} \\
Sequence length & \multicolumn{3}{c}{8}
\end{tabular}
\end{center}
\caption{Hyperparameters - OpenAI gym}
\label{tab:Single}
\end{table}
\subsection{Action and Temporal Abstraction Experiments}
Shared across all algorithms, we use 3-layer convolutional policy and Q-function torsos with [128, 64, 64] feature channels, [(4, 4), (3, 3), (3, 3)] as kernels and stride 2.
For all multitask domains, we build on information asymmetry and only provide task information as input to the high-level controller and termination conditions to create additional incentive for the options to specialize. The Q-function has access to all observations (see the corresponding tables in this section).
We follow \citep{riedmiller2018learning,wulfmeier2019regularized} and assign rewards for all possible tasks to trajectories when adding data to the replay buffer independent of the generating policy.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c||c|c|c}
Hyperparameters & HO2 & RHPO & MPO \\
\hline
Policy torso \\(shared across tasks) & \multicolumn{2}{c}{256} & 512\\
\begin{tabular}{@{}c@{}} Policy task-dependent \\ heads \end{tabular} &\multicolumn{2}{c}{ \begin{tabular}{@{}c@{}} 100 (cat.) \end{tabular}} & 200 \\
\begin{tabular}{@{}c@{}} Policy shared \\ heads \end{tabular} & \multicolumn{2}{c}{\begin{tabular}{@{}c@{}} 100 (comp.) \end{tabular}} & NA \\
\begin{tabular}{@{}c@{}} Policy task-dependent \\ terminations \end{tabular} &\begin{tabular}{@{}c@{}} 100 \\ (term.) \end{tabular} & NA & NA \\
$\epsilon_{\mu}$ & \multicolumn{3}{c}{1e-3} \\
$\epsilon_{\Sigma}$ & \multicolumn{3}{c}{1e-5}\\
$\epsilon_{\alpha}$ & \multicolumn{2}{c}{1e-4} & NA\\
$\epsilon_{t}$ & 1e-4 & \multicolumn{2}{c}{NA}\\
Number of action samples& \multicolumn{3}{c}{20}\\
Q function torso \\(shared across tasks) & \multicolumn{3}{c}{400}\\
Q function head \\(per task)& \multicolumn{3}{c}{300}\\
Number of components & \multicolumn{2}{c}{\begin{tabular}{@{}c@{}} number of tasks \end{tabular}} & NA\\
Replay buffer size & \multicolumn{3}{c}{1e6} \\
Target network \\ update period & \multicolumn{3}{c}{500}\\
Batch size & \multicolumn{3}{c}{256}\\
\end{tabular}
\end{center}
\caption{Hyperparameters. Values are taken from the OpenAI gym experiments with the above mentioned changes.}
\label{tab:MPOMulti}
\end{table}
\paragraph{Stacking}
The setup consists of a Sawyer robot arm mounted on a table and equipped with a Robotiq 2F-85 parallel gripper. In front of the robot there is a basket of size $20 \times 20$ cm which contains three cubes with an edge length of 5 cm (see Figure \ref{fig:evs}).
The agent is provided with proprioception information for the arm (joint positions, velocities and torques), and the tool center point position computed via forward kinematics. For the gripper, it receives the motor position and velocity, as well as a binary grasp flag. It also receives a wrist sensor's force and torque readings. Finally, it is provided with three RGB camera images at $64 \times 64$ resolution. At each timestep, a history of two previous observations (except for the images) is provided to the agent, along with the last two joint control commands. The observation space is detailed in Table \ref{tab:sawyer_observations}. All stacking experiments are run with 50 actors in parallel and reported over the current episodes generated by any actor. Episode lengths are up to 600 steps.
The robot arm is controlled in Cartesian velocity mode at 20Hz. The action space for the agent is 5-dimensional, as detailed in Table \ref{tab:pile_actions}. The gripper movement is also restricted to a cubic volume above the basket using virtual walls.
\begin{table}[h!]
\caption{Action space for the Sawyer Stacking experiments.}
\label{sawyer-action-table2}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lccc}
\toprule
Entry & Dims & Unit & Range \\
\midrule
Translational Velocity in x, y, z & 3 & m/s & [-0.07, 0.07] \\
Wrist Rotation Velocity & 1 & rad/s & [-1, 1] \\
Finger speed & 1 & tics/s & [-255, 255] \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\label{tab:pile_actions}
\end{table}
\begin{table}[h!]
\caption{Observations for the Sawyer Stacking experiments. The TCP's pose is represented as its world coordinate position and quaternion. In the table, $m$ denotes meters, $rad$ denotes radians, and $q$ refers to a quaternion in arbitrary units ($au$).}
\label{tab:sawyer_observations}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lccc}
\toprule
Entry & Dims & Unit & History \\
\midrule
Joint Position (Arm) & 7 & rad & 2 \\
Joint Velocity (Arm) & 7 & rad/s & 2 \\
Joint Torque (Arm) & 7 & Nm & 2 \\
Joint Position (Hand) & 1 & tics & 2 \\
Joint Velocity (Hand) & 1 & tics/s & 2 \\
Force-Torque (Wrist) & 6 & N, Nm & 2 \\
Binary Grasp Sensor & 1 & au & 2 \\
TCP Pose & 7 & m, au & 2 \\
Camera images & $3 \times 64 \times 64 \times 3$ & R/G/B value & 0 \\
Last Control Command & 8 & rad/s, tics/s & 2 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{equation}
stol(v, \epsilon, r) =
\begin{cases}
1 &\text{if} \ |v| < \epsilon \\
1 - \tanh^2\left( \frac{\operatorname{atanh}(\sqrt{0.95})}{r} |v|\right) &\text{else}
\end{cases}
\label{eq:shaped_tolerance}
\end{equation}
\begin{equation}
slin(v, \epsilon_{min}, \epsilon_{max}) =
\begin{cases}
0 &\text{if} \ v < \epsilon_{min} \\
1 &\text{if} \ v > \epsilon_{max} \\
\frac{v - \epsilon_{min}}{\epsilon_{max} - \epsilon_{min}} &\text{else}
\end{cases}
\label{eq:shaped_tolerance2}
\end{equation}
\begin{equation}
btol(v, \epsilon) =
\begin{cases}
1 &\text{if} \ |v| < \epsilon \\
0 &\text{else}
\end{cases}
\end{equation}
\begin{itemize}
\item \textit{REACH(G)}: $stol(d(TCP, G), 0.02, 0.15)$ \\
Shaped reward for minimizing the distance of the TCP to the green cube.
\item \textit{GRASP}: \\
Activate the grasp sensor of the gripper (``inward grasp signal'' of the Robotiq gripper).
\item \textit{LIFT(G)}: $slin(G_z, 0.03, 0.10)$ \\
Increase the z coordinate of the green cube by more than 3cm relative to the table.
\item \textit{PLACE\_WIDE(G, Y)}: $stol(d(G, Y + [0,0,0.05]), 0.01, 0.20)$ \\
Bring the green cube to a position 5cm above the yellow cube.
\item \textit{PLACE\_NARROW(G, Y)}: $stol(d(G, Y + [0,0,0.05]), 0.00, 0.01)$ \\
Like PLACE\_WIDE(G, Y) but more precise.
\item \textit{STACK(G, Y)}: $btol(d_{xy}(G, Y), 0.03) * btol(d_z(G, Y) + 0.05, 0.01) * (1 - \textit{GRASP})$ \\
Sparse binary reward for bringing the green cube on top of the yellow one (with 3cm tolerance horizontally and 1cm vertically) and disengaging the grasp sensor.
\item \textit{STACK\_AND\_LEAVE(G, Y)}: $stol(d_z(TCP, G)+0.10, 0.03, 0.10) * \textit{STACK(G, Y)}$ \\
Like STACK(G, Y), but the arm must additionally move 10cm above the green cube.
\end{itemize}
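For concreteness, the tolerance functions above and their composition in the STACK reward can be expressed in a few lines of code. The following NumPy sketch is illustrative rather than authoritative; in particular, the sign convention $d_z(A, B) = B_z - A_z$ is an assumption inferred from the reward definitions, and all names are hypothetical.
\begin{verbatim}
import numpy as np

def stol(v, eps, r):
    # Shaped tolerance: 1 inside |v| < eps, then a smooth tanh^2
    # falloff; the reward drops to 0.05 (a 0.95 loss) at |v| = r.
    scale = np.arctanh(np.sqrt(0.95)) / r
    return np.where(np.abs(v) < eps, 1.0,
                    1.0 - np.tanh(scale * np.abs(v)) ** 2)

def slin(v, eps_min, eps_max):
    # Shaped linear tolerance, clipped to [0, 1].
    return np.clip((v - eps_min) / (eps_max - eps_min), 0.0, 1.0)

def btol(v, eps):
    # Binary tolerance.
    return np.where(np.abs(v) < eps, 1.0, 0.0)

def stack(g, y, grasp):
    # STACK(G, Y), assuming d_z(A, B) = B_z - A_z, so that
    # d_z(G, Y) + 0.05 vanishes when G sits 5 cm above Y.
    d_xy = np.linalg.norm(g[:2] - y[:2])
    d_z = y[2] - g[2]
    return btol(d_xy, 0.03) * btol(d_z + 0.05, 0.01) * (1.0 - grasp)

# Example: green cube 5 cm above the yellow cube, grasp released.
r = stack(np.array([0.0, 0.0, 0.10]),
          np.array([0.0, 0.0, 0.05]), grasp=0.0)  # r == 1.0
\end{verbatim}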
\paragraph{Ball-In-Cup}
This task consists of a Sawyer robot arm mounted on a pedestal. A partially see-through cup structure with a radius of 11cm and height of 17cm is attached to the wrist flange. Between cup and wrist there is a ball bearing, to which a yellow ball of 4.9cm diameter is attached via a string of 46.5cm length (see Figure \ref{fig:evs}).
Most of the settings for the experiment align with the stacking task. The agent is provided with proprioception information for the arm (joint positions, velocities and torques), and the tool center point and cup positions computed via forward kinematics. It is also provided with two RGB camera images at $64 \times 64$ resolution. At each timestep, a history of two previous observations (except for the images) is provided to the agent, along with the last two joint control commands. The observation space is detailed in Table \ref{tab:sawyer_bic_observations}. All BIC experiments are run with 20 actors in parallel and reported over the current episodes generated by any actor. Episode lengths are up to 600 steps.
The position of the ball in the cup's coordinate frame is available for reward computation, but not exposed to the agent.
The robot arm is controlled in joint velocity mode at 20Hz. The action space for the agent is 4-dimensional, with only 4 out of 7 joints actuated in order to avoid self-collisions. Details are provided in Table \ref{tab:bic_actions}.
\begin{table}[h!]
\caption{Action space for the Sawyer Ball-in-Cup experiments.}
\label{sawyer-action-table3}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lccc}
\toprule
Entry & Dims & Unit & Range \\
\midrule
Rotational Joint Velocity (joints 1, 2, 6 and 7) & 4 & rad/s & [-2, 2] \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\label{tab:bic_actions}
\end{table}
\begin{table}[h!]
\caption{Observations for the Sawyer Ball-in-Cup experiments. In the table, $m$ denotes meters, $rad$ denotes radians, and $au$ denotes the arbitrary units of the quaternion entries. Note: the joint velocity and command represent the robot's internal state; the 3 degrees of freedom that were fixed provide a constant input of 0.}
\label{tab:sawyer_bic_observations}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lcc}
\toprule
Entry & Dims & Unit \\
\midrule
Joint Position (Arm) & 7 & rad \\
Joint Velocity (Arm) & 7 & rad/s \\
TCP Pose & 7 & m, au \\
Camera images & $2 \times 64 \times 64 \times 3$ & R/G/B value \\
Last Control Command & 7 & rad/s \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Let $B_A$ be the Cartesian position in meters of the ball in the cup's coordinate frame (with an origin at the center of the cup's bottom), along axes $A \in \{x, y, z\}$.
\begin{itemize}
\item \textit{CATCH}: $0.17 > B_z > 0$ and $||B_{xy}||_2 < 0.11$ \\
Binary reward if the ball is inside the volume of the cup.
\item \textit{BALL\_ABOVE\_BASE}: $B_z > 0$ \\
Binary reward if the ball is above the bottom plane of the cup.
\item \textit{BALL\_ABOVE\_RIM}: $B_z > 0.17$\\
Binary reward if the ball is above the top plane of the cup.
\item \textit{BALL\_NEAR\_MAX}: $B_z > 0.3$\\
Binary reward if the ball is near the maximum possible height above the cup.
\item \textit{BALL\_NEAR\_RIM}: $1 - \tanh^2\left( \frac{\operatorname{atanh}(\sqrt{0.95})}{0.5} \times ||B_{xyz}-(0,0,0.17)||_2\right)$\\
Shaped distance of the ball to the center of the cup opening (0.95 loss at a distance of 0.5).
\end{itemize}
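The BIC predicates reduce to similarly compact code. A minimal NumPy sketch, using the cup dimensions given above (function names are illustrative):
\begin{verbatim}
import numpy as np

def catch(B):
    # CATCH: ball inside the cup volume
    # (height 0.17 m, opening radius 0.11 m).
    return float(0.0 < B[2] < 0.17 and np.linalg.norm(B[:2]) < 0.11)

def ball_near_rim(B):
    # BALL_NEAR_RIM: shaped distance to the center of the cup
    # opening; the reward drops to 0.05 at a distance of 0.5 m.
    d = np.linalg.norm(B - np.array([0.0, 0.0, 0.17]))
    return 1.0 - np.tanh(np.arctanh(np.sqrt(0.95)) / 0.5 * d) ** 2
\end{verbatim}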
\subsection{Pre-training and Sequential Transfer Experiments}
The sequential transfer experiments are performed with the same settings as their multitask equivalents. However, they rely on a pre-training step in which we take all but the final task in each domain and train HO2~to pre-train options, which we then transfer with a new high-level controller to the final task. Fine-tuning of the options is enabled, as we find that it produces slightly better performance. Only data used for the final training step is reported, but both approaches were trained on the same amount of data during pre-training until convergence.
The variant with limited switches limits to 4 switches over a sequence length of 16.
\subsection{Locomotion experiments}
\label{sec:locomotion_details}
\begin{figure}[h]
\centering
\includegraphics[width = .3\textwidth, clip, trim = 0 0 0 2mm]{figures/env_ball.png}
\includegraphics[width = .3\textwidth, clip, trim = 0 0 0 0]{figures/env_ant.png}
\includegraphics[width = .3\textwidth, clip, trim = 0 0 0 3mm]{figures/env_quad.png}
\caption{The environment used for simple locomotion tasks with Ball (top), Ant (center) and Quadruped (bottom).}
\label{fig:locomotion_env}
\end{figure}
Figure~\ref{fig:locomotion_env} shows examples of the environment for the different bodies used.
In addition to proprioceptive agent state information (which includes the body height, position of the end-effectors, the positions and velocities of its joints and sensor readings from an accelerometer, gyroscope and velocimeter attached to its torso), the state space also includes the ego-centric coordinates of all target locations and a categorical index specifying the task of interest. Table \ref{tab:predicate_observations} contains an overview of the observations and action dimensions for this task.
The agent receives a sparse reward of $+60$ if part of its body reaches a square surrounding the predicate location, and $0$ otherwise.
Both the agent spawn location and target locations are randomized at the start of each episode, ensuring that the agent must use both the task index and target locations to solve the task.
\begin{table}[h!]
\caption{Observations for the \textit{go to one of 3 targets} task with Ball, Ant, and Quadruped.}
\label{tab:predicate_observations}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{lc}
\toprule
Entry & Dimensionality \\
\midrule
Task Index & 3 \\
Target locations & 9 \\
Proprioception (Ball) & 16 \\
Proprioception (Ant) & 41 \\
Proprioception (Quad) & 57 \\
Action Dim (Ball) & 2 \\
Action Dim (Ant) & 8 \\
Action Dim (Quad) & 12 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\section{Additional Experiments \label{app:additiona_exps}}
\subsection{Decomposition and Option Clustering}
\label{sec:locomotion}
We further deploy HO2~on a set of simple locomotion tasks, where the goal is for an agent to move to one of three randomized target locations in a square room.
These are specified as a set of target locations and a task index to select the target of interest.
The main research questions we aim to answer (both qualitatively and quantitatively) are: (1) how do the discovered options specialize and represent different behaviors; and (2) how is this decomposition affected by variations in the task, embodiment, or algorithmic properties of the agent?
To answer these questions, we investigate a number of variations:
\begin{itemize}
\item Three bodies: a quadruped with two (``Ant'') or three (``Quad'') torque-controlled joints on each leg, and a rolling ball (``Ball'') controlled by applying yaw and forward roll torques.
\item With or without \textit{information asymmetry} (IA) between high- and low-level controllers, where the task index and target positions are withheld from the options and only provided to the categorical option controller.
\item With or without a limited number of switches in the optimization.
\end{itemize}
Information asymmetry (IA), in particular, has recently been shown to be effective for learning general skills~\citep{galashov2018information}: by withholding task information from low-level options, they can learn task-agnostic, temporally-consistent behaviors that can be composed by the option controller to solve a task.
This mirrors the setup in the aforementioned Sawyer tasks, where the task information is only fed to the high-level controller.
For each of the different cases, we qualitatively evaluate the trained agent over $100$ episodes and generate histograms over the different options used, as well as scatter plots indicating how the options cluster the state/action spaces and task information.
We also present quantitative measures (over $5$ seeds) to accompany these plots, in the form of (1) Silhouette score, a measure of clustering accuracy based on inter- and intra-cluster distances\footnotemark; and (2) entropy over the option histogram, to quantify diversity.
The quantitative results are shown in Table~\ref{table:predicates_quant}, and the qualitative plots are shown in Figure~\ref{fig:predicates_qual}.
Details and images of the environment are in Section~\ref{sec:locomotion_details}.
\footnotetext{The silhouette score is a value in $[-1, 1]$ with higher values indicating cluster separability. We note that the values obtained in this setting do not correspond to high \textit{absolute} separability, as multiple options can be used to model the same skill or behavior abstraction. We are instead interested in the \textit{relative} clustering score for different scenarios.}
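Both measures, as well as the switch rate reported below, can be computed directly from logged option sequences. A minimal sketch using SciPy and scikit-learn follows, where \texttt{states}, \texttt{actions} and \texttt{options} are assumed to be arrays collected over the evaluation episodes (names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import silhouette_score

def option_entropy(options, num_options):
    # Entropy (natural log) of the empirical option histogram.
    counts = np.bincount(options, minlength=num_options)
    return entropy(counts / counts.sum())

def switch_rate(options):
    # Fraction of consecutive timesteps with a change of option.
    return np.mean(options[1:] != options[:-1])

def cluster_scores(states, actions, options):
    # Silhouette scores with the active option as cluster label.
    return (silhouette_score(actions, options),
            silhouette_score(states, options))
\end{verbatim}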
\begin{table*}[h!]
\centering
\scalebox{0.95}{
\begin{tabular}{ c | c | c c c}
& \multirow{2}{*}{\bf{Histogram over options}} & \multicolumn{3}{c}{\bf{t-SNE scatter plots}} \\
& & \bf{Actions} & \bf{States} & \bf{Task} \\
\hline \\ [-1.5ex]
\rotatebox{90}{\bf{Ball, no IA}} & \includegraphics[scale = 0.19]{results/predicates/ball_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_task_noIA.png} \\
\rotatebox{90}{\bf{Ball, with IA}} & \includegraphics[scale = 0.19]{results/predicates/ball_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ball_task_fullIA.png} \\
\rotatebox{90}{\bf{Ant, no IA}} & \includegraphics[scale = 0.19]{results/predicates/ant_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_task_noIA.png} \\
\rotatebox{90}{\bf{Ant, with IA}} & \includegraphics[scale = 0.19]{results/predicates/ant_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/ant_task_fullIA.png} \\
\rotatebox{90}{\bf{Quad, no IA}} & \includegraphics[scale = 0.19]{results/predicates/quad_hist_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_actions_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_states_noIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_task_noIA.png} \\
\rotatebox{90}{\bf{Quad, with IA}} & \includegraphics[scale = 0.19]{results/predicates/quad_hist_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_actions_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_states_fullIA.png}
& \includegraphics[scale = 0.19]{results/predicates/quad_task_fullIA.png} \\
\vspace{1ex}
\end{tabular}}
\captionof{figure}{Qualitative results for the three bodies (Ball, Ant, Quad) without limited switches, both with and without IA, obtained over $100$ evaluation episodes. \textbf{Left}: the histogram over different options used by each agent; \textbf{Centre to right}: scatter plots of the action space, state space, and task information, colored by the corresponding option selected. Each of these spaces has been projected to $2D$ using t-SNE, except for the two-dimensional action space for Ball, which is plotted directly. For each case, care has been taken to choose a median / representative model out of $5$ seeds. }
\label{fig:predicates_qual}
\end{table*}
\begin{table*}[h]
\centering
\scalebox{0.7}{
\begin{tabular}{ c | c c | c c c c c }
\hline \\ [-1.5ex]
& \multicolumn{2}{c}{\bf{Scenario}} & \bf{Option entropy} & \bf{Switch rate} & \bf{Cluster score (actions)} & \bf{Cluster score (states)} & \bf{Cluster score (tasks)} \\
\hline \\ [-1.5ex]
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf{Regular}}} & \multirow{2}{*}{Ball} & No IA & $2.105 \pm 0.074$ & $0.196 \pm 0.010$ & $-0.269 \pm 0.058$ & $-0.110 \pm 0.025$ & $-0.056 \pm 0.011$ \\
& & With IA & $2.123 \pm 0.066$ & $0.346 \pm 0.024$ & $-0.056 \pm 0.024$ & $-0.164 \pm 0.051$ & $-0.057 \pm 0.008$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Ant} & No IA & $1.583 \pm 0.277$ & $0.268 \pm 0.043$ & $-0.148 \pm 0.034$ & $-0.182 \pm 0.068$ & $-0.075 \pm 0.011$ \\
& & With IA & $2.119 \pm 0.073$ & $0.303 \pm 0.019$ & $-0.053 \pm 0.021$ & $-0.066 \pm 0.024$ & $-0.052 \pm 0.006$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Quad} & No IA & $1.792 \pm 0.127$ & $0.336 \pm 0.019$ & $-0.078 \pm 0.064$ & $-0.113 \pm 0.035$ & $-0.089 \pm 0.050$ \\
& & With IA & $2.210 \pm 0.037$ & $0.403 \pm 0.014$ & $0.029 \pm 0.029$ & $-0.040 \pm 0.003$ & $-0.047 \pm 0.006$ \\
\hline \\ [-1.5ex]
\multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox{2cm}{\bf{Limited Switches}}}} & \multirow{2}{*}{Ball} & No IA & $1.804 \pm 0.214$ & $0.020 \pm 0.009$ & $-0.304 \pm 0.040$ & $-0.250 \pm 0.135$ & $-0.131 \pm 0.049$ \\
& & With IA & $2.233 \pm 0.027$ & $0.142 \pm 0.015$ & $-0.132 \pm 0.035$ & $-0.113 \pm 0.043$ & $-0.053 \pm 0.003$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Ant} & No IA & $1.600 \pm 0.076$ & $0.073 \pm 0.014$ & $-0.124 \pm 0.017$ & $-0.155 \pm 0.067$ & $-0.084 \pm 0.034$ \\
& & With IA & $2.222 \pm 0.043$ & $0.141 \pm 0.015$ & $-0.052 \pm 0.011$ & $-0.054 \pm 0.014$ & $-0.050 \pm 0.007$ \\
\cline{4-8} \\ [-1.5ex]
& \multirow{2}{*}{Quad} & No IA & $1.549 \pm 0.293$ & $0.185 \pm 0.029$ & $-0.075 \pm 0.036$ & $-0.126 \pm 0.030$ & $-0.112 \pm 0.022$\\
& & With IA & $2.231 \pm 0.042$ & $0.167 \pm 0.025$ & $-0.029 \pm 0.029$ & $-0.032 \pm 0.004$ & $-0.053 \pm 0.009$ \\
\hline
\end{tabular}}
\vspace{1ex}
\caption{Quantitative results indicating the diversity of options used (entropy), the switch rate, and clustering accuracy in action, state, and task spaces (silhouette score), with and without information asymmetry (IA) and with or without a limited number of switches. Higher silhouette scores indicate greater separability by option / component.}
\label{table:predicates_quant}
\end{table*}
The results show a number of trends.
Firstly, the usage of IA leads to a greater diversity of options used, across all bodies.
Secondly, with IA, the options tend to lead to specialized actions, as demonstrated by the clearer option separation in action space.
In the case of the $2D$ action space of the ball, the options correspond to turning left or right (y-axis) at different forward torques (x-axis).
Thirdly, while the simple Ball can learn these high-level body-agnostic behaviors, the options for more complex bodies have higher switch rates, suggesting that the learned behaviors correspond to lower-level motor behaviors over shorter timescales.
Lastly, limiting the number of switches during marginalization does indeed lead to a lower switch rate between options, without hampering the ability of the agent to complete the task.
\subsection{Action and Temporal Abstraction Experiments}
The complete results for all pixel and proprioception based multitask experiments for ball-in-cup and stacking can be respectively found in Figures \ref{fig:bic_all} and \ref{fig:stack_all}.
Both RHPO and HO2~outperform a simple Gaussian policy trained via MPO. HO2~additionally improves performance over mixture policies (RHPO) demonstrating that the ability to learn temporal abstraction proves beneficial in these domains. The difference grows as the task complexity increases and is particularly pronounced for the final stacking tasks.
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_1.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_2.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_3.png}\\
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_4.png}
\includegraphics[width = 0.3\textwidth]{results/png/bic/BIC_0.png}
\end{tabular}
\caption{Complete results on pixel-based ball-in-cup experiments.}
\label{fig:bic_all}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_0.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_1.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_2.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_3.png} \\
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_4.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_5.png}
\includegraphics[width = 0.235\textwidth]{results/png/stack/pile1_6.png}
\end{tabular}
\caption{Complete results on pixel-based stacking experiments.}
\label{fig:stack_all}
\end{figure*}
\subsection{Task-agnostic Terminations}
Viewing options as task-independent skills, with termination conditions as part of each skill, leads to termination conditions that are also task-independent. We show that, at least in this limited set of experiments, the alternative perspective of task-dependent termination conditions - i.e. with access to task information - which can be understood as part of the high-level control mechanism for activating options, improves performance.
Intuitively, by removing task information from the termination conditions, we constrain the space of solutions, which initially accelerates training slightly but limits final performance. This additionally shows that, while we benefit from sharing options across tasks, each task gains from controlling the length of these options independently.
Based on these results, the termination conditions across all other multi-task experiments are conditioned on the active task.
The complete results for all experiments with task-agnostic terminations can be found in Figure \ref{fig:terminations}.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_0.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_1.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_2.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_3.png}\\
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_4.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_5.png}
\includegraphics[width = .235\textwidth]{results/png/terminations/pile1_terminations_6.png}
\end{tabular}
\caption{Complete results on multi-task block stacking with and without conditioning termination conditions on tasks.}
\label{fig:terminations}
\end{figure*}
\subsection{Off-Policy Option Learning}
\label{sec:off_policy_exps}
In order to train in a more on-policy regime, we reduce the size of the replay buffer by two orders of magnitude and increase the ratio between data generation (actor steps) and data fitting (learner steps) by one order of magnitude. The resulting algorithm is run without any additional hyperparameter tuning to provide an insight into the effect of conditioning on action probabilities under options in the inference procedure.
We can see that in the on-policy case the impact of this change is less pronounced. Across all cases, we were unable to obtain significant performance gains by including action conditioning in the inference procedure.
The complete results for all experiments with and without the action-conditional inference procedure can be found in Figure \ref{fig:ac_onpol}.
\begin{figure*}[h]
\centering
\begin{tabular}{c}
\includegraphics[width = 0.98\textwidth]{results/png/gym_ac.png}\\
\includegraphics[width = 0.98\textwidth]{results/png/gym_ac_onpol.png}\\
\end{tabular}
\caption{Complete results on OpenAI gym with and without conditioning component probabilities on past executed actions, for the off-policy (top) and on-policy (bottom) case. The on-policy approach uses data considerably less efficiently, and the x-axis is adapted accordingly.}
\label{fig:ac_onpol}
\end{figure*}
\subsection{Trust-region Constraints}
\label{sec:trust_region_exps}
The complete results for all trust-region ablation experiments can be found in Figure \ref{fig:trustregions_all}.
With the exception of very high or very low constraints, the approach trains robustly, but performance drops considerably when we remove the constraint fully.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_0.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_1.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_2.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_3.png}\\
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_4.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_5.png}
\includegraphics[width = .235\textwidth]{results/png/trustregion/pile1_tr_6.png}
\end{tabular}
\caption{Complete results on block stacking with varying trust-region constraints for both termination conditions $\beta$ and the high-level controller $\pi_C$. }
\label{fig:trustregions_all}
\end{figure*}
\subsection{Single Time-Step vs Multi Time-Step Inference}
\label{sec:sampling_exps}
To investigate the impact of probabilistic inference of posterior option distributions $\pi_H(o_t|h_t)$ along the whole sampled trajectory instead of using sampling-based approximations until the current timestep, we perform additional ablations displayed in Figure \ref{fig:sampling_options}. Note that we are required to perform probabilistic inference for at least one step to use backpropagation through the inference step to update our policy components. Any completely sampling-based approach would require a different policy optimizer (e.g. via likelihood ratio or reparametrization trick) which would introduce additional compounding effects.
We compare HO2~with an ablated version where we do not compute the option probabilities along the trajectory following Equation \ref{eq:dynamic1} but instead use an approximation with only concrete option samples propagating across timesteps for all steps until the current step.
To generate action samples, we therefore sample options for every timestep along a trajectory without keeping a complete distribution over options and sample actions only from the active option at every timestep.
To determine the likelihood of actions and options for every timestep, we rely on Equation \ref{eq:optiontransitions} based on the sampled options of the previous timestep.
By using samples and the critic-weighted update procedure from Equation \ref{eq:objective_pi}, we can only generate gradients for the policy for the current timestep instead of backpropagating through the whole inference procedure. We find that using both samples from executed options reloaded from the buffer as well as new samples during learning can reduce performance depending on the domain. However, in the Hopper-v2 environment, sampling during learning performs slightly better than inferring options.
\begin{figure*}[h]
\centering
\begin{tabular}{c}
\includegraphics[width = .98\textwidth]{results/png/gym_samples.png}
\end{tabular}
\caption{Ablation results comparing inferred options with sampled options during learning (sampled) and during execution (executed). The ablation is run with five actors instead of a single one as used in the OpenAI gym experiments in order to generate results faster. }
\label{fig:sampling_options}
\end{figure*}
\section{Conclusions}\label{sec:conclusions}
We introduce a robust, efficient algorithm for off-policy training of option policies. The approach outperforms recent work in option learning on common benchmarks and is able to solve complex, simulated robot manipulation tasks from raw pixel inputs more reliably than competitive baselines.
HO2~takes a probabilistic inference perspective to option learning, infers option and action probabilities for trajectories in hindsight, and performs critic-weighted maximum-likelihood estimation by backpropagating through the inference step.
Being able to infer options for a given trajectory enables robust off-policy training and the computation of updates for all options instead of only the executed ones. It also makes it possible to impose constraints on the termination frequency independently of an environment's reward scale.
We separately analyze the impact of action abstraction (via mixture policies) and temporal abstraction (via options). We find that each abstraction independently improves performance. Additional maximization of temporal consistency for option choices is beneficial when transferring pre-trained options but has only a limited effect when learning from scratch.
Furthermore, we investigate the consequences of the off-policyness of training data and demonstrate the benefits of trust-region constraints for option learning.
We examine the impact of different agent and environment properties (such as information asymmetry, tasks, and embodiments) with respect to task decomposition and option clustering; a direction which provides opportunities for further investigation in the future.
Finally, since our method is based on (weighted) maximum likelihood estimation, it can be adapted naturally to learn structured behavior representations in mixed data regimes, e.g. to learn from combinations of demonstrations, logged data, and online trajectories. This opens up promising directions for future work.
\section{Additional Derivations \label{app:additiona_derivations}}
In this section we explain the derivations for training option policies with options parameterized as Gaussian distributions. Each policy improvement step is split into two parts: non-parametric and parametric update.
\subsection{Non-parametric Option Policy Update \label{app:nonparam}}
In order to obtain the non-parametric policy improvement, we solve the following constrained optimization problem:
\begin{equation*}
\begin{aligned}
& \max_q \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathbb{E}_{a_t,o_t \sim q} \big[Q_\phi(s_t,a_t,o_t)] \big] \\
& s.t. \mathbb{E}_{h_t \sim p(h_t)} \big[ \textrm{KL}(q(\cdot|h_t) \,\|\, \pi_\theta(\cdot|h_t) ) \big] < \epsilon_E\\
& s.t. \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathbb{E}_{q(a_t,o_t | h_t)} \big [1 \big] \big] = 1.
\end{aligned}
\end{equation*}
for each step $t$ of a trajectory, where $h_{t} = \{s_t, a_{t-1}, s_{t-1},... a_0,s_0\}$ represents the history of states and actions and $p(h_t)$ describes the distribution over histories for timestep $t$, which in practice are approximated via the use of a replay buffer $\mathcal{D}$. When sampling $h_t$, the state $s_t$ is the first element of the history. The inequality constraint describes the maximum allowed KL divergence between intermediate update and previous parametric policy, while the equality constraint simply ensures that the intermediate update represents a normalized distribution.
Subsequently, in order to render the following derivations more intuitive, we replace the expectations and explicitly use integrals.
The Lagrangian $L(q,\eta,\gamma)$ can now be formulated as
\begin{align}\label{eq:completelagrange1}
L(q,\eta,\gamma) = \iiint p(h_t) q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
\hfill\nonumber \diff o_t \diff a_t \diff h_t \\
\nonumber +\eta \bigg(\epsilon_E - \iiint p(h_t)q(a_t,o_t|h_t)
\log \frac{q(a_t,o_t|h_t)}{\pi_\theta(a_t,o_t|h_t)}\\
\nonumber \hfill \diff o_t \diff a_t \diff h_t \bigg)\\
\nonumber + \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align}
Next, to maximize the Lagrangian with respect to the primal variable $q$, we take its derivative,
\begin{align*}
\frac{\partial L(q,\eta,\gamma)}{{\partial q}} = Q_\phi(a_t,o_t,s_t) - \eta\log q(a_t,o_t|h_t) \\
\nonumber+\eta\log \pi_\theta(a_t,o_t|h_t) - \eta - \gamma.
\end{align*}
In the next step, we set this derivative to zero and rearrange terms to obtain
\begin{align*}
q(a_t,o_t|h_t) = \pi_\theta(a_t,o_t|h_t)\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\\
\exp\left(-\frac{\eta+\gamma}{\eta}\right).
\end{align*}
The last exponential term represents a normalization constant for $q$, which we can formulate as
\begin{eqnarray}
\begin{aligned}
\frac{\eta+\gamma}{\eta}= \log\bigg(&\iint \pi_\theta(a_t,o_t|h_t)\label{eq:norm}\\
&\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\diff o_t \diff a_t\bigg).
\end{aligned}
\end{eqnarray}
In order to obtain the dual function $g(\eta)$, we insert the solution for the primal variable into the Lagrangian in Equation \ref{eq:completelagrange1} which yields
\begin{align*}
L(q,\eta,\gamma) = \iiint p(h_t)q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
\hfill \diff o_t \diff a_t \diff h_t \\
+\eta\bigg(\epsilon_E - \iiint p(h_t) q(a_t,o_t|h_t)\\
\log {\scriptscriptstyle \frac{\pi_\theta(a_t,o_t|h_t)\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\exp\left(-\frac{\eta+\gamma}{\eta}\right)}{\pi_\theta(a_t,o_t|h_t)} } \diff o_t \diff a_t \diff h_t \bigg) \\
+ \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align*}
We expand the equation and rearrange to obtain
\begin{align*}
L(q,\eta,\gamma) &=\iiint p(h_t) q(a_t,o_t|h_t) Q_\phi(s_t,a_t,o_t) \\
& \hfill ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \diff o_t \diff a_t \diff h_t \\
& - \eta\iiint p(h_t) q(a_t,o_t|h_t)\Big[\frac{Q_\phi(s_t,a_t,o_t)}{\eta} \\
&\hfill+ \log \pi_\theta(a_t,o_t|h_t) - \frac{\eta+\gamma}{\eta}\Big]\diff o_t \diff a_t \diff h_t \\& + \eta\epsilon_E
+\eta\iiint p(h_t) q(a_t,o_t|h_t)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\log \pi_\theta(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \\ & + \gamma\left(1 - \iiint p(h_t) q(a_t,o_t|h_t)\diff o_t \diff a_t \diff h_t \right).
\end{align*}
In the next step, most of the terms cancel out, and after additional rearranging we obtain
\begin{align*}
L(q,\eta,\gamma) = \eta\epsilon_E + \eta\int p(h_t)\frac{\eta+\gamma}{\eta}\diff h_t.
\end{align*}
We have already calculated the term inside the integral in Equation \ref{eq:norm}, which we now insert to obtain
\begin{align}
g(\eta) =& \max_q L(q,\eta,\gamma)\label{eq:dual_eta}\\
=& \eta\epsilon_E+\eta\int p(h_t)\log\bigg(\iint \pi_\theta(a_t,o_t|h_t)\nonumber\\
&~~~~~~~~~~~~~~~~~~~\exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\diff o_t \diff a_t \bigg)\diff h_t \nonumber\\
=& \eta\epsilon_E+\eta \mathbb{E}_{h_t\sim p(h_t)} \Big[ \log\bigg(\mathbb{E}_{a_t,o_t \sim \pi_\theta}\Big[\nonumber\\
&~~~~~~~~~~~~~~~~~~~~ \exp\left(\frac{Q_\phi(s_t,a_t,o_t)}{\eta}\right)\Big]\bigg)\Big]. \nonumber
\end{align}
The dual in Equation \ref{eq:dual_eta} can finally be minimized with respect to $\eta$ based on samples from the replay buffer and policy.
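In practice, the expectations in Equation \ref{eq:dual_eta} are approximated with minibatch samples, and $\eta$ is found with a standard optimizer. A minimal NumPy/SciPy sketch, assuming a matrix \texttt{q\_values} of shape \texttt{[batch, samples]} holding $Q_\phi$ for actions and options sampled from $\pi_\theta$ (names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def dual(eta, q_values, eps_e):
    # Sample-based estimate of g(eta):
    # eta*eps_E + eta * E_h[ log E_{a,o ~ pi}[ exp(Q / eta) ] ].
    eta = max(float(eta), 1e-6)  # temperature must stay positive
    n = q_values.shape[1]
    inner = logsumexp(q_values / eta, axis=1) - np.log(n)
    return eta * eps_e + eta * np.mean(inner)

def solve_eta(q_values, eps_e, eta0=1.0):
    res = minimize(lambda e: dual(e[0], q_values, eps_e),
                   x0=np.array([eta0]), method="Nelder-Mead")
    return float(res.x[0])

# Given eta, the non-parametric improvement reduces to per-state
# softmax weights over the samples, q ~ pi_theta * exp(Q / eta):
q_values = np.random.randn(8, 20)  # placeholder Q estimates
eta = solve_eta(q_values, eps_e=0.1)
weights = softmax(q_values / eta, axis=1)
\end{verbatim}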
\subsection{Parametric Option Policy Update \label{app:param}}
After obtaining the non-parametric policy improvement, we can align the parametric option policy to the current non-parametric policy.
As the non-parametric policy is represented by a set of samples from the parametric policy with additional weighting, this step effectively employs a type of critic-weighted maximum likelihood estimation. In addition, we introduce regularization based on a distance function $\mathcal{T}$ which has a trust-region effect for the update and stabilizes learning.
\begin{eqnarray*}
\begin{aligned}
\theta_{new} =&\arg \min_{\theta} \mathbb{E}_{h_t \sim p(h_t)}\Big[ \mathrm{KL}\big( q(a_t,o_t | h_t) \| \pi_{\theta}(a_t,o_t | h_t) \big) \Big] \\
=&\arg \min_{\theta} \mathbb{E}_{h_t \sim p(h_t)}\Big[ \mathbb{E}_{a_t,o_t \sim q} \Big[ \log q(a_t,o_t | h_t) \\
\nonumber &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-\log \pi_{\theta}(a_t,o_t | h_t) \Big] \Big] \\
=& \argmax_{\theta}
\mathbb{E}_{h_t \sim p(h_t), a_t, o_t \sim q}\Big[ \log \pi_{\theta}(a_t, o_t |h_t) \Big], \quad \\&\textrm{s.t.}\:\mathbb{E}_{h_t \sim p(h_t)} \Big[ \mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t) | \pi_{\theta}(\cdot|h_t)) \Big] < \epsilon_M,
\end{aligned}
\end{eqnarray*}
where $h_t \sim p(h_t)$ is a trajectory segment, which in practice is sampled from the dataset $\mathcal{D}$, and $\mathcal{T}$ is an arbitrary distance function between the new and the previous policy. $\epsilon_M$ denotes the allowed change for the policy. We again employ Lagrangian relaxation to enable gradient-based optimization of the objective, yielding the following primal:
\begin{align}\label{eq:lagrange}
\max_\theta \min_{\alpha > 0} L(\theta,\alpha) = \mathbb{E}_{ h_t \sim p(h_t), a_t,o_t \sim q}\Big[ \log \pi_{\theta}(a_t,o_t|h_t) \Big] \nonumber\\
+\alpha\Big(\epsilon_M - \mathbb{E}_{h_t \sim p(h_t)} \big[ \mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t) , \pi_{\theta}(\cdot|h_t))\big]\Big).
\end{align}
We can solve for $\theta$ by iterating the inner and outer optimization programs independently. In practice we find that it is most efficient to update both in parallel.
We also define the following distance function between old and new option policies
\begin{align*}
\mathcal{T}(\pi_{\theta_{new}}(\cdot|h_t),\pi_{\theta}(\cdot|h_t)) = \mathcal{T}_{H}(h_t) +\mathcal{T}_{T}(h_t) + \mathcal{T}_{L}(h_t)
\end{align*}
\begin{align*}
\mathcal{T}_{H}(h_t) = \textrm{KL}(\textrm{Cat}(\{\alpha_{\theta_{new}}^{j}(h_t)\}_{j=1...M}) \|\\
\textrm{Cat}(\{\alpha_{\theta}^{j}(h_t)\}_{j=1...M}))\\
\mathcal{T}_{T}(h_t) = \frac{1}{M}\sum_{i=1}^{M} \textrm{KL}(\textrm{Cat}(\{\beta_{\theta_{new}}^{ij}(h_t)\}_{j=1...2}) \| \\ \textrm{Cat}(\{\beta_{\theta}^{ij}(h_t)\}_{j=1...2}))\\
\mathcal{T}_{L}(h_t) = \frac{1}{M}\sum_{j=1}^{M}\textrm{KL}(\mathcal{N}(\mu^j_{\theta_{new}}(h_t),\Sigma^j_{\theta_{new}}(h_t)) \| \\ \mathcal{N}(\mu^j_{\theta}(h_t),\Sigma^j_{\theta}(h_t)))
\end{align*}
where $\mathcal{T}_{H}$ evaluates the KL between the categorical distributions of the high-level controller, $\mathcal{T}_{T}$ is the average KL between the categorical distributions of all termination conditions, and $\mathcal{T}_{L}$ corresponds to the average KL across Gaussian components. In practice, we can exert additional control over the convergence of model components by applying different $\epsilon_M$ to different model parts (high-level controller, termination conditions, options).
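For diagonal Gaussian components, all three terms have closed forms. A minimal NumPy sketch per history $h_t$, with shapes $[M]$ for the controller probabilities, $[M, 2]$ for the termination probabilities, and $[M, D]$ for component means and variances (all names are illustrative):
\begin{verbatim}
import numpy as np

def kl_cat(p, q, eps=1e-8):
    # KL between categorical distributions given as probabilities.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * (np.log(p) - np.log(q)))

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, diag var_p) || N(mu_q, diag var_q) ).
    return 0.5 * np.sum(np.log(var_q / var_p) - 1.0
                        + (var_p + (mu_p - mu_q) ** 2) / var_q)

def distance(alpha_n, alpha_o, beta_n, beta_o,
             mu_n, var_n, mu_o, var_o):
    M = alpha_n.shape[0]
    t_h = kl_cat(alpha_n, alpha_o)
    t_t = np.mean([kl_cat(beta_n[i], beta_o[i]) for i in range(M)])
    t_l = np.mean([kl_diag_gauss(mu_n[i], var_n[i],
                                 mu_o[i], var_o[i])
                   for i in range(M)])
    return t_h + t_t + t_l

# The Lagrange multiplier alpha in Eq. (lagrange) can be adapted
# in parallel with theta by a clipped (sub)gradient step,
#   alpha <- max(alpha + lr * (distance(...) - eps_m), 0),
# which grows alpha whenever the distance exceeds eps_m.
\end{verbatim}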
\subsection{Transition Probabilities for Option and Switch Indices}
The transitions for option $o$ and switch index $n$ are given by:
\begin{align}
p(o_t,n_t|s_t,o_{t-1},n_{t-1}) = ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ &\nonumber\\
\begin{cases} \label{eq:optionswitch_transitions}
(1-\beta(s_t,o_{t-1})) & \text{if } n_{t} = n_{t-1}, o_t=o_{t-1} \\
\beta(s_t,o_{t-1}) \pi^C(o_t|s_t) & \text{if } n_{t} = n_{t-1}+1\\
0 & \text{otherwise}
\end{cases}
\end{align}
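The posterior over options and switch counts then follows from a simple forward recursion over Equation \ref{eq:optionswitch_transitions}, and limiting the number of switches amounts to truncating the switch index $n$. A minimal single-trajectory NumPy sketch, assuming precomputed arrays \texttt{beta[t, o]} $= \beta(s_t, o)$, \texttt{pi\_c[t, o]} $= \pi^C(o|s_t)$ and \texttt{log\_a[t, o]} $= \log \pi(a_t|s_t, o)$ (names are illustrative; a log-space variant is advisable for long trajectories):
\begin{verbatim}
import numpy as np

def option_switch_forward(beta, pi_c, log_a, max_switches=None):
    # Unnormalized forward pass f[n, o] over options o and switch
    # counts n, following the transition model above. With
    # max_switches set, probability mass that would require a
    # further switch is dropped (truncated marginalization).
    T, M = log_a.shape
    N = T if max_switches is None else max_switches + 1
    f = np.zeros((N, M))
    f[0] = pi_c[0] * np.exp(log_a[0])  # first option, no switch yet
    for t in range(1, T):
        stay = (1.0 - beta[t]) * f      # o_t = o_{t-1}, n_t = n_{t-1}
        switch = np.zeros_like(f)
        mass = (beta[t] * f).sum(axis=1)        # terminate any option
        switch[1:] = mass[:-1, None] * pi_c[t]  # n_t = n_{t-1} + 1
        f = (stay + switch) * np.exp(log_a[t])
    return f / f.sum()  # posterior over (n, o) at the final step
\end{verbatim}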
\section{Experiments}\label{sec:experiments}
In this section, we aim to answer a set of questions to better understand the contribution of different aspects to the performance of option learning - in particular with respect to the proposed method, HO2.
To start, in Section \ref{sec:single} we explore two questions: (1) How well does HO2 perform in comparison to existing option learning methods? and (2) How important is off-policy training in this context?
We use a set of common OpenAI gym \citep{Brockman2016OpenAIG} benchmarks to answer these questions.
In Section \ref{sec:multi} we ask: (3) How do action abstraction in mixture policies and the additional temporal abstraction brought by option policies individually impact performance?
We use more complex, pixel-based 3D robotic manipulation experiments to investigate these two aspects and evaluate scalability with respect to higher dimensional input and state spaces.
We also explore: (4) How does increased temporal consistency impact performance, particularly with respect to sequential transfer with pre-trained options?
Finally, we perform additional ablations in Section \ref{sec:ablations} to investigate the challenges of robust off-policy option learning and improve understanding of how options decompose behavior based on various environment and algorithmic aspects.
Across domains, we use HO2~to train option policies, RHPO \citep{wulfmeier2019regularized} for the reduced case of mixture-of-Gaussians policies with sampling of options at every timestep and MPO \citep{abdolmaleki2018relative} to train individual flat Gaussian policies - all based on critic-weighted maximum likelihood estimation for policy optimization.
\subsection{Comparison of Option Learning Methods} \label{sec:single}
We compare HO2~(with and without limits on option switches) against competitive baselines for option learning in common, feature-based continuous action space domains. HO2~outperforms baselines including Double Actor-Critic (DAC) \citep{zhang2019dac}, Inferred Option Policy Gradient (IOPG) \citep{smith2018inference} and Option-Critic (OC) \citep{bacon2017option}.
With PPO~\citep{schulman2017proximal}, we include a commonly used on-policy method for flat policies which in addition serves as the foundation for the DAC algorithm.
As demonstrated in Figure \ref{fig:openaigym}, HO2~performs better than or commensurate to existing option learning algorithms such as DAC, IOPG and Option-Critic as well as PPO.
Training mixture policies (via RHPO \citep{wulfmeier2019regularized}) without temporal abstraction slightly reduces both performance and sample efficiency but still outperforms on-policy methods in many cases.
Here, enabling temporal abstraction (even without explicitly maximizing it) provides an inductive bias to reduce the search space for the high-level controller and can lead to more data-efficient learning, such that HO2~even without constraints performs better than RHPO.
Finally, while less data-efficient than HO2, even plain off-policy learning with flat Gaussian policies (here MPO~\citep{abdolmaleki2018maximum}) can outperform current on-policy option algorithms, for example in the HalfCheetah and Swimmer domains, while otherwise performing at least on par.
This emphasizes the importance of a strong underlying policy optimization method.
Using the switch constraints from Section \ref{sec:method} to increase temporal abstraction can provide small benefits in some tasks but has overall only a minor effect on performance. We further investigate this setting in sequential transfer in Section \ref{sec:sequential}.
It has to be noted that, given the comparative simplicity of these tasks, considerable performance gains are already achieved with pure off-policy training.
In the next section, we study more complex domains to isolate additional gains from action and temporal abstraction.
\subsection{Ablations: Action Abstraction and Temporal Abstraction}
\label{sec:multi}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[trim=1in 0.2in 0.5in 0, clip,height = 0.175\textwidth]{figures/env2.png}~
\includegraphics[height = 0.175\textwidth]{figures/env9.png}\\
\includegraphics[trim=0.in 0.8in 0.in 0.in, clip,height = 0.19\textwidth]{figures/env8.png}
\includegraphics[height = 0.19\textwidth]{figures/env3.png}
\end{tabular}
\caption{Ball-In-Cup (top) and Stacking (bottom). Left: Environments. Right: Example agent observations.}
\label{fig:evs}
\end{figure}
We next ablate different aspects of HO2~on more complex simulated 3D robot manipulation tasks - stacking and the more dynamic ball-in-cup (BIC) - as displayed in Figure \ref{fig:evs}, based on robot proprioception and raw pixel inputs ($64 \times 64$ pixels, 2 cameras for BIC and 3 for stacking).
Since the performance of HO2~for training from scratch is relatively independent of switch constraints (Figure \ref{fig:openaigym}), we will simplify our figures by focusing on the base method.
To reduce data requirements, we use a set of common techniques to improve data efficiency and accelerate learning for all methods. We apply multi-task learning with a related set of tasks to provide a curriculum, with details in Appendix \ref{app:additiona_exps}. Furthermore, we assign rewards for all tasks to any generated transition data in hindsight to improve data efficiency and exploration \citep{andrychowicz2017hindsight,riedmiller2018learning,wulfmeier2019regularized, cabi2017intentional}.
Across all tasks, except for simple positioning and reach tasks (see Appendix \ref{app:additiona_exps}), action abstraction improves performance (mixture policies via RHPO versus flat Gaussian policies via MPO).
In particular the more challenging stacking tasks shown in Figure \ref{fig:multitask} intuitively benefit from shared sub-behaviors with easier tasks.
Finally, the introduction of temporal abstraction (option policies via HO2~vs mixture policy via RHPO) further improves both performance and sample efficiency, especially on the more complex stacking tasks.
The ability to learn explicit termination conditions, which can be understood as binary classifiers deciding whether an option continues, instead of relying solely on the high-level controller as a classifier between all options, can considerably simplify the learning problem.
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width = 0.24\textwidth]{results/png/bic/BIC_4.png}
\includegraphics[width = 0.24\textwidth]{results/png/bic/BIC_0.png}\\
\includegraphics[width = 0.24\textwidth]{results/png/stack/pile1_5.png}
\includegraphics[width = 0.24\textwidth]{results/png/stack/pile1_6.png}
\end{tabular}
\vspace{-3pt}
\caption{Results for option policies and ablations via mixture policies and single Gaussian policies (respectively HO2, RHPO, and MPO) with pixel-based ball-in-cup (left) and pixel-based block stacking (right). All four tasks displayed use sparse binary rewards, such that the obtained reward represents the number of timesteps where the corresponding condition - such as the ball being in the cup - is fulfilled.
See Appendix \ref{app:details} for details and additional tasks.}
\label{fig:multitask}
\vspace{-8pt}
\end{figure}
\paragraph{Optimizing for Temporal Abstraction}
\label{sec:sequential}
There is a difference between simplifying the representation of temporal abstraction for the agent and explicitly maximizing it.
The ability to represent temporally abstract behavior in HO2~via the use of explicit termination conditions consistently helps in prior experiments.
However, these experiments show limited benefit when increasing temporal consistency (by limiting the number of switches following Section \ref{sec:temporal_abstraction}) for training from scratch.
In this section, we further evaluate temporal abstraction for sequential transfer with pre-trained options.
We first train low-level options for all tasks except for the most complex task in each domain by applying HO2. Next, given a set of pre-trained options, we only train the final task and compare training with and without maximizing temporal abstraction.
We use the domains from Section \ref{sec:multi}, block stacking and BIC.
As shown in Figure \ref{fig:sequential}, we can see that more consistent options lead to increased performance in the transfer domain. Intuitively, increased temporal consistency and fewer switches lead to a smaller search space from the perspective of the high-level controller.
While the same mechanism should also apply for training from scratch, we hypothesize that the added complexity of simultaneously learning the low-level behaviors (while maximizing temporal consistency) outweighs the benefits. Finding a set of options which not only solve a task but also represent temporally consistent behavior can be harder, and require more data, than just solving the task.
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[trim=3mm 0 0 0,clip,height = 0.158\textwidth]{results/png/sequential/bic_sequential.png}
\includegraphics[height = 0.158\textwidth]{results/png/sequential/bic_sequential_switchrate.png}\\
\includegraphics[height = 0.158\textwidth]{results/png/sequential/pile_sequential.png}
\includegraphics[height = 0.158\textwidth]{results/png/sequential/pile_sequential_switchrate.png}
\end{tabular}
\caption{The sequential transfer experiments for temporal abstraction show considerable improvements for limited switches. Top: BIC. Bottom: Stack. In addition, we visualize the actual agent option switch rate in the environment to directly demonstrate the constraint's effect.}
\label{fig:sequential}
\end{figure}
\subsection{Ablations: Off-policy, Robustness, Decomposition} \label{sec:ablations}
In this section, we investigate different algorithmic aspects to get a better understanding of the method, properties of the learned options, and how to achieve robust training in the off-policy setting.
\paragraph{Off-policy Option Learning}
In off-policy hierarchical RL, the low-level policy underlying an option can change after trajectories are generated.
This results in a non-stationarity for training the high-level controller.
In addition, including previously executed actions in the forward computation for component probabilities can introduce additional variance into the objective.
In practice, we find that removing the conditioning on low-level probabilities (the $\pi_L$ terms in Equation \ref{eq:dynamic1}) improves performance and stability.
The effect is displayed in Figure \ref{fig:ac}, where the conditioning of high-level component probabilities on
low-level action probabilities (see Section \ref{sec:method}) is detrimental.
\begin{figure}[h]
\centering
\setlength{\tabcolsep}{1pt}
\begin{tabular}{cc}
\includegraphics[trim=.1in 0 17.5in 0,clip,height = 0.165\textwidth]{results/png/gym_ac.png}
\includegraphics[trim=12in 0 5.7in 0,clip,height = 0.165\textwidth]{results/png/gym_ac.png}
\end{tabular}
\caption{Results on OpenAI gym with/without option probabilities being conditioned on past actions.
}
\label{fig:ac}
\end{figure}
We additionally evaluate this effect in the on-policy setting in Appendix~\ref{sec:off_policy_exps} and find its impact to be diminished, demonstrating the connection between the effect and an off-policy setting.
While we apply this simple heuristic for HO2, the problem has led to various off-policy corrections for goal-based HRL \citep{nachum2018data,levy2017learning} which provide a valuable direction for future work.
\paragraph{Trust-regions and Robustness}
Previous work has shown the benefits of applying trust-region constraints for policy updates of non-hierarchical policies \citep{schulman2015trust, abdolmaleki2018maximum}.
In this section, we vary the strength of constraints on the option probability updates (both termination conditions $\beta$ and the high-level controller $\pi_C$).
As displayed in Figure~\ref{fig:trustregion}, the approach is robust across multiple orders of magnitude, but very
weak or strong constraints can considerably degrade performance.
Note that a high value is essentially equal to not using a constraint and causes very low performance. Therefore, option learning here relies strongly on trust-region constraints. Further experiments can be found in Appendix~\ref{sec:trust_region_exps}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[trim=3mm 3mm 0 0,clip,width=0.23\textwidth]{results/png/trustregion/pile1_tr_2.png}
\includegraphics[trim=3mm 3mm 0 0,clip,width=0.23\textwidth]{results/png/trustregion/pile1_tr_6.png}
\end{tabular}
\caption{Block stacking results for two tasks with different trust-region constraints. Note that the importance of constraints increases for more complex tasks.}
\label{fig:trustregion}
\end{figure}
\paragraph{Decomposition and Option Clustering}
To investigate how HO2~uses its capacity and decomposes behavior into options, we apply it to a variety of simple and interpretable locomotion tasks.
In these tasks, the agent body (``Ball'', ``Ant'', or ``Quadruped'') must go to one of three targets in a room, with the task specified by the target locations and a selected target index.
As shown for the ``Ant'' case in Figure~\ref{fig:predicate_decomp}, we find that option decomposition intuitively depends on both the task properties and algorithm settings. In particular \textit{information asymmetry (IA)}, achieved by providing task information only to the high-level controller, can address degenerate solutions and lead to increased diversity with respect to options (as shown by the histogram over options) and more specialized options (represented by the clearer clustering of samples in action space).
We can measure this quantitatively, using (1) the Silhouette score, a measure of clustering accuracy based on inter- and intra-cluster distances\footnotemark; and (2) entropy over the option histogram, to quantify diversity.
These metrics are reported in Table~\ref{table:predicates_quant_main} for all bodies, with and without information asymmetry.
The results show that for all cases, IA leads to greater option diversity and clearer separation of option clusters with respect to action space, state space, and task.
\footnotetext{The silhouette score is a value in $[-1, 1]$ with higher values indicating cluster separability. We note that the values obtained in this setting do not correspond to high \textit{absolute} separability, as multiple options can be used to model the same skill or behavior abstraction. We are instead interested in the \textit{relative} clustering score for different scenarios.}
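Both metrics can be computed directly from logged rollouts; the following is a minimal sketch (our own helper with hypothetical array names, not part of any released code), assuming \texttt{actions} holds the executed actions and \texttt{options} the active option indices:
\begin{verbatim}
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import silhouette_score

def option_diversity_metrics(actions, options, num_options):
    # Entropy of the empirical option histogram: higher values mean
    # probability mass is spread over more options (more diversity).
    counts = np.bincount(options, minlength=num_options)
    option_entropy = entropy(counts / counts.sum())
    # Silhouette score of actions grouped by option: higher values
    # mean clearer separation of options in action space. Requires
    # at least two distinct option labels in the batch.
    s_action = silhouette_score(actions, options)
    return option_entropy, s_action
\end{verbatim}
The same call with state or task features in place of \texttt{actions} would yield the state- and task-space scores reported in Table \ref{table:predicates_quant_main}.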
More extensive experiments and discussion can be found in Appendix~\ref{sec:locomotion}, including additional quantitative and qualitative results for the other bodies and scenarios.
To summarize, the analyses yield a number of relevant observations, showing that (1) for simpler bodies like a ``Ball'', the options are interpretable (forward torque, and turning left/right at different rates); and (2) applying the switch constraint introduced in Section~\ref{sec:sequential} leads to increased temporal abstraction without reducing the agent's ability to solve the task.
\begin{figure}[h]
\includegraphics[scale = 0.19, trim=3mm 0 0 0, clip]{results/predicates/ant_hist_noIA_main.png}
\includegraphics[scale = 0.21, trim=0 0 0 3mm, clip]{results/predicates/ant_actions_noIA_main.png}
\includegraphics[scale = 0.19, trim=3mm 0 0 0, clip]{results/predicates/ant_hist_fullIA_main.png}
\includegraphics[scale = 0.21, trim=0 0 0 3mm, clip]{results/predicates/ant_actions_fullIA_main.png}
\caption{Analysis on Ant locomotion tasks, showing histogram over options, and t-SNE scatter plots in action space colored by option. Left: without IA. Right: with IA. Agents with IA use more components and show clearer option clustering in the action space.}
\label{fig:predicate_decomp}
\end{figure}
\begin{table}[!h]
\centering
\scalebox{0.65}{
\begin{tabular}{ c c | c c c c }
\hline \\ [-1.5ex]
\multicolumn{2}{c}{\bf{Scenario}} & \bf{Option entropy} & \bf{s (action)} & \bf{s (state)} & \bf{s (task)} \\
\hline \\ [-1.5ex]
\multirow{2}{*}{Ball} & No IA & $1.80 \pm 0.21$ & $-0.30 \pm 0.04$ & $-0.25 \pm 0.14$ & $-0.13 \pm 0.05$ \\
& With IA & $2.23 \pm 0.03$ & $-0.13 \pm 0.04$ & $-0.11 \pm 0.04$ & $-0.05 \pm 0.00$ \\
\cline{2-6} \\ [-1.5ex]
\multirow{2}{*}{Ant} & No IA & $1.60 \pm 0.08$ & $-0.12 \pm 0.02$ & $-0.15 \pm 0.07$ & $-0.08 \pm 0.03$ \\
& With IA & $2.22 \pm 0.04$ & $-0.05 \pm 0.01$ & $-0.05 \pm 0.01$ & $-0.05 \pm 0.01$ \\
\cline{2-6} \\ [-1.5ex]
\multirow{2}{*}{Quad} & No IA & $1.55 \pm 0.29$ & $-0.07 \pm 0.04$ & $-0.12 \pm 0.03$ & $-0.11 \pm 0.02$\\
& With IA & $2.23 \pm 0.04$ & $-0.03 \pm 0.03$ & $-0.03 \pm 0.00$ & $-0.05 \pm 0.01$ \\
\hline
\end{tabular}}
\vspace{1ex}
\caption{Quantitative results
indicating the diversity of options used (entropy), and clustering accuracy in action and state spaces (silhouette score $s$), with and without information asymmetry (IA). Switching constraints are applied in all cases.
Higher values indicate greater separability by option / component.}
\label{table:predicates_quant_main}
\end{table}
\section{Introduction}\label{sec:introduction}
Deep reinforcement learning has seen numerous successes in recent years~\citep{silver2017mastering,openai2018dexterous,vinyals2019grandmaster}, but still faces challenges in domains where data is limited or expensive.
One candidate solution to address the challenges and improve data efficiency is to impose hierarchical policy structures.
By dividing an agent into a combination of low-level and high-level controllers, the options framework \citep{sutton1999between,precup2000temporal} introduces a form of action abstraction, effectively reducing the high-level controller's task to choosing from a discrete set of reusable sub-policies.
The framework further enables temporal abstraction by explicitly modelling the temporal continuation of low-level behaviors.
Unfortunately, in practice, hierarchical control schemes often introduce technical challenges, including a tendency to learn degenerate solutions preventing the agent from using its full capacity \citep{harb2018waiting}, undesirable trade-offs between learning efficiency and final performance \citep{harut2019termination}, or the increased variance of updates \citep{precup2000temporal}.
Additional challenges in off-policy learning for hierarchical approaches \citep{NIPS2005_2767} led to a focus of recent works on the on-policy setting, forgoing the considerable improvements in data efficiency often connected to off-policy methods.
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[trim=0 3mm 0 0,clip,width = 0.29\textwidth,valign=b]{figures/gmf.png} &
\includegraphics[trim=0 9mm 0 0,clip,width = 0.32\textwidth,valign=b]{figures/gm.png} &
\includegraphics[width = 0.29\textwidth,valign=b]{figures/gmmc3.png}
\end{tabular}
\vspace{-1mm}
\caption{Graphical model for flat policies (left), mixture policies (middle) - introducing a type of action abstraction, and option policies (right) - adding temporal abstraction via autoregressive options.
While the action $a$ is solely dependent on the state $s$ for flat policies, mixture policies introduce the additional component or option $o$ which affects the actions (following Equation \ref{eq:mixture}). Option policies do not change the direct dependencies for actions but instead affect the options themselves, which are now also dependent on the previous option and its potential termination $b$ (following Equation \ref{eq:optiontransitions}). }
\label{fig:2dimensional}
\vspace{-1mm}
\end{figure*}
We propose an approach to address these drawbacks, Hindsight Off-policy Options~(HO2): a method for data-efficient, robust, off-policy option learning.
The algorithm simultaneously learns a high-level controller and low-level options via a single end-to-end optimization procedure. It improves data efficiency by leveraging off-policy learning and inferring distributions over options for trajectories in hindsight to maximize the likelihood of good actions and options.
To facilitate off-policy learning the algorithm does not condition on executed options but treats these as latent variables during optimization and marginalizes over all options to compute the exact likelihood.
HO2~backpropagates through the resulting dynamic programming inference graph (conceptually related to \citep{rabiner1989tutorial,shiarlis2018taco, smith2018inference}) to enable the training of all policy components from trajectories, independent of the executed option.
As an additional benefit, the formulation of the inference graph allows us to impose intuitive, hard constraints on the option termination frequency, thereby regularizing the learned solution (and encouraging temporally-extended behaviors) independently of the scale of the reward.
The policy update follows an expectation-maximization perspective and generates an intermediate, non-parametric policy, which is adapted to maximize agent performance. This enables the update of the parametric policy to rely on simple weighted maximum likelihood, without requiring further approximations such as Monte Carlo estimation or continuous relaxation~\citep{li2019sub}.
Finally, the updates are stabilized using adaptive trust-region constraints, demonstrating the importance of robust policy optimization for hierarchical reinforcement learning (HRL) in line with recent work on on-policy option learning \citep{zhang2019dac}.
We experimentally compare HO2~to prior option learning methods.
By treating options as latent variables in off-policy learning and enabling backpropagation through the inference procedure, HO2~proves more efficient than prior approaches such as the Option-Critic \citep{bacon2017option} or DAC \citep{zhang2019dac}. HO2 additionally outperforms IOPG \citep{smith2018inference}, which considers a similar perspective but still builds on on-policy training.
To better understand different abstractions in option learning, we compare with corresponding policy optimization methods for flat policies \citep{abdolmaleki2018relative} and mixture policies without temporal abstraction \citep{wulfmeier2019regularized},
thereby allowing us to isolate the benefits of both action and temporal abstraction.
Both properties demonstrate particular relevance in more demanding simulated robot manipulation tasks from raw pixel inputs.
We further perform extensive ablations to evaluate the impact of trust-region constraints, off-policyness, option decomposition, and the benefits of maximizing temporal abstraction when using pre-trained options versus learning from scratch.
Our main contributions include:
\begin{itemize}
\item A robust, efficient off-policy option learning algorithm enabled by a probabilistic inference perspective on HRL. The method outperforms existing option learning methods on common benchmarks and demonstrates benefits on pixel-based 3D robot manipulation tasks.
\item An intuitive technique to further encourage temporal abstraction beyond the core method, using the inference graph to constrain option switches without additional weighted loss terms.
\item A careful analysis to improve our understanding of the options framework by isolating the impact of action abstraction and temporal abstraction.
\item Further ablation and analysis of several algorithmic choices: trust-region constraints, off-policy versus on-policy data, option decomposition, and the importance of temporal abstraction with pre-trained options versus learning from scratch.
\end{itemize}
\section*{Acknowledgments}
The authors would like to thank Peter Humphreys, Satinder Baveja, Tobias Springenberg, and Yusuf Aytar for helpful discussion and relevant feedback which helped to shape the publication.
We additionally like to acknowledge the support of the DeepMind robotics lab for infrastructure and engineering support.
\clearpage
\section{Method}\label{sec:method}\label{sec:prelims}
We start by considering a reinforcement learning setting with an agent operating in a Markov Decision Process (MDP) consisting of the state space $\mathcal{S}$, the action space $\mathcal{A}$, and the transition probability $p(s_{t+1}|s_t,a_t)$ of reaching state $s_{t+1}$ from state $s_t$ when executing action $a_t$.
The agent's behavior is commonly described as a conditional distribution with actions $a_t$ drawn from the agent's policy $\pi(a_t | s_t)$.
Jointly, the transition dynamics and policy induce the marginal state visitation distribution $p(s_t)$. The discount factor $\gamma$ together with the reward $r_t=r\left(s_{t}, a_{t}\right)$ gives rise to the expected return, which the agent aims to maximize:
$J(\pi) = \mathbb{E}_{p\left(s_t\right),\pi(a_t|s_t)}\Big[\sum_{t=0}^{\infty} \gamma^{t} r_t \Big]$.
\subsection{Policy Types}
Option policies introduce temporal and action abstraction in comparison to commonly-used flat Gaussian policies.
Our goal in this work is not only to introduce this additional structure to improve data efficiency but to further understand the impact of the different abstractions.
For this purpose, we further study mixture distributions. They represent an intermediate case with only action abstraction, as described in Figure \ref{fig:2dimensional}.
We begin by covering both policy types in the following paragraphs. First, we focus on computing likelihoods of actions (and options) under a policy. Then, we describe the proposed critic-weighted maximum likelihood algorithm to train hierarchical policies.
\paragraph{Mixture Policies} This type extends flat policies $\pi(a_t | s_t)$ by introducing a high-level controller that samples from multiple options (low-level policies) independently at each timestep (Figure \ref{fig:2dimensional}).
The joint probability of actions and options
is given as:
\begin{align}\label{eq:mixture}
\pi_{\theta}(a_t,o_t | s_t) = \pi^L \left(a_t | s_t, o_t\right) \pi^H\left(o_t | s_t\right),
\end{align}
where $\pi^H$ and $\pi^L$ respectively represent high-level policy (which for the mixture is equal to a Categorical distribution $\pi^H\left(o_t | s_t\right) = \pi^C\left(o_t | s_t\right)$) and low-level policy (components of the resulting mixture distribution), and $o$ is the index of the sub-policy or mixture component.
\paragraph{Option Policies} This type extends mixture policies by incorporating temporal abstraction.
We follow the semi-MDP and \textit{call-and-return} option model \citep{sutton1999between}, defining an option as a triple $(I(s_t,o_t),\pi^L(a_t|s_t,o_t),\beta(s_t,o_t))$. The initiation condition $I$ describes an option's probability to start in a state and is simplified to $I(s_t,o_t)=1 \forall s_t \in \mathcal{S}$ following \citep{bacon2017option,zhang2019dac}. The termination condition $b_t \sim \beta(s_t, o_t)$ denotes a Bernoulli distribution describing the option's probability to terminate in any given state, and the action distribution for a given option is modelled by $\pi^L(a_t|s_t,o_t)$.
Every time the agent observes a state, the current option's termination condition is sampled. If subsequently no option is active, a new option is sampled from the controller $\pi^C(o_{t}|s_{t})$. Finally, we sample from either the continued or new option to generate a new action.
The resulting transition probabilities between options are described by
\begin{align}
p\left(o_t | s_t, o_{t-1}\right) =
\begin{cases}
1-\beta(s_t,o_{t-1}) \left(1- \pi^C(o_t|s_t)\right) & \text{if } o_{t} = o_{t-1} \\
\beta(s_t,o_{t-1})\, \pi^C(o_t|s_t) & \text{otherwise}
\end{cases}
\label{eq:optiontransitions}
\end{align}
During interaction in an environment, an agent samples individual options. However, during learning HO2~takes a probabilistic inference perspective with options as latent variables and states and actions as observed variables.
This allows us to infer likely options over a whole trajectory in hindsight, leading to efficient intra-option learning \citep{precup2000temporal} for all options independently of the executed option. This is particularly relevant for off-policy learning, as options can change between data generation and learning.
Following the graphical model in Figure \ref{fig:2dimensional} and the corresponding transition probabilities in Equation \ref{eq:optiontransitions}, the probability of being in option $o_t$ at timestep $t$ across a trajectory $h_t= \{s_t, a_{t-1}, s_{t-1},... s_0, a_0\}$ is determined recursively from the previous timestep's option probabilities. For the first timestep, the probabilities are given by the high-level controller $\pi^{H}\left(o_0 | h_0\right) = \pi^C\left(o_0 | s_0\right)$. For all consecutive steps, they are computed as follows for $M$ options:
\begin{eqnarray}
\begin{aligned}
\tilde{\pi}^{H}\left(o_t | h_t\right) = \sum_{o_{t-1}=1}^M \big[ p\left(o_t|s_t, o_{t-1}\right) \pi^H\left(o_{t-1} | h_{t-1}\right) \label{eq:dynamic1} \\
{\pi^L\left(a_{t-1} | s_{t-1}, o_{t-1}\right)}\big]
\end{aligned}
\end{eqnarray}
The distribution is normalized at each timestep following $\pi^H\left(o_t | h_t\right) = {\tilde{\pi}^{H}\left(o_t | h_t\right)}/ {\sum_{o'_{t}=1}^M\tilde{\pi}^{H}\left(o'_t | h_t\right)}$.
Performing this exact marginalization at each timestep is much more efficient than computing independently over all possible sequences of options and reduces variance compared to sampling-based approximations.
Building on the option probabilities, Equation \ref{eq:option_probs} conceptualizes the connection between mixture and option policies.
\begin{align}\label{eq:option_probs}
\pi_{\theta}(a_t,o_t | h_{t}) = \pi^L\left(a_t | s_t, o_t\right) \pi^H\left(o_t | h_t\right)
\end{align}
In both cases, the low-level policies $\pi^L$ only depend on the current state.
However, where mixtures only depend on the current state $s_t$ for the high-level probabilities $\pi^H$, with options we can take into account compressed information about the history $h_{t}$ as facilitated by the previous timestep's distribution over options $\pi^H\left(o_{t-1} | h_{t-1}\right)$.
This dynamic programming formulation in Equation \ref{eq:dynamic1}
enables the exact computation of the likelihood of actions and options along off-policy trajectories.
We can use automatic differentiation in modern deep learning frameworks (e.g. \citep{abadi2016tensorflow}) to backpropagate through the graph and determine the gradient updates for all policy parameters.
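Concretely, the recursion in Equation \ref{eq:dynamic1} is analogous to the forward algorithm for hidden Markov models. The following is a minimal NumPy sketch under our own naming conventions (not the authors' implementation), assuming \texttt{pi\_C}, \texttt{beta} and \texttt{pi\_L} hold the controller, termination, and per-option action probabilities evaluated along one sampled trajectory:
\begin{verbatim}
import numpy as np

def option_forward_pass(pi_C, beta, pi_L):
    # pi_C: (T, M) controller probabilities pi^C(o_t | s_t)
    # beta: (T, M) termination probabilities beta(s_t, o_{t-1})
    # pi_L: (T, M) action likelihoods pi^L(a_t | s_t, o_t)
    T, M = pi_C.shape
    pi_H = np.zeros((T, M))
    pi_H[0] = pi_C[0]  # first step: controller only
    for t in range(1, T):
        # Transition matrix p(o_t | s_t, o_{t-1}) from the equation
        # above: switch: beta * pi_C; stay: 1 - beta * (1 - pi_C).
        trans = beta[t][:, None] * pi_C[t][None, :]
        trans[np.arange(M), np.arange(M)] = \
            1.0 - beta[t] * (1.0 - pi_C[t])
        # Weight the previous posterior by the likelihood of the
        # executed action, then push it through the transitions.
        prev = pi_H[t - 1] * pi_L[t - 1]
        pi_H[t] = prev @ trans
        pi_H[t] /= pi_H[t].sum()  # normalize at every timestep
    return pi_H
\end{verbatim}
In an actual agent these operations run on batched tensors inside the learning framework so that gradients flow through the recursion.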
\subsection{Agent Updates}
We continue by describing the policy improvement algorithm, which uses the previously determined option probabilities.
The three main steps are: 1) update the critic (Eq. \ref{eq:objective_q_value}); 2) generate an intermediate, non-parametric policy based on the updated critic (Eq. \ref{eq:objective_q}); 3) update the parametric policy to align to the non-parametric improvement (Eq. \ref{eq:objective_pi}).
By handling the maximization of expected returns with a closed-form solution for a non-parametric intermediate policy, the update of the parametric policy can build on simple, weighted maximum likelihood.
In essence, we do not rely on differentiating an expectation over a distribution with respect to parameters of the distribution.
This enables training a broad range of distributions (including discrete ones) without further approximations such as required when the update relies on the reparametrization trick \citep{li2019sub}.
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.43\textwidth]{figures/2d.png}&
\includegraphics[width = 0.53\textwidth]{figures/3d.png}
\end{tabular}
\caption{Representation of the dynamic programming forward pass - bold arrows represent connections without switching. Left: example with two options. Right: extension of the graph to explicitly count the number of switches. Marginalization over the dimension of switches determines component probabilities. By limiting over which nodes to sum at every timestep, the optimization can be targeted to fewer switches and more consistent option execution.}
\label{fig:example}
\end{figure*}
\paragraph{Policy Evaluation}
In comparison to prior work on training mixture policies~\citep{wulfmeier2019regularized}, the critic for option policies is a function of $s$, $a$, and $o$ since the current option
influences the likelihood of future actions and thus rewards. Note that even though we express the policy as a function of the history $h_t$, $Q$ is a function of $o_t, s_t, a_t$, since these are sufficient to render the future trajectory independent of the past (see the graphical model in Figure \ref{fig:2dimensional}).
We define the TD(0) objective as
\begin{align}
\label{eq:objective_q_value}
\min_\phi L(\phi) = \mathbb{E}_{s_t,a_t,o_t \sim \mathcal{D}} \Big[ \big( Q_{\text{T}} - {Q}_\phi(s_t, a_t, o_t)\big)^2 \Big],
\end{align}
where the current states, actions, and options are sampled from the current replay buffer $\mathcal{D}$.
For the 1-step target $Q_{\text{T}}= r_t+\gamma \mathbb{E}_{s_{t+1},a_{t+1},o_{t+1}}[{{Q'}}(s_{t+1}, a_{t+1}, o_{t+1})]$,
the expectation over the next state is approximated with the sample $s_{t+1}$ from the replay buffer,
and we estimate the value by sampling actions and options according to $a_{t+1}, o_{t+1} \sim \pi'(\cdot|h_{t+1})$ following Equation \ref{eq:option_probs}. $\pi'$ and $Q'$ respectively represent target networks for policy and critic which are used to stabilize training.
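A minimal sketch of this target computation (the names are ours; \texttt{target\_policy.sample} is assumed to return option-action pairs following Equation \ref{eq:option_probs}):
\begin{verbatim}
def td0_target(r_t, s_next, h_next, target_policy, target_q,
               gamma, n_samples=10):
    # 1-step target Q_T = r_t + gamma * E[Q'(s', a', o')], with the
    # expectation estimated by sampling from the target policy.
    q_sum = 0.0
    for _ in range(n_samples):
        o_next, a_next = target_policy.sample(h_next)
        q_sum += target_q(s_next, a_next, o_next)
    return r_t + gamma * q_sum / n_samples
\end{verbatim}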
\paragraph{Policy Improvement}
We follow an Expectation-Maximization procedure similar to \citep{wulfmeier2019regularized,abdolmaleki2018maximum}, which first computes an improved non-parametric policy and then updates the parametric policy to match this target.
In comparison to prior work, the policy does not only depend on the current state $s_t$ but also on compressed information about the rest of the previous trajectory $h_t$, building on Equation \ref{eq:dynamic1}.
Given the critic, all we require to optimize option policies is the ability to sample from the policy and determine the log-likelihood (gradient) of actions and options under the policy.
The first step provides us with a non-parametric policy $q(a_t,o_t|h_t)$.
\begin{eqnarray}
\begin{aligned}
\max_{q} J(q) =\ \mathbb{E}_{a_t,o_t \sim q, h_t \sim \mathcal{D}}\big[{Q_\phi}(s_t, a_t, o_t) \big], \\
\text{s.t. } \mathbb{E}_{h_t \sim \mathcal{D}}\Big[
\mathrm{KL}\big(q(\cdot|h_t) \| \pi_\theta(\cdot|h_t)\big)\Big] \le \epsilon_E,
\end{aligned}
\label{eq:objective_q}
\end{eqnarray}
where $\mathrm{KL}(\cdot \| \cdot)$ denotes the Kullback-Leibler divergence, and $\epsilon_E$ defines a bound on the KL.
We can find the solution to the constrained optimization problem in Equation \ref{eq:objective_q} in closed-form and obtain
\begin{equation} \label{eq:q_closed}
q(a_t,o_t|h_t) \propto \pi_{\theta}(a_t,o_t|h_t) \exp\left({{{Q_\phi}(s_t,a_t,o_t)}/{\eta}}\right).
\end{equation}
Practically speaking, this step computes samples from the previous policy and weights these based on the corresponding temperature-calibrated values of the critic. The temperature parameter $\eta$ is computed following the dual of the Lagrangian. The derivation and final form of the dual can be found in Appendix \ref{app:nonparam}, Equation \ref{eq:dual_eta}.
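As a sketch (with our own names, following the MPO-style dual), the sample re-weighting and the temperature objective can be written as:
\begin{verbatim}
import numpy as np

def nonparametric_weights(q_vals, eta):
    # q_vals: (B, K) critic values for K sampled (action, option)
    # pairs per history. Weights of the improved non-parametric
    # policy: samples re-weighted by exp(Q / eta), normalized per
    # history.
    w = np.exp(q_vals / eta)
    return w / w.sum(axis=1, keepdims=True)

def dual_loss(eta, q_vals, epsilon):
    # Dual g(eta); minimizing it calibrates the temperature so the
    # KL between q and pi_theta stays within epsilon.
    log_mean_exp = np.log(np.mean(np.exp(q_vals / eta), axis=1))
    return eta * epsilon + eta * np.mean(log_mean_exp)
\end{verbatim}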
To align the parametric to the improved non-parametric policy in the second step, we minimize their KL divergence.
\begin{eqnarray}
\begin{aligned}
\theta = \arg \min_{\theta} \mathbb{E}_{h_t \sim \mathcal{D}}\Big[ \mathrm{KL}\big( q(\cdot | h_t) \| \pi_{\theta}(\cdot | h_t) \big) \Big], \label{eq:objective_pi}\\
\text{s.t. } \mathbb{E}_{h_t \sim \mathcal{D}}\Big[
\mathcal{T}\big(\pi_{\theta_{+}}(\cdot|h_t) \| \pi_{\theta}(\cdot|h_t)\big)\Big] \le \epsilon_M
\end{aligned}
\end{eqnarray}
The distance function $\mathcal{T}$ in Equation \ref{eq:objective_pi} has a trust-region effect and stabilizes learning by constraining the change in the parametric policy. The computed option probabilities from Equation \ref{eq:dynamic1} are used in Equation \ref{eq:q_closed} to enable sampling of options as well as Equation \ref{eq:objective_pi} to determine and maximize the likelihood of samples under the policy.
We can apply Lagrangian relaxation again and solve the primal as detailed in Appendix \ref{app:param}.
Finally, we describe the complete pseudo-code for HO2~in Algorithm \ref{alg:learner}.
Note that both Gaussian and mixture policies have been trained in prior work via methods relying on critic-weighted maximum likelihood \citep{abdolmaleki2018relative, wulfmeier2019regularized}. By comparing with the extension towards option policies described above, we will make use of this connection to isolate the impact of action abstraction and temporal abstraction in the option framework in Section \ref{sec:multi}.
\begin{algorithm}[t]
\caption{Hindsight Off-policy Options}\label{alg:learner}
\begin{algorithmic}
\STATE \textbf{input:} initial parameters for $\theta,~\eta$ and $\phi$, KL regularization parameters $\epsilon$, set of trajectories $\tau$
\WHILE{not done}
\STATE sample trajectories $\tau$ from replay buffer
\STATE \color{gray}// forward pass along sampled trajectories
\STATE \color{black}determine component probabilities $\pi^H\left(o_t | h_t\right)$ (Eq. \ref{eq:dynamic1})
\STATE sample actions $a_j$ and options $o_j$ from $\pi_{\theta}(\cdot|h_t)$ (Eq. \ref{eq:option_probs}) to estimate expectations
\STATE \color{gray}// compute gradients over batch for policy, Lagrangian multipliers and Q-function
\STATE \color{black}$\delta_\theta \leftarrow -\nabla_\theta \sum_{h_t \in \tau} \sum_{j} [ \exp\left({Q_\phi(s_t,a_j,o_j)}/{\eta}\right)$ \\ \hfill $\log \pi_{\theta}(a_j, o_j | h_t) ] $ following Eq. \ref{eq:q_closed} and \ref{eq:objective_pi}
\STATE $\delta_{\eta} \leftarrow \nabla_\eta g(\eta) = \nabla_\eta \eta\epsilon+\eta \sum_{h_t \in \tau} \log \sum_{j}[$ \\ \hfill $\exp\left({Q_\phi(s_t,a_j,o_j)}/{\eta}\right) ]$ following Eq. \ref{eq:dual_eta}
\STATE $\delta_\phi \leftarrow \nabla_{\phi} \sum_{(s_t, a_t, o_t) \in \tau} \big( {Q}_\phi(s_t, a_t, o_t) -
Q_{\text{T}} \big)^2$ \\ \hfill following Eq. \ref{eq:objective_q_value}
\STATE \color{black}update $\theta, \eta, \phi$ \color{gray}~~~~~~// apply gradient updates \color{black}
\IF{number of iterations = target update}
\STATE \color{black}$\pi' = \pi_\theta$, $Q' = Q_\phi$ \color{gray}~~~~~~// update target networks for policy $\pi'$ and value function $Q'$ \color{black}
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{figure*}[b]
\centering
\begin{tabular}{c}
\includegraphics[trim=3mm 0 0 0, clip,width = 0.98\textwidth]{results/png/gym_wppo_main.png}
\end{tabular}
\caption{Results on OpenAI gym. Dashed black line represents DAC \citep{zhang2019dac}, dotted line represents Option-Critic \citep{bacon2017option}, solid line represents IOPG \citep{smith2018inference}, dash-dotted line represents PPO \citep{schulman2017proximal} (approximate results after $2 \times 10^{6}$ steps from \citep{zhang2019dac}). We limit the number of switches to 5 for HO2-limits. HO2~consistently performs better or on par with existing option learning algorithms. }
\label{fig:openaigym}
\end{figure*}
\subsection{Maximizing Temporal Abstraction}
\label{sec:temporal_abstraction}
Persisting with each option over longer time periods can help to reduce the search space and simplify exploration \citep{sutton1999between,harb2018waiting}.
Previous approaches (e.g. \citep{harb2018waiting}) rely on additional weighted loss terms which penalize option transitions.
In addition to the main HO2~algorithm, we introduce an extension mechanism to explicitly limit the maximum number of switches between options along a trajectory to increase temporal abstraction.
In comparison to additional loss terms, a parameter for the maximum number of switches can be chosen independently of the reward scale of an environment and provides an intuitive semantic interpretation; both aspects simplify manual adaptation.
We extend the 2D graph for computing option probabilities (Figure \ref{fig:example}) with a third dimension representing the number of switches between options. Practically, this means that we are modelling $\pi^H(o_t,n_t|h_t)$ where $n_t$ represents the number of switches until timestep $t$.
We can now marginalize over \textit{all} numbers of switches to again determine the option probabilities. Instead, to encourage option consistency across timesteps, we can sum over \textit{only a subset} of nodes, for all $n \leq N$ with $N$ smaller than the maximal number of switches, leading to $\pi^H \left(o_t| h_t\right) = \sum_{n_t=0}^N \pi^H \left(o_t, n_t| h_t\right)$.
For the first timestep, only 0 switches are possible, such that $\pi^H\left(o_0 ,n_0=0| h_0\right) = \pi^C\left(o_0 | s_0\right)$ and $0$ for all other values of $n$.
For further timesteps, all edges resulting in option terminations $\beta$ lead to the next step's option probabilities with increased number of switches $n_{t+1} = n_t +1$. All edges representing the continuation of an option lead to $n_{t+1} = n_t $.
This results in the computation of the joint distribution for $t>0$:
\begin{eqnarray}
\begin{aligned}
\tilde{\pi}^{H}\left(o_t,n_t | h_t\right) = \sum_{\substack{o_{t-1}=1,\\ n_{t-1}=0}}^{M,N} p\left(o_t,n_t|s_t, o_{t-1},n_{t-1}\right) \\ \pi^H\left(o_{t-1},n_{t-1} | h_{t-1}\right) \label{eq:dynamic2}
{\pi^L\left(a_{t-1} | s_{t-1}, o_{t-1}\right)} %
\end{aligned}
\end{eqnarray}
\noindent which can then be normalized using $\pi^H\left(o_t,n_t | h_t\right) = {\tilde{\pi}^{H}\left(o_t,n_t | h_t\right)}/ {\sum_{o'_{t}=1}^M\sum_{n'_{t}=0}^N\tilde{\pi}^{H}\left(o'_t ,n'_t| h_t\right)}$.
The option and switch index transitions $p\left(o_t,n_t|s_t, o_{t-1},n_{t-1}\right)$ are further described in Equation \ref{eq:optionswitch_transitions} in the Appendix.
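A sketch of the extended forward pass (same assumptions and naming as the earlier snippet; here we count every termination edge as a switch, which may differ in detail from the exact implementation):
\begin{verbatim}
import numpy as np

def limited_switch_forward(pi_C, beta, pi_L, max_switches):
    # alpha[t, o, n] tracks pi^H(o_t, n_t | h_t); summing only over
    # n <= max_switches encourages temporally consistent options.
    T, M = pi_C.shape
    N = max_switches + 1
    alpha = np.zeros((T, M, N))
    alpha[0, :, 0] = pi_C[0]  # zero switches at the first step
    for t in range(1, T):
        prev = alpha[t - 1] * pi_L[t - 1][:, None]
        cont = 1.0 - beta[t]                         # continue: n fixed
        term = beta[t][:, None] * pi_C[t][None, :]   # terminate: n + 1
        alpha[t] = cont[:, None] * prev
        alpha[t, :, 1:] += np.einsum('ij,in->jn', term, prev[:, :-1])
        alpha[t] /= alpha[t].sum()  # normalize over the kept nodes
    return alpha.sum(axis=2)  # pi^H(o_t | h_t) with n_t <= max_switches
\end{verbatim}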
\section{Related Work}\label{sec:related}
Hierarchy has been investigated in many forms in reinforcement learning to improve data gathering as well as data fitting aspects.
Goal-based approaches commonly define a grounded interface between high- and low-level policies. The high level acts by providing goals to the low level, which is trained to achieve these goals \citep{dayan1993feudal,levy2017learning,nachum2018near,nachum2018data,vezhnevets2017feudal}, effectively generating separate objectives and improving exploration.
These methods have been able to overcome very sparse reward domains but commonly require domain knowledge to define the interface. In addition, a hand-crafted interface can limit expressiveness of achievable behaviors.
Non-crafted, emergent interfaces within policies have been investigated from an RL-as-inference perspective via policies with continuous latent variables \citep{haarnoja2018latent,hausman2018learning,heess2016learning,igl2019multitask,tirumala2019exploiting,teh2017distral}. Related to these approaches, we provide a probabilistic inference perspective to off-policy option learning and benefit from efficient dynamic programming inference procedures.
We furthermore build on the related idea of information asymmetry \citep{pinto2017asymmetric,galashov2018information,tirumala2019exploiting} - providing a part of the observations only to a part of the model. The asymmetry can lead to an information bottleneck affecting the properties of learned low-level policies. We build on the intuition and demonstrate how option diversity can be affected in ablations in Section \ref{sec:ablations}.
At its core, our work builds on and investigates the option framework \citep{precup2000temporal,sutton1999between},
which can be seen as describing policies with an autoregressive, discrete latent space.
Option policies commonly use a high-level controller to choose from a set of options or skills. These options additionally include termination conditions to enable a skill to represent temporally extended behavior.
Without termination conditions, options can be seen as equivalent to components under a mixture distribution, and this simplified formulation has been applied successfully in different methods \citep{agostini2010reinforcement,daniel2016hierarchical,wulfmeier2019regularized}.
Recent work has also investigated temporally extended low-level behaviours of fixed length \citep{frans2018meta,li2019sub,nachum2018data}, which do not learn the option duration or termination condition.
With HO2, the ability to optimize the duration of low-level behaviour within the option framework provides additional flexibility and removes the engineering effort of choosing the right hyperparameters.
The option framework has been further extended and improved for more practical application \citep{bacon2017option,harb2018waiting,harut2019termination,NIPS2005_2767,riemer2018learning,smith2018inference}.
HO2~relies on off-policy training and treats options as latent variables. This enables backpropagation through the option inference procedure and considerable efficiency improvements in comparison to approaches relying on on-policy updates and option learning purely for executed options.
Related, IOPG \citep{smith2018inference} also considers an inference perspective but only includes on-policy results which naturally have poorer data efficiency.
Finally, the benefits of options and other modular policy styles have also been applied in the supervised case for learning from demonstration \citep{fox2017multi, krishnan2017ddco,shiarlis2018taco}.
One important step to increase the robustness of option learning has been taken in \citep{zhang2019dac} by building on robust (on-policy) policy optimization with PPO \citep{schulman2017proximal}. HO2~has similar robustness benefits, but additionally improves data-efficiency by building on off-policy learning, hindsight inference of options, and additional trust-region constraints \citep{abdolmaleki2018maximum,wulfmeier2019regularized}. Related inference procedures have also been investigated in imitation learning \citep{shiarlis2018taco} as well as on-policy RL \citep{smith2018inference}.
In addition to inferring options in hindsight, off-policy learning enables us to assign rewards for multiple tasks, which has been successfully applied with flat, non-hierarchical policies \citep{andrychowicz2017hindsight,riedmiller2018learning, cabi2017intentional} and goal-based hierarchical approaches \citep{levy2017learning,nachum2018data}.
Recently, Lewkowycz and Maldacena [1] proposed a generalization of the usual black hole entropy formula [2-5] to Euclidean solutions without a Killing vector, which we briefly describe below.
For a general quantum system the Euclidean evolution generates an un-normalized density matrix
\be
\rho = P~e^{-\int^{\tau_f}_{\tau_0} d\tau H(\tau)}
\ee
$\rm Tr[\rho]$ can be computed by considering Euclidean evolution on a circle, identifying $\tau_f=\tau_0+2\pi$. The entropy of this density matrix can be calculated by the ``replica trick" [6,7]. Namely, we can compute $\rm Tr[\rho^n]$ by considering time evolution over a circle of $n$ times the length of the original one, where $H(\tau+2\pi) = H(\tau)$, and the entropy is computed as
\be
S&=&-n\partial_n\Big[\log Z(n)-n \log Z(1)\Big]_{n=1}=-\rm Tr\Big[\hat\rho\log\hat\rho\Big]\\
Z(n)&=&\rm Tr\Big[\rho^n\Big],~~~~~\hat\rho={\rho\over \rm Tr[\rho]}
\ee
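Since $\rm Tr[\hat\rho]=1$, a short check of this formula follows from writing the normalized replica trace in terms of the partition functions:
\be
\rm Tr\Big[\hat\rho^n\Big]={Z(n)\over Z(1)^n}~~~\Rightarrow~~~S=-\rm Tr\Big[\hat\rho\log\hat\rho\Big]=-\partial_n \rm Tr\Big[\hat\rho^n\Big]\Big|_{n=1}=-n\partial_n\Big[\log Z(n)-n \log Z(1)\Big]_{n=1}
\ee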
In the gravitational context, we can consider metrics which end on a boundary and assume that the boundary has a direction with the topology of a circle.
The boundary data can depend on the position along this circle, but it respects the periodicity of the circle. We define the coordinate $\tau\sim \tau+2\pi$ on the circle, but without a U(1) symmetry. We then consider a spacetime in the interior which is smooth. Its Euclidean action is defined to be $\log Z(1)$. We can also consider other spacetimes with the same boundary data but with a new circle of period $\tau\sim \tau+2n\pi$. Their Euclidean action is defined to be $\log Z(n)$. These computations can be viewed as computing $\rm Tr[\rho^n]$ for the density matrix produced by the Euclidean evolution.
In this way, Lewkowycz and Maldacena [1] propose the generalization of the usual black hole entropy formula [2-5] to Euclidean solutions without a Killing vector. While the original paper used a complex scalar field to calculate the black hole entropy, we first investigate the case of a free real scalar field and then discuss the effects of an interacting scalar field and of the Maxwell field.
In section II we study the free and the interacting real scalar field. In section III we exactly solve the Maxwell field equation and calculate the analytic value of the generalized gravitational entropy. In section IV we use the Einstein equation to study the effect of the backreaction of the Maxwell field on the spacetime. We calculate the associated modified area law and see that it is consistent with the generalized gravitational entropy. The last section is devoted to a short conclusion.
\section {Generalized Gravitational Entropy of Interacting Real Scalar Field}
\subsection {Geometry}
We follow the original paper [1] and consider the simple BTZ geometry \footnote{This geometry is more like $\rm AdS^3$ than the BTZ spacetime.}
\be
ds^2={dr^2\over 1+r^2}+r^2d\tau^2+(1+r^2)dx^2
\ee
The above metric has a U(1) isometry along the circle labeled by $\tau\sim \tau+2\pi$. As the metric is invariant under translations in $x$, this direction will not play any role and we can take it to be compact with size $L_x$. Computing the entropy for this solution gives the standard area formula.
We now add a scalar field or Maxwell field and set boundary conditions that are not U(1) invariant by
\be
\Phi&\sim& \eta\cos(\tau)~~~~~~~~~~~~\rm~~at~~~r\rightarrow\infty\\
A_\mu&\sim& \eta\cos(\tau)~~~~~~~~~~~~\rm~~at~~~r\rightarrow\infty
\ee
For the $n^{th}$ case, we need to consider a spacetime with the same boundary
conditions as in the above equation but where $\tau\sim \tau+2n\pi$. This implies that the spacetime in the interior is [1]
\be
ds^2={dr^2\over n^{-2}+r^2}+r^2d\tau^2+(n^{-2}+r^2)dx^2
\ee
We now need to compute the gravitational action to second order in $\eta$ for the scalar or Maxwell field. The metric is changed at order $\eta^2$, but since the original background obeys the Einstein equations, there is no contribution from the gravitational term at order $\eta^2$. Thus, to this order, the whole contribution comes from the scalar or Maxwell field action evaluated in this spacetime.
\subsection {Free Real Scalar Field}
The action of real scalar field is
\be
A^{\Phi}={1\over 2}\int d^3x\sqrt g~g^{ab}\partial_a\Phi \partial_b\Phi={1\over 2}\int d^3x\partial_a[\sqrt g~g^{ab}\Phi \partial_b\Phi]-{1\over 2}\int d^3x \Phi\partial_a[\sqrt g~g^{ab} \partial_b\Phi]
\ee
The first bracket is the surface term and will contribute to the gravitational action, which is considered later. After variation, the second bracket gives the scalar field equation. As in [1] we make the following ansatz
\be
\Phi(\tau,r)&=&\eta \cos(\tau)~\phi_n(r)
\ee
then the associated differential equation of $\phi_n(r) $ becomes
\be
(n^{-2}r^2+r^4)\phi''_n(r)+(n^{-2}r+3r^3)\phi'_n(r)-\phi_n(r)=0
\ee
which is just the differential equation for the complex scalar field investigated in the original paper [1]. The two independent solutions, normalized by $\phi_n(\infty)=1$, are
\be
h_n(r)&=&{n^nr^n \Gamma\Big(1+{n\over 2}\Big)^2\over \Gamma(1+n)}~_2F_1\Big(1+{n\over 2},{n\over 2},1+n,-n^2r^2\Big)\\
k_n(r)&=&{n^{-n}r^{-n} \Gamma\Big(1-{n\over 2}\Big)^2 \over \Gamma(1-n)}~_2F_1\Big(1-{n\over 2},-{n\over 2},1-n,-n^2r^2\Big)
\ee
The first solution, which is regular at the origin, will be used in this section to find the associated gravitational action. The second solution, which is divergent at the origin, plays a role in the next subsection in studying the interacting theory.
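The regularity and normalization of $h_n(r)$ can be verified numerically, for instance with the following short script (our own check, not taken from [1]):
\begin{verbatim}
import mpmath as mp

def h_n(r, n):
    # Regular solution, normalized so that h_n(r) -> 1 as r -> oo.
    pref = (n * r)**n * mp.gamma(1 + n / 2)**2 / mp.gamma(1 + n)
    return pref * mp.hyp2f1(1 + n / 2, n / 2, 1 + n, -(n * r)**2)

n, r = mp.mpf(3) / 2, mp.mpf(2)
# Residual of (n^-2 r^2 + r^4) h'' + (n^-2 r + 3 r^3) h' - h = 0:
res = ((r**2 / n**2 + r**4) * mp.diff(lambda x: h_n(x, n), r, 2)
       + (r / n**2 + 3 * r**3) * mp.diff(lambda x: h_n(x, n), r)
       - h_n(r, n))
print(res)                   # ~ 0 up to numerical precision
print(h_n(mp.mpf(1e6), n))   # approaches 1 at large r
\end{verbatim}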
Note that the action, as we analyzed before, has two parts. The second bracket becomes zero after putting the field on-shell and only the first bracket, the surface term, remains. Thus the classical on-shell action becomes
\be
A_{\rm on-shell}^{\Phi}&=&{1\over 2}\int d^3x\partial_a[\sqrt g~g^{ab}\Phi \partial_b\Phi]={1\over 2}\int d^3x \Big[\partial_\tau(\sqrt g~g^{\tau\tau}\Phi \partial_\tau\Phi)+\partial_r(\sqrt g~g^{rr}\Phi \partial_r\Phi)\Big]\nn\\
&=&-{\eta^2\over 2}\int d^3x\Big[\partial_t(\cos(\tau)\sin(\tau))\Big(\sqrt g~g^{\tau\tau}h_n(r)^2\Big)\nn\\
&&~~~+\cos^2(\tau)\partial_r\Big(h_n(r)\sqrt g~g^{rr}\partial_rh_n(r)\Big)\Big]\nn\\
&=&{\eta^2\over 2} \int_0^{L_x} dx~\int_0^{2n\pi} d\tau \cos^2(\tau)~\Big(h_n(r)\sqrt g~g^{rr}\partial_rh_n(r)\Big)_{r\rightarrow\infty}\nn\\
&=&{n\pi L_x\over 2}~\eta^2 \Big(h_n(r)\sqrt g~g^{rr}\partial_rh_n(r)\Big)_{r\rightarrow\infty}
\ee
Substituting the solution $h_n(r)$ we find that
\be
\log Z^{\Phi}(n)=A_{\rm on-shell}^{\Phi}&=&\pi L_x~\eta^2\left(-2 n (\log (n)+\log (r))+2 n \left(\psi ^{(0)}\left(\frac{n}{2}\right)+\gamma \right)+2\right)
+ {\cal O} \Big({1\over r}\Big)\nn\\
\ee
The terms linear in $n$ include divergent terms that should be subtracted [1]. However, they do not contribute to the entropy.
The associated generalized gravitational entropy becomes
\be
S^{\Phi}_{GGE}&=&-n\partial_n\Big[\log Z(n)-n \log Z(1)\Big]_{n=1}={\pi L_x \over 4}~\eta^2\Big(8-\pi^2\Big)
\ee
which is exactly half the value for the complex scalar field calculated in [1], as expected.
\subsection {Interacting Real Scalar Field}
The action of the interacting real scalar field is
\be
A&=&{1\over 2}\int d^3x \sqrt g~\Big[g^{ab}\partial_a\Phi \partial_b\Phi+{\lambda\over 4!}\Phi^4\Big]\nn\\
&=&{1\over 2}\int d^3x \partial_a[\sqrt g~g^{ab}\Phi \partial_b\Phi]-{1\over 2}\int d^3x~ \Big[\Phi\partial_a(\sqrt g~g^{ab} \partial_b\Phi)-{\lambda\over 4!}\sqrt g~\Phi^4\Big]
\ee
The first integration is the surface term and will contribute to the gravitational action. After substituting the previous ansatz $\Phi(\tau,r)=\eta \cos(\tau)~F_n(r)$ into the second integration, it becomes
\be
(n\pi L_x\eta^2)\Big[\int dr~{1\over 2}F_n(r) \sqrt g~g^{\tau\tau}F_n(r)- F_n(r)\partial_r[\sqrt g~g^{rr}\partial_rF_n(r)]\nn\\
+\int dr~{3\over 4}~{\lambda\eta^2\over 4!}\sqrt g~F_n(r)^4\Big]
\ee
in which ${3\over 4}$ comes from the integration of $\cos(\tau)^4$ in ${\lambda\over 4!}\Phi^4$. The variation with respect to $F_n(r)$ gives the differential equation
\be
(n^{-2}r^2+r^4)F''_n(r)+(n^{-2}r+3r^3)F'_n(r)-F_n(r)={\lambda\eta^2\over 8}F^3_n(r)
\ee
which cannot be solved exactly. Therefore, we resort to perturbation theory for small coupling $\lambda\eta^2$. In this case we expand $F_n=h_n+\lambda \eta^2s_n$, in which $h_n$ is the solution found in the free case and $s_n$ satisfies the differential equation
\be
s''_n(r)+{n^{-2}r+3r^3\over n^{-2}r^2+r^4}~s'_n(r)-{1\over n^{-2}r^2+r^4}~s_n(r)={h^3_n(r)\over 8(n^{-2}r^2+r^4)}
\ee
The above differential equation can be solved by the standard method of variation of parameters. As the associated homogeneous differential equation has two exact solutions ($k_n(r)$ and $h_n(r)$), the general solution is
\be
s_n(r)=\alpha~k_n(r)+\beta~h_n(r) + P_n(r) k_n(r)+Q_n(r) h_n(r)
\ee
in which $\alpha$ and $\beta$ are arbitrary constants and
\be
P_n&=&\int dr{- h_n(r) \over W(k_n(r),h_n(r))}~{h^3_n(r)\over 8(n^{-2}r^2+r^4)}
\\
Q_n&=&\int dr{ k_n(r) \over W(k_n(r),h_n(r))}~{h^3_n(r)\over 8(n^{-2}r^2+r^4)}
\ee
where $W(k_n(r),h_n(r))$ is the Wronskian of $k_n(r), h_n(r)$ defined by
\be
W(k_n(r),h_n(r))&=&\left| {\begin{array}{cc}
k_n(r)& h_n(r) \\
k_n'(r)& h_n'(r)\\
\end{array} } \right|
\ee
While we could not find the exact functional forms of $P_n$ and $Q_n$, we make the following comments:
1. Conventionally, we can ignore the integration constants in the above integrations, since including them would merely regenerate terms already present in the homogeneous solution. However, in our case we shall take $\alpha=0$ and $\beta=1$ to have a solution regular at $r=0$.
2. Although the solution $k_n(r)$ diverges at $r=0$, the particular-solution coefficients $P_n$ and $Q_n$, which involve $k_n(r)$ in their definitions, are finite at $r=0$. We have checked this property by numerical evaluation (see the quadrature sketch below). Thus we shall keep both solutions in the following study.
3. We have to find the large-$r$ behaviors of the particular-solution coefficients $P_n(r)$ and $Q_n(r)$ in order to calculate the generalized gravitational entropy. By expanding about $r=\infty$ it is easy to find that
\be
P_n(r)&\approx&{1\over 12 \pi n}\tan \Big(\frac{\pi n}{2}\Big) \Big(-2 \log (r) \Big(n\Big(\log \Big(n^2\Big)+\log (r)\Big)-2 \gamma n+n-1\Big)+n r^2\nn\\
&&+n \Big(\psi ^{(0)}\Big(-\frac{n}{2}\Big)+3 \psi ^{(0)}\Big(\frac{n}{2}\Big)\Big) \log (r)\Big)\\
Q_n(r)&\approx&{1\over 12 \pi n} \tan \Big(\frac{\pi n}{2}\Big) \Big(-2 \log (r) \Big(n\Big(\log \Big(n^2\Big)+\log (r)\Big)+n-2\Big)+n r^2\nn\\
&&+4 n \Big(\psi ^{(0)}\Big(\frac{n}{2}\Big)+\gamma \Big) \log (r)\Big)
\ee
Because $P_n$ and $Q_n$ are divergent as $r\rightarrow\infty$, we have to introduce a cutoff $\Lambda$ and perform the integration from $\Lambda$ to large $r$ to define precisely the values of $P_n$ and $Q_n$ at large $r$. In this way the solution is normalized to $\Phi (\infty)=1$ as in the free case.
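As an illustration of the numerical evaluation mentioned in comment 2, the integral defining $P_n$ can be evaluated by direct quadrature (a sketch continuing the script of the previous subsection; the lower limit $r_0$ plays the role of the dropped integration constant):
\begin{verbatim}
def k_n(r, n):
    # Second solution of the homogeneous equation (divergent at r=0).
    pref = (n * r)**(-n) * mp.gamma(1 - n / 2)**2 / mp.gamma(1 - n)
    return pref * mp.hyp2f1(1 - n / 2, -n / 2, 1 - n, -(n * r)**2)

def wronskian(r, n):
    # W(k_n, h_n) = k_n h_n' - h_n k_n', by numerical differentiation.
    return (k_n(r, n) * mp.diff(lambda x: h_n(x, n), r)
            - h_n(r, n) * mp.diff(lambda x: k_n(x, n), r))

def P_n(r, n, r0=mp.mpf('1e-8')):
    # P_n = int dr [-h_n / W] * h_n^3 / (8 (n^-2 r^2 + r^4))
    f = lambda x: (-h_n(x, n)**4
                   / (8 * (x**2 / n**2 + x**4) * wronskian(x, n)))
    return mp.quad(f, [r0, r])

print(P_n(mp.mpf('0.01'), mp.mpf(3) / 2))  # stays finite near r = 0
\end{verbatim}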
Now, we can substitute the above solutions into the classical on-shell action, and the associated generalized gravitational entropy becomes
\be
S^{\lambda\Phi}_{GGE}&=&-n\partial_n\Big[\log Z(n)-n \log Z(1)\Big]_{n=1}\approx \pi L_x ~\Big(8-\pi^2\Big) \Big({\eta^2\over 4}-{\lambda\eta^4\over 64}\Big)\ee
to first order in the small parameter $\lambda\eta^4$.
Note that the above result is valid at order $\eta^4$, which is a higher order than initially expected, and at this order the purely gravitational part of the action should also contribute. We can, however, consider the regime in which the next-to-leading gravitational part is smaller than the interacting scalar field part, which is in turn smaller than the leading gravitational part, i.e. $\eta^4<\lambda \eta^4<\eta^2$; then the above result is consistent without the further gravitational part. Notice that in this case, as $\eta^4<\lambda\eta^4$, the coupling $\lambda$ is larger than 1 while $\eta < 1$.
\subsection {Backreaction of Interacting Scalar Field on the Area Law}
Now we will study the backreaction of the interacting scalar field on the metric. Note that the backreaction of the free scalar field was treated in the original paper [1]. Following it, we can easily check that the formula derived in [1]
\be
S^{\Phi}|_{\eta^2}&=&4\pi (\delta A_0+\delta A_{\lambda\eta^2})=4\pi \delta A_0+S^{\Phi}|_{\lambda\eta^2}= -\lambda\eta^2~2\pi L_x \int_0^\infty dr r(\partial_r\Phi)^2
\ee
can also be used in the interacting scalar system. The black hole entropy is $S = 4\pi A = 4\pi (A_0 +\delta A_0+\delta A_{\lambda\eta^2})$, where $4\pi \delta A_0$ is the deformation due to the free scalar field, which was studied in [1], and $4\pi \delta A_{\lambda\eta^2}$ is the deformation due to the interacting scalar field.
The field $\Phi$ in the above equation can be found by the standard method of variation of parameters as described in the previous subsection. Now we need to find the $n=1$ solutions associated to $h_n(r)$ and $k_n(r)$ in (2.8) and (2.9). To avoid the singularity of the factor $~_2F_1\left(1-\frac{n}{2},-\frac{n}{2};1-n;-n^2 r^2\right)$ in $k_n(r)$ in the limit $n=1$ we can use the following identity
\be
(1-n)\,_2F_1\left(1-\frac{n}{2},-\frac{n}{2};1-n;-n^2 r^2\right) = \left(1-\frac{n}{2}\right)^2 \, _2F_1\left(2-\frac{n}{2},-\frac{n}{2};2-n;-n^2 r^2\right)\nn\\
-\frac{n^2}{4} \, _2F_1\left(1-\frac{n}{2},1-\frac{n}{2};2-n;-n^2 r^2\right)
\ee
to find $k_1(r)$. Then we see that $h_1(r)=k_1(r)$, although $h_n(r)\ne k_n(r)$ for $n\ne 1$. In fact, for the case $n=1$ we can find the following exact solutions
\be
h_1(r)&=&\frac{\pi r}{4} \, _2F_1\left(\frac{1}{2},\frac{3}{2};2;-r^2\right)\\
\ell_1(r)&=&-\frac{i ~{\bf E}\left(r^2+1\right)}{r}=h_1(r) +~ i~F(r)
\ee
where ${\bf E}(r)$ is the complete elliptic integral of the second kind. Note that the functions $h_1(r)$ and $F(r)$ are real, with $h_1(\infty)=\ell_1(\infty)=1$ and $F(\infty)=0$. Therefore, besides the function $\ell_1(r)$ we can choose $h_1(r) + \alpha~F(r)$ as another real solution, and both of them satisfy the boundary condition of approaching 1 at $r=\infty$. However, we now have an arbitrary parameter $\alpha$. One could, for example, set a boundary condition at $r=10$ by requiring the solution to approach 0.1 there, thereby determining the parameter $\alpha$; however, this would introduce another arbitrary parameter into the analytic calculation of the previous subsection.
Therefore, at present we cannot verify the property that the backreaction of the interacting scalar field on the horizon area equals the associated generalized gravitational entropy, which was shown for the free scalar field in [1]. In the next section we show that the backreaction of the Maxwell field on the horizon area is exactly equal to the associated generalized gravitational entropy.
\section {Generalized Gravitational Entropy of Maxwell Field}
The conventional action of the Maxwell field is
\be
A=-{1\over 4}\int d^3x\sqrt g~F_{ab}F_{cd}g^{ac}g^{bd}
\ee
We choose the gauge $A_0=0$ and, as in [1], we assume that the Maxwell field is independent of the coordinate $x$. Then the action becomes
\be
A&=&-{1\over 2}\int d^3x\sqrt g~\Big(g^{\tau\tau}g^{xx}\Big(\partial_tA_x\Big)^2+g^{rr}g^{xx}\Big(\partial_rA_x\Big)^2+g^{\tau\tau}g^{rr}\Big(\partial_tA_r\Big)^2\Big)\\
&=&-{1\over 2}\int d^3x \Big[\partial_t\Big(A_x\sqrt g~g^{\tau\tau}g^{xx}\partial_tA_x\Big)+\partial_r\Big(A_x\sqrt g~g^{rr}g^{xx}\partial_rA_x\Big)\nn\\
&& + \partial_t\Big(A_r\sqrt g~g^{\tau\tau}g^{rr}\partial_tA_r\Big)\Big] -\Big[A_x \partial_t\Big(\sqrt g~g^{\tau\tau}g^{xx}\partial_tA_x\Big)\nn\\
&&+A_x \partial_r\Big(\sqrt g~g^{rr}g^{xx}\partial_rA_x\Big)+ A_r \partial_t\Big(\sqrt g~g^{\tau\tau}g^{rr}\partial_tA_r\Big)\Big]
\ee
The first bracket is the surface term and will contribute to the gravitational action, which is considered later. After variation, the second bracket gives the field equations (as in [1], the Maxwell field is assumed independent of $x$)
\be
0&=&\partial_\tau\Big(\sqrt g~g^{\tau\tau}g^{rr}\partial_\tau A_r\Big)
\\
0&=&\partial_\tau\Big(\sqrt g~g^{\tau\tau}g^{xx}\partial_\tau A_x\Big)+\partial_r\Big(\sqrt g~g^{rr}g^{xx}\partial_rA_x\Big)
\ee
If we seek a solution of the form
\be
A_r(\tau,r)&=&\eta_r \cos(\tau)~w_n(r)\\
A_x(\tau,r)&=&\eta~ \cos(\tau)~f_n(r)
\ee
then the field equations imply $w_n(r)=0$, while $f_n(r)$ is described by
\be
rf''_n(r)+f'_n(r)-{1\over r(n^{-2}+r^2)}f_n(r)=0
\ee
The solution which is regular at the origin is
\be
f_n(r)=n^nr^n~_2F_1\Big({n\over 2},{n\over 2};1+n;-n^2r^2\Big)
\ee
As in the real scalar field case, we normalize the solution so that the coefficient of the leading asymptotic term is 1. The normalized solution then becomes
\be
f_n(r)={n^nr^n \Gamma\Big({n\over 2}\Big)\Gamma\Big(1+{n\over 2}\Big)\over 2 n\Gamma(n)}~_2F_1\Big({n\over 2},{n\over 2};1+n;-n^2r^2\Big)
\ee
Using this solution we now calculate the gravitational action.
The action analyzed above has two parts. The second bracket vanishes once the field is put on-shell, and we are left with only the first bracket, the surface term. Thus the classical on-shell action becomes
\be
A_{\rm on-shell}&=&-{1\over 2}\int d^3x \Big[\partial_\tau\Big(A_x\sqrt g~g^{\tau\tau}g^{xx}\partial_\tau A_x\Big)+\partial_r\Big(A_x\sqrt g~g^{rr}g^{xx}\partial_rA_x\Big)\Big]\nn\\
&=&-{\eta^2\over 2}\int d^3x\Big[\partial_\tau\Big(-\cos(\tau)\sin(\tau)\Big)\Big(\sqrt g~g^{\tau\tau}g^{xx}f_n(r)^2\Big)\nn\\
&&~~~+\cos^2(\tau)\partial_r\Big(f_n(r)\sqrt g~g^{rr}g^{xx}\partial_rf_n(r)\Big)\Big]\nn\\
&=&-{\eta^2\over 2} \int_0^{L_x} dx~\int_0^{2n\pi} d\tau \cos^2(\tau)~\Big(f_n(r)\sqrt g~g^{rr}g^{xx}\partial_rf_n(r)\Big)_{r\rightarrow\infty}\nn\\
&=&-{n\pi L_x\over 2}~\eta^2 \Big(f_n(r)\sqrt g~g^{rr}g^{xx}\partial_rf_n(r)\Big)_{r\rightarrow\infty}
\ee
Substituting the found solution $f_n(r)$ we obtain
\be
\log Z(n)&=&A_{\rm on-shell}=\pi L_x ~\eta^2\left(-2 n (\log (n)+\log (r))+2 n \left(\psi^{(0)}\left(\frac{n}{2}\right)+\gamma \right)+2\right)+ {\cal O} \Big({1\over r}\Big)\nn\\
\ee
The terms linear in $n$ include divergent terms that should be subtracted [1]. However, they do not contribute to the entropy.
The associated generalized gravitational entropy becomes
\be
S_{GGE}&=&-n\partial_n\Big[\log Z(n)-n \log Z(1)\Big]_{n=1}={\pi L_x \over 4}~\eta^2\Big(8-\pi^2\Big)
\ee
which is exactly the value obtained in the real scalar field case. The physical reason behind this coincidence is that in 3D the Maxwell field is dual to a real scalar field; therefore the contribution of the electromagnetic field is just that of a real scalar field in 3D.
Notice that only the asymptotic values of $\log Z(n)$ are the same for the real scalar field and the Maxwell field; the ${\cal O} \Big({1\over r}\Big)$ terms in the two $\log Z(n)$ differ.
\section {Backreaction of the Maxwell Field on the Area Law}
Now we will study the backreaction of the Maxwell field on the metric. The action is
\be
-S=\int d^3x~\sqrt g~\Big[R-\Lambda -{1\over 4}F_{\mu\nu}F^{\mu\nu}\Big]
\ee
with $\Lambda=2$. The Einstein equations are
\be
G_{\mu\nu}\equiv R_{\mu\nu}-{g_{\mu\nu}\over 2}\Big(R-2\Big)=T_{\mu\nu}
\ee
where
\be
T_{\mu\nu}={1\over 4}g_{\mu\nu} F_{ab}F^{ab}-F_{\mu}^{~a}F_{a\nu}
\ee
Using the property shown in the previous section that only the component $A_x$ remains in the Coulomb gauge, we find that
\be
T_x^x&=&{1\over 4} F_{ab}F^{ab}-F^{xa}F_{ax}\\
T_r^r&=&{1\over 4} F_{ab}F^{ab}-F^{ra}F_{ar}=T_x^x-(\partial^\tau A^x)(\partial_\tau A_x)\nn\\
&=& T_x^x-{1\over r^2(1+r^2)}(\partial_\tau A_x)^2
\ee
To proceed, note that the backreaction of the Maxwell field will modify the BTZ metric, which can be written as [1]
\be
ds^2 ={dr^2\over 1+r^2+g(r)}+r^2d\tau^2+(1+r^2)(1+v(r))dx^2
\ee
To leading order in the perturbation (small $g(r)$ and $v(r)$),
\be
G_x^x&=&{ g'(r)\over 2r}\\
G_r^r&=&{g(r)\over 1+r^2}+{(1+r^2)v'(r)\over 2r}
\ee
Therefore, using the Einstein equations we find that
\be
v'(r)=\Big({g(r)\over 1+r^2}\Big)'+{2\over r(1+r^2)^2}(\partial_\tau A_x)^2
\ee
There is a subtlety here: the functions $g(r)$ and $v(r)$ are independent of $\tau$, while the time dependence of $A_x(\tau ,r)$ is $\cos(\tau)$, as used in the previous section. In this case we can first perform the integration over the oscillatory factor in $(\partial_\tau A_x)^2$ in the action. This averaging gives ${1\over 2\pi}\int_0^{2\pi}d\tau \sin^2(\tau)={1\over 2}$ and, consequently, we can replace $(\partial_\tau A_x)^2$ above by ${1\over 2}A_x^2$, with $A_x$ now denoting the static profile.
Therefore, using $v(\infty)=0$ together with the properties that $g(0) = 0$, due to the regularity condition for the metric at the origin, and $g(r)/r^2\rightarrow 0$ at infinity, we find that
\be
v(0)&=&-\int_0^\infty dr {1\over r(1+r^2)^2}(A_x)^2
\ee
which implies that, to leading order in $\eta^2$, the deformation of the horizon area $\delta A$ is
\be
S|_{\eta^2}&=&4\pi \delta A= -\eta^2~2\pi L_x \int_0^\infty dr {1\over r(1+r^2)^2}(A_x)^2\approx -1.46838~\eta^2~L_x
\ee
after substituting the exact solution for $A_x$ (with $n=1$) found in the previous section.
We make two comments about our results.
1. The black hole entropy formula is $S = 4\pi A = 4\pi A_0(1 + v(0))=4\pi (A_0 +\delta A)$, and $S|_{\eta^2}$ is the part of the deformation due to the Maxwell field.
2. The generalized gravitational entropy calculated in the previous section is $S_{GGE}={\pi L_x \over 4}~\eta^2\Big(8-\pi^2\Big)= -1.46838~\eta^2~L_x$. We thus see that $S_{GGE}=S|_{\eta^2}$, which shows that the backreaction of the Maxwell field on the horizon area is equal to the associated generalized gravitational entropy.
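For reference, the numerical coefficient quoted above follows directly from the closed form:
\begin{verbatim}
from math import pi

# coefficient of eta^2 L_x in S_GGE = (pi/4)(8 - pi^2) eta^2 L_x
print(pi/4*(8 - pi**2))    # -1.46838...
\end{verbatim}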
\section {Conclusion}
After Lewkowycz and Maldacena [1] proposed the generalization of the usual black hole entropy formula [2-5], several authors studied the generalized gravitational entropy in higher-derivative gravity [8-14]. In this paper we focus on a fundamental property of the generalized gravitational entropy. We first investigate the case of a free real scalar field and then discuss the effect of an interacting scalar field. Next, we investigate the Maxwell field system. We are able to solve the wave equation exactly and calculate the analytic value of the generalized gravitational entropy in the Coulomb gauge. We also use the Einstein equation to find the effect of the backreaction of the Maxwell field on the geometry. The associated modified area law is consistent with the generalized gravitational entropy. We have presented a possible way to calculate the generalized gravitational entropy of the interacting scalar field and Maxwell field systems in the Coulomb gauge, and explicitly shown that the effect of the backreaction of the Maxwell field on the horizon area equals the generalized gravitational entropy.
Note that in this paper we adopt the Coulomb gauge because the associated solution can be found easily. However, it is important to check whether the property also holds in a general gauge. We should also study the property in more realistic geometries to confirm this property of the generalized gravitational entropy. These problems are under investigation.
\\
\begin{center} {\bf REFERENCES}\end{center}
\begin{enumerate}
\item A. Lewkowycz and J. Maldacena, ``Generalized gravitational entropy,'' JHEP 08 (2013) 090 [arXiv:1304.4926 [hep-th]].
\item J. D. Bekenstein, ``Black holes and entropy,'' Phys. Rev. D 7, 2333 (1973).
\item J. M. Bardeen, B. Carter and S. W. Hawking, ``The four laws of black hole mechanics,'' Commun. Math. Phys. 31, 161 (1973).
\item S. W. Hawking, ``Particle creation by black holes,'' Commun. Math. Phys. 43, 199 (1975) [Erratum-ibid. 46, 206 (1976)].
\item G. W. Gibbons and S. W. Hawking, ``Action integrals and partition functions in quantum gravity,'' Phys. Rev. D 15, 2752 (1977).
\item C. G. Callan, Jr. and F. Wilczek, ``On geometric entropy,'' Phys. Lett. B 333 (1994) 55 [arXiv:hep-th/9401072].
\item S. N. Solodukhin, ``Entanglement entropy of black holes,'' Living Rev. Relativity 14 (2011) 8 [arXiv:1104.3712 [hep-th]].
\item A. Bhattacharyya, A. Kaviraj and A. Sinha, ``Entanglement entropy in higher derivative holography,'' JHEP 1308 (2013) 012 [arXiv:1305.6694 [hep-th]].
\item B. Chen and J.-J. Zhang, ``Note on generalized gravitational entropy in Lovelock gravity,'' JHEP 07 (2013) 185 [arXiv:1305.6767 [hep-th]].
\item D. V. Fursaev, A. Patrushev and S. N. Solodukhin, ``Distributional geometry of squashed cones,'' Phys. Rev. D 88 (2013) 044054 [arXiv:1306.4000 [hep-th]].
\item A. Bhattacharyya, M. Sharma and A. Sinha, ``On generalized gravitational entropy, squashed cones and holography,'' JHEP 1401 (2014) 021 [arXiv:1308.5748 [hep-th]].
\item X. Dong, ``Holographic entanglement entropy for general higher derivative gravity,'' JHEP 1401 (2014) 044 [arXiv:1310.5713 [hep-th]].
\item J. Camps, ``Generalized entropy and higher derivative gravity,'' JHEP 1403 (2014) 070 [arXiv:1310.6659 [hep-th]].
\item A. Bhattacharyya and M. Sharma, ``On entanglement entropy functionals in higher derivative gravity theories,'' arXiv:1405.3511 [hep-th].
\end{enumerate}
\end{document}
\section{Introduction}
Although some aspects of the QCD phase structure have been revealed in the past years, still many things are unknown and require more thorough
investigation. Regarding the chiral and deconfinement transition, we have learned from lattice QCD data that for small baryochemical potentials,
the transition from a hadron gas to a quark-gluon plasma is a smooth crossover rather than a phase transition \cite{Aoki:2006we,Borsanyi:2010bp}.
A critical endpoint (CEP) and first-order phase transition are expected from effective low-energy models \cite{Scavenius:2000qd,Schaefer:2007pw,Fukushima:2008wg,Herbst:2010rf}
and a Dyson-Schwinger approach \cite{Fischer:2014ata}, though any irrefutable proof is still missing. Experimentally, the dip in the recent measurement of
directed flow as a function of beam energy by STAR \cite{Adamczyk:2014ipa} is sometimes argued to result from the presence of a phase transition.
The measurement of proton number fluctuations during the beam energy scan program has revealed some non-monotonic behavior in higher moments
such as the kurtosis \cite{Adamczyk:2013dal}. The idea
hereby is that moments of fluctuations of conserved quantities are sensitive to a CEP as they scale with some powers of the correlation
length \cite{Stephanov:2008qz}. However, these predictions rely on the assumption of thermodynamic equilibrium and to understand the observables we
need to develop models which are able to describe the full nonequilibrium evolution of the hot and dense matter created in a heavy-ion collision.
Effects like critical slowing down near a CEP \cite{Berdnikov:1999ph} and spinodal decomposition at a first-order phase transition
\cite{Mishustin:1998eq,Randrup:2010ax} might influence the obtained signal. Our ansatz for such a model is to couple an ideal fluid of quarks and gluons to the explicit
propagation of the relevant order parameters and additionally take into account friction and stochastic fluctuations \cite{Nahrgang:2011mg}.
We have shown that this model successfully describes supercooling and critical slowing down \cite{Nahrgang:2011mv,Herold:2013bi} as well as
domain formation due to spinodal dynamics \cite{Herold201414}. The goal of this work is to estimate the impact of nonequilibrium effects on an enhancement
of net-baryon number fluctuations at a CEP and first-order phase transition \cite{Herold:2014zoa}.
\section{Nonequilibrium chiral fluid dynamics}
\label{sec:model}
We derive the coupled dynamics of order parameters and quark-gluon fluid from a linear sigma model with dilatons \cite{Sasaki:2011sd}
\begin{eqnarray}
\label{eq:Lagrangian}
{\cal L}&=&\overline{q}\left(i \gamma^\mu \partial_\mu-g_{\rm q} \sigma\right)q + \frac{1}{2}\left(\partial_\mu\sigma\right)^2
+ \frac{1}{2}\left(\partial_\mu\chi\right)^2 + {\cal L}_A- U_{\sigma}-U_{\chi}~.
\end{eqnarray}
It describes the chiral dynamics of the light quarks $q=(u,d)$ coupled to the condensate $\sigma\sim\langle\bar q q\rangle$ which melts at
high temperatures, thus restoring chiral symmetry. Additionally, gluons are incorporated with a constituent gluon field $A$ whose mass is generated
by the glueball condensate $\langle\chi\rangle\sim\langle A_{\mu\nu}A^{\mu\nu}\rangle$, the so-called dilaton field. The quark-meson coupling
$g_{\rm q}$ is fixed to a value of $3.37$ to reproduce the vacuum nucleon mass. Fixing the mass of the sigma meson at $900$~MeV results in a phase
diagram with a CEP at temperature $T_{\rm CEP}=89$~MeV and quark chemical potential $\mu_{\rm CEP}=329$~MeV, with a first-order phase transition
for larger chemical potentials.
The mean-field effective thermodynamic potential is given by
\begin{equation}
V_{\rm eff}=\Omega_{q\bar q}+\Omega_{A}+U_{\sigma}+U_{\chi}+\Omega_0~,
\end{equation}
with the quark and gluon contributions
\begin{eqnarray}
\Omega_{\rm q\bar q}&=&-2 N_f N_c T\int\frac{\mathrm d^3 p}{(2\pi)^3} \left\{\ln\left[1+\mathrm e^{-\frac{E_{\rm q}-\mu}{T}}\right]+\ln\left[1+\mathrm e^{-\frac{E_{\rm q}+\mu}{T}}\right]\right\}~, \\
\Omega_{A}&=&2 (N_c^2-1) T\int\frac{\mathrm d^3 p}{(2\pi)^3} \left\{\ln\left[1-\mathrm e^{-\frac{E_A}{T}}\right]\right\}~,
\end{eqnarray}
Here, $E_{\rm q}=\sqrt{p^2+m_{\rm q}^2}$ and $E_A=\sqrt{p^2+m_A^2}$ denote the quasiparticle energies of constituent quarks and gluons, respectively.
Note that we use the quark chemical potential $\mu=\mu_{\rm q}$. The constant $\Omega_0$ is added to set the total energy in vacuum to zero. The
pressure of the ideal quark-gluon fluid is given by $p = -\Omega_{q\bar q}-\Omega_{A}$.
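For orientation, these momentum integrals are straightforward to evaluate by numerical quadrature; a minimal Python sketch of the fluid pressure $p=-\Omega_{q\bar q}-\Omega_{A}$ follows, with the quasiparticle masses entered as placeholder values (in the model $m_{\rm q}$ and $m_A$ are generated by the $\sigma$ and dilaton fields):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def omega_qqbar(T, mu, m_q, Nf=2, Nc=3):
    # quark contribution; d^3p/(2 pi)^3 -> p^2 dp/(2 pi^2) after angles
    E = lambda p: np.sqrt(p**2 + m_q**2)
    f = lambda p: p**2*(np.log1p(np.exp(-(E(p) - mu)/T))
                        + np.log1p(np.exp(-(E(p) + mu)/T)))
    val, _ = quad(f, 0.0, 50.0*T + 10.0*m_q)  # integrand decays exponentially
    return -2.0*Nf*Nc*T*val/(2.0*np.pi**2)

def omega_A(T, m_A, Nc=3):
    E = lambda p: np.sqrt(p**2 + m_A**2)
    f = lambda p: p**2*np.log1p(-np.exp(-E(p)/T))
    val, _ = quad(f, 0.0, 50.0*T + 10.0*m_A)
    return 2.0*(Nc**2 - 1)*T*val/(2.0*np.pi**2)

# pressure p = -Omega_qqbar - Omega_A; the masses below are placeholders
T, mu = 150.0, 0.0                     # MeV
print(-omega_qqbar(T, mu, m_q=300.0) - omega_A(T, m_A=800.0))
\end{verbatim}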
For the dynamics of the sigma field we adapt the result of \cite{Nahrgang:2011mg} to our model, resulting in a Langevin equation containing
a dissipative term $\eta_{\sigma}(T)\partial_t \sigma$, due to the possible decay of a sigma into a quark-antiquark pair, and a stochastic noise field
$\xi_{\sigma}$ representing the back reaction of the fluid,
\begin{equation}
\label{eq:eomsigma}
\partial_\mu\partial^\mu\sigma+\eta_{\sigma}(T)\partial_t \sigma+\frac{\delta V_{\rm eff}}{\delta\sigma}=\xi_{\sigma}~.
\end{equation}
For kinematic reasons, no dissipative processes from interactions between quarks and the dilaton field are possible; therefore, we use
the classical Euler-Lagrange equation of motion for the dynamics of $\chi$,
\begin{equation}
\label{eq:eomchi}
\partial_\mu\partial^\mu\chi+\frac{\delta V_{\rm eff}}{\delta\chi}=0~.
\end{equation}
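For illustration, a minimal finite-difference sketch of one update step of the Langevin equation \eqref{eq:eomsigma} on a periodic grid is given below; the damping coefficient is simplified to a constant and the noise is assumed Gaussian and white, with its variance fixed by the dissipation-fluctuation relation (both simplifying assumptions):
\begin{verbatim}
import numpy as np

def langevin_step(sigma, pi, dV_dsigma, eta, T, dt, dx):
    # one semi-implicit update; pi = d(sigma)/dt, sigma and pi are 3D
    # arrays on a periodic grid
    lap = sum(np.roll(sigma, s, axis=a) for a in range(3) for s in (1, -1))
    lap = (lap - 6.0*sigma)/dx**2
    # white Gaussian noise; variance 2*eta*T/(dt*dx^3) per grid point is
    # an assumption following the dissipation-fluctuation relation
    xi = np.sqrt(2.0*eta*T/(dt*dx**3))*np.random.randn(*sigma.shape)
    # semi-implicit treatment of the damping term for numerical stability
    pi_new = ((1.0 - 0.5*eta*dt)*pi
              + dt*(lap - dV_dsigma(sigma) + xi))/(1.0 + 0.5*eta*dt)
    return sigma + dt*pi_new, pi_new
\end{verbatim}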
Energy-momentum and net-baryon number are conserved through the equations
\begin{eqnarray}
\label{eq:fluidT}
\partial_\mu T^{\mu\nu}&=&-\partial_\mu\left(T_\sigma^{\mu\nu}+T_\chi^{\mu\nu}\right)~,\\
\label{eq:fluidN}
\partial_\mu N_{\rm q}^{\mu}&=&0~,
\end{eqnarray}
which together with the equation of state from the mean-field pressure determine the evolution of the fluid.
As can be seen, we allow for a direct transfer of energy-momentum from the order parameters to the quark-gluon fluid through the stochastic
source terms $T_\sigma$ and $T_\chi$.
\section{Baryon number Fluctuations}
\label{sec:fluc}
Fluctuations of conserved quantities play an important role in identifying the location of a phase transition. They can be studied on several
levels. For effective models, we may calculate for instance quark or baryon number susceptibilities according to the formula
\begin{equation}
c_n = \frac{\partial^n(p/T^4)}{\partial(\mu/T)^n}~,
\end{equation}
i.e., as derivatives of the pressure with respect to the chemical potential.
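When no closed form is available, these derivatives can be evaluated numerically; a minimal sketch using an $n$-th order central difference, with \texttt{pressure(T, mu)} standing in for the mean-field pressure of Section~\ref{sec:model}:
\begin{verbatim}
from math import comb

def c_n(pressure, T, mu, n, h=1e-2):
    # n-th cumulant from an n-th order central difference of p/T^4 in mu/T
    f = lambda x: pressure(T, x*T)/T**4
    x0 = mu/T
    return sum((-1)**k*comb(n, k)*f(x0 + (n/2.0 - k)*h)
               for k in range(n + 1))/h**n
\end{verbatim}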
These coefficients are usually finite and diverge only at a CEP, thus marking a clear signal for a critical structure. However, as shown in
\cite{Sasaki:2007db,Herold:2014zoa}, divergences occur also at the spinodal lines of a first-order phase transition if instabilities are taken
into account. Fig. \ref{fig:susceptibilities} shows the susceptibility and kurtosis from the linear sigma model with dilatons for a constant
temperature of $T=40$~MeV as a function of the net quark density. We see that both quantities diverge around the spinodal points, with critical
indices of $2/3$ and $2$, respectively. This result is in agreement with what has been found for the NJL model in \cite{Sasaki:2007db}.
\begin{figure}[t]
\centering
\subfloat[\label{fig:suscep}]{
\centering
\includegraphics[scale=0.61,angle=270]{nsuscep.eps}
}
\hfill
\subfloat[\label{fig:kurtosis}]{
\centering
\includegraphics[scale=0.61,angle=270]{nkurt.eps}
}
\caption[Quark number susceptibility and kurtosis]{Quark number susceptibility \subref{fig:suscep} and kurtosis \subref{fig:kurtosis}
for a nonequilibrium first-order phase transition at $T=40$~MeV. Figure from \cite{Herold:2014zoa}.}
\label{fig:susceptibilities}
\end{figure}
Given this, one may expect a clear enhancement of event-by-event fluctuations in experiment not only for a CEP but also, and probably even
stronger, for a first-order phase transition. Note that the important aspect here is that the system is allowed to develop spinodal instabilities
after the formation of a supercooled phase. Only in nonequilibrium can enhanced fluctuations be observed for a first-order transition.
We test this assumption by extracting fluctuations of the net-baryon number from dynamical simulations
of heavy-ion collisions using the nonequilibrium chiral fluid dynamics model. For this purpose we begin with a spherical droplet of plasma, defined
by an initial temperature and chemical potential. We choose these values such that the subsequent evolution proceeds through the desired region of
the phase diagram, enabling us to study a crossover, CEP and first-order phase transition. As the total baryon number is conserved throughout the
expansion of the hot and dense matter, we need to define some region of acceptance, as we are not able to study fluctuations in a grand canonical
ensemble as the susceptibilities would indicate. We rather have to observe some appropriate part of the phase space in the canonical ensemble.
As shown in \cite{Bzdak:2012an}, ratios of cumulants significantly depend on the ratio of measured to total baryons
$N/N_{\rm tot}$ due to the overall conservation law, making the choice of a suitable range of acceptance a delicate and nontrivial problem. We show results for the variance and kurtosis as a function of time in Fig. \ref{fig:ebe}.
The baryon number within a single event is calculated over all cells with rapidity $|y|<0.5$ and transverse momentum density
$100 \mbox{ MeV/fm}^3<p_T<500 \mbox{ MeV/fm}^3$. In the upper plot we see the variance clearly enhanced for a CEP and even more
for a first-order phase transition in comparison with a crossover scenario. The same holds for the kurtosis as shown in the lower plot. We see that
it also becomes negative for both a CEP and a first-order phase transition.
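For concreteness, the event-by-event moments shown here can be extracted from samples of the accepted net-baryon number as in the following sketch (we quote the scaled fourth cumulant $c_4/c_2^2$; other kurtosis conventions differ by constants):
\begin{verbatim}
import numpy as np

def variance_and_kurtosis(samples):
    # sample cumulants of the event-by-event net-baryon number
    N = np.asarray(samples, dtype=float)
    dN = N - N.mean()
    c2 = np.mean(dN**2)               # variance
    c4 = np.mean(dN**4) - 3.0*c2**2   # fourth cumulant
    return c2, c4/c2**2               # variance and scaled kurtosis
\end{verbatim}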
\begin{figure}[t]
\centering
\subfloat[\label{fig:ebesuscep}]{
\centering
\includegraphics[scale=0.72,angle=270]{sigma_pt.eps}
}
\hfill
\subfloat[\label{fig:ebekurtosis}]{
\centering
\includegraphics[scale=0.72,angle=270]{kurt_pt.eps}
}
\caption[Event-by-event fluctuations]{Variance \subref{fig:ebesuscep} and kurtosis \subref{fig:ebekurtosis} of the net-baryon number from the fluid
dynamical evolution as a function of time. Figure from \cite{Herold:2014zoa}.}
\label{fig:ebe}
\end{figure}
We also see that the enhanced fluctuations vanish for larger times, so the signal gets washed out in the hydrodynamic phase after passing the
phase boundary. Therefore it is important to consider a criterion for the freeze-out to determine whether the fluctuations in the density finally get
imprinted in fluctuations of actual particle numbers as measured by a detector. We expect that due to baryon number conservation the fluctuations
remain present even after particlization and final state interactions. However, this assumption has to be verified in the future.
\section*{Acknowledgements}
This work was funded by Suranaree University of Technology (SUT) and the CHE-NRU (NV.12/2557) project. The authors thank Igor Mishustin and
Chihiro Sasaki
for fruitful discussions and Dirk Rischke for providing the SHASTA code that was used for the numerical evolution of the ideal fluid. M. N. acknowledges support
from the U.S. Department of Energy under grant DE-FG02-05ER41367 and a fellowship within the Postdoc-Program of the German
Academic Exchange Service (DAAD).
The computing resources have been provided by the National e-Science Infrastructure
Consortium of Thailand, the Center for Computer Services at SUT and the Frankfurt Center for Scientific Computing.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}\label{sec_intro}
\IEEEPARstart{E}{dge} caching is a promising technology to deal with the high demands on backhaul by bringing popular content closer to the network edge. As such, it is a promising component for future wireless cellular networks facilitated by edge intelligence \cite{M.MaddahAliMay2014, MaddahAli2014,G.Paschos2016,Yu2018}. Caching technologies have been explored from various design aspects, considering, e.g., network topology, caching model, performance metric, control structure and mathematical tool \cite{L.Li2018}. Typical cache-enabled cellular networks include cache-enabled macro-cellular networks \cite{Khreishah2015}, heterogeneous networks (HetNets) \cite{C.Yang2016, E.Bastug2014}, device-to-device (D2D) networks \cite{B.Chen2017}, and cloud radio access networks (CRANs) / fog RANs (F-RANs) \cite{M.Tao2016, S.H.Park2016}. A caching policy is determined in a centralized or decentralized manner with performance metrics varying from backhaul load, latency, successful transmission rate, etc., utilizing tools such as optimization, stochastic geometry and deep learning. Cache placement methods considered include deterministic, random, and online placement \cite{A.Gharaibeh2016}, with or without file splitting and erasure coding based on, e.g., maximum distance separable (MDS) codes \cite{J.Liao2017}. Content delivery, on the other hand, can be realized via multiple unicast transmissions or one multicast transmission. Overall, caching models such as Femto caching \cite{K.Shanmugam2013}, probabilistic caching \cite{Blaszczyszyn2015}, and information-theoretic coded caching (CC) \cite{M.MaddahAliMay2014} have commanded attention in the literature. By making full use of the cached content in local storage and of transmissions over the backhaul to create opportunities for simultaneous coded multicasting, coded caching can considerably reduce the backhaul load, i.e. the number of coded messages that need to be transmitted, yielding a considerable global caching gain.
Coded caching strategies have been developed under various network settings and content properties~\cite{M.MaddahAliMay2014, MaddahAli2014, G.Paschos2016, J.Zhang2015, J.Zhang2015a, S.Wang, E.Parrinello2020,S.Jin2016, MozhganBayat, M.Ji2016, Tandon2016, N.Mital2020}. In \cite{M.MaddahAliMay2014, MaddahAli2014}, centralized and decentralized coded caching methods with homogeneous network and file settings were proposed. Nonuniform file popularity, file sizes and cache sizes were considered in \cite{J.Zhang2015, J.Zhang2015a, S.Wang}, respectively. In \cite{J.Zhang2015}, file popularity was approximated into several levels and the cache space for storing files with different popularity levels was optimized. Moreover, coded caching techniques have been improved to deal with shared storage, multiple antennas, multiple requests, and the large-scale file subpacketization that follows, by jointly considering the caching design and possible collaboration. In \cite{E.Parrinello2020}, coded caching was considered with shared caches, multiple antennas and multiple requests, where index coding was used for user-to-cache association. Multi-round delivery was utilized based on the cache replication strategy proposed in \cite{S.Jin2016, MozhganBayat}. Coded caching in D2D networks, where the mobile stations act as both transmitters and receivers, was discussed in~\cite{M.Ji2016}; the effective numbers of sources and destinations were modeled in the caching design. \cite{Tandon2016} jointly utilized local storage and computing to achieve edge intelligence in F-RANs.
Most of the works on coded caching are targeted for wired networks where both network and content properties are static. However, in practice, there are many stochastic properties that impose the need to optimize coded caching against uncertainty. Coded caching needs to be studied considering the impacts of randomness caused by wireless channel fading \cite{Bayat, S.P.Shariatpanahi2019, D.Cao2019}, multiple antennas and transmission interference \cite{A.Toelli2020, S.S.Bidokhti2016}, random user behavior, such as user inactivity and mobility \cite{C.Yapar2019, Ozfatura2020}, and dynamic content popularity \cite{R.Pedarsani2016}. Coded caching over a broadcast channel between the server and the users was studied in \cite{Bayat} by joint optimization of caching and modulation. Mapping the multicast coded messages to the symbols of a signal constellation helps the users demodulate the desired symbols more reliably. In \cite{A.Toelli2020}, coded caching was applied to a multiple-input single-output (MISO) broadcast channel by jointly designing the multigroup multicast beamforming and the coded caching policy, so that it benefits from spatial multiplexing, improved interference management and multi-antenna multicasting opportunities in content delivery.
An important source of uncertainty in content delivery, especially in a mobile network scenario, is user inactivity. The main reason is that the users are free to change location between cache placement and cache delivery. With edge-optimized caching, caching is local, e.g., optimized on a per base station level. Due to mobility, a user may not be within the range of the same cache at the cache delivery phase as during cache placement. Moreover, at a given time period of cache delivery, not all users may be requesting a file.
When this happens, content delivery will be targeted only for active devices. As information about user activity is not available during cache placement, the design of optimal cache placement becomes complicated. The objective of this work is to find optimal cache placement and delivery strategies in the presence of user inactivity.
\subsection {Related Work}
In \cite{M.MaddahAliMay2014}, Maddah-Ali and Niesen (MAN) presented pioneering information-theoretic research on coded caching, where the network topology was deterministic without user inactivity, inspiring a substantial body of scientific work. Of relevance to this paper are \cite{MaddahAli2014, N.Mital2020, C.Yapar2019, Deng2020, Daniel2020, Q.Wang2019,Dutta2021}, which consider coded caching either from the perspective of optimization, or with user inactivity, or in a decentralized manner.
Multi-server networks in a random topology were considered in \cite{N.Mital2020}, where the users were randomly connected to a fixed number of servers. Maximum distance separable (MDS) codes were utilized to construct file pieces, thus enabling the users to recover the required file with fewer fragments from a limited subset of the servers. This is the opposite of the scenario discussed in this paper: here we consider one server, and the randomness is in the set of users connected to the server.
The paper \cite{C.Yapar2019} characterized user inactivity for a cache-enabled D2D network with $K$ users. Each user may be inactive independently with a given probability, so the number of effective devices can be predicted probabilistically. Considering D2D, each user can both transmit and receive content from the other $K-1$ users, and hence there is a multiplier $K-1$ in file subpacketization when all users are active. When $\alpha$ of the users become inactive, the number of effective users available to deliver and receive content drops to $K-1-\alpha$.
Given an $\alpha$, cache placement and delivery performance are analyzed; the selection of $\alpha$ is made from the perspective of performance analysis. The outage probability for successful transmission was defined as a function of $\alpha$, and one is able to choose a proper $\alpha$ for any given outage probability threshold. MDS codes were utilized for multi-server transmissions, as in~\cite{N.Mital2020}. The D2D scenario analyzed in~\cite{C.Yapar2019} is more similar to the scenario of~\cite{N.Mital2020}, with fewer transmitters available, than to the scenario of interest here. Instead of probabilistic performance analysis, we pursue optimization for obtaining the best file subpacketization and caching strategy.
\cite{Deng2020} and \cite{Daniel2020} provided insights on optimization based coded caching design for nonuniform file parameters. User inactivity was not considered.
Decentralized coded caching has been investigated based on random cache placement operated independently at each user, which removes the need for central coordination which is not always applicable for practical wireless cellular networks \cite{MaddahAli2014,Q.Wang2019,Dutta2021}. \cite{MaddahAli2014} proposed a framework for decentralized coded caching and analyzed its application in three different typologies: tree topology, shared caches and multiple requests. As an extension to \cite{MaddahAli2014}, \cite{Q.Wang2019} provided an optimization framework for decentralized coded caching in a more general scenario with arbitrary file sizes and cache sizes. Caching parameters were optimized, aiming to minimize the worst-case or average load. \cite{Dutta2021} revisited the shared caching problem where multiple users were served by one cache. An optimal delivery scheme was proposed utilizing index coding.
In multiround delivery when multiple users share a cache~\cite{E.Parrinello2020,S.Jin2016,MozhganBayat}, in some rounds not all users are present. Thus, from a cache delivery perspective, the results of~\cite{E.Parrinello2020,S.Jin2016,MozhganBayat} directly apply to an inactive user scenario. However, the {\it cache placement} setting is different. The user caching profile is assumed known during cache placement in~\cite{E.Parrinello2020}, decentralized caching is applied in~\cite{S.Jin2016}, while MAN cache placement with pre-selected cardinality is assumed upfront in~\cite{MozhganBayat}. In contrast, here we consider deterministic caching in a situation where the set of active users is not known during cache placement, and optimize cache placement.
In this paper, we focus on
a scenario where there are several cache-enabled users connected to a single server via shared links, and where there is inactivity. Centralized and decentralized coded caching are studied for cache-enabled networks with user inactivity.
For centralized caching, two methods are considered. First, a method with file fragments of one size is considered, and the optimal fragment size is found. Second, a general scheme is considered where each file is divided into fragments of different sizes, and the fragment sizes are optimized over. It is proved that the optimal cache placement is the same for the basic scenario without user inactivity and for the scenario with user inactivity.
Mathematical analysis and simulation results are presented to illustrate the advantages of the proposed method in terms of reducing the backhaul load under user inactivity, as well as the equivalence of the subpacketization optimizations in all scenarios. Finally, the decentralized coded caching strategy is of interest in scenarios with uncertainty resulting from user inactivity.
Analysis and simulations are provided to compare decentralized coded caching with centralized coded caching in the presence of user inactivity.
\subsection {Contributions}
In this paper, our aim is to unlock the potential of utilizing coded caching against the uncertainty caused by user inactivity. In summary, this paper makes the following major contributions:
\begin{itemize}
\item We address coded caching design in the presence of user inactivity using both centralized and decentralized cache placement. The uncertainty about the inactive users complicates the cache placement design.
\item We develop centralized coded caching schemes optimizing the worst-case backhaul load of the one server shared link network via file subpacketization optimization assuming fixed cardinality and also multiple cardinalities.
\item With fixed cardinality of the fragment label set, the optimal cardinality is proved to be the same as the cache replication parameter used in Maddah-Ali-Niesen's method without user inactivity.
\item Considering the possible redundancy introduced by the fragments labeled with user sets containing inactive users, file subpacketization is also done based on multiple cardinalities. The weights for different types of fragments labeled with different cardinalities are designed. The optimal solution is proved to be the same as with fixed cardinality.
\item We have utilized decentralized cache placement in a system with user inactivity and developed inactivity-aware cache delivery for decentralized caching.
\item The performance gaps in terms of backhaul load under user inactivity have been investigated between the proposed centralized method and the decentralized method, and also between the centralized method and the ideal MAN method. While the former decreases with the increase of the number of inactive users, the latter increases from $0$ to a peak point with the number of inactive users, and then goes down to $0$ until all the users become inactive.
\item Simulations have shown that the proposed optimization based coded caching method achieves performance comparable to the ideal scenario with full user inactivity information available in the placement phase. The decentralized method with user inactivity is easy to implement at the price of a slight performance degradation.
\end{itemize}
\subsection {Notation}
The notation $[b]$ denotes the set consisting of consecutive integers $\{1, 2, {\dots }, b-1, b\}$. Similarly, $[a:b]$ is used to define the set $\{a, a+1, {\dots }, b-1, b\}$ consisting of integers ranging within $[a,b]$. $W_n$ is used to refer to the $n$-th file with $|W_n|$ denoting the length of the file. A fragment of file $W_n$ is expressed as $W_{n, \mathcal \tau}$, stating that the fragment of file $n$ is stored at the users whose indices belong to the set $\mathcal \tau$. The cardinality of any set $\mathcal \tau$, i.e. the number of elements in set $\mathcal \tau$, is denoted by $|\mathcal \tau|$. For any real number $c$, $\lfloor c \rfloor$ and $\lceil c \rceil$ denote the floor and ceiling versions of $c$, respectively. The operator $\oplus$ denotes the bitwise “XOR” operation between multiple fragments.
\subsection {Organization}
The organization of the rest of the paper is as follows. Section II introduces the system model used in the paper and Section III discusses the coded caching scheme in presence of user inactivity with fixed cardinality in file subpacketization. Section IV introduces an optimization framework and its optimal solution to interpret coded caching against user inactivity based on multiple cardinalities in file subpacketization. Section V discusses the alternative decentralized coded caching scheme to deal with the uncertainty caused by user inactivity which is suboptimal but easy to implement. Section VI provides a comparison of centralized and decentralized manners, as well as the ideal MAN method, against user inactivity. Section VII presents the simulation results of the proposed coded caching schemes against user inactivity. Section VIII summarizes the paper with a conclusion and discussion of the main contributions.
\section{System Model}\label{sec_sys}
In this section, the network model with caching policies, as well as the content characteristics that involve the structure of the network coding, and the file popularity profiles are presented.
There is a base station connected to the core network with access to the whole file library ($N$ files $W_1, W_2,\dots, W_N$, each with equal size $F$ and popularity), and $K$ users, each with local storage of size $MF$. The users are connected to the server via error-free shared links. The probability for each user to be inactive is $p$. In the cache placement phase, the user inactivity is unknown, while the base station has the information about user inactivity in the delivery phase. Assume that in a realization there are $I$ inactive users, forming an inactive user set ${\cal I}$. To ensure the significance of the discussion, we assume there is at least one inactive user and, at the same time, at least one active user to be served, i.e. $I \in [K-1]$. The number of active users is correspondingly defined as $J=K-I$. The probability for $I$ of the $K$ users being inactive is
\begin{equation}
P(I)={K \choose I} p^I (1-p)^{K-I},~I \in [K-1].
\end{equation}
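For example, this binomial probability can be evaluated directly (a minimal sketch):
\begin{verbatim}
from math import comb

def p_inactive(K, I, p):
    # probability that exactly I of the K users are inactive
    return comb(K, I) * p**I * (1 - p)**(K - I)

# total probability that 1 <= I <= K-1 for K = 10 users with p = 0.2
print(sum(p_inactive(10, I, 0.2) for I in range(1, 10)))
\end{verbatim}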
The cached content at the local cache of a user $k$ is defined as $Z_k$. The content delivered through the backhaul via coded multicast is described as $X_{\boldsymbol{d}}$, where $\boldsymbol{d}=(d_1,d_2,\dots,d_K)$ with $d_k \in [N], k \in [K]$ denoting the demand of user $k$.
As a common metric for measuring the performance of coded caching methods, the backhaul load is defined as the volume of content that needs to be delivered via the backhaul using coded multicasting. The backhaul load can be calculated both in the worst case, when the active users each request a different file, and in the average case, with all types of possible demands considered. Here, the worst-case backhaul load is considered, which presumes that the number of files is larger than the number of users, $N>K$. We aim to minimize the worst-case backhaul load by designing the caching strategy subject to file size and cache size constraints. \footnote{Unless otherwise specified, the backhaul load in the following parts of the paper refers to the worst-case backhaul load.}
\begin{figure}[htb]
\centering
\includegraphics[width=13cm]{caching_inactive-eps-converted-to.pdf}
\caption{System model for a cache-aided network in presence of user inactivity.}\label{fig_1}
\end{figure}
\section{coded caching in presence of inactive users}\label{method1}
\subsection{Content Placement and Delivery with User Inactivity}
We begin with the effective file subpacketization in Maddah-Ali-Niesen's method, which makes full use of the multicast delivery opportunities. In the MAN method, all the users follow the same (symmetric) cache placement rule and all the files are cached equally in local storage because of the homogeneous settings. Define a variable $t \triangleq \frac{KM}{N}$, and then divide each file into ${K \choose t}$ equal fragments. $t$ is referred to as the cache replication parameter in the literature \cite{ S.Wang}. \footnote{Here the MAN method refers to Maddah-Ali-Niesen's method. It is assumed that $t$ is an integer; if not, memory sharing can be used to deal with this issue.} The fragments are indexed by all subsets of users ${\cal \tau} \subset [K]$ of fixed cardinality $|\tau|=t$. Accordingly, the fragments of file $n$ are $W_{n, {\cal \tau}}$.
For the sake of simplicity, the set of all $t$-element subsets of $[K]$ is defined as ${\cal \zeta}=\{{\cal \tau}|{\cal \tau} \subset [K], |{\cal \tau}|=t \}$. It is assumed that user $k$ stores the fragments $W_{n, {\cal \tau}}$ of each file $n$ when $k \in {\cal \tau}, {\cal \tau} \in {\cal \zeta}$. Hence, the cache content placement at user $k$ can be written as
\begin{equation}\label{placement}
Z_k=(W_{n, {\cal \tau}}:{\cal \tau} \in {\cal \zeta}, k \in {\cal \tau}, n \in [N]).
\end{equation}
In each cache, there are ${K-1 \choose t-1}$ fragments for each file, and each fragment has normalized size of $1/{K \choose t}$. Thus the cache capacity constraint holds as follows
\begin{equation}
N{K-1 \choose t-1}\frac{1}{{K \choose t}}=M.
\end{equation}
Without user inactivity ($I=0$), the server can deliver a number of packets, each comprising coded fragments, to help the users reconstruct their requested files:
\begin{align} \label{X_or}
X_{\boldsymbol{d}}=(X_{\boldsymbol{d}, {\cal S}}:{\cal S} \in {\cal \vartheta}), \\
X_{\boldsymbol{d}, {\cal S}}=\oplus_{k \in {\cal S}}~W_{d_k, {\cal S}\setminus\{k\}},
\end{align}
where the set ${\cal S}$ has one more element than the set ${\cal \tau}$, satisfying ${\cal S} \subset [K], |{\cal S}|=t+1$. The set of all ${\cal S}$ is ${\cal \vartheta}$. This coded multicast strategy works in such a way that all the users are able to recover their requested files using the same transmitted packets and the fragments in their local caches. For any user $i$ in a particular ${\cal S}$, the linear combination $X_{\boldsymbol{d}, {\cal S}}$ can be rewritten as
\begin{equation}
X_{\boldsymbol{d}, {\cal S}}=W_{d_i, {\cal S} \setminus \{i\}} \oplus (\oplus_{k \in {\cal S}, k \neq i }~W_{d_k, {\cal S} \setminus\{k\}}),
\end{equation}
where $W_{d_i, {\cal S} \setminus \{i\}}$ is one of the fragments that user $i$ needs to recover the requested file $d_i$. The other fragments in linear combination $(\oplus_{k \in {\cal S}, k \neq i }~W_{d_k, {\cal S} \setminus\{k\}})$ are all cached at user $i$ according to the cache placement in \eqref{placement} due to the fact that $i \in {\cal S} \setminus\{k\}$ for any $k \in {\cal S}$ but $k \neq i$. Therefore, $W_{d_i, {\cal S} \setminus \{i\}}$ can be decoded by user $i$. Taking all types of ${\cal S} \in {\cal \vartheta}, i \in {\cal S}$ into account, user $i$ is thus able to decode all the missing fragments of the request file, i.e. $(W_{d_i, {\cal S} \setminus \{i\}}:{\cal S} \in {\cal \vartheta}$, $i \in {\cal S})=(W_{d_i, {\cal \tau}}:{\cal \tau} \in {\cal \zeta}$, $i \not\in {\cal \tau})$.
As the linear combination of several fragments via operator $\oplus$ has the same size as a single fragment, i.e. $F/{K \choose t}$, the backhaul load can be written as the size of a fragment multiplied by the number of the different sets ${\cal S}$ as follows:
\begin{equation}\label{R}
R=\frac{F}{{K \choose t}} {K \choose t+1}=F~\frac{K-t}{t+1}.
\end{equation}
Because the file size $F$ acts as a multiplier on the backhaul load, unit file size is assumed in the following for brevity.
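For illustration, the placement rule \eqref{placement} and the multicast packets \eqref{X_or} can be enumerated explicitly; in the following sketch a packet is represented by the list of fragment labels $(d_k, {\cal S}\setminus\{k\})$ that are XOR-ed together:
\begin{verbatim}
from itertools import combinations
from math import comb

def man_placement(K, t, N):
    # Z_k: user k caches every fragment whose label tau contains k
    return {k: [(n, tau) for n in range(1, N + 1)
                for tau in combinations(range(1, K + 1), t) if k in tau]
            for k in range(1, K + 1)}

def man_packets(K, t, demands):
    # one packet per (t+1)-subset S; a packet is the list of fragment
    # labels (d_k, S\{k}) that are XOR-ed together
    return [[(demands[k - 1], tuple(u for u in S if u != k)) for k in S]
            for S in combinations(range(1, K + 1), t + 1)]

K, t = 4, 2
Z = man_placement(K, t, N=5)
packets = man_packets(K, t, demands=[1, 2, 3, 4])
# load = C(K,t+1) packets of size 1/C(K,t): R = (K-t)/(t+1)
assert len(packets) == comb(K, t + 1)
\end{verbatim}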
In~\cite{E.Parrinello2020}, multi-round delivery to users sharing caches is considered. In a given delivery round, a subset of cache profiles may be present. Thus, for a given round, the cache delivery problem addressed in \cite{E.Parrinello2020} is the same as the delivery problem in a situation with a set of inactive users. We shall thus use the cache delivery scheme of \cite{E.Parrinello2020}, rephrased to an inactive user scenario.
Assuming there are $I$ inactive users forming an inactive user set ${\cal I} \subset [K]$, we utilize a general cardinality $l \in [K]$ in file subpacketization instead of the cache replication parameter $t=KM/N$ that is used in the MAN method. The optimal value of $l$ will be determined in Subsection \ref{method2}. The transmitted packets are then
\begin{align} \label{Xd_inact}
X_{\boldsymbol{d}}=\left\{\begin{array}{cl}
(X_{\boldsymbol{d}, {\cal S}}:{\cal S} \in {\cal \vartheta}), & \mbox{if} ~l+1 > I, \\
\\
(X_{\boldsymbol{d}, {\cal S}}:{\cal S} \in {\cal \vartheta},{\cal S} \not\subset {\cal I}), & \mbox{if}~l+1 \le I,
\end{array}\right.
\end{align}
where the packet given any subset ${\cal S}$ is
\begin{equation}\label{Xds_inact}
X_{\boldsymbol{d}, {\cal S}}=\oplus_{k \in {\cal S}, k \notin {\cal I}}~W_{d_k, {\cal S}\setminus \{k\}}.
\end{equation}
The worst case backhaul load then becomes~\cite{E.Parrinello2020}\footnote{${K \choose l+1}/{K \choose l}=-1+\frac{K+1}{l+1}$ is used for simplifying the computation.}
\begin{align}\label{R_inact-obj}
R(l)=\left\{\begin{array}{cl}
\frac{1}{{K \choose l}} {K \choose l+1}, & \mbox{if}~l+1> I,\\
\frac{1}{{K \choose l}} \left[{K \choose l+1}-{I \choose l+1}\right], &\mbox{if}~l+1\le I.
\end{array}\right.
\end{align}
Comparing \eqref{R} and \eqref{R_inact-obj}, it is clear that the worst-case backhaul load in the presence of user inactivity is either the same as the one derived without user inactivity, when $l+1 > I$, or smaller than the one without user inactivity, when $l+1 \le I$.
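Numerically, the two branches of \eqref{R_inact-obj} read as follows (a minimal sketch):
\begin{verbatim}
from math import comb

def load_inactive(K, l, I):
    # worst-case backhaul load R(l), normalized by the file size
    if l + 1 > I:
        return comb(K, l + 1)/comb(K, l)
    return (comb(K, l + 1) - comb(I, l + 1))/comb(K, l)

print(load_inactive(10, 4, 2))   # 1.2, same as without inactivity
print(load_inactive(10, 4, 8))   # reduced load, since l+1 <= I
\end{verbatim}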
For clarification, we summarize the procedure of the proposed centralized coded caching scheme in presence of user inactivity in Alg.~\ref{alg_cen}, based on (\ref{R_inact-obj}). Note that in Alg.~\ref{alg_cen}, $l^*$ denotes the optimal cardinality based on file subpacketization optimization given by $l^*=KM/N$, which shall be carefully proved in Subsection \ref{method2} and Section \ref{method3}.
\begin{algorithm}
\caption{Centralized Coded Caching in Presence of User Inactivity}
\label{alg_cen}
\begin{algorithmic}[1]
\STATE \textbf{procedure} {\large P}LACEMENT
\STATE ~~~~$l \leftarrow l^*$ (the optimal solution in file subpacketization optimization: $l^*=KM/N$)
\STATE ~~~~${\cal \zeta} \leftarrow \{{\cal \tau}|{\cal \tau} \subset [K], |{\cal \tau}|=l \}$
\STATE ~~~~\textbf{for} $n \in [N]$ do
\STATE~~~~~~~split $W_n$ into $(W_{n, {\cal \tau}}|{\cal \tau} \in {\cal \zeta})$ with identical size
\STATE~~~~\textbf{end for}
\STATE ~~~~\textbf{for} $k \in [K]$ do
\STATE~~~~~~~user $k$ caches $Z_k \leftarrow (W_{n, {\cal \tau}}|{\cal \tau} \in {\cal \zeta}, k \in {\cal \tau},n \in [N])$
\STATE~~~~\textbf{end for}
\STATE \textbf{end procedure} \\
Users make requests $\boldsymbol{d}$ given the number and identity of the inactive users $(I, {\cal I})$
\STATE \textbf{procedure} {\large D}ELIVERY
\STATE ~~~~$l \leftarrow KM/N$
\STATE ~~~~${\cal \vartheta} \leftarrow \{{\cal S}|{\cal S} \subset [K], |{\cal S}|=l+1\}$
\STATE ~~~~\textbf{if} $l+1>I$ do\\
\STATE~~~~~~~$X_{\boldsymbol{d}} \leftarrow (\oplus_{k \in {\cal S}, k \notin {\cal I}}~W_{d_k, {\cal S}\setminus \{k\}}:{\cal S} \in {\cal \vartheta})$
\STATE ~~~~\textbf{else if} $l+1 \le I$ do\\
\STATE~~~~~~~$X_{\boldsymbol{d}} \leftarrow (\oplus_{k \in {\cal S}, k \notin {\cal I}}~W_{d_k, {\cal S}\setminus\{k\}}:{\cal S} \in {\cal \vartheta},{\cal S} \not\subset {\cal I})$
\STATE~~~~\textbf{end if}
\STATE \textbf{end procedure}
\end{algorithmic}
\end{algorithm}
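A compact Python sketch of the delivery procedure of Alg.~\ref{alg_cen} is given below; as before, a packet is represented by the list of fragment labels combined by XOR:
\begin{verbatim}
from itertools import combinations

def alg1_delivery(K, M, N, demands, inactive):
    l = K*M//N                     # optimal cardinality l* = KM/N
    I = set(inactive)
    packets = []
    for S in combinations(range(1, K + 1), l + 1):
        if len(I) >= l + 1 and set(S) <= I:
            continue               # packet would serve inactive users only
        packets.append([(demands[k - 1], tuple(u for u in S if u != k))
                        for k in S if k not in I])
    return packets

pkts = alg1_delivery(K=6, M=2, N=6, demands=[1, 2, 3, 4, 5, 6],
                     inactive=[5, 6])
\end{verbatim}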
While in (\ref{R_inact-obj}) and Alg.~\ref{alg_cen} a framework of coded caching in presence of user inactivity is presented, there is an essential problem remaining: {\it What is the optimal cache placement policy if it is known at the cache placement that a random set of $I$ out of $K$ users will be inactive at the time of cache delivery?} This motivates our investigation on file subpacketization optimization in the following.
\subsection{Optimizing Coded Caching in Presence of User Inactivity}
When optimizing cache placement in the presence of user inactivity, a remark is in order. It is fair to assume that the server has no information about user inactivity in the content placement phase, while this knowledge becomes available in the content delivery phase. As a result, the server will only target the active users for content delivery based on a given cache content placement. The coded packets to be transmitted from the server are only for the $J=K-I$ active users. Thus the number of inactive users $I$ affects at least content delivery and the backhaul load. Moreover, to determine the packets $X_{\boldsymbol{d}}$ to be transmitted, full information about the file requests is needed, which directly indicates the set of inactive users ${\cal I}$. As the backhaul load, either worst-case or average, is usually selected as the performance metric in coded caching design, $I$ has to appear in the objective of cache placement optimization, not only in cache delivery. This conflicts with the assumption that user inactivity information is not available before content delivery. For the sake of analysis, we assume that the server knows the estimated number of inactive users $I$ already in the cache placement phase, while not knowing which set of users ${\cal I}$ will become inactive. This information may be based, e.g., on historical information.
It turns out that the optimal solutions to the file subpacketization problems, both with fixed cardinality and with multiple cardinalities, are independent of $I$.
This demonstrates that in presence of user inactivity, where the set ${\cal I}$ is not known in the cache placement phase, knowledge about the cardinality of ${\cal I}$ is of no use. The optimal schemes found here are thus optimal also in situations where $I$ is not known during cache placement.
\subsection{Subpacketization Optimization with Fixed Cardinality}\label{method2}
The subpacketization method and cache content placement used here are based on a group of subsets ${\cal \tau} \subset [K]$ with $|{\cal \tau}|=l$ to create possible multicast opportunities in the content delivery phase. The optimal choice for the cardinality of ${\cal \tau}$ can be derived from an optimization perspective.
Firstly, we consider the normal case without user inactivity, and introduce a general cardinality $l=|{\cal \tau}|, l \in [K]$, to replace the fixed $t=KM/N$. The optimal $l$ should give the lowest backhaul load $R(l)={K \choose l+1}/{K \choose l}=\frac{K-l}{l+1}$ while satisfying the cache capacity constraint ${K-1 \choose l-1}/{K \choose l}=l/K \le M/N$. The optimization problem can be rewritten as
\begin{subequations}\label{opt-man}
\begin{align}
\min_l&~~~~-1+\frac{K+1}{l+1} \label{optMAN_0}\\
{\rm s.t.} &~~l \le \frac{KM}{N}, l \in [K].\label{optMAN_1}
\end{align}
\end{subequations}
Since the objective \eqref{optMAN_0} decreases with respect to $l$, the optimal solution is the largest $l$, satisfying the cache capacity constraint with equality, i.e. $l=\frac{KM}{N}$, which agrees with the cache replication parameter $t=KM/N$ used in the MAN method. In particular, when $KM/N$ is not an integer, the optimal cardinality becomes $l=\floor{KM/N}$, which works for the scenario with user inactivity as well.
Similarly, we substitute the worst case backhaul load with user inactivity \eqref{R_inact-obj} into the objective function and again replace the fixed $t=KM/N$ with a variable $l$ to be optimized. The coded caching optimization with user inactivity after simplification can be written as
\begin{subequations}\label{optFix}
\begin{align}
\min_l&~~~~R(l) \label{optFix_0} \\
{\rm s.t.} &~~l \le \frac{KM}{N},~~l \in [K-1]. \label{optFix_1}
\end{align}
\end{subequations}
where the objective function is given by \eqref{R_inact-obj}.
To find the optimal $l$, the analysis of (\ref{optFix}) can be divided into two parts. We have
\begin{lemma}\label{lemma-optFix-obj}
The backhaul load $R(l)$ of (\ref{R_inact-obj}) is a decreasing function of $l$ in the interval $l \in [I-1]$.
\end{lemma}
\begin{proof}
See Appendix B.
\end{proof}
We can now show
\begin{theorem}\label{theorem-inactive-t}
If MAN cache placement with cardinality $l$ of all caching subsets $\tau$ is used in the presence of user inactivity, minimum worst case backhaul load is achieved with cardinality $l=\frac{KM}{N}$.
\end{theorem}
\begin{proof}
We first treat the minimization in the regions $l>I-1$ and $l\leq I-1$ separately.
In the region $l > I-1$, the backhaul load \eqref{R_inact-obj} is the same as the one without user inactivity \eqref{R}, for which the optimal cardinality has been proved to be $l=MK/N$.
Thus if $I \le KM/N$, this yields the minimum backhaul in this region, while if $I > KM/N$, all of this region is infeasible.
According to Lemma \ref{lemma-optFix-obj}, the minimum backhaul in the second region $l\leq I-1$ is achieved at the maximal feasible point $l=\min(KM/N,I-1)$.
It remains to find the smaller value of the solutions in the two regions, in the case $KM/N \geq I$. For this, we compute
\begin{eqnarray}
R(I-1)-R\left(\frac{KM}{N}\right) &=&
\frac{{K \choose I}-{I \choose I}}{{K \choose I-1}} -\frac{{K \choose KM/N +1}}{{K \choose KM/N}} \cr
&=&\frac{(I-1)!}{I (KM/N+1) K!} \times B \, \Gamma,
\end{eqnarray}
where $B=I(KM/N+1)(K-I+1)!$ and
\begin{eqnarray}
\Gamma+1 &=& \frac{(K+1)(KM/N+1-I)}{I(KM/N+1)(K-I+1)! {K \choose I-1}} \cr
&\ge& \frac{K+1}{KM/N+1} > 1.
\label{fixProof}
\end{eqnarray}
This completes the proof.
\end{proof}
Theorem~\ref{theorem-inactive-t} thus states that the optimal cardinality in file subpacketization for coded caching in presence of user inactivity in the whole interval $I \in [K-1]$ is always $l=KM/N$, which is the same as $t$ used in MAN method without user inactivity.
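Theorem \ref{theorem-inactive-t} can also be confirmed numerically by sweeping the feasible cardinalities (a brute-force sketch for sample parameters):
\begin{verbatim}
from math import comb

def load_inactive(K, l, I):
    if l + 1 > I:
        return comb(K, l + 1)/comb(K, l)
    return (comb(K, l + 1) - comb(I, l + 1))/comb(K, l)

K, M, N = 10, 2, 5                 # cache replication parameter KM/N = 4
for I in range(1, K):
    feasible = range(1, K*M//N + 1)
    best = min(feasible, key=lambda l: load_inactive(K, l, I))
    assert best == K*M//N          # the optimum is l = KM/N for every I
\end{verbatim}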
\section{Subpacketization with Multiple Cardinalities}\label{method3}
The analysis in the previous section contains redundancy in the content placement, caused by caching the fragments related to the inactive users. For instance, the fragments $(W_{n, {\cal \tau}}:k \in {\cal \tau},~{\cal \tau}\cap{\cal I} \neq \emptyset, n \in [N])$ stored at an active user $k$ take up storage space without contributing to reducing the backhaul load. The optimal file subpacketization and cache placement would cache only the fragments corresponding to the active users, e.g. $(W_{n, {\cal \tau}}:k \in {\cal \tau}, {\cal \tau} \cap {\cal I} = \emptyset, n \in [N])$. However, the information about user inactivity is unknown in the cache placement phase, which means that the set ${\cal I}$ cannot be specified.
Given an inactivity probability, the probability of a user caching a file fragment in vain grows with the fragment label cardinality $|\tau|$. Above, we found that the optimal cardinality is given by $l=KM/N$ if all labels have the same cardinality. As the number of active users decreases, there is a possibility that having labels of multiple cardinalities might lead to more efficient use of the caches. In \cite{Daniel2020}, file subpacketization based on multiple cardinalities is utilized to deal with the heterogeneity of file popularity, which imposes multilevel file subpacketization in terms of popularity.
For this, we split each file based on subsets with a series of different cardinalities $l \in [0:K]$ instead of a fixed number $t$. That is to say, each file is split into $2^K$ fragments labeled with $W_{n, {\cal A}^l}:{\cal A}^l \subset [K], |{\cal A}^l|=l, l \in [0:K]$. Similarly, we assume that in the cache placement phase, a fragment is cached by user $k$ if its fragment label ${\cal A}^l, l \in [0:K]$ includes $k$:
\begin{equation}
Z_k=(W_{n, {\cal A}^l}:k \in {\cal A}^l, {\cal A}^l \subset [K],|{\cal A}^l|=l, l \in [0:K], n \in [N]).
\end{equation}
According to the cardinality $l$, the fragments for each file $n$ can be divided into $K+1$ groups as $W_n^l=(W_{n, {\cal A}^l}:{\cal A}^l \subset [K], |{\cal A}^l|=l), l=0,1, \dots,K.$ There are ${K \choose l}$ types of fragments in fragment group $W_n^l$. In total, there are $\sum_l {K \choose l}=2^K$ different fragments for each file. By adjusting the weights of the fragment groups $W_n^l, l \in [0:K]$ for each file, the space that each fragment group takes from the cache is decided accordingly. The number of effective users involved in the caching design can then be controlled to some degree.
It is assumed that the fragments in the same group $l$ have the same size. Define a weight vector as ${\boldsymbol \alpha}\triangleq [\alpha^0,\alpha^1,\dots, \alpha^K]$ with $\alpha^l$ denoting the size of a fragment in fragment group $l$ normalized by file size $F$, i.e. $\alpha^l=|W_{n, {\cal A}^l}|, n \in [N]$. Hence, the size of fragment group $l$ of file $n$, $W_n^l$, is ${K \choose l} \alpha^l F$.
Now the file size and cache capacity constraints are:
\begin{subequations} \label{c_fs}
\begin{align}
&~~~~\sum_{l=0}^K {K \choose l} \alpha^l=1, \label{c_fs0} \\
&\sum_{l=1}^K \alpha^l {K-1 \choose l-1} \le \frac {M}{N}, \label{c_fs1} \\
&~0 \le \alpha^l,~l=0,1\dots,K. \label{c_fs2}
\end{align}
\end{subequations}
In this case, the content to be delivered to the users via the backhaul can be derived as (see Appendix A)
\begin{align*}
X_{\boldsymbol{d}}=\left\{\begin{array}{cl}
(X_{\boldsymbol{d},{\cal A}^{l+1}}:{\cal A}^{l+1} \subset [K], |{\cal A}^{l+1}|=l+1), ~~\text{if}~l+1>I, \\
(X_{\boldsymbol{d},{\cal A}^{l+1}}:{\cal A}^{l+1} \not \subset {\cal I},{\cal A}^{l+1} \subset [K], |{\cal A}^{l+1}|=l+1), \\
\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad
\text{if}~1 < l+1 \le I,\\
\quad~X_{\boldsymbol{d},{\cal A}^{l}},\quad\quad\quad\quad\quad\quad \text{if}~l=0,
\end{array}\right.
\end{align*}
where the packet corresponding to a given fragment label set ${\cal A}^{l+1}$ is given by
\begin{align*}
X_{\boldsymbol{d},{\cal A}^{l+1}}=\left\{\begin{array}{cl}
\oplus_{k \in {\cal A}^{l+1}, k \notin {\cal I}}~~W_{d_k, {\cal A}^{l+1}\setminus{\{k\}}} ,&\text{if}~l+1>I, \\
\oplus_{k \in {\cal A}^{l+1}, k \notin {\cal I}}~~W_{d_k, {\cal A}^{l+1}\setminus{\{k\}}},&\text{if}~1 < l+1\le I,\\
~~W_{d_k, {\cal A}^l},&\text{if}~l=0.
\end{array}\right.
\end{align*}
In particular, there is an exceptional case for $l=0$, when ${\cal A}^0$ equals $\emptyset$ and thus none of the users has stored the subfiles $W_{n,{\cal A}^0}, n \in [N]$. Accordingly, the backhaul load normalized by the file size $F$ is written as
\begin{align}
\quad\quad R({\boldsymbol \alpha})&=(K-I)\alpha^0+\sum_{l=I}^{K-1} {K \choose l+1} \alpha^l
+ \sum_{l=1}^{I-1} \bigg[{K \choose l+1} - {I \choose l+1} \bigg]\alpha^l \notag\\
&=\sum_{l=0}^{K-1} {K \choose l+1} \alpha^l-\sum_{l=0}^{I-1} {I \choose l+1} \alpha^l.
\end{align}
The caching design turns to solving a linear programming of minimizing the backhaul load subject to the file size and cache capacity constraints \eqref{c_fs}:
\begin{equation}\label{opt0}
\min_{{\boldsymbol \alpha}}~~~~ R({\boldsymbol \alpha})~~~~
{\rm s.t.}~~\eqref{c_fs0}-\eqref{c_fs2}.
\end{equation}
Existing solvers, e.g. CVX \cite{M.Grant2013}, can be used to solve problem \eqref{opt0} with $K+1$ variables and $K+3$ constraints \cite{M.Tao2019,M.Tao2017}. It is, however, instructive to characterize the structure of the optimal solution, which is discussed below.
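As an aside, problem \eqref{opt0} is small enough that any LP solver reproduces the optimum; the following Python sketch uses SciPy's \texttt{linprog} instead of CVX. The helper name \texttt{min\_backhaul} is our illustrative choice and not taken from the references.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog
from scipy.special import comb

def min_backhaul(K, I, M, N):
    # objective: R(alpha) = sum_{l<K} C(K,l+1) a^l - sum_{l<I} C(I,l+1) a^l
    c = np.zeros(K + 1)
    for l in range(K):
        c[l] = comb(K, l + 1)
    for l in range(I):
        c[l] -= comb(I, l + 1)
    # file-size constraint (c_fs0): sum_l C(K,l) a^l = 1
    A_eq = [[comb(K, l) for l in range(K + 1)]]
    # cache constraint (c_fs1): sum_{l>=1} C(K-1,l-1) a^l <= M/N
    A_ub = [[comb(K - 1, l - 1) if l >= 1 else 0.0 for l in range(K + 1)]]
    res = linprog(c, A_ub=A_ub, b_ub=[M / N], A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return res.fun, res.x
\end{verbatim}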
To simplify the problem, we replace the original variables with new variables satisfying $\beta^l={K \choose l} \alpha^l, l \in [0:K]$, which yields a new weight vector ${\boldsymbol \beta} \triangleq [\beta^0,\beta^1,\dots,\beta^K]$. Problem \eqref{opt0} can then be rewritten as
\begin{subequations}\label{opt1}
\begin{align}
\min_{\{\beta^l\}} ~~~~&\sum_{l=0}^{K-1} \frac{K-l}{l+1} \beta^l-\sum_{l=0}^{I-1} {I \choose l+1}/{K \choose l}\beta^l \label{R1_0}\\
{\rm s.t.} ~~~~~~&\sum_{l=0}^K \beta^l=1, \label{R1_1} \\
~~~~~~~~~~&\sum_{l=1}^K l \beta^l \le t, \label{R1_2} \\
~~~~~~&0 \le \beta^l,~l \in [0:K]. \label{R1_3}
\end{align}
\end{subequations}
It can be observed that the terms related to the inactive users break the symmetry among the binomial terms in the objective, which makes it challenging to find a closed-form solution to problem \eqref{opt1}.
To proceed, we analyze the properties of the objective function and the linear constraints. The discussion proceeds in \textit{four} steps, each formulated as a lemma below, addressing \textit{the objective, the constraints, the structure of the optimal solution, and an exceptional case}.
\begin{lemma}\label{lemma-step1}
The first derivative of the coefficients in the objective function \eqref{R1_0} is negative, while the second derivative is positive when $I \in [K-2]$ and equal to $0$ when $I=K-1$.
\end{lemma}
\begin{proof}
See Appendix C.
\end{proof}
\begin{lemma}\label{lemma-step2}
The optimal solution to problem \eqref{opt1} must have a tight cache capacity constraint \eqref{R1_2}.
\end{lemma}
\begin{proof}
We aim to prove the cache constraint \eqref{R1_2} must be satisfied with equality by the optimal solution $\boldsymbol{\beta}$ of problem~\eqref{opt1} via contradiction. If not, one can always derive a new feasible point, which satisfies all the constraints with a lower objective value, by assigning a larger value to $\beta^l$ with higher $l$. That is to say, the optimal caching strategy makes full usage of the cache space.
We define a feasible solution $\boldsymbol{\beta}=[\beta^0,\beta^1,\dots,\beta^K]$ for which constraint \eqref{R1_2} is loose, i.e. $t-\sum^K_{l=0} l \beta^l=\lambda>0$. We reduce the value of a nonzero $\beta^{l_1}$ by some value $\delta$, where $0 < \delta \le \beta^{l_1}$, and then increase the value of any $\beta^{l_2}$ with $l_2>l_1$ by $\delta$. Since $l_2>l_1$ and the coefficients in constraint \eqref{R1_2} increase linearly with $l$, we can always adjust the value of $\delta$ such that constraint \eqref{R1_2} is satisfied with a narrower gap $\lambda$. To be exact, we let $\delta$ satisfy the following constraints:
\begin{align}
\left\{\begin{array}{cl}
0 < \delta \le \beta^{l_1},\\
\beta^{l_2}+\delta \le 1,\\
t-\lambda+l_1 (-\delta)+l_2 \delta \le t,
\end{array} \right.
\end{align}
which results in $0 < \delta=\min\{\beta^{l_1}, 1-\beta^{l_2}, \lambda/(l_2-l_1)\}$. The analysis is based on the assumption that there are at least two nonzero variables in $\boldsymbol{\beta}$, i.e. $0 <\beta^{l_1} <1$ and $0 <\beta^{l_2} <1$. In case there is only one nonzero element in $\boldsymbol{\beta}$, i.e. $\beta^{l_1}=1$, we can recall the conclusion in Theorem~\ref{theorem-inactive-t} assuming fixed cardinality that the optimal solution is $\boldsymbol{\beta}=\{\beta^t=1, \beta^l=0, \text{when}~l \neq t\}$, for which the cache capacity constraint is satisfied with equality. Hence, we only need to discuss the general case with at least two nonzero elements in $\boldsymbol{\beta}$.
In this case, the summation of the elements in $\boldsymbol{\beta}$ remains the same so that constraint \eqref{R1_1} still holds true. The new feasible point becomes $\boldsymbol{\hat{\beta}}=[\hat{\beta}^0,\hat{\beta}^1,\dots,\hat{\beta}^K]$ with
\begin{align}
\left\{\!\begin{array}{cl}
&\hat{\beta}^{l_1}=\beta^{l_1}-\delta, \\ &\hat{\beta}^{l_2}=\beta^{l_2}+\delta, \\ &\hat{\beta}^l=\beta^l, \mbox{else}.
\end{array} \right.
\end{align}
Since only two variables are changed, the difference of the objective function defined as $\Delta$ can be written as
\begin{align}
\Delta=&(c^{l_1} \hat{\beta}^{l_1}+c^{l_2} \hat{\beta}^{l_2})-(c^{l_1} \beta^{l_1}+c^{l_2} \beta^{l_2}) \notag \\
=&c^{l_2} \delta-c^{l_1} \delta=(c^{l_2}-c^{l_1})\delta \overset{(a)}{<} 0, \label{tightC}
\end{align}
where $(a)$ follows from the conclusion in the previous step that the coefficient $c^{l}$ is decreasing with respect to $l$. As presented in \eqref{tightC}, the new feasible point $\boldsymbol{\hat{\beta}}$ gives a lower objective value than $\boldsymbol{\beta}$; that is, there always exists a better solution $\boldsymbol{\hat{\beta}}$. It is consequently proved that the optimal solution to problem \eqref{opt1} always satisfies the cache capacity constraint \eqref{R1_2} with equality.
\end{proof}
\begin{lemma}\label{lemma-step3}
The optimal solution $\boldsymbol{\beta}$ has at most one non-zero variable when $t$ is an integer (two non-zero variables with consecutive indices when $t$ is non-integer).
\end{lemma}
\begin{proof}
Here contradiction will again be employed to prove that the optimal solution $\boldsymbol{\beta}$ must have either one non-zero variable ($t \in \mathbb{Z}$), or two non-zero variables with consecutive indices ($t \not\in \mathbb{Z}$), i.e. some $l$ and $l+1$.
For contradiction, we assume that a feasible solution $\boldsymbol{\beta}$ to problem \eqref{opt1}, which has at least two non-zero variables $\beta^{l_1}$ and $\beta^{l_2}$ with $l_2-l_1 \ge 2$, is optimal, and then prove that there exists a better solution $\boldsymbol{\beta_o}$. Similar to the construction of $\boldsymbol{\hat{\beta}}$ in the previous step, we subtract a small positive value $\delta$ from one $\beta^l$ and add it to another to keep the sum unchanged. Thus $\boldsymbol{\beta_o}=[\beta^0_o, \dots, \beta^{K}_o]$ is defined as
\begin{align}
\left\{\!\!\begin{array}{cl}
&\beta^{l_1}_o=\beta^{l_1}-\delta,\\
&\beta^{l_1+1}_o=\beta^{l_1+1}+\delta,\\
&\beta^{l_2-1}_o=\beta^{l_2-1}+\delta, \\
&\beta^{l_2}_o=\beta^{l_2}-\delta, \\
&\beta^l_o=\beta^l,~\mbox{else}.
\end{array} \right.
\end{align}
Specially, we set $\beta^{l_2-1}_o=\beta^{l_2-1}+2\delta$ when $l_2-l_1=2$ instead. The cache capacity constraint is satisfied with equality because
\begin{equation*}
\sum_l l \beta^l_o=\sum_l l \beta^l-l_{1}\delta+(l_{1}+1)\delta+(l_{2}-1)\delta-l_{2}\delta=t,
\end{equation*}
where $\sum_l l \beta^l=t$ was proved in the previous step.
To proceed, one needs to derive the change of the objective value from $\boldsymbol{\beta}$ to $\boldsymbol{\beta_o}$. The aim is to prove that $\boldsymbol{\beta_o}$ yields an improved objective value. As mentioned in the first step, the coefficients of the objective function have a positive second derivative. Hence, we obtain $d^{l_2}>d^{l_2-1}\ge d^{l_1+1}>d^{l_1}$ since $l_2>l_2-1\ge l_1+1>l_1$. Again, we use $\Delta$ here to denote the change in the objective function, given by
\begin{align}
\Delta&=-c^{l_1}\delta+c^{l_1+1}\delta+c^{l_2-1}\delta-c^{l_2}\delta \\
&= \delta \left [{(c^{l_{1}+1}-c^{l_{1}}) - (c^{l_{2}} - c^{l_{2}-1}) }\right]
= \delta (d^{l_1}- d^{l_2-1}) < 0.
\end{align}
That ends the proof that $\boldsymbol{\beta_o}$ is a better feasible solution than $\boldsymbol{\beta}$, which implies that the optimal solution has either only one non-zero variable or two non-zero variables with consecutive indices. This remark largely simplifies the objective function and the constraints by reducing the number of variables to be optimized from $K+1$ to at most two (plus the index $l$). Recalling the conclusion in the previous step, problem~\eqref{opt1} becomes
\begin{subequations}\label{opt3}
\begin{align}
\min_{\{l,\beta^l, \beta^{l+1}\}} ~~~~& c^l\beta^l+c^{l+1}\beta^{l+1} \label{R3_0} \\
{\rm s.t.} ~~~~& \beta^l + \beta^{l+1} = 1, \label{R3_1} \\
~~~~& l \beta^l + (l+1) \beta^{l+1}=t, \label{R3_2} \\
~~~~& 0 \le \beta^l \le \beta^{l+1} \le 1, \label{R3_3} \\
~~~~& l \in [0:K-1]. \label{R3_4}
\end{align}
\end{subequations}
Problem \eqref{opt3} has three variables to be optimized with four constraints, two of which are equalities. Jointly considering the constraints \eqref{R3_1}, \eqref{R3_3}, and \eqref{R3_4}, which imply non-negative integers $l$ and $l+1$ as well as proper fractions $\beta^l$ and $\beta^{l+1}$, constraint \eqref{R3_2} can thus be interpreted as
\begin{align}
&l\beta^l + (l+1) \beta^{l+1}=t, \label{tandl0}\\
&l \le l \beta^l + (l+1) \beta^{l+1} \le l+1, \label{tandl1}\\
&l \le t \le l+1. \label{tandl2}
\end{align}
According to \eqref{tandl2}, $t$ must fall within the interval spanned by $l$ and $l+1$. If $t \in \mathbb{Z}$, which is a common assumption for coded caching, one can easily derive that $t=l$ or $t=l+1$. In either case, the optimal solution to problem \eqref{opt1} has only one non-zero variable, at the index $l=t$. For instance, when $t=l+1$, it follows that $\beta^{l+1}=\beta^{t}=1$. We then summarize the optimal solution as $\beta^t=1$ and $\beta^l=0, \text{when}~l \ne t$, which agrees with \eqref{sol2} in Theorem~\ref{theorem-opt}. If $t \notin \mathbb{Z}$, the relationship among $\{l, t, l+1\}$ is uniquely determined as $l=\floor*{t}$ and $l+1=\ceil*{t}$. Letting $\beta^{l}=\eta$, we obtain from \eqref{tandl0}-\eqref{tandl2}
\begin{align}
t&=\floor*{t} \eta+\ceil*{t} (1-\eta) \notag \\
&=(\floor*{t}-\ceil*{t}) \eta +\ceil*{t}=-\eta +\ceil*{t}.
\end{align}
Hence, it holds true that $\eta=\ceil*{t}-t$. The optimal solution to problem \eqref{opt1} when $t \notin \mathbb{Z}$ is given by $\beta^{\floor*{t}}=\eta,\beta^{\ceil*{t}}=1-\eta$ with $\eta=\ceil*{t}-t$, and $\beta^l=0, l \in [0:K] \setminus \{\floor*{t}, \ceil*{t}\}$. That ends the proof of the optimal solution to the coded caching optimization problem \eqref{opt1}.
\end{proof}
\begin{lemma}\label{lemma-step4}
There is an exceptional case, $I=K-1$, in which the second derivative equals $0$. This invalidates the proof of Lemma~\ref{lemma-step3}, which strictly requires a positive second derivative. The optimal solution in this case is no longer unique, but the solution for the general case still applies. For consistency, the same solution is claimed as the optimal one when $I=K-1$.
\end{lemma}
\begin{proof}
As mentioned previously, when $I=K-1$, the second derivative of the coefficients in the objective equals $0$, which affects the proof of the structure of the optimal solution in Lemma~\ref{lemma-step3}. To this end, we first derive the objective function when $I=K-1$ as
\begin{equation*}
\sum_{l=0}^{K-1} \frac{K-l}{l+1} \beta^l-\sum_{l=0}^{K-2} {K-1 \choose l+1}/{K \choose l}\beta^l=\sum_{l=0}^{K-1} \frac{(K-l)}{K} \beta^l.
\end{equation*}
Based on Lemma~\ref{lemma-step2}, the constraints \eqref{R1_1}-\eqref{R1_3} simplify to
\begin{align}\label{step4-cons}
\left\{\begin{array}{cl}
\sum_{l=0}^K \beta^l=1, &
\sum_{l=1}^K l \beta^l = t, \\
0 \le \beta^l,&l \in [0:K].
\end{array}\right.
\end{align}
Substituting \eqref{step4-cons} into the objective function, the latter becomes
\begin{equation}
\label{objK-1}
\sum_{l=0}^{K-1} \frac{(K-l)}{K} \beta^l=1-\frac{t}{K}.
\end{equation}
Apparently, every feasible point yields the same objective value in this case. That is to say, the optimal objective value is $1-t/K$, and the optimal solution can be any $\boldsymbol{\beta}$ satisfying \eqref{step4-cons}. Without loss of generality, we state that the optimal solution when $I=K-1$ is the same as the one given in \eqref{sol2} and \eqref{sol3}. \end{proof}
\begin{theorem}\label{theorem-opt}
The optimal solution to problem \eqref{opt1} is the same as the optimal solution to the file subpacketization optimization with fixed cardinality, which is given by
\begin{align} \label{sol2}
\beta^l=\left\{\begin{array}{cl}
1, &\mbox{if}~~ l=t, \\
0, & \mbox{else},
\end{array}\right.
\end{align}
assuming $t=KM/N$ is an integer. When $t=KM/N$ is a non-integer, there are two adjacent nonzero elements in $\{\beta^l\}$ around $t$. Letting $\eta=\ceil*{t}-t$, the solution becomes
\begin{align} \label{sol3}
\beta^l=\left\{\begin{array}{cl}
\eta, &\mbox{if}~~ l=\floor*{t}, \\
1-\eta, &\mbox{if}~~ l=\ceil*{t},\\
0, & \mbox{else}.
\end{array}\right.
\end{align}
\end{theorem}
\begin{proof}
This follows directly from the four steps given in Lemma~\ref{lemma-step1}-Lemma~\ref{lemma-step4}.
\end{proof}
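As a complement to Theorem~\ref{theorem-opt}, the closed-form solution is simple enough to state in a few lines of Python; the helper below is an illustrative sketch (the name \texttt{beta\_opt} is ours) covering integer and non-integer $t$ alike.

\begin{verbatim}
import math

def beta_opt(K, M, N):
    # optimal weights from (sol2)-(sol3), with t = K*M/N
    t = K * M / N
    beta = [0.0] * (K + 1)
    if t == int(t):                      # integer t: a single non-zero entry
        beta[int(t)] = 1.0
    else:                                # non-integer t: two adjacent entries
        eta = math.ceil(t) - t
        beta[math.floor(t)] = eta
        beta[math.ceil(t)] = 1.0 - eta
    return beta

# example: K=50, M=5, N=100 gives t=2.5, i.e. beta^2 = beta^3 = 0.5
assert beta_opt(50, 5, 100)[2] == beta_opt(50, 5, 100)[3] == 0.5
\end{verbatim}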
\begin{corollary}
Based on Theorem \ref{theorem-inactive-t} and Theorem \ref{theorem-opt}, Alg.~\ref{alg_cen} is optimal for integer $KM/N$.
\end{corollary}
\section{Decentralized coded caching in presence of inactive users}\label{method4}
In practical wireless networks, one can rarely expect the server to have full central coordination among the users, which has motivated the exploration of a decentralized pattern for content placement that still enjoys the global caching gain of coded multicasting in delivery.
Decentralized coded caching utilizes random content placement without coordination among users, so that it is easy to implement with acceptable performance degradation relative to the centralized scheme \cite{MaddahAli2014} when user inactivity is ignored. Decentralized cache placement is applicable in our setting, where there is uncertainty from user inactivity, i.e. the number and identity of inactive users are unknown in the content placement phase. Compared to the centralized placement discussed previously, here each user fills its local storage independently and randomly, so that the user inactivity information is unnecessary.
Decentralized coded caching proceeds in two phases: a random content placement phase and a content delivery phase. In the placement phase, each user fills its cache with $\frac{MF}{N}$ bits of each file $n$ independently at random; hence, no file splitting is needed. Here, we utilize the notation ${\cal S} \subset [K]$ to denote a set of users with cardinality $|{\cal S}|=s$. $V_{k, {\cal S}}$ denotes the bits of the file requested by user $k$ (i.e. $d_k$) that are cached exclusively by the users in ${\cal S}$. That is to say, the bits in $V_{k, {\cal S}}$ are cached by every user in ${\cal S}$ and are missing at the users outside ${\cal S}$. Each bit is chosen uniformly at random with probability $q \triangleq M/N \in (0,1)$. The probability that a bit is cached exactly at a given user set ${\cal S}$ with cardinality $s$ is $Q^s=q^s(1-q)^{K-s}$, so the number of bits cached at the users in a given ${\cal S}$ is $F Q^s=F q^s(1-q)^{K-s}$. Accordingly, for any user $k \in {\cal S}$, the expected number of missing bits of file $d_k$ related to a given ${\cal S}$, i.e. $|V_{k, {\cal S} \setminus \{k\} }|$, is $F Q^{s-1}=F q^{s-1} (1-q)^{K-s+1}$. The benefit of focusing on the set ${\cal S} \setminus \{k\}$ is that it enables us to use $V_{k, {\cal S} \setminus \{k\}}$ to identify the bits required by user $k$ corresponding to ${\cal S}$, and at the same time use $V_{j, {\cal S} \setminus \{j\}},~j \in {\cal S}, j \ne k$, to describe the bits cached at user $k$. For any possible set ${\cal S}$ with cardinality $s$, the server needs to send the maximum number of missing bits over the users in ${\cal S}$, given by
\begin{equation}
\max_{k \in {\cal S}} |V_{k, {\cal S} \setminus \{k\} }|=F q^{s-1} (1-q)^{K-s+1}.
\end{equation}
In total, there are ${K \choose s}$ distinct sets ${\cal S}$ of a given cardinality $s$. In delivery, one sums over all the distinct sets ${\cal S}$ of a given cardinality $s$ in a loop, and then lets the cardinality $s$ vary from $1$ to $K$.
Without user inactivity, the transmitted packet with user set ${\cal S}$ is $\oplus_{k \in {\cal S}}~V_{k, {\cal S} \setminus \{k\}}$. The backhaul load is obtained as
\begin{equation}\label{Rdecen}
R=\sum_{s=1}^{K} {K \choose s} q^{s-1} (1-q)^{K-s+1}
=\frac{1-q}{q}\big(1\!-\!(1-q)^K\big).
\end{equation}
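As a quick sanity check on \eqref{Rdecen}, the direct sum over the cardinality $s$ can be compared numerically with the closed form; the short script below is illustrative only.

\begin{verbatim}
from math import comb

K, q = 50, 0.2   # q = M/N
direct = sum(comb(K, s) * q ** (s - 1) * (1 - q) ** (K - s + 1)
             for s in range(1, K + 1))
closed = (1 - q) / q * (1 - (1 - q) ** K)
assert abs(direct - closed) < 1e-9
\end{verbatim}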
Considering user inactivity, the server determines the bits to be transmitted utilizing the inactivity information $(I, {\cal I})$. Similar to
(\ref{R_inact-obj}), only the active users need to be served. Consequently, there are two types of ${\cal S}$ in terms of the cardinality $s$. First, when the cardinality $s \in [I+1:K]$, which guarantees at least one active user in ${\cal S}$, the transmitted bits with regard to ${\cal S}$ are $\oplus_{k \in {\cal S}, k \notin {\cal I}}~V_{k, {\cal S} \setminus \{k\}}$. In contrast, when $s \in [1:I]$, it may happen that all the users in ${\cal S}$ are inactive, so that there is no need to consider such a set ${\cal S}$; the transmitted bits can be written as $\oplus_{k \in {\cal S}, k \notin {\cal I}}~V_{k, {\cal S} \setminus \{k\}}$ for ${\cal S} \not\subset {\cal I}$. The procedure of decentralized coded caching in the presence of user inactivity is given in Alg.~\ref{alg_dec} with delivery policy I.
\begin{algorithm}
\caption{Decentralized Coded Caching in Presence of User Inactivity}
\label{alg_dec}
\begin{algorithmic}[1]
\STATE \textbf{procedure} {\large P}LACEMENT
\STATE ~~~~\textbf{for} $k \in [K], n \in [N]$ do\\
\STATE~~~~~~~user $k$ independently caches $\frac{MF}{N}$ bits of file $n$, \\
~~~~~~~chosen uniformly at random\\
\STATE~~~~\textbf{end for}
\STATE \textbf{end procedure} \\
Users make requests $\boldsymbol{d}$ given the number and identity of the inactive users $(I, {\cal I})$; \\
The number of active users $J=K-I$
\STATE \textbf{procedure} {\large D}ELIVERY I
\STATE ~~~~\textbf{for} $s=K, K-1,\dots, I+1$ do\\
\STATE~~~~~~~\textbf{for} ${\cal S} \subset [K]$ with $|{\cal S}|=s$ do\\
\STATE~~~~~~~~~~~server sends $\oplus_{k \in {\cal S}, k \notin {\cal I}}~V_{k, {\cal S} \setminus \{k\}}$
\STATE~~~~~~~\textbf{end for}
\STATE ~~~~\textbf{end for}
\STATE ~~~~\textbf{for} $s=I, I-1,\dots, 1$ do\\
\STATE~~~~~~~\textbf{for} ${\cal S} \subset [K]$ with $|{\cal S}|=s$ do\\
\STATE~~~~~~~~~~~server sends $\oplus_{k \in {\cal S}, k \notin {\cal I}}~V_{k, {\cal S} \setminus \{k\}, {\cal S} \not\subset {\cal I}}$
\STATE~~~~~~~\textbf{end for}
\STATE ~~~~\textbf{end for}
\STATE \textbf{end procedure}
\STATE \textbf{procedure} {\large D}ELIVERY II
\STATE ~~~~\textbf{for} $s=J, J-1,\dots, 1$ do\\
\STATE~~~~~~~\textbf{for} ${\cal S} \subset [K]\setminus {\cal I}$ with $|{\cal S}|=s$ do\\
\STATE~~~~~~~~~~~server sends $\oplus_{k \in {\cal S}}~V_{k, {\cal S} \setminus \{k\}}$
\STATE~~~~~~~\textbf{end for}
\STATE ~~~~\textbf{end for}
\STATE \textbf{end procedure}
\end{algorithmic}
\end{algorithm}
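For concreteness, a compact Python rendition of the Delivery I loop in Alg.~\ref{alg_dec} is sketched below; fragments are represented as Python integers so that $\oplus$ is the bitwise XOR, users are 0-based for convenience, and the container \texttt{V} as well as all names are illustrative assumptions rather than part of the algorithm specification.

\begin{verbatim}
from itertools import combinations
from functools import reduce

def delivery_one(K, inactive, V):
    # V[(k, S_minus_k)]: bits of file d_k cached exclusively by S\{k},
    # zero-padded to equal length and packed into a Python int.
    packets = []
    for s in range(K, 0, -1):
        for S in combinations(range(K), s):
            active = [k for k in S if k not in inactive]
            if not active:        # S entirely inactive (S subset of I): skip
                continue
            payload = reduce(
                lambda a, b: a ^ b,
                (V[(k, tuple(u for u in S if u != k))] for k in active))
            packets.append((S, payload))
    return packets
\end{verbatim}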
The backhaul load considering user inactivity becomes
\begin{align}\label{RdecenInact}
R&=\sum_{s=I+1}^{K} {K \choose s} q^{s-1} (1-q)^{K-s+1}
+\sum_{s=1}^{I} \left[{K \choose s}-{I \choose s}\right] q^{s-1} (1-q)^{K-s+1} \notag \\
&=\frac{1-q}{q}\big(1-(1-q)^{K-I}\big).
\end{align}
The connection between the backhaul loads with user inactivity $(I, \cal{I})$ and without user inactivity can be derived as
\begin{equation}\label{RdecenConnect}
R(I, K)=R(0,K-I)<R(0,K).
\end{equation}
Here $R(a, b)$ is the load with $a$ inactive users and $b$ total users.
Equation \eqref{RdecenConnect} implies that the scenario with $I$ inactive users out of $K$ users in total is equivalent to the case with originally $J=K-I$ users, all of which are active. This remark agrees with the key idea of decentralized caching, where the content placement operates at each user separately without coordination. Consequently, a more direct way is to replace $(K, [K])$ with the number and identities of the active users $(J, [K] \setminus {\cal I})$ when defining the user set ${\cal S}$ in delivery. In this case, there is no need to worry about inactive users. The alternative delivery policy is given as Delivery II in Alg.~\ref{alg_dec}.
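The equivalence \eqref{RdecenConnect} can also be checked by direct simulation of the random placement. The Monte Carlo sketch below is an illustrative helper of ours (small $K$ for tractability; inactive users' caches are ignored, as in Delivery II) and reproduces \eqref{RdecenInact} up to finite-file-size effects.

\begin{verbatim}
from collections import defaultdict
import numpy as np

def decentralized_load_mc(K, I, q, F=20000, seed=0):
    # users 0..J-1 are taken as active w.l.o.g.; a missed bit of user k's
    # file is indexed by the set T of active users caching it, and the
    # packet for S = T + {k} costs max_{k in S} |V_{k,S\{k}}| bits.
    rng = np.random.default_rng(seed)
    J = K - I
    sizes = defaultdict(lambda: defaultdict(int))
    for k in range(J):
        cached = rng.random((F, J)) < q      # placement for file d_k
        for bit in range(F):
            if cached[bit, k]:
                continue                     # user k already holds this bit
            T = frozenset(np.flatnonzero(cached[bit]))
            sizes[T | {k}][k] += 1
    return sum(max(v.values()) for v in sizes.values()) / F

K, I, q = 10, 4, 0.3
exact = (1 - q) / q * (1 - (1 - q) ** (K - I))
print(decentralized_load_mc(K, I, q), exact)   # close for large F
\end{verbatim}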
\section{Comparison of centralized and decentralized cache placement against user inactivity}
\subsection{Centralized versus Decentralized Coded Caching}
Here we briefly compare the backhaul loads of the decentralized method in Alg.~\ref{alg_dec} and the centralized method in Alg.~\ref{alg_cen}, in terms of the number of inactive users $I$.
First, it can be concluded from the key ideas and precise backhaul loads of the centralized \eqref{R_inact-obj} and decentralized \eqref{RdecenInact} versions that:
\begin{itemize}
\item The backhaul loads in both manners decrease when the number of inactive users $I$ rises.
\item When $I \to 0$, the backhaul loads for both manners approximate the backhaul loads without user inactivity.
\item On the contrary, when $I \to K$, there are few users staying active, so that both of the backhaul loads approach $0$.
\item The gain of the centralized manner over the decentralized manner results from the coordination among active users, which increases the opportunities for coded multicasting; hence the gain decreases with respect to $I$. In particular, when there is only one active user, the two manners coincide.
\end{itemize}
While the qualitative discussion is presented above, the gain of centralized coded caching over the decentralized manner against user inactivity, defined as $G=R_d-R_c$, is investigated quantitatively. Here the subscripts $c$ and $d$ denote the centralized and decentralized manners, respectively. We obtain
\begin{align*}
G(I)=\left\{\begin{array}{cl}
\frac{N-M}{M}\big(1-(1-\frac{M}{N})^{K-I}\big)-\frac{K(1-M/N)}{1+KM/N}, & \mbox{if}~t+1> I,\\
\frac{N-M}{M}\big(1-(1-\frac{M}{N})^{K-I}\big)-\frac{1}{{K \choose t}} \left[{K \choose t+1}-{I \choose t+1}\right], & \mbox{if}~t+1\le I.
\end{array}\right.
\end{align*}
It holds true that $G(K)=R_d(K)=R_c(K)=0$. Now we prove that $G(I)$ decreases with respect to $I$, i.e. $G(I) \ge G(I+1)$, such that the maximum gain is given by $\max_I G(I)=G(1)$. The first derivative of $G$ is
\begin{equation}\label{Gcd}
\Delta_G(I)=G(I+1)-G(I)=\Delta_{R_d}(I)-\Delta_{R_c}(I),
\end{equation}
where $\Delta_{R_d}(I)$ and $\Delta_{R_c}(I)$ are defined similar to $\Delta_G(I)$.
As $R_c$ in \eqref{R_inact-obj} is piece-wise, the proof is carried out for the two intervals separately.
When $I \in [t+1:K]$, we derive the differences as
\begin{align*}
\Delta_{R_d}(I)&=-\left(1-\frac{M}{N}\right)^{K-I}, \\
\Delta_{R_c}(I)&=\frac{1}{{K \choose t}} \left[{I \choose t+1}-{I+1 \choose t+1} \right]
\overset{(a)}{=}\!-{I \choose t}/{K \choose t}\\
&\overset{(b)}{\ge}-\left(\frac{K-t}{K}\right)^{K-I}
\ge \Delta_{R_d}(I),
\end{align*}
where $(a)$ is based on the identity ${n+1 \choose k}={n \choose k}+{n \choose k-1}$ for positive integers $n, k$, while $(b)$ follows since $\frac{K-t}{K}$ decreases in terms of $K$. It has thus been verified that $\Delta_G(I) \le 0$ when $I \in [t+1:K]$; in particular, equality is achieved when $I=K-1$.
Moreover, when $I \in [1:t+1)$, $R_c=\frac{K(1-M/N)}{1+KM/N}$ is independent of the number of inactive users $I$, and hence $R_c$ remains constant, which means that $\Delta_{R_c}(I)=0$ in the considered interval. It is then apparent that $\Delta_G(I)=\Delta_{R_d}(I) < 0$.
Finally, we verify that $\Delta_G(I) \le 0$ for all possible $I$. The maximum gain of centralized versus decentralized coded caching against user inactivity is given by $\max_I G=G(1)$.
Consequently, we obtain that $0 \le G(I+1) \le G(I) \le G(1)$ for the considered interval $I \in [K-1]$, which can be observed in the simulation results in Subsection~\ref{sim3}.
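The monotonic behaviour of $G(I)$ established above is easy to confirm numerically; the short script below is an illustrative check (the helper name \texttt{gain} is ours) for the parameters used later in Subsection~\ref{sim3}.

\begin{verbatim}
from math import comb

def gain(K, M, N, I):
    # G(I) = R_d(I) - R_c(I), assuming integer t = K*M/N
    q, t = M / N, K * M // N
    Rd = (1 - q) / q * (1 - (1 - q) ** (K - I))
    if t + 1 > I:
        Rc = K * (1 - q) / (1 + K * q)
    else:
        Rc = (comb(K, t + 1) - comb(I, t + 1)) / comb(K, t)
    return Rd - Rc

vals = [gain(50, 10, 100, I) for I in range(1, 50)]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))  # G non-increasing
\end{verbatim}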
\subsection{Proposed Opt CC versus Ideal MAN Method}
In this subsection, we aim to provide some insight into how the proposed centralized method, referred to as \textit{optimization based coded caching (Opt CC)}, performs compared with the ideal MAN method, which assumes perfect user inactivity information in the content placement phase.
The gap between the backhaul load of Opt CC using \eqref{R_inact-obj} and the one derived using the ideal MAN method according to \eqref{R} is discussed here, which reflects the performance loss of Opt CC for lack of user inactivity information in the placement phase. We set $\tilde{G}=R_c-R_i$ with the backhaul load of the ideal MAN method given by
\begin{equation}\label{R_ideal}
R_i=\frac{1}{{J \choose \tilde{t}}} {J \choose \tilde{t}+1}
=\frac{(K-I)\left(1-M/N\right)}{1+(K-I)M/N},
\end{equation}
where the total number of users $K$ is replaced by the number of active users $J=K-I$ while $t$ is updated to $\tilde{t}=JM/N$.
Next, we analyze how $\tilde{G}$ varies with respect to $I$. $\tilde{G}$ is given by
\begin{align}\label{G1}
\tilde{G}=\left\{\begin{array}{cl}
\frac{K(1-M/N)}{1+KM/N}-\frac{(K-I)\left(1-M/N\right)}{1+(K-I)M/N}, &\mbox{if}~t+1> I, \\
\frac{1}{{K \choose t}} \left[{K \choose t+1}-{I \choose t+1}\right]-\frac{(K-I)\left(1-M/N\right)}{1+(K-I)M/N}, & \mbox{if}~t+1 \le I.
\end{array}\right.
\end{align}
\subsubsection{Performance gap analysis against $I$ when $I \in [1:t+1)$} \label{subsec-gap1}
\hfill
When $I \in [1:t+1)$, it is apparent that $\tilde{G}$ increases with respect to $I$. When $I \to 0$, it holds true that $\lim_{I \to 0} \tilde{G}=0$, which shows that Opt CC approximates the ideal MAN method under very low user inactivity. In general, we derive that $\tilde{G}(I+1) > \tilde{G}(I) >0$ for $I \in [1:t+1)$, together with $\lim_{I \to 0} \tilde{G}=0$. This observation agrees with the fact that the performance loss of Opt CC results from the lack of user inactivity information and therefore increases with respect to $I$. In this case, the maximum gap is $\tilde{G}(t)$, given by
\begin{equation}
\tilde{G}(t)=\frac{1}{1+t}\left(1-\frac{1}{1+t-tM/N}\right).
\end{equation}
\subsubsection{Performance gap analysis against $I$ when $I \in [t+1:K)$} \label{subsec-gap2}
\hfill
We first consider the extreme case, $\lim_{I \to K} \tilde{G}=0$. Moreover, we simplify the gap at the starting point $I=t+1$ as
\begin{align}\label{tildeGt1}
\tilde{G}(t+1)
= &\frac{1-M/N}{1-M/N+(K-t)M/N}-\frac{1}{{K \choose t}} \notag \\
= &\frac{1}{t+1}-\frac{(K-t)(K-t-1)\dots 2}{K(K-1)\dots(t+2)} \times \frac{1}{t+1} > 0.
\end{align}
To investigate the monotonicity of $\tilde{G}$ in this interval, we obtain the difference $\Delta_{\tilde{G}}=\tilde{G}(I+1)-\tilde{G}(I)$ as
\begin{equation*}
\Delta_{\tilde{G}}=-\underbrace{\frac{{I \choose t}}{{K \choose t}}}_{\Delta_{\tilde{G}_1}}+\underbrace{\frac{\left(1-M/N\right)}{\left(1+(K-I)M/N\right)\left(1+(K-I-1)M/N\right)}}_{\Delta_{\tilde{G}_2}}.
\end{equation*}
The first derivatives are obtained as
\begin{align*}
&\Delta_{\tilde{G}_1}'(I)=\Delta_{\tilde{G}_1}(I+1)-\Delta_{\tilde{G}_1}(I)=\frac{{I \choose t}}{{K \choose t}} \frac{t}{I+1-t} >0, \\
&\Delta_{\tilde{G}_2}'(I)=\Delta_{\tilde{G}_2}(I+1)-\Delta_{\tilde{G}_2}(I)~~~~~~~~~~~~~~~~~~~~~~~~~~\\
&=\frac{(1-M/N)(2M/N)}{(1+(K-I-2)M/N)(1+(K-I-1) M/N)(1+(K-I)M/N)},
\end{align*}
which states that both $\Delta_{\tilde{G}_1}$ and $\Delta_{\tilde{G}_2}$ increase with respect to $I$.\footnote{The notation $(*)$ in $\Delta_{\tilde{G}_i}(*)$ specifies the value of $\Delta_{\tilde{G}_i}$ at $I=*$.}
Moreover, the corresponding second differences satisfy $\Delta_{\tilde{G}_1}''>0$ and $\Delta_{\tilde{G}_2}''>0$, because both $\Delta_{\tilde{G}_1}'$ and $\Delta_{\tilde{G}_2}'$ monotonically increase with respect to $I$.
Now we discuss the ending and starting points of $\Delta_{\tilde{G}}$. The details of the ending point at $I=K-1$ can be obtained as
\begin{align*}
\Delta_{\tilde{G}_1}(K-1)&=1-M/N, \\
\Delta_{\tilde{G}_2}(K-1)&=\frac{1-M/N}{1+M/N} < \Delta_{\tilde{G}_1}(K-1),
\end{align*}
with the corresponding first derivative given by
\begin{align*}
\Delta_{\tilde{G}_1}'(K-1)&=\frac{{K-1 \choose t}}{{K \choose t}} \frac{t}{K-t}=\frac{M}{N},~~~\\
\Delta_{\tilde{G}_2}'(K-1)
&=\frac{M}{(N+M)/2} >\Delta_{\tilde{G}_1}'(K-1).~~
\end{align*}
Similarly, we discuss the starting point $I=t+1$, with the conclusion presented in Lemma~\ref{deltaGstart}. Note that the sign of $\Delta_{\tilde{G}}(t+1)$ is not fixed; it depends on the values of $t$ and $K$.
\begin{lemma}\label{deltaGstart}
Define the starting point as $\Delta_{\tilde{G}}(t+1)=\Delta_{\tilde{G}_2}(t+1)-\Delta_{\tilde{G}_1}(t+1)$, evaluated at $I=t+1$ with $t \in [1:K-2]$. It holds true that
\begin{align}\label{deltaGtplus1}
\Delta_{\tilde{G}}(t+1) \left\{
\begin{array}{cl}
> 0, & \mbox{if}~~t \in [1:K-3], K \in [9:+\infty],\\
> 0, & \mbox{if}~~t \in [1:K-4], K \in [6:8],\\
< 0, & \mbox{if}~~t=K-3,~K \in [6:8], \\
< 0, & \mbox{if}~~t \in [1:K-3], K \in [4:5], \\
< 0, & \mbox{if}~~t=K-2, K \in [3:+\infty].
\end{array} \right.
\end{align}
\end{lemma}
\begin{proof}
See Appendix D.
\end{proof}
It is now proved that both $\Delta_{\tilde{G}_1}$ and $\Delta_{\tilde{G}_2}$ increase in terms of $I$ with increasing growth rates. According to the sign of the starting point $\Delta_{\tilde{G}}(t+1)$, there are two cases regarding the impact of $I$ on $\Delta_{\tilde{G}}$ based on the derivatives:
\begin{itemize}
\item Case I: When $\Delta_{\tilde{G}_2}(t+1) > \Delta_{\tilde{G}_1}(t+1)$, there is an intersection point of $\Delta_{\tilde{G}_1}$ and $\Delta_{\tilde{G}_2}$ in the interval. Hence, $\tilde{G}$ goes up from $I=t+1$ until reaching a peak, and then falls to $0$ at $I=K$. Exactly one peak point is guaranteed, at some $I^*$; meanwhile, $\Delta_{\tilde{G}} \ge 0$ for $I \in [t+1:I^*]$ and $\Delta_{\tilde{G}} <0$ for $I \in (I^*:K)$.
\item Case II: When $\Delta_{\tilde{G}_2}(t+1) \le \Delta_{\tilde{G}_1}(t+1)$, it holds true that $\Delta_{\tilde{G}}(t+1)\le 0$. Considering the ending point and the derivatives, it follows that $\Delta_{\tilde{G}} <0$ throughout the interval $[t+1:K-1]$, meaning that $\tilde{G}$ decreases with respect to $I$ in this interval. The maximum value of $\tilde{G}$ in the interval is $\tilde{G}(t+1)$, given in \eqref{tildeGt1}.
\end{itemize}
\subsubsection{Summary of performance gap $\tilde{G}$ against $I$}\hfill
Now we summarize the performance gap between $R_c$ and $R_i$ against $I$ when $I \in [K-1]$ in Theorem~\ref{theorem-gap}.
\begin{theorem}\label{theorem-gap}
The value of $\tilde{G}$ defined in \eqref{G1} varies with different values of $I$ in the following manner:
\begin{itemize}
\item If $I \in [1:t+1)$, $\tilde{G}$ increases in terms of $I$ as mentioned previously. In particular, $\lim_{I \to 0} \tilde{G}=0$ which agrees with the fact that Opt CC performs the same as the ideal MAN method without user inactivity.
\item If $I \in [t+1:K)$, the starting point satisfies $\tilde{G}(t+1) >0$ and the ending point follows $\lim_{I \to K} \tilde{G}=0$.
\item Meanwhile, $\tilde{G}$ may continue rising from $I=t+1$ until reaching a peak at $I=I^*$, and then decrease to $0$ towards $I=K$; exactly one peak point $I^*$ is guaranteed over the whole interval. Alternatively, it may start to decrease with respect to $I$ from $I=t+1$, with the maximum value being $\tilde{G}(t+1)$.
\end{itemize}
\end{theorem}
\begin{proof}
Theorem~\ref{theorem-gap} follows from the discussion in Subsection~\ref{subsec-gap1} and Subsection~\ref{subsec-gap2}.
\end{proof}
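Theorem~\ref{theorem-gap} can likewise be explored numerically. The sketch below (with illustrative helper names of ours) evaluates $\tilde{G}(I)$ and locates the single peak; for $K=50$, $M=10$, $N=100$ it lands at $I^*\approx 35$, matching the simulation in Section~\ref{sim}.

\begin{verbatim}
from math import comb

def gap(K, M, N, I):
    # tilde G(I) = R_c(I) - R_i(I), eq. (G1), assuming integer t = K*M/N
    q, t = M / N, K * M // N
    if t + 1 > I:
        Rc = K * (1 - q) / (1 + K * q)
    else:
        Rc = (comb(K, t + 1) - comb(I, t + 1)) / comb(K, t)
    Ri = (K - I) * (1 - q) / (1 + (K - I) * q)
    return Rc - Ri

gaps = [gap(50, 10, 100, I) for I in range(1, 50)]
I_star = 1 + max(range(len(gaps)), key=gaps.__getitem__)
print(I_star, max(gaps))   # single peak, around I* = 35
\end{verbatim}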
\section{Simulations}\label{sim}
\begin{figure}[htbp]
\centering
\includegraphics[width=13cm]{R_inactive_optandmanv2-eps-converted-to.pdf}
\caption{Backhaul load of coded caching network with user inactivity}\label{figsim_1}
\end{figure}
\subsection{The Proposed Coded Caching Method with Fixed Cardinality}\label{sim1}
A one-server cache-enabled network is considered. There are $K=50$ users and $N=100$ files with equal popularity and size. Fig.~\ref{figsim_1} presents the impact of the cache size $M$ and the number of active users $J$ on the worst-case backhaul load. The performance of the proposed optimization based coded caching with fixed cardinality is compared to the unicast caching method and to the ideal MAN method~\cite{M.MaddahAliMay2014}, where perfect user inactivity information is assumed in the cache placement phase.
As can be seen in Fig.~\ref{figsim_1}, the proposed scheme outperforms the unicast caching scheme while providing performance comparable to the ideal MAN method against user inactivity. In general, the backhaul load decreases with the increase of the cache size $M$ and rises with the increase of the number of active users $J$. Increasing $M$, the backhaul load decreases dramatically, and the decrease is more pronounced with more active users, i.e. larger $J$. When $M=20$, the proposed method is approximately the same as the MAN method in the ideal case. Increasing $J$ causes some increase in the backhaul load, but the increase is limited for the proposed method, e.g. less than $10$, and the gap becomes narrower with increasing cache size $M$. That is to say, the proposed method provides an acceptable solution to user inactivity. In particular, when $J=K=50$, the proposed method coincides with the MAN method, as all the users are active. Moreover, the backhaul load tends to be stable when $J$ tends to $K$, which agrees with \eqref{R_inact-obj}, since the backhaul load is independent of the number of inactive users $I$ when $I<t+1$, i.e. $J>K-t-1$. For instance, when $M=8$, the solid curve in orange becomes stable after reaching $J=43$.
\subsection{Optimal Coded Caching Method with multiple cardinalities}\label{sim2}
To investigate the optimal coded caching method in the presence of user inactivity, the performance of coded caching using fixed cardinality and multiple cardinalities in file subpacketization is compared. Results obtained using the optimization solver CVX are compared to the closed-form optimal solution~\eqref{sol2}-\eqref{sol3}.
\begin{figure}[htb]
\centering
\includegraphics[width=13cm]{R_optimzation-eps-converted-to.pdf}
\caption{Performance comparison between coded caching with fixed $l$ and optimal scheme with multiple cardinalities against user inactivity.}\label{figsim_2}
\end{figure}
In Fig.~\ref{figsim_2} we consider a cache-enabled network with $K=50$, $N=100$, and $M \in \{5, 10, 20\}$, where the number of inactive users varies within $[0:K)$, to compare the performance of the optimal solution for coded caching with multiple cardinalities $l \in [0:K]$ and the one with fixed $l=t$.
The simulation confirms the closed-form solution. The backhaul load for the proposed caching scheme with fixed cardinality is always the same as that of the optimal solution with multiple $l$, for all considered numbers of inactive users and cache sizes.
\subsection{Centralized Method versus Decentralized Method}\label{sim3}
\begin{figure}[htb]
\centering
\includegraphics[width=13cm]{RcenDecenCom-eps-converted-to.pdf}
\caption{Performance comparison between coded caching methods with different numbers of inactive users $I$: centralized versus decentralized ($M=10, K=50$).}\label{figsim_3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=13cm]{RcenDecenComMv2-eps-converted-to.pdf}
\caption{Performance comparison between coded caching methods with different normalized cache sizes $M$: centralized versus decentralized ($I=20, K=50$).}\label{figsim_4}
\end{figure}
In this section, we compare the performance of the centralized and decentralized coded caching methods. The performance is measured in terms of the worst-case backhaul load $R$, and the impacts of the number of inactive users $I$ and the cache size $M$ are investigated. In Fig.~\ref{figsim_3} and Fig.~\ref{figsim_4}, we have $K=50, N=100$.
In Fig.~\ref{figsim_3} we fix $M=10$ and vary the number of inactive users $I\in [K-1]$, while in Fig.~\ref{figsim_4} we fix the number of inactive users at $I=20$ and consider different cache sizes $M \in [N/4]$.
Five methods are compared. The centralized methods consist of: the MAN method without user inactivity, assuming $J=K=50$, with backhaul load calculated using \eqref{R}; the proposed optimization based coded caching (Opt CC) given in Alg.~\ref{alg_cen}, using \eqref{R_inact-obj}; as well as the ideal MAN method in \eqref{R_ideal} assuming perfect user inactivity information in placement phase. The decentralized methods consist of: a decentralized caching without user inactivity $J=K=50$ based on \eqref{Rdecen} and decentralized caching with user inactivity following Alg.~\ref{alg_dec} using \eqref{RdecenInact}.
As can be seen in Fig.~\ref{figsim_3}, in this scenario, centralized coded caching provides at most around $17\%$ gain over decentralized methods. Considering the impacts of the number of inactive users $I$, the proposed Opt CC outperforms the decentralized manner with user inactivity, and it is closer to the ideal centralized coded caching method. Moreover, the gain of Opt CC over the decentralized manner with user inactivity decreases from around $17\%$ to $0\%$ when the number of inactive users $I$ rises from $I=1$ to $K-1$.
The advantage of the centralized method results from coordination among users, which decreases when there are fewer active users to be served.
When approaching the minimum of $J=1$ active user, the performances of the centralized and decentralized methods gradually converge. The gap between Opt CC and the ideal MAN method in Fig.~\ref{figsim_3} agrees with the first case in Section \ref{subsec-gap2}; the gap $\tilde{G}$ grows with increasing $I$
until peaking at $I^* \approx 35$, and then decreases to $0$ at $I \to K$.
Fig.~\ref{figsim_4} shows the dependence of backhaul performance on
the cache size $M$, with $I=20$. For all schemes of interest, the load decreases with increasing storage space as expected. Moreover, the rate of decrease shrinks with increasing cache size. Centralized Opt CC outperforms the decentralized methods with or without user inactivity, and the conventional MAN method while approximating the ideal MAN method.
\section{Conclusions}
In this paper, we studied coded caching strategies for one-server cache-enabled networks in the presence of inactive users. Based on the classic file subpacketization and coded multicast strategy, a coded caching method with a fixed cardinality of the fragment label set has been proposed, and the optimality of the selected cardinality $t=KM/N$, known as the cache replication parameter used in the MAN method without user inactivity, has been proved. The scheme is extended to a scenario where multiple cardinalities can be used instead of a fixed one. The weights for the different types of fragments, labeled with different cardinalities, are optimized in order to minimize the worst-case backhaul load. The optimal solution turns out to be the same as the one derived from the proposed caching scheme with fixed cardinality. Besides centralized coded caching, a decentralized method is also discussed. We then compared the backhaul load as a function of the number of inactive users $I$ for the identified optimal centralized method, the decentralized method, and the ideal MAN method assuming perfect knowledge of user inactivity. Numerical results show that the optimization based centralized scheme outperforms the decentralized scheme in the presence of user inactivity, and approximates the ideal MAN method at the same time.
\section*{Appendix}
\subsection{Proof of Lemma~\ref{lemma-optFix-obj}}
To proceed, we compute the ratio $R(l+1)/R(l)$, with $1\leq l \leq I-2$. By expanding the binomial coefficients we get
\begin{eqnarray}
\frac{R(l+1)}{R(l)} &=& \frac{\big[{K \choose l+2}-{I \choose l+2}\big]}{\big[{K \choose l+1}-{I \choose l+1}\big]} \frac{{K \choose l}} {{K \choose l+1}} \cr
&=& \frac{ A }{ A + B - C } \nonumber
\end{eqnarray}
where
\begin{eqnarray}
A &=& (l+1) \left(\frac{K!}{(K-l-2)!} - \frac{I!}{(I-l-2)!}\right)\cr
B &=& \frac{(K+1)!}{(K-l-1)!} \cr
C &=& (K+1 +(K-I)(l+1))\frac{I!}{(I-l-1)!}\,.\nonumber
\end{eqnarray}
Because $K \ge I$, we have $A \ge 0$, as well as $B>0$ and $C>0$, in the domain of interest. It is thus sufficient to prove that $B>C$ for all $l$ in the domain. For this, we first use the fact that $(K-m)/(I-m) \geq K/I$ for non-negative $m$ and $K\geq I$ to lower bound the $l+1$ smallest factors in the $(l+2)$-fold product in $B$, obtaining
\begin{equation}
B \geq (K+1)\left(\frac{K}{I}\right)^{l+1} \frac{I!}{(I-l-1)!}
\end{equation}
Writing $K/I = 1 + (K-I)/I$, we can expand $(K/I)^{l+1}$ using the binomial expansion. When $K>I$, all terms in the expansion are positive. Comparing the first two terms of this expansion with $C$ directly shows that $B>C$ holds. This completes the proof. \hfill
\subsection{Proof of Lemma~\ref{lemma-step1}}\label{proof-step1}
The objective function \eqref{R1_0} is reformulated to
\begin{equation}
f({\boldsymbol \beta})=\sum_{l=0}^{K-1} c^l \beta^l=\sum_{l=0}^{I-1} c_1^l \beta^l + \sum_{l=I}^{K-1} c_2^l \beta^l,
\end{equation}
where ${\boldsymbol c} \triangleq [c^0,c^1,\dots,c^{K-1}]$ denotes the coefficients in the objective, given by
\begin{align} \label{c}
c^l=\left\{\begin{array}{cl}
c_1^l=\frac{K-l}{l+1}-{I \choose l+1}/{K \choose l}, & \mbox{if}~ 0 \le l \le I-1, \\
c_2^l=\frac{K-l}{l+1}, & \mbox{if}~ I \le l \le K-1.
\end{array}\right.
\end{align}
According to \eqref{c}, the coefficient ${\boldsymbol c}$ is a piece-wise function in terms of $l$ consisting of two sub-functions, ${\boldsymbol c_1}$ for interval $l \in [0:I-1]$ and ${\boldsymbol c_2}$ for $l \in [I:K-1]$, respectively. We will start with the discussion of ${\boldsymbol c_2}$, then move to ${\boldsymbol c_1}$.
\subsubsection{The first derivative of coefficients ${\boldsymbol c}$}
It is apparent that ${\boldsymbol c_2}$ is monotonically decreasing in terms of $l$ in its interval. The first derivative ${\boldsymbol d_2}=[d_2^I,d_2^{I+1},\dots, d_2^{K-2}]$ is
\begin{equation}
d_2^l=c_2^{l+1}-c_2^l=\frac{-K-1}{(l+2)(l+1)}<0,\label{d2}
\end{equation}
and the second derivative ${\boldsymbol e_2}=[e_2^I,e_2^{I+1},\dots, e_2^{K-3}]$ is given by
\begin{equation}
e_2^l=d_2^{l+1}-d_2^l=\frac{2(K+1)}{(l+3)(l+2)(l+1)}>0.\label{e2}
\end{equation}
Thus, it implies a negative first derivative and a positive second derivative for ${\boldsymbol c_2}$.
Now we discuss the monotonicity of ${\boldsymbol c_1}$. We rewrite ${\boldsymbol c_1}$ into
\begin{equation}
c_1^l=\bigg[{K \choose l+1}-{I \choose l+1}\bigg]/{K \choose l}, \label{c1} \end{equation}
utilizing $\frac{K-l}{l+1}={K \choose l+1}/{K \choose l}$. Recalling the result from Appendix B that
\begin{equation}
\frac{R(l+1)}{R(l)}=\frac{\big[{K \choose l+2}-{I \choose l+2}\big]}{\big[{K \choose l+1}-{I \choose l+1}\big]} \frac{{K \choose l}} {{K \choose l+1}} <1, \label{B_R}
\end{equation}
it follows that $0 < c_1^{l+1}/c_1^l <1$. Hence, the first derivative of ${\boldsymbol c_1}$, referred to as ${\boldsymbol d_1}$, satisfies
$d_1^l=c_1^{l+1}-c_1^l < 0$.
Since we have confirmed that ${\boldsymbol c_1}$ and ${\boldsymbol c_2}$ are monotonically decreasing in their own intervals, now we need to prove $c_1^{I-1} > c_2^I$ holds true. To proceed, we simplify $c_1^{I-1}$ to
\begin{equation}\label{breakp0}
c_1^{I-1}=\frac{K-I+1}{I}-\frac{1}{{K \choose I-1}}
> \frac{K-I}{I+1}+\left(\frac{1}{I}-\frac{1}{{K \choose I-1}}\right).
\end{equation}
As $c_2^I=\frac{K-I}{I+1}$ and $I \le {K \choose I-1}$ for $I >1$, we have $c_1^{I-1} > c_2^I$.
Similarly, we need to prove $d_1^{I-1} < d_2^I$ to guarantee consistency when combining ${\boldsymbol d_1}$ and ${\boldsymbol d_2}$, which is required in the third step. Hence, we derive
\begin{align}\label{breakp2}
d_1^{I-1}-d_2^I&=(c_2^I-c_1^{I-1})-d_2^I \notag\\
&=\frac{(K-I+1)(K-I)\dots 4}{K(K-1)\dots (I+3)}\times \frac{3 \times 2}{(I+2)(I+1)I}
-\frac{2(K+1)}{I(I+1)(I+2)}.
\end{align}
According to \eqref{breakp2}, it holds true that $d_1^{I-1}-d_2^I < 0$ when $K \ge 4$. Since at least three points, i.e. $I-1, I, I+1$, are considered, we have $I+1 \le K-1$. If more than one inactive user is targeted, it follows that $K \ge 4$.
\subsubsection{The second derivative of coefficients ${\boldsymbol c}$}
The second derivative of ${\boldsymbol c_1}$ defined as ${\boldsymbol e_1}=[e_1^0,e_1^1,\dots, e_1^{I-3}]$ is
\begin{align}
e_1^l&=d_1^{l+1}-d_1^l=c_1^{l+2}-2c_1^{l+1}+c_1^l \notag\\
&=\frac{2(K+1)}{(l+3)(l+2)(l+1)}-\frac{{I \choose l+3}}{{K \choose l+2}}+2\frac{{I \choose l+2}}{{K \choose l+1}}-\frac{{I \choose l+1}}{{K \choose l}}.
\label{e1}
\end{align}
For the sake of simplicity, we introduce a new group of variables $\boldsymbol{\hat{e}_1}=\{\hat{e}_1^l\}$ with $\hat{e}_1^l=(l+3)(l+2)(l+1) K(K-1)\dots(I+1) e_1^l, l \in [0:I-1]$. $\boldsymbol{\hat{e}_1}$ can be written as
\begin{align} \label{hat-e1}
\hat{e}_1^l
=&2(K+1)K \dots(I+1)-(K-l-2)(K-l-3)\dots(I-l) \notag \\
&\times \left[(I-l-1)(I-l-2)(l+2)(l+1) \right.
\left. +(K-l)(K-l-1)(l+3)(l+2)\right. \\
&\left. -2(K-l-1)(I-l-1)(l+3)(l+1) \right]. \notag
\end{align}
First, some hypotheses are stated based on numerical results:
\begin{itemize}
\item When $l$ is fixed, $\hat{e}_1^l(I)$\footnote{Here we utilize $\hat{e}_1^l(\alpha)$ to specify $\hat{e}_1^l$ with $I$ fixed at $I=\alpha$.} decreases with the increase of $I$;
\item When $I$ is fixed, $\hat{e}_1^l(I)$ increases with respect to $l$;
\item Based on the previous properties, one can derive that the minimum value of $\hat{e}_1^l(I), ~l \in [0:I-1],~I \in [K-1]$, is $\hat{e}_1^0$ with $I=K-1$, i.e. $\hat{e}_1^0(K-1)$.
\end{itemize}
If one can verify these hypotheses one by one, $\hat{e}_1^l \ge 0$ is then proved. To simplify the proof, we relax the conditions: once the first hypothesis is proved, we only need to focus on $I=K-1$ and prove that $\hat{e}_1^l$ is an increasing function with respect to $l$ when $I=K-1$ (or, in particular, a constant function). It then holds true that $\min_{l, I}~{\hat{e}_1^l}(I) \ge 0$, so that $\hat{e}_1^l(I) \ge 0, l \in [0:I-1], I \in [K-1]$.
First, we prove that $\hat{e}_1^l(I+1) < \hat{e}_1^l(I)$ for any $l \le I$.
For clarity, we introduce three auxiliary variables $A, B, C_{I}$ given by
\begin{align}
\left\{\!\!\begin{array}{cl}
&A=2(K+1)K \dots(I+2), \\
&B=(K-l-2)(K-l-3)\dots (I-l+1), \\
&C_{I}=(I-l-1)(I-l-2)(l+2)(l+1) \\
&~~~~~+(K-l)(K-l-1)(l+3)(l+2)\\
&~~~~~-2(K-l-1)(I-l-1)(l+3)(l+1).
\end{array} \right.
\end{align}
Using the expressions above, we obtain that
\begin{align}\label{delta-hat-e1}
\hat{e}_1^l(I)-\hat{e}_1^l(I+1)
&=A(I+1)-B(I-l)(C_{I})-A+B\,C_{I+1} \notag\\
&=AI-B\left((I-l)C_{I}-C_{I+1}\right) \\
&=B\left[2(K+1)KI \gamma-((I-l)C_{I}-C_{I+1})\right] \notag
\end{align}
where $\gamma=\frac{A}{2(K+1)KB}=\frac{\alpha}{\beta}$ with auxiliary variables defined as $\alpha=(K-1)\dots (I+2)$ and $\beta=(K-l-2)\dots (I-l+1)$.
Now \eqref{delta-hat-e1} simplifies into $\hat{e}_1^l(I)-\hat{e}_1^l(I+1) =B \times \Xi,$
with the expression $\Xi$ defined as
\begin{align}\label{Lambda-hat-e1}
\Xi=&2(K+1)KI \gamma
-(I-l)(I-l-1)(l+1)(l+2)(I-l-3)~~ \notag\\
&-(K-l)(K-l-1)(l+3)(l+2)(I-l-1) \\
&+2(K-l-1)(I-l)(l+3)(l+1)(I-l-2).\notag
\end{align}
In this case, it remains to prove that $\Xi > 0$ to guarantee $\hat{e}_1^l(I) > \hat{e}_1^l(I+1)$.
With $K, I$ and $l$ as defined above, i.e. $0 \le l \le I-1, I \le K-2, K \in [3: +\infty]$, it holds true that $\Xi > 0$. This can be proved by direct calculation, so the detailed derivation is omitted here for brevity. The main idea is to compute the derivative of $\Xi$ in terms of $K$, utilizing the derivative of $\gamma$, and then prove that it is positive, so that one easily obtains the minimum value of $\Xi$, which is positive ($\min{\Xi}=12$), attained at the smallest $K=3$ with the unique combination $(I, l)=(1, 0)$.
Second, we analyze the values of $\boldsymbol{\hat{e}_1}$ when $I=K-1$, which have been proved to be the minimum values with respect to $I$.
In this case, the minuend term in \eqref{hat-e1} is independent of $l$, so that the subtrahend, referred to as $\Psi^l$, is the only term that requires analysis. We then rewrite \eqref{hat-e1} as
\begin{equation}
\hat{e}_1^l(K-1)=2(K+1)K -\Psi^l,
\end{equation}
where the variable $\Psi^l$ is defined as\footnote{Note that the coefficient term $(K-l-2)(K-l-3)\dots(I-l)$ in \eqref{psi} equals $1$ when $I=K-1$, since the product is empty on account of $K-l-2 < I-l$.}
\begin{align}
\Psi^l=&(K-l-2)(K-l-3)(l+2)(l+1)
+(K-l)(K-l-1)(l+3)(l+2) \notag\\
&-2(K-l-1)(K-l-2)(l+3)(l+1). \label{psi}
\end{align}
The aim is to investigate how $\Psi^l$ varies with $l$ in order to find the minimum value of $\Psi^l$. Hence, we reformulate \eqref{psi} as
\begin{equation}
\Psi^l=(K-l-1)(l+3)(K+l+2)-(K-l-2)(l+1)(K+l+3)
=2(K+1)K. \label{psi1}
\end{equation}
Now we can obtain that
\begin{equation}
\hat{e}_1^l(K-1)=2(K+1)K -\Psi^l
=2(K+1)K-2(K+1)K=0,
\end{equation}
which states that $\hat{e}_1^l(K-1)$ is constant and always non-negative. As proved previously, when $l$ is fixed, $\hat{e}_1^l(I)$ decreases with the increase of $I$; hence it holds true that
\begin{equation}
\min_{l, I}~{\hat{e}_1^l}(I)=\min_l {\hat{e}_1^l}(K-1) \ge 0,~\text{if}~K \ge I+1.
\end{equation}
To summarize, it has been proved that $\hat{e}_1^l(I) \ge 0,~l \in [0:I-1],~I \in [K-1],~K \in [3:+\infty]$, which guarantees $e_1^l \ge 0$ in the considered interval. To be exact, we obtain $\hat{e}_1^l(I) > \hat{e}_1^l(K-1) = 0,~I \in [K-2]$. Hence, the second derivative of the coefficients is positive, for both $e_1^l$ and $e_2^l$, when $I \in [K-2]$, with the exception of a constant second derivative equal to $0$ when $I=K-1$. This ends the proof of Lemma~\ref{lemma-step1}.
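The statements of Lemma~\ref{lemma-step1}, as well as the identity $\hat{e}_1^l(K-1)=0$ derived above, lend themselves to a direct numerical check; the script below is an illustrative verification (helper names are ours), not a replacement for the proof.

\begin{verbatim}
from math import comb, factorial

def coeffs(K, I):
    # c^l from eq. (c), l in [0:K-1]
    return [(K - l) / (l + 1)
            - (comb(I, l + 1) / comb(K, l) if l <= I - 1 else 0)
            for l in range(K)]

for K in range(4, 30):
    for I in range(1, K):
        c = coeffs(K, I)
        d = [b - a for a, b in zip(c, c[1:])]   # first differences
        e = [b - a for a, b in zip(d, d[1:])]   # second differences
        assert all(x < 0 for x in d)            # negative first derivative
        assert all(x > -1e-9 for x in e)        # >= 0; zero only at I = K-1

def e1_hat(K, I, l):
    # hat e_1^l = (l+3)(l+2)(l+1) K(K-1)...(I+1) e_1^l, cf. eq. (hat-e1)
    c = coeffs(K, I)
    e1 = c[l + 2] - 2 * c[l + 1] + c[l]
    return (l + 3) * (l + 2) * (l + 1) * factorial(K) // factorial(I) * e1

assert all(abs(e1_hat(K, K - 1, l)) < 1e-6
           for K in range(5, 25) for l in range(K - 3))
\end{verbatim}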
\subsection{Proof of Lemma~\ref{deltaGstart}}
To proceed, we reformulate $\Delta_{\tilde{G}_1}(t+1)$ into
\begin{equation}
\Delta_{\tilde{G}_1}(t+1)={t+1 \choose t}/{K \choose t}=\frac{(K-t)\dots2}{K\dots(t+2)}.
\end{equation}
Apparently, $\Delta_{\tilde{G}_1}(t+1)$ depends on $t$ but is independent of $I$. We then reformulate $\Delta_{\tilde{G}_1}(t+1)$ as a function of $t$, defined as $\omega_1(t)=\Delta_{\tilde{G}_1}(t+1)$. Considering different values of $t$, it can be obtained that
\begin{align*}
&\omega_1(t)=\Delta_{\tilde{G}_1}(t+1)=\frac{(K-t)\dots2}{K\dots(t+2)}, \\
&\omega_1(t+1)=\Delta_{\tilde{G}_1}(t+2)=\frac{(K-t-1)\dots2}{K\dots(t+3)}, \\
&\Omega_1(t)=\frac{\omega_1(t+1)}{\omega_1(t)}=-1+\frac{K+2}{K-t},
\end{align*}
where $\Omega_1(t)$ denotes the first derivative of $\omega_1(t)$ with respect to $t$ in ratio form, for $t \in [1:K-2]$. Apparently, $\Omega_1(t)$ increases with respect to $t$. Let $t^*$ satisfy $\Omega_1(t^*)=1$, which implies that $\omega_1(t^*)=\omega_1(t^*+1)$. It then follows that $t^*=\floor{K/2}-1$. Consequently, it holds true that
\begin{align}\label{deltaGOmega}
\Omega_1(t) \left\{\begin{array}{cl}
\le 1, & \mbox{if}~~1 \le t \le \floor{K/2}-1, \\
\ge 1, & \mbox{if}~~ \ceil{K/2}-1 \le t \le K-2.
\end{array}\right.
\end{align}
Hence, $\omega_1(t)$ first decreases with $t$ when $1 \le t \le \floor{K/2}-1$, and then increases when $\ceil{K/2}-1 \le t \le K-2$. The maximum value of $\omega_1(t)$ is given by
\begin{equation}
\max_t~\omega_1(t)=\max\{\omega_1(1), \omega_1(K-2)\}=\frac{2}{K},
\end{equation}
where $\omega_1(1)=\omega_1(K-2)=2/K$.
Next, we focus on how $\Delta_{\tilde{G}_2}(t+1)$ varies with $t$. Similarly, it is derived that
\begin{align*}
&\omega_2(t)=\frac{1}{\left(t+1\right)\left(1+(K-t-2)t/K\right)}, \\
&\omega_2(t+1)=\frac{1}{\left(t+2\right)\left(1+(K-t-3)(t+1)/K\right)}, \\
&\Omega_2(t)=\frac{\omega_2(t+1)}{\omega_2(t)}=\frac{\left(t+1\right)\left(1+(K-t-2)t/K\right)}{\left(t+2\right)\left(1+(K-t-3)(t+1)/K\right)} \notag \\
&~~~~~~=\frac{A}{A+B},
~~~~t \in [1:K-2],~K \ge 3,
\end{align*}
where the definitions of $\omega_2(t)$ and $\Omega_2(t)$ are similar to $\omega_1(t)$ and $\Omega_1(t)$. The auxiliary variables are defined as
\begin{align*}
A&=\left(t+1\right)\left(K+(K-t-2)t\right), \\
B&=-3t^2+(2K-9)t+3K-6.
\end{align*}
The sign of $B$ is important. Since $B$ is a quadratic function of $t$, the positive root of $B=0$ is
\begin{equation*}
t_2=\frac{2K-9+\sqrt{4K^2+9}}{6}
\le K-2.
\end{equation*}
Thus, we derive $B \ge 0$, if $1 \le t \le t_2$, and $B< 0$, if $t_2 < t \le K-2$.
As a result, $\Omega_2(t)$ becomes
\begin{align}
\Omega_2(t) \left\{\begin{array}{cl}
\le 1, & \mbox{if}~~1 \le t \le t_2,\\
> 1, & \mbox{if}~~ t_2 < t \le K-2,
\end{array}\right.
\end{align}
which means that $\omega_2(t)$ decreases with respect to $t$ when $1 \le t \le t_2$ and then increases when $t_2 < t \le K-2$.
Now we compare $\omega_1(K-2)$ and $\omega_2(K-2)$, i.e. at $t=K-2$. As mentioned before, $\omega_1(1)=\omega_1(K-2)=2/K$, while
\begin{equation*}
\omega_2(K-2)=\frac{1-M/N}{1+M/N}
=\frac{1}{K-1} < \omega_1(K-2).
\end{equation*}
\end{equation*}
On the other hand, when $t=1$, we obtain
\begin{equation*}
\omega_2(1)=\frac{1}{2(1+(K-3)/K)}=\frac{1}{2(2-3/K)} \in [\frac{1}{4}, \frac{1}{2}].
\end{equation*}
Comparing $\omega_1(1)$ and $\omega_2(1)$, we let $\omega_2(1)=\omega_1(1)$ and obtain
\begin{equation*}
(K-2)(K-6)=0.
\end{equation*}
Therefore, it is proved that
\begin{align} \label{deltaG2}
\left\{\begin{array}{cl}
\Delta_{\tilde{G}}(2)=\omega_2(1)-\omega_1(1)<0, & \mbox{if}~~K \in [3:6),\\
\Delta_{\tilde{G}}(2)=\omega_2(1)-\omega_1(1)\ge 0, & \mbox{if}~~K \in [6:+\infty].
\end{array}\right.
\end{align}
For the point next to the largest $t$, i.e. $t=K-3$, we get
\begin{align*}
\omega_2(K-3)&=\frac{1}{(K-2)(1+\frac{K-3}{K})},\\
\omega_1(K-3)&=\frac{6}{K(K-1)}.
\end{align*}
Similarly by comparison, we obtain
\begin{align}
\left\{\begin{array}{cl}
\omega_2(K-3)<\omega_1(K-3), & \mbox{if}~~3 \le K \le 8,\\
\omega_2(K-3)>\omega_1(K-3), & \mbox{if}~~ K \ge 9.
\end{array}\right.
\end{align}
Consequently, $\Delta_{\tilde{G}}(t+1)$ at $(t=K-3)$ becomes
\begin{align}\label{deltaGK-3}
\Delta_{\tilde{G}}(K-2)\left\{\begin{array}{cl}
<0, & \mbox{if}~~3 \le K \le 8,\\
>0, & \mbox{if}~~ K \ge 9.
\end{array}\right.
\end{align}
The conclusions on $\Delta_{\tilde{G}}(t+1)$ for $t=1$ and $t=K-3$, given in \eqref{deltaG2} and \eqref{deltaGK-3}, can be utilized to analyze the value of $\Delta_{\tilde{G}}(t+1)$ for $t=1, \dots, K-4$ with $K \ge 5$. In particular, for $K \in [5:8]$, it is easy to obtain that
\begin{align}\label{deltaGK-5}
\Delta_{\tilde{G}}(t+1) \left\{\begin{array}{cl}
>0, & \mbox{if}~~t \in [K-4], K \in [6:8], \\
<0, & \mbox{if}~~t \in [K-4], K=5.
\end{array}\right.
\end{align}
When $K \in [9:+\infty]$, the sign of $\Delta_{\tilde{G}}(t+1)$ with $t=K-3$ has been clarified above as $\Delta_{\tilde{G}}(t+1)>0$ for $t=K-3$. We can assume that there is some $t$ satisfying $\Delta_{\tilde{G}}(t+2)>0$, i.e. $\omega_2(t+1) > \omega_1(t+1)$. Then, $\Delta_{\tilde{G}}(t+1)$ simplifies to
\begin{align*}
\Delta_{\tilde{G}}(t+1)
=&\Delta_{\tilde{G}_2}(t+1)-\Delta_{\tilde{G}_1}(t+1) \\
=&\omega_2(t)-\omega_1(t)
=\frac{\left[\omega_2(t+1)\Omega_1(t)-\omega_1(t+1)\Omega_2(t)\right]}{\Omega_1(t)\Omega_2(t)}.
\end{align*}
Next, we compare $\Omega_1(t)(>0)$ and $\Omega_2(t)(>0)$ as
\begin{equation*}
\Omega_1(t)-\Omega_2(t)
=\frac{t+2}{K-t}-\frac{\left(t+1\right)\left(K+(K-t-2)t\right)}{\left(t+2\right)\left(K+(K-t-3)(t+1)\right)}.
\end{equation*}
It can further simplify to
\begin{align*}
&(t+2)^2\left(K+(K-t-3)(t+1)\right) - (K-t)(t+1)\left(K+(K-t-2)t\right) \notag \\
=&AB-CD,
\end{align*}
where the auxiliary variables are $A=t^2+4t+4$, $B=K-2t-3$, $C=Kt-t^2+K-t$, and $D=Kt+K-2t^2-6t-4$. It follows that
\begin{align*}
C&=A+(t+1)(K-2t-4)+t, \\
D&=B+t(K-2t-4)-1.
\end{align*}
When $t>\ceil{K/2}-2$ and since $t$ is an integer, we obtain $(K-2t-4) \le -1$ and hence $(t+1)(K-2t-4)+t<0$, so that $C<A$ and $D<B$ hold. It then holds true that $AB-CD >0$ and consequently $\Omega_1(t)>\Omega_2(t)$. Substituting this back and performing recursion from $t=K-4$, we obtain
\begin{equation*}
\Delta_{\tilde{G}}(t+1)>0,~~~~\mbox{if}~t \in (\ceil{K/2}-2, K-4],~K>9.
\end{equation*}
Similarly, it can be proved that $\Delta_{\tilde{G}}(t+1)>0$ for $t \in [\ceil{K/2}-2], K>9$, utilizing recursion from $\Delta_{\tilde{G}}(2)=\omega_2(1)-\omega_1(1)>0$. In this case, we assume there is some $t$ satisfying $\Delta_{\tilde{G}}(t+1)>0$, and then discuss the sign of $\Delta_{\tilde{G}}(t+2)$, given by
\begin{equation*}
\Delta_{\tilde{G}}(t+2)
=\omega_2(t)\Omega_2(t)-\omega_1(t)\Omega_1(t).
\end{equation*}
Using the same strategy, $\Delta_{\tilde{G}}(t+1)>0$ for $t \in [\ceil{K/2}-2], K>9$ can be proved as well.
This ends the discussion of the sign of $\Delta_{\tilde{G}}(t+1)$ for all $t \in [K-2]$ and $K>3$.
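Finally, the sign pattern \eqref{deltaGtplus1} can be verified exhaustively for moderate $K$ with a few lines of Python; the check below is an illustrative script of ours (note the boundary case $K=6$, $t=1$, where the difference is exactly zero, cf.~\eqref{deltaG2}).

\begin{verbatim}
from math import comb

def delta_G_start(K, t):
    # Delta_G(t+1) = omega_2(t) - omega_1(t), cf. Lemma (deltaGstart)
    w1 = comb(t + 1, t) / comb(K, t)
    w2 = 1.0 / ((t + 1) * (1 + (K - t - 2) * t / K))
    return w2 - w1

for K in range(3, 40):
    assert delta_G_start(K, K - 2) < 0                 # t = K-2: negative
for K in (4, 5):
    assert all(delta_G_start(K, t) < 0 for t in range(1, K - 2))
for K in range(6, 9):
    # >= 0 on t in [1:K-4]; equality occurs at K=6, t=1
    assert all(delta_G_start(K, t) >= -1e-12 for t in range(1, K - 3))
    assert delta_G_start(K, K - 3) < 0                 # t = K-3, K in [6:8]
for K in range(9, 40):
    assert all(delta_G_start(K, t) > 0 for t in range(1, K - 2))
\end{verbatim}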
\bibliographystyle{IEEEtran}
\section*{Acknowledgements}
\input{Acknowledgements}
\printbibliography
\end{document}
\section{Introduction}
\label{sec:intro}
In order to enhance the capability of the experiments to discover physics beyond the Standard Model,
the Large Hadron Collider (LHC)
operates at the conditions yielding the highest integrated luminosity achievable.
Therefore, the collisions of proton bunches result
not only in large transverse-momentum transfer proton--proton (\pp) interactions, but also in additional
collisions within the same bunch crossing, primarily consisting of low-energy quantum chromodynamics
(QCD) processes.
Such additional \pp collisions are referred to as \textit{in-time} \textit{pile-up} interactions.
In addition to in-time pile-up, \textit{out-of-time} pile-up refers to the energy deposits
in the ATLAS calorimeter from previous and following bunch crossings
with respect to the triggered event.
In this paper, in-time and out-of-time pile-up are referred to collectively as pile-up (PU).
In Ref.~\cite{Aad:2015ina} it was shown that pile-up jets can be
effectively removed using track and vertex information with the jet-vertex-tagger (\ensuremath{\mathrm{JVT}}\xspace)
technique. The CMS Collaboration employs a pile-up mitigation strategy based on tracks and
jet shapes~\cite{CMS-PAS-JME-13-005}.
A limitation of the \ensuremath{\mathrm{JVT}}\xspace discriminant used by the ATLAS Collaboration is that it can only be used for jets within the
coverage\footnote{{\textsc ATLAS}\xspace uses a right-handed coordinate system with
its origin at the nominal interaction point (IP) in the centre of the detector
and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the
centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates
$(r, \phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle
around the beam pipe.
The pseudorapidity is defined in terms of the polar angle $\theta$ as
$\eta = - \ln \tan(\theta/2)$.
} of the tracking detector, $|\eta|<2.5$. However, in the ATLAS detector, jets are reconstructed
in the range $|\eta|<4.5$.
The rejection of pile-up jets in the \textit{forward} region, here defined as $2.5<|\eta|<4.5$,
is crucial to enhance the sensitivity of key analyses such as
the measurement of Higgs boson production in the vector-boson fusion (VBF) process.
Figure~\ref{fig:fJVTapp}(a) shows how the fraction of $\Zboson+$jets events with
at least one forward jet\footnote{The jet reconstruction is described in Section~\ref{sec:objects}.} with $\pt>20\GeV$,
an important background for VBF analyses,
rises quickly with busier pile-up conditions, quantified by the average number of interactions per bunch crossing ($\langle\mu\rangle$).
Likewise, the resolution of the missing transverse momentum (\met) components \ensuremath{E_x^{\text{miss}}}\xspace and \ensuremath{E_y^{\text{miss}}}\xspace
in $\Zboson+$jets events is also affected by the presence of forward pile-up jets.
The inclusion of forward jets allows a more precise \met calculation
but a more pronounced pile-up dependence, as shown in Figure~\ref{fig:fJVTapp}(b).
At higher $\langle\mu\rangle$, improving the \met resolution depends on rejecting all forward jets,
unless the impact of pile-up jets specifically can be mitigated.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{zmumu_evtFracmu_bare}\label{appfrac}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{met_resomu_bare}\label{appmet}}
\end{center}
\caption{\protect\subref{appfrac} Fraction of simulated $\Zboson+$jets events with at least one forward jet
and \protect\subref{appmet} the resolution of the \met components \ensuremath{E_x^{\text{miss}}}\xspace and \ensuremath{E_y^{\text{miss}}}\xspace
as a function of $\langle\mu\rangle$.
Jets and \met definitions are described in Section~\ref{sec:objects}.
}
\label{fig:fJVTapp}
\end{figure}
In this paper, the phenomenology of pile-up jets with $|\eta|>2.5$ is investigated
in detail, and techniques to identify and reject them are presented.
The paper is organized as follows. Section~\ref{sec:objects} briefly describes the ATLAS
detector, the event reconstruction and selection.
The physical origin and classification of pile-up jets are described in Section~\ref{sec:pileup}.
Section~\ref{sec:width} describes the use of jet shape variables for the
identification and rejection of forward pile-up jets.
The \textit{forward \ensuremath{\mathrm{JVT}}\xspace} (\ensuremath{\mathrm{fJVT}}\xspace) technique is presented
in Section~\ref{sec:fjvt} along with its performance and efficiency measurements.
The usage of jet shape variables in improving \ensuremath{\mathrm{fJVT}}\xspace performance is presented in Section~\ref{sec:comb},
while the application of forward pile-up jet rejection in a VBF analysis is discussed in Section~\ref{sec:vbf}.
The conclusions are presented in Section~\ref{sec:conclusion}.
\section{Experimental setup}
\label{sec:objects}
\subsection{ATLAS detector}
\label{sec:atlas}
The {\textsc ATLAS}\xspace detector is a general-purpose particle detector covering almost $4\pi$
in solid angle and consisting of a tracking system called the inner detector (ID), a calorimeter system,
and a muon spectrometer (MS). The details of the detector are given in
Ref.~\cite{ATLAS:JINST,ATLAS-TDR-19}.
The ID consists of silicon pixel and microstrip tracking detectors covering
the pseudorapidity range of $|\eta| < 2.5$ and
a straw-tube tracker covering $|\eta| < 2.0$.
These components are immersed in an axial 2\,T
magnetic field provided by a superconducting solenoid.
The electromagnetic (EM) and hadronic calorimeters are composed
of multiple subdetectors covering the range $|\eta|<4.9$, generally divided into barrel
($|\eta| < 1.4$), endcap ($1.4 < |\eta| < 3.2$) and forward ($3.2 < |\eta| < 4.9$)
regions.
The barrel and endcap sections of the EM calorimeter use liquid argon (LAr) as the active medium and lead absorbers.
The hadronic endcap calorimeter ($1.5<|\eta|<3.2$) uses copper absorbers and LAr,
while in the forward ($3.1<|\eta|<4.9$) region LAr, copper and tungsten are used.
The LAr calorimeter read-out~\cite{Abreu:2010zzc}, with a pulse length between 60 and 600 ns,
is sensitive to signals from the preceding 24 bunch crossings. It uses bipolar shaping with positive and
negative output, which ensures that the signal induced by out-of-time pile-up averages to zero.
In the region $|\eta|<1.7$, the hadronic (Tile) calorimeter is constructed from steel absorber and scintillator
tiles and is separated into barrel ($|\eta|<1.0$) and extended barrel ($0.8<|\eta|<1.7$) sections.
The fast response of the Tile calorimeter makes it less sensitive to out-of-time pile-up.
The MS forms the outer layer of the ATLAS detector and is dedicated to the detection and measurement
of high-energy muons in the region $|\eta|<2.7$.
A multi-level trigger system of dedicated hardware and software filters is used
to select \pp collisions producing high-\pT particles.
\subsection{Data and MC samples}
The studies presented in this paper are performed using a data set
of $pp$ collisions at $\sqrt{s}=13\TeV$, corresponding to an integrated
luminosity of $3.2$~\ifb, collected in 2015 during which the LHC operated
with a bunch spacing of 25 ns.
There are on average 13.5 interactions per bunch crossing in the data sample used
for the analysis.
Samples of simulated events used for comparisons with data are reweighted to match the
distribution of the number of pile-up interactions observed in data. The average number of interactions
per bunch crossing $\langle\mu\rangle$ in the data used as reference for the reweighting is divided by a
scale factor of $1.16\pm0.07$.
This scale factor takes into account the fraction of visible cross-section
due to inelastic \pp collisions as measured in the data~\cite{Aad:2011eu}
and is required to obtain good agreement with the number of inelastic interactions reconstructed in the tracking detector
as predicted in the reweighted simulation.
In order to extend the study of the pile-up dependence, simulated samples with an
average of 22 interactions per bunch crossing are also used.
Dijet events are simulated with the \PYTHIAeight~\cite{Sjostrand:2007gs} event generator
using the NNPDF2.3LO~\cite{Carrazza:2013axa} set of parton distribution
functions (PDFs) and the parameter values set according to the A14 underlying-event
tune~\cite{ATL-PHYS-PUB-2014-021}.
Simulated \ttbar events are generated with
\Powheg~v2.0~\cite{Alioli:2010xd,Frixione:2007vw,Nason:2004rx} using
the CT10 PDF set~\cite{Lai:2010vv};
\PYTHIAsix~\cite{Sjostrand:2006za} is used for fragmentation and
hadronization with the Perugia2012~\cite{Skands:2010ak}
tune that employs the CTEQ6L1~\cite{Pumplin:2002vw} PDF set.
A sample of leptonically decaying Z bosons produced with jets ($\Zboson(\to\ell\ell)$+jets)
and VBF $H\to\tau\tau$ samples are generated with \Powheg v1.0 and
\PYTHIAeight is used for fragmentation and hadronization with the AZNLO tune~\cite{ATL-PHYS-PUB-2013-017}
and the CTEQ6L1 PDF set.
For all samples, the EvtGen v1.2.0 program~\cite{EvtGen} is used for properties
of the bottom and charm hadron decays.
The effect of in-time as well as out-of-time pile-up
is simulated using minimum-bias events generated with \PYTHIAeight to reflect the
pile-up conditions during the 2015 data-taking period,
using the A2 tune~\cite{ATL-PHYS-PUB-2012-003} and the
MSTW2008LO~\cite{Martin:2009iq} PDF set.
All generated events are processed with a detailed simulation of the {\textsc ATLAS}\xspace detector
response~\cite{Aad:2010ah} based on \GEANTfour~\cite{Agostinelli:2002hh}
and subsequently reconstructed and analysed in the same way as the data.
\subsection{Event reconstruction}
\label{sec:OS}
The raw data collected by the ATLAS detector is reconstructed in the form of particle candidates
and jets using various pattern recognition algorithms.
The reconstruction methods used in this analysis are detailed in
Ref.~\cite{Aad:2015ina}; an overview is presented in this section.
\paragraph{Inputs to jet reconstruction}
Jets in ATLAS are reconstructed from clusters of energy deposits in the calorimeters.
Two methods of combining calorimeter cell information are considered in this paper:
topological clusters and towers.
Topological clusters (topo-clusters)~\cite{Aad:2016upy} are built from neighbouring calorimeter cells.
The algorithm uses as seeds calorimeter cells
with energy significance\footnote{The cell noise
$\sigma_\mathrm{noise}$ is the sum in quadrature of the readout electronic noise and the cell noise due to
pile-up, estimated in simulation~\cite{Aad:2016upy,ATLASjets}.} $|E_\mathrm{cell}|/\sigma_\mathrm{noise}>4$,
combines all neighbouring cells with
$|E_\mathrm{cell}|/\sigma_\mathrm{noise}>2$ and finally adds neighbouring cells
without any significance requirement.
Topo-clusters are used as input for jet reconstruction.
Calorimeter towers are fixed-size objects ($\Delta\eta\times\Delta\phi=0.1\times0.1$)~\cite{cscbook}
that ensure a uniform segmentation of the calorimeter information.
Instead of building clusters, the cells are projected onto a fixed grid in $\eta$ and $\phi$ corresponding to 6400 towers.
Calorimeter cells which completely fit within a tower contribute their total energy
to the single tower.
Other cells extending beyond the tower boundary contribute to multiple
towers, depending on the overlap fraction of the cell area with the towers.
In the following, towers are matched geometrically to jets reconstructed using topo-clusters.
\paragraph{Vertices and tracks}
The event hard-scatter primary vertex is defined as the
reconstructed primary vertex with the largest $\sum p_\mathrm{T}^2$ of constituent tracks.
When evaluating performance in simulation,
only events where the reconstructed hard-scatter primary vertex lies
$|\Delta z|<0.1$\,mm from the true hard-scatter interaction are considered.
For the physics processes considered, the reconstructed hard-scatter primary vertex
matches the true hard-scatter interaction more than 95\% of the time.
Tracks are required to have $\pT> 0.5\GeV$ and to satisfy quality
criteria designed to reject poorly measured or fake tracks~\cite{ATL-PHYS-PUB-2015-051}.
Tracks are assigned to primary vertices based on the track-to-vertex matching
resulting from the vertex reconstruction.
Tracks not included in vertex reconstruction are assigned to the nearest vertex based on the distance
$|\Delta z \times \sin \theta|$,
up to a maximum distance of 3.0 mm. Tracks not matched to any vertex are not considered.
Tracks are then assigned to jets by adding them to the jet
clustering process with infinitesimal \pt{}, a procedure known as ghost-association~\cite{Cacciari:2008gn}.
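As an illustration of the assignment logic described above, the following minimal sketch (not ATLAS software; the function name and inputs are hypothetical) assigns an unmatched track to the nearest vertex in $|\Delta z \times \sin \theta|$, up to the 3.0 mm maximum distance.
\begin{verbatim}
# Illustrative sketch of the track-to-vertex assignment described above.
import math

def assign_track(track_z0, track_theta, vertex_zs, max_dist_mm=3.0):
    """Return the index of the nearest vertex in |dz * sin(theta)|,
    or None if no vertex lies within max_dist_mm."""
    best_i, best_d = None, max_dist_mm
    for i, vz in enumerate(vertex_zs):
        d = abs((track_z0 - vz) * math.sin(track_theta))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# Example: a central track (theta ~ pi/2) 1 mm from the second vertex.
print(assign_track(11.0, math.pi / 2, [0.0, 10.0]))   # -> 1
\end{verbatim}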
\paragraph{Jets}
\label{sec:jets}
Jets are reconstructed from topo-clusters at the EM scale\footnote{The EM scale
corresponds to the energy deposited in the calorimeter by electromagnetically interacting particles
without any correction accounting for the loss of signal for hadrons.}
using the anti-$k_t$~\cite{Cacciari:2008gp} algorithm, as implemented in
\Fastjet~\cite{Cacciari:2011ma}, with a radius parameter $R=0.4$.
After a jet-area-based subtraction of pile-up energy, a response correction is applied to each
jet reconstructed in the calorimeter to calibrate it to the particle-level jet energy scale~\cite{Aad:2015ina,ATLASjets,ATLAS-CONF-2015-002}.
Unless noted otherwise, jets are required to have $20\GeV < \pT < 50\GeV$. Higher-\pT forward jets are ignored due to
their negligible pile-up rate at the pile-up conditions considered in this paper.
\textit{Central} jets are required to be within $|\eta|$ of 2.5 so that most of their charged particles are within
the tracking coverage of the inner detector.
\textit{Forward} jets are those in the region $2.5<|\eta|<4.5$, for which no tracks associated with
their charged particles are measured, since the tracking coverage does not extend beyond $|\eta|=2.5$.
Jets built from particles in the Monte Carlo generator's event record (``truth particles'') are also considered.
Truth-particle jets are reconstructed using the
anti-$k_t$ algorithm with $R=0.4$ from stable\footnote{Truth particles are considered stable
if their decay length $c \tau$ is greater than $1\unit{cm}$.
A truth particle is considered to be interacting if it is expected to deposit most of
its energy in the calorimeters; muons and neutrinos are considered to be
non-interacting.}
final-state truth particles from the
simulated hard-scatter (\textit{truth-particle hard-scatter jets}) or in-time pile-up
(\textit{truth-particle pile-up jets}) interaction of choice.
A third type of truth-particle jet (\textit{inclusive truth-particle jets}) is reconstructed by
considering truth particles from all interactions
simultaneously, in order to study the effects of pile-up interactions on truth-particle pile-up jets.
The simulation studies in this paper require a
classification of the reconstructed jets into three categories: \textit{hard-scatter jets},
\textit{QCD pile-up jets}, and \textit{stochastic pile-up jets}.
Jets are thus truth-labelled based on a matching criterion to truth-particle jets.
Similarly to Ref.~\cite{Aad:2015ina}, jets are first classified as
hard-scatter or pile-up jets.
Jets are labelled as hard-scatter jets if
a truth-particle hard-scatter jet with $\pT > 10\GeV$
is found within $\Delta R = \sqrt{(\Delta \eta)^2 + (\Delta \phi)^2}$ of 0.3.
The $\pT>10\GeV$ requirement is used to avoid accidental matches of reconstructed jets
with soft activity from the hard-scatter interaction.
In cases where more than one truth-particle jet is matched,
$\pT^\mathrm{truth}$ is defined from the highest-\pT truth-particle hard-scatter jet
within $\Delta R$ of 0.3.
Jets are labelled as pile-up jets if no truth-particle hard-scatter jet with $\pT > 4\GeV$
is found within $\Delta R$ of 0.6.
These pile-up jets are further classified as QCD pile-up if they are matched within
$\Delta R<0.3$ to a truth-particle pile-up jet or as stochastic pile-up jets
if there is no truth-particle pile-up jet within $\Delta R<0.6$,
requiring that truth-particle pile-up jets have $\pT > 10\GeV$ in both cases.
Jets with $0.3<\Delta R<0.6$ relative to truth-particle hard-scatter jets with $\pT > 10\GeV$,
or within $\Delta R<0.3$ of truth-particle hard-scatter jets with $4\GeV < \pT < 10\GeV$, are not
labelled because their nature cannot be unambiguously determined.
These jets are therefore not used for performance studies based on simulation.
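The labelling criteria above can be summarized in a compact decision function; the sketch below is illustrative only (the \texttt{Jet} container and \texttt{dR} helper are assumptions introduced here, not ATLAS software).
\begin{verbatim}
# Minimal sketch of the truth-labelling logic described in the text.
import math
from collections import namedtuple

Jet = namedtuple("Jet", "pt eta phi")   # pt in GeV

def dR(a, b):
    dphi = (a.phi - b.phi + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a.eta - b.eta, dphi)

def label_jet(jet, truth_hs_jets, truth_pu_jets):
    hs10 = [t for t in truth_hs_jets if t.pt > 10.0]
    hs4 = [t for t in truth_hs_jets if t.pt > 4.0]
    pu10 = [t for t in truth_pu_jets if t.pt > 10.0]
    if any(dR(jet, t) < 0.3 for t in hs10):
        return "hard-scatter"
    if not any(dR(jet, t) < 0.6 for t in hs4):      # pile-up jet
        if any(dR(jet, t) < 0.3 for t in pu10):
            return "QCD pile-up"
        if not any(dR(jet, t) < 0.6 for t in pu10):
            return "stochastic pile-up"
    return "unlabelled"   # ambiguous cases, excluded from performance studies

# Example: a 22 GeV jet close to a 25 GeV truth hard-scatter jet.
print(label_jet(Jet(22, 3.1, 0.5), [Jet(25, 3.15, 0.52)], []))
\end{verbatim}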
\paragraph{Jet Vertex Tagger}
The Jet Vertex Tagger (JVT) is built out of the combination of two jet variables,
\ensuremath{\mathrm{corrJVF}}\xspace and $\ensuremath{R_\mathrm{pT}}\xspace^0$, that provide information to separate hard-scatter jets from pile-up jets.
The quantity \ensuremath{\mathrm{corrJVF}}\xspace~\cite{Aad:2015ina} is defined for each jet as
\begin{equation}
\ensuremath{\mathrm{corrJVF}}\xspace = \frac{\sum{\pT^{\mathrm{trk}}(\mathrm{PV}_0)}}{\sum \pT^{\mathrm{trk}}(\mathrm{PV}_0) + \frac{\sum_{i\geq1}\sum \pT^{\mathrm{trk}}(\mathrm{PV}_i)}{ (k \cdot \ensuremath{n_\mathrm{trk}^\mathrm{PU}}\xspace)}},
\label{eq:corrJVF}
\end{equation}
where PV$_i$ denotes the reconstructed event vertices (PV$_0$ is
the identified hard-scatter vertex and the PV$_i$ are sorted by decreasing $\sum p_\mathrm{T}^2$),
and $\sum{\pT^{\mathrm{trk}}(\mathrm{PV}_0)}$ is the scalar \pT sum
of the tracks that are associated with the jet and originate from the hard-scatter
vertex.
The term $\pT^{\mathrm{PU}}=\sum_{i\geq1}\sum \pT^{\mathrm{trk}}(\mathrm{PV}_i)$ denotes
the scalar \pT sum of the tracks associated with the jet and originating from pile-up
vertices.
To correct for the linear increase of $\pT^{\mathrm{PU}}$ with the
total number of pile-up tracks per event (\ensuremath{n_\mathrm{trk}^\mathrm{PU}}\xspace), $\pT^{\mathrm{PU}}$ is divided by
$(k \cdot \ensuremath{n_\mathrm{trk}^\mathrm{PU}}\xspace)$ with the parameter $k$ set to 0.01~\cite{Aad:2015ina}.\footnote{The parameter
$k$ does not affect performance
and is chosen to ensure that the \ensuremath{\mathrm{corrJVF}}\xspace distribution stretches over the full range between 0 and 1.}
The variable $\ensuremath{R_\mathrm{pT}}\xspace^0$ is defined as the scalar \pT sum of the tracks that are associated
with the jet and
originate from the hard-scatter vertex divided by the fully calibrated jet $\pT$,
which includes pile-up subtraction:
\begin{equation}
\ensuremath{R_\mathrm{pT}}\xspace^0 = \frac{\sum{\pT^{\mathrm{trk}}(\mathrm{PV}_0)}}{\pT^\mathrm{jet}}.
\label{eq:RpT}
\end{equation}
This observable tests the compatibility between the jet \pt and the
total \pt of the hard-scatter charged particles within the jet.
Its average value for hard-scatter jets is approximately 0.5, as the numerator
does not account for the neutral particles in the jet.
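As an illustration, the following minimal sketch (not ATLAS software; names and inputs are hypothetical) computes \ensuremath{\mathrm{corrJVF}}\xspace and $\ensuremath{R_\mathrm{pT}}\xspace^0$ from per-vertex lists of ghost-associated track \pt, following Eqs.~(\ref{eq:corrJVF}) and (\ref{eq:RpT}).
\begin{verbatim}
# Minimal sketch of corrJVF and RpT^0 (pT values in GeV).
def corr_jvf(trk_pt_per_vertex, n_pu_tracks, k=0.01):
    """trk_pt_per_vertex[0] holds the tracks from the hard-scatter vertex."""
    hs = sum(trk_pt_per_vertex[0])
    pu = sum(pt for v in trk_pt_per_vertex[1:] for pt in v)
    denom = hs + (pu / (k * n_pu_tracks) if n_pu_tracks > 0 else 0.0)
    return hs / denom if denom > 0 else -1.0   # sentinel: no tracks at all

def rpt0(trk_pt_per_vertex, jet_pt):
    return sum(trk_pt_per_vertex[0]) / jet_pt

tracks = [[10.0, 6.0], [2.0, 1.5]]   # tracks from PV0, tracks from PV1
print(corr_jvf(tracks, n_pu_tracks=40), rpt0(tracks, jet_pt=30.0))
\end{verbatim}
The $-1$ sentinel for trackless jets is a convention assumed here for illustration.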
The \ensuremath{\mathrm{JVT}}\xspace discriminant is built by defining a two-dimensional likelihood
based on a k-nearest neighbour (kNN) algorithm~\cite{Hocker:2007ht}.
An extension of the $\ensuremath{R_\mathrm{pT}}\xspace^0$ variable computed with respect to any vertex $i$ in the event,
$\ensuremath{R_\mathrm{pT}}\xspace^i=\sum_k{\pT^{\mathrm{trk}_k}(\mathrm{PV}_i)}/\pT^\mathrm{jet}$, is also used
in this analysis.
\paragraph{Electrons and muons}
Electrons are built from EM clusters and associated ID tracks.
They are required to satisfy $|\eta|<2.47$ and $p_\mathrm{T}>10\GeV$, as well as
reconstruction quality and isolation criteria~\cite{Aad:2014fxa}.
Muons are built from an ID track (for $|\eta|<2.5$) and an MS track.
Muons are required to satisfy $p_\mathrm{T}>10\GeV$ as well as reconstruction quality
and isolation criteria~\cite{Aad:2016jkr}.
Correction factors are applied to simulated events to account for mismodelling of lepton isolation, trigger efficiency,
and quality selection variables.
\paragraph{\met}
The missing transverse momentum, \ensuremath{\boldsymbol{E}_{\text{T}}^{\text{miss}}}\xspace, corresponds to the negative vector sum of the
transverse momenta
of selected electron, photon, and muon candidates, as well as jets and tracks not used in reconstruction~\cite{Aad:2016nrq}.
The scalar magnitude \met represents the total transverse momentum imbalance in an event.
\section{Origin and structure of pile-up jets}
\label{sec:pileup}
The additional transverse energy from pile-up interactions contributing to jets originating
from the hard-scatter (HS) interaction is subtracted on an event-by-event basis using the
jet-area method~\cite{Aad:2015ina,Cacciari:2007fd}.
However, the jet-area subtraction assumes a uniform pile-up distribution across the calorimeter,
while local fluctuations of pile-up can cause additional jets to be reconstructed.
The additional jets can be classified into two categories:
\textit{QCD pile-up jets}, where the particles in the jet stem mostly from a single QCD process occurring
in a single pile-up interaction,
and \textit{stochastic jets}, which combine particles from different interactions.
Figure \ref{fig:eventDisplayCentral} shows an event with a hard-scatter jet, a QCD pile-up
jet and a stochastic pile-up jet.
Most of the particles associated with the hard-scatter jet originate from the primary interaction.
Most of the particles associated with the QCD pile-up jet originate from
a single pile-up interaction. The stochastic pile-up jet includes
particles associated with both pile-up interactions in the event, without a single prevalent source.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=60mm]{evtDisplay_new}
\end{center}
\caption{Display of a simulated event in $r$--$z$ view containing a hard-scatter jet,
a QCD pile-up jet, and a stochastic pile-up jet.
The \ensuremath{\Delta\RpT}\xspace values (defined in Section~\ref{sec:diffrpt}) are quoted for the two pile-up jets.}
\label{fig:eventDisplayCentral}
\end{figure}
While this binary classification is convenient for the purpose of description,
the boundary between the two categories is somewhat arbitrary.
This is particularly true in harsh pile-up conditions, with dozens of concurrent
$pp$ interactions, where every jet, including those originating primarily from the identified
hard-scatter interaction, also has contributions from multiple pile-up interactions.
In order to identify and reject forward pile-up jets, a twofold strategy is adopted.
Stochastic jets have intrinsic differences in shape with respect to hard-scatter and QCD pile-up jets,
and this shape can be used for discrimination.
On the other hand, the calorimeter signature of QCD pile-up jets does not differ fundamentally from that of hard-scatter jets.
Therefore, QCD pile-up jets are identified by exploiting transverse momentum conservation
in individual pile-up interactions.
The nature of pile-up jets can vary significantly depending on whether
or not most of the jet energy originates from a single interaction.
Figure~\ref{fig:quotienttr} shows the fraction of QCD pile-up jets among all pile-up jets,
when considering inclusive truth-particle jets.
The corresponding distributions for reconstructed jets are shown in Figure~\ref{fig:quotient}.
When considering only in-time pile-up contributions (Figure~\ref{fig:quotienttr}),
the fraction of QCD pile-up jets depends on the pseudorapidity and
\pt of the jet and the average number of interactions per bunch crossing $\langle\mu\rangle$.
Stochastic jets are more likely at low \pt and $|\eta|$ and in harsher pile-up conditions.
However, the comparison between Figure~\ref{fig:quotienttr}, containing inclusive truth-particle jets,
and Figure~\ref{fig:quotient}, containing reconstructed jets,
suggests that only a small fraction of stochastic jets are due to in-time pile-up.
Indeed, the fraction of QCD pile-up jets decreases significantly once out-of-time pile-up effects and
detector noise and resolution are taken into account.
Even though the average amount of out-of-time energy is higher in the forward region,
topo-clustering results in a stronger suppression of this contribution in the forward region.
Therefore, the fraction of QCD pile-up jets increases in the forward region, and it
constitutes more than 80\% of pile-up jets with $\pT > 30\GeV$ overall.
The fraction of stochastic jets becomes more prominent at low \pt and it grows as the number
of interactions increases.
The majority of pile-up jets in the forward region are QCD pile-up jets, although a sizeable
fraction of stochastic jets is present in both the central and forward regions.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfractr_vs_eta}\label{qcdvseta}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfractr_vs_pt}\label{qcdvspt}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfractr_vs_mu_20}\label{qcdvsnpv20}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfractr_vs_mu_30}\label{qcdvsnpv30}}
\end{center}
\caption{Fraction of pile-up tagged inclusive truth-particle jets classified as QCD pile-up jets
as a function of \protect\subref{qcdvseta} $|\eta|$, \protect\subref{qcdvspt} \pt,
and \protect\subref{qcdvsnpv20} $\langle\mu\rangle$ for jets with $20\GeV<\pT<30\GeV$ and
\protect\subref{qcdvsnpv30} $30\GeV<\pT<40\GeV$,
as estimated in dijet events with \PYTHIAeight pile-up simulation. The inclusive truth-particle jets are
reconstructed from truth particles originating from all in-time pile-up interactions.}
\label{fig:quotienttr}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfrac_vs_eta}\label{qcdvseta}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfrac_vs_pt}\label{qcdvspt}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfrac_vs_mu_20}\label{qcdvsnpv20}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{qcdfrac_vs_mu_30}\label{qcdvsnpv30}}
\end{center}
\caption{Fraction of reconstructed pile-up jets classified as QCD pile-up jets,
as a function of \protect\subref{qcdvseta} $|\eta|$, \protect\subref{qcdvspt} \pt,
and \protect\subref{qcdvsnpv20} $\langle\mu\rangle$ for jets with $20\GeV<\pT<30\GeV$
and \protect\subref{qcdvsnpv30} $30\GeV<\pT<40\GeV$,
as estimated in dijet events with \PYTHIAeight pile-up simulation.}
\label{fig:quotient}
\end{figure}
In the following, each source of forward pile-up jets is addressed with algorithms
targeting its specific features.
\section{Stochastic pile-up jet tagging with time and shape information}
\label{sec:width}
Given the evidence presented in Section~\ref{sec:pileup} that out-of-time pile-up
plays an important role for stochastic jets, a direct handle consists of the
timing information associated with the jet.
The jet timing $t_\mathrm{jet}$ is defined as the energy-weighted average of the timing of the
constituent clusters.
In turn, the cluster timing is defined as the energy-weighted average of the timing of the
constituent calorimeter cells.
The jet timing distribution, shown in Figure~\ref{fig:distTime}, is symmetric and centred at
$t_\mathrm{jet}=0$ for both the hard-scatter and pile-up jets.
However, the significantly wider distribution for stochastic jets reveals the large
out-of-time pile-up contribution.
For jets with $20<\pt{}<30$ \GeV{}, requiring $|t_\mathrm{jet}|<12$ ns ensures that 20\% of
stochastic pile-up jets are rejected while keeping 99\% of hard-scatter jets.
In the following, this is always applied as a baseline requirement when identifying stochastic pile-up jets.
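A minimal sketch of the timing computation and the baseline requirement is given below (illustrative only; the inputs are hypothetical cluster energies in \GeV~and times in ns).
\begin{verbatim}
# Energy-weighted jet timing and the baseline |t_jet| < 12 ns requirement.
def jet_timing(cluster_e, cluster_t):
    return sum(e * t for e, t in zip(cluster_e, cluster_t)) / sum(cluster_e)

def passes_timing(cluster_e, cluster_t, max_ns=12.0):
    return abs(jet_timing(cluster_e, cluster_t)) < max_ns

print(passes_timing([20.0, 5.0], [2.0, -40.0]))   # True (t_jet = -6.4 ns)
\end{verbatim}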
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.5\textwidth]{outDist2}\label{centraltime}}
\subfloat[][]{\includegraphics[width=0.5\textwidth]{outDist}\label{forwardtime}}
\end{center}
\caption{Distribution of the jet timing $t_\mathrm{jet}$ for hard-scatter, QCD pile-up
and stochastic pile-up jets in the \protect\subref{centraltime} central and \protect\subref{forwardtime} forward region.}
\label{fig:distTime}
\end{figure}
Stochastic jets can be further suppressed using shape information.
Being formed from a random collection of particles from different interactions,
stochastic jets lack the characteristic dense energy core of jets originating
from the showering and hadronization of a hard-scatter parton.
The energy is instead spread rather uniformly within the jet cone.
Therefore, pile-up mitigation techniques based on jet shapes have been shown to be
effective in suppressing stochastic pile-up jets \cite{CMS-PAS-JME-13-005}.
In this section, the challenges of this approach are presented, and
different algorithms exploiting the jet shape information are described
and characterized.
The jet width $w$ is a variable that characterizes the energy spread within a jet.
It is defined as
\begin{equation}
w = \frac{\sum_k{\ensuremath{\Delta R}\xspace(\mathrm{jet},k)\ensuremath{p_{\mathrm{T}}^k}\xspace}}{\sum_k{\ensuremath{p_{\mathrm{T}}^k}\xspace}},
\label{eq:clusterwidth}
\end{equation}
where the index $k$ runs over the jet constituents and $\ensuremath{\Delta R}\xspace(\mathrm{jet},k)$ is
the angular distance between the jet constituent $k$ and the jet axis.
The jet width is a useful observable for identifying stochastic jets, as
the average width is significantly larger for jets
with a smaller fraction of energy originating from a single interaction.
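A minimal sketch of Eq.~(\ref{eq:clusterwidth}) is given below (illustrative only; the constituent container is an assumption introduced here).
\begin{verbatim}
# Jet width: pT-weighted average angular distance of the constituents
# to the jet axis. Constituents are (pt, eta, phi) tuples.
import math

def jet_width(constituents, jet_eta, jet_phi):
    num = den = 0.0
    for pt, eta, phi in constituents:
        dphi = (phi - jet_phi + math.pi) % (2 * math.pi) - math.pi
        num += math.hypot(eta - jet_eta, dphi) * pt
        den += pt
    return num / den if den > 0 else 0.0

# A narrow two-constituent jet:
print(jet_width([(20, 3.02, 0.10), (10, 2.95, 0.05)], 3.0, 0.08))
\end{verbatim}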
In simulation the jet width can be computed using truth particles (\textit{truth-particle width}),
as a reference point to benchmark the performance of the reconstructed observable.
At detector level, the jet constituents are calorimeter topo-clusters.
In general, topo-clustering compresses the calorimeter information while retaining its
fine granularity. Ideally, each cluster captures the energy shower from a single
incoming particle.
However, the cluster multiplicity in jets decreases quickly
in the forward region, to the point where jets are formed by a single cluster and
the jet width can no longer be defined.
An alternative approach consists of using as constituents
the 11 by 11 grid of calorimeter towers centred around the jet axis.
The use of calorimeter towers ensures a fixed multiplicity given by the $0.1\times0.1$ granularity
so that the jet width always contains jet shape information.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vsnpv_truewidth}\label{vsnpvtruth}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vsnpv_clusterwidth}\label{vsnpvcluster}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vsnpv_towerwidth}\label{vsnpvtower}}
\end{center}
\caption{Dependence of the average jet width on the number of reconstructed primary vertices (\ensuremath{N_\mathrm{PV}}\xspace).
The distributions are shown using \protect\subref{vsnpvtruth} hard-scatter and in-time pile-up truth particles,
\protect\subref{vsnpvcluster} clusters, or \protect\subref{vsnpvtower} towers as constituents.}
\label{fig:width_limits}
\end{figure}
As shown in Figure~\ref{fig:width_limits}, the average jet width depends on the pile-up conditions.
At higher pile-up values, a larger number of pile-up particles are likely
to contribute to a jet, thus broadening the energy distribution within the jet
itself. As a result, the width drifts towards higher values for hard-scatter, QCD pile-up,
and stochastic jets.
The difference in width between hard-scatter and QCD pile-up jets is due to the different underlying
\pt spectra. The spectrum of QCD pile-up jets is softer than that of the hard-scatter jets for the process considered (\ttbar);
therefore, a significant fraction of QCD pile-up jets are reconstructed with \pt between $20\GeV$
and $30 \GeV$ because the stochastic and out-of-time component is larger than in hard-scatter jets.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=70mm]{lego}
\end{center}
\caption{
Distribution of the average tower \pt for hard-scatter jets as a function of the angular distance from the jet axis in $\eta$
and $\phi$ in simulated \ttbar events.}
\label{fig:towerDist}
\end{figure}
Using calorimeter towers as constituents, it is possible
to explore the \pt distribution within a jet with a fixed $\eta\times\phi$ granularity.
Figure \ref{fig:towerDist} shows the two-dimensional \pt distribution
around the jet axis for hard-scatter jets. The distribution is symmetric in $\phi$, while the pile-up pedestal
decreases with increasing $\eta$, as is expected in the forward region.
A new variable, designed to exploit the full information about tower constituents,
is considered.
The two-dimensional\footnote{The simultaneous fit of both dimensions was found to perform better than the fit of a 1D projection.}
\pt distribution in the \ensuremath{\Delta\eta}\xspace--\ensuremath{\Delta\phi}\xspace plane centred
around the jet axis is fitted with a function
\begin{equation}
f = \alpha+\beta\Delta\eta+\gamma \mathrm{e}^{-\frac{1}{2}\left(\frac{\ensuremath{\Delta\eta}\xspace}{0.1}\right)^2-\frac{1}{2}\left(\frac{\ensuremath{\Delta\phi}\xspace}{0.1}\right)^2}.
\label{eq:gausfunc}
\end{equation}
Both the width of the Gaussian component of the fit and the range in which the fit is
performed are treated as jet-independent constants.
The fit range, an $11\times11$ tower grid, balances two competing effects: a larger range improves
the measurement of the constant ($\alpha$) and linear ($\beta$) terms, while a smaller range
reduces the risk of including pile-up fluctuations from outside the jet.
On average, the jet tower \pt distribution is symmetric with respect to $\Delta\phi$,
and pile-up rejection at constant hard-scatter efficiency is improved by averaging the tower momenta at $\ensuremath{|\Delta\phi|}\xspace$ and
$-\ensuremath{|\Delta\phi|}\xspace$ so that fluctuations are partially cancelled before performing the fit.
The constant ($\alpha$) and linear ($\beta$) terms in the fit capture the average stochastic pile-up
contribution to the jet \pt distribution, while the Gaussian term describes
the \pt distribution from the underlying QCD jet.
The parameter $\gamma$ therefore represents a stochastic pile-up-subtracted estimate of the \pt
of such a QCD pile-up jet in a $\Delta R=0.1$ core assuming a Gaussian \pt distribution of its
constituent towers.
By definition, $\gamma$ does not depend on the amount of pile-up in the event,
but only on the nature of the jet as stochastic or QCD.
In order to make the fitting procedure more robust, the Gaussian width parameter is fixed.
While the width of a QCD pile-up jet is expected to depend on the truth-particle jet \pt and $\eta$,
such dependence is negligible in the \pt range relevant for these studies (20--50 \GeV).
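To make the fit concrete, the following minimal sketch (assuming \texttt{numpy} and \texttt{scipy}; toy data, not ATLAS software) fits Eq.~(\ref{eq:gausfunc}) to a synthetic $11\times11$ tower grid; the $\phi$-symmetrization step described above is omitted for brevity.
\begin{verbatim}
# Pedestal-plus-Gaussian-core fit of the tower pT grid (toy example).
import numpy as np
from scipy.optimize import curve_fit

def model(X, alpha, beta, gamma):
    deta, dphi = X
    core = np.exp(-0.5 * (deta / 0.1) ** 2 - 0.5 * (dphi / 0.1) ** 2)
    return alpha + beta * deta + gamma * core

# 11x11 grid of 0.1 x 0.1 towers centred on the jet axis
deta, dphi = np.meshgrid(np.linspace(-0.5, 0.5, 11),
                         np.linspace(-0.5, 0.5, 11), indexing="ij")
rng = np.random.default_rng(1)
pt = model((deta, dphi), 0.5, 0.2, 8.0)       # injected gamma = 8 GeV
pt = pt + rng.normal(0.0, 0.1, pt.shape)      # toy fluctuations

popt, _ = curve_fit(model, (deta.ravel(), dphi.ravel()), pt.ravel())
print(round(popt[2], 2))                       # gamma close to 8.0
\end{verbatim}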
Figure~\ref{fig:singleJets}, showing projections of the tower distribution with the fit function overlaid,
illustrates the characteristic peaking shape of pure QCD jets compared with the flatter distribution
in stochastic jets.
The hard-scatter jet distribution displays the expected, sharply peaked distribution,
while the stochastic pile-up jet distribution is flat with various off-centre features, reflecting the randomness of the underlying processes.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{figures/singleJetTowerDist/schematic_hs}\label{jeths}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{figures/singleJetTowerDist/schematic_sto}\label{jetsto}}
\end{center}
\caption{Symmetrized tower \pt distribution projections in $\phi$
for an example \protect\subref{jeths} hard-scatter jet and \protect\subref{jetsto} stochastic
pile-up jet in simulated \ttbar events.
The black histogram line corresponds to the projection of the 2D tower distribution.
The fit model closely follows the hard-scatter jet distribution, yielding a large Gaussian signal,
while stochastic pile-up jets feature multiple smaller signals, away from the jet core.}
\label{fig:singleJets}
\end{figure}
The performance of the $\gamma$ variable and of the cluster-based
and tower-based widths is compared in Figure~\ref{fig:roc_constCompare},
where the efficiency for stochastic pile-up jets is shown as a function of the
hard-scatter jet efficiency. Each curve is obtained by applying an upper or lower
bound on the jet width or $\gamma$, respectively, in order to select hard-scatter jets.
The tower-based width outperforms the cluster-based width over the whole efficiency
range, while the $\gamma$ variable performs similarly to the tower-based width.
The hard-scatter efficiency and pile-up efficiency dependence on the number of reconstructed vertices
in the event (\ensuremath{N_\mathrm{PV}}\xspace) and $\eta$ is shown in Figure~\ref{fig:eff_constCompare};
the requirement for each discriminant is tuned so that an overall efficiency of 90\%
is achieved for hard-scatter jets.
By construction, the performance of the $\gamma$ variable is less affected by the
pile-up conditions than the two width variables.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{roc_constCompare_sto_mu10}\label{rocconst10}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{roc_constCompare_sto_mu30}\label{rocconst30}}
\end{center}
\caption{Efficiency for stochastic pile-up jets
as a function of the efficiency for hard-scatter jets
using different shape-based discriminants: \protect\subref{rocconst10} $10\leq\langle\mu\rangle<20$ and
\protect\subref{rocconst30} $30\leq\langle\mu\rangle<40$ in simulated \ttbar events.}
\label{fig:roc_constCompare}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{eff_constCompare}\label{effnpv}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{effeta_constCompare}\label{effeta}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{fake_constCompare_sto}\label{fakenpv}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{fakeeta_constCompare_sto}\label{fakeeta}}
\end{center}
\caption{
Hard-scatter jet efficiency as a function of \protect\subref{effnpv} number of reconstructed primary vertices \ensuremath{N_\mathrm{PV}}\xspace
and \protect\subref{effeta} pseudorapidity $|\eta|$, as well as
stochastic pile-up jet efficiency as a function of \protect\subref{fakenpv} number of reconstructed primary vertices \ensuremath{N_\mathrm{PV}}\xspace and
\protect\subref{fakeeta} pseudorapidity $|\eta|$ at 90\% efficiency of selecting hard-scatter jets in simulated \ttbar events.}
\label{fig:eff_constCompare}
\end{figure}
The $\gamma$ parameter is a good discriminant for stochastic pile-up jets because it
provides an estimate of the largest amount
of \pt in the jet originating from a single vertex. If there is no dominant contribution,
the \pt distribution does not
feature a prominent core, and therefore $\gamma$ is close to zero.
With this approach, all jets are effectively considered as QCD pile-up jets, and
$\gamma$ is used to estimate their core \pt.
Therefore, from this stage on, the challenge of pile-up rejection reduces to the identification and
rejection of QCD pile-up jets, which is discussed in the following section.
\section{QCD pile-up jet tagging with topological information}
\label{sec:fjvt}
While it has been shown that pile-up mitigation techniques based on jet shapes
are effective in suppressing stochastic pile-up jets,
such methods do not address QCD pile-up jets that are prevalent in the forward region.
This section describes the development of an effective rejection method
specifically targeting QCD pile-up jets.
QCD pile-up jets originate from a single \pp interaction where multiple jets can be
produced.
The total transverse momentum associated with each pile-up interaction is expected to be
conserved;\footnote{The cross-section of interactions producing high-\pt neutrinos is negligible
compared to the rate of multijet events.}
therefore all jets and central tracks associated with a given vertex can be exploited
to identify QCD pile-up jets beyond the tracking coverage of the inner detector.
The principle is clear if the dijet final state alone is considered.
Forward pile-up jets are therefore identified by looking for a pile-up jet
opposite in $\phi$ in the central region.
The main limitation of this approach is that it only addresses dijet pile-up interactions
in which both jets are reconstructed.
In order to address this challenge, a more comprehensive approach is adopted by considering the total
transverse momentum of tracks and jets associated with each reconstructed vertex independently.
The more general assumption is that the transverse momentum of each pile-up interaction
should be balanced, and any imbalance would be due to a forward jet from one of the interactions.
In order to properly compute the transverse momentum of each interaction, only QCD pile-up jets
should be considered.
Consequently, the challenge of identifying forward QCD pile-up jets using transverse momentum conservation
with central pile-up jets requires being able to discriminate between QCD and
stochastic pile-up jets in the central region.
\subsection{A discriminant for central pile-up jet classification}
\label{sec:diffrpt}
Discrimination between stochastic and QCD pile-up jets
in the central region can be achieved using track and vertex information.
This section describes a new discriminant built for this purpose.
The underlying features of QCD and stochastic pile-up jets are different.
Tracks matched to QCD pile-up jets mostly originate from a vertex PV$_i$ corresponding
to a pile-up interaction ($i\neq0$), thus yielding $\ensuremath{R_\mathrm{pT}}\xspace^i>\ensuremath{R_\mathrm{pT}}\xspace^0$ for a given jet.
Such jets have large values of $\ensuremath{R_\mathrm{pT}}\xspace^i$ with respect to the pile-up
vertex $i$ from which they originated.
Tracks matched to stochastic pile-up jets are not likely to originate from
the same interaction, thus yielding small $\ensuremath{R_\mathrm{pT}}\xspace^i$ values with respect to any vertex $i$.
This feature can be exploited to discriminate between these two categories.
For stochastic pile-up jets, the largest $\ensuremath{R_\mathrm{pT}}\xspace^i$ value is expected to be of similar size to the median $\ensuremath{R_\mathrm{pT}}\xspace^i$ value across all vertices,
while a large difference appears for QCD pile-up jets, as most of their tracks originate from the
same pile-up vertex.
Thus, the difference between the leading and median values of
$\ensuremath{R_\mathrm{pT}}\xspace^i$ for a central jet, \ensuremath{\Delta\RpT}\xspace, can be used for distinguishing QCD pile-up jets
from stochastic pile-up jets in the central region, as shown in Figure~\ref{fig:diffRpT}.
A minimum \ensuremath{\Delta\RpT}\xspace requirement can effectively reject stochastic pile-up jets.
In the following a $\ensuremath{\Delta\RpT}\xspace>0.2$ requirement is applied for central jets with
$\pT < 35 \GeV$.
Above this threshold the fraction of stochastic pile-up jets is negligible, and all pile-up jets are
therefore assumed to be QCD pile-up jets
irrespective of their \ensuremath{\Delta\RpT}\xspace value.
The choice of threshold depends on the pile-up conditions.
This choice is tuned to be optimal for the collisions considered in this study,
with an average of 13.5 interactions per bunch crossing.
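The discriminant and the working point can be summarized in a few lines; the sketch below is illustrative only (the inputs are hypothetical per-vertex $\ensuremath{R_\mathrm{pT}}\xspace^i$ values).
\begin{verbatim}
# Delta RpT: difference between the largest and the median of the
# per-vertex RpT^i values of a central jet.
import statistics

def delta_rpt(rpt_per_vertex):
    return max(rpt_per_vertex) - statistics.median(rpt_per_vertex)

def is_qcd_pileup_candidate(rpt_per_vertex, jet_pt):
    # above 35 GeV, all pile-up jets are treated as QCD pile-up
    return jet_pt >= 35.0 or delta_rpt(rpt_per_vertex) > 0.2

print(is_qcd_pileup_candidate([0.02, 0.65, 0.01, 0.03], jet_pt=28.0))  # True
\end{verbatim}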
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.475\textwidth]{distdiff_lin}
\end{center}
\caption{Distribution of \ensuremath{\Delta\RpT}\xspace for stochastic and QCD pile-up jets,
as observed in dijet events with \PYTHIAeight pile-up simulation.}
\label{fig:diffRpT}
\end{figure}
The total transverse momentum of each vertex is thus computed by averaging, via a vector sum, two
estimates: the total transverse momentum of the tracks and that of the central jets assigned to the vertex.
The jet--vertex matching is performed by considering the largest $\ensuremath{R_\mathrm{pT}}\xspace^i$ for each jet.
The transverse momentum vector ($\boldsymbol{p}_\mathrm{T}$) of a given forward jet is then compared
with the total transverse momentum of each vertex
in the event. If there is at least one pile-up vertex in the event with a large total vertex transverse
momentum back-to-back in $\phi$ with respect to the forward jet, the jet itself is likely to have
originated from that vertex.
Figure \ref{fig:fjvtevt} shows an example event, where the $\boldsymbol{p}_\mathrm{T}$ of a forward pile-up jet
is back-to-back with respect to the
total transverse momentum of the vertex from which it is expected to have originated.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=150mm]{atlantis4pan.pdf}
\end{center}
\caption{
Display of candidate $\Zboson(\to\mu\mu)$ event (muons in yellow) containing two QCD pile-up jets.
Tracks from the primary vertex are in red, those from the pile-up vertex with the highest $\sum p_\mathrm{T}^2$
are in green. The top panel shows a transverse and longitudinal view of the detector,
while the bottom panel shows the details of the event in the ID in the longitudinal view.
}
\label{fig:fjvtevt}
\end{figure}
\subsection{Forward Jet Vertex Tagging algorithm}
The procedure is referred to as \textit{forward Jet Vertex Tagging} (\ensuremath{\mathrm{fJVT}}\xspace).
The main parameters for the forward JVT algorithm are thus the maximum JVT value,
$\ensuremath{\mathrm{JVT}}\xspace_\mathrm{max}$, to
reject central hard-scatter jets and the minimum \ensuremath{\Delta\RpT}\xspace requirement to ensure the
selected pile-up jets are QCD pile-up jets.
$\ensuremath{\mathrm{JVT}}\xspace_\mathrm{max}$ is set to 0.14 corresponding to an efficiency of selecting pile-up jets of
93\% in dijet events.
The minimum \ensuremath{\Delta\RpT}\xspace requirement defines the operating point in terms of efficiency for
selecting QCD pile-up jets and contamination from stochastic pile-up jets.
A minimum \ensuremath{\Delta\RpT}\xspace of 0.2 is required, corresponding to an efficiency of $70\%$
for QCD pile-up jets and $20\%$ for stochastic pile-up jets in dijet events.
The selected jets are then assigned to the vertex PV$_i$ corresponding to the highest
$\ensuremath{R_\mathrm{pT}}\xspace^i$ value.
For each pile-up vertex $i$, $i\neq0$,
the missing transverse momentum $\langle\boldsymbol{p}_{\mathrm{T},i}^\mathrm{miss}\rangle$ is computed as
the weighted vector sum of the jet ($\boldsymbol{p}_\mathrm{T}^\mathrm{jet}$) and track ($\boldsymbol{p}_\mathrm{T}^\mathrm{track}$) transverse momenta:
\begin{equation}
\langle\boldsymbol{p}_{\mathrm{T},i}^\mathrm{miss}\rangle=-\frac{1}{2}\left(\sum_{\mathrm{tracks \in PV}_i}k\boldsymbol{p}_\mathrm{T}^\mathrm{track} + \sum_{\mathrm{jets \in PV}_i}\boldsymbol{p}_\mathrm{T}^\mathrm{jet}\right).
\end{equation}
The factor $k$ accounts for intrinsic differences between the jet and track terms.
The track component does not include the contribution of neutral particles,
while the jet component is not sensitive to soft emissions significantly below 20 \GeV.
The value $k=2.5$ is chosen as the one that optimizes the overall rejection of
forward pile-up jets.
The \ensuremath{\mathrm{fJVT}}\xspace discriminant for a given forward jet, with respect to the vertex $i$,
is then defined as the normalized projection of the missing transverse momentum on
$\boldsymbol{p}_\mathrm{T}^\mathrm{fj}$:
\begin{equation}
\mathrm{fJVT}_i = \frac{\langle\boldsymbol{p}_{\mathrm{T},i}^\mathrm{miss}\rangle\cdot\boldsymbol{p}_\mathrm{T}^\mathrm{fj}}{|\boldsymbol{p}_\mathrm{T}^\mathrm{fj}|^2},
\end{equation}
where $\boldsymbol{p}_\mathrm{T}^\mathrm{fj}$ is the forward jet's transverse momentum.
The motivation for this definition is that the amount of missing transverse momentum in the direction of the
forward jet needed for the jet to be tagged should be proportional to the jet's transverse momentum.
The forward jet is therefore tagged as pile-up if its \ensuremath{\mathrm{fJVT}}\xspace value, defined as
$\ensuremath{\mathrm{fJVT}}\xspace=\mathrm{max}_i(\ensuremath{\mathrm{fJVT}}\xspace_i)$, is above a threshold.
The choice of threshold determines the pile-up rejection performance.
The \ensuremath{\mathrm{fJVT}}\xspace discriminant tends to have larger values for
QCD pile-up jets, while the distribution for hard-scatter jets falls steeply,
as shown in Figure~\ref{fig:fjvtbase}.
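The construction can be summarized in a short sketch (illustrative only, assuming \texttt{numpy}; the inputs are hypothetical 2D \pt vectors in \GeV): the per-vertex missing transverse momentum with $k=2.5$ is projected onto the forward-jet direction and normalized to the forward-jet \pt.
\begin{verbatim}
# Per-vertex missing pT and the fJVT discriminant.
import numpy as np

def vertex_pt_miss(track_pts, jet_pts, k=2.5):
    """track_pts, jet_pts: arrays of 2D pT vectors assigned to one vertex."""
    return -0.5 * (k * np.sum(track_pts, axis=0) + np.sum(jet_pts, axis=0))

def fjvt(forward_jet_pt_vec, pt_miss_per_vertex):
    pj = np.asarray(forward_jet_pt_vec)
    return max(np.dot(pm, pj) / np.dot(pj, pj) for pm in pt_miss_per_vertex)

pm1 = vertex_pt_miss(np.array([[-4.0, 0.0]]), np.array([[-20.0, 0.0]]))
print(fjvt([30.0, 0.0], [pm1]))   # 0.5 -> tagged at the fJVT > 0.4 working point
\end{verbatim}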
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{fjvtdist30}\label{fjvt30}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{fjvtdist40}\label{fjvt40}}
\end{center}
\caption{The \ensuremath{\mathrm{fJVT}}\xspace distribution for hard-scatter (blue) and pile-up (green) forward jets
in simulated $\Zboson+$jets events with at least one forward jet with \protect\subref{fjvt30} $30<\pt<40$ \GeV
or \protect\subref{fjvt40} $40<\pt<50$ \GeV.}
\label{fig:fjvtbase}
\end{figure}
\subsection{Performance}
Figure \ref{fig:fJVTroc} shows the efficiency of selecting forward pile-up jets
as a function of the efficiency of selecting forward hard-scatter
jets when varying the maximum \ensuremath{\mathrm{fJVT}}\xspace requirement.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.475\textwidth]{roc2}
\end{center}
\caption{
Efficiency for pile-up jets in simulated $\Zboson+$jets events as a function of the efficiency for hard-scatter jets for
different jet \pt ranges.
}
\label{fig:fJVTroc}
\end{figure}
Using a maximum \ensuremath{\mathrm{fJVT}}\xspace of 0.5 and 0.4 respectively, hard-scatter efficiencies of
92\% and 85\% are achieved for pile-up efficiencies of 60\% and 50\%,
considering jets with $20<\pT<50\GeV$.
The dependence of the hard-scatter and pile-up efficiencies on the forward jet \pT is
shown in Figure~\ref{fig:fJVTeff}.
For low-\pt forward jets, the probability of an upward fluctuation in the \ensuremath{\mathrm{fJVT}}\xspace value
is more likely, and therefore the efficiency for hard-scatter jets is slightly lower
than for higher-\pt jets.
The hard-scatter efficiency depends on the number of pile-up interactions,
as shown in Figure~\ref{fig:fJVTeffVsNPV}, as busier pile-up conditions
increase the chance of accidentally matching the hard-scatter jet to a pile-up vertex.
The pile-up efficiency depends on the \pT of the forward jets, due to the \pT-dependence
of the relative numbers of QCD and stochastic pile-up jets.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{effhs_vs_pt}\label{hseff}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{effpu_vs_pt}\label{pueff}}
\end{center}
\caption{
Efficiency for \protect\subref{hseff} hard-scatter jets and
\protect\subref{pueff} pile-up jets as a function of the forward jet \pt in simulated $\Zboson+$jets events.}
\label{fig:fJVTeff}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{hseff_vs_npv_30}\label{hs30}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{hseff_vs_npv_40}\label{hs40}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{pueff_vs_npv_30}\label{pu30}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{pueff_vs_npv_40}\label{pu40}}
\end{center}
\caption{
Efficiency in simulated $\Zboson+$jets events
as a function of \ensuremath{N_\mathrm{PV}}\xspace for hard-scatter forward jets with \protect\subref{hs30} $30\GeV<\pT<40\GeV$
and \protect\subref{hs40} $40\GeV<\pT<50\GeV$, and for pile-up forward jets with
\protect\subref{pu30} $30\GeV<\pT<40\GeV$ and \protect\subref{pu40} $40\GeV<\pT<50\GeV$.}
\label{fig:fJVTeffVsNPV}
\end{figure}
\subsection{Efficiency measurements}
The \ensuremath{\mathrm{fJVT}}\xspace efficiency for hard-scatter jets is measured in $\Zboson+\mathrm{jets}$ data events,
exploiting a tag-and-probe procedure similar to that described in
Ref.~\cite{Aad:2015ina}.
For $Z(\to\mu\mu)+$jets events, selected by single-muon triggers,
two muons of opposite sign and $\pT>25\GeV$ are required,
such that their invariant mass lies between 66 \GeV and 116 \GeV.
Events are further required to satisfy event and jet quality criteria,
and a veto on cosmic-ray muons.
Using the leading forward jet recoiling against the \Zboson boson as a probe,
a signal region of forward hard-scatter jets is defined as the back-to-back region specified by
$|\Delta \phi (\Zboson, \mathrm{jet})| > 2.8$ rad.
In order to select a sample pure in forward hard-scatter jets,
events are required to have no central hard-scatter jets with $\pt>20 \GeV$, identified with \ensuremath{\mathrm{JVT}}\xspace,
and exactly one forward jet.
The \Zboson boson is required to have $\pT > 20 \GeV$, as events in which the \Zboson boson
has \pT less than the minimum defined jet \pT have a lower hard-scatter purity.
The above selection results in a forward hard-scatter signal region that is greater than
98\% pure in hard-scatter jets relative to pile-up jets, as estimated in simulation.
The \ensuremath{\mathrm{fJVT}}\xspace distributions for data and simulation in the signal region are compared in Figure
\ref{fig:basicComparison}.
The data distribution is observed to have fewer jets with high \ensuremath{\mathrm{fJVT}}\xspace than predicted by simulation,
consistent with an overestimation of the number of pile-up jets, as reported in
Ref.~\cite{Aad:2015ina}.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/distribution_20_30}\label{lowpt}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/distribution_30_50}\label{highpt}}
\end{center}
\caption{Distributions of \ensuremath{\mathrm{fJVT}}\xspace for jets with \pT \protect\subref{lowpt} between 20 and 30 \GeV
and \protect\subref{highpt} between 30 and 50 \GeV for data (black circles) and simulation (red squares).
The lower panels display the ratio of the data to the simulation.
The grey bands account for statistical and systematic uncertainties.}
\label{fig:basicComparison}
\end{figure}
The pile-up jet contamination in the signal region $N_{\mathrm{PU}}^\mathrm{signal}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|>2.8~\mathrm{rad})$ is estimated
in a pile-up-enriched
control region with $|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|<1.2$ rad, based on the assumption that the $|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|$
distribution is uniform for pile-up jets.
The validity of this assumption was verified in simulation.
The pile-up jet rate in data is therefore used to estimate
the contamination of the signal region as
\begin{multline}
N_{\mathrm{PU}}^\mathrm{signal}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|>2.8~\mathrm{rad}) = \\
[N_\mathrm{j}^\mathrm{control}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|<1.2~\mathrm{rad}) - N_\mathrm{HS}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|<1.2~\mathrm{rad})] \cdot (\pi - 2.8~\mathrm{rad})/1.2~\mathrm{rad},
\end{multline}
where ${N_\mathrm{j}^\mathrm{control}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|<1.2~\mathrm{rad})}$ is the number of jets in the data
control region and ${N_\mathrm{HS}(|\ensuremath{\Delta\phi(\Zboson,\mathrm{jet})}\xspace|<1.2~\mathrm{rad})}$ is the expected number of hard-scatter jets in the control region,
as predicted in simulation.
The hard-scatter efficiency is therefore measured in the signal region as
\begin{eqnarray}
\varepsilon = \frac{N_\mathrm{j}^\mathrm{pass} - N_\mathrm{PU}^\mathrm{pass}}{N_\mathrm{j}^\mathrm{signal} - N_{\mathrm{PU}}^\mathrm{signal}},
\end{eqnarray}
where $N_\mathrm{j}^\mathrm{signal}$ and $N_\mathrm{j}^\mathrm{pass}$ denote respectively the overall number
of jets in the signal region and the number of jets in the signal region satisfying
the \ensuremath{\mathrm{fJVT}}\xspace requirements. The terms $N_\mathrm{PU}^\mathrm{pass}$ and $N_\mathrm{PU}^\mathrm{signal}$ represent the overall number of pile-up jets in the signal region
and the number of pile-up jets satisfying the \ensuremath{\mathrm{fJVT}}\xspace requirements, respectively, and are both estimated from simulation.
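The arithmetic of the method can be illustrated with a short sketch (toy numbers, not measured yields).
\begin{verbatim}
# Tag-and-probe arithmetic: extrapolate the pile-up contamination from
# the control region (|dphi| < 1.2) to the signal region (|dphi| > 2.8),
# assuming a flat |dphi| distribution for pile-up jets, then correct
# the measured hard-scatter efficiency.
import math

def pu_in_signal(n_jets_control, n_hs_control_mc):
    return (n_jets_control - n_hs_control_mc) * (math.pi - 2.8) / 1.2

def hs_efficiency(n_pass, n_pu_pass_mc, n_signal, n_pu_signal):
    return (n_pass - n_pu_pass_mc) / (n_signal - n_pu_signal)

n_pu_sig = pu_in_signal(n_jets_control=500.0, n_hs_control_mc=380.0)
print(round(hs_efficiency(900.0, 20.0, 1000.0, n_pu_sig), 3))  # ~0.911
\end{verbatim}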
Figure \ref{fig:efficiency} shows the hard-scatter efficiency evaluated in data and
simulation.
The uncertainties correspond to a 30\% uncertainty in the number of pile-up jets and a 10\%
uncertainty in the number of hard-scatter jets in the signal region. The uncertainties are
estimated by comparing data and simulation in the pile-up- and hard-scatter-enriched
regions, respectively.
The hard-scatter efficiency is found to be underestimated in simulation.
The level of disagreement is observed to be larger at low jet \pt and high $|\eta|$ and can be as large as about 3\%.
The efficiencies evaluated in this paper are used to define a calibration procedure accounting for this discrepancy.
The uncertainties associated with the calibration and resolution of the jets used to compute \ensuremath{\mathrm{fJVT}}\xspace are estimated in ATLAS analyses
by recomputing \ensuremath{\mathrm{fJVT}}\xspace for each variation reflecting a systematic uncertainty.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/effpt_loose}\label{effptloose}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/effpt_tight}\label{effpttight}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/effeta_loose}\label{effetaloose}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{calibration/effeta_tight}\label{effetatight}}
\end{center}
\caption{Efficiency for hard-scatter jets to pass \ensuremath{\mathrm{fJVT}}\xspace requirements as a function
of (\protect\subref{effptloose} and \protect\subref{effpttight}) \pT
and (\protect\subref{effetaloose} and \protect\subref{effetatight}) $|\eta|$
for the (\protect\subref{effptloose} and \protect\subref{effetaloose}) 92\%
and (\protect\subref{effpttight} and \protect\subref{effetatight}) 85\%
hard-scatter efficiency operating points of the \ensuremath{\mathrm{fJVT}}\xspace discriminant
in data (black circles) and simulation (red squares).
The lower panels display the ratio of the data to the simulation.
The grey bands account for statistical and systematic
uncertainties.}
\label{fig:efficiency}
\end{figure}
\section{Pile-up jet tagging with shape and topological information}
\label{sec:comb}
The \ensuremath{\mathrm{fJVT}}\xspace and $\gamma$ discriminants correspond to a twofold strategy for pile-up rejection
targeting QCD and stochastic pile-up jets, respectively.
However, as highlighted in Section~\ref{sec:pileup}, this classification is not well defined as
all jets have a stochastic component. Therefore, it is useful to define a coherent strategy
that addresses both the stochastic and QCD nature of pile-up jets at the same time.
The $\gamma$ parameter discussed in Section~\ref{sec:width} provides an estimate of the
\pt in the core of the jet originating from the single interaction contributing
the largest amount of transverse momentum to the jet.
Therefore, the \ensuremath{\mathrm{fJVT}}\xspace definition can be modified to exploit this estimation by replacing
the jet \pt with $\gamma$, so that
\begin{equation}
\ensuremath{\mathrm{fJVT}}\xspace_{\gamma} = \frac{\langle\boldsymbol{p}_{\mathrm{T},i}^\mathrm{miss}\rangle\cdot\boldsymbol{u}^\mathrm{fj}}{\gamma},
\end{equation}
where $\boldsymbol{u}^\mathrm{fj}$ is the unit vector representing the direction of the forward jet in the
transverse plane.
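As an illustration only, assuming the averaged missing-transverse-momentum vector of the pile-up vertices has already been computed as in the \ensuremath{\mathrm{fJVT}}\xspace definition, the modified discriminant could be evaluated as in the following sketch (all names hypothetical):
\begin{verbatim}
import numpy as np

def fjvt_gamma(met_avg_xy, fwd_jet_phi, gamma):
    """Project the averaged pile-up missing-pT vector onto the
    forward-jet direction in the transverse plane and normalise by
    gamma, the estimated hard component of the jet pT, instead of
    the full jet pT."""
    u_fj = np.array([np.cos(fwd_jet_phi), np.sin(fwd_jet_phi)])
    return float(np.dot(met_avg_xy, u_fj)) / gamma
\end{verbatim}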
Figure \ref{fig:roc_combo_pt} shows the performance of $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$ compared with \ensuremath{\mathrm{fJVT}}\xspace
and $\gamma$ independently. The $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$ discriminant outperforms the individual
discriminants over the whole efficiency range. In samples enriched in QCD pile-up jets
($30<\pt < 50$~\GeV),
the $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$ performance is driven by the topology information, while $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$
benefits from the shape information for rejecting stochastic pile-up jets.
A multivariate combination of \ensuremath{\mathrm{fJVT}}\xspace and $\gamma$ discriminants was also studied and
found to be similar in performance to $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$.
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{combinations/roc_20_30}\label{roc2030}}\quad
\subfloat[][]{\includegraphics[width=0.475\textwidth]{combinations/roc_30_50}\label{roc3050}}\\
\end{center}
\caption{Efficiency for selecting pile-up jets as a function of the efficiency for
selecting hard-scatter jets in simulated \ttbar events for \protect\subref{roc2030} jets with $20 \GeV<\pt<30 \GeV$
and \protect\subref{roc3050} jets with $30\GeV < \pt<50 \GeV$.}
\label{fig:roc_combo_pt}
\end{figure}
\section{Impact on physics of Vector-Boson Fusion}
\label{sec:vbf}
In order to quantify the impact of forward pile-up rejection on a VBF analysis,
the VBF $H\to\tau\tau$ signature is considered, in the case where both $\tau$-leptons decay leptonically.
The pile-up dependence of the signal purity (S/B) is studied in a simplified analysis in the
dilepton channel. Several other channels are used in the analysis of VBF $H\to\tau\tau$ by ATLAS;
the dilepton channel is chosen for this study by virtue of its simple selection and background composition.
The dominant background in this channel originates from $\Zboson+$jets production,
where the $\Zboson$ boson decays leptonically, either to electrons, muons, or a leptonically decaying $\tau\tau$ pair.
The rate of $\Zboson$ bosons produced in association with two jets satisfying the requirements
targeting the VBF topology is extremely low.
The requirements include large $\Delta\eta$ between the jets and large dijet invariant mass $m_\mathrm{jj}$.
However, background events with forward pile-up jets often have large $\Delta\eta$ and $m_\mathrm{jj}$,
mimicking the VBF topology.
As a consequence, the background acceptance grows almost quadratically with the number of pile-up interactions.
This section illustrates the mitigation of this effect that can be achieved with the pile-up rejection provided
by $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$.
The event selection used for this study was optimized using simulation without pile-up~\cite{cscbook}; a schematic rendering of the requirements is sketched after the list:
\begin{itemize}\itemsep0pt
\item The event must contain exactly two opposite-charge same-flavour leptons $\ell^+\ell^-$ ($\ell=e,\mu$), each with $\pT>15\GeV$;
\item The invariant mass of the lepton pair must satisfy $m_{\ell^+\ell^-}<66\GeV$ or $m_{\ell^+\ell^-}>116\GeV$;
\item The magnitude of the missing transverse momentum must be larger than $40\GeV$;
\item The event must contain two jets with $\pT>20\GeV$, one of which has $\pT>40\GeV$. The absolute difference in pseudorapidities $|\eta_{\mathrm{j}_1}-\eta_{\mathrm{j}_2}|$ must exceed 4.4 and the invariant mass of the two jets must exceed $700\GeV$;
\item For simulated VBF $H\to\tau\tau$ only, both jets are required to be truth-labelled as hard-scatter jets.
\end{itemize}
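The following Python sketch renders the above cuts schematically; the lepton and jet objects (assumed $\pt$-ordered) and the invariant masses $m_{\ell\ell}$, $m_\mathrm{jj}$ are hypothetical inputs computed upstream, units are \GeV, and the truth-labelling requirement applied to the signal sample is omitted:
\begin{verbatim}
def passes_vbf_selection(leptons, jets, met, m_ll, m_jj):
    """Schematic dilepton VBF H->tautau selection (momenta in GeV)."""
    if len(leptons) != 2:
        return False
    l1, l2 = leptons
    if l1.flavour != l2.flavour or l1.charge + l2.charge != 0:
        return False
    if l2.pt < 15:                   # both leptons above 15 GeV
        return False
    if 66 <= m_ll <= 116:            # Z-mass window veto
        return False
    if met < 40:                     # missing transverse momentum
        return False
    if len(jets) < 2:
        return False
    j1, j2 = jets[0], jets[1]
    if j1.pt < 40 or j2.pt < 20:     # leading/subleading thresholds
        return False
    if abs(j1.eta - j2.eta) <= 4.4:  # large pseudorapidity gap
        return False
    return m_jj > 700                # large dijet invariant mass
\end{verbatim}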
The impact of pile-up mitigation is emulated by randomly removing hard-scatter and
pile-up jets to match the performance of a $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$ requirement with 85\% overall
efficiency for hard-scatter jets with $20 < \pt < 50~\GeV$, as estimated in \ttbar simulation with an average $\langle\mu\rangle$ of 13.5.
The efficiencies are estimated as a function of the jet \pt and the average number of interactions per bunch crossing.
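A minimal sketch of this emulation, assuming hypothetical per-jet efficiency parameterisations \texttt{eff\_hs} and \texttt{eff\_pu} (functions of the jet \pt and of $\langle\mu\rangle$) derived from the \ttbar simulation:
\begin{verbatim}
import random

def emulate_fjvt_selection(jets, mu, eff_hs, eff_pu):
    """Randomly discard jets according to parameterised efficiencies,
    emulating an fJVT_gamma requirement at a fixed working point."""
    survivors = []
    for jet in jets:
        eff = eff_hs(jet.pt, mu) if jet.is_hard_scatter \
              else eff_pu(jet.pt, mu)
        if random.random() < eff:
            survivors.append(jet)
    return survivors
\end{verbatim}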
\begin{figure}[htbp]
\begin{center}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vbf_dist_back_paramt}\label{vbfb}}
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vbf_dist_sig_paramt}\label{vbfs}}\\
\subfloat[][]{\includegraphics[width=0.475\textwidth]{vbf_significance_paramt}\label{vbfsig}}
\end{center}
\caption{Relative expected yield variation of \protect\subref{vbfb} $\Zboson\to\ell\ell$ and \protect\subref{vbfs} VBF
$H\to\tau\tau$ events and \protect\subref{vbfsig} signal purity
as a function of the number of interactions per bunch crossing ($\langle\mu\rangle$), with
different levels of pile-up rejection using $\ensuremath{\mathrm{fJVT}}\xspace_\gamma$.
The expected signal and background yields at $\langle\mu\rangle=10$ are used as reference.
Parameterized hard-scatter efficiency and pile-up efficiency are used.
The lower panels display the ratio to the reference without pile-up rejection.}
\label{fig:fJVTapp_vbf_paramg}
\end{figure}
Figure~\ref{fig:fJVTapp_vbf_paramg} shows the expected numbers of signal and background events,
as well as the signal purity, as a function of $\langle\mu\rangle$.
When going from $\langle\mu\rangle$ of 10 to 35, the expected number of background events grows by a factor
of seven and the corresponding signal purity drops by a factor of eight,
indicating that the presence of pile-up jets enhances the background acceptance.
The slight decrease in signal acceptance is due to misidentification of pile-up jets as VBF jets.
The $\ensuremath{\mathrm{fJVT}}\xspace_{\gamma}$ algorithm mitigates the background growth, at the expense of a signal loss
determined by the hard-scatter jet efficiency.\footnote{Most VBF events are characterized by one
forward jet and one central jet.}
Therefore, the degradation of the purity due to pile-up can be effectively reduced.
For the specific final state and event selection under consideration, where $\Zboson+$jets production is the dominant background,
this results in about a fourfold improvement in signal purity at $\langle\mu\rangle=35$.
\clearpage
\section{Conclusions}
\label{sec:conclusion}
The presence of multiple $pp$ interactions per bunch crossing at the LHC,
referred to as pile-up,
results in the reconstruction of additional jets besides those from the hard-scatter
interaction.
The ATLAS baseline strategy for identifying and rejecting pile-up jets
relies on matching tracks to jets to determine the $pp$ interaction of origin.
This strategy cannot be applied for jets beyond the tracking coverage of the inner detector.
However, a broad spectrum of physics measurements at the LHC relies on the
reconstruction of jets at high pseudorapidities.
An example is the measurement of Higgs
boson production through vector-boson fusion. The presence of pile-up jets at high
pseudorapidities reduces the sensitivity to these signatures, since background events
containing pile-up jets can be incorrectly reconstructed as these final states.
The techniques presented in this paper allow the identification
and rejection of pile-up jets beyond the tracking coverage of the inner detector.
The strategy to perform such a task is twofold.
First, the information about the jet shape is used to estimate the
leading contribution to the jet above the stochastic pile-up noise.
Then the topological correlation among particles originating from a pile-up interaction
is exploited to extrapolate the jet vertex tagger, using track and vertex information,
beyond the tracking coverage of the inner detector to identify and reject pile-up
jets at high pseudorapidities.
When using both shape and topological information,
approximately 57\% of forward pile-up jets are rejected for a hard-scatter
efficiency of about 85\% at the pile-up conditions considered in this paper, with an average of 22 pile-up interactions.
In events with 35 pile-up interactions, typical conditions for the LHC operations in the near future,
37\%, 48\%, and 51\% of forward pile-up jets are rejected using, respectively, topological information,
shape information, and their combination, for the same 85\% hard-scatter efficiency.
A procedure is defined and used to measure the
efficiency of identifying hard-scatter jets in 3.2 \ifb of $pp$ collisions at $\sqrt s=13\TeV$ collected in
2015.
The efficiencies are measured in data and estimated in simulation as a function of the jet
kinematics. Discrepancies of up to approximately 3\% are observed, mainly due to the modelling of pile-up events.
The impact of forward pile-up rejection algorithms presented here
is estimated in a simplified study of Higgs boson production through vector-boson fusion
and decaying into a $\tau\tau$ pair; the signal purity for the baseline
selection under consideration, where $\Zboson+$jets production is the dominant background,
can be enhanced by a factor of about four for events with 35 pile-up interactions.
\section*{Acknowledgements}
\input{acknowledgements/Acknowledgements}
\printbibliography
\newpage
\input{atlas_authlist}
\end{document}
|
2,877,628,091,215 | arxiv | \section{Introduction}
\subsection{Outline of the main results}
Let $G$ be a finite group and $V$ a $G$-module of finite dimension over a field $\F$.
By a classical theorem of E. Noether \cite{noether:1926} the \emph{algebra of polynomial invariants} on $V$,
denoted by $\F[V]^G$, is finitely generated. Set
\begin{align*}
\beta(G,V)&:=\min\{d\in \mathbb{N}\mid \F[V]^G\text{ is generated by elements of degree at most } d\}, \\
\beta(G)&:=\sup\{\beta(G,V)\mid V\mbox{ is a finite dimensional }G\text{-module over } \F \}.
\end{align*}
The famous theorem on the {\it Noether bound} asserts that
\begin{align}\label{noetherbound} \beta(G)\leq |G| \end{align}
provided that $\ch(\F)$ does not divide the order of $G$
(see Noether \cite{noether:1916} in characteristic $0$ and Fleischmann \cite{fleischmann}, Fogarty \cite{fogarty} in positive characteristic).
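For instance, if the cyclic group $Z_n$ (with $\ch(\F)\nmid n$) acts on a one-dimensional module $V$ via a primitive $n$th root of unity, then $\F[V]^{Z_n}=\F[x^n]$, showing $\beta(Z_n,V)=n$; in view of \eqref{noetherbound} this gives $\beta(Z_n)=n$.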
Schmid proved in \cite{schmid} that over the field of complex numbers $\beta(G)=|G|$ holds only when $G$ is cyclic.
This was sharpened by Domokos and Heged\H us in \cite{domokos-hegedus} by proving
that $\beta(G)\leq \frac 34|G|$ for all non-cyclic $G$;
the result was extended to non-modular positive characteristic by Sezer \cite{sezer}.
The constant $3/4$ is optimal here.
On the other hand, a straightforward lower bound on $\beta(G)$ can be obtained
based on the result of Schmid in \cite{schmid}, that
$\beta(G)\geq\beta(H)$ holds for any subgroup $H$ of $G$, so in particular,
$\beta(G)$ is bounded from below by the maximal order of the elements in $G$. Therefore $\beta(G)\geq \frac 12|G|$
whenever $G$ contains a cyclic subgroup of index two,
and obviously there are infinitely many isomorphism classes of such non-cyclic groups.
The main result of the present article is that
(apart from four sporadic exceptions) these are the only groups for which the ratio of the Noether number to the group order is so large:
\begin{theorem}\label{thm:main}
For a finite group $G$ with order not divisible by $\ch (\F)$ we have $\beta(G)\geq \frac 12|G|$
if and only if $G$ has a cyclic subgroup of index at most two, or
$G$ is isomorphic to
$Z_3\times Z_3$,
$Z_2\times Z_2\times Z_2$,
the alternating group $A_4$, or
the binary tetrahedral group $\tilde{A_4}$.
\end{theorem}
This theorem is new even in the case $\F=\mathbb{C}$.
The main technical tool of its proof is a generalization of the Noether number
which allows us to formulate some reduction lemmata in Section~\ref{sec:red}
that can be used to infer estimates on the Noether number of a group
from the knowledge of the (generalized) Noether number of its subgroups and homomorphic images.
Theorem~\ref{thm:structure} then isolates a list of groups such that an arbitrary finite group $G$ must contain one of them
as a subgroup or a subquotient,
unless $G$ contains a cyclic subgroup of index at most two. Finally, the proof is made complete in Sections 2--3,
where we compute or estimate the (generalized) Noether number for the particular groups on this list.
The quest for degree bounds has always been in the focus of invariant theory. A practical motivation is that good initial degree bounds may decrease the running time of algorithms computing generators of invariant rings. On the other hand, the exact value of the Noether number is known only for very few groups.
To indicate the difficulties we mention the paper of Dixmier \cite{dixmier}, which investigates the Noether number for irreducible representations of the symmetric group of degree $5$. In the present paper, too, the discussion of some small groups and the estimation of their Noether numbers takes up a relatively large part
(especially where the exact value is found).
We finish the Introduction by noting that the constant $1/2$ in Theorem~\ref{thm:main} has a remarkable theoretical status.
In a parallel paper \cite{CzD:2} we determine the (generalized) Noether number for each non-cyclic group $G$ with a cyclic subgroup of index $2$: it turns out that
for such a $G$ we have $\beta(G)-\frac 12|G|\in\{1,2\}$. Consequently, for any $c>1/2$, up to isomorphism there are only finitely many non-cyclic groups
$G$ with $\beta(G)/|G|>c$, whereas there are infinitely many isomorphism classes of groups $G$ with $\beta(G)/|G|>1/2$.
In particular, the set $\{\beta(G)/|G| : G\mbox{ finite group}\}\subset \mathbb{Q}$ has no limit point strictly between $1/2$ and $1$.
\subsection{The Noether number and its generalization}\label{sec:prel}
Throughout this article $\F$ is a fixed algebraically closed base field and
$G$ is a finite group of order not divisible by $\ch (\F)$, unless explicitly stated otherwise.
By a graded module we mean an $\mathbb{N}$-graded $\F$-vector space $M=\bigoplus_{d=0}^\infty M_d$, which is a graded module over a commutative
$\mathbb{N}$-graded $\F$-algebra $R=\bigoplus_{d=0}^\infty R_d$ such that $R_0= \F$ is the base field when $R$ is unital,
or $R_0 = \{0\}$ otherwise (in the latter case we still assume that the multiplication map is $\F$-bilinear).
We set
$M_{\ge s} := \bigoplus_{d\ge s} M_d$,
$M_{\le s} := \bigoplus_{d=0}^s M_d$, and $M_{>s}:=\bigoplus_{d>s}M_d$.
We also use the notation $M_+ := M_{ \ge 1}$, so if we regard $R$ as a module over itself,
its maximal homogeneous ideal is $R_+$.
If $M$ is finitely generated as an $R$-module then set
\[ \beta(M, R) := \min \{s \in\mathbb{N} : M \mbox{ is generated as an }R\mbox{-module by }M_{\le s}\} \]
and write $\beta(M,R) = \infty$ otherwise.
By the graded Nakayama Lemma, a module
$M$ is generated by its homogeneous elements $m_1, ..., m_n $
if and only if
the $\F$-vector space $M/R_+M$ is spanned by the images $\overline{m}_1,\ldots,\overline{m}_n$.
As a consequence, $\beta(M,R)$ is the maximal degree of a non-zero homogeneous component of the factor space $M/R_+M$, inheriting the grading from $M$.
Obviously we have $\beta(M,R)=\beta(M,R_+)$.
The subalgebra of $R$ generated by $R_{\le s}$ will be denoted by $\F[R_{\le s}]$.
For subspaces $S,T$ of an $\F$-algebra $L$ we write $ST$ for the subspace spanned by the products $\{st\mid s\in S,t\in T\}$,
and use the notation $S^k:=S\dots S$ ($k$ factors) accordingly.
We set $\beta(R) := \beta(R_+, R)$.
This is the maximal degree of a homogeneous element $m \in R_+ \setminus R_+^2$,
i.e. $\beta(R)$ is the minimal $n$ such that $R$ is generated as an $\F$-algebra by homogeneous elements of degree at most $n$.
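For instance, the graded subalgebra $R=\F[x^2,x^3]$ of the polynomial ring $\F[x]$ satisfies $\beta(R)=3$: every homogeneous element of degree at least $4$ lies in $R_+^2$, whereas $x^3\in R_+\setminus R_+^2$.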
Let us apply the above concepts in the more particular setting of invariant theory.
Here we are given a group $G$ and a finite dimensional $\F$-vector space $V$ equipped with a group homomorphism $G \to \GL(V)$;
in this situation we also say that $V$ is a (left) $G$-module.
As an affine space, $V$ has a coordinate ring $\F[V]$ which is defined
in abstract terms as the symmetric tensor algebra of the dual space $V^*$.
Thus $\F[V]$ is isomorphic to a polynomial ring in $\dim(V)$ variables,
so in particular it is a graded ring and $\F[V]_1 \cong V^*$.
The left action of $G$ on $V$ induces a right action on $V^*$ given as
$x^g(v) = x(gv)$ for any $g\in G, v\in V$ and $x\in V^*$.
This right action of $G$ on $V^*$ extends multiplicatively onto the whole $\F[V]$.
Our basic object of study is the \emph{ring of polynomial invariants} defined as:
\[\F[V]^G := \{ f \in \F[V]: f^g = f \quad \forall g \in G \}.\]
The number $\beta(G,V) := \beta(\F[V]^G)$ is called the \emph{Noether number} of the $G$-module $V$.
By a classic result of Hilbert in \cite{hilbert90} $\beta(G,V)$ is finite
if $G$ is \emph{linearly reductive}. When $G$ is finite even more can be said.
The \emph{global degree bound} for a finite group $G$ is defined as
\begin{align*}\beta(G) := \sup_{V} \beta(G,V) \end{align*}
where $V$ runs through all $G$-modules over the field $\F$.
By Noether's degree bound \eqref{noetherbound},
if $|G|$ is not divisible by $\ch (\F)$ then $\beta(G)$ is finite.
The converse of this statement is also true: it was proved in \cite{derksen-kemper-global} for $\mathrm{char}(\F)=0$
and subsequently in \cite{bryant-kemper} for the whole non-modular case that
the finiteness of $\beta(G)$ implies the finiteness of the group $G$, as well.
As for the modular case, i.e. when $\mathrm{char}(\F)$ divides $|G|$, Richman constructed in \cite{richman}
a sequence of $G$-modules $V_1,V_2, ...$ such that $\beta(G,V_i) \to \infty$ as $i \to \infty$,
so in this case $\beta(G)$ is not finite.
Note that we suppressed $\F$ from the notation $\beta(G)$.
The dependence of $\beta(G)$ on the field $\F$ was studied by Knop in \cite{knop}.
He proved that $\beta(G)$ is the same for every field $\F$ with the same characteristic.
In particular this implies that $\beta(G)$ is the same for $\F$ and its algebraic closure.
So our running assumption that $\F$ is algebraically closed causes no loss of generality in the results.
Now let us summarize the previously known reduction lemmata by means of which
$\beta(G)$ can be bounded through induction on the structure of $G$:
\begin{lemma} \label{lemma:red}
We have $\beta(G)/|G| \le \beta(K)/|K|$ for any subquotient $K$ of $G$.
\end{lemma}
\begin{proof}
For any subgroup $H \le G$, resp. for any normal subgroup $N \triangleleft G$ the following reduction lemmata hold:
\begin{align}
\beta(G) &\le [G:H] \beta(H) \\
\beta(G) &\le \beta(G/N) \beta(N)
\end{align}
These were proved for characteristic $0$ by Schmid (see Lemma~3.2 and 3.1 in \cite{schmid})
and subsequently extended to the case when $\mathrm{char}(\F) \nmid |G|$ in \cite{sezer}, \cite{fleischmann:2}, \cite{knop}.
Our claim follows after dividing the above inequalities by $|G|$ and using that $\beta(N)/|N|\le 1$ by \eqref{noetherbound}. \end{proof}
We will introduce here a generalization of the Noether number with the intent of improving and generalizing Schmid's reduction lemmata above:
For a graded $R$-module $M$ and an integer $k\ge 1$ set
\[ \beta_k(M, R) : = \beta(M, R_+^k).\]
Note that $\beta_{1}(M,R) = \beta(M,R)$.
The abbreviation $\beta_k(R):= \beta_k(R_+,R)$ will also be used.
For a representation $V$ of a finite group $G$ over the field $\F$ we set $\beta_k(G,V):= \beta_k(\F[V]^G)$.
The trivial bound $\beta_k(G,V) \le k\beta(G,V)$ shows that this quantity is finite.
We also set
\[\beta_k(G) := \sup\{\beta_k(G,V)\mid V\text{ is a finite dimensional }G\text{-module over }\F\} \]
suppressing $\F$ from the notation as in the case of $\beta(G)$.
We shall refer to these numbers as the {\it generalized Noether numbers} of the group $G$.
\subsection{Reduction lemmata}\label{sec:red}
Our starting point is the following alternative characterization of the generalized Noether number:
\begin{proposition}\label{prop:altnoetnum}
$\beta_k(G)$ is the minimal positive integer $d$ having the property that for any
finitely generated commutative graded $\F$-algebra $L$ (with $L_0=\F$) on which $G$ acts via
graded $\F$-algebra automorphisms we have
\[L^G\cap L_+^{d+1}\subseteq
(L^G_+)^{k+1}.\]
\end{proposition}
\begin{proof} Let $L$ be a finitely generated commutative graded $\F$-algebra with $L_0=\F$ on which $G$ acts via graded $\F$-algebra automorphisms. There exists a finite dimensional $G$-module $V$ and a $G$-equivariant $\F$-algebra surjection
$\pi:\F[V]\to L$ mapping $\F[V]_+$ onto $L_+$. Moreover, $\pi$ restricts to a surjection
$\F[V]_+^G\to L_+^G$ by the assumption $\ch(\F)\nmid |G|$.
So we have
\[L^G\cap L_+^{\beta_k(G)+1}=\pi(\F[V]_{\ge \beta_k(G)+1}^G)
\subseteq \pi((\F[V]^G_+)^{k+1})=(L^G_+)^{k+1}.\]
For the reverse implication let $L:=\F[V]$, where $V$ is a finite dimensional $G$-module with
$\beta_k(G,V)=\beta_k(G)$.
\end{proof}
\begin{lemma} \label{lemma:myred}
Let $N$ be a normal subgroup of $G$. Then for any $G$-module $V$ we have
\[ \beta_k(G,V) \le \beta_{\beta_k(G/N)}(N,V) \]
Consequently the inequality $\beta_k(G) \le \beta_{\beta_k(G/N)}(N)$ holds, as well.
\end{lemma}
\begin{proof}
We shall apply Proposition~\ref{prop:altnoetnum} for the algebra $L:=\F[V]^N$;
denote $R:=\F[V]^G$. The subalgebra $L$ of $\F[V]$ is $G$-stable, and the action of $G$ on $L$ factors through $G/N$, and $R=L^{G/N}$.
Setting $s:=\beta_{\beta_k(G/N)}(N,V)$,
we have
\begin{align*}
R_{\ge s+1}= R\cap L_{\ge s+1} &\subseteq
L^{G/N}\cap L_+^{\beta_k(G/N)+1}
\subseteq (L_+^{G/N})^{k+1}=(R_+)^{k+1}. \qedhere \end{align*}
\end{proof}
A weaker version of Lemma~\ref{lemma:myred} remains true for any subgroup $H \le G$ which is not necessarily normal.
To show this we will make use of the following relativized version of the Reynolds operator (see e.g. \cite{neusel} p. 33):
Let $H \le G$ be a subgroup
and $g_1, ..., g_n$ a system of right coset representatives of $H$.
For a $G$-module $V$ the map $\tau_H^G: \F[V]^H \to\F[V]^{G}$ called the \emph{relative transfer map}
is defined by the sum
\[ \tau_H^G(u) = \sum_{i=1}^n u^{g_i} .\]
In the special case when $H$ is the trivial subgroup $\{1_G\}$, we recover the \emph{transfer map}
$\tau^G:\F[V]\to\F[V]^G$.
If $\mathrm{char}(\F)$ does not divide $[G:H]$ then $\tau:= \tau_H^G$
is a graded $\F[V]^G$-module epimorphism from $\F[V]^H$ onto $\F[V]^G$. We shall use this fact most frequently in the following form:
\begin{proposition} \label{trans}
If $\ch(\F)$ does not divide $[G:H]$, then we have
$\beta_k(G,V)\le \beta_k(\F[V]^H_+, \F[V]^G)$.
\end{proposition}
\begin{proposition}\label{prop:knopeta}
Let $J$ be a non-unitary commutative $\F$-algebra on which a finite group $G$ acts via $\F$-algebra automorphisms
and let $H \le G$ be a subgroup for which one of the following conditions holds:
\begin{itemize}
\item[(i)] $\ch(\F)>[G:H]$ or $\ch(\F)=0$;
\item[(ii)] $H$ is normal in $G$ and $\ch(\F)$ does not divide $[G:H]$;
\item[(iii)] $\ch(\F)$ does not divide $|G|$.
\end{itemize}
Then we have
\[ (J^H)^{[G:H]} \subseteq J^H J^G + J^G \]
\end{proposition}
\begin{proof} (i)
Let $f\in J^H$ be arbitrary and $\mathcal{S}$ a system of right $H$-coset representatives in $G$.
Then $f$ is a root of the monic polynomial $\prod_{g\in \mathcal{S}}(t- f^g)\in J[t]$.
Obviously all coefficients of this polynomial are $G$-invariant.
Consequently, $f^{[G:H]}\in J^H J^G+J^G$ holds for all $f\in J^H$.
Take arbitrary $f_1,\ldots,f_r\in J^H$ where $r=[G:H]$. Then the product
$r!f_1\cdots f_r$ can be written as an alternating sum of $r$th powers of sums of subsets of $
\{f_1,\ldots,f_r\}$ (see e.g. Lemma 1.5.1 in \cite{benson}), hence $f_1\cdots f_r\in J^H J^G+ J^G$.
(ii) (This is a variant of a result of Knop, Theorem 2.1 in \cite{knop}; the idea appears in Benson's simplification of Fogarty's argument from \cite{fogarty}, see Lemma 3.8.1 in \cite{derksen-kemper}).
Let $\mathcal{S}$ be a system of $H$-coset representatives in $G$.
For each $x \in \mathcal{S}$ choose an arbitrary element $a_x \in J^H$.
It is easily checked that
\begin{align}\label{eq:knop} 0 \; =\; \sum_{y \in \mathcal{S}} \prod_{x \in \mathcal{S}} (a_x- a_x^{x^{-1}y})
&= \sum_{U \subseteq \mathcal{S}} (-1)^{|U|} \delta_U \qquad \text{where}\\ \notag
\delta_U &:= \prod_{x \not\in U}a_x \sum_{y \in \mathcal{S}}(\prod_{x \in U} a_x^{x^{-1}})^y
\end{align}
Note that $a_x^g\in J^H$ for all $x\in \mathcal{S}$ and $g\in G$ by normality of $H$ in $G$. Therefore
$\delta_U= \prod_{x \not\in U} a_x \; \tau^G_H\left(\prod_{x \in U} a_x^{x^{-1}}\right)$.
Thus $\delta_{\mathcal{S}}\in J^G$ and $\delta_U \in J^H J^G $ for every $U \subsetneq \mathcal{S}$, except for $U = \emptyset$,
when we get the term $[G:H] \prod_{x \in \mathcal{S} } a_x$.
Given that $[G:H] \in \F^{\times}$ and the elements $a_x$ were arbitrary the claim follows.
(iii) Let $\mathcal{S}$ be a system of left $H$-coset representatives in $G$.
Apply the transfer map $\tau^H:J\to J^H$ to the equality \eqref{eq:knop}, and observe that
\begin{align}
\tau^H(\delta_U)=\prod_{x \not\in U}a_x \sum_{h\in H} \sum_{y \in \mathcal{S}}(\prod_{x \in U} a_x^{x^{-1}})^{yh}= \prod_{x \not\in U}a_x \tau^G(\prod_{x \in U} a_x^{x^{-1}})
\end{align}
This shows that $\tau^H(\delta_U)\in J^HJ^G+J^G$ for all non-empty subsets $U\subseteq \mathcal{S}$, and
$\tau^H(\delta_{\emptyset})=|G|\prod_{x\in\mathcal{S}}a_x$, implying the claim as in (ii).
\end{proof}
\begin{remark}
Finiteness of $G$ can be replaced by finiteness of $[G:H]$ in (i) and (ii) above.
\end{remark}
\begin{corollary}\label{cor:G:H+1}
Keeping the assumptions of Proposition~\ref{prop:knopeta} on $G$, $H$ and $\ch(\F)$,
let $V$ be a $G$-module, $I := \F[V]^H$, $R:= \F[V]^G$. Then for any finitely generated graded $I$-module $M$ we have
\begin{align} \beta_k(M,R)\le \beta_{k[G:H]}(M,I). \end{align}
In particular we have the inequality
\begin{align}
\beta_k(G,V) & \le \beta_{k[G:H]} (H,V).\end{align}
\end{corollary}
\begin{proof}
By Proposition~\ref{prop:knopeta} (applied to $J:=\F[V]_+$) we have $I_+^{[G:H]}\subseteq I_+R_++R_+$; raising this to the $k$th power gives $I_+^{k[G:H]}\subseteq I_+R_+^k+R_+^k$, and consequently
$MI_+^{k[G:H]}\subseteq MR_+^k$, implying the first inequality.
For the second note that $\beta_k(G,V) = \beta_k(R) \le \beta_k(I_+,R)$ by Proposition~\ref{trans}
and $\beta_k(I_+,R)\le \beta_{k[G:H]}(I_+,I) = \beta_{k[G:H]}(H, V)$.
\end{proof}
\begin{remark}\label{remark:babynoether}
It is conjectured that $\beta(G,V)\leq [G:H]\beta(H,V)$ holds in fact whenever $\mathrm{char}(\F)\nmid [G:H]$.
This open question is mentioned under the name ``baby Noether gap" in Remark 3.8.5 (b) in \cite{derksen-kemper} or on page 1222 in \cite{kemper-separating}.
\end{remark}
Finally we present some rather technical results which will be used later in Chapter~\ref{ch:semidir} to obtain upper bounds on $\beta(G)$:
\begin{lemma} \label{lemma:induk1}
Let $M$ be a graded module over a graded ring $I$,
and $S \subseteq I$ a graded subalgebra.
Then for any integers $k > r \ge 1$ we have
\begin{align*} \label{eq:induk}
\beta_{k}(M,I) \le \max \{ \beta(M,S) + \beta_{k-r-1}(S) , \beta_{r}(M,I) + \beta_{k-r}(S) \}
\end{align*}
\end{lemma}
\begin{proof}
Let $d$ be greater than the right hand side of this inequality.
Then
\[M_d \subseteq M_{\le \beta(M,S)}S_{> \beta_{k-r-1}(S)} \subseteq MS_+^{k-r} \subseteq M(S_+^{k-r})_{\le\beta_{k-r}(S)}\]
hence
$M_d \subseteq M_{>\beta_{r}(M,I)} S_+^{k-r} \subseteq M I_+^{r} S_+^{k-r} \subseteq MI_+^{k}$,
and this proves that $d>\beta_k(M,I)$.
\end{proof}
\begin{lemma}\label{induk}
For a $G$-module $V$ and subgroup $H \le G$ as in Proposition~\ref{prop:knopeta}
set $L := \F[V]$, $M:=L_+/L_+^GL_+$. For any $1\le r< [G:H]$ and $s\ge 1$ we have
\begin{align*}
\beta(L_+,L^G) \le ([G:H] -r)s + \max \{\beta_r(M, L^H) , \beta(M, \F[L^H_{\le s}]) -s \}
\end{align*}
\end{lemma}
\begin{proof} We have $ \beta(L_+,L^G)=\beta(M,L^G)\le\beta_{[G:H]}(M,L^H)$ by Corollary~\ref{cor:G:H+1}.
Applying Lemma~\ref{lemma:induk1} with $k:=[G:H]$, $I:=L^H$, $S := \F[L^H_{\le s}]$ and noting that
$\beta_k(S) \le ks$ we obtain the above inequality.
\end{proof}
\begin{remark}
(i) A version of Lemma~\ref{induk} limited to the abelian case appears in \cite{geroldinger-halterkoch} as Lemma 6.1.3.
(ii)
The use of Lemma~\ref{lemma:myred} and Corollary~\ref{cor:G:H+1} on the generalized Noether number stems from the fact that
for $k>1$ the number $\beta_k(G,V)$ in general is strictly smaller than $k\beta(G,V)$,
as it can be seen in Section~\ref{sec:davenport} already for abelian groups. See also \cite{CzD:3} for more information in this respect.
\end{remark}
\subsection{The Davenport constant}\label{sec:davenport}
A \emph{character} of an abelian group $A$ is a group homomorphism from $A$ to the multiplicative group $\F^{\times }$ of the base field.
The set of characters of $A$ is denoted by $\hat{A}$; it is naturally an abelian group, and in fact there is a (non-canonical) isomorphism $\hat{A} \cong A$.
Let $V$ be a representation of $A$ over the base field $\F$.
Since $\F$ is algebraically closed and $\ch(\F)$ does not divide
$|A|$ by our conventions,
$V$ decomposes as a direct sum of $1$-dimensional representations.
This means that $V^*$ has an $A$-eigenbasis $\{x_{1},...,x_{n}\}$.
The character
$\theta_i \in \hat{A}$ given by $x_i^a = \theta_i(a) x_i$ is called the \emph{weight} of $x_i$.
We shall always tacitly choose such an $A$-eigenbasis as the variables in the polynomial algebra $\F[V]=\F[x_{1}, ..., x_{n}]$.
Let $M(V)$ denote the set of monomials in $\F[V]$; this is a monoid with respect to ordinary multiplication and unit element $1$.
On the other hand we denote by $\mathcal{M}(\hat{A})$ the free commutative monoid generated by the elements of $\hat{A}$.
Due to our choice of variables in $\F[V]$ we can define a monoid homomorphism $\Phi: M(V) \to \mathcal{M}(\hat{A})$
by sending each variable $x_i$ to its weight $\theta_i$.
We shall call $\Phi(m)$ the \emph{weight sequence} of the monomial $m \in M(V)$.
We prefer to write $\hat{A}$ additively, hence
for any character $\theta\in \hat A$ we denote by $-\theta$ the character $a\mapsto \theta(a)^{-1}$,
$a\in A$.
An element $S \in \mathcal{M}(\hat{A})$ can be interpreted as a \emph{sequence}
$S:=(s_1,\ldots,s_n)$ of elements of $\hat{A}$ where repetition of elements is allowed and their order is disregarded.
The \emph{length} of $S$ is $|S|:=n$.
By a \emph{subsequence} of $S$ we mean $S_J := (s_j\mid j\in J)$ for some subset $J\subseteq \{1,\ldots,n\}$. %
Given a sequence $R$ over an abelian group $A$ we write
$R=R_1R_2$ if $R$ is the concatenation of its subsequences $R_1$, $R_2$,
and we call the expression $R_1R_2$ a \emph{factorization} of $R$.
Given an element $a\in A$ and a positive integer $r$, write $(a^r)$ for the sequence in which $a$ occurs with multiplicity $r$.
For an automorphism $b$ of $A$ and a sequence $S=(s_1,\dots,s_n)$ we write $S^b$ for the sequence
$(s_1^b,\dots,s_n^b)$, and we say that the sequences $S$ and $T$ are \emph{similar} if $T=S^b$
for some $b \in \Aut(A)$.
Let $\sigma: \mathcal{M}(\hat{A}) \to \hat{A}$ be the monoid homomorphism
which assigns to each sequence over $A$ the sum of its elements.
The value $\sigma(\Phi(m)) \in \hat A$ is called the \emph{weight of the monomial} $m \in M(V)$ and it will be abbreviated by $\w(m)$.
The kernel of $\sigma$ is called the \emph{block monoid} of $\hat{A}$, denoted by $\mathcal{B}(\hat{A})$,
and its elements are called zero-sum sequences. Our interest in zero-sum sequences
and the related results in additive number theory stems from the observation that
the invariant ring $\F[V]^A$ is spanned as a vector space by all those monomials
for which $\Phi(m)$ is a zero-sum sequence over $\hat A$. Moreover, as an algebra, $\F[V]^A$ is minimally
generated by those monomials $m$ for which $\Phi(m)$
does not contain any proper zero-sum subsequences.
These are called \emph{irreducible} zero-sum sequences,
and they form the Hilbert basis of the monoid $\mathcal{B}(\hat A)$.
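For example, if $\hat A=Z_4$ and the variables $x_1,x_2$ have weights $1$ and $2$ respectively, then $\Phi(x_1^2x_2)=(1,1,2)$ is an irreducible zero-sum sequence, so $x_1^2x_2$ occurs in a minimal generating system of $\F[V]^A$, whereas $\Phi(x_1^2x_2^3)=(1,1,2,2,2)$ factors as $(1,1,2)(2,2)$, reflecting the factorization $x_1^2x_2^3=(x_1^2x_2)\cdot x_2^2$.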
A sequence is \emph{zero-sum free} if it has no non-empty zero-sum subsequence.
The \emph{Davenport constant} $\D(A)$ of $A$ is defined as the length of
the longest irreducible zero-sum sequence over $A$.
It is an extensively studied quantity, see for example \cite{geroldinger-gao}.
As it is seen from our discussion:
\begin{equation}\label{eq:davenportbeta}
\D(A)=\beta(A).\end{equation}
The {\it generalized Davenport constant} $\D_k(A)$ is introduced in \cite{halter-koch} as the length of the longest zero-sum sequence
that cannot be factored into more than $k$ non-empty zero-sum sequences.
It is evident from the above discussion that $\D_k(A) = \beta_k(A)$. Moreover
Lemma~\ref{lemma:myred} applied to abelian groups yields for any subgroup $B\leq A$ that:
\begin{align}
\D_k(A) &\leq \D_{\D_k(A/B)}(B) \\
\D_k(A) &\le \D_{\D_k(B)} (A/B)
\end{align}
The second inequality follows from the first because
$A$ has a subgroup $C \cong A/B$ for which $A /C \cong B$,
hence the role of $A/B$ and $B$ can be reversed in this formula.
This inequality appears as Proposition 2.6 in \cite{delorme}.
For the cyclic group $Z_n$ we have $\D_k (Z_n) = kn$.
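As a purely illustrative aside, $\D(A)$ can be computed by brute force for very small groups; the following Python sketch (not part of the mathematical argument) recovers, via \eqref{eq:davenportbeta}, the values for two of the exceptional groups appearing in Theorem~\ref{thm:main}:
\begin{verbatim}
from itertools import combinations, combinations_with_replacement, product

def is_zero_sum(seq, mod):
    # the componentwise sum vanishes in Z_{mod[0]} x ... x Z_{mod[-1]}
    return all(sum(x[i] for x in seq) % m == 0 for i, m in enumerate(mod))

def is_irreducible(seq, mod):
    # zero-sum, with no proper non-empty zero-sum subsequence
    if not is_zero_sum(seq, mod):
        return False
    return not any(is_zero_sum(sub, mod)
                   for r in range(1, len(seq))
                   for sub in combinations(seq, r))

def davenport(mod, cap=8):
    # length of the longest irreducible zero-sum sequence (search up to cap)
    elems = [e for e in product(*(range(m) for m in mod)) if any(e)]
    return max(n for n in range(1, cap + 1)
               if any(is_irreducible(s, mod)
                      for s in combinations_with_replacement(elems, n)))

print(davenport((3, 3)))     # 5 = 3 + 3 - 1 (Halter-Koch, below)
print(davenport((2, 2, 2)))  # 4 (Delorme-Ordaz-Quiroz, below)
\end{verbatim}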
We close this section with two more results on $\D_k$ which will be used later on.
\begin{proposition}[Halter-Koch, \cite{halter-koch} Proposition 5] \label{prop:halter-koch}
For any $n \mid m $ we have
\[ \D_k (Z_n \times Z_m) = km + n -1. \]
\end{proposition}
\begin{proposition}[Delorme-Ordaz-Quiroz, \cite{delorme} Lemma 3.7] \label{prop:Z2Z2Z2}
\[ \D_k(Z_2 \times Z_2 \times Z_2) =
\begin{cases}
4 & \text{ if } k =1 \\
2k + 3 & \text{ if } k>1
\end{cases}
\]
\end{proposition}
\section{The semidirect product }\label{ch:semidir}
Our main aim in the present chapter is to give upper bounds on $\beta(Z_p\rtimes Z_q)$ for the non-abelian semidirect product $Z_p\rtimes Z_q$, where $p,q$ are odd primes, $q\mid p-1$.
It is an open conjecture of Pawale reported in \cite{wehlau} that $\beta(Z_p\rtimes Z_q)=p+q-1$.
The lower bound $\beta(Z_p \rtimes Z_q) \ge p+q -1$ follows from a more general result in \cite{CzD:2}
(and can also be seen directly).
We provide here upper bounds that improve on \cite{domokos-hegedus} and \cite{pawale}, and are sufficient for the proof of Theorem~\ref{thm:main}.
\subsection{Extending Goebel's algorithm}\label{sec:goebel}
Let $G$ be a finite group with a proper abelian normal subgroup $A$.
Consider a monomial representation $G \to \GL(V)$ which maps $A$ to diagonal matrices.
This presupposes the choice of a basis $x_1,...,x_n$ in the dual space $V^*$,
which are $A$-eigenvectors permuted up to scalars under the action of $G/A$.
We shall always tacitly choose them as the variables in the coordinate ring $L:=\F[V]$.
Goebel developed an algorithm for the case when $V$ is a permutation representation
(see \cite{goebel}, \cite{neusel}, \cite{derksen-kemper})
which we will adapt here to this more general case.
The conjugation action of $G$ on $A$ induces an action on $\hat A$ in the standard way, and we extend it to an action on
$\mathcal{M}(\hat A)$ by setting $U^g=(a_1^g,\dots,a_l^g)$ for any sequence $U=(a_1,\dots,a_l)$ and $g\in G$.
Enumerate the $G$-orbits in $\hat A$ in a fixed order $O_1,\dots,O_l$.
For a $G$-orbit $O$ in $\hat A$ let $S^O$ be the subsequence of $S$ consisting of its elements belonging to $O$.
Now $S$ has the canonic factorization $S=S^{O_1}\dots S^{O_l}$.
In addition any sequence $S$ over $\hat A$ has
a unique factorization $S = R_1R_2... R_h$ such that each $R_i \subseteq \hat A$ is multiplicity-free and $R_1 \supseteq ... \supseteq R_h$;
we call this the \emph{row decomposition} of $S$ and we refer to $R_i$ as the $i$th \emph{row} of $S$, whereas $\supp(S) := R_1$ is its \emph{support} and $h(S):=h$ is its \emph{height}. In other terms $h(S)$ is the maximal multiplicity of the elements in $S$.
(The intuition behind this is that we like to think of sequences as Young diagrams where
the multiplicities in $S$ of the different elements of $\hat A$ are represented by the heights of the columns.)
Denote by $\mu(S)$ the non-increasing sequence of integers $(\mu_1(S),\dots,\mu_h(S)):=(|R_1|, ..., |R_h|)$.
By the \emph{shape} $\lambda(S)$ of $S$ we mean the $l$-tuple of such partitions
\[ \lambda(S) := (\mu(S^{O_1}),\dots,\mu(S^{O_l})). \]
The set of the shapes is equipped with the usual reverse lexicographic order, i.e. $\lambda(S) \prec \lambda(T)$ if $\lambda(S)\neq\lambda(T)$ and
for the smallest index $i$ such that $\mu(S^{O_i})\neq \mu(T^{O_i})$, we have $\mu_j(S^{O_i})>\mu_j(T^{O_i})$ for the smallest index $j$ with $\mu_j(S^{O_i})\neq \mu_j(T^{O_i})$.
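For instance, if $a,b,c$ are three distinct characters lying in a single $G$-orbit $O$, then $S=(a,a,a,b,b,c)$ has rows $R_1=(a,b,c)$, $R_2=(a,b)$, $R_3=(a)$, so that $h(S)=3$, $\supp(S)=R_1$ and $\mu(S)=(3,2,1)$.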
Observe that $\lambda(ST) \prec \lambda(S)$ always holds
but on the other hand $\lambda(S) \prec \lambda(S')$ does not imply $\lambda(ST) \prec \lambda(S'T)$.
Abusing notation for any monomial $m \in \F[V]$ we write $\lambda(m)$, $h(m)$ and $\supp(m)$
for the shape, height and the support of its weight sequence $\Phi(m)$.
In the following we shall assume that we fixed a subset $\mathcal{V}$ of the variables permuted by $G$ up to non-zero scalar multiples; we adopt the convention that unless $\mathcal{V}$ is explicitly specified, it is the set of all variables. Any monomial $m$ factors as $m=m_{\mathcal{V}}m_{\widehat{\mathcal{V}}}$, where $m_{\mathcal{V}}$ is a product of variables belonging to $\mathcal{V}$, and $m_{\widehat{\mathcal{V}}}$ does not involve variables from $\mathcal{V}$. We shall also use the notation
$\lambda_{\mathcal{V}}(m):=\lambda(m_{\mathcal{V}})$.
\begin{definition}~\label{def:terminal} An $A$-invariant monomial $u$ is a \emph{good factor} of a monomial $m=uv$ if
$\lambda_{\mathcal{V}}(u^bv)\prec \lambda_{\mathcal{V}}(m)$
holds for all $b\in G\setminus A$; note that this forces $0<\deg(u)<\deg(m)$.
We say that $m$ is \emph{terminal} if it has no good factor.
\end{definition}
\begin{lemma} \label{goebel} $L_+=\F[V]_+$ is generated as an $L^G$-module by the terminal monomials.
\end{lemma}
\begin{proof} We prove by induction on $\lambda_{\mathcal{V}}(m)$ with respect to $\prec$ that if $m$ is not terminal, then it can be expressed
modulo $L_+L^G_+$
as a linear combination of terminal monomials. Indeed, take a good divisor $u$ of $m=uv$.
Then we have
\begin{equation}\label{eq:tekeres}
\sum_{b\in G/A}u^bv=\tau^G_A(u)v\in L_+^GL_+. \end{equation}
Since for every monomial in the sum on the left hand side except for $m$ we have $\lambda_{\mathcal{V}}(u^bv)\prec\lambda_{\mathcal{V}}(m)$, our claim on $m$ holds by the induction hypothesis.
\end{proof}
At this level of generality there might be an element $b \in G\setminus A$ such that $\theta(x_i^b) = \theta(x_i)$ for every variable $x_i$,
and then every monomial qualifies as terminal by our definition.
The concept of terminality is particularly useful when
\begin{align} \label{G/A-kikotes}
\text{ no non-identity element of $G/A$ fixes any non-trivial element of $\hat A$}
\end{align}
For the rest of this section we assume that \eqref{G/A-kikotes} holds for $(G,A)$. An obvious necessary condition for \eqref{G/A-kikotes} to hold is that $A$ must be a self-centralizing,
hence maximal abelian subgroup in $G$, and the order of $G/A$ must divide $|A|-1$, hence $G$ is the semidirect product of $A$ and $G/A$
by the Schur-Zassenhaus theorem.
In fact condition \eqref{G/A-kikotes} is equivalent to the requirement that
$G$ is a Frobenius group with abelian Frobenius kernel $A$.
In this article we will only study in greater detail
the non-abelian semidirect products $Z_p\rtimes Z_q$, $Z_p\rtimes Z_{q^n}$ where $Z_{q^n}$ acts faithfully on $Z_p$, and the alternating group $A_4$.
Note that if \eqref{G/A-kikotes} holds, then for any non-trivial $1$-dimensional $A$-module $U$ the $G$-module $\Ind_A^G(U)$ is irreducible
by Mackey's irreducibility criterion (cf. \cite{serre} ch. 7.4).
Moreover, the set of $A$-characters occurring in $\Ind_A^G(U)$ coincides with the $G/A$-orbit of the character of $A$ on $U$, and each $A$-character occurring in $\Ind_A^G(U)$ has multiplicity one.
Hence the $G/A$-orbits in $\hat A\setminus \{0\}$ are in bijection with the isomorphism classes of those irreducible $G$-modules that are induced from a $1$-dimensional $A$-module.
\begin{definition}
A monomial $m \in \F[V]$ or its weight sequence $S= \Phi(m)$ is called a \emph{brick} if
$S$ is an orbit in $\hat A$ of a minimal non-trivial subgroup of $G/A$.
\end{definition}
\begin{remark}
(i) If \eqref{G/A-kikotes} holds then every brick is $A$-invariant.
Indeed, when $m \in \F[V]$ is a brick then $\Phi(m)$ is stabilized by some non-identity element $b \in G/A$,
hence $\theta(m)$ is fixed by $b$, which is only possible by \eqref{G/A-kikotes} if $\theta(m)=0$.
(ii) If a monomial $m$ is not divisible by a brick, then $\Phi(m)\neq \Phi(m^b)$ for each $b\in G\setminus A$.
\end{remark}
\begin{definition} \label{def:gapless}
A sequence $S$ over $\hat A$ with row-decomposition $S=R_1...R_h$ is called
\emph{gapless} if for all $G/A$-orbits $O$ and all $i<h$ such that $R_i\cap O\neq \emptyset$
we have $R_i\cap O\neq R_{i+1}\cap O$
or $R_i \cap O = R_{i+1} \cap O =O$.
A monomial $m\in \F[V]$ is called \emph{gapless} if its weight sequence $\Phi(m)$ is gapless.
\end{definition}
For our next result we will need the following easy combinatorial fact:
\begin{lemma} \label{lemma:easy}
For any sequence $S = (s_1, ..., s_d)$ over an abelian group $A$ let
$\Sigma(S) := \{ \sum_{i\in I} s_i: I \subseteq \{1,...,d \} \}$.
If $A= Z_p$ for a prime $p$ and $S=(s_1,...,s_d)$ a sequence of non-zero elements of $Z_p$
then \[ |\Sigma(S)| \ge \min \{ p, d+1 \}.\]
\end{lemma}
\begin{proof}
By the Cauchy-Davenport Theorem $ |A+B|\ge \min\{p,|A|+|B|-1\} $
for any non-empty subsets $A,B \subseteq Z_p$. Our claim follows from this by induction
on $d$, as
$|\Sigma(S)| \ge |\Sigma(s_1,...,s_{d-1})| + |\{0, s_d \}| -1 \ge d+2-1$ for any $1<d<p$,
while the case $d=1$ is trivial.
\end{proof}
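The bound is easy to test numerically; here is a small randomized Python check, illustrative only (note that $\Sigma(S)$ contains the empty sum $0$):
\begin{verbatim}
import random

def subset_sums(seq, p):
    # all subset sums of seq modulo p, the empty sum 0 included
    sums = {0}
    for s in seq:
        sums |= {(t + s) % p for t in sums}
    return sums

p = 13
for _ in range(1000):
    d = random.randint(1, 20)
    seq = [random.randint(1, p - 1) for _ in range(d)]
    assert len(subset_sums(seq, p)) >= min(p, d + 1)
\end{verbatim}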
\begin{proposition} \label{prop:gapless}
Let $G=A\rtimes Z_n$ where $A \cong Z_p$ for some prime $p$ and $Z_n$ acts faithfully on $A$.
Let $V$ be a $G$-module and $L:= \F[V]$, $R:= \F[V]^G$, and $\mathcal{V}$ a subset of the variables permuted by $G$ up to non-zero scalar multiples.
Then $L_+/L_+R_+$ is spanned by monomials of the form $b_1\dots b_rm$, where
each $b_i$ is an $A$-invariant variable or a brick composed of variables in $\mathcal{V}$
while $m_{\mathcal{V}}$ has a gapless divisor of degree
at least
\[\min\{\deg(m_{\mathcal{V}}), \deg(m)-p+1\}.\]
\end{proposition}
\begin{proof} Note that \eqref{G/A-kikotes} holds for $(G,A)$.
By Lemma \ref{goebel} it suffices to show that for any terminal monomial $m\in L_+$
not containing a brick over $\mathcal{V}$ or an $A$-invariant variable,
$m_{\mathcal{V}}$ has a gapless divisor of degree at least $\min\{\deg(m_{\mathcal{V}}), \deg(m)-p+1\}$.
Let $m^*$ be a gapless divisor of $m_{\mathcal{V}}$ of maximal possible degree, and suppose for contradiction that $\deg(m^*)<\min\{\deg(m_{\mathcal{V}}), \deg(m)-p+1\}$. Then there is a variable $x$ such that
$m^*x$ is a divisor of $m_{\mathcal{V}}$ and $m^*x$ is not gapless, moreover, the index of the orbit $O_i$ containing $\theta(x)$ is minimal possible, i.e. for all $j<i$ we have $\Phi(m^*)^{O_j}=\Phi(m_{\mathcal{V}})^{O_j}$.
Let $\Phi(m^*)^{O_i} = R_1R_2 ... R_h$ be the row decomposition of $\Phi(m^*)^{O_i}$, and denote by $t$ the multiplicity of $\theta(x)$ in $\Phi(m^*)^{O_i}$.
It is then necessary that $R_t = R_{t+1} \cup \{ \theta(x) \}$, for otherwise $m^*x$ would still be gapless.
Take a divisor $u \mid m^*$ with $\Phi(u) = R_{t+1}$, hence $\Phi(ux)=R_t$.
Now consider the remainder $m/(m^*x)$: it contains no variables of weight $0$,
and its degree is at least $p-1$ by assumption, hence $|\Sigma(\Phi(m/(m^*x)))| = p$ by Lemma~\ref{lemma:easy}.
Thus $m/(m^*x)$ has a (possibly trivial) divisor $\hat u$ for which $\theta(\hat{u}) = - \theta(ux)$. It is easy to see that $w:=xu\hat u$ is a good divisor of $m$. Indeed,
set $v := m/w$, and take $b\in G\setminus A$; clearly, $m^*/u$ divides $v$. For $j<i$, we have $\Phi((w^bv)_{\mathcal{V}})^{O_j}
=\Phi(m_{\mathcal{V}})^{O_j}$.
Moreover,
$\mu_s(\Phi((w^bv)_{\mathcal{V}})^{O_i})\ge
\mu_s(\Phi(m_{\mathcal{V}})^{O_i})$ for $s=1,\dots t$. Here we have strict inequality at least for one $s$:
by our assumption $\Phi((ux)_{\mathcal{V}})=R_t$ is not divisible by a brick, so $R_t^b \setminus R_t \neq \emptyset$, hence
the support of $\Phi(w^b_{\mathcal{V}})^{O_i}$ is not contained in $R_t$, implying
$\sum_{s=1}^t\mu_s(\Phi((w^bv)_{\mathcal{V}})^{O_i})>\sum_{s=1}^t \mu_s(\Phi((m^* /u)_{\mathcal{V}})^{O_i})$.
This contradicts the assumption that $m$ was terminal.
\end{proof}
\subsection{Factorizations of gapless monomials}
Denote by $\mathcal{B}$ the ideal of $L=\F[V]$ generated by the bricks,
and denote by $\mathcal{G}_d$ the ideal of $L$ generated by the gapless monomials of degree at least $d$.
Moreover, for a set $\mathcal{V}$ of variables as in Proposition~\ref{prop:gapless},
denote by $\mathcal{G}_d(\mathcal{V})$ the ideal of $L$ spanned by monomials with a gapless divisor
of degree at least $d$ composed from variables in $\mathcal{V}$.
\begin{proposition} \label{binom}
Let $V= \Ind_A^G U $ be an isotypic $G$-module belonging to a $G$-orbit $O \subseteq \hat A$,
and $s$ the index of a minimal nontrivial subgroup of $G/A$. Then
\[ \mathcal{G}_ {d} \subseteq \mathcal{B} \qquad \text{ where } \quad d = \binom { |O| - s +1}{2} +1 \]
\end{proposition}
\begin{proof}
Let $m \in \F[V]$ be a gapless monomial not divisible by a brick.
In the row decomposition $\Phi(m) = R_1 ...R_h$ we then have $|R_{i+1}| < |R_i| $ for every $1 \le i < h$,
and $|R_1| \le |O| - s$, so $\deg(m)\le 1+2 +...+ (|O|-s) = \binom{ |O| - s +1}{2} $.
\end{proof}
\begin{corollary}\label{cor:skatulya}
Let $A= Z_p$ and $G = A \rtimes Z_{q^n}$ where $Z_{q^n}$ acts faithfully on $A$.
Setting $c = \frac{p-1}{q^n} $ and $ d = \binom {q^n - q^{n-1} + 1}{2}$ and $L = \F[W]$, $R = \F[W]^G$ for a $G$-module $W$ we have
\[ \beta(L_+,R) \le (q^{n}- 2)q + \max \{cd, p+d -1,p+q \} \]
\end{corollary}
\begin{proof} By Lemma~\ref{induk} (applied with $s=q$ and $r=1$) we have
$\beta(L_+,R)\le (q^n-1)q+\max\{p,\beta(L_+/R_+L_+,S)-q\}$, where
$S:=\F[I_{\le q}]$ and $I:=\F[W]^A$, where $A\cong Z_p$ is the normal subgroup of $G$. Apart from $O_0:=\{ 0 \}$, $\hat A\cong Z_p$ contains $c$ different $Z_{q^n}$-orbits $O_1,\dots,O_c$, each of cardinality $q^n$, and
the bricks different from $O_0$ are all of size $q$.
Thus $\beta(L_+/R_+L_+,S)\le\beta(L_+/L_+R_+,\mathcal{B})$, and it is sufficient to show that
for $e:=\max\{cd+1,p+d,p+q+1\}$, $L_{\ge e}\subseteq L_+R_++\mathcal{B}$.
Denote by $M^{(i)}$ (resp. $M^{(0)}$) the subspace of $L_{\ge e}$ spanned by monomials $u$ with $|\Phi(u)^{O_i}|>d$ (resp. $|\Phi(u)^{O_0}|\ge 1$). Clearly $L_{\ge e}\subseteq \sum_{i=0}^cM^{(i)}$. The $A$-invariant variables are bricks, so $M^{(0)}\subseteq\mathcal{B}$.
Apply Proposition~\ref{prop:gapless} with $\mathcal{V}$ the set of variables of weight in $O_i$ for some fixed $i\in\{1,\dots,c\}$. We obtain that the subspace $M^{(i)}$
is contained in $R_+L_++\mathcal{B}+\mathcal{G}_{d+1}(\mathcal{V})$.
By Proposition~\ref{binom}, $\mathcal{G}_{d+1}(\mathcal{V})\subseteq\mathcal{B}$, showing that $M^{(i)}\subseteq R_+L_++\mathcal{B}$. This holds for all $i$,
hence $L_{\ge e}\subseteq L_+R_++\mathcal{B}$.
\end{proof}
For the rest of this section let $G$ be the non-abelian semidirect product $Z_p \rtimes Z_q$, where $p,q$ are odd primes and $q \mid p-1$.
We set $L:= \F[W]$, $I = \F[W]^{Z_p}$, $R = \F[W]^G$ for an arbitrary $G$-module $W$ and denote by $A$ the normal subgroup $Z_p$ in $G$.
In this case the bricks are the monomials $m$ with $\Phi(m)=O_i$ for some $i=0,1,\dots,\frac{p-1}q$,
so a brick is either an $A$-invariant variable or has degree $q$. Moreover, multiplying a gapless monomial by a brick we get a gapless monomial.
Thus in the statement of Proposition~\ref{prop:gapless} all the $b_i$ may be assumed to be $A$-invariant variables.
We need the following consequence of the Cauchy-Davenport Theorem (see Theorem 5.7.3 in \cite{geroldinger-halterkoch} for a more general statement):
\begin{lemma} \label{Zp_corner}
Let $S$ be a sequence over $Z_p$ with maximal multiplicity $h$.
If $|S| \ge p$ then $S$ has a zero-sum subsequence $T \subseteq S$ of length $|T| \le h$.
\end{lemma}
\begin{corollary} \label{cor:tuske} We have the inequality
\[ \beta(L_+, R) \le p + \frac{q(q-1)^2}{2}. \]
\end{corollary}
\begin{proof}
Applying Lemma~\ref{induk} with $r=1$ and $s:=\binom{q}{2}$, and using $\beta(L_+,I)\le p$ we get
\[\beta(L_+,R)\le (q-1)s+\max\{p,\beta(L_+/R_+L_+,\F[I_{\le s}])-s\}\]
so our statement follows from the inequality $\beta(L_+/R_+L_+,\F[I_{\le s}])\le p+s$.
To prove the latter observe that if $h(m)>s$ for a monomial $m$, then $| \Phi(m)^O |>s$ for some $G/A$-orbit $O$ in $\hat A$. Therefore
\begin{equation}\label{eq:N+M}L_{\ge p+s}=N+\sum_{i=0}^{(p-1)/q}M^{(i)}\end{equation}
where $N$ is spanned by monomials having a degree $p+s$ divisor $m$ with $h(m)\le s$,
$M^{(0)}$ is spanned by monomials involving an $A$-invariant variable, and for $i=1,\dots,\frac{p-1}q$, $M^{(i)}$ is spanned by monomials having a divisor $m$ with
$\deg(m)\ge p+s$ and $|\Phi(m)^{O_i}|>s$; here $O_1,\dots,O_{(p-1)/q}$ are the $q$-element $G$-orbits in $\hat A$.
By Lemma \ref{Zp_corner} the weight sequence $\Phi(m)$ of a monomial $m\in N$ contains a non-empty zero-sum sequence
of length at most $h(m)\le s$, hence $m\in \F[I_{\le s}]_+L_+$. Applying Proposition~\ref{prop:gapless} with $\mathcal{V}$ the variables with weight in $O_i$
for a fixed $i\in\{1,\dots,\frac{p-1}q\}$, we get $M^{(i)}\subseteq L_+R_++\mathcal{G}_{s+1}(\mathcal{V})+M^{(0)}$,
and by Proposition~\ref{binom} we have $\mathcal{G}_{s+1}(\mathcal{V})\subseteq \mathcal{B}$. Clearly $M^{(0)}\subseteq\mathcal{B}$. It follows by \eqref{eq:N+M} that
$L_{\ge p+s}\subseteq R_+L_++\mathcal{B}+L_+\F[I_{\le s}]_+$, and since bricks have degree at most $q\le s$, the inequality
$\beta(L_+/R_+L_+,\F[I_{\le s}])\le p+s$ is proven.
\end{proof}
\begin{remark} \label{rem:skatulya}
The above results are getting close to the lower bound mentioned at the beginning of Chapter~\ref{ch:semidir}
only for small values of $q$:
we have $ p+2 \le \beta(Z_p \rtimes Z_3) \le p+6$ by Corollary~\ref{cor:tuske}
and $p+3 \le \beta(Z_p \rtimes Z_4) \le p+6 $ by Corollary~\ref{cor:skatulya} (for the lower bounds see \cite{CzD:2}).
In characteristic zero, the inequality $\beta(Z_p\rtimes Z_3)\le p+6$ was proved in \cite{pawale}.
\end{remark}
\begin{proposition} \label{nagy} We have
\[\mathcal{G}_d \subseteq (I_+)_{\le q}L
\qquad \text{ if }\quad
d \ge \min \{ p, \tfrac{1}{2}(p+q(q-2)) \}.\]
\end{proposition}
\begin{proof}
Suppose that $m$ is a gapless monomial having no non-trivial $A$-invariant divisor of degree at most $q$
(hence $m$ is not divisible by a brick).
In particular $m$ has no variables of weight $0$.
Let $m=m_1...m_{p-1/q}$ be the factorization where $\Phi(m_i)=\Phi(m)^{O_i}$,
and let $S_i$ denote the support of the weight sequence $\Phi(m_i)$.
By our assumption $0 \not\in S:=\bigcup_j S_j$ and $|S_i| \le q-1$ for every $i$.
For each factor $m_i$ we have $h(m_i) \le |S_i| \le q-1$, so if $\deg(m) \ge p$ then
$m$ contains an $A$-invariant divisor of degree at most $h(m) \le q-1$ by Lemma~\ref{Zp_corner},
which is a contradiction, hence $\deg(m) \le p-1$.
On the other hand,
as each factor $m_i$ is gapless, $\deg(m_i)\leq \binom{|S_i|+1}{2}\leq
\frac{|S_i|q}2$, and consequently
\begin{align} \label{sq/2}
\deg(m) \le \frac{|S|q}{2}.
\end{align}
We claim that $|S| \le q +\frac{p-1}q -2$. Write $q^\wedge T:=\{t_1+\dots +t_q\mid t_1,\dots,t_q\in T\text{ pairwise distinct}\}$
for any subset $T \subseteq \hat A$. The Dias da Silva--Hamidoune theorem (see \cite{dasilva}) states that
$ |q^{\wedge} T| \ge \min \{p, q|T|- q^2 + 1 \}$.
Now if our claim were false then we would get from this theorem that
\[ |q^\wedge (S \dot\cup \{ 0\}) | \ge \min\{p,q(|S|+1) - q^2 + 1 \}=p\]
implying that $m$ contains an $A$-invariant divisor of degree $q$ or $q-1$, again a contradiction.
Plugging this upper bound on $|S|$ into \eqref{sq/2} and using that $q$ is odd we get
$ \deg(m) \le\lfloor \frac{q^2-2q+p-1}{2}\rfloor
=\tfrac{1}{2}(p+q(q-2)) -1$.
\end{proof}
\begin{proposition} \label{kicsi}
Suppose $c, e$ are positive integers such that $c\leq q$ and
$ \binom{c}{2} < p \le \binom{c+1}{2} - \binom{e+1}{2} $
(in particular, this forces that $p<\binom{q+1}2$).
Then
\[ \mathcal{G}_ d \subseteq (I_+)_{\le c-e} L \qquad \text{ if } \qquad d \ge p+ \binom{e}{2}.\]
\end{proposition}
\begin{proof}
Suppose that $m$ is a gapless monomial having no non-trivial $A$-invariant divisor of degree at most $c-e$.
Take the row-decomposition $\Phi(m)=S_1\cdots S_h$ and set $E:=S_1\cdots S_{c-e}$, $F:=S_{c-e+1}\cdots S_h$.
We have $|E| \le p-1$, for otherwise by Lemma~\ref{Zp_corner}
we would get an $A$-invariant divisor of degree at most $c-e$.
It follows that $|S_{c-e}| \le e$, for otherwise the fact that $m$ is gapless and $c\le q$ would
lead to the contradiction
\[ |E| \ge (e+1) + (e+2) + ... + (e+(c-e)) = \textstyle\binom{c+1}{2} - \binom{e+1}{2} \ge p.\]
As a result $|S_{c-e+1}| \le e-1$, hence $|F| \le \binom{e}{2}$ since $m$ is gapless.
But then $\deg(m) =|E|+|F| \le p-1 + \binom{e}{2}$, and this proves our claim.
\end{proof}
To illustrate the use of Proposition~\ref{kicsi} consider the case when $p=11$ and $q=5$.
We then have $c=5$ and $e=2$, hence any gapless monomial of degree at least $12$
contains an $A$-invariant of degree at most $3$. On the other hand
$I_{\ge 22}\subseteq I_+R_++(\mathcal{G}_{12}\cap I_{\ge 22})\subseteq I_+R_++(I_+)_{\le 3}I_{\ge 19}$
by Proposition~\ref{prop:gapless},
hence $I_{\ge 28} \subseteq I_+^3I_{\ge 19} +I_+R_+$.
Furthermore $I_{\ge 19}\subseteq I_+R_++(\mathcal{G}_9\cap I_{\ge 19})$ by Proposition~\ref{prop:gapless}.
A monomial $m\in \mathcal{G}_9\cap I_{\ge 19}$
has a gapless divisor $u$ of degree at least $9$.
It is easily seen that $h(u) \le 3$, hence $u$ can be completed to a monomial $v \mid m$ of degree $11$ and height $h(v) \le 5$,
which will contain an $A$-invariant divisor of degree at most $5$ by Lemma~\ref{Zp_corner}. We get that $I_{\ge 19} \subseteq (I_+)_{\le 5}I_{\ge 14} + I_+R_+$.
Finally $I_{\ge 14} \subseteq I_+^2$ and putting all these together yields
$I_{\ge 28} \subseteq I_+^6 +I_+R_+\subseteq I_+R_+$ by Proposition~\ref{prop:knopeta}. As a result
\begin{align} \label{betaZ11Z5}
\beta(Z_{11} \rtimes Z_5) \le 27
\end{align}
\begin{proposition}\label{ZpZq_estimates}
For any odd primes $p,q$ such that $q \mid p-1$ we have the following estimates:
\[ \beta(L_+,R) \le
\begin{cases}
\frac{3}{2} (p + (q-2)q )-2 & \text{ if } p>q(q-2) \\
2p + (q-2)q -2 & \text{ if } p < q(q-2) \\
2p + (q-2)(c-1) -2 & \text{ if } c(c-1) < 2p < c(c+1), c \le q
\end{cases}
\]
\end{proposition}
\begin{proof}
Let $d$ be a positive integer such that $\mathcal{G}_d \subseteq (I_+)_{\le q}I$.
Given that $\mathcal{B} \subseteq (I_+)_{\le q}I$,
we get $\beta(L_+/R_+L_+,\F[I_{\le q}]) \le p+d-2$ by Proposition~\ref{prop:gapless}.
Using Lemma~\ref{induk} it follows that $\beta(L_+,R) \le (q-2)q+ p+d-2$.
Our first two estimates follow by substituting the value of $d$ given in Proposition~\ref{nagy}.
The last one follows similarly by deducing from Proposition~\ref{kicsi} that
$\beta(L_+/R_+L_+,\F[I_{\le c-1}]) \le 2p -2$,
and then applying Lemma~\ref{induk}.
\end{proof}
\begin{theorem} \label{thm:zpzq}
For the non-abelian semidirect product $Z_p\rtimes Z_q$, where $p,q$ are odd primes we have
$\beta(Z_p \rtimes Z_q) < \tfrac{pq}{2}$.
\end{theorem}
\begin{proof}
Recall that $\beta(G,W)\le\beta(L_+,R)$ by Proposition~\ref{trans}.
Hence by Corollary~\ref{cor:tuske} we have $\beta(Z_p \rtimes Z_3) \le p+6$,
hence $\beta(G) < |G|/2$ for $p>7$.
The case $p=7$ will be treated below, with the result $\beta(Z_7 \rtimes Z_3) =9$ in Theorem~\ref{thm:z7z3}.
For the rest we may assume that $q \ge 5$.
Suppose indirectly that $pq \le 2\beta(Z_p \rtimes Z_q)$. Then by the first estimate
in Proposition~\ref{ZpZq_estimates}
\[p(q-3) \le 3q(q-2) -4.\]
Suppose first that $4q+1 \le p$. In this case $q^2 -5q +1 \le 0$,
whence $q<5$, a contradiction.
It remains that $p=2q +1$.
Since by \eqref{betaZ11Z5} our statement is true for $q=5, p=11$,
it remains that $q \ge 11$ (as $2q+1$ is not prime for $q=7$).
Then $2p < q(q+1)$, so we can apply the third estimate in Proposition~\ref{ZpZq_estimates}.
By the indirect assumption and the fact that $c(c-1) < 2p$ we get that
\[ \frac{pq}{2} < 2p + (q-2)\frac{2p}{c}. \]
Here $c \ge 7$ as $p \ge 23$, but then by this inequality $q \le 6$, a contradiction.
\end{proof}
\subsection{The group $Z_7 \rtimes Z_3$}\label{sec:z7z3}
In this section we will deal with the group $G = Z_7 \rtimes Z_3$,
and suppose that $\ch(\F)\neq 3,7$. The character group $\hat A$ of the abelian normal subgroup $A=Z_7$ of $G$
will be identified with the additive group of residue classes modulo $7$,
so the generator $b$ of $G/A=Z_3$ acts on $\hat A$ by multiplication with $2 \in(\mathbb{Z}/7\mathbb{Z})^{\times}$.
Then we have three $G/A$-orbits in $\hat A$, namely $A_0:=\{ 0\}$, $A_{+} := \{1,2,4 \}$, $A_{-} := \{ 3,5,6\}$.
Accordingly $G$ has two non-isomorphic irreducible representations of dimension $3$, denoted by $V_{+}$ and $V_{-}$.
Let $W$ be an arbitrary representation of $G$; it has a decomposition
\begin{align}
W= V_+^{\oplus n_+} \oplus V_-^{\oplus n_-} \oplus V_0
\end{align}
where $V_0$ is a representation of $Z_3$ lifted to $G$.
Any monomial $m\in \F[W]$ has a canonical factorization
$m=m_+m_- m_0$ given by the canonical isomorphism $\F[W] \cong \F[V_+^{\oplus n_+}] \otimes \F[V_-^{\oplus n_-}] \otimes \F[V_0]$;
the degrees of these factors will be denoted by $d_+(m), d_-(m), d_0(m)$.
Finally we set $I = \F[W]^A$, $R = \F[W]^G$ and let $\tau=\tau^G_A: I \to R$ denote the transfer map.
\begin{proposition} \label{prop:lapos}
Let $m\in \F[W]$ be a $Z_7$-invariant monomial with $\deg(m) \ge 7$, $d_0(m)=0$ and $d_+(m),d_-(m) \ge 1$.
Then $m\in I_2I_++I_+R_+$.
\end{proposition}
\begin{proof}
Denote by $S$ the support of the weight sequence $\Phi(m)$
and by $\nu_w$ the multiplicity of $w \in \hat{A}$ in $ \Phi(m)$.
Observe that $|S|\ge 2$ since $d_+(m),d_-(m)$ are both positive.
This also implies that $m \in I_+^2$, since any irreducible zero-sum sequence of length at least $7$
is similar to $(1^7)$.
We have the following cases:
(i) if $|S| \ge 4$ then $S \cap (-S) \neq \emptyset$, hence already $m \in I_2I_+$.
(ii) if $|S|=3$ then up to similarity,
we may suppose that $S \cap A_+ = \{1\}$ and $S \cap A_{-} = \{ 3,5\}$.
If a factorization $m=uv$ exists where $u,v$ are $Z_7$-invariant and
$1 \in \Phi(u)$, $(35) \subseteq \Phi(v)$, then obviously $m - u \tau(v) \in I_2I_+$.
This certainly happens if $\Phi(m)$ contains $(1^7)$ or one of the irreducible zero-sum sequences with support $\{ 3,5\}$, namely
$
(3 5^5),
(3^2 5^3)$, or
$(3^3 5)$.
Otherwise it remains that $\nu_1 \le 6$, $\nu_3 \le 2$ and $\nu_5 \le 4$.
Now, if $\Phi(u)=(135^2)$ then necessarily either $1\in \Phi(v)$ or $(35) \subseteq \Phi(v)$,
and in both cases $m - u \tau(v) \in I_2I_+$.
It remains that $\nu_5= 1$, and therefore $\Phi(m)$ equals $(1^3 3^2 5)$ or $(1^6 3 5)$.
The first case is excluded since $\deg(m) \ge 7$.
In the second take $\Phi(u) = (1^4 3)$, $\Phi(v) = (1^2 5)$
and observe that $\Phi(uv^{b^2}) $ falls under case (i),
while $\Phi(uv^b) = (1^4 2^2 3^2)$ is similar to the sequence $(1^2 3^2 5^4)$
which was already dealt with.
(iii) if $|S| =2$ then again $m = uv$ for some $u,v \in I_+$.
Denote by $U$ and $V$ the support of $\Phi(u)$ and $\Phi(v)$, respectively.
If $|U| \ge 2$ or $|V| \ge 2$ then after replacing $m$
by $m - u \tau(v)$ we get back to case (ii) or (i).
Otherwise $\Phi(m) = (a^{7i} b^{7j})$ for some $a\in A_+, b \in A_-$ and $i,j \ge 1$;
but then an integer $1 \le n \le 6$ exists such that $(ab^n)(a^{7i-1}b^{7j-n})$ is a $Z_7$-invariant factorization,
and we are done as before.
\end{proof}
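As an illustrative aside (not part of the proof), the three irreducible zero-sum sequences with support $\{3,5\}$ used in case (ii) can be confirmed by a brute-force search; a minimal Python sketch:
\begin{verbatim}
from itertools import product

# A sequence (3^a 5^b) over Z_7 is zero-sum iff 3a + 5b = 0 (mod 7); it is
# irreducible iff no proper nonempty subsequence (3^i 5^j) is zero-sum.
def zero_sum(a, b):
    return (3 * a + 5 * b) % 7 == 0

minimal = []
for a, b in product(range(1, 8), repeat=2):   # multiplicities up to 7 suffice
    if zero_sum(a, b) and not any(
            zero_sum(i, j)
            for i in range(a + 1) for j in range(b + 1)
            if (i, j) not in ((0, 0), (a, b))):
        minimal.append((a, b))

print(minimal)   # [(1, 5), (2, 3), (3, 1)], i.e. (3 5^5), (3^2 5^3), (3^3 5)
\end{verbatim}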
\begin{corollary}\label{cor:lapos}
If $m\in \F[W]$ is a $Z_7$-invariant monomial such that
$\deg(m) \ge 10$ and
$d_0(m) \ge 2$ or $ \min\{d_+(m),d_-(m)\} \ge 3 - d_0(m)$
then $m\in I_+R_+$.
\end{corollary}
\begin{proof}
By Corollary~\ref{cor:G:H+1} it is enough to prove that $m \in I_+^4$.
This is immediate if $d_0(m) \ge 2$.
If $ d_0(m) =1$ then applying Proposition~\ref{prop:lapos} two times
shows that $m \in I_1I_2^2I_+$.
Finally, if $d_0(m) =0$ then again after two applications of Proposition~\ref{prop:lapos}
we may suppose that $m=uv$ where $\deg(v) \ge 6$, $d_+(v),d_-(v) \ge 1$ and $u \in I_2^2$.
It is easily checked
that any irreducible zero-sum sequence over $Z_7$ of length at least $6$
is similar to $(1^7)$ or $(1^5 2)$,
none of which equals $\Phi(v)$ (for then $d_-(m) = d_-(u) = 2$, a contradiction).
Therefore $v \in I_+^2$ follows and again $m \in I_+^4$.
\end{proof}
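The claim that every irreducible zero-sum sequence over $Z_7$ of length at least $6$ is similar to $(1^7)$ or $(1^5 2)$ is likewise amenable to a machine check (an illustration only; similarity is taken here as equality after multiplying all entries by a fixed unit):
\begin{verbatim}
from itertools import combinations, combinations_with_replacement

# Entries range over 1..6: a sequence of length >= 2 containing 0 is reducible.
def minimal_zero_sum(seq):
    if sum(seq) % 7 != 0:
        return False
    return not any(sum(seq[i] for i in sub) % 7 == 0
                   for r in range(1, len(seq))
                   for sub in combinations(range(len(seq)), r))

def similar(s, t):                      # equal after multiplying by a unit
    return any(sorted(u * x % 7 for x in s) == sorted(t) for u in range(1, 7))

targets = [(1,) * 7, (1, 1, 1, 1, 1, 2)]
for L in (6, 7, 8):                     # length 8 yields nothing: D(Z_7) = 7
    for seq in combinations_with_replacement(range(1, 7), L):
        if minimal_zero_sum(seq):
            assert any(similar(seq, t) for t in targets), seq
print("checked")
\end{verbatim}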
\begin{lemma} \label{lemma:trukk}
Let $G= A \rtimes \langle g \rangle $ where $ \langle g\rangle \cong Z_3 $ and $A$ is an arbitrary abelian group.
If $3 \in \F^{\times}$ then for any $u,v,w \in I_+$
\[ uvw - u v^g w^{g^2} \in I_+ (R_+)_{\le \deg(vw)}. \]
\end{lemma}
\begin{proof}
The following identity can be checked by a mechanical calculation:
\begin{align*}
3\left(uvw - uv^gw^{g^2} \right)
&= uv\tau(w)+uw\tau(v)+u\tau(vw)\\
& -u\tau(vw^g)-uw^{g^2}\tau(v)-uv^g\tau(w) \qedhere
\end{align*}
\end{proof}
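The identity can also be verified symbolically; in the following sketch $u$, $v$, $w$ and their $g$-translates are modelled by commuting symbols, with $\tau(f)=f+f^g+f^{g^2}$ (an illustration, not part of the proof):
\begin{verbatim}
import sympy as sp

u = sp.Symbol('u')
v0, v1, v2 = sp.symbols('v0 v1 v2')    # v, v^g, v^{g^2}
w0, w1, w2 = sp.symbols('w0 w1 w2')    # w, w^g, w^{g^2}

tau_v   = v0 + v1 + v2
tau_w   = w0 + w1 + w2
tau_vw  = v0*w0 + v1*w1 + v2*w2        # tau(v w)
tau_vwg = v0*w1 + v1*w2 + v2*w0        # tau(v w^g)

lhs = 3 * (u*v0*w0 - u*v1*w2)          # 3(uvw - u v^g w^{g^2})
rhs = (u*v0*tau_w + u*w0*tau_v + u*tau_vw
       - u*tau_vwg - u*w2*tau_v - u*v1*tau_w)
assert sp.expand(lhs - rhs) == 0
print("identity verified")
\end{verbatim}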
\begin{proposition} \label{prop:hegyes}
Let $m\in \F[W]$ be a $Z_7$-invariant monomial factorized as $m_+=m_1...m_n$
through the isomorphism $\F[V_+^{\oplus n}] \cong \F[V_+]^{\otimes n}$.
If $\deg(m)\ge 10$, $d_0(m) \le 1$ and $\max_{i=1}^n \deg(m_i) \ge 3$ then $m\in I_+R_+$.
\end{proposition}
\begin{proof}
We shall denote by $x,y,z$ the variables of weight $1,2,4$ belonging to that copy of $V_+$ for which $\deg(m_i)$ is maximal,
while $X,Y,Z$ will stand for the variables of the same weights which belong to any other copy of $V_+$.
Since $d_0(m) \le 1$ by assumption,
using Proposition~\ref{prop:gapless} with $\mathcal{V}:=\{x,y,z\}$ we may assume that $m_{\mathcal{V}}$ has a gapless divisor $t$ of degree at least $3$.
Let $S\subseteq \hat A$ be the support of the weight sequence $\Phi(t)$; clearly $|S| \ge 2$.
If $|S| = 3$ then $m_{\mathcal{V}}$ is divisible by the $G$-invariant $xyz$, and we are done.
It remains that $|S| = 2$ hence by symmetry we may suppose that $m_{\mathcal{V}}$ is divisible by $t=x^2y$.
If $d_0(m)=1$ then $m$ contains an $A$-invariant variable $w$ and by Lemma~\ref{lemma:easy} $|\Sigma(\Phi(m/tw))| =7 $.
This gives an $A$-invariant factorization $m/w = uv$ such that $xy \mid u$ and $x \mid v$.
By Lemma \ref{lemma:trukk} we get that
$m \equiv uv^bw^{b^2} \mod I_+R_+$,
where $uv^bw^{b^2}$ contains $xyz$ for a suitable choice of $b \in \{ g, g^2\}$, so we are done.
It remains that $d_0(m) =0$. By a similar argument as in the proof of Proposition~\ref{prop:gapless}, we may assume
that $m_+$ has a gapless divisor of degree $4$,
while $m_{\mathcal{V}}$ still contains a gapless divisor of degree $3$.
Therefore we may suppose that $ m_+$ contains $u:=xyZ$ while $m_{\mathcal{V}}$ still contains $x^2y$.
Now if $m/u \in I_+^2$ then we get an $A$-invariant factorization $m=uvw$ such that $xy \mid u$ and $x \mid v$,
so we are done again by using Lemma \ref{lemma:trukk}.
Finally, if $m/u$ is irreducible then necessarily $\Phi(m/u) = (1^7)$, so that $m = x^2 y X^6 Z$.
Here we can employ the following relations:
\begin{align*}
x^2 y X^6 Z & =
xy X^4 \; \tau(x X^2Z ) -
xy z X^4 Z^2Y -
xy^2 X^5 Y^2 \\
xy^2 X^5 Y^2 & =
xy Y^2 \; \tau(yX^5) -
xy z Y^7 -
x^2 y Y^2 Z^5
\end{align*}
This proves that $m \equiv x^2 y Y^2 Z^5 \mod{I_+R_+}$, and as $xY^2Z^4 \in I_+^2$, the latter monomial
already belongs to $I_+R_+$ by the first part of this paragraph.
\end{proof}
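Both displayed relations are straightforward to verify symbolically; in the sketch below (illustrative only) the $Z_3$-action is $x\to y\to z\to x$, $X\to Y\to Z\to X$, and $\tau(f)=f+f^g+f^{g^2}$:
\begin{verbatim}
import sympy as sp

x, y, z, X, Y, Z = sp.symbols('x y z X Y Z')
g = {x: y, y: z, z: x, X: Y, Y: Z, Z: X}

def act(f):
    return f.subs(g, simultaneous=True)

def tau(f):                            # transfer over G/A = Z_3
    return f + act(f) + act(act(f))

r1 = x**2*y*X**6*Z - (x*y*X**4*tau(x*X**2*Z)
                      - x*y*z*X**4*Z**2*Y - x*y**2*X**5*Y**2)
r2 = x*y**2*X**5*Y**2 - (x*y*Y**2*tau(y*X**5)
                         - x*y*z*Y**7 - x**2*y*Y**2*Z**5)
assert sp.expand(r1) == 0 and sp.expand(r2) == 0
print("both relations hold identically")
\end{verbatim}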
\begin{corollary} \label{z7z3reg}
If $W$ is the regular representation $V_{\mathrm{reg}}$
of $Z_7 \rtimes Z_3$ then we have
$\beta(I_+, R) \le 9$.
\end{corollary}
\begin{proof}
Here we have $n_+ = n_- = 3$.
Let $m \in I_+$ be a monomial with $\deg(m) \ge 10$.
If Corollary~\ref{cor:lapos} can be applied then $m \in I_+R_+$ already holds.
Otherwise $d_0(m) \le 1$ and say $d_-(m) \le 2-d_0(m)$, whence $d_+(m) \ge 8$.
Then one of the monomials in the factorization $m_+=m_1m_2m_3$, say $m_{1}$ has degree at least $3$,
and we are done by Proposition~\ref{prop:hegyes}.
\end{proof}
It was observed by Schmid that $\beta(G) = \beta(G, V_{\mathrm{reg} })$ for any finite group $G$ if $\ch (\F) = 0$.
This is based on Weyl's theorem on polarization (see \cite{weyl}).
If $\ch(\F) > 0$, then Weyl's theorem on polarizations fails even in the non-modular case; instead, if $\ch(\F)$ does not divide $|G|$
then by a result of Grosshans in \cite{grosshans}
for any $G$-module $W$ containing $V_{\mathrm{reg}}$ as a submodule,
the ring $\F[W]^G$ is the $p$-root closure of its subalgebra generated by the polarization of
$\F[V_{\mathrm{reg}}]^G$.
Corollary~\ref{z7z3reg} improves on a result of Pawale, who proved in \cite{pawale} that in characteristic $0$ one has $\beta(G,W) = 9$ for $n_+ = n_- = 2$,
and from this concluded $\beta(G)= 9$ using a version of Weyl's Theorem on polarization.
For positive characteristic we will use the following result:
\begin{proposition}[Knop, Theorem 6.1 in \cite{knop}]\label{prop:knop}
Let $U$ and $V$ be finite dimensional $G$-modules. If $ n_0 \ge \max \{ \dim (V), \frac{\beta(G)}{\ch(\F) -1} \}$ and
$S$ is a generating set of $\F[U \oplus V^{\oplus n_0}]^G$ then $\F[U\oplus V^{\oplus n}]^G$ for any $n \ge n_0$
is generated by the polarization (with respect to the type-$V$ variables) of $S$.
\end{proposition}
\begin{proposition} \label{prop:Z7Z3_ch>2}
If $\ch(\F) \neq 2,3,7$ then $\beta(G) \le 9$.
\end{proposition}
\begin{proof}
We already know that $\beta(G)\leq 13$ from Corollary~\ref{cor:tuske}.
Therefore it is sufficient to show that $R_d \subseteq R_+^2$ whenever $10 \le d \le 13$.
Suppose first that $\ch(\F)>7$. Then $\max \{ \dim(V_+) , \dim(V_-), \frac{\beta(G)}{\ch(\F) -1} \} = 3 $
hence by Proposition~\ref{prop:knop} a generating set of $\F[W]^G$ can be obtained by polarizations
from a generating set of $\F[V_{\mathrm{reg}}]^G$, so $\beta(G) \le \beta(G,V_{\mathrm{reg}}) \le 9$ by Corollary~\ref{z7z3reg}.
Finally let $\ch(\F)=5$, so that $\max \{ \dim(V_+), \dim(V_-), \frac{\beta(G)}{\ch(\F) -1}\} \le 4$.
By Proposition~\ref{prop:knop} here we can obtain the generators of $R$ by polarizing
the generators of $S := \F[V_+^4 \oplus V_-^4 \oplus V_0]^G$.
$S$ is spanned by elements $f$ that are multihomogeneous in the sense that
for all monomials $m$ occurring in $f$ the triple $(d_+(m),d_-(m),d_0(m))$ is the same; denote it by $(d_+(f),d_-(f),d_0(f))$.
We know from formula (6.3) and Theorem 5.1 in \cite{knop} that
$f$ is contained in the polarization of $\F[V_{\mathrm{reg}}]$ (taken with respect to $V_+^{\oplus 3}$ and then to $V_-^{\oplus 3}$ separately),
if $d_+(f), d_-(f) \le 3(\mathrm{char}(\F) - 1) = 12$.
So for the rest we may suppose that say $d_+(f) \ge 13$.
Then let $f_+ = f_1f_2f_3f_4$ be the factorization given by the isomorphism $\F[V_+^{\oplus 4}] \cong \F[V_+]^{\otimes 4}$,
and observe that $\deg(f_i) \ge 4$ for some $i\le 4$,
whence $f \in I_+R_+$ by Proposition~\ref{prop:hegyes}.
\end{proof}
\subsection{The case of characteristic $2$}
The polarization argument at the end of the previous section does not cover the case $\mathrm{char}(\F) =2$.
Here we need a closer look at the interplay between our extended Goebel algorithm
and the elementary polarization operators
\[ \Delta_{i,j} := x_j \frac{\partial}{\partial x_i}+ y_j \frac{\partial}{\partial y_i} + z_j \frac{\partial}{\partial z_i}\]
where as before $\F[V_+^{\oplus n}] = \otimes_{i=1}^n\F[x_i,y_i,z_i]$ and the variables $x_i,y_i,z_i$ have weight $1,2,4$, respectively.
The operators $\Delta_{i,j}$ are $G$-equivariant, hence map
$G$-invariants to $G$-invariants.
Moreover, by the Leibniz rule it also holds that:
\begin{align} \label{pol} \Delta_{i,j}(I_+R_+) \subseteq I_+R_+ \end{align}
\begin{proposition} \label{prop:Z7Z3_ch=2}
If $\mathrm{char}(\F) = 2$ then
$\beta(I_+,R) \le 9 $.
\end{proposition}
\begin{proof}
Let $m \in I$ be a monomial with $\deg(m) \ge 10$. It is sufficient to show that $m\in I_+R_+$.
We may suppose by symmetry that $d_+(m) \ge d_-(m)$.
It suffices to deal with the cases not covered by Corollary~\ref{cor:lapos}
so we may suppose that $d_0(m) \le 1$, $d_-(m) \le 2 - d_0(m)$, whence
$d_+(m) \ge 8$.
By Proposition~\ref{prop:gapless} we can assume that $m_+$ contains a gapless monomial of degree $3$.
We have several cases:
(i) Let $m_+=m_1...m_n$ where each monomial $m_i$ belongs to a different copy of $V_+$.
If $\deg(m_i) \ge 3$ for some $i \ge 1$ then $ m\in I_+R_+$ by Proposition~\ref{prop:hegyes}.
So for the rest we may suppose that $\deg(m_i) \le 2$ for every $i=1,...,n$.
(ii) If $m_+$ contains the square of a variable, say $x_1^2$ then
a variable of weight $2$ or $4$ must also divide $m$, say $m =x_1^2 y_2 u$,
because we assumed that $m_+$ contains a gapless divisor of degree $3$.
Here we have
\begin{align*}
\Delta_{1,2} x_1^2 y_1 u = 2 x_1y_1x_2u + x_1^2 y_2 u = m
\end{align*}
as $\mathrm{char}(\F)=2$. In view of case (i) and \eqref{pol} this shows that $m \in I_+R_+$.
(iii) If $m_+$ is square-free, but still $\deg(m_i) =2$ for some $i$, say $x_1y_1 \mid m$, then
our goal will be to find three monomials $u,v,w \in I_+$ such that $m=uvw$ and $x_1 \mid u$, $y_1 \mid v$.
For then $m \equiv uv^bw^{b^2} \mod I_+R_+$ by Lemma~\ref{lemma:trukk}
where $b$ can be chosen so that $uv^bw^{b^2}$ contains $x_1^2$, and then $m$ will fall under case (ii).
Here are some conditions under which this goal can be achieved:
\begin{enumerate}
\item[(a)] If $d_0(m) =1$ then let $w$ be the $Z_7$-invariant variable in $m$;
given that $|\Sigma(\Phi(m/wx_1y_1))|=7$ by Lemma~\ref{lemma:easy}, suitable factors $u,v$ must exist.
\item[(b)] It remains that $d_0(m) = 0$. Again by Proposition~\ref{prop:gapless} (with $\mathcal{V}$ the set of variables in $\F[ V_+^n]$)
we may ensure that $m_+$ contains a gapless monomial of degree $4$, hence also a $Z_7$-invariant $u := x_1y_1Z$.
Suppose now that $m/u=vw$ for some $v,w \in I_+$.
Up to equivalence modulo $I_+R_+$ we may also suppose that
one of these two monomials, say $v$, contains a variable $X$ (or $Y$).
After swapping $x_1$ and $X$ (or $y_1$ and $Y$) in $u$ and $v$ we are done.
\item[(c)] If $d_-(m)>0$, then $m/u$ has a variable $t$ such that
some $f\in \{x_1t,y_1t, Zt \}$ belongs to $I$; as $\deg(m/f)\ge 8$, the desired factorization of $m$ is given by Lemma~\ref{lemma:easy}.
\item[(d)] It remains that $d_0(m)=d_-(m)=0$ and $\Phi(m/u)$ is an irreducible zero-sum sequence.
Since $\deg(m/u) \ge 7$ it follows that $\Phi(m/u)$ equals $(2^7)$, $ (1^7)$ or $(4^7)$.
In the first case we use the relation:
\[m=x_1y_1Z Y^7 = \tau(x_1 Y^3) y_1ZY^4 - y_1^2Y^4Z^4 - z_1y_1 X^3Y^4Z \]
where the two monomials on the right hand side fall under case (ii) or (iii/b).
The case $\Phi(m/u) = (1^7)$ is similar. Finally, if $\Phi(m/u) = (4^7)$ then we replace $m$ with $m - u\tau(m/u)$
to reduce to the other two cases.
\end{enumerate}
(iv) If $m$ is multilinear, then we can again ensure that $(124) \subseteq \Phi(m)$.
If $d_0(m) =0$ then this is achieved using Proposition~\ref{prop:gapless}.
Otherwise, if there is $Z_7$-invariant variable $w$ in $m$
then we may still suppose by Proposition~\ref{prop:gapless} that e.g. $x_1y_2x_3\mid m$
and the same argument as above at (iii/a) gives a factorization $m/w = uv$ such that
$x_1y_2 \mid u$ and $x_3 \mid v$, so our goal is achieved by Lemma~\ref{lemma:trukk}.
Now we may suppose that $m=x_1y_2z_3u$, say. We have:
\begin{align*}
\Delta_{1,2} z_1 x_1 y_3u + \Delta_{3,1} z_2 x_3 y_3 u &=
(z_1 x_2 y_3
+ z_2 x_3 y_1 )u
= m + \tau(x_1y_2z_3) u
\end{align*}
The monomials $z_1 x_1 y_3 u$ and $z_2 x_3 y_3 u$ fall under case (iii),
so $m \in I_+R_+$.
\end{proof}
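The computation in case (iv) can be spot-checked symbolically over the integers and then reduced modulo $2$. In the sketch below (illustrative only) $u$ is treated, for simplicity, as a single symbol free of the displayed variables; the extra terms arising from a general $u$ fall under the earlier cases:
\begin{verbatim}
import sympy as sp

x1, y1, z1, x2, y2, z2, x3, y3, z3, u = sp.symbols(
    'x1 y1 z1 x2 y2 z2 x3 y3 z3 u')
V = {1: (x1, y1, z1), 2: (x2, y2, z2), 3: (x3, y3, z3)}

def Delta(i, j, f):                    # the polarization operator Delta_{i,j}
    return sum(V[j][k] * sp.diff(f, V[i][k]) for k in range(3))

g = {x1: y1, y1: z1, z1: x1, x2: y2, y2: z2, z2: x2,
     x3: y3, y3: z3, z3: x3}
def tau(f):
    f1 = f.subs(g, simultaneous=True)
    return f + f1 + f1.subs(g, simultaneous=True)

m = x1*y2*z3*u
diff = sp.expand(Delta(1, 2, z1*x1*y3*u) + Delta(3, 1, z2*x3*y3*u)
                 - (m + tau(x1*y2*z3)*u))
# every coefficient of the difference is even, so equality holds in char 2
assert diff == 0 or all(c % 2 == 0 for c in sp.Poly(diff).coeffs())
print("identity holds modulo 2")
\end{verbatim}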
Comparing Proposition~\ref{prop:Z7Z3_ch>2} and Proposition~\ref{prop:Z7Z3_ch=2} with the lower bound
mentioned at the beginning of Chapter~\ref{ch:semidir}, we have proved:
\begin{theorem} \label{thm:z7z3}
If $\mathrm{char}(\F) \neq 3,7$ then $\beta(Z_7 \rtimes Z_3)=9$.
\end{theorem}
\section{Some further particular cases}
\subsection{The group $Z_5 \rtimes Z_4$, where $Z_4$ acts faithfully}
The following is proved (without explicitly being stated) by Schmid \cite{schmid} for ${\mathrm{char}}(\F)=0$ and by Sezer \cite{sezer} in non-modular positive characteristic:
\begin{proposition}\label{prop:betaD2n}
Suppose that $2n\in\F^\times$. For any module $V$ of the dihedral group $D_{2n}=Z_n\rtimes_{-1}Z_2$ we have
\[\beta(D_{2n}, V) \le \beta(\F[V]^{Z_n}_+,\F[V]^{D_{2n}})\le n+1.\]
\end{proposition}
Let $G:=Z_5 \rtimes Z_4$ where $Z_4=\langle b\rangle $
and conjugation by $b$ is an order $4$ automorphism of the normal subgroup $A=Z_5$. Take a $G$-module $V$ and
set $L:=\F[V]$, $R:=\F[V]^G$, $I:=\F[V]^A$, $S:=\F[V]^H$, where
$H\cong D_{10}$ is the subgroup of $G$ generated by $A$ and $b^2$.
\begin{proposition} \label{prop:z5z4}
If $\mathrm{char}(\F)\neq 2,5$ then $\beta(I_+,R)=8$.
\end{proposition}
\begin{proof} The lower bound $\beta(I_+,R)\ge 8$ follows from a result in \cite{CzD:2}.
By Corollary~\ref{cor:skatulya} we have $\beta(I_+,R)\le 5+6=11$. Therefore it is sufficient to show that
if $m$ is an $A$-invariant monomial with $9\le\deg(m)\le 11$, then $m\in I_+R_+$.
Suppose there are three variables $e,f,h$ such that $m=efhr$ and both $ef$ and $eh$ are $A$-invariant.
The relation \begin{align} \label{masodik} 2m =
\tau^H_A(ef) hr +
\tau^H_A(eh) fr -
\tau^H_A(fh^{b^2}) e^{b^2} r.
\end{align}
implies that
$m\in S_2I_{\ge 7}$, and since $\beta(I_+,S)\le 6$ by Proposition~\ref{prop:betaD2n}
we get $m\in I_+S_+^2\subseteq I_+R_+$ (the latter inclusion follows by Proposition~\ref{prop:knopeta}).
If $m$ contains two $A$-invariant variables then $m\in I_1^2I_{\ge 7}\subseteq I_{\ge 7}S_+$ by
Proposition~\ref{prop:knopeta}. As above, $I_{\ge 7}\subseteq I_+S_+$, so $m\in I_+S_+^2\subseteq I_+R_+$.
From now on suppose that none of the above two cases hold for $m$.
Then $m=m_0m_+$, where $m_0=1$ or $m_0$ is an $A$-invariant variable, $m_+$ involves no $A$-invariant variables, and
$8\le \deg(m_+)\le 11$. This forces that the support of $\Phi(m_+)$ has at most two elements (not opposite to each other).
The action of $G/A$ preserves $I_+R_+$, therefore it is sufficient to deal with the case when
$\Phi(m_+)=(1^k,2^l)$ or $\Phi(m_+)=(1^k,3^l)$ where $k\ge l$.
If $l\ge 2$ then $m_+=ef$ where $e,f$ are $A$-invariant and $\supp(\Phi(e))=\supp(\Phi(f))=\supp(\Phi(m_+))$;
now each monomial of $m-e\tau^G_A(f)$ belongs to $I_+R_+$ by the cases considered already. Finally,
if $l\le 1$, then $m_+=ef$ where $\Phi(f)=1^5$; again all monomials of $\tau^G_A(f)$ belong to $I_+R_+$ by the prior cases.
\end{proof}
\subsection{The alternating group $A_4$}\label{sec:a4}
Throughout this chapter let $G:=A_4$, the alternating group of degree four.
The double transpositions and the identity constitute a normal subgroup $A\cong Z_2\times Z_2$ in $G$, and
$G=A\rtimes Z_3$ where $Z_3 = \{ 1, g,g^2\}$. Denote by $a,b,c$ the involutions in $\hat A$; conjugation by $g$ permutes them cyclically.
Remark for future reference that the only irreducible zero-sum sequences over $\hat A$ are: $(0)$, $(a,a)$, $(b,b)$, $(c,c)$, $(a,b,c)$.
Hence the factorization of any zero-sum sequence over $Z_2 \times Z_2$ into maximally many irreducible ones is of the form
\begin{equation}\label{eq:klein factorization}
(0)^q(a,a)^r(b,b)^s(c,c)^t(a,b,c)^e \qquad \text{ where } e =0 \text{ or } 1.
\end{equation}
In particular the multiplicities of $a,b$ and $c$ must have the same parity.
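As an illustration (not needed for the argument), the list of irreducible zero-sum sequences over $\hat A\cong Z_2\times Z_2$ can be confirmed by an exhaustive search, encoding $0,a,b,c$ as $0,1,2,3$ with XOR as the group law:
\begin{verbatim}
from functools import reduce
from itertools import combinations, combinations_with_replacement

NAMES = '0abc'

def minimal_zero_sum(seq):
    if reduce(lambda s, t: s ^ t, seq, 0) != 0:
        return False
    return not any(reduce(lambda s, i: s ^ seq[i], sub, 0) == 0
                   for r in range(1, len(seq))
                   for sub in combinations(range(len(seq)), r))

found = [''.join(NAMES[x] for x in seq)
         for L in (1, 2, 3, 4)
         for seq in combinations_with_replacement(range(4), L)
         if minimal_zero_sum(seq)]
print(found)   # ['0', 'aa', 'bb', 'cc', 'abc']; nothing of length 4
\end{verbatim}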
Let $\F$ be a field with characteristic different from $2$ or $3$. Apart from the one-dimensional representations of $G$ factoring through the natural surjection $G\to Z_3$,
there is a single irreducible $G$-module $V$, hence an arbitrary finite dimensional $G$-module $W$ shall decompose as
\[ W= U \oplus V^{\oplus n} \]
where $U=W^A$ consists of one-dimensional $G$-modules.
$V$ is the $3$-dimensional summand in the natural $4$-dimensional permutation representation of $G$. Let $x,y,z$ denote the corresponding basis in $V^*$
and following our conventions introduced in Section \ref{sec:goebel}
let $\F[V^{\oplus n}] = \otimes_{i=1}^n \F[x_i,y_i,z_i]$,
so that $x_i,y_i,z_i$ are $A$-eigenvectors of weight $a,b,c$ which are permuted cyclically by $g$.
We write $I:=\F[W]^A$, $R:=\F[W]^G$ and $\tau:=\tau^G_A:I\to R$.
\begin{proposition}\label{prop:etaa4} If $n = 3$
then $R_{\ge 7} \subseteq (R_+)_{\le 4} R_+$.
\end{proposition}
\begin{proof}
It is sufficient to show that $ I_{\ge 7} \subseteq (R_+)_{\le 4}I+ (I_+)_{\le 4}R$.
Take a monomial $m \in I_{\ge 7}$ and suppose first that $\deg(m_+) \ge 7$.
We claim that $m \in I_+(R_+)_{\leq 4}$ in this case.
Consider the factorization $m_+=m_1m_2m_3$ given by the map $\F[V^{\oplus 3}] \cong \F[V]^{\otimes 3}$;
by symmetry we may assume that $\deg(m_1) \ge 3$.
If the $G$-invariant $x_1y_1z_1$ divides $m$ then we are done. Using relation \eqref{eq:tekeres} we may assume that
$\Phi(m_1)$ contains at least two different weights, say $x_1y_1^2 \mid m_1$.
Suppose that the multiplicity of $b$ is at least $3$ in $\Phi(m)$;
then the remainder $m/x_1y_1^2y_i$ must contain an $A$-invariant divisor $w$ with $\deg(w)=2$. Set $v:=y_1y_i$ and $u:=m/vw$ so that $u$ is divisible by $x_1y_1$.
By Lemma \ref{lemma:trukk} we can replace $m$ with the monomial $uv^gw^{g^2}$, which is divisible by the $G$-invariant $x_1y_1z_1$.
Finally, if the multiplicity of $b$ in $\Phi(m)$ is $2$, then the multiplicity of $a$ and $c$ must be even, too.
Then $\deg(m)\ge 8$ and $m$ has an $A$-invariant factorization $m=uvw$ with $x_1y_1^2\mid u$,
and $\deg(v)=\deg(w)=2$. By Lemma~\ref{lemma:trukk} $m$ can be replaced by $uv^gw^{g^2}$ or
$uv^{g^2}w^g$ so that we get back to the case treated before.
It remains that $\deg(m_+) \le 6$.
If $\deg(m_0) \geq 3$ then $m_0 \in (R_+)_{\le 3}I$ and we are done.
So for the rest $\deg(m_0) \le 2$.
Given that $\D(A) = 3$ and $\D_2(A) = 5$ by Proposition~\ref{prop:halter-koch},
we have $m \in I_1 (I_+)_{\le 3}^3I$ or $m \in I_1^2 (I_+)_{\le 3}^2I$.
In both cases $m \in I_+^4$ hence $m \in I_+R_+$ by Proposition~\ref{prop:knopeta}.
Taking into account that $\deg(m) \le 8$ we conclude that $m \in (R_+)_{\le 4}I+ (I_+)_{\le 4}R$, as claimed.
\end{proof}
\begin{theorem} \label{thm:betaka4}
If $\mathrm{char}(\F) \neq 2,3$ then
$\beta_k(A_4) = 4k+2$.
\end{theorem}
\begin{proof}
We prove first that $\beta(A_4) \le 6$. To this end consider the subalgebra $S:=\F[U\oplus V^{\oplus 3}]^G$
in $R = \F[U \oplus V^{\oplus n}]^G$ where $n \ge 3$.
Note that $\beta(S)\le 6$ by Proposition~\ref{prop:etaa4} and in addition $\beta(G)\le \D_3(A)=7$ by Corollary~\ref{cor:G:H+1}
and Proposition~\ref{prop:halter-koch}.
We have $R_d=\F [\GL_n\cdot S_d]$ for all $d$ if $\ch(\F)=0$ by Weyl's Theorem on polarization (cf. \cite{weyl})
and in positive characteristic for $d\le \dim(V)(\ch(\F)-1) $ by Theorem 5.1 and formula (6.3) in \cite{knop}; in our case $\dim(V)(\ch(\F)-1)\ge 12$.
It follows that $R_7= \F [\GL_n\cdot S_7] \subseteq \GL_n\cdot S_+^2\subseteq R_+^2$, whence $\beta(A_4)\le 6$, indeed.
For the rest it suffices to prove that $R_{\ge 7}\subseteq (R_+)_{\le 4}R$ holds for $n\ge 3$, as well,
because then by induction on $k$ we get $R_{\ge 4k+ 3} \subseteq (R_+)_{\le 4}^k R_+$.
Since $R$ is generated by elements of degree at most $6$,
it is enough to prove that $\bigoplus_{d=7}^{12}R_d\subseteq (R_+)_{\le 4}R$.
Applying polarization as above and Proposition~\ref{prop:etaa4} we get
$\bigoplus_{d=7}^{12}R_d\subseteq \F[ \GL_n\cdot \bigoplus_{d=7}^{12}S_d] =\F [\GL_n\cdot (S_+)_{\le 4}S] \subseteq (R_+)_{\le 4}R$.
To prove $\beta_k(A_4)\ge 4k+2$ take as $V$ the natural $4$-dimensional permutation representation of the symmetric group $S_4$.
It is well known that $R:=\F[V]^{A_4}$ has the Hironaka decomposition $R=P\oplus sP$, where $P$ is the subalgebra generated by the elementary symmetric polynomials
$p_1,p_2,p_3,p_4$, and $s$ is the degree $6$ alternating polynomial. It is easy to deduce from the Hironaka decomposition that $sp_4^{k-1}\notin R_+^{k+1}$.
\end{proof}
\begin{remark}
Working over the field of complex numbers Schmid \cite{schmid} already gave a computer assisted proof of the equality $\beta(A_4, U \oplus V^{\oplus 2})=6$.
\end{remark}
\begin{corollary}\label{cor:betaa4tilde}
Suppose that $\mathrm{char} (\F) \neq 2, 3$. Then $\beta(\tilde A_4)=12$.
\end{corollary}
\begin{proof}
We have $\beta(A_4)= 6$ by Theorem~\ref{thm:betaka4}, and since
$\tilde A_4$ has a two-element normal subgroup $N$ with $\tilde A_4/N\cong A_4$, the inequality $\beta(\tilde A_4)\leq 12$ follows by Lemma~\ref{lemma:red}.
It is sufficient to prove the reverse inequalities for the field $\mathbb{C}$
(as $\beta(G,\mathbb{C}) \le \beta(G,\F)$ by Theorem 4.7 in \cite{knop}).
Consider the ring of invariants of the $2$-dimensional complex representation of $\tilde A_4$ realizing it as the binary tetrahedral group.
It is well known (see the first row in the table of Lemma 4.1 in \cite{huffman} or Section 0.13 in \cite{popov-vinberg})
that this algebra is minimally generated by three elements of degree $6,8,12$,
whence $\beta(\tilde A_4)\ge 12$.
\end{proof}
\subsection{The group $(Z_2\times Z_2)\rtimes Z_9$}\label{sec:z2z2z9}
\begin{proposition}\label{prop:z2z2z9}
Let $G:=(Z_2\times Z_2)\rtimes Z_9$ be the non-abelian semidirect product,
and suppose that $\mathrm{char}(\F) \neq 2,3$. Then we have $\beta(G)\leq 17$.
\end{proposition}
Let $K \cong Z_2 \times Z_2$ with character group $\hat{K} = \{ 0,a,b,c \}$, and let $Z_9 = \langle g \rangle$.
Then conjugation by $g$ permutes $a,b,c$ cyclically, say $a^g=b$, $b^g=c$, $c^g=a$.
$G$ contains the distinguished abelian normal subgroup $A:= K \times C$ where $C:= \langle g^3\rangle\cong Z_3$.
The conventions of Section~\ref{sec:goebel} can be applied for $(G,A)$, since every irreducible representation of $G$ is $1$-dimensional or is induced from a $1$-dimensional representation of $A$.
For an arbitrary $G$-module $W$ we set
$J = \F[W]^{C}$, $I= \F[W]^A$, $R=\F[W]^G$;
we use the transfer maps $\mu:=\tau^G_C: J \to R$, $\tau:=\tau^G_A: I \to R$.
For any sequence $S$ over $\hat{A}$ we denote by $S|_C$
the sequence obtained from $S$ by restricting to $C$ each element $\theta \in S$.
\begin{proof} Since $G/C\cong A_4$ and $\beta(A_4)=6$, by Lemma~\ref{lemma:red} we have $\beta(G)\le 18$.
Therefore by Lemma~\ref{goebel} it is sufficient to show that if $m \in I$ is a terminal monomial of degree $18$, then $\tau(m)\in R_+^2$.
We may restrict our attention to the case when $\Phi(m)|_C=(h^{18})$ for a generator $h$ of $\hat C$,
as otherwise $m\in J_+^7$, and we get that $\tau(m)=\frac 14\mu(m)\in R_+^2$ by Proposition~\ref{prop:altnoetnum} applied for $G/C$ acting on $J$.
We claim that in this case $\Phi(m)$ contains at least $2$ zero-sum sequences
of length at most $3$,
whence $m \in I_+^4$ (since $\beta(A)=7$ by Proposition~\ref{prop:halter-koch}),
and consequently $\tau(m) \in R_+^2$ again by Proposition~\ref{prop:altnoetnum}.
To verify this claim, factor $m=uv$ where $\Phi(v)|_K=(0^n)$ and $\Phi(u)|_K$ does not contain $0$.
If $n\ge 3s$ then $\Phi(v)$ contains at least $s$ zero-sum sequences of length at most $3$.
Therefore it suffices to show that $\Phi(u)|_K$ contains the subsequence $(a,b,c)$ whenever $\deg(u) \ge 13$,
because then the corresponding subsequence of $\Phi(u)$ is a zero-sum sequence over $A$.
Suppose indirectly that this is false and that $\Phi(u)|_K$ contains e.g. only $a$ and $b$.
This means that $\Phi(u)|_K= (a^{2x}, b^{2y})$ where $2(x+y) =\deg(u)$.
By symmetry we may suppose that $x\ge y$ and consequently $x \ge 4$.
Now $\Phi(u)|_K$ decomposes as follows:
\begin{align*}
(a^4, b^2) &\cdot (a^{2x-4}, b^{2y -2}) & \text{ if } y \ge 2\\
(a^6) &\cdot (a^{2x-6}, b^{2y}) & \text{ if } y \le 1
\end{align*}
Observe that the first factor has degree $6$, hence it corresponds to a zero-sum sequence over $\hat A$, and it is a good divisor in the sense of
Definition~\ref{def:terminal}. This contradicts the assumption that $m$ was terminal.
\end{proof}
\section{Classification of the groups with large Noether number}
\subsection{A structure theorem}
The objective of this section is to prove the following purely group theoretical structure theorem:
\begin{theorem} \label{thm:structure}
For any finite group $G$ one of the following ten options holds:
\begin{enumerate}
\item $G$ contains a cyclic subgroup of index at most $2$;
\item $G$ contains a subgroup isomorphic to:
\begin{enumerate}
\item $Z_2 \times Z_2 \times Z_2$;
\item $Z_p \times Z_p$, where $p$ is an odd prime;
\item $A_4$ or $\tilde{A}_4$;
\end{enumerate}
\item $G$ has a subquotient isomorphic to:
\begin{enumerate}
\item an extension of $Z_2 \times Z_2$ by $Z_2 \times Z_2$;
\item a non-abelian semidirect product $Z_p \rtimes Z_q$ with odd primes $p,q$;
\item $Z_p \rtimes Z_4$, where $Z_4$ acts faithfully on $Z_p$;
\item $D_{2p} \times D_{2q}$, where $p,q$ are distinct odd primes;
\item an extension of $D_{2n}$ by $Z_2 \times Z_2$, where $n$ is odd;
\item the non-abelian semidirect product $(Z_2 \times Z_2) \rtimes Z_9$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{lemma}[Burnside] \label{burn_2-sylow}
If the Sylow $2$-subgroup $P$ of a group $G$ is cyclic then $G = N \rtimes P$ where
$N$ is the characteristic subgroup of $G$ consisting of its odd order elements.
\end{lemma}
\begin{proposition}[Zassenhaus, Satz 6 in \cite{zassenhaus35}]\label{zassenhaus}
Let $G$ be a finite solvable group with a Sylow $2$-subgroup $P$ containing a cyclic subgroup of index $2$.
Then $G$ has a normal subgroup $K$ with a cyclic Sylow $2$-subgroup such that $G/K$ is isomorphic to
one of the groups $Z_2$, $A_4$ or $S_4$.
\end{proposition}
\begin{lemma}[Roquette \cite{roquette}, or \cite{berk} Lemma 1.4 or \cite{huppert} III. 7. 6 ] \label{noZpZp}
If $G$ is a finite $p$-group which does not contain $Z_p \times Z_p$ as a normal subgroup,
then either $G$ is cyclic or $p=2$ and $G$ is isomorphic to one of the groups $D_{2^n}, SD_{2^n}, Dic_{2^n}$, where $n>3$,
or to the quaternion group $Q= Dic_{2^3}$.
\end{lemma}
\begin{corollary} \label{2class}
Any finite $2$-group $G$ falls under case (1), (2a) or (3a) of Theorem~\ref{thm:structure}.
\end{corollary}
\begin{proof}
Suppose that (1) does not hold for $G$.
Then by Lemma~\ref{noZpZp}, $G$ has a normal subgroup $N \cong Z_2 \times Z_2$.
Consider the factor group $G/N$: if it is cyclic, i.e. generated by $aN$ for some $a \in G$,
then necessarily $\langle a \rangle \cap N = \{1 \}$, for otherwise $\langle a \rangle$ would be a cyclic subgroup of index $2$ in $G$.
Now we can find a subgroup $Z_2 \times Z_2 \times Z_2$, which is case (2a):
if $a^2 \neq 1$ then this is because $a^2$ necessarily centralizes $N$,
and if $a^2 =1$ then already $a$ must centralize $N$, for otherwise $G = (Z_2 \times Z_2) \rtimes Z_2 \cong D_8$,
which has a cyclic subgroup of index $2$, a contradiction.
It remains that $G/N$ is non-cyclic.
If $G/N$ contains a
subgroup isomorphic to $Z_2\times Z_2$, then we get case (3a).
Otherwise by Lemma~\ref{noZpZp} $G/N$ contains a cyclic subgroup of index $2$.
Given that the Frattini subgroup $F/N$ of $G/N$ is cyclic, $F$ is an extension of a cyclic group by $Z_2\times Z_2$,
hence by the same argument as above, $F$ (and hence $G$) falls under case (2a),
unless $F$ is a non-cyclic group with a cyclic subgroup of index $2$. Then $G/\Phi$ (where $\Phi$ is the Frattini subgroup of $F$)
is an extension of $F/\Phi\cong Z_2\times Z_2$ by $G/F\cong Z_2\times Z_2$, and we get case (3a).
\end{proof}
\begin{proposition}\label{allcyclic}
Let $G$ be a group of odd order all of whose Sylow subgroups are cyclic.
Then either $G$ is cyclic or it falls under case (3b) of Theorem~\ref{thm:structure}.
\end{proposition}
\begin{proof}
By a theorem of Burnside (see p. 163 in \cite{burnside}) $G$ is isomorphic to $Z_n \rtimes Z_m$ for some coprime integers $n,m$.
Hence either $G$ is cyclic, or this semidirect product is non-abelian. In the latter case there are elements
$a \in Z_n$ and $b \in Z_m$ of prime-power orders $p^k$ and $q^r$, which do not commute.
After factorizing by the centralizer of $\langle a \rangle$ in $\langle b \rangle$ we may suppose that $\langle b \rangle$ acts faithfully on $\langle a \rangle$.
Then the order $p$ subgroup of $\langle a\rangle$ and the order $q$ subgroup of $\langle b\rangle$ generate a non-abelian semidirect product $Z_p\rtimes Z_q$.
\end{proof}
\begin{proposition}\label{prop:ZnP}
Let $G= Z_n \rtimes P$, where $n$ is odd and $P$ is a $2$-group with a cyclic subgroup of index $2$.
Then $G$ falls under case (1), (3c), (3d), or (3e) of Theorem~\ref{thm:structure}.
\end{proposition}
\begin{proof}
Let $C$ be the centralizer of $Z_n$ in $P$. The factor
$P/C$ is isomorphic to a subgroup of $\Aut(Z_n)$, which is abelian, and $G/C = Z_n \rtimes (P/C)$.
If $P/C$ contains an element of order $4$, then by a similar argument as in Proposition~\ref{allcyclic} we find
a subquotient isomorphic to $Z_p \rtimes Z_4$, where $Z_4$ acts faithfully on $Z_p$, which is case (3c).
Otherwise $P/C$ must be isomorphic to $Z_2$ or $Z_2 \times Z_2$.
If $P/C = Z_2$ then either $C$ is cyclic, and $Z_n \times C$ is a cyclic subgroup of index $2$ in $G$, which is case (1);
or else $C$ is non-cyclic, and then $G/\Phi(C)$ (where $\Phi(C)$ is the Frattini subgroup of $C$) is an extension of the dihedral group $G/C \cong D_{2n}$ by the Klein four-group $C/\Phi(C) \cong Z_2 \times Z_2$,
which is case (3e).
Finally, if $P/C\cong Z_2\times Z_2$, we get case (3d): indeed,
$Z_n=P_1\times\dots\times P_r$, where the $P_i$ are the Sylow subgroups of $Z_n$.
If the generators $a$ and $b$ of $Z_2 \times Z_2$ are acting non-trivially on precisely the same set of subgroups $P_i$,
then since the only involutive automorphism of an odd cyclic group is inversion, $ab$ will act trivially on all $P_i$, hence $ab \in C$, a contradiction.
Therefore a $P_i$ exists such that $a$ acts non-trivially, while $b$ acts trivially on it.
But an index $j \neq i$ also must exist such that $b$ is acting non-trivially on $P_j$;
after possibly exchanging $a$ with $ab$ we may suppose that $a$ acts trivially on $P_j$.
Then $G$ has a subquotient $(P_i \times P_j) \rtimes (Z_2 \times Z_2) \cong D_{2p^k} \times D_{2q^l}$, which leads to case (3d).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:structure} for solvable groups]
We shall argue by contradiction: let $G$ be a counterexample of minimal order.
Since $G$ does not fall under case (2b), all its odd order Sylow subgroups are cyclic by Lemma~\ref{noZpZp}.
As $G$ does not fall under case (1) or (3b), its order is even by Proposition~\ref{allcyclic}.
Finally, as $G$ does not fall under case (2a) or (3a), its Sylow $2$-subgroup contains a cyclic subgroup of index $2$ by Corollary~\ref{2class}.
Therefore Proposition~\ref{zassenhaus} applies to $G$, so a normal subgroup $K$ exists such that $G/K$ is isomorphic to $Z_2$, $A_4$ or $S_4$,
and using Lemma~\ref{burn_2-sylow}, $K=N\rtimes Q$, where $Q$ is a cyclic 2-group while
$N$ is a characteristic subgroup consisting of odd order elements,
which is also cyclic, for otherwise it would fall under case (3b).
The case $G/K\cong S_4$ is ruled out by the minimality of $G$
(since otherwise the subgroup $H$ of $G$ with $H/K\cong A_4$ would fall under case (1), a contradiction).
The case $G/K \cong Z_2$ is also ruled out, since then $G \cong Z_n \rtimes P$
where the Sylow $2$-subgroup $P$ of $G$ has a cyclic subgroup of index $2$,
so it falls under case (1), (3c), (3d), or (3e) by Proposition~\ref{prop:ZnP}.
It remains that $G/K\cong A_4$.
Suppose first that $N$ is trivial. Then $K=Q$ and $P/Q\cong Z_2 \times Z_2$ is normal in $G/Q\cong A_4$,
hence $P$ is normal in $G$ and by the Schur-Zassenhaus theorem $G = P \rtimes Z_3$.
Let $\langle a \rangle$ be the cyclic subgroup of index $2$ in $P$:
the subgroup $\langle a^4 \rangle$ has no non-trivial odd order automorphism,
hence the factor group $P/\langle a^4 \rangle$ must have a non-trivial automorphism of order $3$.
But unless $P$ coincides with the group $Z_2 \times Z_2$ or $Dic_8$,
the factor $P/\langle a^4\rangle$ is isomorphic to $D_8$ or $Z_4 \times Z_2$, which do not have an automorphism of order $3$
(for a list of the $2$-groups with a cyclic subgroup of index $2$ see \cite{brown}).
It follows that $G = (Z_2 \times Z_2) \rtimes Z_3 = A_4$ or $G=Dic_8 \rtimes Z_3 \cong \tilde{A}_4$,
which is case (2c), a contradiction.
Finally, suppose that $N$ is nontrivial.
Since $N$ is characteristic in $K$, it is normal in $G$,
and $G/N$ is isomorphic to $A_4$ or $\tilde{A}_4$ by our previous argument.
Then $N$ is necessarily cyclic of prime order, for otherwise a proper subgroup $M \le N$ would exist which is normal in $G$,
and $G/M$ would contain a cyclic subgroup of index at most $2$ by the minimality assumption on $G$,
but this is impossible since $A_4$ is a homomorphic image of $G/M$.
Consequently it also follows that $N=Z_3$, for otherwise $|N|$ and $|G/N|$ are coprime,
so that $G = N \rtimes (G/N)$ by the Schur-Zassenhaus theorem,
and again $G$ would fall under case (2c), a contradiction.
Let $C$ denote the centralizer of $N$ in $G/N$: on the one hand $G/C$ must be isomorphic to a subgroup of $\Aut(Z_3) = Z_2$,
but on the other hand $Z_2$ is not a homomorphic image of $A_4$ or $\tilde{A}_4$, hence $G=C$.
This means that $N$ is central in $G$, and therefore the Sylow $2$-subgroup $P$ is normal in $G$.
Given that the Sylow $3$-subgroup of $G$ is cyclic and of order $9$ we conclude that $G = P \rtimes Z_9$
where $P$ equals $Dic_8$ or $Z_2 \times Z_2$, and this gives case (3f), a contradiction.
\end{proof}
\begin{proof} [Proof of Theorem~\ref{thm:structure} for non-solvable groups]
Suppose to the contrary that Theorem~\ref{thm:structure} fails for a non-solvable group $G$,
which has minimal order among the groups with this property.
Then any proper subgroup $H$ of $G$ is solvable: indeed, otherwise (2) or (3) of Theorem~\ref{thm:structure} holds for $H$, hence also for $G$, a contradiction.
It follows that $G$ has a solvable normal subgroup $N$ such that $G/N$ is a minimal simple group (i.e. all proper subgroups of $G/N$ are solvable).
If $G/N\cong A_5$, then denote by $H$ the inverse image in $G$ of the subgroup $A_4\subseteq A_5$ under the natural surjection $G\to G/N$.
Then $H$ is solvable, and has $A_4$ as a factor group. Thus $H$ has no cyclic subgroup of index at most two.
Therefore by the solvable case of Theorem~\ref{thm:structure},
(2) or (3) holds for $H$, hence it holds also for $G$, a contradiction.
According to Corollary 1 in \cite{thompson},
any minimal simple group is isomorphic to one of the following:
\begin{enumerate}
\item[(a)] $L_2(2^p)$, $p$ any prime.
\item[(b)] $L_2(3^p)$, $p$ any odd prime.
\item[(c)] $L_2(p)$, $p>3$ prime with $p^2+1\equiv 0$ (mod $5$).
\item[(d)] $Sz(2^p)$, $p$ any odd prime.
\item[(e)] $L_3(3)$.
\end{enumerate}
The group $L_2(2^2)$ is isomorphic to the alternating group $A_5$.
Finally we show that for the remaining minimal simple groups one of (2a), (2b), (3) holds, hence $G/N$ can not be isomorphic to any of them.
The group $L_2(2^p)$ contains as a subgroup
the additive group of the field of $2^p$ elements. Hence when $p\ge3$ then (2a) holds.
Similarly, $L_2(3^p)$ contains as a subgroup the additive group of the field of $3^p$ elements, hence (2b) holds.
The subgroup of unipotent upper triangular matrices in $L_3(3)$ is a non-abelian group of order $27$, hence (2b) holds for it.
The subgroup in $SL_2(p)$ consisting of the upper triangular matrices is isomorphic to the semidirect product $Z_p\rtimes Z_{p-1}$. Its image in $L_2(p)$ contains the non-abelian semidirect product $Z_p\rtimes Z_q$ for any odd prime divisor $q$ of $p-1$. When $p$ is a Fermat prime, then $L_2(p)$ contains $Z_p\rtimes Z_4$ (where $Z_4$ acts faithfully on $Z_p$), except for $p=5$, but we need to consider only primes $p$ with $p^2+1\equiv 0$ (mod $5$).
The Sylow $2$-subgroup of $Sz(q)$ is a so-called Suzuki $2$-group of order $q^2$, that is, a non-abelian 2-group with more than one involution, having a cyclic group of automorphisms which permutes its involutions transitively. Its center consists of the involutions plus the identity, and it has order $q$, see for example \cite{higman}, \cite{collins}.
It follows that the Sylow $2$-subgroup $Q$ of $Sz(2^p)$ ($p$ an odd prime) properly
contains an elementary abelian $2$-group of rank $p$, hence (2a) holds for it.
\end{proof}
\subsection{Proof of the classification theorem} \label{sec:proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
It suffices to consider the cases listed in Theorem~\ref{thm:structure}:
\begin{enumerate}
\item if $G$ contains a cyclic subgroup of index at most $2$ then $\beta(G) \ge \frac{1}{2}|G|$ by Proposition 5.1 in \cite{schmid}
(in fact $\beta(G)-\frac 12|G|\in\{1,2\}$ by \cite{CzD:2})
\item if $G$ contains a subgroup $H$ of index $k$ such that:
\begin{enumerate}
\item $H \cong Z_2 \times Z_2 \times Z_2$ then by Proposition~\ref{prop:Z2Z2Z2} and Corollary~\ref{cor:G:H+1}
\[ \frac{\beta(G)}{|G|} \le \frac{1}{8k} \beta_k(Z_2 \times Z_2 \times Z_2) = \frac{1}{4} + \frac{3}{8k} \]
\item $H \cong Z_p \times Z_p$ then by Proposition~\ref{prop:halter-koch} and Corollary~\ref{cor:G:H+1}
\[ \frac{\beta(G)}{|G|} \le \frac{1}{kp^2} \beta_k(Z_p \times Z_p) = \frac{1}{p} + \frac{p-1}{kp^2} \]
\item $H \cong A_4$ then by Theorem~\ref{thm:betaka4} and Corollary~\ref{cor:G:H+1}
\[ \frac{\beta(G)}{|G|} \le \frac{1}{12 k} \beta_k(A_4) = \frac{1}{3} + \frac{1}{6k} \]
\end{enumerate}
It is easily checked that in all three cases the inequality $\frac{\beta(G)}{|G|} \ge \frac{1}{2}$ holds if and only if $k=1$,
and in case (b) it is also necessary that $p=2 \text{ or } 3$.
Finally, let $H = \tilde{A}_4$; by Lemma~\ref{lemma:myred} we have $\beta_k(\tilde{A}_4) \le 2 \beta_k(A_4)$
hence $\beta(G) \le \beta_{k}(\tilde{A}_4) \le 8k+4$ by Corollary~\ref{cor:G:H+1} and Theorem~\ref{thm:betaka4},
so we get the same upper bound on $\beta(G)/|G|$ as in the case when $H = A_4$.
\item
For any subquotient $K$ of $G$ we have $\beta(G)/|G| \le \beta(K)/|K|$ by Lemma~\ref{lemma:red};
\begin{enumerate}
\item if $K/N \cong Z_2\times Z_2$ for some normal subgroup $N \cong Z_2\times Z_2$
then by Lemma~\ref{lemma:myred} and Proposition~\ref{prop:halter-koch}:
\[ \frac{\beta(K)}{|K|} \le \frac{1}{16}\beta_{\beta(Z_2 \times Z_2)} (Z_2 \times Z_2) =
\frac{1}{16}\beta_3 (Z_2 \times Z_2) = \frac{7}{16}\]
\item if $K \cong Z_p\rtimes Z_q$ then $\beta(K)/|K| < \frac{1}{2}$ by Theorem~\ref{thm:zpzq}
\item if $K \cong Z_p\rtimes Z_4$, where $Z_4$ acts faithfully, then by Corollary~\ref{cor:skatulya}
\[\frac{\beta(K)}{|K|}\le \frac{p+6}{4p} \le \frac{13}{28}\]
for $p \ge 7$, and $\beta(K)/|K| = 2/5$ for $p=5$ by Proposition~\ref{prop:z5z4}
\item if $K \cong D_{2p}\times D_{2q}$ where $p,q$ are distinct odd primes (hence $p \ge 3$ and $q \ge 5$) then by Lemma~\ref{lemma:red} and Proposition~\ref{prop:betaD2n}:
\[ \frac{\beta(G)}{|G|} \le \frac{1}{4pq} \beta(D_{2p}) \beta(D_{2q}) \le \frac{ (p+1)(q+1) }{4pq} \le \frac 2 5\]
\item if $K/N \cong D_{2p}$ for some normal subgroup $N \cong Z_2\times Z_2$
then by Lemma~\ref{lemma:myred} and Proposition~\ref{prop:betaD2n}:
\[ \frac{\beta(G)}{|G|} \le \frac{1}{8p} \beta_{\beta(D_{2p})} (Z_2 \times Z_2) \le \frac{2p+3}{8p} \le \frac{3}{8}\]
\item if $K \cong (Z_2\times Z_2)\rtimes Z_{9}$ then $\beta(K)/|K| \le \frac{17}{36}$ by Proposition~\ref{prop:z2z2z9}
\end{enumerate}
To sum up, $\beta(G)/|G| < 1/2$ whenever $G$ falls under case $(3)$ of Theorem~\ref{thm:structure}. \qedhere
\end{enumerate}
\end{proof}
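For the reader's convenience, the elementary fraction comparisons quoted in case (3) can be checked mechanically; an illustrative Python sketch (not part of the proof):
\begin{verbatim}
from fractions import Fraction as F

assert F(7, 16) < F(1, 2)                                    # case (3a)
assert all(F(p + 6, 4 * p) <= F(13, 28) < F(1, 2)
           for p in (7, 11, 13, 17, 19, 23))                 # case (3c)
assert max(F((p + 1) * (q + 1), 4 * p * q)
           for p in (3, 5, 7, 11) for q in (3, 5, 7, 11)
           if p != q) == F(2, 5)                             # case (3d)
assert all(F(2 * p + 3, 8 * p) <= F(3, 8)
           for p in (3, 5, 7, 11, 13))                       # case (3e)
assert F(17, 36) < F(1, 2)                                   # case (3f)
print("all quoted bounds are below 1/2")
\end{verbatim}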
\section{Introduction}
Most of the success of physics in the 20th century has been
achieved as a result of the application of two philosophies. The
first is the {\it Quantization Philosophy} and the second is the
{\it Geometerization Philosophy}. The consequences of applying the
first is the {\it Quantum Theory}, while the consequences of
applying the second is the {\it General Theory of Relativity},
(GR). The study and understanding of the four known fundamental
interactions are not equally successful using, only, one of these
two rival philosophies. Electromagnetism, weak and strong
interactions are well understood using the quantization
philosophy, while gravity is not understood using this philosophy.
In the context of geometerization of physics, GR is considered as
a good theory for gravity, while there are no such successful
geometric theories for the other three interactions.
It seems that a third philosophy is needed to unify the physics of
the four fundamental interactions. This philosophy may lead to new
physics. This would be, undoubtedly, a difficult task. It would be
of importance to reach the conclusion that the two rival
philosophies are completely exhausted, before trying a third one.
This may be a less difficult task. It needs a careful examination
of applying the existing philosophies. Examination of the
geometric approach to physics shows that this approach is {\bf
not} exhausted yet. Some types of geometry admit some quantum
properties. This is what I am going to show in the present work.
The following statement summarizes the philosophy of
geometerization of physics:
\begin{center}
{\it "To understand nature one should start with geometry and end
with physics"}.
\end{center}
In applying this philosophy, one should look for an appropriate
geometry. Einstein, in applying his geometerization philosophy,
used three types of geometry. Some of the main properties of these
geometries are summarized in the following table.
\begin{center}
Table I: Comparison between 3-types of geometry\\
\begin{tabular}{|c|c|c|c|} \hline
& & & \\ Geometry [Ref.] & Metric & Connection & Building
Blocks (\#)
\\ & & &
\\ \hline & & & \\ Riemannian [1] & Symmetric & Symmetric &
Metric tensor (10)
\\ & & & \\ \hline & & & \\ Absolute Parallelism [2] & Symmetric &
Non-symmetric &
Tetrad vectors (16)
\\ & & & \\ \hline & & & \\ Einstein Non-symmetric [3] & Non-symmetric &
Non-symmetric
& Metric tensor (16)\\
& & & \\ \hline
\end{tabular}
\end{center}
We mean by the term {\it "Building Block"} the geometric object,
using which one can construct the whole geometry. In the last
column of Table I, we assume that the dimension of space $ n = 4.$
Riemannian geometry has been used by Einstein to construct his
successful theory of gravity, GR. It is well known that the number
of building blocks in this geometry is just sufficient to describe
gravity. For this reason, we are going to consider the other two
geometries, in Table I, since the number of building blocks in
each is enough to accommodate other interactions, together with
gravity. These interactions may have some quantum properties.
The term {\it "Non-Symmetric Geometry"} will be used to indicate
that the geometry admits non-symmetric connection. In such a
geometry, one can define three types of tensor derivatives
(derivatives that preserve tensor properties):
$$
A^{\mu}_{+|~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}C^\mu _{.\alpha \nu} ,\eqno{(1.1)}
$$
$$
A^{\mu}_{-|~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}C^\mu_{.\nu \alpha} ,\eqno{(1.2)}
$$
$$
A^{\mu}_{0|~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}C^\mu _{.(\alpha \nu)} ,\eqno{(1.3)}$$
where $A^\mu$ is
an arbitrary vector and $C^\mu_{.\nu\alpha}$ is the
non-symmetric connection. Parentheses ( ) are used for symmetrization and
brackets [ ] for anti-symmetrization. The comma is
used for ordinary (not tensor) partial differentiation.
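As a side remark, subtracting (1.2) from (1.1) gives
$$
A^{\mu}_{+|~ \nu} - A^{\mu}_{-|~ \nu} = A^{\alpha}\left(C^\mu _{.\alpha \nu} - C^\mu_{.\nu \alpha}\right),
$$
so the three derivatives coincide if and only if the connection is symmetric; their differences are measured by the skew-symmetric part of the connection, i.e. the torsion (cf. (2.8) below).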
Now, what is the starting point for examining non-symmetric
geometries to look for any quantum features? It is well known
that quantum properties of the microscopic world were discovered when
Planck tried to interpret black body radiation, a phenomenon which
is closely connected to the motion of electrons. On the other
hand, in the context of geometerization of physics, motion is
described using paths (curves) of an appropriate geometry. So, a
good starting point, may be a search for path equations in the
geometries under consideration.
Bazanski [4],[5] has established a new approach to derive the
equations of geodesic and geodesic deviation simultaneously by
carrying out variation on the following Lagrangian:
$$
L_{B} = g_{\mu\nu} U^{\mu} {\frac{D {\Psi}^{\nu}}{D S}}, \eqno{(1.4)}
$$
where $U^{\mu} \ {\mathop{=}\limits^{\rm def}}\ {\tder{x^{\mu}}{S}}$, $g_{\mu\nu}$ is the
metric tensor, $\Psi^{\mu}$ is the deviation vector and
$\frac{D}{D S}$ is the covariant differential operator using
Christoffel Symbol. We are going to generalize the Bazanski
approach, by replacing the covariant derivative, used in his
Lagrangian, by tensor derivatives of the types given by (1.1),
(1.2) and (1.3), admitted by the geometry under consideration.
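For orientation (a standard fact about the Bazanski approach [4],[5], stated here without proof): variation of (1.4) with respect to the deviation vector $\Psi^{\mu}$ yields the geodesic equation
$$
\frac{dU^{\mu}}{dS} + \{^{\mu}_{\alpha\beta}\} U^{\alpha} U^{\beta} = 0,
$$
while variation with respect to the coordinates $x^{\mu}$ yields the equation of geodesic deviation. The generalized Lagrangians considered below are treated in exactly the same way.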
The work in this review is organized as follows: Section 2 gives a
brief review of the two non-symmetric geometries under
consideration, together with the new path equations resulting from
each one. Section 3 gives some remarks about the quantum features
appearing in these geometric paths. A method for diffusing the
quantum properties, in the whole geometry, is given in Section 4.
The general quantum path of the absolute parallelism geometry is
linearized in Section 5. Section 6 gives confirmation and
applications of the quantum paths. The work is discussed and
concluded in Section 7.
\section{ Geometries with Built-in Quantum Roots }
\subsection{The Absolute Parallelism Geometry}
Absolute parallelism (AP) space is an n-dimensional manifold, each
point of which is labelled by n independent variables $x^{\nu} (\nu =1,2,3,...,n)$,
and at each point we define n linearly independent contravariant vectors
$\h{i}^\mu$ ($i=1,2,3,...,n$ denotes the vector number and $\mu
=1,2,3,...,n$ denotes the coordinate component) subject to the
condition, $$
\h{i}^{\stackrel{\mu}{+}}_{.|~ \nu}=0, \eqno{(2.1)}
$$
where the stroke and the (+) sign denote absolute differentiation using
a non-symmetric connection to be defined later. Equation (2.1) is the condition
for absolute parallelism.
The covariant components of $\h{i}^\mu$ are defined such that
$$
\h{i}^\mu \h{i}_\nu = \delta^{\mu}_{\nu}, \eqno{(2.2)}
$$
and
$$
\h{i}^{\nu} \h{j}_{\nu} = \delta_{ij}. \eqno{(2.3)}
$$ Using these vectors, the following second order symmetric
tensors are defined: $$ g^{\mu \nu} \ {\mathop{=}\limits^{\rm def}}\ \h{i}^\mu \h{i}^{\nu},
\eqno{(2.4)} $$
$$ g_{\mu \nu} \ {\mathop{=}\limits^{\rm def}}\ \h{i}_\mu \h{i}_{\nu}, \eqno{(2.5)} $$
consequently,
$$ g^{\mu \alpha} g_{\nu \alpha} = \delta^{\mu}_{\ \nu }. \eqno{(2.6)} $$
These second order tensors can serve as the metric tensor and its
conjugate of Riemannian space, associated with the AP-space, when
needed. This type of geometry admits, at least, four affine
connections. The first is a non-symmetric connection given as a
direct solution of the AP-condition (2.1), i.e. $$
\Gamma^{\alpha}_{.~\mu \nu} = \h{i}^{\alpha} \h{i}_{\mu,\nu}.
\eqno{(2.7)} $$ The second is its dual $
\hat{\Gamma}^{\alpha}_{.~\mu \nu}( = \Gamma^{\alpha}_{.~\nu \mu}) $,
since (2.7) is non-symmetric. The third one is the symmetric part of
(2.7), $ \Gamma^{\alpha}_{.(\mu \nu)}$. The fourth is Christoffel
symbol defined using (2.4),(2.5) ( as a result of imposing a
metricity condition). The torsion tensor is the skew symmetric part
of the affine connection (2.7), i.e. $$ \Lambda^{\alpha}_{.~\mu
\nu} \ {\mathop{=}\limits^{\rm def}}\ \Gamma^{\alpha}_{.~\mu \nu} - \Gamma^{\alpha}_{.~\nu \mu}.
\eqno{(2.8)} $$ Another third order tensor (contortion) is defined
by, $$ \gamma^{\alpha}_{.~\mu \nu} \ {\mathop{=}\limits^{\rm def}}\ \h{i}^{\alpha} \h{i}_{\mu ;
\nu}. \eqno{(2.9)} $$ The semicolon is used to characterize
covariant differentiation using Christoffel symbol. The two tensors
are related by the formula,
$$\gamma^{\alpha}_{.\mu \nu}= \frac{1}{2} (\Lambda^{\alpha}_{.\mu
\nu } - \Lambda^{~ \alpha}_{\nu .\mu} - \Lambda^{~\alpha}_{\mu
.\nu}). \eqno{(2.10)}
$$ A basic vector could be
obtained by contraction of one of the above third order tensors,
i.e.
$$ C_{\mu} \ {\mathop{=}\limits^{\rm def}}\ \Lambda^{\alpha}_{.\mu \alpha }=
\gamma^{\alpha}_{. \mu \alpha}. \eqno{(2.11)}$$
The curvature tensor of the AP-space is, conventionally, defined by,
$$
B^{\alpha}_{.\mu \nu \sigma} {\ } \ {\mathop{=}\limits^{\rm def}}\ {\ } \Gamma^{\alpha}_{.\mu \sigma, \nu} -
\Gamma^{\alpha}_{.\mu \nu, \sigma} + \Gamma^{\alpha}_{\epsilon \nu} \Gamma^
{\epsilon}_{. \mu \sigma} - \Gamma^{\alpha}_{. \epsilon \sigma}
\Gamma^{\epsilon}_{. \mu \nu} \equiv 0.\eqno{(2.12)}
$$
This tensor vanishes identically because of (2.1).
The autoparallel paths, of this geometry, are given by the
equation,
$$ \frac{d^{2}x^{\mu}}{d \lambda^{2}}+ \Gamma^{\mu}_{\alpha \beta}
\frac{d x^{\alpha} }{d \lambda}\frac{d x^{\beta} }{d \lambda} =
0.\eqno{(2.13)} $$
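To make the machinery concrete, here is a small two-dimensional sketch (an illustration, not from the original papers): take the tetrad $(dr,\, r\,dt)$ of the Euclidean plane in polar coordinates $(r,t)$, for which the metric (2.4)--(2.5) is $\mathrm{diag}(1,r^2)$, the connection (2.7) has $\Gamma^{t}_{.~t r}=1/r$ as its only non-vanishing component, the torsion (2.8) is non-zero, and the AP-condition (2.1) holds identically:
\begin{verbatim}
import sympy as sp

r, t = sp.symbols('r t', positive=True)
X = (r, t)

# covariant tetrad h^i_mu (rows: vector index i, columns: coordinate mu)
h = sp.Matrix([[1, 0],
               [0, r]])
Hc = (h.T).inv()                 # contravariant components h_i^mu

g = sp.simplify(h.T * h)         # metric (2.5): diag(1, r**2)

# connection (2.7): Gamma^a_{mn} = h_i^a h^i_{m,n}
Gamma = [[[sp.simplify(sum(Hc[i, a] * sp.diff(h[i, m], X[n])
                           for i in range(2)))
           for n in range(2)] for m in range(2)] for a in range(2)]

# torsion (2.8): the skew part; here Lambda^t_{r t} = -1/r is non-zero
Lam = [[[Gamma[a][m][n] - Gamma[a][n][m] for n in range(2)]
        for m in range(2)] for a in range(2)]

# AP-condition (2.1): h_i^mu_{,nu} + h_i^alpha Gamma^mu_{alpha nu} = 0
AP = all(sp.simplify(sp.diff(Hc[i, m], X[n])
                     + sum(Hc[i, a] * Gamma[m][a][n] for a in range(2))) == 0
         for i in range(2) for m in range(2) for n in range(2))

print(g, Lam[1][0][1], AP)       # Matrix([[1, 0], [0, r**2]]) -1/r True
\end{verbatim}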
The AP-geometry, in its conventional form, has two main problems concerning
applications: The first is the identical vanishing of its
curvature tensor and the second is that its path equations (2.13)
do not represent any known physical trajectory. These problems
will be treated in Section 4.
Many authors believe that, because of (2.12), the AP-space is flat.
It is shown [6] that AP-spaces are, in general, curved. The
problem of curvature in AP-spaces is a problem of definition. In
any affinely connected space there is, at least, two methods for
defining the curvature tensor. The first method is by replacing
Christoffel symbol, in the definition of Riemann-Christoffel
tensor, by the affine connection defined in the space concerned.
The second method is to define curvature as a measure of
non-commutation of tensor differentiation using the affine
connection of the space. It is known that, the two methods give
identical results in case of Riemannian space. But the situation
is different for spaces with non-symmetric connections. The two
methods are not identical.
The application of the second method
in non-symmetric geometries implies a problem. That is, we usually
use an arbitrary vector in order to study the non-commutation of
tensor differentiation, and the resulting expression will not be
free from this vector. Fortunately, this problem is solved in
AP-spaces [7]. We can replace the arbitrary vector by the vectors
defining the structure of AP-spaces. In this case we can define the
following curvature tensors (I am going to call these tensors
\textit{non-conventional curvature tensors}):
$$ \h{i}^{\stackrel{\mu}{+}} _{\ | {\nu \sigma}} -
\h{i}^{\stackrel{\mu}{+}}_ {\ | {\sigma \nu}} = \h{i}^\alpha B^\mu_{. \alpha
\nu \sigma},\eqno{(2.14)} $$ $$ \h{i}^{\stackrel{\mu}{-}}_ {\ | {\nu \sigma}}
- \h{i}^{\stackrel{\mu}{-}}_{\ | {\sigma \nu}} = \h{i}^\alpha L^\mu_{.
\alpha \nu \sigma},\eqno{(2.15)} $$ $$ \h{i}^{\stackrel{\mu}{0}}_ {\ | {\nu
\sigma}} - \h{i}^{\stackrel{\mu}{0}} _{\ | {\sigma \nu}} = \h{i}^\alpha
N^\mu_{. \alpha \nu \sigma},\eqno{(2.16)} $$
$$
\h{i}^{\mu}_{\ ; \nu \sigma} - \h{i}^{\mu}_{\ ; \sigma \nu} = \h{i}^\alpha
R^\mu_{. \alpha \nu \sigma} ,\eqno{(2.17)} $$ here we use the stroke with a (+)
sign and with a (-) sign to characterize absolute differentiation using
the connection (2.7) and its dual, respectively. We use the stroke
without signs to characterize absolute differentiation using the
symmetric part of (2.7), while the semicolon is used to characterize
covariant differentiation using the Christoffel symbols. The
non-conventional curvature tensors defined by (2.14), (2.15), (2.16)
and (2.17) are in general non-vanishing except the first one, which
vanishes (because of the AP-condition (2.1)).
The non-conventional curvature tensors defined above can be written
explicitly in terms of torsion, or contortion via (2.10), i.e. $$
B^{\alpha}_{. \mu \nu \sigma} = R^{\alpha}_{. \mu \nu \sigma} +
Q^{\alpha}_{. \mu \nu \sigma} \equiv 0 ,\eqno{(2.18)} $$ $$
L^{\alpha}_{. \mu \nu \sigma} \ {\mathop{=}\limits^{\rm def}}\ \Lambda^{\stackrel{\alpha}{+}}_{. {\stackrel{\mu}{+}} {\stackrel{\nu}{-}}
| \sigma} - \Lambda^{\stackrel{\alpha}{+}}_{. {\stackrel{\mu}{+}} {\stackrel{\sigma}{-}}
| \nu} + \Lambda^{\beta}_{. \mu \nu} \Lambda^{\alpha}_{. \sigma
\beta} - \Lambda^{\beta}_{. \mu \sigma} \Lambda^{\alpha}_{. \nu
\beta},
\eqno{(2.19)}$$
$$
N^{\alpha}_{. \mu \nu \sigma} \ {\mathop{=}\limits^{\rm def}}\ \Lambda^{\alpha}_{. \mu \nu |
\sigma } - \Lambda^{\alpha}_{. \mu \sigma | \nu } +
\Lambda^{\beta}_{. \mu \nu}\Lambda^{\alpha}_{. \beta \sigma} -
\Lambda^{\beta}_{. \mu \sigma}\Lambda^{\alpha}_{. \nu \beta},
\eqno{(2.20)}
$$
$$ Q^{\alpha}_{. \mu \nu \sigma} \ {\mathop{=}\limits^{\rm def}}\
\gamma^{\stackrel{\alpha}{+}}_{. {\stackrel{\mu}{+}}
{\stackrel{\nu}{+}}
| \sigma} - \gamma^{\stackrel{\alpha}{+}}_{. {\stackrel{\mu}{+}} {\stackrel{\sigma}{-}}
| \nu} + \gamma^{\beta}_{. \mu \sigma} \gamma^{\alpha}_{. \beta
\nu} - \gamma^{\beta}_{. \mu \nu} \gamma^{\alpha}_{. \beta \sigma}.
\eqno{(2.21)}
$$
It is clear that the vanishing of the torsion leads to the
vanishing of (2.19) and (2.20). It also leads, via (2.10), to the
vanishing of (2.21) and, consequently, via (2.18), to the vanishing of $
R^{\alpha}_{. \mu \nu \sigma}$. This represents another
problem facing field theories written in AP-spaces: such theories
will not have a GR limit when the torsion is required to vanish.
\subsection{Quantum Properties of the AP-Geometry}
Recently [8], using the affine connections defined in the AP-space
to generalize the Bazanski Lagrangian (1.4), three path equations
were discovered in the AP-geometry. These equations can be
written in the form:
$$ {\frac{dU^\mu}{dS^-}} + \{^{\mu}_{\alpha\beta}\} U^\alpha U^\beta = 0,
\eqno{(2.22)}$$
$$ {\frac{dW^\mu}{dS^0}} + \{^{\mu}_{\alpha\beta}\} W^\alpha W^\beta = -
{\frac{1}{2}} \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .}~~ W^\alpha W^\beta,
\eqno{(2.23)} $$ $$ {\frac{dV^\mu}{dS^+}} + \{^{\mu}_{\alpha\beta}\} V^\alpha
V^\beta = - \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~V^\alpha V^\beta,
\eqno{(2.24)} $$ where $S^{-}$, $S^{0}$ and $S^{+}$ are the
parameters varying along the corresponding curves whose tangents
are $U^\alpha$, $W^\alpha$ and $V^\alpha$, respectively. We
can write this new set of path equations, obtained in this
geometry, in the following form:
$$
{\frac{dB^\mu}{d\hat S}} + a~ \cs{\alpha}{\beta}{\mu}\ B^{\alpha}
B^{\beta} = - b~ \Lambda_{(\alpha \beta)}^{~.~.~~\mu}~~B^{\alpha}
B^{\beta}, \eqno{(2.25)}
$$
where $a$, $b$ are the numerical coefficients of the Christoffel
symbol term and of the torsion term, respectively. Thus we can
construct the following table.
\begin{center}
Table II: Numerical Coefficients of The Path Equation in AP-Geometry \\
\begin{tabular}{|c|c|c|} \hline
& & \\
Affine Connection Used &Coefficient $a$ &Coefficient $b$ \\
& & \\ \hline
& & \\
${\hat\Gamma}^{\alpha}_{.~\mu \nu}$ & 1 & 0 \\
& & \\ \hline
& & \\
${\Gamma}^{\alpha}_{.~(\mu\nu)}$ & 1 & $\frac{1}{2}$ \\
& & \\ \hline
& & \\
${\Gamma}^{\alpha}_{.~\mu \nu}$ &1 &1 \\
& & \\ \hline
\end{tabular}
\end{center}
The first column in this table contains the affine connections
used to generalize the Bazanski Lagrangian.
The set of equations (2.22), (2.23) and (2.24) possesses some interesting features: \\
1. It gives the effect of the torsion on the curves (paths) of the
geometry. \\ 2. This set is irreducible, i.e. none of these
equations can be reduced to another unless the torsion vanishes.
Such a vanishing would lead to a flat space (in view of the definitions
(2.18)-(2.21)), which is not
suitable for applications.\\
3. The coefficient of the torsion term jumps by a step of one-half
from one equation to the next (as is clear from
Table II).\\
The last feature tempts one to conclude that:
$\underline{ "paths\ in\ AP-geometry\ are\ naturally\
quantized"}$.
\subsection{Einstein Non-symmetric Geometry}
Einstein generalized the Riemannian geometry by dropping the
symmetry conditions imposed on the metric tensor and on the affine
connection [3]. In this geometry the non-symmetric metric tensor
is given by:
$$g_{\mu \nu} {\stackrel{def.}{=}}~ h_{\mu \nu} + f_{\mu \nu} ,\eqno{(2.26)}
$$
where, $$
h_{\mu \nu} {\stackrel{def.}{=}}~ \frac{1}{2}( g_{\mu \nu} + g_{\nu \mu}) ,
$$
$$
f_{\mu \nu} {\stackrel{def.}{=}}~ \frac{1}{2}(g_{\mu \nu} - g_{\nu \mu}) ,
$$
i.e. $h_{\mu \nu}$ and $f_{\mu \nu}$ are the symmetric and antisymmetric parts of $g_{\mu \nu}$, respectively.
Since the connection of the geometry, $U^{\alpha}_{.~\mu \nu}$, is
assumed to be non-symmetric, one can define the following three types of
covariant derivatives:
$$
A^{\mu}_{+||~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}U^\mu _{.\alpha \nu} ,\eqno{(2.27)}
$$
$$
A^{\mu}_{-||~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}U^\mu_{.\nu \alpha} ,\eqno{(2.28)}
$$
$$
A^{\mu}_{0||~ \nu} {\stackrel{def.}{=}}~ A^{\mu}_{~, \nu} +
A^{\alpha}U^\mu _{.(\alpha \nu)} ,\eqno{(2.29)}
$$
where $A^\mu$ is any arbitrary vector. Now the connection
$U^{\alpha}_{.~\mu \nu}$ is defined such that [3],
$$
g_{\stackrel{\mu}{+} \stackrel{\nu}{-} ||\sigma} = 0 ,\eqno{(2.30)}
$$
$$i.e.~~~ g_{\mu \nu ,\sigma} = g_{\mu \alpha} U^{\alpha}_{.~~\sigma
\nu} + g_{ \alpha \nu} U^{\alpha}_{.~~\mu \sigma}. \eqno{(2.31)}
$$
The non-symmetric connection can be written in the following
form:
$$U^{\alpha}_{.~~\mu \nu} {\stackrel{def.}{=}}~ U^{\alpha}_{.~(\mu
\nu )} + U^{\alpha}_{.~~[\mu \nu]} = \cs{\mu}{\nu}{\alpha}\ +
K^{\alpha}_{.~~\mu \nu}, \eqno{(2.32)}
$$
where,
$$U^{\alpha}_{.~(\mu \nu)} {\stackrel{def.}{=}}~
\frac{1}{2}(~U^{\alpha}_{.~~\mu \nu } + U^{\alpha}_{.~~\nu \mu })
, \eqno{(2.33)}
$$
$$U^{\alpha}_{.~~[\mu \nu]} {\stackrel{def.}{=}}~
\frac{1}{2}(~U^{\alpha}_{.~\mu \nu } - U^{\alpha}_{.~ \nu \mu}) =
K^{\alpha}_{.~[\mu \nu]} = \frac{1}{2} S^{\alpha}_{.~\mu \nu} ,\eqno{(2.34)}
$$
where $S^{\alpha}_{.~\mu \nu}$ is a third order tensor
representing the torsion of the Einstein non-symmetric (ENS)
geometry.
The contravariant metric tensor $g^{\mu \nu}$ is defined such that:
$$
g^{\mu \alpha}g_{\nu \alpha} = g^{\alpha \mu}g_{\alpha \nu} =
\delta^{\mu}_{\nu} .\eqno{(2.35)}
$$
The tensor derivatives {(2.27), (2.28) and (2.29)} are connected to
the parameter derivatives by the relations:
$$
\frac{\nabla A^\mu}{\nabla \tau^{-}} = A^{\mu}_{-||~
\alpha}\tilde{J}^\alpha, \eqno{(2.36)}
$$
$$\frac{\nabla A^\mu}{\nabla \tau^{0}} = A^{\mu}_{0||~
\alpha}\tilde{W}^\alpha, \eqno{(2.37)}
$$
$$
\frac{\nabla A^\mu}{\nabla \tau^{+}} = A^{\mu}_{+||~
\alpha}\tilde{V}^\alpha ,\eqno{(2.38)}
$$
where $\tilde{J}^{\mu}$, $\tilde{W}^{\mu}$ and $\tilde{V}^{\mu}$
are tangents to the paths whose evolution parameters are
$\tau^{-}$, $\tau^{0}$ and $\tau^{+}$, respectively.\\
\subsection{Quantum Properties of ENS-Geometry}
Applying the Bazanski approach
to the Lagrangian functions:
$$
\Xi^{-} = g_{\mu \alpha}\tilde{J}^{\mu}\frac{\nabla
\Psi^\alpha}{\nabla\tau^{-}} ,\eqno{(2.39)}
$$
$$
\Xi^{0} = g_{\mu \alpha}\tilde{W}^{\mu}\frac{\nabla
\Theta^\alpha}{\nabla\tau^{0}} ,\eqno{(2.40)}
$$
$$\Xi^{+} = g_{\mu \alpha}\tilde{V}^{\mu}\frac{\nabla
\Phi^\alpha}{\nabla\tau^{+}} ,\eqno{(2.41)}
$$
where $\Psi^\alpha, \Theta^\alpha$ and $\Phi^\alpha$ are the
deviation vectors, we get [9] the following set of path equations,
respectively:
$${\frac{d\tilde{J}^\alpha}{d\tau^{-}}} + \cs{\mu}{\nu}{\alpha}\
\tilde{J}^{\mu}\tilde{J}^{\nu} =
-K^{\alpha}_{.~\mu\nu}\tilde{J}^{\mu}\tilde{J}^{\nu} ,\eqno{(2.42)}
$$
$$
{\frac{d\tilde{W}^{\alpha}}{d\tau^{0}}} + \cs{\mu}{\nu}{\alpha}\
\tilde{W}^{\mu}\tilde{W}^{\nu} = -\frac{1}{2} g^{\alpha
\sigma}g_{\mu \rho}S^{\rho}_{.~\nu \sigma}
\tilde{W}^{\mu}\tilde{W}^{\nu} -
K^{\alpha}_{.~\mu\nu}\tilde{W}^{\mu}\tilde{W}^{\nu} ,\eqno{(2.43)}
$$
$$
{\frac{d\tilde{V}^\alpha}{d\tau^{+}}} + \cs{\mu}{\nu}{\alpha}\
\tilde{V}^{\mu}\tilde{V}^{\nu} =
- g^{\alpha \sigma}g_{\mu \rho}S^{\rho}_{.~\nu \sigma} \tilde{V}^{\mu}\tilde{V}^{\nu} - K^{\alpha}_{.~\mu\nu}\tilde{V}^{\mu}\tilde{V}^{\nu}.\eqno{(2.44)}
$$
This set of equations can be written in the following general
form:
$${
\frac{dC^\alpha}{d\tau} +a~ \cs{\mu}{\nu}{\alpha}\ C^{\mu}C^{\nu}
= - b~ g^{\alpha \sigma} g_{\mu \rho} S^{\rho}_{.~\nu \sigma}
C^{\mu}C^{\nu} - c~ K^{\alpha}_{.~\mu\nu} C^{\mu} C^{\nu}}.\eqno{(2.45)}
$$
where $a$, $b$ and $c$ are the numerical coefficients of the
Christoffel symbol term, the torsion term and the K-term, respectively. Thus, we can
construct the following table:
\begin{center}
Table III: Coefficients of The Path Equations in ENS-Geometry \\
\begin{tabular}{|c|c|c|c|} \hline
Affine Connection used &Coefficient $a$ &Coefficient $b$ &Coefficient $c$ \\
\hline
& & & \\
${\hat U}^{\alpha}_{.~\mu \nu}$ & 1 & 0 & 1 \\
& & & \\ \hline
& & & \\
$U^{\alpha}_{.~(\mu\nu)}$ & 1 & $\frac{1}{2}$ &1 \\
& & & \\ \hline
& & & \\
$U^{\alpha}_{.~\mu \nu}$
&1 & 1 &1 \\
& & & \\ \hline
\end{tabular}
\end{center}
The first column in this table contains the affine connections
used to generalize the Bazanski Lagrangian.
From this table, it is clear that the jumping coefficient of the
torsion term (column 3) takes the same values as those obtained in the case
of the AP-geometry (Table II, column 3). So, one can draw a
conclusion similar to that of Subsection 2.2:
$\underline{ "Paths\ in\ ENS-geometry\ are\ naturally\
quantized"}$.
\section{Features of Quantum Roots}
(i) We consider the jump of the coefficient of the torsion term in
the path equations of Subsections 2.2 and 2.4, by a step of
one-half, as quantum roots emerging in non-symmetric geometries.
Such path equations are usually used to represent trajectories of
test particles, in the context of the scheme of geometerization of
physics. So, if such trajectories do exist in nature, then one can
conclude that space-time is quantized and the geometry describing
nature should be non-symmetric.
(ii) The quantum properties shown in Tables II and III are
properties built into the examined geometries. In other words, these
properties are intrinsic properties characterizing the type of
geometry used. The properties mentioned above are not consequences
of applying any known quantization scheme.
(iii) In the scheme performed to discover these properties, certain
Lagrangian functions are used. Such functions contain, in their
structure, covariant derivatives, in which certain affine
connections are used. The quantum properties discovered are
closely related to such connections. It is well known that, in any
non-symmetric geometry, one can define more affine connections by
adding any third order tensor to any affine connection already
defined in the geometry. If one does so, in the geometries examined
in Section 2, one does not get any values (for the coefficients
given in Tables II and III) different from those listed in the two
tables. As a check, one can try the connection,
$$ \Omega^{\alpha}_{. \mu\nu}
\ {\mathop{=}\limits^{\rm def}}\ \{^{\alpha}_{\mu\nu}\} + \Lambda^{\alpha}_{. \mu \nu},
\eqno{(3.1)} $$ defined in the AP-geometry.
(iv) As stated above, the quantum properties discovered are
closely connected to the affine connection or, more strictly, to
its skew part, the torsion tensor. The coefficients of the Christoffel
symbol term (the second column of Tables II and III) are the same
for all paths. Also, the coefficient of the symmetric part of the
tensor $K^{\alpha}_{.~\mu \nu}$ has no such jumping property (last column of
Table III).
\section{Parameterization and Diffusion of Quantum Roots}
It is now obvious that the quantum roots discovered in non-symmetric
geometries depend mainly on the existence of non-symmetric
connections admitted by such geometries. Furthermore, these roots
first appeared, explicitly, in the path equations and not in any other
geometric entity.
In order to extend these roots to the whole geometry, we are going
to reconstruct the geometry using a general affine connection. This
connection is defined as a linear combination of the connections,
already, admitted by the geometry. The combination is carried out
using certain parameters. The general expression obtained may not
represent an affine connection, in a conventional sense. In other
words, it might not be transformed as an affine connection, under
the group of general coordinate transformation, unless certain
conditions are imposed on the values of the
parameters used. The version of the geometry obtained
in this way is a {\it parameterized} version.
In the case of the AP-geometry, using the affine connections mentioned in
Subsection 2.1 and carrying out the parameterization scheme mentioned above,
the following results are obtained [10]:
Combining linearly the above mentioned connections we get, after some reductions,
the following parameterized expression, $$ \nabla^{\mu}_{. \alpha
\beta } = (a+b) \{^{\mu}_{\alpha\beta}\} + b \gamma^{\mu}_{.
\alpha \beta } \eqno{(4.1a)} $$ where $a$ and $b$ are parameters.
As a consequence of imposing a metricity condition, using (4.1a),
we get
$$ a + b = 1. \eqno{(4.1b)}$$
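A quick check of the origin of (4.1b) can be sketched as follows (using the standard antisymmetry of the contortion in its first pair of lowered indices, $\gamma_{\mu \nu \sigma} = - \gamma_{\nu \mu \sigma}$). Evaluating the covariant derivative of the metric built with (4.1a), and using the metricity of the Christoffel symbols, one finds
$$ \nabla_{\sigma}\, g_{\mu \nu} = \big(1-(a+b)\big)\Big( \{^{\lambda}_{\mu\sigma}\}\, g_{\lambda \nu} + \{^{\lambda}_{\nu\sigma}\}\, g_{\mu \lambda} \Big) - b\,\big( \gamma_{\nu \mu \sigma} + \gamma_{\mu \nu \sigma} \big)\,. $$
The last bracket vanishes identically, so metricity holds if, and only if, $a + b = 1$. So, expression (4.1a) will reduce to,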
$$ \nabla^{\mu}_{. \alpha
\beta } = \{^{\mu}_{\alpha\beta}\} + b \gamma^{\mu}_{. \alpha
\beta },\eqno{(4.2)} $$ which is a general parameterized affine
connection. Using (4.2) to generalize the Bazanski Lagrangian
(1.4), we get
$$ {\frac{dZ^\mu}{d\tau}} + \{^{\mu}_{\alpha\beta}\} Z^\alpha
Z^\beta = - b~~ \Lambda^{~ ~ ~ ~ \mu}_{(\alpha \beta) .} ~~Z^\alpha Z^\beta,
\eqno{(4.3)} $$ where
${\tau}$ is a parameter varying along the path and ${Z^{\mu}}$ is the tangent to the path.
All curvature tensors defined in this parameterized version of the
geometry are non-vanishing. For example, if we redefine the curvature (2.12) using the
connection (4.2), we get [10] $$ {B^{\ast}}^{\alpha}_{.\mu \nu \sigma} = R^{\alpha}_{. \mu \nu \sigma} + b~~ \hat Q^{\alpha}_{. \mu \nu \sigma }, \eqno{(4.4)}$$
where $R^{\alpha}_{.\mu \nu \sigma}$ is the Riemann-Christoffel
curvature tensor and $\hat Q^{\alpha}_{.\mu \nu \sigma}$ is defined by,
$$
\hat Q^\alpha_{.\ \mu \nu \sigma} \ {\mathop{=}\limits^{\rm def}}\ {\gamma}^{\stackrel{\alpha}{+}}_{{.\
{\stackrel{\mu}{+}}{\stackrel{\nu}{+}}} | \sigma} -
{\gamma}^{\stackrel{\alpha}{+}}_{.\
{{\stackrel{\mu}{+}}{\stackrel{\sigma}{-}}} | \nu} + b~(
{\gamma}^{\beta}_{. \mu \sigma} {\ } {\gamma}^{\alpha}_{. \beta
\nu} {\ } - {\gamma}^{\beta}_{. \mu \nu} {\ } {\gamma}^{\alpha}_{.
\beta \sigma}). \eqno{(4.5)}
$$
This tensor is, in
general, non-vanishing, although the corresponding one (2.18) vanishes identically in the
old version of the geometry.
The torsion and the basic vector of the AP-geometry are also
parameterized and defined by [12],
$$
{{\Lambda}^{\ast}}^{\alpha}_{.\mu \nu}~ \ {\mathop{=}\limits^{\rm def}}\ ~ \nabla^{\alpha}_{.\mu \nu} -\nabla^{\alpha}_{.\nu \mu}~~=~~b~{\Lambda}^{\alpha}_{.\mu \nu}, \eqno{(4.6)}
$$
$$
{C}^{\ast}_{\mu}~~\ {\mathop{=}\limits^{\rm def}}\ ~~{\Lambda}^{\ast \alpha}_{.\mu
\alpha}~~=~~b~{\Lambda}^{\alpha}_{.\mu \alpha}. \eqno{(4.7)}
$$
The tangent of the new path (4.3) can be written in the form,
$$Z^{\mu}~~=~~U^{\mu}~~+~~b~{\zeta}^{\mu}, \eqno{(4.8)}
$$
where $U^{\mu}$ is the tangent vector of the geodesic of the metric and
the vector ${\zeta}^{\mu}$ represents a deviation from the geodesic.
The affine parameter $(\tau)$ varying along (4.3) can be related to
that varying along the geodesic $(s)$ by the relation [12],
$$s~=~~\tau ~(1~+~b~ U^{\mu}{\zeta}_{\mu}). \eqno{(4.9)}
$$
For physical reasons [11], the parameter $b$ is suggested to take the form
$$b~=~{\frac{n}{2}}~{\alpha} ~{\gamma},\eqno{(4.10)}
$$
where $n$ is a natural number, $\alpha$ is the fine structure
constant and $\gamma$ is a dimensionless parameter of order
unity. The presence of $\frac{n}{2}$ in the parameter $b$
preserves the jumping step that appeared in Table II. We are going to call
(4.3) the {\it "Quantum Path Equations"}.
The torsion term, on the R.H.S. of (4.3), is suggested [11] to
represent a type of interaction between the torsion of the
background gravitational field and the quantum spin of the moving
test particle, {\bf Spin-Gravity Interaction}. We are going to
take $n = 0, 1, 2, 3, ....$ for particles with spin $ 0,
\frac{1}{2}, 1 ,\frac{3}{2}, ....$, respectively. For slowly
rotating macroscopic objects, we are going to take $n = 0$.
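For orientation, with these spin assignments and $\gamma$ of order unity, the coupling (4.10) takes the values (a numerical sketch, using $\alpha \approx 1/137$):
$$ b \,=\, 0,\ \ \tfrac{1}{2}\,\alpha \gamma \approx 0.0036\,\gamma,\ \ \alpha \gamma \approx 0.0073\,\gamma,\ \ \tfrac{3}{2}\,\alpha \gamma \approx 0.011\,\gamma,\ \dots $$
for particles of spin $0, \frac{1}{2}, 1, \frac{3}{2}, \dots$, respectively.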
\section{Quantum Paths and Their Linearization}
The path equation (4.3) can be used as an equation of motion for
any field theory, constructed in the AP-geometry, provided that
the theory has good Newtonian limits. In such theories, (e.g. [7],
[13], [14]), the tetrad vectors $\h{i}_{\mu}$ are considered as
field variables. So, if we write,
$$
\h{i}_{\mu} = \delta_{_i \mu} + \epsilon h_{_i \mu}, \eqno{(5.1)}
$$
where $\epsilon$ is a small parameter, $\delta_{_i \mu}$ is the
Kronecker delta and $h_{_i \mu}$ represents deviations from flat
space, then the weak field condition can be fulfilled by
neglecting quantities of the second and higher orders in
$\epsilon$ in the expanded field quantities. For a static field,
we are going to assume the vanishing of the time
derivatives of the field variables.
The vector components $Z^\mu$ ($ {\ {\mathop{=}\limits^{\rm def}}\ \tder{x^{\mu}}{\tau}}$) will have the
values,
$$
Z^1 \approx Z^2 \approx Z^3 \approx \varepsilon {\ }{\ }{\ }
, Z^0 \approx 1 - \varepsilon , \eqno{(5.2)}
$$
where $\varepsilon$ is a parameter of the order $( \frac{v}{c})$.
If we want to add the condition of a slowly moving particle to the
previous conditions, we should neglect quantities of second and
higher orders in the parameter $\varepsilon$. Thus, in expanding
the quantities of the path equation (4.3), we are going to neglect
quantities of orders $\epsilon^2, {\ }\varepsilon^2,{\ } \epsilon
\varepsilon$ and higher; time derivatives of the field
variables are also neglected. To the first order of the
parameters, the only field quantities that contribute to the
path equation (4.3) are given by [11],
$$
\Lambda_{00}^{.\ .\ i} = - \epsilon h_{ 0 0, i} ,{\ }{\ }{\ }{\ } (i= 1,2,3)
\eqno{(5.3)}
$$
$$
\{^{\ i}_{0{\ }0}\} = \frac{\epsilon}{2} Y_{00,i} ,{\ }{\ }{\ }{\ } (i= 1,2,3)
\eqno{(5.4)}
$$
where $Y_{\mu \nu}$ is defined by,
$$
g_{\mu \nu} = \eta_{\mu \nu} + \epsilon {\ } Y_{\mu \nu} {\ }{\ }{\ },
$$
$g_{\mu \nu}$ is given by (2.5) and $\eta_{\mu \nu}$ is the Minkowski metric tensor. Substituting from (5.3) and (5.4) into (4.3),
we get, after some manipulations:
$$
\frac{d^2x^i}{d{\tau}^2} = -\frac{1}{2}{\ }\epsilon{\ }(1 -\frac{n}{2}\alpha \gamma)
Y_{00,i}{\ }Z^0{\ }Z^0.
\eqno{(5.5)}
$$
In the present case, the metric of the Riemannian space
associated with the AP-space can be written in the form [11],
$$
(\frac{d\tau}{dt})^2 = c^2{\ }(1 + \epsilon {\ }Y_{00}). \eqno{(5.6)}
$$
Substituting from (5.6) into (5.5) we get after some manipulations:
$$
\frac{d^2x^i}{dt^2} = - \frac{c^2}{2}{\ }\epsilon{\ }(1 -\frac{n}{2}\alpha \gamma)
{\ }Y_{00,i}{\ }{\ }{\ }{\ }(i=1,2,3)
$$
which can be written in the form,
$$
\frac{d^2x^i}{dt^2} = - \pder{\Phi_s}{x^i} {\ }{\ }{\ }{\ }(i=1,2,3) {\ }{\ },
\eqno{(5.7)}
$$
where
$$
\Phi_s \ {\mathop{=}\limits^{\rm def}}\ \frac{c^2}{2}{\ }\epsilon{\ }(1 -\frac{n}{2}\alpha \gamma)
{\ }Y_{00}. \eqno{(5.8)}
$$
Equation (5.7) has the same form as Newton's equation of motion of
a particle in a gravitational field having the potential $\Phi_s$
given by (5.8), which differs from the classical Newtonian
potential.
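For completeness, the manipulations leading from (5.5) and (5.6) to (5.7) amount to the usual change of the evolution parameter (a sketch; the term involving $d^{2}\tau/dt^{2}$ is of higher order in the small parameters and has been dropped):
$$ \frac{d^2x^i}{dt^2} \,=\, \Big(\frac{d\tau}{dt}\Big)^{2} \frac{d^2x^i}{d{\tau}^2} \,+\, \frac{d^{2}\tau}{dt^{2}}\,\frac{dx^i}{d\tau} \,\approx\, c^{2}\,\frac{d^2x^i}{d{\tau}^2}\,, $$
since, by (5.6), $(d\tau/dt)^{2} \approx c^{2}$ to zeroth order in $\epsilon$, while $Z^{0} \approx 1$ by (5.2).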
In the case of motion of macroscopic particles $(n=0)$, we get from (5.8):
$$
\Phi_s = \frac{c^2}{2}{\ }\epsilon{\ }Y_{00} = \Phi_N \eqno{(5.9)}
$$
where $\Phi_N$ is the Newtonian gravitational potential obtained from a similar
treatment of the geodesic equation. Thus (5.8) can be written in the form,
$$
\Phi_s = (1 - {\frac{n}{2}} \alpha \gamma) \Phi_N. \eqno{(5.10)}
$$
This last expression shows that the gravitational potential felt
by the spinning particle is less than that felt by a spinless
particle or a macroscopic test particle. In other words, the
Newtonian potential is reduced, for spinning particles, by a factor
$(1 - {\frac{n}{2}} \alpha \gamma)$.
\section{Confirmation and Applications of the Quantum Paths}
In the context of geometerization of physics, path equations of an
appropriate geometry are used to represent trajectories of test
particles. It appears clearly, from the previous section, that in
the case of a static weak field and a slowly moving test particle,
we get Newtonian motion, provided that the particle is spinless. In
the following subsections, we are going to use the quantum path
equation (4.3), and its linearization consequences, to study the
motion of spinning test particles in gravitational fields.
\subsection{The COW-Experiment}
Colella, Overhauser and Werner suggested and carried out
experiments concerning the quantum interference of thermal
neutrons [15], [16], [17]. This type of experiment is known, in
the literature, as the COW-experiment. The aim of the experiment
is to test the effect of the Earth's gravitational field on the
phase difference between two beams of thermal neutrons, one of which is
closer to the Earth's surface than the other. The second
version of the COW experiment was carried out by Werner et
al. [18]. This version is characterized by a high accuracy in the
measurements of the phase shift (1 part in 1000). The measurements
show that the experimental results are lower than the theoretical
calculations (using Newtonian gravity) by about 8 parts in
1000. This is a real discrepancy, which may indicate the presence
of a type of non-Newtonian effect.
Now one can use equation (4.3) to give an interpretation for the discrepancy in
the COW-experiment. In fact we are going to use the consequence of equation (4.3)
given by equation (5.10)
since the following conditions, under which (5.10) is derived, hold:\\
-Thermal neutrons can be considered as $\underline{slowly}$ moving test particles, and \\
-the Earth's gravitational field can be considered as $\underline{weak}$ and
$\underline{static}$.
The phase difference $(\Delta \Omega)$ between the two beams of
neutrons in the COW-experiment is given by (cf. [19]),
\begin{equation}
{(\Delta \Omega)_N = - {\frac{1}{\hbar}} \int^{ABD}_{ACD} \Phi_N dt ,}
\end{equation}
where ABD and ACD are the trajectories of the upper and lower
beams of neutrons in the interferometer, respectively. The index
$N$ is used to indicate that (6.1) is obtained using the Newtonian
potential $\Phi_N$, and $\hbar$ is the reduced Planck constant. Since
neutrons are spinning particles they will be affected by the
torsion of space-time, as suggested. Thus we replace $\Phi_N$ in
(6.1) by $\Phi_S$ given by (5.10). In this case (6.1) will take
the form [20]:
\begin{equation}
{(\Delta \Omega)_S = - (1 - {\frac{n}{2}} \gamma \alpha) {\frac{1}{\hbar}}
\int^{ABD}_{ACD} \Phi_N dt ,}
\end{equation}
i.e.,
\begin{equation}
{(\Delta \Omega)_S = (\Delta \Omega)_N - {\frac{n}{2}} \gamma \alpha
(\Delta \Omega)_N .}
\end{equation}
The index $S$ is used to indicate that (6.2) is obtained using the
potential $\Phi_S$. Taking the value $\alpha =
{\frac{1}{137}}$ and $n = 1$ for spin-${\frac{1}{2}}$
particles (neutrons), we easily get the following results [20]:\\
(1) the theoretical value of the COW-experiment will decrease by about
4 parts in 1000, if we take $\gamma = 1$, \\
(2) the theoretical value will coincide with the experimental one if we take $\gamma = 2$.
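Both statements follow from simple arithmetic with (6.3): the fractional correction to the phase shift is
$$ \frac{n}{2}\,\gamma \alpha\,\Big|_{n=1,\,\gamma=1} = \frac{1}{2}\times\frac{1}{137} \approx \frac{4}{1000}\,, \qquad \frac{n}{2}\,\gamma \alpha\,\Big|_{n=1,\,\gamma=2} = \frac{1}{137} \approx \frac{8}{1000}\,, $$
the latter value matching the reported discrepancy of about 8 parts in 1000.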
\subsection{SN1987A}
Carriers of astrophysical information are massless spinning
particles: photons, neutrinos and, expectedly,
gravitons. These three types of carriers are assumed to be
emitted in supernova events. On February $23^{rd}$, 1987, a
supernova in the Large Magellanic Cloud was observed (cf. [21]).
The arrival of neutrinos at the
Kamiokande detectors was recorded on February $23^{rd}$, 1987, at
$7^{h} 35^{m}$ UT, while the arrival of photons was recorded
on the same day at $10^{h} 40^{m}$ UT. The bars of the
gravitational wave antennae in Rome and Maryland recorded relatively large
pulses 1.2 seconds earlier than the neutrinos (cf. [22], [23]).
Although the three types of particles have different spins, general relativity
assumes that they follow the same trajectory (null-geodesic of the metric), since
they are all massless.
In the context of general relativity, it is well known that the
time interval required for a massless particle to traverse a given
distance is longer in the presence of a gravitational field having
the potential $\Phi(r)$. The time delay is given by (cf. [24])
\begin{equation}
{\Delta t_{GR} = const.~~ \int_e^a \Phi(r) dt}
\end{equation}
where $e$ and $a$ are the emission and arrival times of the
carrier, respectively.
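Note that replacing $\Phi$ by $\Phi_{s}$ of (5.10) in (6.4) simply rescales the delay of each carrier. Hence, for two massless carriers with quantum numbers $n_{1}$ and $n_{2}$, the delays and their difference read (a direct consequence of (5.10) and (6.4)):
$$ \Delta t_{s} \,=\, \Big(1-\frac{n}{2}\,\alpha \gamma\Big)\,\Delta t_{GR}\,, \qquad \Delta t_{1} - \Delta t_{2} \,=\, \frac{(n_{2}-n_{1})}{2}\,\alpha \gamma\,\Delta t_{GR}\,. $$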
In calculations of SN1987A's time delay (cf. [24], [25], [26]), $\Phi(r)$ is taken
to be the Newtonian potential $\Phi_{N}$ (spin independent). In
this case we can construct a spin-independent model for the
emission times of the carriers. If we assume instead that $\Phi(r)$ is the
spin-dependent gravitational potential $\Phi_{s}$ (5.10), we
get the spin-dependent model. The results of these two models [27]
are summarized in Table IV.
\\
\begin{center}
{Table IV: Emission Times Given By The Models}
\begin{tabular}{|c|c|c|} \hline
Particles Emitted & Spin Independent Model & Spin
Dependent Model
\\ (Cause)&(Null-Geodesic) &(Quantum Path) \\
\hline & & \\ Neutrino (core collapse) & 0.0 & 0.0 \\
\hline & & \\ Photons (maximum brightness) & $ +3^h 5^m$ &
$+15^h 18^m$ \\
\hline
& & \\ Gravitons (?) &$-1^s.2$
&$+36^h 28^m$ \\
\hline
\end{tabular}
\end{center}
From Table IV, we can conclude that the two models assume two
different scenarios for the emission of carriers of astrophysical
information. {\it The spin-independent model} indicates
that neutrinos were emitted due to core collapse,
accompanied by gravitons emitted as a result of a sudden change in the
space-time symmetry, probably due to a kick given to the newly born neutron star.
About three hours later, photons were emitted as a result of the
maximum brightness of the envelope. {\it The spin-dependent
model} shows that neutrinos were emitted due to a core collapse
preserving the sphericity of the core. After 15 hours, photons were
emitted due to the maximum brightness of the envelope, in agreement
with SN theories; then, 21 hours later, the envelope exploded
asymmetrically, producing a sudden change of space-time symmetry
which caused the emission of gravitational waves. It can be seen
that {\it the spin-dependent model} is preferable to {\it
the spin-independent model}.
\subsection{The Cosmological Parameters}
Cosmological information is usually carried by, and extracted
from, massless spinning particles, the {\it "carriers of cosmological
information"}. The photon (spin-1 particle) is a good candidate
representing one type of these carriers. Recently, the neutrino
(spin-$\frac{1}{2}$ particle) entered the playground as another
type. We expect that, in the near future, a third type of
carrier, the graviton (spin-2 particle), will be used for
extracting cosmological information. Two factors affect the
properties of these carriers. The first is the source of the
carrier. The second is the trajectory of the carrier, in
the cosmic space, from its source to the receiver. The first
factor determines the information carried, which reflects the
properties of the source. The second factor represents the impact
of the cosmic space-time on the properties of the carrier. So, the
information carried by these particles contains a part connected
to their sources and another part related to the space-time
through which these particles have travelled.
Cosmological parameters are quantities extracted from the
information carried by the above mentioned particles.
Consequently, the values of such parameters are certainly affected
by the second factor. In the present work, we are going to explore
the impact of this factor on these parameters.
It is well known that the red-shift of spectral lines coming
from distant objects plays an important role in measuring the
cosmological parameters. Theoretical calculations of the
red-shift, in the context of GR, treat it as a metric phenomenon,
since the metric of space-time is the first integral of the
geodesic equation. But, if the trajectories of the test particles, the
carriers, are spin-dependent, then the red-shift of spectral lines
is no longer a metric phenomenon. In this case one should look for
an alternative scheme for calculating this quantity.
Kermack, McCrea and Whittaker
[28] developed two theorems on null-geodesics which were applied
to get the standard red-shift of relativistic cosmology, using the following formula,
$$ \frac{\lambda_{o}}{\lambda_{1}}~~= ~~\frac {^{1}\eta^{\mu}\rho_{\mu}}
{^{0}\eta^{\mu}\varpi_{\mu}}, \eqno{(6.5)}$$
where $^{1}\eta^{\mu}$ is the transport vector along the
null-geodesic $\Gamma$ connecting two observers A and B, evaluated
at A, $^{0}\eta^{\mu}$ is the transport vector evaluated at B,
$\rho^{\mu}$ is the unit tangent along the trajectory of A,
$\varpi^{\mu}$ is the unit tangent along the trajectory of B,
$\lambda_{1}$ is the wave length of the spectral line as measured
at A, $\lambda_{o}$ is the wave length of the spectral line as
measured at B, and $\Gamma$ represents the trajectory of a
massless particle from A (source) to B (receiver). If the universe
is expanding then $\lambda_{o} > \lambda_{1}$. It can be shown
that the two theorems, mentioned above, are applicable to any
null-path. So, they can be used for massless spinning particles
following the trajectory (4.3).
In order to evaluate the red-shift using (6.5), one has first to know
the values of the vectors appearing in this formula. Such vectors are
obtained as solutions of the spin-dependent path equation (4.3).
Robertson [29] constructed two geometric AP-structures for
cosmological applications. Using one of these structures, and
performing the necessary calculations, we get [30] $$
\frac{\lambda_{o}}{\lambda_{1}}~=~\Big(\frac{R_{o}}{R_{1}}\Big)^{(1-\frac{n}{2}
\alpha \gamma)}. \eqno{(6.6)}$$ Now, we define the spin-dependent
scale factor as,
$${R}^{\ast}
= R^{(1-\frac{n}{2} \alpha \gamma)}. \eqno{(6.7)}$$
Using ${R}^{\ast}$ in place of $R$ in the standard definitions of
the cosmological parameters, we can list the resulting
spin-dependent parameters in Table V. The second column of this
table gives the values of the parameters as if they were extracted
from massless spinless particles. The values of the parameters
extracted from photons should match the values listed in column 4.
It is worth mentioning that the matter parameter is not affected by
the spin-gravity interaction. This is due to its independence of
Hubble's parameter.
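As an illustration of how the entries of Table V are generated, consider the Hubble parameter (a sketch, using the definition $H = \dot{R}/R$ together with (6.7)):
$$ H^{\ast} \,=\, \frac{1}{R^{\ast}}\,\frac{dR^{\ast}}{dt} \,=\, \Big(1-\frac{n}{2}\,\alpha \gamma\Big)\,\frac{1}{R}\,\frac{dR}{dt} \,=\, \Big(1-\frac{n}{2}\,\alpha \gamma\Big)\,H\,. $$
With $\gamma = 1$, this reproduces the neutrino ($n=1$), photon ($n=2$) and graviton ($n=4$) columns of the table.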
\begin{center}
Table V: Spin-Dependent Cosmological Parameters \\
\begin{tabular}{|c|c|c|c|c|} \hline
& & & & \\ Parameter & Spin-0 & Spin-$\frac{1}{2}$ (neutrino)&
Spin-1 (photon) & Spin-2 (graviton)
\\ & & & &
\\ \hline & & & & \\ Hubble & $H_{o}$ & $(1- \frac{\alpha}{2}) H_o $ & $(1-\alpha)H_{o}$
& $(1- 2 \alpha)H_{o}$ \\ & & & & \\ \hline & & & & \\
Age & $\tau_{o}$ & $\frac{\tau_o}{(1-\frac{\alpha}{2})}$ &
$\frac{\tau_{o}}{(1-\alpha)}$ & $\frac{\tau_{o}}{(1- 2 \alpha)}$ \\
& & & & \\ \hline & & & & \\ Acceleration & $A_o$
& $(1-\frac{\alpha}{2})(A_{o}-\frac{\alpha}{2}H_{o})$
& $(1-{\alpha})(A_{o}-{\alpha}H_{o})$
& $(1-2{\alpha})(A_{o}-2{\alpha}H_{o})$
\\ & & & & \\ \hline & & & & \\ Deceleration &$ q_{o}$
&$\frac{(q_{o}-\frac{\alpha}{2H_{o}})}{(1-\frac{\alpha}{2})}$ &
$\frac{(q_{o}-\frac{\alpha}{H_{o}})}{(1-\alpha)}$ &
$\frac{(q_{o}-\frac{2 \alpha}{H_{o}})}{(1- 2\alpha)}$
\\ & & & & \\ \hline
\end{tabular}
\end{center}
There is some evidence for the existence of the spin-gravity
interaction on the laboratory scale (the results of the
COW-experiment) and on the galactic scale (the data of SN1987A).
Now, to verify the existence of this interaction on the
cosmological scale, observations of at least one parameter, using
two different types of carriers, are needed. For example, if we
observe neutrinos and photons to get Hubble's parameter, a
discrepancy of order 0.001 would be expected, if this interaction
exists on the cosmological scale.
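The quoted order of magnitude follows directly from Table V (taking $\gamma = 1$):
$$ \frac{H_{\nu}-H_{\gamma}}{H_{o}} \,=\, \Big(1-\frac{\alpha}{2}\Big) - \big(1-\alpha\big) \,=\, \frac{\alpha}{2} \,\approx\, 0.0036\,. $$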
\section{General Discussion and Concluding Remarks}
In the present work, it is shown that, starting within the
geometerization philosophy, some quantum properties appeared very
naturally in the structure of two types of non-symmetric
geometries (see the third column of Tables II and III). These
properties emerged without imposing any known quantization schemes
on the geometry. The properties characterize the torsion term of
two new sets of path equations discovered in each geometry, (2.25)
and (2.45). The natural appearance of such properties can be
considered as quantum roots built into non-symmetric geometry.
It is shown that these roots can be extended and diffused into the
whole geometry, using a certain parameterization scheme suggested
in Section 4. This scheme, applied here to the AP-geometry, could be
applied with some effort to the ENS-geometry. The application of
the parameterization scheme not only diffuses the quantum
properties into the whole geometric structure, but also solves the
two main problems of the AP-geometry mentioned in Subsection 2.1.
We can summarize the main advantages of this scheme in the
following points:
1. As stated above, it extends the quantum roots, which appeared in the
path equations of a non-symmetric geometry, to other geometric
entities.
2. It completely solves the curvature problem (the identical
vanishing of the curvature (2.12)) mentioned in Subsection 2.1,
by defining a general parameterized non-vanishing curvature tensor
(4.4).
3. From the application point of view, and depending on the
curvature (4.4), field theories written in the parameterized
AP-geometry do not need the condition of a vanishing torsion
(which leads to a flat AP-space via (2.18)-(2.21)) in order to get a
correct GR-limit. In other words, to switch off the effect of the
torsion in such theories, we only take the parameter $b = 0$.
4. It solves the second problem of the conventional AP-geometry, i.e.
that the path equations (2.13) do not represent known physical
trajectories. The new quantum paths (4.3) can be used for physical
applications, as shown in Section 6.
5. The parameterized absolute parallelism (PAP) geometry is more
general than both the Riemannian and the conventional AP-geometries.
It accounts for both geometries as two limiting cases. These
limits can be obtained using (4.1b). The first limit is $a = 0
\Longrightarrow b = 1$, which corresponds to the conventional
AP-geometry. The second is $a = 1 \Longrightarrow b = 0$, which
corresponds to the Riemannian geometry. Figure 1 [12] is a
schematic diagram giving the complete spectrum of geometries
admitted by the PAP-geometry.
\begin{center}
\includegraphics[width=10cm]{QG.ps}\\[5pt]
\end{center}
\begin{quote}
\begin{center}
\textbf{Figure 1:} Quantum Properties of PAP-Geometry.
\end{center}
\end{quote}
This figure is plotted using equations (4.4) and (4.6).\\
The new quantum paths (4.3) can be reduced to the geodesic equation
of Riemannian geometry (or the null-geodesic, upon reparameterization)
by setting $b = 0$, which switches off the effect of the torsion
term. In this case the equation can account for classical
mechanics and relativistic mechanics. But if $b \neq 0$, then the
torsion of the background gravitational field will interact with
some property of the moving particle. Recalling that the
parameter $b$ jumps by steps of one-half, (4.10), one can
conclude that the property of the test particle by which it
interacts with the torsion is its quantum spin. For this reason,
the torsion term in (4.3) is suggested to represent the "Spin-Gravity
Interaction". The linearization carried out in Section 5 shows
clearly that this interaction reduces Newton's gravitational
potential, as is obvious from (5.10). This equation shows that the
gravitational potential felt by a spinning particle is less than
that felt by a spinless particle or by a macroscopic object. In
other words, one can say that spinning particles feel the
space-time torsion. This is similar to the fact that charged
particles feel the electromagnetic potential, while neutral
particles do not feel it.\\
The discrepancy between the experimental results of the
COW-experiment and the theoretical calculations (using Newtonian
gravity) gives a good indication of the existence of the
spin-gravity interaction on the laboratory scale. The experimental
results are found to be lower than the theoretical
calculations. This discrepancy can be interpreted, qualitatively, as
a decrease in the gravitational potential of the Earth felt by
neutrons (spin one-half particles): the value of this potential,
felt by neutrons, is less than the value given by Newton's theory.
The application of the new quantum path (4.3), in Subsection 6.1, gives
good qualitative and quantitative agreement with the experimental
results. Such agreement gives not only evidence of the existence
of the spin-gravity interaction, but
also a direct confirmation of equation (4.3).\\
The application of the linearized form of (4.3), in the case of
motion of spinning massless particles coming from SN1987A,
Subsection 6.2, gives a good model for the emission times of these
particles from this supernova (see Table IV). This may indicate
the presence of the spin-gravity interaction on the galactic
scale. But more effort is still needed, both to confirm
supernova mechanisms and to observe more supernovae, in order to give
strong confirmation of the existence of this interaction on the
astrophysical scale.\\
The full path equation (4.3) is applied in the case of cosmology,
Subsection 6.3. It is shown that the values of the cosmological
parameters will be affected by the spin-gravity interaction, if it
exists on the cosmological scale. The values of these parameters
will depend on the spin of the particle from which the cosmological
information is extracted. It is suggested that a cosmological
parameter measured using two massless particles with different
spins (e.g. the photon and the neutrino) may confirm the existence of
the spin-gravity interaction on the cosmological scale. The
sensitivity of the apparatus, or experiment, to be used should be
better than $0.001$.\\
In view of the present work, I will try to give short, probable
answers to some of the good questions raised by Professor V. Petrov
in the closing session of the conference:
Q1: What is the appropriate topology/geometry?
A1: A non-symmetric geometry.
Q2: How many dimensions?
A2: So far, in the context of geometerization of physics, we don't
need more than four dimensions. Mass and charge appear as
constants of integration. There are some attempts to represent
other interactions (e.g. electromagnetism) together with gravity
in spaces of four dimensions (cf. [7]).
Q3: What are the
experimental/observational signatures of quantum-geometrical
effects?
A3: Concerning the experimental signature, the COW-type experiment
is a good medium for testing quantum-geometrical effects on the
laboratory scale. The discrepancy in the results of this
experiment gives a good indication of the existence of such
effects.
Concerning the observational signature, more effort is needed
to observe photons and neutrinos (and probably gravitons, in
the future) from supernova events, in order to detect the
existence of such effects on the astrophysical and cosmological
scales.\\
Finally, I would like to thank the organizing committee and
Professor V. Petrov for inviting me to participate in the conference
and to give this talk.\\ \\ \\ \\
{\bf References}\\
{ [1] Eisenhart, L.P. (1926) {\it "Riemannian Geometry"}, Princeton Univ. Press}. \\
{ [2] Einstein, A. (1929) Sitz. Preuss. Akad. Wiss., {\bf 1}, 1.} \\
{ [3] Einstein, A. (1955) {\it "The Meaning of Relativity"}, Appendix II, Princeton.}\\
{ [4] Bazanski, S. L., (1977) Ann. Inst. H. Poincar\'e, A{\bf 27}, 145.} \\
{ [5] Bazanski, S. L., (1989) J. Math. Phys., {\bf 30}, 1018}.\\
{ [6] Wanas, M.I.(1975) Ph.D. Thesis, Cairo University.} \\
{ [7] Mikhail, F.I. and Wanas, M.I. (1977) Proc. Roy. Soc. London, {\bf A356}, 471.} \\
{ [8] Wanas, M.I., Melek, M. and Kahil, M.E. (1995) Astrophys. Space
Sci., {\bf 228}, 273; gr-qc/0207113.}\\
{ [9] Wanas, M.I. and Kahil, M.E. (1999) Gen. Rel. Grav., {\bf 31}, 1921; gr-qc/9912007.} \\
{[10] Wanas, M.I. (2000) Turk. J. Phys., {\bf 24}, 473; gr-qc/0010099.}\\
{[11] Wanas, M.I. (1998) Astrophys. Space Sci., {\bf 258}, 237; gr-qc/9904019.} \\
{[12] Wanas, M.I. (2002) Proc. MG IX, Vol. 2, 1303.} \\
{[13] M\o ller, C. (1978) Math. Fys. Medd. Dan.Vid. Selsk., {\bf 39}, 1.} \\
{[14] Hayashi, K. and Shirafuji, T. (1979) Phys. Rev. D{\bf 19}, 3524.} \\
{[15] Overhauser, A.W. and Colella, R. (1974) Phys. Rev. Lett., {\bf 33}, 1237.}\\
{[16] Colella, R., Overhauser, A.W. and Werner, S.A.(1975) Phys. Rev. Lett., {\bf 34},1472.}\\
{[17] Staudenmann, J.L., Werner, S.A., Colella, R. and Overhauser, A.W. (1980)
Phys. Rev. A, {\bf 21}, 1419.}\\
{[18] Werner, S.A., Kaiser, H., Arif, M. and Clothier, R. (1988) Physica B, {\bf 151}, 22.}\\
{[19] Greenberger, D.M. (1983) Rev. Mod. Phys., {\bf 55}, 875.}\\
{[20] Wanas, M.I., Melek, M. and Kahil, M.E. (2000) Gravit. Cosmol.,
{\bf 6}, 319; gr-qc/9812085.} \\
{[21] Schramm, D.N. and Truran, J.W. (1990) Phys. Rep. {\bf 189}, 89-126.} \\
{[22] Weber, J. (1994) Proc. First Edoardo Amaldi Conf. {\it "On
gravitational wave
experiment"}, Ed. E. Coccia et al., World Scientific, p. 416.}\\
{[23] De Rujula, A. (1987) Phys. Lett. {\bf 60}, 176.} \\
{[24] Krauss, L.M. and Tremaine, S. (1988) Phys. Rev. Lett., {\bf 60}, 176.} \\
{[25] Longo, M.J. (1987) Phys. Rev. D, {\bf 36}, 3276.} \\
{[26] Longo, M. J. (1988) Phys. Rev. Lett., {\bf 60}, 173.}\\
{[27] Wanas, M.I., Melek, M. and Kahil, M.E. (2002) Proc. MG IX, Vol.
2, 1100; gr-qc/0306086.} \\
{[28] Kermack, W.O., McCrea, W.H. and Whittaker, E.T. (1933) Proc.
Roy.
Soc. Edin., {\bf 53}, 31.}\\
{[29] Robertson, H.P. (1932) Ann. Math. Princeton (2), {\bf 33}, 496.} \\
{[30] Wanas, M.I. (2002) To appear in the Proc. IAU-Symp.\# 201,
held in Manchester,
August 2000.} \\
\end{document}
\section{Introduction}
Due to the nonlinearity of Einstein's equation, it is virtually impossible to integrate it analytically without imposing restrictions on the initial ansatz. The most common way of doing so is by the imposition of symmetries. For instance, the Schwarzschild solution was found by assuming that the spacetime has spherical symmetry, namely that it has three Killing vectors whose Lie algebra is $\mathfrak{so}(3)$. Likewise, the Kerr solution was obtained by relying on the existence of two commuting Killing vectors \cite{Kerr}, i.e. the spacetime was assumed to be stationary and axisymmetric. It is important to keep in mind that the hypothesis of two commuting Killing vectors is not over-restrictive from the physical point of view, since the rigidity theorem states that the equilibrium state of an astronomical object should be stationary and axisymmetric \cite{HawkingRigidity,Chrusciel:1996bj}.
Besides the symmetries of the spacetime, which are generated by Killing vectors, one can also impose symmetries on the geodesic motion, which are generated by Killing tensors and Killing-Yano tensors \cite{Carter-KleinG,Santillan}. Since the metric is always a Killing tensor, the existence of an extra Killing tensor along with two independent Killing vectors leads to four first integrals for the geodesic motion, which enables full integrability. Nevertheless, one might wonder whether it is plausible to assume the existence of a Killing tensor in physical spacetimes. The known examples tell us that the answer is yes. For instance, four-dimensional Kerr metric and, more generally, Kerr-NUT-(A)dS spacetimes in arbitrary dimension \cite{KerrNutAds}, are all endowed with enough Killing tensors to allow the integrability of the geodesic motion \cite{Kubiz,Krtous}. Thus, some of the most physically important exact solutions for Einstein's vacuum equation are endowed with Killing tensors. In addition to being related to the integrability of the geodesic motion, these Killing tensors are also related to the integrability of field equations in such spacetimes, like scalar fields \cite{Frol-KG}, electromagnetic fields \cite{KrtousMaxwell,Teukolsky}, and spin 1/2 fields \cite{OotaDirac}. Probably, the existence of these objects might also be related to the integrability of Einstein's equation itself \cite{Yasui}, as hinted by the successful integration of gravitational perturbations through the use of Killing tensors \cite{OotaGrav}. Moreover, these Killing and Killing-Yano tensors can play an important role in supersymmetric theories \cite{KY-SUSY,Cariglia}.
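For concreteness, the four first integrals alluded to above are the standard ones (a sketch of the construction; here $p_{a}$ denotes the momentum along an affinely parameterized geodesic):
\begin{equation*}
C_{1} = p_{\sigma_1}\,, \qquad C_{2} = p_{\sigma_2}\,, \qquad
C_{3} = g^{ab}\,p_{a}\,p_{b}\,, \qquad C_{4} = K^{ab}\,p_{a}\,p_{b}\,,
\end{equation*}
each of which is conserved along the geodesic flow. For the class of metrics considered below, these quantities are in involution and functionally independent, which is what guarantees Liouville integrability.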
With these motivations in mind, in the present article we will search for solutions of Einstein's vacuum equation with a cosmological constant within the class of spacetimes possessing a Killing tensor and two commuting Killing vectors. The general form of the spaces with such symmetry properties has been found by Benenti and Francaviglia in Ref. \cite{BenentiFrancaviglia} and is given by:
\begin{align}
g^{ab}\partial_{a}\partial_{b} =\frac{1}{S_{x}+S_{y}}\,& \Big[
\,G_{x}^{ij}\,\partial_{\sigma_i}\partial_{\sigma_j}\,+\,G_{y}^{ij}\,\partial_{\sigma_i}\partial_{\sigma_j} \nonumber\\
& \;\;\;\quad + \Delta_{x}\, \partial_{x}^{2}+\Delta_{y}\, \partial_{y}^{2}\, \Big] \, , \label{BFmetric}
\end{align}
where functions with subscript $x$ are arbitrary functions of $x$, while those with subscript $y$ are arbitrary functions of $y$. For instance, $\Delta_{x} = \Delta_{x}(x)$. The indices $i,j$ run through $\{1,2\}$ and label the cyclic coordinates $\sigma_1$ and $\sigma_2$. Note that we can assume that $G_{x}^{ij} = G_{x}^{ji}$ and $G_{y}^{ij} = G_{y}^{ji}$, due to the symmetry of the metric. The rank two Killing tensor associated to this metric is given by
\begin{align}
\boldsymbol{K}\,=\,\frac{1}{S_{x}+S_{y}}\,\Big[& \,S_{x}\,G_{y}^{ij}\, \partial_{\sigma_i}\partial_{\sigma_j}
+ S_{x}\,\Delta_{y} \,\partial_{y}^{2} \nonumber\\
& - S_{y}\,G_{x}^{ij}\,\partial_{\sigma_i}\partial_{\sigma_j} - S_{y} \,\Delta_{x} \, \partial_{x}^{2} \, \Big] \,. \label{KillingT1}
\end{align}
In recent previous works we have already exploited the integrability of Einstein's equation for some spaces within the class of metrics (\ref{BFmetric}). In Ref. \cite{AnabalonBatista}, one of us (C.B.), along with A. Anabal\'{o}n, investigated the subcase in which the determinants of the matrices $G_x^{ij}$ and $G_y^{ij}$ are both zero. It was found that Einstein's vacuum equation with a cosmological constant is fully integrable for such a subcase, with Kerr-NUT-(a)dS being a particular solution. Later, the present authors also considered the subcase of vanishing determinants for $G_x^{ij}$ and $G_y^{ij}$ but, instead of vacuum,
a gauge field of arbitrary gauge algebra was considered as a source for the gravitational field \cite{GabrielBatista}. In particular, new exact solutions were attained in Ref. \cite{GabrielBatista}.
Now, the idea is to explore another subcase of the class of spaces (\ref{BFmetric}), namely the one in which one of the matrices $G_{x}^{ij}$ or $G_{y}^{ij}$ vanishes identically. For definiteness, we shall assume $G_x^{ij} = 0$. In this case it is immediate to notice that the line element is given by
\begin{equation}\label{metric1}
ds^2 = (S_x + S_y) \left[ H_y^{ij}\,d\sigma_i d\sigma_j + \frac{dx^2}{\Delta_x} + \frac{dy^2}{\Delta_y} \right] \,,
\end{equation}
where $H_y^{ij}$ are arbitrary functions of $y$. Note that in the general case, when $G_{x}^{ij}$ and $G_{y}^{ij}$ are both nonzero, the line element would have the same algebraic structure above, but the components $H^{ij}$ would be convoluted combinations of functions of $x$ and functions of $y$. As we shall see in the sequel, Einstein's vacuum equation for the class of spaces described by (\ref{metric1}) is integrable. It will be shown that although most of the solutions found within this class are already known, we arrive at a particular solution that, as far as the authors know, has not been attained before.
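Explicitly, since the $\sigma$-block of the inverse metric (\ref{BFmetric}) reduces to $G_{y}^{ij}/(S_x+S_y)$ when $G_x^{ij}=0$, the covariant components appearing in (\ref{metric1}) are simply the entries of the inverse matrix (the index placement of $H_y^{ij}$ follows the notation used above):
\begin{equation*}
H_{y}^{ij} \,=\, \big( G_{y}^{-1} \big)_{ij}\,.
\end{equation*}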
The outline of the article is the following. In the next section we start the integration of Einstein's equation and conclude that the calculations should be split into three different cases, depending on the constancy of the functions $S_x$ and $S_y$. The case in which both functions are constant is tackled in subsection \ref{SubSecA}, which yields flat spaces as the only solutions. Then, the case in which $S_y$ is constant while $S_x$ is non-constant is considered in subsection \ref{SubSecB}, with the only solutions being spaces of constant curvature. Finally, the case in which just $S_x$ is constant is considered in subsection \ref{SubSecC}. During the integration of the latter case we conclude that there is a special subcase that must be considered separately. In subsection \ref{SubSecC1} we treat the general case and arrive at a generalization of the Kasner spacetime, while the special subcase is tackled in subsection \ref{SubSecC2} and leads to a solution that, as far as the authors know, has not been described in the literature yet. Then, in Sec. \ref{Sec.NewSOL} we investigate the geometrical features of the new solution. We show that this solution is a Kundt spacetime of Petrov type II possessing a null Killing vector field and that it reduces to a $pp$-wave spacetime when the cosmological constant vanishes. Its isometry algebra is three-dimensional and abelian, so that it is of Bianchi type I, but, differently from the best-known solutions of this type, the line element cannot be diagonalized using the cyclic coordinates associated to the Killing vectors. The regularity of the new solution and its asymptotic form are also investigated. The conclusions and perspectives are presented in Sec. \ref{Sec.Conc}. In appendix \ref{AppendixA} we show that, in the asymptotic limit, the new solution goes to a Kasner spacetime.
\section{Integrating Einstein's Equation}\label{Sec.Integration}
The goal of this work is to integrate Einstein's field equation in vacuum with a cosmological constant $\Lambda$. That is, we want to find the most general solution of the equation
\begin{equation}\label{Einsteinseq}
R_{ab}=\Lambda g_{ab},
\end{equation}
for line elements of the form (\ref{metric1}), where $R_{ab}$ stands for the Ricci tensor.
Nevertheless, before doing so, it is useful to replace the three arbitrary functions $H_y^{11}$, $H_y^{22}$ and $H_y^{12}=H_y^{21}$ appearing in (\ref{metric1}) by the three functions $P_y$, $Q_y$ and $\Omega_y$ defined in a way that the line element assumes the following form:
\begin{align}\label{metric2}
ds^2=& \,S \,\Big( -\frac{1}{\Omega_y}d\sigma_1^2+ \frac{Q_y^2-P_y^2}{\Omega_y}d\sigma_2^2 \nonumber\\
& \quad \;\;+ \frac{2P_y}{\Omega_y} d\sigma_1 d\sigma_2 +\frac{dx^2}{\Delta_x}+\frac{dy^2}{\Delta_y} \,\Big),
\end{align}
where $S=S_x+S_y$. This represents no loss of generality.
Now, an immediate integration of the component $R_{\sigma_1}^{\ph{\sigma_1}\sigma_2}=0$ of Einstein's equation for the function $\Delta_y$ provides
\begin{equation}\label{Dy}
\Delta_y=\frac{c_1 \,Q_y^2 \,\Omega_y^2}{(S_x + S_y)^2(P_y')^2} \,,
\end{equation}
where $c_1$ is an arbitrary integration constant and the prime denotes a derivative with respect to the variable on which a function depends. Although Eq. (\ref{Dy}) is correct when $S_x$ is a constant function, this equation cannot be used when $S_x$ is non-constant, since otherwise $\Delta_y$ would also depend on $x$. Thus, the case in which $S_x$ is a non-constant function of $x$ must be handled with special care\footnote{The equation $R_{\sigma_1}{}^{\sigma_2}=0$ has the structure $A_y + B_y S_x=0$, where $A_y$ and $B_y$ are functions of $y$. If $S_x$ is constant the general solution is $A_y = -B_y S_x$. However, if $S_x$ is non-constant the general solution is $A_y=0$ and $B_y=0$, thus yielding an extra constraint.}. Doing so, we find that the equation $R_{\sigma_1}^{\ph{\sigma_1}\sigma_2}=0$ yields the following constraints:
\begin{equation}\label{Dy2}
\Delta_y=\frac{c_1 \,Q_y^2 \,\Omega_y^2}{(P_y')^2} \;\; \textrm{ and } \;\; S'_y = 0 \,. \;\;\; (\textrm{when } S'_x\neq 0)
\end{equation}
In order to attain both of the expressions \eqref{Dy} and \eqref{Dy2}, we have assumed that $P_y'\neq 0$. The special case in which $P_y$ is constant will be considered later.
Now, assuming either \eqref{Dy} or \eqref{Dy2} to hold, and then integrating $R_{\sigma_2}^{\ph{\sigma_2}\sigma_1}=0$, we find that in both cases $Q_y$ must be given by
\begin{equation}\label{Qy}
Q_y=\sqrt{(P_y-a_1)(P_y-a_2)} \,,
\end{equation}
with $a_1$ and $a_2$ being arbitrary integration constants.
Also, irrespective of assuming the latter expressions for $\Delta_y$ and $Q_y$, the integration of the component $R_x{}^y=0$ leads to the following constraint:
\begin{equation*}
S_x'\,S_y'=0 \,.
\end{equation*}
Thus, we face three possible cases, depending on whether the functions $S_x(x)$ and $S_y(y)$ are constant or not. Namely, (A) the functions $S_x$ and $S_y$ are both constant, (B) $S_x$ is non-constant and $S_y$ constant, and (C) $S_x$ is constant and $S_y$ non-constant. In particular, note that in cases (A) and (C), $\Delta_y$ is given by Eq. \eqref{Dy}, while in case (B) we must use Eq. \eqref{Dy2}. In the following subsections, each of these three cases will be treated separately. As we shall see, cases (A) and (B) do not provide any particularly interesting solutions, while case (C) will lead to solutions with richer physics: a generalization of the Kasner solution, which is already available in the literature, and a new solution of Petrov type II possessing a null Killing vector field and whose isometry algebra is three-dimensional and abelian.
\subsection{Subcase $S_x'=0$ and $S_y'=0$}\label{SubSecA}
In this subsection we investigate the simplest of the three possible cases regarding the constancy of the functions $S_x$ and $S_y$, namely we shall consider that they are both constant. This gives rise to a constant conformal factor $S$ which can be easily incorporated into the coordinates by a scaling transformation, so that we can set
\begin{equation*}
S= S_x + S_y = 1 \,.
\end{equation*}
Then, assuming $S=1$, along with Eqs. (\ref{Dy}) and (\ref{Qy}) for $\Delta_y$ and $Q_y$, it follows that $R_{x}^{\ph{x}x}$ is automatically zero, so that the equation $R_{x}^{\ph{x}x} = \Lambda \delta^x_x$ states that the cosmological constant must vanish, $\Lambda = 0$. Then, integrating $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}=\Lambda=0$, we find
\begin{equation}\label{Omegay1}
\Omega_y=c_2\,(P_y-a_1)^{d} \, (P_y-a_2)^{1-d} \,,
\end{equation}
where $c_2$ and $d$ are arbitrary integration constants. In order to attain (\ref{Omegay1}) it was necessary to assume $a_1 \neq a_2$. Indeed, the special case $a_1=a_2$ would lead to a different expression for $\Omega_y$, but let us put this particular case aside and deal with it at the end of this subsection. With the latter expressions for $S$, $\Delta_y$, $Q_y$, and $\Omega_y$ at hand, the equation $R_{y}^{\ph{y}y}=\Lambda=0$ leads to the constraint $d = 0$. Actually, another possibility for solving $R_{y}^{\ph{y}y}=0$ is $d=1$, but this is equivalent to $d=0$ when we interchange the arbitrary constants $a_1$ and $a_2$, so that we just need to consider $d=0$. Thus, $\Omega_y$ should be given by:
\begin{equation*}
\Omega_y=c_2\, (P_y-a_2) \,.
\end{equation*}
With this expression for $\Omega_y$ along with the latter expressions for $S$, $Q_y$ and $\Delta_y$, it follows that the Riemann tensor is identically zero. Thus, the solution is the flat space. In particular, the Ricci tensor vanishes, so that we must have $\Lambda= 0$.
In the latter integration, we have excluded two possibilities, namely the case $a_1=a_2$ and the case in which $P_y$ is a constant function. Nevertheless, integrating these cases separately, we have checked that, in both circumstances, the solution can only be attained for $\Lambda=0$ and that, likewise, these solutions turn out to be flat spaces. Thus, summing up, the case considered in this subsection, namely $S_x'=0$ and $S_y'=0$, does not lead to any interesting solution. More precisely, all solutions in such a subcase are flat.
\subsection{Subcase $S_x'\neq0$ and $S_y'=0$} \label{SubSecB}
Now, let us integrate Einstein's vacuum equation for the subcase $S_x'\neq0$ and $S_y'=0$. Since the functions $S_x$ and $S_y$ appear in the metric only through the combination $S_x + S_y$, it follows that the constant value of $S_y$ can be absorbed into $S_x$. Thus, without loss of generality, we can set
\begin{equation*}
S_y = 0 \,.
\end{equation*}
Assuming that $P_y'\neq 0 $, it follows that $\Delta_y$ and $Q_y$ should be given by Eqs. (\ref{Dy2}) and (\ref{Qy}), respectively. With these at hand, it follows that integration of the component $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}- R_{y}^{\ph{y}y}=0$ of Einstein's equation yields
\begin{equation}\label{SSy}
\Omega_y = c_2 + c_3 P_y,
\end{equation}
with $c_2$ and $c_3$ being integration constants. Also, using the equation $R_{x}^{\ph{x}x}=\Lambda$, we obtain
\begin{equation}\label{D1}
\Delta_x=\frac{ c_4\, S_x^2 - 4 \Lambda \, S_x^3 }{ 3(S_x')^2 },
\end{equation}
where $c_4$ is another integration constant. Finally, imposing $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}= \Lambda$ we arrive at the following constraint on the integration parameters:
\begin{equation}\label{c2c4}
c_4 = -3 c_1(c_2+a_1 c_3)(c_2+a_2 c_3)\,.
\end{equation}
Then, once it is assumed that $c_4$ is given by Eq. (\ref{c2c4}), it follows that Einstein's vacuum equation $R_{a}^{\ph{a}b}=\Lambda\delta_{a}^{\ph{a}b}$ is fully obeyed. Nevertheless, one can check that this final solution has vanishing Weyl tensor, so that the Riemann tensor obeys
\begin{equation}
R_{abcd}=\frac{1}{3}\Lambda \left(g_{ac}g_{bd}-g_{ad}g_{bc}\right).
\end{equation}
Thus, the solutions that we have found have constant curvature, i.e., they are de Sitter or anti-de Sitter spacetimes when Lorentzian signature is assumed and $\Lambda\neq 0$, while for vanishing cosmological constant the solution is flat space.
A possibility that has not been considered yet for the present subcase ($S_x'\neq0$ and $S_y'=0$) is $P_y'=0$, in which case Eqs. (\ref{Dy2}) and (\ref{Qy}) are not valid. However, integrating Einstein's equation for $S_y=0$ and $P_y'=0$ we also eventually find that the solution is a maximally symmetric space. Thus, all solutions of the subcase $S_x'\neq0$ and $S_y'=0$ turn out to be the ``non-interesting'' spaces of constant curvature.
\subsection{Subcase $S_x'=0$ and $S_y'\neq0$} \label{SubSecC}
Finally, let us consider the subcase $S_x'=0$ and $S_y'\neq 0$, in which case we can, without loss of generality, absorb the constant value of $S_x$ into $S_y$ and set
\begin{equation}\label{Sx3}
S_x = 0 \,.
\end{equation}
Moreover, we can easily redefine the coordinate $x$ ($dx\rightarrow d\tilde{x} = dx/\sqrt{\Delta_x}$) in order to eliminate the function $\Delta_x$. Doing so, and dropping the tilde over the new coordinate, we find that this is equivalent to setting
\begin{equation}\label{Dx3}
\Delta_x = 1 \,.
\end{equation}
In particular, note that due to Eqs. (\ref{Sx3}) and (\ref{Dx3}) the metric is independent of the coordinate $x$. Thus, besides the Killing vector fields $\partial_{\sigma_1}$ and $\partial_{\sigma_2}$, $\partial_{x}$ also generates a symmetry. These three independent Killing vector fields commute with each other and, therefore, yield an abelian three-dimensional algebra. According to Bianchi's classification of three-dimensional Lie algebras, this isometry algebra is of Bianchi type I \cite{LBianchi}. Moreover, it is worth noting that the Killing tensor (\ref{KillingT1}) is trivial in this subcase. Indeed, with the choices (\ref{Sx3}) and (\ref{Dx3}) we get $\bl{K}=-\partial_x^2$, so that the first integral associated to $\bl{K}$ for the geodesic motion is just the square of the one associated to the Killing vector $\partial_{x}$ \cite{Santillan}.
Postponing the analysis of the special case in which $P_y$ is constant, we can assume expressions (\ref{Dy}) and (\ref{Qy}) to hold. Doing so, and using \eqref{Sx3} and \eqref{Dx3}, it follows from the integration of $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}- R_{x}^{\ph{x}x}=0$ that $\Omega_y$ must be given by
\begin{equation}\label{Omegay3}
\Omega_y=c_2(P_y-a_1)^{d}(P_y-a_2)^{1-d},
\end{equation}
where $c_2$ and $d$ are arbitrary integration constants. Then, assuming \eqref{Omegay3} to hold, it follows from the integration of $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}- R_{y}^{\ph{y}y}=0$ that
\begin{equation}\label{S2}
S_y=\left[b_1\left(\frac{P_y-a_1}{P_y-a_2}\right)^{d_+} + b_2\left(\frac{P_y-a_1}{P_y-a_2}\right)^{d_-}\right]^{-2/3}.
\end{equation}
In the above expression, while $b_1$ and $b_2$ are, for the time being, arbitrary integration constants, $d_{\pm}$ are not arbitrary,
rather they are given in terms of $d$ by:
\begin{equation}\label{epm}
d_{\pm} =\frac{1}{2}\left[1 - 2\,d \pm \sqrt{d(d-1)+1}\right] \,.
\end{equation}
Finally, integrating $R_x^{\ph{x}x}=\Lambda$, we find that $b_1$ and $b_2$ must be constrained by the following relation:
\begin{equation}\label{b1b2}
b_1 b_2 = \frac{3\Lambda}{c_1 c_2^2(a_1-a_2)^2[d(d-1)+1]} \,.
\end{equation}
In order for the latter expression to be meaningful we need to have $a_1\neq a_2$. The special case $a_1=a_2$ will be considered later. With the above prescriptions, namely Eqs. (\ref{Dy}), (\ref{Qy}), and (\ref{Sx3})-(\ref{b1b2}), we have that Einstein's vacuum equation is fully obeyed. Notice that in this solution the function $P_y$, apart from being non-constant, has not been constrained. This freedom in the choice of $P_y$ is expected from the fact that in the metric \eqref{metric2} we could, for instance, have set $\Delta_y=1$ by means of a redefinition of the coordinate $y$. Thus, we have started with more degrees of freedom than necessary. The important point is that different choices of $P_y$ can be understood as different choices of the coordinate $y$ and, therefore, represent the same physical space.
\subsubsection{Turning the metric into a diagonal form}\label{SubSecC1}
Now, let us try to identify the solution just found.
Integrating the Killing equation, we can check that this solution admits no other independent generators of symmetries besides the commuting Killing vector fields $\partial_{\sigma_1}$, $\partial_{\sigma_2}$, and $\partial_{x}$. Thus, this solution is, indeed, a Bianchi Type I space.
A well-known class of spacetimes that are Bianchi type I are the so-called \textit{Bianchi type I cosmological spacetimes}, which have the diagonal form
\begin{equation}\label{bianchiI}
ds^2=-d\tau^2+(A^1_{\tau})^2dz_1^2+ (A^2_{\tau})^2 dz_2^2+ (A^3_{\tau})^2 dz_3^2 \,,
\end{equation}
where $A^1_{\tau}$, $A^2_{\tau}$ and $A^3_{\tau}$ are arbitrary functions of $\tau$. These spacetimes are used by cosmologists to incorporate anisotropy on the space-like hyper-surfaces $\tau=constant$, providing a generalization of the FRW cosmological model \cite{Jacobs}. The Killing vectors $\partial_{z_1}$, $\partial_{z_2}$ and $\partial_{z_3}$ generate a three-dimensional abelian isometry group, so that the isometry algebra is of Bianchi type I. This isometry group acts transitively on the three-dimensional hyper-surfaces given by $\tau = constant$. The diagonal form of this line element indicates that the coordinate vector fields are orthogonal to families of hyper-surfaces. In particular, the Killing vectors $\partial_{z_1}$, $\partial_{z_2}$, and $\partial_{z_3}$ are hyper-surface orthogonal. For instance, $\partial_{z_1}$ is orthogonal to the hyper-surfaces $z_1=constant$.
Coming back to our Bianchi type I solution found in the present subsection, one can see that while $\partial_x$ is a hyper-surface orthogonal Killing vector, the existence of the term $d\sigma_1 d\sigma_2$ in the line element (\ref{metric2}) indicates that the Killing vector fields $\partial_{\sigma_1}$ and $\partial_{\sigma_2}$ are not orthogonal to families of hyper-surfaces. Indeed, we can check that
\begin{equation*}
(\partial_{\sigma_1})_{[a}\nabla_b(\partial_{\sigma_1})_{c]}\neq0 \quad \text{and} \quad
(\partial_{\sigma_2})_{[a}\nabla_b(\partial_{\sigma_2})_{c]}\neq0 \,.
\end{equation*}
Thus, let us try to find two independent Killing vector fields that are orthogonal to families of hyper-surfaces to replace $\partial_{\sigma_1}$ and $\partial_{\sigma_2}$. Defining the Killing vector field
\begin{equation*}
\bl{k} = \alpha\,\partial_{\sigma_1} + \partial_{\sigma_2}
\end{equation*}
and imposing the condition $k_{[a}\nabla_b k_{c]}=0$, one can find that as long as $Q_y=\sqrt{(P_y-a_1)(P_y-a_2)}$, irrespective of the form of the other functions appearing in the line element (\ref{metric2}), we end up with two possible values for the constant parameter $\alpha$: either $\alpha=a_1$ or $\alpha=a_2$. Thus, whenever $a_1\neq a_2$ we can exchange the independent Killing vector fields $\partial_{\sigma_1}$ and $\partial_{\sigma_2}$ by
\begin{equation}\label{k1k2}
\bl{k_1} = a_1\partial_{\sigma_1}+\partial_{\sigma_2} \quad \text{and} \quad \bl{k_2} = a_2\partial_{\sigma_1}+\partial_{\sigma_2}\,,
\end{equation}
which are also independent if $a_1\neq a_2$. The important point is that $\bl{k_1}$ and $\bl{k_2}$ are hyper-surface orthogonal, unlike $\partial_{\sigma_1}$ and $\partial_{\sigma_2}$.
Since $\bl{k_1}$ and $\bl{k_2}$ commute with each other, we can associate to them coordinates $\phi_1$ and $\phi_2$ such that $\bl{k_1} = \partial_{\phi_1}$ and $\bl{k_2} = \partial_{\phi_2}$. Indeed, $\phi_1$ and $\phi_2$ are defined by
\begin{equation}
\sigma_1 = a_1 \phi_1 + a_2 \phi_2 \quad \text{and} \quad \sigma_2= \phi_1 +\phi_2 \,.
\end{equation}
In terms of these coordinates, the line element (\ref{metric2}) takes the form below:
\begin{align}
ds^2=& \frac{S_y}{\Delta_y}dy^2 -\frac{(a_2-a_1)(P_y-a_1)S_y}{\Omega_y}d\phi_1^2 \nonumber \\
&+\frac{(a_2-a_1)(P_y-a_2)S_y}{\Omega_y}d\phi_2^2 + S_ydx^2. \label{metric4}
\end{align}
This diagonal line element can be easily put in the general form (\ref{bianchiI}) by redefining the coordinate $y$.
The fact that the investigated solution could be diagonalized using three cyclic coordinates, $\phi_1$, $\phi_2$ and $x$, could be anticipated from the fact that if we take a general Killing vector field, $\bl{\eta} = \lambda_1 \partial_{\sigma_1} + \lambda_2 \partial_{\sigma_2} + \lambda_3 \partial_{x} $, and compute its squared norm, we will conclude that if $a_1\neq a_2$ then $\eta^a\eta_a = 0$ only if $\lambda_1=\lambda_2=\lambda_3 = 0$. Thus, the hyper-surfaces $y=constant$, spanned by the Killing vector fields, have metrics that are either positive-definite or negative-definite. In this circumstance, there is a result in the literature stating that the metric can be diagonalized. Indeed, in \cite{Jacobs} it is shown that it is always possible to diagonalize a metric of the form $ds^2=-dt^2+\gamma_{ij}dx^i dx^j$, where $\gamma_{ij}$ is a positive/negative-definite three-dimensional metric, whenever Einstein's vacuum equation with cosmological constant is imposed. Nevertheless, for the case in which $a_1=a_2$ we can have a non-zero light-like Killing vector, so that the diagonalization cannot be attained using cyclic coordinates.
Remember that the non-constant function $P_y$ has not been constrained, which was a consequence of the freedom in the choice of the coordinate $y$, as argued above. Thus, without any loss of generality, we can set
\begin{equation}\label{Py}
P_y = \frac{a_2 F_y -a_1}{F_y -1} \,,
\end{equation}
with the function $F_y$ being defined by
\begin{equation*}
F_y=\left[\sqrt{\frac{b_2}{b_1}}\tan\left(\frac{\sqrt{3\Lambda}y}{2}\right)\right]^{2/\sqrt{d(d-1)+1}}.
\end{equation*}
This choice of the coordinate $y$ was made so that the component $g_{yy}$ of the metric becomes equal to one. Then, assuming (\ref{Py}) to hold and replacing the cyclic coordinates $\phi_1$, $\phi_2$ and $x$ by their rescaled versions defined by
\begin{align*}
x_1 &= \sqrt{ \frac{(3\Lambda)^{p_1}(a_1-a_2) b_2^{p_1-2/3}}{2^{2p_1} \,c_2\, b_1^{p_1}}} \,\, \phi_1 \,,\\
x_2 &= \sqrt{\frac{(3\Lambda)^{p_2} (a_2-a_1) b_2^{p_2-2/3}}{2^{2p_2} \,c_2\, b_1^{p_2}}} \,\, \phi_2 \,, \\
x_3 &= \sqrt{\frac{(3\Lambda)^{p_3} b_2^{p_3-2/3} }{ 2^{2p_3} \ b_1^{p_3}}} \,\, x \,,
\end{align*}
with the constant parameters $p_i$ given by
\begin{align*}
p_1&=\frac{2-d}{3\sqrt{d(d-1)+1}} + \frac{1}{3}\,,\\
p_2 &=-\,\frac{d+1}{3\sqrt{d(d-1)+1}} + \frac{1}{3} ,\\
p_3&=\frac{2d-1}{3\sqrt{d(d-1)+1}} + \frac{1}{3} ,
\end{align*}
it follows that the line element (\ref{metric4}) becomes
\begin{equation}\label{KasnerM}
ds^2=dy^2+L_y^{2/3}\Bigg[ \sum_{i=1}^{3}e^{2(p_i-\frac{1}{3})N_y}(dx_i)^2\Bigg]\,,
\end{equation}
where
\begin{equation*}
L_y =\frac{\sin(\sqrt{3\Lambda}y)}{\sqrt{3\Lambda}} \, ,\;\;
N_y =\textrm{Log}\left[\frac{2 \,\tan\left( \sqrt{3\Lambda}y/2\right)}{\sqrt{3\Lambda}} \right].
\end{equation*}
Note that the parameters $p_i$ obey the following constraints:
\begin{equation}
\sum_{i=1}^{3} p_i= 1\,, \; \textrm{ and } \; \sum_{i=1}^{3} p_i^2 = 1.
\end{equation}
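These are precisely the Kasner relations. As a quick symbolic check (a verification aid that we add here, not part of the derivation), the following \texttt{sympy} snippet confirms both identities for arbitrary $d$:
\begin{verbatim}
import sympy as sp

d = sp.symbols('d', real=True)
s = sp.sqrt(d*(d - 1) + 1)

# Kasner exponents p_1, p_2, p_3 in terms of the integration constant d
p = [(2 - d)/(3*s) + sp.Rational(1, 3),
     -(d + 1)/(3*s) + sp.Rational(1, 3),
     (2*d - 1)/(3*s) + sp.Rational(1, 3)]

print(sp.simplify(sum(p)))                # prints 1
print(sp.simplify(sum(q**2 for q in p)))  # prints 1
\end{verbatim}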
The solution (\ref{KasnerM}) is a generalization of the Kasner metric for the case in which the cosmological constant is different from zero. This particular solution is already available in the literature, see chapter 13 of Ref. \cite{Stephani}. In the limit $\Lambda\rightarrow 0$ the solution becomes
\begin{equation}
ds^2=dy^2+y^{2p_1}dx_1^2+ y^{2p_2}dx_2^2+y^{2p_3}dx_3^2,
\end{equation}
which is the Kasner metric \cite{Stephani, EKasner}. Such a solution is used in cosmology to model an anisotropic vacuum universe \cite{Jacobs}.
In order to obtain the latter solution we have avoided two special cases, namely we have assumed that $P_y$ is non-constant, so that (\ref{Dy}) holds, and have assumed $a_1\neq a_2$. Thus, for completeness, we should also tackle these cases. First, considering $P_y$ constant and following steps analogous to the ones adopted above, one can check that solutions can be attained, but all these solutions are either equivalent to (\ref{KasnerM}) or to one of its subcases. So, the special case in which $P_y$ is constant does not lead to new solutions. In contrast, the special case $a_1=a_2$ will yield a new solution that is not available in the literature. In the case $a_1=a_2$, Eqs. (\ref{S2}) and (\ref{b1b2}) are not valid, so that the calculations should be done separately, which we shall do in the next subsection. Note that in this special case the Killing vectors (\ref{k1k2}) are not independent from each other, so that the diagonal form above cannot be attained, as hinted by the fact that the coordinates $\phi_1$ and $\phi_2$ are proportional to each other when $a_1=a_2$.
\subsubsection{The special case $a_1=a_2$} \label{SubSecC2}
The special case $a_1=a_2$ will be considered in the present section. It turns out that this will be the most interesting case, since, as far as the authors know, the obtained solution has not been described in the literature yet.
In the sequel, we will assume
\begin{equation}\label{SxDxDy}
S_x=0 \;,\;\; \Delta_x=1 \;,\; \textrm{and } \; \Delta_y=\frac{c_1Q_y^2\Omega_y^2}{S_y^2(P_y')^2}\,,
\end{equation}
as assumed for the general case, whereas the function $Q_y$ reduces to
\begin{equation}\label{delta2Q}
Q_y = P_y-a_1 \,,
\end{equation}
since now $a_1=a_2$. Then, from the integration of the equation $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}- R_{x}^{\ph{x}x}=0$, we obtain
\begin{equation}\label{Sk}
\Omega_y=c_2\,Q_y\,e^{-\tilde{d}/Q_y},
\end{equation}
where $c_2$ and $\tilde{d}$ are arbitrary integration constants.
Using this result for integrating the equation $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}- R_{y}^{\ph{y}y}=0$, we find that
\begin{equation}\label{S2k}
S_y=\left[ b_1 e^{3\tilde{d}/(2Q_y)} + b_2\, e^{\tilde{d}/(2Q_y)} \right]^{-2/3},
\end{equation}
with $b_1$ and $b_2$ being arbitrary integration constants.
Finally, solving $R_{\sigma_1}^{\ph{\sigma_1}\sigma_1}=\Lambda$, we conclude that the constants $b_1$ and $b_2$ must be related to $\Lambda$ as follows:
\begin{equation}\label{b1b2k}
b_1 b_2=\frac{3\Lambda}{ c_1 \,c_2^2 \, \tilde{d}^2}.
\end{equation}
This concludes the integration, as it can be checked that the remaining components of Einstein's equations are obeyed. Thus, we have completely integrated Einstein's equations for the particular case in which $a_1=a_2$, the general solution being given by the line element (\ref{metric2}) with its functions given by (\ref{SxDxDy})-(\ref{b1b2k}). An interesting fact is that this solution for the case $a_1=a_2$ can be obtained from the case $a_1\neq a_2$ by defining
\begin{equation*}
d = \frac{\tilde{d}}{a_1 - a_2}
\end{equation*}
and then taking the singular limit $a_2\rightarrow a_1$ in the expressions (\ref{Omegay3}), (\ref{S2}), and (\ref{b1b2}).
Now, let us try to put the solution just found in a neater form. First, let us make use of the degree of freedom on the choice of $P_y$ to set
\begin{equation*}
P_y = y \,.
\end{equation*}
As explained before, this amounts to no loss of generality. Then, we shall perform the coordinate transformation $(\sigma_1, \sigma_2, x, y)\rightarrow (t,\phi,\theta,r)$, where the new coordinates are defined by
\begin{align*}
\sigma_1 &= - \frac{\sqrt{c_2} \, b_1^{1/3}}{2\sqrt{\tilde{d}} } \left[ (\tilde{d}+ a_1\,\tilde{c}) \,t - a_1 \phi\right] \\
\sigma_2 &= \frac{\sqrt{c_2} \, b_1^{1/3}}{2 a_1 \sqrt{\tilde{d}} } \left[ (\tilde{d}- a_1\,\tilde{c}) \,t + a_1 \phi\right]\\
x&= b_1^{1/3} \, e^{(a_1\tilde{c}-\tilde{d})/(2a_1)} \,\, \theta \,,\\
y &= \frac{a_1^2\,(r + \tilde{c} )}{a_1 \, r + a_1\tilde{c}-\tilde{d}} \, ,
\end{align*}
with the constant $\tilde{c}$ standing for
\begin{equation*}
\tilde{c} = \frac{\tilde{d}}{a_1}-\log(c_1\,c_2^2\,b_1^2\, \tilde{d}^2 /3)\,.
\end{equation*}
In terms of these new coordinates the line element is given by
\begin{equation}\label{metric23}
ds^2=\frac{e^{-r} dr^2}{3(1+\Lambda e^{-r})^2}+\frac{e^{-r}d\theta^2-dt(r\, dt+d\phi)}{(1+ \Lambda e^{-r} )^{2/3}}\,.
\end{equation}
Notice that we were able to get rid of all of the integration constants, so that this solution depends just on the cosmological constant, which is an external parameter. In these coordinates the metric is Lorentzian, although the signature could be easily changed by means of Wick rotations.
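As a consistency check of the integration, the claim that the line element (\ref{metric23}) obeys $R_{a}^{\ph{a}b}=\Lambda\delta_{a}^{\ph{a}b}$ can be confirmed with a computer algebra system. The following \texttt{sympy} sketch (a verification aid that we add here, not part of the derivation) does this directly; the coordinate ordering $(t,\phi,\theta,r)$ and the equivalent form $R_{ab}=\Lambda g_{ab}$ of the vacuum equation are the only choices made, and if the assertions fail by an overall sign, the opposite curvature-sign convention is being used:
\begin{verbatim}
import sympy as sp

# Coordinates (t, phi, theta, r) and the cosmological constant
t, phi, theta, r = sp.symbols('t phi theta r', real=True)
Lam = sp.symbols('Lambda', positive=True)
x = [t, phi, theta, r]
W = 1 + Lam*sp.exp(-r)

# Metric of Eq. (metric23):
# ds^2 = e^{-r} dr^2/(3 W^2) + [e^{-r} dtheta^2 - dt (r dt + dphi)]/W^{2/3}
g = sp.zeros(4, 4)
g[0, 0] = -r / W**sp.Rational(2, 3)
g[0, 1] = g[1, 0] = -sp.Rational(1, 2) / W**sp.Rational(2, 3)
g[2, 2] = sp.exp(-r) / W**sp.Rational(2, 3)
g[3, 3] = sp.exp(-r) / (3*W**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.together(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
           + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
           for d in range(4))/2) for c in range(4)]
          for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc}; the simplification may take a minute or two
def ricci(b, c):
    R = sp.Integer(0)
    for a in range(4):
        R += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        for d in range(4):
            R += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
    return sp.simplify(R)

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))

# Vacuum Einstein equation with cosmological constant: R_{ab} = Lambda g_{ab}
# (if a residue remains, stronger simplification such as sp.radsimp may help)
assert (Ric - Lam*g).applyfunc(sp.simplify) == sp.zeros(4, 4)

# Ricci scalar: R = 4*Lambda
assert sp.simplify((ginv*Ric).trace() - 4*Lam) == 0
\end{verbatim}
The last assertion also confirms that the Ricci scalar equals $4\Lambda$, in agreement with the $n=1$ case of the curvature scalars computed in the next section.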
\section{Analyzing The New Solution }\label{Sec.NewSOL}
In this section we shall analyze the geometrical properties of the line element (\ref{metric23}), aiming at the identification of the spacetime. As we will argue in the sequel, such analysis hints that the metric given in Eq. (\ref{metric23}) might be a new exact solution of Einstein's vacuum equation. In order to arrive at this conclusion, we have tried to characterize this line element as much as possible and then looked for known solutions with the same geometrical features. The bottom line is that, as far as the authors were able to investigate, the solution (\ref{metric23}) has not been described in the literature yet.
First, let us point out that the special case of vanishing cosmological constant of the solution (\ref{metric23}) is already described in the literature. Indeed, when $\Lambda=0$ it follows that $\partial_\phi$ is a covariantly constant null vector field, so that the line element represents a $pp$-wave spacetime \cite{Stephani,EhlersKundt}. The $pp$-wave spacetimes are Petrov type $N$ and all their curvature scalars vanish identically (VSI spacetimes); for more on this class of spacetimes see Ref. \cite{VPravdaEtAl}.
Thus, it remains to analyze the general case $\Lambda\neq0$. A good starting point is to investigate the isometry group of the solution (\ref{metric23}). A complete integration of the Killing equation yields that the isometry group is three-dimensional and abelian, with the trivial Killing vectors $\partial_t$, $\partial_\theta$, and $\partial_\phi$ being a basis for the isometry Lie algebra. So, the isometry algebra is of Bianchi type I. Forming a general linear combination of these Killing vectors, we can see that the only ones that are orthogonal to families of hyper-surfaces are $\partial_\theta$ and $\partial_\phi$. Moreover, note that the Killing vector field $\partial_\phi$ is null. In particular, the existence of a null Killing vector implies that the line element cannot be put in a diagonal form using cyclic coordinates, unlike in the previous case $a_1\neq a_2$; see the discussion in the paragraph below Eq. (\ref{metric4}).
Besides studying the isometry group, another geometric way to characterize the solution \eqref{metric23} is analyzing its Petrov type. In order to do so, we need to use a so-called null tetrad frame $\{\bl{\ell},\bl{n},\bl{m},\bl{\bar{m}}\}$, in which the vector fields $\bl{\ell}$ and $\bl{n}$ are real, while $\bl{m}$ and $\bl{\bar{m}}$ are complex and conjugate to each other. The only non-vanishing inner products in such a frame are
$\ell^a n_{a} = -1$ and $m^a \bar{m}_{a} = 1$. Using one of the null tetrad frames below, i.e. choosing either the $+$ frame or the $-$ frame,
\begin{widetext}
\begin{align*}
\bl{\ell} &= \partial_\phi \,, \\
\bl{n}_{\pm} &= \pm\frac{e^r\sqrt{2}}{\sqrt{\Lambda}} (1+ \Lambda e^{-r})^{2/3}(3+ \Lambda e^{-r})^{1/2} \, \partial_\theta
+ 2 (1+ \Lambda e^{-r})^{2/3} \partial_t
+ \frac{1}{ \Lambda } (1+ \Lambda e^{-r})^{2/3}[3 e^r + \Lambda(1-2r) ] \, \partial_\phi \,, \\
\bl{m}_{\pm} &= \frac{\sqrt{3}e^{r/2}}{\sqrt{2}} (1+ \Lambda e^{-r}) \, \partial_r
+ i\,\frac{e^{r/2}}{\sqrt{2}} (1+ \Lambda e^{-r})^{1/3} \partial_\theta
\pm i\, \frac{e^{r/2}}{\sqrt{\Lambda}} (3+ \Lambda e^{-r})^{1/2} (1+ \Lambda e^{-r})^{1/3} \, \partial_\phi \,, \\
\bl{\bar{m}}_{\pm} &= \frac{\sqrt{3}e^{r/2}}{\sqrt{2}} (1+ \Lambda e^{-r}) \, \partial_r
- i\,\frac{e^{r/2}}{\sqrt{2}} (1+ \Lambda e^{-r})^{1/3} \partial_\theta
\mp i\, \frac{e^{r/2}}{\sqrt{\Lambda}} (3+ \Lambda e^{-r})^{1/2} (1+ \Lambda e^{-r})^{1/3} \, \partial_\phi \,,
\end{align*}
\end{widetext}
it follows that the only Weyl scalars different from zero are, respectively,
\begin{align*}
\Psi_2 &= \frac{\Lambda}{6} (1+ \Lambda e^{-r}) \, ,\, \textrm{ and} \\
\Psi_3 &=\mp \,i \sqrt{\frac{\Lambda e^r}{4 }} (1+ \Lambda e^{-r})^{4/3} (3+ \Lambda e^{-r})^{1/2}\,.
\end{align*}
The fact that $\Psi_0$, $\Psi_1$, and $\Psi_4$ all vanish in these frames means that $\bl{\ell}=\partial_\phi$ is a repeated principal null direction of the Weyl tensor, while $\bl{n}_{\pm}$ are non-degenerate principal null directions. Moreover, this implies that the Weyl tensor is of Petrov type II. For a review of the Petrov classification, see Ref. \cite{Bat-Book-art2}.
Another important geometric characterization of this spacetime is that the null vector field $\partial_\phi$ is geodesic, shear-free, twist-free, and expansion-free. This means that the above solution is contained in the Kundt class of spacetimes. For a recent review on this class of spacetimes see \cite{ColeyPapadopoulos}.
All the above features of the solution (\ref{metric23}) have been extensively used in order to try to find it in the literature. In particular, a thorough search has been performed by the authors in the books \cite{Stephani,GrifPodol-Book}. In fact, the closest that we could get to finding such a solution in the literature was in chapter 31 of the book by Stephani et al. \cite{Stephani}, where the Kundt class of spacetimes is exhibited. In particular, for solutions of Petrov type II with non-zero cosmological constant, the authors of \cite{Stephani} refer to two papers, \cite{Garcia} and \cite{Khlebnikov}, where special solutions in such a class of spacetimes are found. However, our solution (\ref{metric23}) could not be found there, inasmuch as their solutions contain strictly nonzero electromagnetic fields. In light of this, it seems to the authors of the present paper that the spacetime described by the metric (\ref{metric23}) has not been presented in the literature so far, being a new solution of Einstein's field equations with cosmological constant. Actually, the analysis of the existing literature revealed that there are few known exact vacuum solutions of Petrov type II. In contrast, solutions of Petrov type D are much more abundant. For instance, W. Kinnersley was able to fully integrate Einstein's vacuum equation with vanishing cosmological constant for the entire class of type D spacetimes \cite{typeD}, yielding a plethora of solutions, a particular example being the Kerr metric.
Concerning the regularity of the line element (\ref{metric23}), it seems to be regular over the whole range of the coordinate $r$, except at $r=-\infty$ and at points where the denominator $(1+\Lambda e^{-r})$ vanishes. Computing some curvature scalars we have found the following pattern:
\begin{align}
& R^{a_1b_1}_{\ph{a_1b_1}a_2b_2} R^{a_2b_2}_{\ph{a_2b_2}a_3b_3} \cdots R^{a_nb_n}_{\ph{a_nb_n}a_1b_1} = \nonumber\\
& \quad 4 \sum_{j=0}^{n}\,\binom{n}{j}\, \frac{e^{-jr} }{3^j} \Lambda^{n+j} + \frac{2}{3^n} (-2 \Lambda^2 e^{-r})^n\,, \label{Scalars2}
\end{align}
where $R_{abcd}$ stands for the Riemann tensor.
Note that all these scalars are finite for $r\neq - \infty$. On the other hand, in the limit
$r\rightarrow- \infty$ these scalars diverge exponentially like $e^{n|r|}$. Thus, the point $r=-\infty$ is a singularity of the spacetime, while other points are regular. Likewise, computing the curvature scalar
\begin{equation}\label{DR2}
\nabla^{a}R^{bcde}\nabla_{a}R_{bcde} = \frac{20}{3} \Lambda^4 (1+\Lambda e^{-r})^2 e^{-r}\,,
\end{equation}
we check that there is a divergence just at $r=-\infty$.
Note that when the cosmological constant is negative the denominator $(1+\Lambda e^{-r})$ can vanish, which could indicate the existence of a real singularity at $r= \log(-\Lambda)$, inasmuch as the line element (\ref{metric23}) blows up. However, the fact that the curvature scalars (\ref{Scalars2}) and (\ref{DR2}) are perfectly regular at $r= \log(-\Lambda)$ reveals that this is not the case. In other words, the divergence of the metric components at $r= \log(-\Lambda)$, when $\Lambda < 0$, is just a coordinate singularity.
The asymptotic limit $r\rightarrow \infty$ has a particularly simple structure concerning the curvature scalars. While Eq. (\ref{DR2}) reveals that the square of the derivative of the curvature tensor goes to zero in this limit, the powers of the Riemann tensor given in Eq. (\ref{Scalars2}) go to $4\Lambda^n$ when $r\rightarrow \infty$. Such a simple structure is reminiscent of spaces of constant curvature like (anti-)de Sitter, $(a)dS_4$, which is a four-dimensional Lorentzian space of constant curvature, and (anti-)Nariai, $(a)N_4$, which is a solution of Einstein's equation that is the direct product of two spaces of constant curvature. However, although these two spacetimes have covariantly constant Riemann tensors, so that $\nabla^{a}R^{bcde}\nabla_{a}R_{bcde}=0$, in agreement with the behaviour of Eq. (\ref{DR2}) in the limit $r\rightarrow \infty$, the powers of the Riemann tensor differ from the ones of our spacetime. Instead of $4\Lambda^n$, which is obtained from Eq. (\ref{Scalars2}) in the limit $r\rightarrow \infty$, for these solutions we have
\begin{equation*}
R^{a_1b_1}_{\ph{a_1b_1}a_2b_2} \cdots R^{a_nb_n}_{\ph{a_nb_n}a_1b_1} = \left\{
\begin{array}{ll}
(a)dS_4:\;\; 6 (2\Lambda/3)^n \,, \\
\quad \\
(a)N_4:\;\; 2 (2\Lambda)^n\,.
\end{array}
\right.
\end{equation*}
Thus, we can state that the new solution is neither asymptotically $(a)dS_4$ nor asymptotically $(a)N_4$.
In order to investigate the asymptotic limit of our solution, we shall focus on the block related to $dt$ and $d\phi$ in the line element (\ref{metric23}), namely let us consider
\begin{equation*}
ds^2_{t\phi} \equiv - dt(\,r \,dt + d\phi) \,.
\end{equation*}
Then, performing the coordinate transformation $(t,\phi) \rightarrow (\tilde{t},\tilde{\phi})$, where
\begin{equation*}
\tilde{t} = r\,t \,, \;\; \textrm{and} \;\; \tilde{\phi} = r^{-1}\,\phi \,,
\end{equation*}
it follows that $ds^2_{t\phi}$ becomes:
\begin{equation*}
ds^2_{t\phi} = -d\tilde{t} \, d\tilde{\phi} - \frac{1}{r}\left[ d\tilde{t}^2 + \tilde{\phi}\, d\tilde{t} dr - \tilde{t}\, d\tilde{\phi}dr \right] + O\left( r^{-2} \right),
\end{equation*}
where $O\left(r^{-2}\right)$ denotes terms that fall off as $r^{-2}$, or faster, when $r\rightarrow\infty$.
Thus, in terms of the coordinates $(\tilde{t},\tilde{\phi})$, the asymptotic limit of the block $ds^2_{t\phi}$ becomes
\begin{equation*}
ds^2_{t\phi}|_{r\rightarrow\infty} \simeq -d\tilde{t} \, d\tilde{\phi} \,.
\end{equation*}
Hence, we can say that in the asymptotic limit the solution (\ref{metric23}) converges to
\begin{equation}\label{metric23-Limit_1}
ds^2|_{r\rightarrow\infty} \simeq \frac{ e^{-r} dr^2}{3(1+ \Lambda e^{-r})^2}+
\frac{e^{-r}d\theta^2-d\tilde{t}\,d\tilde{\phi} }{(1+ \Lambda e^{-r} )^{2/3}}\,.
\end{equation}
This limit spacetime turns out to be a particular member of the generalized Kasner class of solutions, as demonstrated in App. \ref{AppendixA}. More precisely, the solution (\ref{metric23-Limit_1}) corresponds to the choice $(p_1,p_2,p_3)=(2/3,2/3,-1/3)$ of the generalized Kasner metric (\ref{KasnerM}). As shown in App. \ref{AppendixA}, this limit spacetime is of Petrov type D and possesses a four-dimensional isometry algebra. Curiously, one can check that the curvature scalars of the line element (\ref{metric23-Limit_1}) are exactly the same as the ones of the solution (\ref{metric23}), namely Eqs. (\ref{Scalars2}) and (\ref{DR2}) are also valid for the solution (\ref{metric23-Limit_1}). This, however, does not imply that these two spacetimes are the same. Indeed, it is well-known that two geometries can have the same curvature scalars and still be different from each other \cite{Coley:2009eb,Olver}. A famous example is given by $pp$-wave spacetimes, which, in spite of having all curvature scalars equal to zero, are not flat. Thus, here we have obtained another example of two different spacetimes with the same curvature scalars.
\section{Conclusions and Perspectives}\label{Sec.Conc}
In this paper we have completely integrated Einstein's vacuum equation with a cosmological constant for a subclass of the most general four-dimensional metric containing two commuting Killing vector fields and a non-trivial Killing tensor of rank two. As we have seen, most of the solutions thus found have already been described in the literature. Among them, we have obtained flat space, spaces of constant curvature, and a generalization of the Kasner metric to the case of non-zero cosmological constant. Nevertheless, we have also obtained a solution that, as far as we know, has never been described in the literature before, see Eq. (\ref{metric23}). In order to arrive at this conclusion some features of this solution were investigated, such as its isometry group, its Petrov type, and the optical scalars related to the null Killing vector field of this solution. More precisely, we have obtained that the isometry algebra of this solution is three-dimensional and abelian, which means that it is of Bianchi type I, its Weyl tensor is of Petrov type II, and the solution is contained in the Kundt class of spacetimes. Then, we searched the literature for pre-existing vacuum solutions having the same features, but no match occurred. Finally, we have proved that in the asymptotic limit $r\rightarrow\infty$ this solution approaches a member of the class of generalized Kasner spacetimes which has the same curvature scalars.
We hope that this new solution, along with the characterization given in this paper, could give rise to applications within the framework of gravitation, cosmology, and beyond. The analysis of the physical properties of the solution (\ref{metric23}) can give a hint on the range of its applicability. Therefore, in a future work we intend to investigate the physics of such an exact solution.
\begin{acknowledgments}
C. B. would like to thank Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq) for the partial financial support through the research productivity fellowship. Likewise, C. B. thanks Universidade Federal de Pernambuco for the funding through Qualis A project. G. L. A. thanks CNPq for the financial support.
\end{acknowledgments}
\section*{Abstract}
The task of the brain is to look for structure in the external input. We study a
network of integrate-and-fire neurons with several types of recurrent connections that learns the
structure of its time-varying feedforward input by attempting to efficiently represent this input with
spikes. The efficiency of the representation arises from incorporating the structure of the input into
the decoder, which is implicit in the learned synaptic connectivity of the network. While in the
original work of [Boerlin, Machens, Den{\`e}ve 2013] and [Brendel et al., 2017] the structure learned
by the network to make the representation efficient was the low-dimensionality of the feedforward
input, in the present work it is its temporal dynamics. The network achieves the efficiency by
adjusting its synaptic weights in such a way that, for any neuron in the network, the recurrent
input cancels the feedforward one for most of the time. We show that if the only temporal structure that the
input possesses is that it changes slowly on the time scale of neuronal integration, the dimensionality of the network
dynamics is equal to the dimensionality of the input. However, if the input follows
a linear differential equation of the first order, the efficiency of the representation can be increased by increasing
the dimensionality of the network dynamics in comparison to the dimensionality of the input.
If there is only one type of slow synaptic current in the network, the increase is two-fold, while if there are
two types of slow synaptic currents that decay with different rates and whose amplitudes can be adjusted separately, it
is advantageous to make the increase three-fold. We numerically simulate the network with synaptic weights that imply the most efficient input representation in the above cases. We also propose a learning rule by means of which the corresponding synaptic weights can be learned.
\section*{Introduction}
In this work we generalize and give a new interpretation of the neural encoding scheme suggested in \cite{Boerlin2013} and further developed in \cite{LongSI},\cite{Bourdoukan2015},\cite{Alemi2017}, which considers neural networks of integrate-and-fire neurons that perform one of two tasks. The first task is encoding the feedforward input by the time sequence of the spikes of the network, and the second one is encoding the solution of a particular differential equation with the source term given by the feedforward input. In the present work we focus on the first task, which we refer to as autoencoding. The framework in question provides an account for the trial-to-trial variability ubiquitously observed in neural systems, which is an alternative to modeling the neurons as Poisson spike generators. This framework also achieves an advantageous scaling of the accuracy of the representation with the number of neurons, in comparison to Poisson units.
Even though these are among the strongest points of the discussed framework, we don't consider them to be the only ones.
We propose that the model suggests a more general perspective on the operation of the neural system that spans different levels of organization. To find such a perspective means to understand in a unified way not only how the nervous system solves particular computational tasks, but also how these tasks arise, how it "knows" that these computations should be performed.
We suggest that studying an efficient way to encode a structured input in a neural network with the time sequence of the spikes of its neurons is a good place to start in searching for this unifying perspective. The reason is that in order to be efficient, the network should be able to extract the existing structure from the input and represent only the information that can not be predicted based on knowing this structure. To clarify, we use the term "structure" in the most general sense, to mean any form of predictability in the input.
In the current work we will demonstrate, based on \cite{Boerlin2013} and \cite{LongSI}, that extracting the existing structure of the input is equivalent to
adjusting the synaptic efficacies in such a way as to keep the membrane potentials of all the neurons close to the rest value whenever possible, and adjusting either neuronal gains or firing thresholds to keep the average activity of all the neurons the same. This relates an objective of information encoding, which is a task assigned to the network by a researcher, to something that is formulated in purely network terms without appealing to any externally imposed task.
This observation is in line with seeing the goal to maintain homeostasis as the main principle of functioning of the organism at various levels of organization. The attractiveness of this hypothesis is that it seems to be plausible in the context of evolution, offering a scenario of gradual complexification of life by adding new links connecting different components, such as different parts of the organism's body, different brain areas, or connecting the organism to different aspects of the environment by means of new sensory organs or modification of the old ones. With more links added, the task of maintaining homeostasis whenever possible requires discovering more and more complex structures in the environment, so that the corresponding homeostatic changes in the organism can be predicted and compensated for to keep the homeostatic state. Very abstractly put, in this framework the organism is trying to autoencode its own response to a complex environment, and all other computational tasks can be seen as arising as "obstacles" on the way.
The model presented in this paper is formulated on the network level assuming the simplest dynamics for an individual unit: the integrate-and-fire neuron (IF neuron). We point out, however, that this dynamics can itself be viewed as a realization of the same principle that we propose for the operation of the network (see section \ref{sec:Balance}). The network is receiving a time-varying feed-forward input and its task is to represent this input in real time with as few spikes as possible, so that it can be easily decoded from the network activity. What makes the representation efficient, in the sense of using fewer spikes, is incorporating the existing structure of the input, such as low dimensionality \cite{Boerlin2013} or temporal predictability (sections \ref{sec:SB}, \ref{sec:TS}), into the network.
The central principle that allows to build a network corresponding to a particular input structure is the principle of feedforward-recurrent balance (section \ref{sec:Balance}). In a nutshell, it can be formulated as adjusting the recurrent connections in the network in such a way as to make the recurrent contribution to the neuronal voltages cancel the feedforward contribution, so that the voltages are maintained close to the rest values whenever possible. As the recurrent contribution can be unambiguously recovered from the network activity by an external decoder, the feedforward contribution can also be decoded. This observation implies that maintaining the balance of the voltage is equivalent to faithfully encoding the feedforward input. The efficiency of the representation is improved when the structure of the feedforward input can be reproduced by the recurrent, as this improves the feedforward-recurrent balance and delays the next population spike.
The flexibility in the temporal profile of the synaptic currents determines what kind of structure present in the feedforward input can be incorporated into the network. The network with only fast connections, described in \cite{Boerlin2013} and in the section \ref{sec:fastOnly}, can incorporate the low-dimensional but not the temporal structure of the input. A network with a slow synaptic current whose temporal profile is fixed (see section \ref{sec:SB}) is able to match the feedforward current at the time of the spike, if in addition to being low dimensional the feedforward input changes slowly in time. This improves the balance and makes the representation more efficient. The network with variable temporal profile of the synaptic current, which we model by introducing two types of slow currents that decay with different time constants and whose relative strengths can be adjusted locally (see section \ref{sec:DE2}), can match not only the input, but also its first time derivative at the time of the spike, which further increases the efficiency of the representation. This is only possible, however, if the input possesses the temporal structure of following a homogeneous first order differential equation, which implies that its evolution can be predicted from the current value. Note that when the feedforward input stops obeying the differential equation, it is still represented faithfully by the network's spikes, only this representation is less efficient, which is immediately reflected in the increase of the firing rate of the network.
Our model can also be seen as a network realization of the predictive coding framework \cite{Spratling2017review}, where the expectation about the feedforward input is implemented in the value or temporal evolution of the recurrent input and is updated at every spike according to the structure of the network connectivity.
Although we do not investigate in detail in this work how this connectivity structure can be learned, we do propose in the Discussion (section \ref{sec:Dis}) an unsupervised learning rule, inspired by \cite{LongSI}. This learning rule is local and follows from the aim of keeping the postsynaptic voltage at zero after a presynaptic spike. We hypothesize that this learning rule will lead to the network incorporating the structure present in the input most of the time. We also discuss briefly a possible generalization of the presented framework, in which the recurrent input that implements the prediction is generated not in the synapses but in a separate network, that we call a simulator network. The advantage of the two-subnetwork system is that the possible predictions that can be realized are not constrained by the synaptic dynamics and can, for example, be on a much longer time scale. Learning in such a system can happen at two levels. At the first level, the connections between the two subnetworks are tuned in such a way as to initialize the fixed dynamics of the simulator to best match the prediction about the feedforward input. This is conceptually similar to learning of the synaptic connections with the aim of maintaining the feedforward-recurrent balance in a single network. When a particular temporal pattern of the feedforward input persists, or is considered important for other reasons, the connections within the simulator network can be changed, so that the dynamics of the simulator matches the input dynamics \cite{LearningSlowConnections}. This potentially can be a model of learning transfer between different brain areas.
In the light of the spike-coding versus rate-coding debate, our framework takes an intermediate stand. From a purely decoding point of view, it can be interpreted as a rate code, since the estimate of the encoded variable is a convolution of the spike train with a smooth kernel (or a sum of such convolutions). Changing the timing of a particular spike gradually degrades the decoding performance. From the point of view of network dynamics, however, the model is not a rate model: it is deterministic and the time of every spike can be predicted exactly, given the feedforward input into the network. Moreover, every spike is interpretable, but instead of neurons being selective to a particular feature of the input, in our framework the identity of the neuron that has just spiked determines unambiguously the current representation error. In the case of the network that has learned a temporal structure of the input, neurons are selective to a combination of the current representation error and the expected future error, namely the difference between the future evolution of the recurrent input and the prediction about the evolution of the feedforward one. The trial-to-trial variability in our model arises from the network being extremely sensitive to the smallest variations in the input.
We also point out that the function of the fast connections in the model network presented here (section \ref{sec:fastOnly}) is the same as in the original work \cite{Boerlin2013}, while the slow connections described in the sections \ref{sec:SB} and \ref{sec:TS} are conceptually different from the slow connections of \cite{Boerlin2013}. In the model of \cite{Boerlin2013}, the slow connections are introduced in order to make the network produce the output which is a solution of a differential equation with the source term given by the feedforward input. In the present work, on the other hand, the feedforward input itself is following a differential equation, and the job of the slow connections is to incorporate this temporal structure into the network to make it more efficient in encoding the input.
\section{Network setup}\label{sec:1}
Our network consists of $N$ leaky integrate-and-fire neurons that receive a time-varying feedforward input $I^{ext}$ and are coupled by means of several types of synaptic currents $h^{(a)}$ with corresponding connectivity matrices $\Omega^{(a)}$. By discriminating between different types of synaptic currents, we imply different
dynamics of these currents after a presynaptic spike. A larger variety of synaptic currents makes the network more flexible and better at discovering a temporal structure in its feedforward input, as will become clear later.
The membrane voltages $V_i$ of the network are governed by the following dynamics:
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + I^{ext}_i(t) + \sum_{a}\sum_{j = 1}^N\Omega^{(a)}_{ij}h^{(a)}_j(t) \hspace{1cm} \text{for } i = 1\dots N\label{eq:IF} \\
&V_i(t) = T_i\hspace{1cm} \text{neuron }i \text{ spikes and } V_i\rightarrow 0 \notag
\end{align}
The reset mechanism of the integrate-and-fire neuron can be incorporated into equation (\ref{eq:IF}) by adding a delta-function contribution to the input current of a neuron at the moment of its spike:
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + I^{ext}_i(t) + \sum_{a}\sum_{j = 1}^N\Omega^{(a)}_{ij}h^{(a)}_j(t) - T_i \sum_{s = 1}^{N_{i}}\delta(t-t_s^i)\hspace{1cm}\notag \\
&\text{where } t_s^i \text{ are such that}\\
&V_i(t_s^i) = T_i\hspace{1cm}
\label{eq:IF1}
\end{align}
Here $N_i$ is the total number of spikes of the neuron $i$, $t_s^i$ is the time of the spike $s$ of neuron $i$, and $\delta(t-t_s^i)$ is the Dirac delta-function.
Assuming the spike times are known, this equation can be solved, leading to:
\begin{align}
&V_i(t) = \hat I_i^{ext}(t) + \sum_a\sum_{j = 1}^N\Omega^{(a)}_{ij}\hat h^{(a)}_j(t) - T_i r_i(t)\label{eq:IFI}\\
&V_i(t_s^i) = T_i\hspace{1cm} \text{: spike of neuron }i \notag
\end{align}
where
\begin{align}
&\hat I_i^{ext}(t) = \int_{-\infty}^t I_i^{ext}(t')\text{e}^{-\lambda(t-t')}dt'\label{eq:hatI}\\
&\hat h^{(a)}_i(t) = \int_{-\infty}^t h^{(a)}_i(t')\text{e}^{-\lambda(t-t')}dt'\label{eq:hath}\\
&r_i(t) = \int_{-\infty}^t \sum_{s = 1}^{N_i}\delta(t'-t_s^i)\text{e}^{-\lambda(t-t')}dt'= \sum_{s = 1}^{N_i}\text{e}^{-\lambda(t-t_s^i)}\label{eq:hatr}
\end{align}
are the corresponding inputs filtered with the exponential decay kernel $\text{e}^{-\lambda t}$.
We will always assume that the typical time scale of the evolution of the external input, $|I_i^{ext}/\dot{I}_i^{ext}|$, is slow in comparison to the membrane time scale $\frac{1}{\lambda}$. In this case, $\hat I^{ext}_i(t)$ is approximately proportional to $I^{ext}_i(t)$ with the rescaling factor $\frac{1}{\lambda}$:
\begin{equation}
\left|\frac{\dot{I}_i^{ext}}{I_i^{ext}}\right|\ll \lambda \hspace{0.5cm}\implies\hspace{0.5cm} \hat I_i^{ext} \approx \frac{1}{\lambda}I_i^{ext}
\label{eq:Iscale}.
\end{equation}
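As a simple numerical illustration of this approximation (a sketch we add for concreteness, with arbitrarily chosen parameter values), one can integrate the filter (\ref{eq:hatI}) for a slow sinusoidal input and compare the result with $I^{ext}/\lambda$:
\begin{verbatim}
import numpy as np

lam, dt = 10.0, 1e-3            # leak rate and Euler step (assumed values)
t = np.arange(0.0, 20.0, dt)
I = np.sin(2*np.pi*0.05*t)      # slow input: |dI/dt|/|I| ~ 0.3 << lam

Ihat = np.zeros_like(I)         # leaky integral, Eq. (eq:hatI)
for k in range(1, len(t)):
    Ihat[k] = Ihat[k-1] + dt*(-lam*Ihat[k-1] + I[k-1])

# After the initial transient, Ihat tracks I/lam up to O(|dI/dt|/lam^2)
print(np.max(np.abs(Ihat[len(t)//2:] - I[len(t)//2:]/lam)))
\end{verbatim}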
We also assume that the feedforward input is large, $\hat I^{ext}_i\gg T_i$, so that the neurons are in the input-driven regime and in the absence of the recurrent contribution would spike regularly with a high rate.
\section{Feedforward-recurrent balance and efficiency} \label{sec:Balance}
In this work we assume that the task of the network is to represent its time-varying feedforward input with its spikes.
Following \cite{Boerlin2013} we define representation efficiency as the number of spikes that the network spends to represent its input given an upper bound on the representation error. The central idea of the present framework, introduced in \cite{Boerlin2013} and further clarified in \cite{LongSI}, is that balancing the feedforward input by the recurrent contribution, which keeps the neuronal voltages close to zero
\begin{equation}
V_i(t)\approx 0 \hspace{1cm} \text{for } i = 1\dots N,
\end{equation}
is equivalent to the efficiency of input representation.
Indeed, if the time-varying feedforward input is approximately balanced by the recurrent, the recurrent input reconstructed from the network's spikes and multiplied by $-1$ gives an online estimate of the feedforward input. To guarantee an upper bound on the representation error, we need to guarantee the precision of the balance. In other words, as long as the balance is violated beyond the tolerance margin, the network should spike to correct the recurrent input, restoring the balance and updating the estimate.
As integrate-and-fire neurons, considered here, spike based on their voltages, not their inputs, the upper bound on the error can only be set for representing the filtered version of the input $\hat I_i^{ext}(t)$, not the input itself (see (\ref{eq:IFI})). However, if the input is assumed to be slow, $\hat I_i^{ext}(t)$ and $I_i^{ext}(t)$ differ only by a scale (see (\ref{eq:hatI}) and (\ref{eq:Iscale})). We will loosely refer to the representation of the leaky integral of the input $\hat I_i^{ext}(t)$ by the network as input representation, keeping in mind that input fluctuations on the time scale smaller than $\frac{1}{\lambda}$ can not be represented by the suggested scheme.
The reset mechanism of an isolated integrate-and-fire neuron by itself can be seen as enforcing the feedforward-recurrent balance if the self-inhibition after a spike is considered as a recurrent delta-function input (see \cite{LongSI}). When the voltage of the neuron reaches the threshold, a spike is fired conveying this information to the decoder. For a single neuron, the analogue of the equation (\ref{eq:IFI}) is
$$
V(t) = \hat I^\text{ext}(t) - Tr(t)
$$
where $r(t)$ is the spike train of the neuron filtered with the exponential kernel, as given in (\ref{eq:hatr}). We interpret $Tr(t)$ as the estimate of $\hat I^\text{ext}(t)$, which makes the voltage $V(t)$ play the role of the representational error. As $V(t)$ is bounded by the firing threshold from above, but not from below, the representational error can be controlled only if $I^\text{ext}(t)$ is constrained to be positive (in which case $\hat I^\text{ext}(t) - Tr(t)$ can not become negative), so that the error can not exceed the threshold $T$. What follows can be seen as an extension of this encoding scheme to the case of multi-dimensional input, with different neurons representing different directions in the space of inputs.
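To make this explicit, the following minimal numerical sketch of the single-neuron encoder (our illustration, with arbitrarily chosen parameter values) confirms that, for a positive and slowly varying input, the decoding error $|\hat I^{ext}(t) - Tr(t)|$ never exceeds the threshold $T$:
\begin{verbatim}
import numpy as np

lam, T, dt = 10.0, 1.0, 1e-4     # leak, threshold, Euler step (assumed)
steps = 50000
t = np.arange(steps) * dt
I = lam * (1.5 + np.sin(2*np.pi*0.5*t))   # positive, slowly varying input

V = r = Ihat = max_err = 0.0
for k in range(steps):
    V += dt * (-lam*V + I[k])
    if V >= T:                   # threshold crossing
        V -= T                   # reset = recurrent delta self-inhibition
        r += 1.0                 # filtered spike train jumps by 1
    r -= dt * lam * r            # exponential decay of r(t)
    Ihat += dt * (-lam*Ihat + I[k])          # leaky integral of the input
    max_err = max(max_err, abs(Ihat - T*r))  # |V| = |Ihat - T r|

print('max |Ihat - T r| =', max_err, '<= T =', T)
\end{verbatim}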
Coming back to the network of $N$ neurons (\ref{eq:IFI}) and assuming no recurrent connections ($\Omega_{ij} = 0$), we see that the filtered input into the neuron $i$, $\hat I^\text{ext}_i(t)$, is represented by the spikes of this neuron as $\hat I^\text{ext}_i(t)\approx T_ir_i(t)$ if $I^\text{ext}_i(t)$ is always positive. However, when the input has a low-dimensional structure, namely the vector $\boldsymbol{I^\text{ext}}(t)$ changes in a $J$-dimensional subspace of the $N$-dimensional space and $J\ll N$, introducing connections between neurons ($\Omega_{ij}\neq 0$) can make the representation much more efficient. Namely, the total number of population spikes required for the same upper bound on the representational error can be decreased by a factor of $\frac{\sqrt{2}}{N}$ in the limit $N\rightarrow\infty$, $J = \text{const}$. Also, the constraint on $I_i^\text{ext}(t)$ being positive can be lifted.
In the next section we will show that for strictly low-dimensional input, a spike of one neuron in the network can bring not only its own voltage, but also the voltages of all other neurons to zero, thus delaying the next population spike. In order to achieve this, the neurons should interact by means of synaptic currents that are large, but short-lasting on the time scale of neuronal integration. We will describe this mechanism in detail and derive the required connectivity matrix, following \cite{Boerlin2013}. If the input is approximately low-dimensional, the proposed connectivity structure will still reduce the number of spikes to the extent dependent on the ratio of the low-dimensional and the full-dimensional parts of the input.
If, in addition to low-dimensionality of the input, the future input can be predicted based on the current state of the network, the voltages of all the neurons can be kept close to zero for some period after a population spike. In this case, the next spike will be delayed even further. We will discuss this scenario in the following sections.
In summary, the spatial (low dimensionality) or temporal structure of the feedforward input can be incorporated into the network connectivity to
enhance the balance, which decreases the number of spikes fired by the network, without sacrificing the accuracy with which the feedforward input can be reconstructed from the network's spikes.
\section{Neuron-to-neuron predictability, fast connections}\label{sec:fastOnly}
In this section we describe how a network of integrate-and-fire neurons can exploit the low-dimensional structure of its input to represent it more efficiently, compared to population of non-connected neurons. This network is a slight modification of the autoencoder of \cite{Boerlin2013} and \cite{LongSI}, looked at from a somewhat different perspective.
We assume that the feedforward input into the network $\boldsymbol{I^{ext}}(t)$ varies in a subspace spanned by the columns of a non-degenerate $N\times J$-dimensional matrix $\boldsymbol{F}$. This is equivalent to saying that there are $J<N$ time-varying signals $c_\alpha(t),\hspace{0.1cm}\alpha = 1\dots J$
and
\begin{equation}
I_i^{ext}(t) = \sum_{\alpha=1}^JF_{i\alpha}c_\alpha(t)\hspace{1cm} i = 1\dots N
\label{eq:Ilow}
\end{equation}
We refer to the $i$-th row of the matrix $\boldsymbol{F}$ as the vector of feedforward weights of the neuron $i$. The feedforward input into a neuron $i$ is then given by an inner product of two $J$-dimensional vectors: $\boldsymbol{c}(t)$ and $\mathbf{F_i}$.
We scale the feedforward input in such a way that the Euclidean norm of $\boldsymbol{c}(t)$ is of the order of the neuronal integration constant $\lambda$,
$$
|\boldsymbol{c}(t)|=O(\lambda).
$$
This implies that the leaky integral $\boldsymbol{\hat c}(t)$ of the input, which is given by
\begin{equation}
\boldsymbol{\hat c}(t)= \int_{-\infty}^t \boldsymbol{c}(t')\text{e}^{-\lambda(t-t')}dt'
\label{eq:chat}
\end{equation}
and is the variable represented by the network, is of order one:
$$
|\boldsymbol{\hat c}(t)| = O(1)
$$
At this point we don't impose any additional assumptions on the input $\boldsymbol{c}(t)$.
In this section we consider the network with only one type of synaptic current, namely \emph{fast current} $\boldsymbol{h^\text{f}}$ which we assume to be short-lasting on the scale of neuronal time constant $\frac{1}{\lambda}$.
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + \sum_{\alpha=1}^JF_{i\alpha}c_\alpha(t) + \sum_{j = 1}^N\Omega^{\text{f}}_{ij}h^{\text{f}}_j(t) - T_i \sum_{s = 1}^{N_{i}}\delta(t-t_s^i)\hspace{1cm}\notag \\
&V_i(t_s^i) = T_i\hspace{1cm} \text{: spike of neuron }i
\label{eq:IF30}
\end{align}
The postsynaptic current of neuron $i$, $h^\text{f}_i(t)$, is the sum of synaptic kernels $o(t-t_s^i)$ placed at the times of the neuron's spikes
\begin{equation}
h^\text{f}_i(t) = \sum_{s = 1}^{N_{i}}o(t - t_s^i).
\label{eq:ho}
\end{equation}
We assume that the synaptic kernel $o(t-t_s^i)$ has support $\Delta t$ and is normalized as
\begin{equation}
\int_0^{\Delta t} o(t')\text{e}^{-\lambda(\Delta t - t')}dt' = 1
\end{equation}
(the normalization factor can be absorbed in the connectivity matrix).
We also assume that $\Delta t$ is small when compared to the neuronal time constant $1/\lambda$
$$
\Delta t\ll\frac{1}{\lambda}
$$
which together with $|\boldsymbol{c}(t)|=O(\lambda)$ implies that the synaptic kernel $o(t-t_s)$ can be substituted by a Dirac delta function:
\begin{equation}
h_i^\text{f}(t) = \sum_{s = 1}^{N_{i}}\delta(t - t_s^i)
\label{eq:hf}
\end{equation}
Then, the equation (\ref{eq:IF30}) becomes
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + \sum_{\alpha = 1}^JF_{i\alpha}c_\alpha(t) + \sum_{j = 1}^N\Omega^\text{f}_{ij}\sum_{s = 1}^{N_j}\delta(t-t_s^j)
\label{eq:IF3}\\
&V_i(t_s^i) = T_i\hspace{1cm} \text{: spike of neuron }i \notag
\end{align}
where we have absorbed the last term in (\ref{eq:IF30}) into the recurrent interaction by redefining the diagonal entries of $\Omega^\text{f}$.
We can now integrate (\ref{eq:IF3}) to get
\begin{align}
&V_i(t) = \sum_{\alpha = 1}^JF_{i\alpha}\hat c_\alpha(t) + \sum_{j = 1}^N\Omega^\text{f}_{ij}r_j(t)\label{eq:IF2}\\
&V_i = T_i\hspace{1cm} r_i\rightarrow r_i +1\notag
\end{align}
where
\begin{equation}
\hat c_\alpha(t) = \int_{-\infty}^tc_\alpha(t')\text{e}^{-\lambda(t-t')}dt',
\end{equation}
and $r_i$ is given by (\ref{eq:hatr}).
Following the idea described in the previous section, we impose the feedforward-recurrent balance by choosing the connectivity matrix $\Omega^\text{f}$ in such a way that all the voltages are close to zero for as much time as possible. Since the synaptic currents are short-lasting, the best the network can do is to bring all the voltages to zero after every population spike and let them evolve, driven by the feedforward input, between the spikes. We note that the proposed scheme only implements the greedy minimization of $\boldsymbol{V}^2(t)$ and is not an optimal solution to minimizing $\int\boldsymbol{V}^2(t)dt$.
The network's ability to bring the voltages of all the neurons to zero after a spike of a particular neuron is surprising when the dimensionality $J$ of the input is greater than one. Indeed, for such a reset to be possible, the fact that one of the neurons has reached its threshold must fully determine the voltages of all other neurons at that time. In other words, the voltage of a neuron $j$ should be the same every time a given neuron $i$ spikes. We will now demonstrate that this is indeed the case under certain conditions.
We assume that the fast connectivity matrix $\Omega^\text{f}$ is of the form
\begin{equation}
\Omega^\text{f}_{ij} = - \sum_{\alpha = 1}^JF_{i\alpha}D^\text{f}_{j \alpha}\label{eq:flr}
\end{equation}
Namely, it has a low-rank structure with the left singular vectors spanning the same subspace as the columns of the feedforward matrix $\boldsymbol{F}$ ($\text{Im}\boldsymbol{\Omega^\text{f}} = \text{Im}\boldsymbol{F}$). We call this subspace the \emph{$F$-space}.
In this case, the vector of voltages of all the neurons is given by (see (\ref{eq:IF2}))
\begin{equation}
V_i(t) = \sum_{\alpha = 1}^JF_{i\alpha}\hat d_\alpha(t)
\label{eq:Vd}
\end{equation}
where we have introduced the $J$-dimensional vector
\begin{equation}
\hat d_\alpha(t) = \hat c_\alpha(t) - \sum_{j = 1}^N D^\text{f}_{j \alpha}r_j(t)
\label{eq:dhat}
\end{equation}
We call $\boldsymbol{\hat d}(t)$ the \emph{state vector}, as it determines the internal state of the network unambiguously through (\ref{eq:Vd}), and we call the $J$-dimensional space of all possible values of $\boldsymbol{\hat d}$ the \emph{state space} of the network.
The time evolution of the state vector is given by
\begin{equation}
\dot{\hat d}_{\alpha}(t) = -\lambda\hat d_\alpha(t) + c_\alpha(t) - \sum_{j = 1}^ND^\text{f}_{j \alpha}\sum_{s = 1}^{N_j}\delta(t - t_s^j)\label{eq:evdhat}
\end{equation}
which is just the equation of time evolution of the vector of voltages (\ref{eq:IF3}) multiplied by the pseudoinverse of the matrix $\boldsymbol{F}$.
A given neuron $i$ spikes when its voltage reaches the threshold $T_i$,
\begin{equation}
V_i = \sum_{\alpha = 1}^JF_{i\alpha}\hat d_{\alpha} = T_i\label{eq:ispikes}
\end{equation}
This condition describes a $(J-1)$-dimensional hyperplane in the $J$-dimensional state space. We refer to this hyperplane as the \emph{firing plane} of neuron $i$. The orientation of the firing plane $i$ is determined by the vector $\boldsymbol{F_i}$ of feedforward weights of the neuron, while the Euclidean distance from the origin $\boldsymbol{\hat d} = \boldsymbol{0}$ to the firing plane is given by $\frac{T_i}{|\mathbf{F_i}|}$, with $|\mathbf{F_i}|$ denoting the Euclidean norm of the vector $\mathbf{F_i}$ (see figure \ref{fig:box}a). We choose the firing thresholds $T_i$ in such a way that this distance is the same for all neurons in the network and is equal to $\omega$, which, as we will see, determines the accuracy of the input representation:
\begin{equation}
T_i = \omega |\mathbf{F_i}|\hspace{1cm}\text{for } i = 1\dots N
\label{eq:sph}
\end{equation}
When the state vector $\mathbf{\hat d}(t)$ reaches one of the firing planes, the corresponding neuron $i$ fires a spike, increasing $r_i$ by 1 (see (\ref{eq:IF2})) and changing $\mathbf{\hat d}(t)$ by $-\mathbf{D^\text{f}_i}$, where $\mathbf{D^\text{f}_i}$ is the corresponding column of the matrix $\mathbf{D^\text{f}}$ (see (\ref{eq:dhat})). This change happens in a step-wise manner because we have approximated large and short-lasting synaptic currents by $\delta$-functions.
We call the intersection of the half-spaces
$$
\boldsymbol{F_i}\boldsymbol{\hat d} \leq T_i\hspace{0.5cm} i = 1\dots N
$$
\emph{state domain} ${\cal{D}}$. This is the region of the state space where none of the neurons in the network fires. We refer to the boundary of this region ${\cal{D}}$ as \emph{firing surface}.
The state domain can be either a convex polyhedron, as in figure \ref{fig:box}a, or have an open end, as in figure \ref{fig:box}b. The latter case is equivalent to the input subspace overlapping with the sector $V_i < 0,\quad i = 1\dots N$, so that for some directions of $\boldsymbol{c}$ all the neurons receive hyperpolarizing input and no neuron spikes even if the norm of $\boldsymbol{c}$ is large (figure \ref{fig:box}c).
When the angles between any pair of neighboring faces of the firing surface are greater than or equal to 90 degrees, which we will always assume, it is possible to choose the matrix $\boldsymbol{D^\text{f}}$ in such a way that the state vector $\boldsymbol{\hat d}(t)$ never leaves the state domain. So, the dynamics of the network is described by the vector $\boldsymbol{\hat d}(t)$ evolving according to
$\boldsymbol{\dot{\hat{d}}} = -\lambda\boldsymbol{\hat d}(t) + \boldsymbol c(t)$ inside the state domain and reflecting from the firing surface.
The picture of a vector bouncing within a polyhedron was introduced in \cite{NatureNeuroscience} for illustrative purposes and used in \cite{Nuno} to investigate the stability of the efficient balanced network. We point out that in these studies the vector that is confined to stay within the polyhedron is the error of the representation of the feedforward input by the spikes of the network, while in the current work it is the state vector $\mathbf{\hat d}(t)$. The two are mathematically equivalent; however, the formulation of \cite{NatureNeuroscience} and \cite{Nuno} implies a representational framework, while the current derivation is purely dynamical.
In general, a spike of a particular neuron conveys information only about the projection of the state vector $\mathbf{\hat d}(t)$ onto the vector of its input weights (\ref{eq:ispikes}). In other words, the spike means that the instantaneous value of $\mathbf{\hat d}(t)$ is somewhere on the firing plane of the spiking neuron, which is not enough to specify the voltages of other neurons in the network. However, not every point on a given firing plane can be reached from the origin $\mathbf{\hat d} = \mathbf{0}$ without crossing a firing plane of another neuron in the network. We refer to the segment of the firing plane of neuron $i$ that can be reached from the origin as the \emph{firing face} of neuron $i$ (see figure \ref{fig:box}). If the firing face is small, the neuron will spike only when the state vector $\mathbf{\hat d}(t)$ is close to a particular value, which implies a one-to-one correspondence between the identity of the spiking neuron and the instantaneous value of the state vector.
The firing faces of all neurons will be small if the number of neurons $N$ is much larger than the dimensionality of the state space $J$, and the weight vectors $\boldsymbol F_i$ sweep all the directions approximately uniformly. In the neuronal space this condition corresponds to the fact that any direction of the input subspace ($\text{Im}\boldsymbol{F}$) has positive component along many neuronal axes, so that a small change in this direction will change the identity of the neuronal axis along which the component is largest.
\begin{figure}
\includegraphics[width = 13cm]{figure1.pdf}
\centering
\caption{\textbf{a.} Firing surface for a network of $N = 11$ neurons. \textbf{b.} Open box \textbf{c.} The case of the open box in neuronal space. }
\label{fig:box}
\end{figure}
The spike of a neuron $i$ at time $t_s^i$ implies that the instantaneous value of the state vector $\boldsymbol{\hat d}(t_s^i)$ is somewhere on the firing face $i$, which can be written as
\begin{equation}
\mathbf{\hat d}(t_s^i) = \mathbf{\hat d^{(i)}} + \mathbf{\Delta_s^i},
\end{equation}
where we have introduced the \emph{preferred vector} of the neuron $i$, $\mathbf{\hat d^{(i)}}$, which is the normal drawn from the origin onto the firing plane $i$ (see figure \ref{fig:box}):
\begin{equation}
\mathbf{\hat d^{(i)}} = \omega\frac{\mathbf{F_i}}{|\mathbf{F_i}|} = \frac{T_i}{\mathbf{F_i}^T\mathbf{F_i}}\mathbf{F_i}
\label{eq:di}
\end{equation}
The preferred vector of the neuron is the vector of principal components of the collective voltage of the network $\boldsymbol{V}$ to which the neuron is selective.
The difference $\boldsymbol{\Delta_s^i}$ between the value of the state vector at the moment of a spike and the preferred vector of the spiking neuron is bounded by the diameter of the firing face $i$. The information about the value of $\boldsymbol{\Delta_s^i}$ is not conveyed to the decoder.
We introduce a bound $\Delta$ on the size of the firing faces of all the neurons in the network. To define $\Delta$ we first define the radius of a firing face $i$ as the distance from the base of the normal $\mathbf{\hat d^{(i)}}$ to the furthest point on the firing face. Then $\Delta$ is the maximum of these radii over all firing faces, so that
\begin{equation}
|\mathbf{\Delta_s^i}|\leq \Delta \hspace{0.5cm}\forall i,s\label{eq:Delta}
\end{equation}
For fixed input dimensionality $J$, if the directions of the feedforward weights $\boldsymbol{F_i}$ are chosen to cover the sphere in the $J$-dimensional space uniformly, $\Delta \propto \frac{1}{N}$.
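For $J = 2$ this scaling is easy to check directly: the firing surface is a polygon whose sides are tangent to the circle of radius $\omega$, and the radius of a firing face equals $\omega\tan(g/2)$, where $g$ is the angular gap to the neighboring tangent line. A short Python sketch (the uniform spacing of the directions is an assumption of the illustration):
\begin{verbatim}
import numpy as np

omega = 1.0
for N in (10, 100, 1000):
    g = 2*np.pi/N                 # angular gap between neighboring F_i
    Delta = omega*np.tan(g/2.0)   # radius of a firing face
    print(N, Delta, Delta*N)      # Delta*N -> omega*pi, so Delta ~ 1/N
\end{verbatim}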
Since a spike of a neuron $i$ changes the state vector $\boldsymbol{\hat d}(t)$ by $-\mathbf{D^\text{f}_i}$,
in the infinitely large network, the state vector $\boldsymbol{\hat d}(t)$ can be brought to zero after every spike by defining the $i$-th column of the matrix $\boldsymbol{D^\text{f}}$ as
\begin{equation}
\mathbf{D^\text{f}_i} = \mathbf{\hat d^{(i)}} = \frac{T_i}{|\mathbf{F_i}|^2}\mathbf{F_i} = \omega\frac{\mathbf{F_i}}{|\mathbf{F_i}|}
\label{eq:fastD}
\end{equation}
This implies that the vector of neuronal voltages $\boldsymbol{V}(t)$ is also zero after every population spike:
\begin{align}
\boldsymbol{\hat d}(t_s^{i+}) = 0\notag\\
\boldsymbol{V}(t_s^{i+}) = 0\notag
\end{align}
In the network with a finite number of neurons, and thus a finite $\Delta$, the state vector and the vector of neuronal voltages are only guaranteed to be close to zero after every spike, more precisely
\begin{align}
|\boldsymbol{\hat d}(t_s^{i+})| \leq \Delta\notag\\
|\boldsymbol{V}(t_s^{i+})| \leq f\Delta\notag
\end{align}
where $f$ is the largest singular value of the matrix $\boldsymbol{F}$ (square root of the largest eigenvalue of $\boldsymbol{F}^T\boldsymbol{F}$).
When the state domain is closed and the connectivity matrix is given by (\ref{eq:flr}) and (\ref{eq:fastD}), the dynamics of the network is equivalent to a point moving in the $J$-dimensional box of figure \ref{fig:box}a, which bounces back to zero (approximately, for a finite-size network) every time it hits a wall.
As the state vector $\mathbf{\hat d}(t) = \boldsymbol{\hat c}(t) - \boldsymbol{D^\text{f}r}(t)$ is bounded by this box (see (\ref{eq:dhat})), the leaky integral of the input $\mathbf{\hat c}(t)$ can be read out from the spikes of the network with the decoder $\mathbf{D^\text{f}}$ applied to the filtered spikes of the network $\mathbf{r}(t)$
\begin{equation}
\hat c_\alpha(t) \approx \sum_{j = 1}^ND^\text{f}_{j \alpha}r_j(t)
\end{equation}
The error of this representation is bounded by the linear size of the state domain $\cal{D}$ (the distance from the origin to the furthest point of $\cal{D}$). In the limit of an infinite network,
\begin{equation}
|\mathbf{\hat c}(t) - \mathbf{D^\text{f}}\mathbf{r}(t)|\leq \omega\label{eq:errw}
\end{equation}
and the error goes to zero after every population spike. More accurate representations correspond to a smaller scale of the thresholds and higher population firing rates of the network.
For finite size networks, the error may exceed the value of $\omega$ (whenever $\boldsymbol{\hat d}(t)$ exits the ball $|\boldsymbol{\hat d}| = \omega$). The exact bound on the representation error will depend on the set of vectors of the feedforward weights $\mathbf{F_i}$.
Also, the error right after a spike will not be exactly zero, but will be bounded in norm by $\Delta$, the largest distance from the base of the normal of a firing face to its furthest point (see (\ref{eq:Delta})).
When the firing surface is open (see figures \ref{fig:box}b,c), the error grows unboundedly when the input lies in a certain sector of the space.
The minimal number of neurons needed to avoid the situation of figure \ref{fig:box}b for a $J$-dimensional input is $J+1$, in which case the state domain $\cal{D}$ is a $J$-dimensional simplex with $J+1$ faces (a triangle, a tetrahedron, and so on). However, in this case the angles between neighboring firing faces are less than 90 degrees, which implies that the state vector can be ``bounced'' outside of the state domain, and it is then not possible to guarantee that only one neuron at a time is above threshold, which is our assumption. The smallest number of neurons that makes it possible to avoid this situation is $2J$, in which case the feedforward weights $\boldsymbol{F}$ should be chosen in such a way that the firing surface is a hypercube.
The representation is most accurate when the sizes of the firing faces are smallest. As there is no a priori reason to single out any direction in the state space, we require the size of the largest firing face $\Delta$ to be as small as possible, which, given a finite number of neurons $N$, means distributing the vectors $\boldsymbol{F_i}$ approximately uniformly on the $(J-1)$-dimensional sphere.
The apparent contradiction pointed out at the beginning of this section, namely that the fact of a single voltage reaching the threshold value (a spike of a particular neuron) unambiguously determines the instantaneous value of the $J$-dimensional state vector, is resolved as follows: the spike of a given neuron $i$ implies not only that the voltage of this neuron has reached the threshold, but also that it has reached the threshold first, before the voltage of any other neuron in the network did.
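As a concrete illustration of the construction of this section, the following Python sketch (all parameter values, the choice $J = 2$, the unit-norm feedforward weights and the sinusoidal input are assumptions of the example) simulates the fast-connections-only network with Euler integration and checks that the representation error stays of the order of $\omega$, as in (\ref{eq:errw}).
\begin{verbatim}
import numpy as np

# fast-connections-only network with J = 2 (illustrative parameters)
N, J, lam, omega = 64, 2, 10.0, 0.1
dt, T = 1e-4, 2.0
t = np.arange(0.0, T, dt)

phi = 2*np.pi*np.arange(N)/N                       # uniform directions
F = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit rows F_i
Thr = omega*np.ones(N)                             # T_i = omega*|F_i| (eq. sph)
Df = omega*F                                       # rows are d^(i) (eq. fastD)
W = -F @ Df.T                                      # Omega^f (eq. flr); its diagonal
                                                   # -omega also resets the spiker
c = lam*np.stack([np.sin(2*np.pi*t), np.cos(2*np.pi*t)])  # |c(t)| = O(lambda)
V, r = np.zeros(N), np.zeros(N)    # voltages and filtered spike trains
c_hat, err = np.zeros(J), []       # leaky integral of the input, errors
for k in range(len(t)):
    V += dt*(-lam*V + F @ c[:, k])
    c_hat += dt*(-lam*c_hat + c[:, k])
    r *= 1.0 - lam*dt
    j = np.argmax(V - Thr)
    if V[j] >= Thr[j]:             # spike of neuron j:
        V += W[:, j]               # fast recurrence brings all voltages to ~0
        r[j] += 1.0
    err.append(np.linalg.norm(c_hat - Df.T @ r))
print(max(err))                    # stays of the order of omega (eq. errw)
\end{verbatim}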
\section{Slow connections. Voltage-to-input predictability}\label{sec:SB}
In this section we consider a network with additional synaptic currents that are slow on the time scale of neuronal integration $\frac{1}{\lambda}$. Now the dynamics of the network is described by the following equation
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + \sum_{\alpha = 1}^JF_{i\alpha}c_\alpha(t) + \sum_{j = 1}^N\Omega^\text{f}_{ij}\sum_{s = 1}^{N_j}\delta(t-t_s^j)+ \sum_{j = 1}^N\Omega^\text{s}_{ij}h^\text{s}_j(t) \hspace{0.5cm} \text{for } i = 1\dots N\notag \\
&V_i(t_s^i) = T_i\hspace{0.5cm} \text{: spike of neuron }i
\end{align}
The slow synaptic current has a kernel that decays exponentially with the decay constant $\lambda_s\ll \lambda$
\begin{equation}
h^\text{s}_i(t) = \sum_{s = 1}^{N_i}\text{e}^{-\lambda_s(t-t_s^i)}\label{eq:slowH}
\end{equation}
We assume the connectivity matrix corresponding to the slow synapses to be of the same form as the fast connectivity matrix, but with a different decoder $\boldsymbol{D^\text{s}}$:
\begin{equation}
\Omega^\text{s}_{ij} = - \sum_{\alpha = 1}^JF_{i\alpha}D^\text{s}_{j \alpha}
\label{eq:OmegaSB}
\end{equation}
It is also crucial for what is described in this section that the input changes slowly on the time scale of the neuronal time constant $\frac{1}{\lambda}$.
The equation (\ref{eq:evdhat}) for time evolution of the state vector $\boldsymbol{\hat d}$ becomes:
\begin{align}
&\dot {\hat d}_\alpha(t) = -\lambda\hat d_{\alpha}(t) + c_\alpha(t) - \sum_{j = 1}^ND^\text{f}_{j \alpha}\sum_{s = 1}^{N_j}\delta(t-t_s^j) - \sum_{j = 1}^ND^\text{s}_{j \alpha}\sum_{s = 1}^{N_j} \text{e}^{-\lambda_s(t-t_s^j)}
\end{align}
which can be solved between the spikes, assuming that the previous spike happened at the moment $t_0$
\begin{equation}
\hat d_\alpha(t) = \int_{t_0}^t\left(c_\alpha(t') - \sum_{j = 1}^ND^s_{j \alpha}h^\text{s}_j(t')\right)\text{e}^{-\lambda(t-t')}dt' + \hat d_\alpha(t_0)\text{e}^{-\lambda(t-t_0)}
\label{eq:ibs}
\end{equation}
The next spike will happen when the vector $\boldsymbol{\hat d}(t)$ reaches the firing face of some neuron $i$: $\boldsymbol{\hat d}(t_s^i) = \boldsymbol{\hat d^{(i)}}$, where $\boldsymbol{\hat d^{(i)}}$ is given by (\ref{eq:di}). If this happens at a time $t_s^i$ such that $\lambda(t_s^i-t_0)\gg 1$, we can ignore the last term in (\ref{eq:ibs}).
Taking into account that $\boldsymbol{c}(t)$ and $\boldsymbol{h^\text{s}}(t)$ are changing slowly, we can write:
\begin{equation}
\hat d_\alpha (t_s^i) = \hat d^{(i)}_\alpha \approx \frac{1}{\lambda}\left(c_\alpha(t_s^i) - \sum_{j = 1}^ND^s_{j \alpha}h^\text{s}_j(t_s^i)\right)
\label{eq:dI}
\end{equation}
This implies that the identity of the spiking neuron approximately determines the total input, feedforward plus recurrent, at the time of the spike. In the limit of infinitely large $\lambda$, the relation is exact.
Thus, the network ``knows'' what correction needs to be made by the spiking neuron in order to bring the total input into all the neurons to zero, and not only the current voltages as in the previous section.
As the $i$-th component of the slow synaptic current $h_i^\text{s}$ is increased by 1 at the moment of the spike of neuron $i$, it follows from (\ref{eq:dI}) that, to bring the total current to zero after the spike, the corresponding column of the matrix $\boldsymbol{D^\text{s}}$ should be chosen as follows
\begin{equation}
\mathbf{D^{s}_{i}} = \lambda\mathbf{\hat d^{(i)}} = \lambda\omega\frac{\mathbf{F_i}}{|\mathbf{F_i}|}
\label{eq:DSB}
\end{equation}
As the recurrent input decays with the slow time-constant set by the synaptic dynamics, and the feedforward input evolves also slowly but with its own dynamics, the two inputs will eventually diverge and stop cancelling each other. As this happens, the state vector $\mathbf{\hat d}(t)$, and thus the voltages of the neurons will slowly deviate from zero.
In summary,
having both fast and slow types of connections in place ensures that the voltages go to zero very fast after the spike, and then evolve slowly as the recurrent input ceases to match the feedforward one. Removing the fast connections while leaving the slow ones in place will not change the picture drastically: instead of going to zero right after a spike, the state vector $\mathbf{\hat d}(t)$ will decay to zero with the fast rate $\lambda$, and then deviate from zero again as the recurrent input ceases to match the feedforward one.
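The modification required to add the slow current to the simulation sketch of the previous section is small: the slow decoder is chosen according to (\ref{eq:DSB}), and each spike increments the corresponding slow current. In the following self-contained Python sketch (parameter values are again illustrative assumptions), the slow recurrent input $\boldsymbol{D^\text{s}h^\text{s}}(t)$ approximately tracks the feedforward input $\boldsymbol{c}(t)$, and the network fires far fewer spikes than the fast-only network.
\begin{verbatim}
import numpy as np

# network with fast and slow connections, J = 2 (illustrative parameters)
N, J, lam, lam_s, omega = 64, 2, 10.0, 1.0, 0.1
dt, T = 1e-4, 10.0
t = np.arange(0.0, T, dt)

phi = 2*np.pi*np.arange(N)/N
F = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit rows F_i
Thr = omega*np.ones(N)
W = -F @ (omega*F).T                               # fast matrix Omega^f
Ds = lam*omega*F                                   # rows are lambda*d^(i) (eq. DSB)
Ws = -F @ Ds.T                                     # slow matrix Omega^s (eq. OmegaSB)

# input changing slowly on the scale 1/lam
c = lam*np.stack([np.sin(2*np.pi*0.2*t), np.cos(2*np.pi*0.2*t)])
V, h_s, n_sp = np.zeros(N), np.zeros(N), 0
for k in range(len(t)):
    V += dt*(-lam*V + F @ c[:, k] + Ws @ h_s)
    h_s *= 1.0 - lam_s*dt              # exponential decay of the slow current
    j = np.argmax(V - Thr)
    if V[j] >= Thr[j]:                 # spike of neuron j:
        V += W[:, j]                   # fast reset, as before
        h_s[j] += 1.0                  # contribution to the slow current
        n_sp += 1
print(n_sp, np.linalg.norm(c[:, -1] - Ds.T @ h_s))  # c ~ D^s h^s
\end{verbatim}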
\section{Temporally-structured input. Expanding the dimensionality of the network dynamics relative to the dimensionality of the feedforward input.} \label{sec:TS}
In this section we consider a feedforward input which, apart from having a low-dimensional structure $\boldsymbol{I^{ext}}(t) = \boldsymbol{Fc}(t)$, is also predictable in time. In particular, the principal components of the input $c_\alpha(t)$ follow a first-order differential equation:
\begin{equation}
\dot c_\alpha(t) = \sum_{\beta = 1}^JA_{\alpha\beta}c_\beta(t)\hspace{1cm}\alpha = 1\dots J
\label{eq:dynsys}
\end{equation}
where $\boldsymbol{A}$ is a $J$-by-$J$ matrix whose eigenvalues are either imaginary or have negative real parts.
In this case the current value of the feedforward input $\boldsymbol{c}(t)$ can be recovered from its leaky integral, since
\begin{equation}
\int_{-\infty}^t\boldsymbol{c}(t')\text{e}^{-\lambda(t-t')}dt' = (\lambda\boldsymbol{I}+\boldsymbol{A})^{-1}\boldsymbol{c}(t)
\end{equation}
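This identity is straightforward to verify numerically: substituting the trajectory $\boldsymbol{c}(t') = \text{e}^{\boldsymbol{A}(t'-t)}\boldsymbol{c}(t)$ into the leaky integral reduces it to $\int_0^\infty \text{e}^{-(\lambda\boldsymbol{I}+\boldsymbol{A})s}ds\,\boldsymbol{c}(t)$. The Python sketch below (the particular stable matrix $\boldsymbol{A}$ and the value of $\boldsymbol{c}(t)$ are arbitrary choices) compares the quadrature against the closed form.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam = 10.0
A = np.array([[-0.12, -0.036], [1.0, 0.0]])  # eigenvalues -0.06 +- 0.18i
c_t = np.array([0.3, -0.7])                  # c at the current time t

closed = np.linalg.solve(lam*np.eye(2) + A, c_t)   # (lam*I + A)^{-1} c(t)

# quadrature of  int_0^inf exp(-(lam*I + A)s) c(t) ds
s = np.linspace(0.0, 5.0, 5001)              # exp(-lam*5) is negligible
vals = np.array([expm(-(lam*np.eye(2) + A)*si) @ c_t for si in s])
ds = s[1] - s[0]
direct = 0.5*(vals[:-1] + vals[1:]).sum(axis=0)*ds
print(closed, direct)                        # the two vectors agree
\end{verbatim}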
As the dynamics of the slow recurrent input is also fixed and of the first order, the instantaneous value of the recurrent input can be recovered from its filtered version in the same way. This makes it possible, in principle, to balance the total input by the contribution of the spiking neuron exactly (in the limit of a large network) even for finite values of $\lambda$. Moreover, as the time derivatives of both the feedforward and the recurrent inputs are determined by their current values, the derivative of the current total input and the balancing contribution can be matched to prolong the balance after a spike and delay the next one.
We will first describe the network that balances the feedforward input with the recurrent input, and then extend the argument to construct a network that also matches the derivatives.
\subsubsection{Balancing the predictable input.}\label{sec:DE1}
As in the previous section, we start by assuming two types of synaptic currents, one fast and one slow. The dynamics of the network is described by
\begin{align}
&\dot V_i(t) = -\lambda V_i(t) + \sum_{\alpha = 1}^JF_{i\alpha}c_\alpha(t) + \sum_{j = 1}^N\Omega^\text{f}_{ij}\sum_{s = 1}^{N_j}\delta(t-t_s^j)+ \sum_{j = 1}^N\Omega^\text{s}_{ij}h^\text{s}_j(t) \hspace{0.5cm} \text{for } i = 1\dots N\notag\\
&V_i(t_s^i) = T_i\hspace{0.2cm}\hspace{0.5cm} \text{: spike of neuron }i
\end{align}
with the slow current $\boldsymbol{h^\text{s}}(t)$ given by (\ref{eq:slowH}). The only difference is that we assume that $\boldsymbol{c}(t)$ follows the dynamics (\ref{eq:dynsys}).
We can integrate this equation, assuming that the last spike has occurred at the moment $t_0$:
$$
\boldsymbol{V}(t) = \boldsymbol{F}(\lambda\boldsymbol{I}+\boldsymbol{A})^{-1}\boldsymbol{c}(t) + \frac{1}{\lambda-\lambda_\text{s}}\boldsymbol{\Omega^\text{s}h^\text{s}}(t) + \boldsymbol{V_0}\text{e}^{-\lambda(t-t_0)}
$$
where $\boldsymbol{V_0}$ is a constant of integration.
At the time of the next population spike, $t_s^i$, the last term can be ignored, assuming that the interspike interval is large compared to $1/\lambda$.
\begin{equation}
\boldsymbol{V}(t_s^i) = \boldsymbol{F}(\lambda\boldsymbol{I}+\boldsymbol{A})^{-1}\boldsymbol{c}(t_s^i) + \frac{1}{\lambda-\lambda_\text{s}}\boldsymbol{\Omega^\text{s}h^\text{s}}(t_s^i)
\end{equation}
The vector of total currents into all the neurons at the moment just before the spike is
\begin{equation}
\boldsymbol{I^\text{tot}}(t_s^i) = \boldsymbol{Fc}(t_s^i) + \boldsymbol{\Omega^\text{s}h^\text{s}}(t_s^i)\label{eq:iim}
\end{equation}
and it is this value that should be balanced by the contribution of the spike of the neuron $i$ to the slow current (which is given by the corresponding column of the matrix $\Omega^\text{s}$). In order for this to be possible, the value of expression (\ref{eq:iim}) should be the same every time neuron $i$ spikes.
If we had chosen the matrix of slow connectivity $\boldsymbol{\Omega^\text{s}}$ to be of the form $\boldsymbol{\Omega^\text{s}} = -\boldsymbol{FD^\text{s}}$ as before, it is the value of $(\lambda\boldsymbol{I} +\boldsymbol{A})^{-1}\boldsymbol{c}(t) - \frac{1}{\lambda-\lambda_\text{s}}\boldsymbol{D^\text{s}h^\text{s,before}}$ that would be determined by the preferred vector of the neuron $i$, $\boldsymbol{\hat{d}^{(i)}}$, and thus be the same at every instance of neuron $i$ firing a spike. However, the value of this expression implies a certain value of (\ref{eq:iim}) only approximately, when we ignore $|\boldsymbol{A}|$ and $\lambda_\text{s}$ in comparison to $\lambda$, as we did in the previous section. The norm of a matrix, in this case $|\boldsymbol{A}|$, is defined as the maximum of the absolute values of its eigenvalues. We keep this convention throughout the current manuscript.
To ensure that the identity of the spiking neuron determines the value of the total input current (\ref{eq:iim}) exactly, the dimensionality of the network dynamics should be extended relative to the dimensionality of the feedforward input by assuming the matrix of slow connections to be of the form:
\begin{equation}
\boldsymbol{\Omega^\text{s}} = -\boldsymbol{FD^\text{s}} +\boldsymbol{ F^\text{int}\tau}\boldsymbol{D^\text{s}},\label{eq:Os2}
\end{equation}
with the columns of the $N$ by $J$ matrix $\boldsymbol{F^\text{int}}$ being orthogonal to the F-subspace
$$
\sum_{i = 1}^N F_{i\alpha}F^\text{int}_{i\beta} = 0\hspace{0.6cm}\forall \alpha,\beta
$$
The vector of the total input currents then has two components: along the subspace $\text{Im}\boldsymbol{F}$ and along the subspace $\text{Im}\boldsymbol{F^\text{int}}$
\begin{equation}
\boldsymbol{I^\text{tot}}(t_s^i) = \boldsymbol{F}(\boldsymbol{c}(t_s^i)-\boldsymbol{D^\text{s}h^\text{s}}(t_s^i)) + \boldsymbol{F^\text{int}\tau D^\text{s}h^\text{s}}(t_s^i)\label{eq:iim2}
\end{equation}
We assume that the $2J$-dimensional vectors obtained by concatenating the corresponding rows of the matrices $\boldsymbol{F}$ and $\boldsymbol{F^\text{int}}$, written as vectors $\begin{bmatrix}\boldsymbol{F_i}\\ \boldsymbol{F^\text{int}_i}\end{bmatrix}$, have a particular distribution in the $2J$-dimensional space, which we will discuss later.
The matrix $\boldsymbol{\tau}$ is an invertible $J$ by $J$ matrix. We will also discuss the constraints on the eigenvalues of $\boldsymbol{\tau}$ later in this section.
The dynamics of the network is now described by the $2J$-dimensional state vector $\boldsymbol{\hat d}(t)$, that is bounded by the firing surface, which is now embedded in the $2J$-dimensional space. The location of the firing face of each neuron is described by $2J$-dimensional vector $\boldsymbol{\hat d^{(i)}}$. The vectors $\boldsymbol{\hat d^{(i)}}$ are determined by the feedforward weights of the neurons $\boldsymbol{F_i}$ and the corresponding rows $\boldsymbol{F^\text{int}_i}$ of the matrix $\boldsymbol{F^\text{int}}$
\begin{equation}
\boldsymbol{\hat d^{(i)}} = \frac{T_i}{\boldsymbol{F_i}^2+ (\boldsymbol{F^\text{int}_i})^2}\begin{bmatrix}
\boldsymbol{F_i}\\
\boldsymbol{F^\text{int}_i}
\end{bmatrix} = \omega\frac{1}{\sqrt{\boldsymbol{F_i}^2 + (\boldsymbol{F^\text{int}_i})^2}}\begin{bmatrix}
\boldsymbol{F_i}\\
\boldsymbol{F^\text{int}_i}
\end{bmatrix}
\label{eq:di2}
\end{equation}
The fast interactions are now responsible for bringing the $2J$-dimensional vector $\boldsymbol{\hat d}(t)$ to the origin, and hence the rank of the matrix
of fast connections $\boldsymbol{\Omega^\text{f}}$ becomes $2J$, rather than $J$ as in the previous section
$$
\boldsymbol{\Omega^\text{f}} = - [\boldsymbol{F}\hspace{0.1cm} \boldsymbol{F^\text{int}}]\boldsymbol{D^\text{f}}
$$
where the columns of the matrix $\boldsymbol{D^\text{f}}$ are given by the selectivity vectors $\boldsymbol{\hat d^{(i)}}$ from (\ref{eq:di2}).
Now the spike of neuron $i$ means that two $J$-dimensional vector equations are satisfied:
\begin{equation}
\left\{\begin{matrix}
(\lambda\boldsymbol{I} + \boldsymbol{A)}^{-1}\boldsymbol{c}(t_s^i)-\frac{1}{\lambda-\lambda_\text{s}}\boldsymbol{D^\text{s}h^\text{s}}(t_s^i) = \boldsymbol{\hat d^{(i)}_{1\dots J}}\\
\frac{1}{\lambda-\lambda_\text{s}}\boldsymbol{\tau}\boldsymbol{D^\text{s}h^\text{s}}(t_s^i) = \boldsymbol{\hat d^{(i)}_{J+1\dots 2J}}\
\end{matrix}
\right.\label{eq:ASCDE1}
\end{equation}
These two equations can be solved together to determine $\boldsymbol{c}(t_s^i)$ and $\boldsymbol{D^\text{s}h^{s}}(t_s^i)$ separately:
\begin{align}
&\boldsymbol{D^\text{s}h^\text{s}}(t_s^i) = (\lambda-\lambda_\text{s})\boldsymbol{\tau}^{-1}\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}}\notag\\
&\boldsymbol{c}(t_s^i) = (\lambda\boldsymbol{I}+\boldsymbol{A})(\boldsymbol{\hat d^{(i)}_{1\dots J}}+\boldsymbol{\tau}^{-1}\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}})\notag
\end{align}
This implies that the spike of neuron $i$ specifies not only the instantaneous value of the total input filtered with the neuronal time constant (a particular linear combination of the instantaneous values of the feedforward and the recurrent inputs), but also the instantaneous values of the feedforward and the recurrent inputs separately. In other words, every time neuron $i$ spikes, $\boldsymbol{c}(t)$ is the same, and so is $\boldsymbol{D^\text{s}h^{s}}(t)$. This makes it possible to correct for the total input imbalance (\ref{eq:iim}) with the slow synaptic current of neuron $i$, or more precisely, for the component of the total input along the subspace $\text{Im}\boldsymbol{F}$ (the first term in (\ref{eq:iim2})). As the contribution of the spike of neuron $i$ to this component of the synaptic current is equal to $-\boldsymbol {FD^\text{s}_i}$, this is achieved by choosing
\begin{equation}
\boldsymbol{D^\text{s}_i} = \boldsymbol{c}(t_\text{s}^i) - \boldsymbol{D^\text{s}h^\text{s}}(t_\text{s}^i) = (\lambda\boldsymbol{I} + \boldsymbol{A})\boldsymbol{\hat d^{(i)}_{1\dots J}} + (\lambda_\text{s}\boldsymbol{I} + \boldsymbol{A})\boldsymbol{\tau}^{-1}\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}}
\label{eq:DS2J}
\end{equation}
Imposing this equation for every neuron $i$ determines the matrix $\boldsymbol{D^\text{s}}$, and hence the matrix of slow connections $\boldsymbol{\Omega^\text{s}}$ (see (\ref{eq:Os2})).
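The consistency of this construction can be checked with a few lines of linear algebra. The Python sketch below (the scales of $\boldsymbol{A}$ and $\boldsymbol{\tau}$ are arbitrary illustrative choices) draws a random firing vector $\boldsymbol{\hat d^{(i)}}$, reconstructs $\boldsymbol{c}(t_s^i)$ and $\boldsymbol{D^\text{s}h^\text{s}}(t_s^i)$ from (\ref{eq:ASCDE1}), and verifies that the column (\ref{eq:DS2J}) cancels the total input in the $F$-subspace after the spike.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
J, lam, lam_s = 2, 10.0, 1.0
A   = 0.1*rng.standard_normal((J, J))       # slow input dynamics, |A| << lam
tau = 0.01*rng.standard_normal((J, J))      # invertible, small norm
d1, d2 = rng.standard_normal(J), rng.standard_normal(J)  # halves of d^(i)
I = np.eye(J)

Dshs = (lam - lam_s)*np.linalg.solve(tau, d2)          # recurrent input at spike
c    = (lam*I + A) @ (d1 + np.linalg.solve(tau, d2))   # feedforward input at spike

# the pair satisfies the system (ASCDE1)
print(np.linalg.solve(lam*I + A, c) - Dshs/(lam - lam_s) - d1)  # ~ 0
print(tau @ Dshs/(lam - lam_s) - d2)                            # ~ 0

# the column D^s_i of eq. (DS2J) cancels the total input after the spike
Ds_i = (lam*I + A) @ d1 + (lam_s*I + A) @ np.linalg.solve(tau, d2)
print(c - Dshs - Ds_i)                                          # ~ 0
\end{verbatim}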
In the limit of large networks ($N\rightarrow\infty$ with $J$ remaining finite), the total input current in the $F$-subspace can be balanced exactly after every spike. However, this is achieved at the cost of introducing an additional current in the subspace $\text{Im}\boldsymbol{F^\text{int}}$, which will also drive the neurons away from the rest potential, shortening the interspike interval. In order for the dimensionality expansion described above to make the network more efficient, we must require that this extra current is small in comparison with the imbalance that remains after a spike in the case of no dimensionality expansion and $\boldsymbol{D^\text{s}} = \lambda \boldsymbol{D^\text{f}}$ (see section \ref{sec:SB}). This imbalance is of the order of $\frac{\lambda^\text{slow}}{\lambda} \boldsymbol{c}(t_s^i)\approx \frac{\lambda^\text{slow}}{\lambda}\boldsymbol{D^\text{s}h^\text{s}}(t_s^i)$, where
$$
\lambda^\text{slow} = \text{max}(|\boldsymbol{A}|,\lambda_\text{s})
$$
corresponds to the slowest time scale in the problem. So, in order for the newly introduced second term of equation (\ref{eq:iim2}) to be small in comparison with the first term, we require that $|\boldsymbol{\tau}||\boldsymbol{D^\text{s}h^\text{s}}(t_s^i)|\ll \frac{\lambda^\text{slow}}{\lambda}|\boldsymbol{c}(t_s^i)|$, or
\begin{equation}
|\boldsymbol{\tau}|\ll \frac{\lambda^\text{slow}}{\lambda}\label{eq:tauSmall}
\end{equation}
This condition alone would imply that it is best to choose the norm of the matrix $\boldsymbol{\tau}$ as small as possible to maximize the representation efficiency. However, this is only true in the limit $N\rightarrow\infty$. For finite networks, the condition (\ref{eq:ASCDE1}) is guaranteed to be satisfied after every spike only with some accuracy, determined by the bound on the size of the firing faces of neurons, $\Delta$ (see section \ref{sec:fastOnly}). This finite-size effect introduces an error of order $\Delta$ into the second equation of (\ref{eq:ASCDE1}), which translates into a residual imbalance in the total synaptic current after the spike, whose upper bound scales inversely with the norm of the matrix $\boldsymbol{\tau}$ and is equal to $(\lambda_\text{s}\boldsymbol{I} +\boldsymbol{A})\boldsymbol{\tau}^{-1}\boldsymbol{\Delta}(t_s^i)$. This imbalance should be small compared to the correction to the current made by the spike, which is of norm $\lambda\omega$. Consequently, there is another constraint on the norm of the matrix $\boldsymbol{\tau}$:
\begin{equation}
|\boldsymbol{\tau}|\gg\frac{\lambda^\text{slow}}{\lambda}\frac{\Delta}{\omega}
\label{eq:tauBig}
\end{equation}
Here $\frac{\Delta}{\omega}$ is the maximum angle at which a firing face is seen from the origin. This value depends on the number of neurons in the network and on the distribution of the vectors $\boldsymbol{\hat d^{(i)}}$, but not on the representation error $\omega$, which uniformly scales the state domain but does not change the angles.
In the network of section \ref{sec:SB}, whose dynamics has the same dimensionality as its input, it was most efficient to assume an approximately uniform distribution of the directions of the firing vectors of the neurons $\boldsymbol{\hat d^{(i)}}$ in the $J$-dimensional space. In the present case, however, only a small fraction of the directions in the $2J$-dimensional state space will be explored by the network. This implies that the number of neurons necessary to ensure that the firing faces are small enough is much lower than would be expected if the entire sphere in the $2J$-dimensional space had to be tiled. In the Appendix, section \ref{sec:App}, we describe the most efficient arrangement of the firing vectors $\boldsymbol{\hat d^{(i)}}$ (which are collinear with the vectors $[\boldsymbol F, \boldsymbol F^\text{int}]$ obtained by concatenating the corresponding rows of the matrices $\boldsymbol{F}$ and $\boldsymbol{F^\text{int}}$), and derive from the condition (\ref{eq:tauBig}) an approximate lower bound on the number of neurons $N$ in the network:
\begin{equation}
N\gg \frac{(2B)^JV_JS_J}{V_{2J-1}\omega^J|\boldsymbol \tau|^{J-1}}\left(\frac{\lambda^\text{slow}}{\lambda}\right)^{2J-1}
\label{eq:boundN}
\end{equation}
where $S_J$ is the area of the unit sphere and $V_J$ is the volume of the unit ball in $J$-dimensional space, and $B$ is an upper bound on the typical value of $|\boldsymbol{c}|/\lambda$.
If this lower bound is not satisfied, the proposed scheme of expanding the dimensionality of the network dynamics in comparison with the dimensionality of the input will not make the input representation more efficient.
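For orientation, the bound can be evaluated numerically. The Python sketch below assumes that $S_J$ denotes the surface area of the unit sphere in $J$-dimensional space; all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def ball_volume(n):    # volume of the unit ball in n-dimensional space
    return np.pi**(n/2)/gamma(n/2 + 1)

def sphere_area(n):    # area of the unit sphere in n-dimensional space
    return 2*np.pi**(n/2)/gamma(n/2)

# illustrative values: J, B, omega, |tau|, lambda_slow/lambda
J, B, omega, tau, ratio = 2, 1.0, 0.1, 0.01, 0.1
bound = (2*B)**J*ball_volume(J)*sphere_area(J) \
        /(ball_volume(2*J - 1)*omega**J*tau**(J - 1))*ratio**(2*J - 1)
print(bound)           # N must be much larger than this (~ 2e2 here)
\end{verbatim}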
\subsubsection{Balancing the predictable input with matching the first derivative}\label{sec:DE2}
In this section we consider a network with two types of slow synaptic currents, $\boldsymbol{h^{\text{s1}}}$ and $\boldsymbol{h^{\text{s2}}}$ that decay with different rates $\lambda_{\text{s1}}$ and $\lambda_{\text{s2}}$
\begin{align}
\dot{h}^{\text{s1}}_i(t) = -\lambda_{\text{s1}}h^{\text{s1}}_i(t) + \rho_i(t)\notag\\
\dot{h}^{\text{s2}}_i(t) = -\lambda_{\text{s2}}h^{\text{s2}}_i(t) + \rho_i(t)
\end{align}
where $\rho_i(t)$ is the neural response function of neuron $i$ (the train of Dirac delta-functions at its spike times).
We will show that for feedforward input following (\ref{eq:dynsys}), we can choose the recurrent connectivity in such a way that not only the values of the recurrent and the feedforward inputs are guaranteed to match after every spike, but also their first derivatives. This prolongs the feedforward-recurrent balance after a spike and increases the interspike interval further, making the representation of the input even more efficient.
The network now has two slow connectivity matrices $\boldsymbol{\Omega^{\text{s1}}}$ and $\boldsymbol{\Omega^\text{s2}}$ associated with the two currents. The expression for the voltage between the spikes becomes
\begin{equation}
\boldsymbol{V}(t) = \boldsymbol{F}(\lambda\boldsymbol{I}+\boldsymbol{A})^{-1}\boldsymbol{c}(t) + \frac{1}{\lambda - \lambda_\text{s1}}\boldsymbol{\Omega^\text{s1}h^\text{s1}}(t) + \frac{1}{\lambda - \lambda_\text{s2}}\boldsymbol{\Omega^\text{s2}h^\text{s2}}(t) + \boldsymbol{V_0}\text{e}^{-\lambda(t-t_\text{sp})}
\label{eq:V52}
\end{equation}
where $t_\text{sp}$ is the time of the last population spike, and $\boldsymbol{V_0}$ is the integration constant that is determined from the value of the voltage right after the spike (if the matrix of fast connections is in place, $\boldsymbol{V_0}$ is determined from $\boldsymbol V(t) =\boldsymbol 0$).
The dimensionality of the network dynamics is expanded even further compared to the dimensionality of the input, and is now equal to $3J$.
The slow connectivity matrices are of the form:
\begin{align}
\boldsymbol{\Omega^\text{s1}} = -\boldsymbol{FD^\text{s1}} + \boldsymbol{F^\text{int}\tau_1D^\text{s1}} +\boldsymbol{\bar F^\text{int}\bar\tau_1D^\text{s1}} \notag\\
\boldsymbol{\Omega^\text{s2}} = -\boldsymbol{FD^\text{s2}} + \boldsymbol{F^\text{int}\tau_2D^\text{s2}} +\boldsymbol{\bar F^\text{int}\bar\tau_2D^\text{s2}}
\end{align}
Here, $\boldsymbol{F^\text{int}}$ is an $N\times J$-dimensional matrix whose columns are orthogonal to the columns of the matrix $\boldsymbol{F}$, and $\boldsymbol{\bar F^\text{int}}$ is another $N\times J$ matrix with the columns orthogonal to all the columns of matrices $\boldsymbol{F}$ and $\boldsymbol{F^\text{int}}$, and $\boldsymbol{\tau_1}$, $\boldsymbol{\tau_2}$, $\boldsymbol{\bar\tau_1}$ and $\boldsymbol{\bar\tau_2}$ are invertible $J\times J$ matrices that are arbitrary except for the scale of their eigenvalues.
The state vector of the network $\boldsymbol{\hat d}(t)$ is now $3J$-dimensional, and so are the firing surface and the selectivity vectors of the neurons, $\boldsymbol{\hat d^{(i)}}$.
The neuron $i$ spikes when the state vector of the network $\boldsymbol{\hat d}(t)$ reaches the corresponding firing face. As usual, we assume that the number of neurons $N$ is large, that the vectors $[\boldsymbol {F,F^\text{int},\bar F^\text{int}} ]$ uniformly cover the directions of the $3J$-dimensional space that are explored by the state vector $\boldsymbol{\hat d}(t)$, and that the interspike interval is large on the time scale of $1/\lambda$, so that the last term in equation (\ref{eq:V52}) can be ignored at the moment right before a spike. Then three vector equations are satisfied at the moment just before the spike:
\begin{equation}
\left\{
\begin{matrix}
(\lambda\boldsymbol{I} + \boldsymbol{A})^{-1}\boldsymbol{c}(t_\text{s}^i) - \frac{1}{\lambda-\lambda_\text{s1}}\boldsymbol{D^\text{s1}h^\text{s1}}(t_\text{s}^i) - \frac{1}{\lambda-\lambda_\text{s2}}\boldsymbol{D^\text{s2}h^\text{s2}}(t_\text{s}^i)= \boldsymbol{\hat d^{(i)}_{1\dots J}} \\
\frac{1}{\lambda-\lambda_\text{s1}}\boldsymbol{\tau_1D^\text{s1}h^\text{s1}}(t_\text{s}^i) + \frac{1}{\lambda-\lambda_\text{s2}}\boldsymbol{\tau_2D^\text{s2}h^\text{s2}}(t_\text{s}^i) = \boldsymbol{\hat d^{(i)}_{J+1\dots 2J}}\\
\frac{1}{\lambda-\lambda_\text{s1}}\boldsymbol{\bar\tau_1D^\text{s1}h^\text{s1}}(t_\text{s}^i)+ \frac{1}{\lambda-\lambda_\text{s2}}\boldsymbol{\bar\tau_2D^\text{s2}h^\text{s2}}(t_\text{s}^i)= \boldsymbol{\hat d^{(i)}_{2J+1\dots 3J}}
\end{matrix}
\right.
\label{eq:SCDE2}
\end{equation}
The balance condition is imposed only in the input subspace (the subspace spanned by the columns of the matrix $\boldsymbol{F}$), and the recurrent input in the additional dimensions is small. We require that both the total input and its first derivative are zero after a spike:
\begin{align}
&\boldsymbol{c}(t_\text{s}^i) - \boldsymbol{D^\text{s1}h^\text{s1,after}}(t_\text{s}^i) - \boldsymbol{D^\text{s2}h^\text{s2,after}}(t_\text{s}^i) = \mathbf{0}\notag\\
&\boldsymbol{Ac}(t_\text{s}^i) +\lambda_\text{s1} \boldsymbol{D^\text{s1}h^\text{s1,after}}(t_\text{s}^i) +\lambda_\text{s2}\boldsymbol{D^\text{s2}h^\text{s2,after}}(t_\text{s}^i) = \mathbf{0}
\label{eq:balanceDE2}
\end{align}
This implies that the contributions of spiking neuron $i$ to the two synaptic currents along the input space, $\boldsymbol{D^\text{s1}_i}$ and $\boldsymbol{D^\text{s2}_i}$, should satisfy
\begin{align}
&\boldsymbol{D^\text{s1}_i} + \boldsymbol{D^\text{s2}_i} = \boldsymbol{c}(t_\text{s}^i) - \boldsymbol{D^\text{s1}h^\text{s1}}(t_\text{s}^i) - \boldsymbol{D^\text{s2}h^\text{s2}}(t_\text{s}^i)\notag\\
&\lambda_\text{s1}\boldsymbol{D^\text{s1}_i} + \lambda_\text{s2}\boldsymbol{D^\text{s2}_i} = - (\boldsymbol{Ac}(t_\text{s}^i) +\lambda_\text{s1}\boldsymbol{D^\text{s1}h^\text{s1}}(t_\text{s}^i) +\lambda_\text{s2} \boldsymbol{D^\text{s2}h^\text{s2}}(t_\text{s}^i))
\label{eq:balanceDE2i}
\end{align}
This is possible only if the time-dependent right-hand side of these equations is the same every time neuron $i$ spikes. And indeed, the system of equations (\ref{eq:SCDE2}) can be solved to obtain the components of the recurrent and feedforward currents along the input space at the moment just before the spike, $\boldsymbol{D^\text{s1}h^\text{s1}}(t_\text{s}^i)$, $\boldsymbol{D^\text{s2}h^\text{s2}}(t_\text{s}^i)$ and $\boldsymbol{c}(t_\text{s}^i)$. The result can then be plugged into (\ref{eq:balanceDE2i}) to obtain the expressions for the $i$-th columns of the matrices $\boldsymbol{D^\text{s1}}$ and $\boldsymbol{D^\text{s2}}$, which determine the contributions of the spike of neuron $i$ to the two slow currents $\boldsymbol{h^\text{s1}}$ and $\boldsymbol{h^\text{s2}}$.
\begin{align}
\boldsymbol{D^\text{s1}_i} = \frac{1}{\lambda_\text{s2} - \lambda_\text{s1}}\left[(\lambda_\text{s2}\boldsymbol{I}+\boldsymbol{A})(\lambda\boldsymbol{I}+\boldsymbol{A})(\boldsymbol{ \hat d^{(i)}_{1\dots J}}+\boldsymbol{b}) + (\lambda_\text{s1}\boldsymbol{I}+\boldsymbol{A})((\lambda+\lambda_\text{s2}-\lambda_\text{s1})\boldsymbol{I}+\boldsymbol{A})\boldsymbol{\bar b}\right] \notag\\
\boldsymbol{D^\text{s2}_i} = \frac{1}{\lambda_\text{s1} - \lambda_\text{s2}}\left[(\lambda_\text{s1}\boldsymbol{I}+\boldsymbol{A})(\lambda\boldsymbol{I}+\boldsymbol{A})(\boldsymbol{ \hat d^{(i)}_{1\dots J}}+\boldsymbol{\bar b}) + (\lambda_\text{s2}\boldsymbol{I}+\boldsymbol{A})((\lambda+\lambda_\text{s1}-\lambda_\text{s2})\boldsymbol{I}+\boldsymbol{A})\boldsymbol{b}\right]
\end{align}
Here $\boldsymbol{b}$ and $\boldsymbol{\bar b}$ are the following combinations of the $J$-dimensional projections of the firing vector $\boldsymbol{\hat d^{(i)}}$ onto the coordinate planes, $\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}}$ and $\boldsymbol{\hat d^{(i)}_{2J+1\dots 3J}}$:
\begin{align}
&\boldsymbol{b} = (\boldsymbol{\tau_1}^{-1}\boldsymbol{\tau_2} - \boldsymbol{\bar\tau_1}^{-1}\boldsymbol{\bar\tau_2})^{-1}(\boldsymbol{\tau_1}^{-1}\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}} - \boldsymbol{\bar\tau_1}^{-1}\boldsymbol{\hat d^{(i)}_{2J+1\dots 3J}})\\
&\boldsymbol{\bar b} = (\boldsymbol{\tau_2}^{-1}\boldsymbol{\tau_1} - \boldsymbol{\bar\tau_2}^{-1}\boldsymbol{\bar\tau_1})^{-1}(\boldsymbol{\tau_2}^{-1}\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}} - \boldsymbol{\bar\tau_2}^{-1}\boldsymbol{\hat d^{(i)}_{2J+1\dots 3J}})
\end{align}
The firing vector $\boldsymbol{\hat d^{(i)}}$ is in turn determined by the $i$-th row of the matrices $\boldsymbol{F}$, $\boldsymbol{F^\text{int}}$ and $\boldsymbol{\bar F^\text{int}}$:
\begin{align}
&\boldsymbol{\hat d^{(i)}_{1\dots J}} = \frac{T_i}{\boldsymbol{F_i}^2 + (\boldsymbol{F^\text{int}_i})^2 + (\boldsymbol{\bar F^\text{int}_i})^2}\boldsymbol{F_i}\notag\\
&\boldsymbol{\hat d^{(i)}_{J+1\dots 2J}} = \frac{T_i}{\boldsymbol{F_i}^2 + (\boldsymbol{F^\text{int}_i})^2 + (\boldsymbol{\bar F^\text{int}_i})^2}\boldsymbol{F^\text{int}_i}\notag\\
&\boldsymbol{\hat d^{(i)}_{2J+1\dots 3J}} = \frac{T_i}{\boldsymbol{F_i}^2 + (\boldsymbol{F^\text{int}_i})^2 + (\boldsymbol{\bar F^\text{int}_i})^2}\boldsymbol{\bar F^\text{int}_i}
\end{align}
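These closed-form expressions can be checked numerically against a direct solution of (\ref{eq:SCDE2}) and (\ref{eq:balanceDE2i}). The following Python sketch (the scales of $\boldsymbol{A}$ and of the $\boldsymbol{\tau}$-matrices are arbitrary illustrative choices) performs this check.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
J, lam, ls1, ls2 = 2, 10.0, 2.0, 1.2
A = 0.1*rng.standard_normal((J, J))               # slow input dynamics
t1, t2, tb1, tb2 = (0.01*rng.standard_normal((J, J)) for _ in range(4))
d1, d2, d3 = (rng.standard_normal(J) for _ in range(3))
I = np.eye(J)

# solve (SCDE2): u = D^s1 h^s1/(lam-ls1), v = D^s2 h^s2/(lam-ls2)
M = np.block([[t1, t2], [tb1, tb2]])
u, v = np.split(np.linalg.solve(M, np.concatenate([d2, d3])), 2)
X1, X2 = (lam - ls1)*u, (lam - ls2)*v             # the two recurrent inputs
c = (lam*I + A) @ (d1 + u + v)                    # the feedforward input

# direct solution of the balance conditions (balanceDE2i)
R1 = c - X1 - X2
R2 = -(A @ c + ls1*X1 + ls2*X2)
Ds1 = (ls2*R1 - R2)/(ls2 - ls1)
Ds2 = (R2 - ls1*R1)/(ls2 - ls1)

# closed-form expressions in terms of b and bbar
b    = np.linalg.solve(np.linalg.solve(t1, t2) - np.linalg.solve(tb1, tb2),
                       np.linalg.solve(t1, d2) - np.linalg.solve(tb1, d3))
bbar = np.linalg.solve(np.linalg.solve(t2, t1) - np.linalg.solve(tb2, tb1),
                       np.linalg.solve(t2, d2) - np.linalg.solve(tb2, d3))
Ds1c = ((ls2*I + A) @ (lam*I + A) @ (d1 + b)
        + (ls1*I + A) @ ((lam + ls2 - ls1)*I + A) @ bbar)/(ls2 - ls1)
Ds2c = ((ls1*I + A) @ (lam*I + A) @ (d1 + bbar)
        + (ls2*I + A) @ ((lam + ls1 - ls2)*I + A) @ b)/(ls1 - ls2)
print(Ds1 - Ds1c, Ds2 - Ds2c)                     # both ~ 0
\end{verbatim}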
The matrix of fast connections has rank $3J$ and is given by:
$$
\boldsymbol{\Omega^\text{f}} = - [\boldsymbol{F} \hspace{0.1cm}\boldsymbol{F^\text{int}} \hspace{0.1cm}\boldsymbol{\bar{F}^\text{int}}]\boldsymbol{D^\text{f}}
$$
with the matrix $\boldsymbol{D^\text{f}}$ consisting of the vectors $\boldsymbol{\hat d^{(i)}}$.
We estimate the number of neurons $N$ in the network required for the current scheme of dimensionality expansion to be beneficial as
\begin{equation}
N\gg \frac{(2B)^{2J}V_{2J}S_J}{V_{3J-1}\omega^{2J}\tau_\text{max}^{J-1}}\left(\frac{\lambda^\text{slow}}{\lambda}\right)^{3J-1}
\label{eq:boundN3}
\end{equation}
where $S_J$ is the area of the unit sphere and $V_J$ is the volume of the unit ball in $J$-dimensional space, $B$ is an upper bound on the typical value of $|\boldsymbol{c}|/\lambda$, and $\tau_\text{max}$ is the maximum absolute value among the eigenvalues of the matrices $\boldsymbol{\tau_1}$, $\boldsymbol{\tau_2}$, $\boldsymbol{\bar\tau_1}$ and $\boldsymbol{\bar \tau_2}$ (see Appendix, section \ref{sec:App}).
\section{Numerical Simulations}
To illustrate the dynamics of the efficient balanced network with different structures of synaptic interactions, we numerically simulate the activity of four different networks in response to the same feedforward input. The feedforward input in our simulations is two-dimensional, and its principal components follow an autonomous first-order linear differential equation, as in section \ref{sec:TS}:
$
\dot{\boldsymbol{c}} = \boldsymbol{Ac}
$
with $\boldsymbol{A} = \begin{bmatrix}-0.12& -0.036\\ 1 &0\end{bmatrix}$ (eigenvalues $-0.06\pm 0.18i$) and the initial condition $\boldsymbol{c}(0) = \begin{bmatrix}-0.3\\0.96\end{bmatrix}$ (see Figure \ref{fig:sim1}\textbf{a}).
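This input trajectory is easy to reproduce with a short Python sketch (independent of the Matlab code provided with the paper), using the exact one-step propagator of the linear dynamics.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.12, -0.036], [1.0, 0.0]])
print(np.linalg.eigvals(A))     # -0.06 +- 0.18i: a slowly decaying oscillation

dt, T = 1e-3, 100.0
P = expm(A*dt)                  # exact one-step propagator of dc/dt = A c
c = np.empty((int(T/dt), 2))
c[0] = [-0.3, 0.96]
for k in range(1, len(c)):
    c[k] = P @ c[k - 1]
# c now reproduces the two-component input used in all four simulations
\end{verbatim}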
The first network is the network with only fast synaptic currents, described in section \ref{sec:fastOnly}. The strength of synaptic connections given by (\ref{eq:flr}) and (\ref{eq:fastD}) guarantees that the membrane voltages of all the neurons are brought to zero after every spike, which can be seen in Figure \ref{fig:sim1}\textbf{d}, where the time evolution of the voltages of 5 randomly chosen neurons is plotted. However, this network has no capacity to balance the feedforward input between the spikes, which is reflected in the rapid rise of the membrane voltages between the spikes (see Figure \ref{fig:sim1}\textbf{d}). The total number of spikes fired by this network is $N_\text{sp} = 2875$. The time evolution of the two principal components of the feedforward input, $c_1(t)$ and $c_2(t)$, can be approximately recovered from the spikes of the network by means of the decoder matrix $\boldsymbol{D^\text{f}}$ given in (\ref{eq:fastD}): $\boldsymbol{c} \approx \lambda\boldsymbol{D^\text{f}r}(t)$. This reconstruction is illustrated in Figure \ref{fig:sim1}\textbf{b}. The precision of the approximation is determined by how fast the input $\boldsymbol{c}(t)$ changes on the time scale of the membrane time constant $1/\lambda$. In the present simulation $\lambda = 10$, and the absolute value of the eigenvalues of the matrix $\boldsymbol{A}$ is $|\boldsymbol{A}|\approx 0.19\ll \lambda$. The signal that is recovered exactly in the case of an infinitely large network is the leaky integral of the feedforward input $\boldsymbol{\hat{c}}(t)$ given in (\ref{eq:chat}). The reconstruction of $\boldsymbol{\hat{c}}(t)$ as $\boldsymbol{D^\text{f}r}(t)$ is shown in Figure \ref{fig:sim1}\textbf{c}; the plots are multiplied by $\lambda$ to keep the same scale as in Figures \ref{fig:sim1}\textbf{a} and \ref{fig:sim1}\textbf{b}. As the feedforward weights of the neurons were chosen to correspond to the selectivity vectors required to represent this particular input (see the end of the current section for details), the reconstruction is very precise.
\begin{figure}
\includegraphics[width = 15cm]{figure2.pdf}
\centering
\caption{Numerical simulations. The network with only fast connections. \textbf{a.} The shape of the two-component time-varying input used in all the simulations. The total duration of the input is 100 time units, but for illustration purposes, when showing the reconstructed signal, we use the 3-time-unit window bounded by the dashed line. \textbf{b.} The two components of the input $\boldsymbol{c}(t)$ and their reconstruction from the spikes of the network, $\lambda \boldsymbol{D^\text{f}r}(t)$. The reconstructed input changes stepwise at the moment of a spike, approaching, but not attaining, the true value. The scale of the representational error right after a spike is determined by the ratio of the rate of change of the input to the neuronal rate constant $\lambda$, which in the current case is small ($\approx 0.02$). \textbf{c.} The leaky integral of the input $\boldsymbol{\hat c}(t)$ (as defined in (\ref{eq:chat})), multiplied by $\lambda$ to be on the same scale as $\boldsymbol{c}(t)$, and the same reconstruction as in \textbf{b}, $\lambda \boldsymbol{D^\text{f}r}(t)$. The reconstruction becomes exact after every spike because the simulated neurons were chosen to have the exact required selectivity vectors (feedforward weights), imitating the infinite network, which has neurons with any given selectivity vector. \textbf{d.} The traces of the membrane voltages of five randomly chosen neurons. After every spike the voltages of all the neurons drop to zero in a step-wise manner.}
\label{fig:sim1}
\end{figure}
The second network that we simulate has a slowly decaying synaptic current in addition to the fast one, with the structure of the synaptic efficacies chosen in such a way that the recurrent input lies in the same 2-dimensional subspace of the neural space as the feedforward input, see equations (\ref{eq:OmegaSB}) and (\ref{eq:DSB}) and section \ref{sec:SB}. The feedforward input $\boldsymbol{c}(t)$ can now be approximately recovered from the slow synaptic current that is balancing it: $\boldsymbol{c}(t)\approx \boldsymbol{D^\text{s}h^\text{s}}(t)$. The slow current $\boldsymbol{h^\text{s}}(t)$ is in turn recoverable from the spiking activity of the network by convolving it with the decaying exponential kernel (in the present simulation, the decay rate is $\lambda_\text{s} = 2$). The reconstruction is shown in Figure \ref{fig:sim2}\textbf{a}. The precision is determined by the largest of $|\boldsymbol{A}|$ and $\lambda_\text{s}$, relative to $\lambda$; in this simulation $\lambda_\text{s}/\lambda = 0.2$. This second network is equivalent to the first one whose feedforward input is the sum of the actual feedforward input $\boldsymbol{Fc}(t)$ and the slow recurrent input $-\boldsymbol{FD^\text{s}h^\text{s}}(t)$. The leaky integral of this total input is exactly (in the case of an infinite network) recoverable from the spikes of the network with the decoder $\boldsymbol{D^\text{f}}$ (\ref{eq:fastD}) as $\boldsymbol{\hat{c}}(t)-\boldsymbol{D^\text{s}\hat h^\text{s}}(t) = \boldsymbol{D^\text{f}r}(t)$, see Figure \ref{fig:sim2}\textbf{b}. Consequently, the signal $\boldsymbol{\hat c}$ can be decoded exactly from the combination of the spike train filtered with the exponential kernel of decay rate $\lambda$, $\boldsymbol{r}(t)$, and the slow synaptic current filtered with the same kernel, $\boldsymbol{\hat h^\text{s}}$: $\boldsymbol{\hat c}(t) = \boldsymbol{D^\text{s}\hat h^\text{s}}(t)+\boldsymbol{D^\text{f}r}(t)$, as shown in Figure \ref{fig:sim2}\textbf{c}. Again, as the size of the network is large, $N = 1452$ (how the number of neurons in the network and the matrix of the feedforward weights were chosen will be explained later in this section), the reconstruction is very good. Figure \ref{fig:sim2}\textbf{d} shows the time evolution of the membrane voltages of 5 randomly chosen neurons. As the feedforward input is now partially balanced by the recurrent one, the accumulation of the voltage is slower compared to the case of the network with only fast connections (see Figure \ref{fig:sim1}\textbf{d}). The total number of spikes in response to the same feedforward input is now $N_\text{sp} = 486$.
\begin{figure}
\includegraphics[width = 15cm]{figure3.pdf}
\centering
\caption{Numerical simulations. The network with one type of slow connections and no dimensionality expansion. \textbf{a.} The two components of the feedforward input $\boldsymbol{c}(t)$ and its reconstruction $\boldsymbol{D^\text{s}h^\text{s}}(t)$. The reconstruction is approximate, with the accuracy determined by the ratio of the synaptic decay rate $\lambda_\text{s}$ to the neuronal rate $\lambda$. In the present case this ratio is equal to 0.2. \textbf{b.} The two components of the leaky integral of the total input into the network, feedforward plus slow recurrent, $\boldsymbol{\hat c}(t) - \boldsymbol{D^\text{s}\hat h^\text{s}}(t)$, together with its reconstruction $\boldsymbol{D^\text{f}r}(t)$ (both scaled by $\lambda$). The reconstruction is exact, as in figure \ref{fig:sim1}\textbf{c}. However, as the slow recurrent input approximately balances the feedforward one, the amplitude of the total input is much smaller and consequently the spikes are sparser. \textbf{c.} The leaky integral of the feedforward input $\boldsymbol{\hat{c}}(t)$ can be recovered as the sum of the leaky integral of the recurrent input $\boldsymbol{D^\text{s}\hat {h}^\text{s}}(t)$ and the spikes of the network filtered with the neuronal time constant, $\boldsymbol{D^\text{f}r}(t)$. \textbf{d.} The evolution of the membrane voltages of five randomly chosen neurons. The voltages change much more slowly compared to figure \ref{fig:sim1}\textbf{d}, as the feedforward input is partially balanced by the recurrent one.
}
\label{fig:sim2}
\end{figure}
The results of simulating the two network structures described in sections \ref{sec:DE1} and \ref{sec:DE2} are shown in Figure \ref{fig:sim3}. The plots in panel \textbf{a} of Figure \ref{fig:sim3} show the principal components of the feedforward input, $c_1(t)$ and $c_2(t)$, and their reconstruction from the activity of the network of section \ref{sec:DE1}. This network still has only one synaptic time constant, $\lambda_\text{s} = 2$, but the dimensionality of its dynamics is expanded 2-fold relative to the dimensionality of the feedforward input by not constraining the synaptic current to stay in the input subspace, see (\ref{eq:Os2}). As before, the feedforward input is reconstructed from the network's spikes filtered with the synaptic kernel $\text{e}^{-\lambda_\text{s}t}$, namely $\boldsymbol{h^\text{s}}(t)$: $\boldsymbol{c}(t) = \boldsymbol{D^\text{s}h^\text{s}}(t)$, where the slow decoder $\boldsymbol{D^\text{s}}$ is given by (\ref{eq:DS2J}). The error of representation is corrected exactly after every spike, which implies that the feedforward input is cancelled by the projection of the recurrent one onto the input subspace after every spike. The additional recurrent input in the subspace spanned by the columns of the matrix $\boldsymbol{F^\text{int}}$ (see (\ref{eq:Os2})) is very small, and hence the time derivatives of the membrane voltages in Figure \ref{fig:sim3}\textbf{b} are very close to zero after every spike. The total number of spikes fired by this network is $N_\text{sp} = 268$, and the number of neurons in the network is $N = 1869$.
Panels \textbf{c} and \textbf{d} of Figure \ref{fig:sim3} refer to the network described in section \ref{sec:DE2}, which has two slow synaptic currents decaying with rates $\lambda_\text{s1} = 2$ and $\lambda_\text{s2} = 1.2$. The dimensionality of the network dynamics is expanded 3-fold in comparison with the dimensionality of the feedforward input. For this network, as one can see from Figure \ref{fig:sim3}\textbf{c}, not only the feedforward input but also its first time derivative is matched by the corresponding projection of the recurrent one after every spike, so the balance is maintained longer into the inter-spike interval. In Figure \ref{fig:sim3}\textbf{d} we plot the membrane voltages of 5 randomly chosen neurons. As the projections of the recurrent input onto the subspaces spanned by the columns of $\boldsymbol{F^\text{int}}$ and $\boldsymbol{\bar F^\text{int}}$ are not cancelled, the time derivatives of the voltages immediately after a spike are not zero. In fact, the derivatives right after a spike are larger than in the case of the network with only 2-fold dimensionality expansion (see Figure \ref{fig:sim3}\textbf{b}); however, they stay small for a longer time, delaying the next spike of the network. This is because the main contribution to the membrane voltage, namely the feedforward input, is predicted by the network and cancelled more efficiently.
\begin{figure}
\includegraphics[width = 15cm]{figure4.pdf}
\centering
\caption{Numerical simulations of the two networks with dimensionality expansion. \textbf{a.} The reconstruction of the input by the network with one synaptic time constant and 2-fold dimensionality expansion (section \ref{sec:DE1}). The reconstruction of the input $\boldsymbol{c}(t)$ from the slow synaptic current $\boldsymbol{D^\text{s}h^\text{s}}(t)$ is precise after every spike of the network. \textbf{b.} The membrane voltages of five randomly chosen neurons in the network of (\textbf{a}). As the feedforward input is balanced by the recurrent in the input subspace and the additional current in the $J$-dimensional internal subspace is small, the first derivatives of all the voltages are close to zero after every spike. \textbf{c.} The reconstruction of the input by the network with two synaptic time constants and 3-fold dimensionality expansion, described in the section \ref{sec:DE2}. In this case, not only the value but also the first derivative of the predictable feedforward input is matched by the total slow synaptic current after every spike. The two components of the input $\boldsymbol{c}(t)$ are encoded by the sum of the two slow synaptic currents $\boldsymbol{D^\text{s1}h^\text{s1}}(t)+\boldsymbol{D^\text{s2}h^\text{s2}}(t)$. \textbf{d.} The membrane voltages of five randomly chosen neurons in the network of (\textbf{c}). The feedforward current in the input subspace is still balanced by the synaptic current after every spike, while the additional current in the $2J$-dimensional internal subspace is larger than in (\textbf{b}), so the first derivatives of the voltages are further from zero. However, because the first derivative of the balancing synaptic current in the input subspace matches the derivative of the feedforward current, the voltages stay below threshold longer, delaying the next spike.
}
\label{fig:sim3}
\end{figure}
The increase in efficiency of the balanced network with its complexity is summarized in Figure \ref{fig:sim4}, where we plot the histograms of the absolute value of the representation error over time bins and also indicate the total number of spikes $N_\text{sp}$ fired by the corresponding network in response to the same input, which is shown in Figure \ref{fig:sim1}\textbf{a}.
\begin{figure}
\centering
\includegraphics[width = 15cm]{figure5.pdf}
\caption{The histograms of the absolute values of the decoding errors and the total number of spikes of the four versions of the balanced network. The errors were measured at every time step of the numerical simulation. \textbf{a.} The network with only fast connections. \textbf{b.} The network with one type of slow connections (one synaptic time constant) and no dimensionality expansion. \textbf{c.} The network with one type of slow connections and 2-fold dimensionality expansion. \textbf{d.} The network with two types of slow connections (two synaptic time constants) and 3-fold dimensionality expansion.
}
\label{fig:sim4}
\end{figure}
To choose the matrix of feedforward weights $\boldsymbol{F}$ and the matrices of the internal dimensions of the network dynamics $\boldsymbol{F^\text{int}}$ and $\boldsymbol{\bar F^\text{int}}$, we used the following procedure. We first determined the selectivity vectors $\boldsymbol{\hat d^{(i)}}$ of the neurons that would fire in response to the given input, and only generated the neurons with the corresponding vectors $\boldsymbol{F_i}$, $\boldsymbol{F^\text{int}_i}$ and $\boldsymbol{\bar F^\text{int}_i}$ (if applicable). In all versions of the network except for the one with only fast connections, we also generated $2(d-1)$ additional neurons per firing neuron (with $d$ being the dimensionality of the vector $\boldsymbol{\hat d^{(i)}}$), whose selectivity vectors differ from $\boldsymbol{\hat d^{(i)}}$ by a small shift in a direction orthogonal to $\boldsymbol{\hat d^{(i)}}$; a sketch of this construction is given below. The additional neurons with selectivity vectors similar to those of the firing neurons are included as a proof of principle. We use this procedure to imitate the infinite network, which would have a neuron for every chosen direction in its state space, but only a small number of them would be active in response to the given input. In the two cases when the dimensionality of the state space is equal to the dimensionality of the input space, namely $J = 2$ in our simulations, this was not necessary, as keeping the firing faces very small (small $\Delta$) while choosing the feedforward weights randomly would require a network size that is still reasonable. However, in the other two cases, where the dimensionality of the state space is equal to 4 or 6, this would be impractical.
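The neighbor-generation step can be sketched as follows (in Python rather than the provided Matlab code; the shift size \texttt{eps} is a hypothetical value): for each firing direction, an orthonormal basis of its orthogonal complement is built, and the selectivity vector is tilted by $\pm\texttt{eps}$ along each basis direction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def neighbors(d_hat, eps=0.05):
    """Return 2*(d-1) unit vectors obtained from the unit vector d_hat by a
    small shift eps along each direction orthogonal to d_hat."""
    d = len(d_hat)
    M = np.column_stack([d_hat, rng.standard_normal((d, d - 1))])
    Q, _ = np.linalg.qr(M)   # Q[:, 0] ~ d_hat (up to sign); Q[:, 1:] span
    out = []                 # the orthogonal complement of d_hat
    for k in range(1, d):
        for s in (+eps, -eps):
            v = d_hat + s * Q[:, k]
            out.append(v / np.linalg.norm(v))
    return out

print(neighbors(np.array([1.0, 0.0, 0.0])))   # 4 vectors close to e_1
\end{verbatim}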
We provide the Matlab code for setting up the networks and running the simulations at \url{https://github.com/l-kushnir/temporalStr}.
\section{Generalization of the framework}
In the setup of the previous sections, when the future evolution of the feedforward input is fully determined by its value at the current moment, one can imagine extending the dimensionality of the state space of the network and adding another slow time constant to impose the constraint that the second derivatives of the feedforward and the recurrent inputs are also matched after every spike, and similarly for the third derivative, the fourth, and so forth. The representation of the input will become more and more efficient until the period into the future for which the recurrent input closely reflects the feedforward one reaches the period for which the feedforward input is predictable, i.e., follows equation (\ref{eq:dynsys}). Matching higher-order derivatives will not decrease the number of spikes further.
It should also be noted that as we increase the number of derivatives to be matched, the dimensionality of the firing surface increases, and the number of neurons in the network should be increased accordingly to keep the size of the firing faces small.
\section{Cost function formulation}
In this section we draw a connection between the approach taken in this paper and the original approach of \cite{Boerlin2013}, where the network with fast connections (see section {\ref{sec:fastOnly}}) was derived as an implementation of greedy minimization of a cost function.
We start by introducing a $J$-dimensional variable $\boldsymbol{x}(t)$ that changes in time and is represented online by a sequence of discrete events. We do not specify at this point what the nature of these events is, but we refer to them as "spikes" for convenience. The online representation of the variable follows from greedy minimization of the time-dependent cost function ${\cal{L}}(t)$:
\begin{equation}
{\cal{L}}(t) = (\boldsymbol{x}(t)-\boldsymbol{\bar x}(t))^2 + \omega^2n_\text{sp}(t)
\label{eq:cost}
\end{equation}
here $\boldsymbol{x}(t)$ is the represented variable, $\boldsymbol{\bar x}(t)$ is its estimate, $n_\text{sp}(t)$ is the number of discrete events (spikes) up to time $t$, and $\omega$ is a parameter which specifies the cost associated with a spike.
We assume that there are $N$ types of events that differently affect the online estimate $\boldsymbol{\bar x}(t)$. Namely, an event of type $j$ updates the estimate by a $J$-dimensional vector $\boldsymbol{D_j}$, that we refer to as the decoding vector corresponding to the event $j$. These $N$ types will later be interpreted as $N$ neurons. Between the events, the estimate decays to zero with the rate $\lambda$.
This estimation, or decoding, process can be summarized by the equation
\begin{equation}
\bar x_\alpha(t) = \sum_{j = 1}^N\sum_{s = 1}^{N_j}D_{j \alpha}\text{e}^{-\lambda(t-t_s^j)}
\end{equation}
where $t_s^j$ is the time of the $s$-th spike of type $j$, and $N_j$ is the total number of such spikes up to the current moment $t$.
Note that
$$
\bar x_\alpha(t) = \sum_{j = 1}^ND_{j \alpha} r_j
$$
where $r$ is defined in section \ref{sec:1} (see (\ref{eq:hatr})).
We assume that $N$ is very large and that for any arbitrarily chosen vector of the representation error there is a decoding vector $\boldsymbol{D_i}$ sufficiently close to it, so that any error in the representation of $\boldsymbol{x}(t)$ can be corrected at any time. However, if the current error is small, this correction will not reduce the cost function ${\cal{L}}(t)$, as an additional spike increases the second term in (\ref{eq:cost}) by $\omega^2$.
Since any decoding vector is available, the greedy algorithm will choose the one that eliminates the error at the time of the event. Suppose this is the vector $\boldsymbol{D_i}$. Then
\begin{equation}
x_\alpha(t_s^i) - \bar x_\alpha(t_s^i) = D_{i \alpha}\hspace{1cm} \alpha = 1\dots J
\label{eq:eD}
\end{equation}
Greedy minimization implies that the spike reduces the value of the cost function
\begin{equation}
{\cal{L}}_\text{spike}(t_s^i) - {\cal{L}}_\text{no spike}(t_s^i) < 0.
\label{eq:dcost}
\end{equation}
If the encoded variable $\boldsymbol{x}(t)$ changes continuously, so does ${\cal{L}}_\text{no spike}(t)$, and the spike happens at the moment when it becomes equal to ${\cal{L}}_\text{spike}(t)$:
\begin{equation}
{\cal{L}}_\text{spike}(t_s^i) - {\cal{L}}_\text{no spike}(t_s^i) = 0
\label{eq:dcost1}
\end{equation}
Together with the definition (\ref{eq:cost}) and equation (\ref{eq:eD}), this condition leads to
\begin{equation}
-2\boldsymbol{D_i}^T(\boldsymbol{x}(t_s^i) -\boldsymbol{\bar x}(t_s^i) ) + \boldsymbol{D_i}^T\boldsymbol{D_i}+\omega^2=0
\label{eq:dcost2}
\end{equation}
Taking into account (\ref{eq:eD}), this becomes
\begin{equation}
|\boldsymbol{D_i}| = \omega,
\end{equation}
which implies that only the decoding vectors of norm $\omega$ are relevant, and only events of the types associated with such decoding vectors will be chosen. So, we can restrict the set of decoding vectors to
\begin{equation}
|\boldsymbol{D_i}| = \omega\hspace{1cm}i = 1\dots N
\end{equation}
Note that this is slightly different from the line of reasoning in \cite{Boerlin2013}, where the set of decoding vectors was assumed limited from the beginning.
The condition (\ref{eq:eD}) for the algorithm to choose the event of type $i$ at the moment $t_s^i$ can be rewritten as
\begin{equation}
\boldsymbol{D_i}^T(\boldsymbol{x}(t_s^i) -\boldsymbol{\bar x}(t_s^i))>\omega^2
\label{eq:fc1}
\end{equation}
We now introduce a vector $\boldsymbol{F_i}$ for every type of event $i$, which is aligned with the decoding vector $\boldsymbol{D_i}$ but can have an arbitrary norm $|\boldsymbol{F_i}|$:
\begin{equation}
\boldsymbol{F_i} = \frac{|\boldsymbol{F_i}|}{\omega}\boldsymbol{D_i}
\end{equation}
In terms of this set of vectors, the condition for a spike of type $i$ becomes
\begin{equation}
\boldsymbol{F_i}^T(\boldsymbol{x}(t_s^i) -\boldsymbol{\bar x}(t_s^i))>T_i
\label{eq:fc2}
\end{equation}
where
\begin{equation}
T_i = \omega|\boldsymbol{F_i}|
\end{equation}
We now interpret the time-dependent left-hand side of the condition (\ref{eq:fc2}) as the membrane voltage of an integrate-and-fire neuron, and $T_i$ as its firing threshold:
\begin{equation}
V_i(t) = \boldsymbol{F_i}^T(\boldsymbol{x}(t)-\boldsymbol{\bar x}(t))
\label{eq:VFe}
\end{equation}
What we called an event of type $i$ is now interpreted as a spike of neuron $i$, the decoding vectors associated with different neurons determine the matrix of synaptic connections (see below), and the vectors $\boldsymbol{F_i}$ are the feedforward vectors that describe how the $J$ components of the feedforward input are combined to obtain the input into neuron $i$.
Taking the time derivative of equation (\ref{eq:VFe}) gives the time evolution of the membrane voltage:
\begin{equation}
\dot{\boldsymbol{V}} = \boldsymbol{F}\dot{\boldsymbol{x}} - \boldsymbol{FD}\dot{\boldsymbol r} = \boldsymbol{F}(\dot{\boldsymbol{x}} -\lambda \boldsymbol{x }+ \lambda\boldsymbol{ x}) - \boldsymbol{FD}(-\lambda \boldsymbol{r} + \boldsymbol{\rho}) = -\lambda\boldsymbol{ V} + \boldsymbol{F c} + \boldsymbol{\Omega^\text{f}\rho}
\end{equation}
Here we have switched to matrix notation: $\boldsymbol{V}$ is an $N$-dimensional vector, $\boldsymbol{F}$ is an $N$ by $J$ matrix, and $\boldsymbol{D}$ is a $J$ by $N$ matrix. We have also introduced an $N$-dimensional vector $\boldsymbol{\rho}(t)$ of delta functions at the times of the spikes of the corresponding neurons:
\begin{equation}
\rho_i(t) = \sum_{s = 1}^{N_i}\delta(t-t_s^i)
\label{eq:rho}
\end{equation}
We also introduced an $N$ by $N$ matrix of fast synaptic connections $\boldsymbol{\Omega^\text{f}}$ and a $J$-dimensional vector of feedforward inputs $\boldsymbol{c}(t)$:
\begin{align}
&\boldsymbol{\Omega^\text{f}} = -\boldsymbol{FD}\notag\\
&\boldsymbol{c}(t) = \dot{\boldsymbol{ x}}(t)+\lambda \boldsymbol{x}(t)
\end{align}
The resulting network is in direct correspondence with an infinite network with only fast connections, described in section \ref{sec:fastOnly}. The matrix $\boldsymbol{D}$ corresponds to $\boldsymbol{D^\text{f}}$ of section \ref{sec:fastOnly}, and the represented variable $\boldsymbol{x}(t)$ is the leaky integral of the feedforward input into the network, which we denoted by $\boldsymbol{\hat c}(t)$ in section \ref{sec:fastOnly}.
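As an illustration of this correspondence, here is a minimal Python simulation (a sketch under stated assumptions, not the original implementation: the represented signal is constant, and the network size and all parameter values are hypothetical). It integrates the voltage equation above with thresholds $T_i = \omega|\boldsymbol{F_i}|$ and checks that the decoded estimate $\boldsymbol{Dr}$ tracks $\boldsymbol{x}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
J, N, lam, omega, dt, T_end = 2, 40, 1.0, 0.1, 1e-4, 5.0

D = rng.standard_normal((J, N))
D = omega * D / np.linalg.norm(D, axis=0)   # decoding vectors of norm omega
F = D.T / omega                             # |F_i| = 1, so thresholds T_i = omega
Omega_f = -F @ D                            # fast connectivity Omega^f = -F D
T = omega * np.ones(N)

x = np.array([0.5, -0.3])                   # constant represented variable
c = lam * x                                 # c = x' + lam x, with x' = 0

V = np.zeros(N)
r = np.zeros(N)
for step in range(int(T_end / dt)):
    V += dt * (-lam * V + F @ c)
    r += dt * (-lam * r)
    i = np.argmax(V - T)                    # greedy: at most one spike per step
    if V[i] > T[i]:
        V += Omega_f[:, i]                  # recurrent kick, includes self-reset
        r[i] += 1.0

print("x     =", x)
print("x_hat =", D @ r)                     # tracks x up to an O(omega) error
\end{verbatim}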
When assuming that for any vector of norm $\omega$ in the $J$-dimensional space there is a neuron $i$ whose decoding vector $\boldsymbol{D_i}$ is arbitrarily close to it, we implicitly assume that the network has an infinite size. For a finite-size network, there is a limited set of decoding vectors, all of which still have the norm $\omega$.
Unlike in the case of the infinite network, the norm of representation error can exceed the value of $\omega$, when there is no neuron with the decoding vector required to correct it exactly. A spike will be fired when the component of the representation error along one of the vectors $\boldsymbol{D_i}$ becomes equal to $\omega$ (equation (\ref{eq:fc1})), and only this component of the error will be removed (vector $\boldsymbol{D_i}$ will be added to the estimate $\boldsymbol{\bar x}(t)$). Choosing the best neuron in the network to fire and only when it reduces the cost function corresponds to the dynamics of vector $\boldsymbol{\hat d}$ bouncing within the firing surface as described in the section \ref{sec:fastOnly}.
If we now assume
\begin{equation}
\boldsymbol{F}^T\boldsymbol{F} =\gamma \boldsymbol{I}
\label{eq:Fgamma}
\end{equation}
where $\gamma$ is an arbitrary constant and $\boldsymbol{I}$ is a $J$ by $J$ identity matrix,
the cost function can be rewritten as
\begin{equation}
{\cal{L}}(t) = \frac{1}{\gamma}(\boldsymbol{x}(t)-\boldsymbol{\bar x}(t))^T\boldsymbol{F}^T\boldsymbol{F}(\boldsymbol{x}(t)-\boldsymbol{\bar x}(t)) + \omega^2n_\text{sp}(t)
\end{equation}
This form allows us to express the cost function in terms of the $N$-dimensional vector of neuronal voltages:
\begin{equation}
{\cal{L}}(t) = \frac{1}{\gamma}\boldsymbol{V}^2(t) + \omega^2n_\text{sp}(t)
\end{equation}
Note that the assumption (\ref{eq:Fgamma}) holds for a large network with the vectors $\boldsymbol{F_i}$ drawn from an isotropic distribution.
Formulating the cost function in purely network terms demonstrates the equivalence of the notion of the feedforward-recurrent balance (see section \ref{sec:Balance}) and the efficiency of representation, unifying the encoding and the dynamical approaches.
\section{Discussion}\label{sec:Dis}
We have shown how the structure of the feedforward input, spatial or temporal, can be incorporated into the connectivity structure of the network of integrate-and-fire neurons to allow for a more efficient online representation of the input by the network's spikes. According to the main principle of the feedforward-recurrent balance, the incorporation of the input structure into the network can be interpreted as the network "having an expectation" that the future feedforward input will obey this structure, and mimicking it in the generated recurrent input. When the difference between the expected and the actual feedforward input accumulates, a spike is fired to correct for this difference and to update the future expectation. Such a network implements the framework of predictive coding at the level of neuronal interaction.
There are two notions of input predictability that we are concerned with. The first notion is neuron-to-neuron predictability, namely inferring the state of, or the input into, all the neurons in the network based on the state of a particular one. The input structure that corresponds to this aspect of predictability is the low dimensionality of the feedforward input (see (\ref{eq:Ilow})), which we often referred to as spatial structure. The network with only fast connections considered in section \ref{sec:fastOnly} "expects" the input to change within the low-dimensional subspace which is the image of its connectivity matrix $\boldsymbol{\Omega^\text{f}}$, and updates the states of all the neurons after each spike by means of a large, short-lasting synaptic current. If the feedforward input indeed possesses this expected structure, the voltages of all the neurons are reset to zero after every population spike. The expectation about the future feedforward input in this case is that it will be zero after the spike, as reflected in the fact that the synaptic current is short-lasting. This network, which is a slight modification of the network described in \cite{Boerlin2013}, is predictive only in the spatial sense. For the network considered in section \ref{sec:SB}, the slowly decaying synaptic current implements the expectation that the feedforward current is smooth. To be more precise, the network expects the feedforward current to change according to the equation $\dot{\boldsymbol{c}} = -\lambda_\text{s}\boldsymbol{c}$, where $\lambda_\text{s}$ is the slow synaptic time constant of the network. In the last presented version of the network, section \ref{sec:TS}, the network predicts that the input follows the second-order differential equation $\ddot{\boldsymbol{c}}+(\lambda_1+\lambda_2)\dot{\boldsymbol c} + \lambda_1\lambda_2\boldsymbol{c} = \boldsymbol{0}$, as this is the equation obeyed by the sum of the two synaptic currents, one of which decays with the time constant $\lambda_1$ and the second with $\lambda_2$. The initial values of the input and its derivative are updated at every spike.
In the present manuscript we discussed the optimal network that incorporates different structures present in the feedforward input. It was shown in \cite{LongSI} that in the case of the network with only fast connections, the value of the connectivity matrix that optimally incorporates the prediction of the spatial structure (low dimensionality) of the input can be learned. Following the logic of \cite{LongSI}, we propose a modified learning rule for the connectivity matrix described in section \ref{sec:fastOnly}:
$$
d{\Omega^\text{f}_{ij}}(t) = -\varepsilon V_i\rho_j(t^-)
$$
where $V_i(t)$ is the postsynaptic voltage at the moment $t$, $\rho_j(t^-)$ is the presynaptic neural response function defined in (\ref{eq:rho}) evaluated slightly before the moment $t$ (the synaptic weights are updated only when the presynaptic neuron spikes, and the value of the postsynaptic voltage $V_i$ is evaluated after the spike), and $\varepsilon$ is the learning rate. In section \ref{sec:fastOnly} we derived that the ratio of the firing threshold to the absolute value of the feedforward weights, $\frac{T_i}{|\boldsymbol{F_i}|}$, in the optimal network is the same for all neurons. This is achieved by equalization of the mean firing rates, either by adjusting the thresholds (homeostatic thresholds) or by scaling up or down all the feedforward synapses converging onto a given neuron.
Simulations show that the proposed learning rule leads to the optimal connectivity matrix (\ref{eq:flr}) under an appropriate choice of parameters. The details of these simulations are beyond the scope of this paper.
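A sketch of how this rule could enter a simulation loop (the learning rate is a hypothetical value; the voltage passed in is the one obtained after the recurrent kick of the spike, as required by the rule):
\begin{verbatim}
eps = 1e-3   # hypothetical learning rate

def fast_weight_update(Omega_f, V_post, j):
    """d(Omega_f)_{ij} = -eps * V_i * rho_j(t^-): when presynaptic neuron j
    spikes, move its outgoing weights against the post-spike voltages."""
    Omega_f[:, j] -= eps * V_post
    return Omega_f
\end{verbatim}
In a simulation loop like the sketch of the previous section, this update would be applied right after the recurrent kick, with $j$ equal to the index of the spiking neuron.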
We hypothesize that once the feedforward input has a corresponding temporal structure in addition to the spatial one, and the synaptic interaction is mediated by one or more types of long-lasting currents whose effects on the postsynaptic neurons can be adjusted separately, a similar learning rule will lead to the connectivity described in sections \ref{sec:SB} - \ref{sec:DE1}. The learning rule we propose is
$$
d\Omega^{(a)}_{ij} = -\epsilon V_i(t)h^{(a)}_j(t)
$$
where $h^{(a)}_j(t)$ is the presynaptic current of type $(a)$. By different types of synaptic currents we mean different time courses of $h^{(a)}_j(t)$ after the spike. In the current work we have specified this dynamics as an exponential decay with different time constants corresponding to different types; however, this was not necessary for the line of reasoning. The assumed exponential decay can be substituted with virtually any reproducible synaptic dynamics, as long as it differs sufficiently between the current types and covers the range of time constants required to match the feedforward input.
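The analogous discretized update, again as a hedged sketch (here \texttt{h\_a} stands for the vector of presynaptic currents of type $(a)$, e.g. exponentially filtered spike trains), runs continuously in time rather than only at spike times:
\begin{verbatim}
import numpy as np

def slow_weight_update(Omega_a, V, h_a, eps, dt):
    """Continuous-time rule d(Omega_a)_{ij} = -eps * V_i * h^(a)_j,
    discretized with time step dt."""
    Omega_a -= eps * dt * np.outer(V, h_a)
    return Omega_a
\end{verbatim}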
Whether this learning rule leads to the connectivity structure corresponding to expanded dimensionality of network dynamics, as described in sections \ref{sec:SB} - \ref{sec:DE1}, and how the network "chooses" the internal dimensions (the matrix $\boldsymbol{F^\text{int}}$) and the matrices $\boldsymbol{\tau}$ remains a subject of future investigation.
\begin{figure}
\centering
\includegraphics[width = 10cm]{figure6.pdf}
\caption{Coding and Simulator networks}
\label{fig:CodeSim}
\end{figure}
The slow currents $\boldsymbol{h^\text{s1}}$ and $\boldsymbol{h^\text{s2}}$ of section \ref{sec:TS} can be generated by another network, instead of in the synapses of the network encoding the input. In figure \ref{fig:CodeSim} we show schematically the architecture of such a two-subnetwork system that is equivalent to the single network described in section \ref{sec:TS}. The coding subnetwork, which receives the feedforward input, has only fast recurrent connections (see section \ref{sec:fastOnly}), while the slow synaptic input $\boldsymbol{\Omega^\text{s1}h^\text{s1}+\Omega^\text{s2}h^\text{s2}}$ is substituted by the input from the simulator network. The simulator is the network described in \cite{Boerlin2013}, whose output is the solution to a given differential equation with the source term given by the external input. This network receives a particular projection of the vector of spikes of the coding network as its input and generates in response a sustained output. In order to mimic the two synaptic currents of section \ref{sec:TS}, the simulator network must generate a $2J$-dimensional dynamics with the first $J$ components decaying at the rate $\lambda_\text{s1}$ and the rest at the rate $\lambda_\text{s2}$ (see figure \ref{fig:CodeSim}). While mathematically the single network with slow synapses and the system of two subnetworks are equivalent, in the latter the exponential decay can be substituted by any other linear or non-linear \cite{Alemi2017} dynamics. This allows one to implement predictions about the feedforward input on a time scale longer than the decay of synaptic currents. When the dynamics of the simulator does not match the expected dynamics of the input, as in the example above, the dimensionality of the simulator dynamics should be higher, in order to approximate the expected evolution of the input. One could imagine, however, that when a given temporal pattern in the input persists, the recurrent connections within the simulator might change in order to reproduce the dynamics of the input \cite{LearningSlowConnections}, in which case the dimensionality of the simulator would drop to match the dimensionality of the encoded input.
\section{Acknowledgements}
LK would like to thank Christian Machens, Srdjan Ostojic, Mirjana Maras, Vincent Bouttier and Ivan Gordeli for useful discussions.
\section{Appendix} \label{sec:App}
In this section we derive equation (\ref{eq:boundN}), the lower bound on the number of neurons $N$ in the case of expanding the dimensionality of the network dynamics in comparison with the dimensionality of the input, as described in section \ref{sec:DE1}. In this scenario, only a small part of the directions in the $2J$-dimensional state space will be explored by the network. Indeed, let us consider the projection of the state vector $\boldsymbol{\hat d}(t)$ onto the input space, which is the $J$-dimensional vector of the first $J$ components of $\boldsymbol{\hat d}(t)$, $\boldsymbol{\hat d_{1\dots J}}(t)$, and the projection of $\boldsymbol{\hat d}(t)$ onto the internal space of the network dynamics, $\boldsymbol{\hat d_{J+1\dots 2J}}(t)$. The ratio of the absolute values of these projections is equal to the tangent of the angle between the state vector and the input subspace, which we call \emph{latitude} and denote by $\alpha(t)$. At the moment of a spike, the state vector $\boldsymbol{\hat d}(t)$ is equal to one of the firing vectors $\boldsymbol{\hat d ^{(i)}}$, and it follows from the firing condition (\ref{eq:ASCDE1}) that at the moment of the spike the angle $\alpha(t^i_\text{sp})$ is determined by
\begin{equation}
\text{tan}(\alpha(t^i_\text{sp})) = \frac{|\boldsymbol{\hat d_{J+1\dots 2J}^{(i)}}|}{|\boldsymbol{\hat d_{1\dots J}^{(i)}}|} = \frac{|\boldsymbol{\tau c}(t_\text{sp}^i)|}{\lambda\omega}
\end{equation}
Assuming that the ratio $\frac{|\boldsymbol{c}(t)|}{\lambda}$ (which is approximately equal to $|\boldsymbol{x}(t)|$) is bounded by a constant of order one, which we denote by $B$, we can write
$$
|\text{tan}(\alpha(t^i_\text{sp}))|\leq B\frac{|\boldsymbol{\tau}|}{\omega}
$$
We assume that $\frac{|\boldsymbol{\tau}|}{\omega}$ is small, and use the small-angle approximation $\text{tan}(\alpha(t))\approx\alpha(t)$.
This implies that only neurons whose firing faces are in the direction
\begin{equation}
|\alpha| \leq B\frac{|\boldsymbol{\tau}|}{\omega}\equiv\alpha_0
\label{eq:boundAlpha}
\end{equation}
will ever fire. The rest of the space should also be covered by the firing faces of neurons, to make the firing surface closed for the case of unexpected inputs, but this coverage can have a much smaller resolution. As the number of these "additional" neurons is small in comparison with the number of neurons whose firing faces are in the directions that satisfy (\ref{eq:boundAlpha}), we will ignore them in the following calculation.
So, instead of covering the entire sphere of radius $\omega$ in the $2J$-dimensional space with the patches of linear size $\Delta$, we only need to cover a spherical segment determined by the constraint on the latitude (\ref{eq:boundAlpha}). We denote the area of this segment by $A_{2J,\alpha_0}$.
$$
A_{2J,\alpha_0} = (2\alpha_0\omega)^JV_JS_J\omega^{J-1} = (2\alpha_0)^J\omega^{2J-1}V_JS_J
$$
Here $S_J$ is the area of the unit sphere and $V_J$ is the volume of the unit ball in $J$-dimensional space, and $\alpha_0$ is defined in (\ref{eq:boundAlpha}). This area should be covered by patches of radius $\Delta$, which we approximate as $(2J-1)$-dimensional balls.
$$
A_{2J,\alpha_0} = N\Delta^{2J-1}V_{2J-1}
$$
Equating the two expressions for $A_{2J,\alpha_0}$ and taking into account (\ref{eq:tauBig}) as a bound on $\Delta$, we get the bound on the number of neurons in the network
$$
N\gg \frac{(2B)^JV_JS_J}{V_{2J-1}\omega^J|\boldsymbol{\tau}|^{J-1}}\left(\frac{\lambda^\text{slow}}{\lambda}\right)^{2J-1}
$$
which is equation (\ref{eq:boundN}). Note that the matrix $\boldsymbol \tau$ should still be chosen to satisfy (\ref{eq:tauSmall}).
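For orientation, the bound (\ref{eq:boundN}) is easy to evaluate numerically. A short Python sketch, using the standard formulas $V_n = \pi^{n/2}/\Gamma(n/2+1)$ for the unit-ball volume and $S_n = 2\pi^{n/2}/\Gamma(n/2)$ for the unit-sphere area (all the remaining numbers below are hypothetical):
\begin{verbatim}
from math import pi, gamma

def V(n):   # volume of the unit ball in R^n
    return pi ** (n / 2) / gamma(n / 2 + 1)

def S(n):   # area of the unit sphere embedded in R^n
    return 2 * pi ** (n / 2) / gamma(n / 2)

def N_bound(J, B, omega, tau, ratio):
    """Right-hand side of (eq:boundN); 'ratio' stands for lambda_slow/lambda."""
    return ((2 * B) ** J * V(J) * S(J)
            / (V(2 * J - 1) * omega ** J * tau ** (J - 1))
            * ratio ** (2 * J - 1))

print(N_bound(J=2, B=1.0, omega=0.1, tau=0.05, ratio=0.5))
\end{verbatim}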
In the case of further dimensionality expansion, as described in section \ref{sec:DE2}, the above estimate changes. Let $\tau_\text{max}$ be the largest (in absolute value) of the eigenvalues of the matrices $\boldsymbol{\tau_1}$, $\boldsymbol{\tau_2}$, $\boldsymbol{\bar\tau_1}$ and $\boldsymbol{\bar \tau_2}$. Then the absolute value of the angle between the state vector $\boldsymbol{\hat d}(t)$ and the input space is bounded by
$$
\alpha_{0} = \frac{\tau_\text{max}B}{\omega}
$$
The area of the corresponding spherical segment in the $3J$-dimensional space is given by
$$
A_{3J,\alpha_0} = (2\alpha_0\omega)^{2J}V_{2J}S_J\omega^{J-1} = (2\alpha_0)^{2J}\omega^{3J-1}V_{2J}S_J
$$
The same area should be covered with $(3J-1)$-dimensional patches of linear size $\Delta$:
$$
A_{3J,\alpha_0} = N\Delta^{3J-1}V_{3J-1}
$$
which together with (\ref{eq:tauBig}) leads to the constraint (\ref{eq:boundN3}) on the number of neurons:
$$
N\gg \frac{(2B)^{2J}V_{2J}S_J}{V_{3J-1}\omega^{2J}\tau_\text{max}^{J-1}}\left(\frac{\lambda^\text{slow}}{\lambda}\right)^{3J-1}
$$
\bibliographystyle{utphys}
\section{Introduction}
The problem of flame propagation in a flow of gaseous mixture with a fixed ignition point is important from both experimental and theoretical points of view. In fact, the relative simplicity of practical realization and the importance in applications have made anchored flames one of the most popular topics in combustion science. Despite these circumstances, the theoretical description of the subject is far from complete. It would not even be an exaggeration to say that the very mechanism of formation of steady flame configurations is not fully understood. One of the most difficult problems here is the influence of the anchoring system on the flame, in particular, the question of its locality. Analytical investigation of this arduous question is so complicated that it is usually not raised at all. Another closely related problem is the identification of mechanisms driving the development of flame disturbances. It has been known for a long time that anchored flames may develop bulbous structures \citep{progreport1949}, and that their stability properties are affected by gravity more diversely and more deeply than in the case of freely propagating flames \citep{cheng1,cheng2}, which means that the stabilizing mechanisms in the two instances are quite different.
These issues are especially nontrivial in the case of two-dimensional flames. Namely, the reduction of dimensionality changes the long-range behavior of the Green functions, leading to enhanced interaction between distant parts of the flame. Specifically, the pressure distribution is determined by the Green function of the Laplace operator, whose integral kernel grows logarithmically with distance in two dimensions, and therefore so does the response to a point source. This indicates that the regions with large velocity gradients, in particular the vicinity of the anchor, may have a strong nonlocal impact on the global flame structure.
Of particular interest are high-velocity streams. Experiments indicate that when the incoming gas velocity significantly exceeds the normal flame burning speed, the flame front assumes a highly elongated shape which can often be well approximated by straight lines (V-flames). Even if the front shape is not piecewise linear, simplifications admitted by the high-velocity limit make it accessible to theoretical investigation. An important example is the flame anchored in a high-velocity uniform stream in a channel. The first detailed theoretical account of this case was given by \citep{zel1944}, who derived an integral equation for the front position and obtained its numerical solutions [the main results of this work are reproduced in the book \citep{zeldo1985}]. The problem was considered independently by \citep{scurlock1948}, whose results were subsequently critically reviewed and clarified by \citep{tsien1951}. In the latter work, in particular, the main assumptions employed in the analysis were identified and used to derive an integral equation similar to that of \citep{zel1944}.
All three works deal with the steady case and assume the following:
1) For sufficiently high stream velocity, curvature of the stream lines can be neglected. The gas velocity is thus parallel to the channel walls everywhere. This implies that the gas pressure is constant in every cross section of the fresh and burnt gas regions. Furthermore, neglecting the relatively small constant pressure jump across the front makes pressure constant in every cross section of the channel.
2) The fresh gas velocity is also constant in every cross section, both up- and downstream of the ignition point. Neglecting velocity jump at the front makes the flow field continuous everywhere in the channel.
Using the Bernoulli integral under these assumptions, it is straightforward to show that the pressure is monotonic along the channel, so that it can be taken as an independent coordinate \citep[see][]{zel1944}. Alternatively, the fresh gas velocity, which is also monotonic along the channel, can be taken as the independent variable \citep[see][]{tsien1951}. Either way, the use of mass conservation gives an integral equation for one of the flow variables.
It should be mentioned that no attempt was made in the cited papers to justify the approximations 1), 2) any more rigorously than outlined above. Although these approximations look natural, the exclusion of the transversal velocity component from the list of dynamical variables represents a quite nontrivial step. Indeed, replacing the system of two Euler equations by the single Bernoulli integral means that the two velocity components are completely decoupled from each other. Considered on its own, this reduction of the system is legitimate under the assumption of high stream velocity. But the flow equations themselves do not constitute the complete system of equations governing flame propagation. They must be supplemented by the evolution equation and the jump conditions at the flame front, as well as boundary conditions at the channel walls. And the two velocity components are strongly coupled by the evolution equation: relatively small variations of the transversal component give rise to large variations of the longitudinal component [Cf. Eq.~(\ref{evolutiongen1}) below]. At the same time, the exclusion of the transversal component from the consideration, based on the assumptions 1), 2), makes this essential equation obsolete. Thus, in order that the evolution equation take its proper place in the analysis of flame propagation, it is necessary to bring the transversal velocity component back into consideration. This requires in turn restoration of the remaining flow equation and the corresponding jump condition.
An important step in validating the model described above was made by \citep{cherny1954}, who showed that the equations derived by Zel'dovich and Tsien are asymptotically exact. More precisely, he formulated the way the high-velocity limit should be taken in the complete system of governing equations, and demonstrated that the resulting system reduces to an integral equation which is equivalent to the equations derived by these authors. One of the purposes of the present paper is to show that Cherny's consideration is not complete. Namely, it omits one of the boundary conditions to be satisfied by the flow velocity in the bulk. It turns out that enforcement of the missing condition makes the integral equation trivial, in the sense that its only solution satisfying all boundary conditions is the rectilinear front configuration with constant upstream gas velocity.
Evidently, this fact entails two conclusions. First, the assumptions 1), 2) mentioned above oversimplify the problem, and cannot be used to describe nontrivial flame configurations. Second, the limiting procedure proposed by \citep{cherny1954} is also inadequate. The main purpose of the present paper is to derive the correct equation describing flames in high-velocity streams, and to find its nontrivial solutions.
For this purpose, the on-shell description of steady flames, developed in \citep{kazakov1,kazakov2}, will be used. Its main advantage is that it describes flames in a closed form, {\it i.e.,} in a form involving only quantities defined on the flame front. There is no need to solve the flow equations in the bulk explicitly. In particular, it allows one to avoid artificial assumptions about the bulk flow, such as the scaling laws for the gas velocity and pressure in the high-velocity limit, which are adopted in one way or another by the conventional approach. This description was initially given for freely propagating flames, but it admits a simple and natural extension to anchored flames. This generalization is obtained in \citep{jerk3}.
The paper is organized as follows. The Zel'dovich-Scurlock-Tsien approach is critically reviewed in Sec.~\ref{critiques}, which starts with a brief account of Cherny's formulation of the high-velocity limit. It is shown, in particular, that this formulation reproduces the assumptions 1), 2) of the Zel'dovich-Scurlock-Tsien approach, and that the only solution of the main equation satisfying the boundary conditions is the trivial solution. The reasons underlying this result are identified in Secs.~\ref{boundarycond}, \ref{roleeq}. We then go over to the on-shell description in Sec.~\ref{onshell}, summarizing the main equations derived in \citep{kazakov1,kazakov2} and their extension to anchored flames. The reader is referred to \citep{jerk3} for more details concerning the inclusion of the anchoring system and its analytical description. Sections \ref{rolep} -- \ref{evolutionequation} are counterparts of Secs.~\ref{chernyf} -- \ref{roleeq}. They discuss the role of pressure, and indicate the place the boundary conditions and the evolution equation take in our approach. The solution of the on-shell equations is obtained in Sec.~\ref{solution}. In Sec.~\ref{largesl}, a large-slope expansion of these equations is derived, which extends to curved flames the corresponding expansion constructed in \citep{jerk3} for V-flames. Using this expansion, it is shown in Sec.~\ref{reduction1} that the main integro-differential equation for the complex velocity reduces to ordinary differential equations. Together with the evolution equation, these equations can be partially integrated and further reduced to a single second-order differential equation. This is done in Sec.~\ref{reduction}, where numerical solutions of the derived equations are also found. Section \ref{conclusions} summarizes the results of the work. The paper has an appendix which describes in detail the transition to the case of vanishingly small anchor dimensions within the large-slope expansion.
\section{Critiques of Zel'dovich-Scurlock-Tsien approach}\label{critiques}
\subsection{Cherny's formulation of the high-velocity limit}\label{chernyf}
We start by briefly recalling the results of \citep{cherny1954}, using notation appropriate for the present paper. These results provide a rigorous basis for the Zel'dovich-Scurlock-Tsien approach. Consider a steady two-dimensional combustible ideal gas stream in a channel with plane-parallel walls, ignited at a fixed point in the middle of the channel (see Fig.~\ref{fig1}, where only the right half of the channel is shown). This point (the flame anchor) will be chosen as the origin of a Cartesian system of coordinates $\bm{r} = (x,y),$ with the $y$-axis along the wall, and the initially uniform fresh gas at $y=-\infty.$ The channel half-width will be taken as the unit of length, while the gas velocity $\bm{v} = (w,u)$ will be measured in units of the normal flame velocity relative to the fresh gas. Then the flow variables obey the following equations in the bulk
\begin{eqnarray}\label{flow1}
\frac{\partial w}{\partial x} + \frac{\partial u}{\partial y} &=& 0\,,
\\ w\frac{\partial w}{\partial x} + u\frac{\partial w}{\partial y} &=& - \frac{1}{\rho}\frac{\partial p}{\partial x} \,, \label{flow2}
\\ w\frac{\partial u}{\partial x} + u\frac{\partial u}{\partial y} &=& - \frac{1}{\rho}\frac{\partial p}{\partial y} \,, \label{flow3}
\end{eqnarray}
\noindent where $p,\rho$ are, respectively, the gas pressure and density. Under the assumption that the flow is essentially subsonic, the gas can be considered incompressible. Taking the fresh gas density as the density unit, that of the burnt gas is $1/\theta,$ where $\theta>1$ is the gas expansion coefficient.
The flame configuration is assumed to be symmetric with respect to the $y$-axis. More precisely,
\begin{eqnarray}\label{antisym}
f(x) = f(-x)\,, \qquad w(x,y) = - w(-x,y)\,, \qquad u(x,y)
= u(-x,y)\,.
\end{eqnarray}
\noindent In particular, the transversal velocity component vanishes at the symmetry axis, $w(0,y) = 0,$ so that this formulation applies also to the case of a flame anchored at the channel wall, the point of view taken up in \citep{zel1944}.
Cherny formulated the high-velocity limit as follows. Denote the fresh gas velocity far from the flame front by $U,$ and introduce new (designated with a tilde) coordinates and flow variables according to
\begin{eqnarray}\label{newvar}
\tilde{x} = x\,, \quad \tilde{y} = y/U\,, \quad \tilde{u} = u/U\,, \quad
\tilde{w} = w\,, \quad \tilde{p} = p/U^2\,.
\end{eqnarray}
\noindent After that, switch to the new independent variables $(\tilde{y},\psi),$ where $\psi$ is the stream function defined by
\begin{eqnarray}\label{psivar}
\rho \tilde{u} = \frac{\partial \psi}{\partial \tilde{x}}\,, \quad \rho \tilde{w} = - \frac{\partial \psi}{\partial \tilde{y}}\,.
\end{eqnarray}
\noindent
Then the limit $U \to \infty$ is to be taken under the assumption that the quantities $\tilde{w},\tilde{u},\tilde{p}$, as well as their derivatives with respect to $\tilde{y},\psi$, remain bounded. Equations~(\ref{flow2}), (\ref{flow3}), (\ref{psivar}) thus take the form
\begin{eqnarray}\label{floweq}
\frac{\partial \tilde{p}}{\partial\psi} = 0\,, \quad \tilde{u}\frac{\partial \tilde{u}}{\partial \tilde{y}} + \frac{1}{\rho}\frac{\partial \tilde{p}}{\partial \tilde{y}} = 0\,, \quad \frac{\partial \tilde{x}}{\partial\psi} = \frac{1}{\rho \tilde{u}}\,, \quad \frac{\partial \tilde{x}}{\partial\tilde{y}} = \frac{\tilde{w}}{\tilde{u}}\,.
\end{eqnarray}
\noindent The first two equations give
\begin{eqnarray}\label{floweqint}
\tilde{p} = \tilde{p}(\tilde{y})\,, \quad \frac{\tilde{u}^2}{2} + \frac{\tilde{p}}{\rho} = i(\psi)\,,
\end{eqnarray}
\noindent where $i(\psi)$ is an arbitrary function. There are also jump conditions to be satisfied across the flame front, which in the limit $U \to \infty$ read
\begin{eqnarray}\label{boundclim}
\tilde{u}_+ = \tilde{u}_-\,, \quad \tilde{p}_+ = \tilde{p}_-\,, \quad \psi_+ = \psi_-\,,
\end{eqnarray}
\noindent where the minus (plus) subscript denotes the restriction to the flame front of the flow function defined upstream (downstream). Noting that $i(\psi) = {\rm const}$ upstream (since the incoming flow is uniform), we see that equations (\ref{floweqint}), together with the first two equations (\ref{boundclim}), exactly reproduce the assumptions 1) and 2) stated in the introduction. Next, integrating the third of equations (\ref{floweq}) and combining its solution with the relations~(\ref{floweqint}), Cherny arrives at the following integral equation for the function $\tilde{y}(\tilde{u}_-)$
\begin{eqnarray}\label{zteq}
(\theta - 1)\int\limits_{1}^{u_1}\frac{u\tilde{y}(u)du}{\sqrt{\theta u^2_1 - (\theta - 1)u^2}} = \frac{(u_1 - 1)^2}{2}\,, \quad u_1\equiv \tilde{u}_-\,,
\end{eqnarray}
\noindent which is equivalent to the equations derived by Zel'dovich and Tsien. This completes the proof that the assumptions 1), 2) of Zel'dovich-Scurlock-Tsien approach are equivalent to the limiting procedure formulated by Cherny.
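Equation (\ref{zteq}) is a Volterra equation of the first kind with a smooth kernel, so its numerical solution is straightforward. The following minimal Python sketch (not from the cited papers; the expansion coefficient and grid parameters are hypothetical) solves it by product integration on a uniform grid, marching in $u_1$:
\begin{verbatim}
import numpy as np

theta, M, u_max = 6.0, 2000, 5.0
h = (u_max - 1.0) / M
u = 1.0 + h * np.arange(M + 1)       # collocation points u_k, u_0 = 1
um = u[:-1] + h / 2                  # midpoints carrying the unknowns y(um)

def K(u1, uu):                       # smooth kernel of (zteq)
    return (theta - 1.0) * uu / np.sqrt(theta * u1**2 - (theta - 1.0) * uu**2)

y = np.zeros(M)                      # y[k-1] approximates tilde-y(um[k-1])
for k in range(1, M + 1):
    g = (u[k] - 1.0) ** 2 / 2.0
    acc = h * np.dot(K(u[k], um[:k - 1]), y[:k - 1])
    y[k - 1] = (g - acc) / (h * K(u[k], um[k - 1]))
\end{verbatim}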
But this is not the end of the story. It turns out that the consideration of \citep{cherny1954} is not complete, in that it does not take into account the boundary conditions for the gas flow. More precisely, these conditions are applied only at the end-points of the flame front, but not in the bulk. In terms of the stream function, the condition that the transversal component $w$ of the gas velocity vanish at the walls and at the symmetry axis reads
\begin{eqnarray}\label{bcpsi1}
\psi = 0 \quad {\rm for} \quad \tilde{x} = 0, \\
\psi = 1 \quad {\rm for} \quad \tilde{x} = 1, \label{bcpsi2}
\end{eqnarray}
\noindent as is seen from equations~(\ref{psivar}) written in the form
$d\psi = -\rho \tilde{w}d\tilde{y} + \rho \tilde{u}d\tilde{x}.$
Consider the flow upstream. Since $i(\psi) = {\rm const}$ there, Eqs.~(\ref{floweqint}) tell us that $u$ is independent of $\psi,$ so that integration of the third of Eqs.~(\ref{floweq}) yields
$$\tilde{x} = \frac{\psi}{\tilde{u}(\tilde{y})} + X(\tilde{y})\,,$$ where the function $X(\tilde{y})$ is to be determined from the boundary conditions (\ref{bcpsi1}), (\ref{bcpsi2}). Substitution gives $$X(\tilde{y}) = 0\,, \quad \tilde{u}(\tilde{y}) = 1\,.$$ Therefore, the flow turns out to be uniform in the whole region upstream of the flame front. Then Eqs.~(\ref{floweqint}), (\ref{boundclim}) show that so is the flow downstream. Thus, $u_1=1$ is the only solution of Eq.~(\ref{zteq}) consistent with the boundary conditions at the channel walls. In other words, the limiting procedure proposed by Cherny, and hence the equivalent assumptions 1), 2) cannot be used to describe nontrivial flame configurations.
Technically, this means that the scalings (\ref{newvar}) are inappropriate. For instance, the scaling of pressure only seems natural. It is suggested by the Bernoulli integral
$$\frac{p}{\rho} + \frac{1}{2}(u^2 + w^2) = {\rm const}$$ which, after neglecting $w$ in comparison with the large $u,$ would give $p \sim U^2.$ However, it is only the variable part of the pressure that matters. Setting $u = U + \hat{u},$ and absorbing $U^2/2$ into the ``{\rm const}'' on the right gives
$$\frac{p}{\rho} + U\hat{u} + \frac{\hat{u}^2}{2} = {\rm const}\,.$$ Now, $p\sim U$ if $\hat{u}$ is assumed bounded, or something else if $\hat{u}$ behaves differently.
\subsection{Boundary conditions}\label{boundarycond}
Although not always stated explicitly, impermeability of the channel walls (and of the symmetry axis) is assumed in the Zel'dovich-Scurlock-Tsien approach, but it is not used. As we saw in the preceding section, enforcing this condition in Cherny's formulation makes the flow trivial, leaving the piecewise linear front configuration with constant gas velocities up- and downstream as the only possibility. In this formulation, impermeability of the walls and the symmetry axis is expressed in the form (\ref{bcpsi1}), (\ref{bcpsi2}), but these conditions are used only at the end-points of the flame front and at the origin. In fact, things must be just the opposite: the transversal gas velocity must vanish at the channel walls and the symmetry axis everywhere except in a small vicinity of the flame anchor. Indeed, suppose for simplicity that the flame is anchored by a cylindrical rod with circular cross-section (as is often the case in practice, see Fig.~\ref{fig2}). Then the gas flow is significantly disturbed in a vicinity of the rod: the slowdown of gas elements heading toward the rod leads to the appearance of a nonzero transversal velocity component. This happens no matter how small the rod radius is. If one follows the trajectory of a fresh-gas element moving near the symmetry axis, its transversal velocity rapidly changes near the rod from zero to some finite value at the flame front. This means that in the limit of vanishing anchor dimensions ({\it i.e.,} in the Zel'dovich-Scurlock-Tsien setting), the gas flow is singular at the point of its location.
To be more specific, the presence of the rod in a high-velocity stream can be described mathematically by superimposing the incoming flow velocity $\bm{v}^0 = (0,U)$ with the velocity field $\bm{v}^d$ of a dipole located at the origin
\begin{eqnarray}\label{dipole}
\bm{v}^d = \frac{UR^2}{r^4} (-2xy, x^2 - y^2)\,,
\end{eqnarray}
\noindent where $R \ll 1$ is the rod radius. Indeed, if the rod is located downstream, as shown in Fig.~\ref{fig2}, the flame front bends round the rod and, for large $U,$ gets close to its surface, so that the normals to the front and to the rod surface coincide. To leading order, the normal gas velocity is negligible in comparison with $U.$ Hence, the velocity of the fresh gas near the rod must satisfy
\begin{eqnarray}\label{dipoleident}
(\bm{v},\bm{\nu}) = 0\,,
\end{eqnarray}
\noindent
where $\bm{\nu}$ is the normal to the rod. The velocity field $\bm{v}^0 + \bm{v}^d$ does satisfy this condition by virtue of the identity $(\bm{v}^0 + \bm{v}^d,\bm{r})|_{r=R} = 0,$ because $\bm{r}$ is normal to the rod. On the other hand, if the rod is located upstream ({\it i.e.,} flame is stabilized in the wake of the rod), Eq.~(\ref{dipoleident}) is the true boundary condition obeyed by $\bm{v}^0 + \bm{v}^d$ identically.
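As a consistency check, the impermeability condition (\ref{dipoleident}) for the field $\bm{v}^0 + \bm{v}^d$ can be verified numerically. A small Python sketch (the values of $U$ and $R$ are hypothetical) evaluates $(\bm{v}^0 + \bm{v}^d,\bm{r})$ on the rod surface:
\begin{verbatim}
import numpy as np

U, R = 50.0, 0.02
phi = np.linspace(0.0, 2.0 * np.pi, 13)[:-1]
x, y = R * np.cos(phi), R * np.sin(phi)      # points on the rod surface r = R

r2 = x**2 + y**2
wd = U * R**2 / r2**2 * (-2.0 * x * y)       # transversal component of v^d
ud = U * R**2 / r2**2 * (x**2 - y**2)        # longitudinal component of v^d

# (v^0 + v^d, r) should vanish identically on the surface:
print(np.max(np.abs(x * wd + y * (U + ud)))) # zero up to round-off
\end{verbatim}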
The transversal velocity component induced by the rod is large for $r\sim R,$ and rapidly decreases with distance. At distances $r$ such that $R\ll r \ll 1,$ the inner solution describing the flow near the rod is to be matched with the large-scale outer solution we are interested in. In particular, matching at the flame front assigns $w_-$ a definite value, say~$w_0.$ Since the transversal velocity in the outer flow is of the order of unity, so is $w_0.$ Let us denote by $R_0,$ $R\ll R_0 \ll 1,$ the characteristic distance where the two solutions are matched near the flame front. We thus have (considering the right branch of the front, $x>0$)
\begin{eqnarray}\label{bcpsir}
w_-|_{r\sim R_0} = w_0\,,
\end{eqnarray}
\noindent where $w_0$ is a positive or negative number, while \begin{eqnarray}\label{bcpsiry}
w(0,y)=0\,, \quad y<-R.
\end{eqnarray}
Finally, let us determine the scaling of $R_0$ with $U.$ For $r\sim R_0,$ $y=f(x),$ one has $y \sim R_0,$ $x\sim R,$ and hence $$w^d_-|_{r\sim R_0}\sim \frac{UR^2}{R^4_0}RR_0 = \frac{UR^3}{R^3_0}\,.$$ Therefore, $w_0 \sim 1$ implies $R_0\sim U^{1/3}R.$
Taking this into account, we find also
$$u^d_-|_{r\sim R_0}\sim \frac{UR^2}{R^4_0}R^2_0 = U^{1/3}\,.$$
\noindent Neglecting $U^{1/3}$ in comparison with $U,$ we conclude that to the leading order, the longitudinal velocity satisfies
\begin{eqnarray}\label{bcpsiru}
u_-|_{r\sim R_0} = U\,.
\end{eqnarray}
\noindent
Needless to say, the boundary behavior outlined above cannot be described within the Zel'dovich-Scurlock-Tsien approach, in which the transversal velocity component is completely excluded from consideration.
We have considered the simplest case of a circular rod. In the general case, the specific form of the local flow is of course different, but the hierarchy of length scales $R\ll R_0 \ll 1,$ as well as the relations (\ref{bcpsir}), (\ref{bcpsiru}), remain the same. These relations will be invoked in Sec.~\ref{strategy}.
\subsection{The role of evolution equation}\label{roleeq}
The local propagation law of a flame is determined by the so-called evolution equation, which gives the normal fresh gas velocity, $v^n_-,$ as a function of the flame front curvature and the space-time derivatives of the gas velocity at the front. For zero-thickness flames, it states that the normal flame velocity is simply equal to the velocity of a planar flame. Namely, if the flame front position is represented by the curve $y=f(x),$ the evolution equation reads $v^n_- = 1,$ or
\begin{eqnarray}\label{evolutiongen}
u_- - f'w_- = N\,,
\end{eqnarray}
\noindent where the prime denotes $x$-differentiation, and $N = \sqrt{1 + f'^2}.$
Curiously, this characteristic property, which plays a fundamental role in the whole theory of flame propagation, is not invoked in the derivation of the Zel'dovich-Tsien equation. The reason for this is the already mentioned disregard of the transversal velocity component. The point is that the transversal and longitudinal velocity components are strongly coupled by the evolution equation. Indeed, $u_-$ and $f'$ are both $O(U),$ so that the two terms on the left-hand side of Eq.~(\ref{evolutiongen}) are of the same order of magnitude. Hence, the exclusion of $w$ from the list of dynamical variables makes this equation obsolete. In other words, any surface of discontinuity with an arbitrary propagation law would satisfy Eq.~(\ref{zteq}), provided that its normal velocity is small compared to $U.$
The only place where Eq.~(\ref{evolutiongen}) finds application in the Zel'dovich-Scurlock-Tsien approach is the relation between the flame front length and the incoming stream velocity
\begin{eqnarray}\label{lu}
U = f(1)\,,
\end{eqnarray}
\noindent which follows from the mass conservation condition
$$U = \int\limits_{0}^{1}dx \sqrt{1 + f'^2}\, v^n_- \approx \int\limits_{0}^{1}dx f' = f(1) - f(0)\,.$$ Of course, this relation is independent of the particular dynamical model employed, and is valid in the high-velocity limit whatever the structure of the incoming flow. In the Cherny's formulation, Eq.~(\ref{evolutiongen}) takes the form $d\psi_-/d\tilde{y} = 1\,,$ and after integration under conditions (\ref{bcpsi1}), (\ref{bcpsi2}) yields Eq.~(\ref{lu}). These equations are not used in the derivation of Eq.~(\ref{zteq}).
\section{On-shell description of flame propagation}\label{onshell}
As shown in \citep{kazakov1,kazakov2}, the system of bulk flow equations and jump conditions at the flame front can be reduced to a single complex integro-differential equation relating values of the flow variables at the flame front (their {\it on-shell} values), so that explicit solving of the bulk equations (which is the most difficult part of consideration) turns out to be unnecessary. This equation reads
\begin{eqnarray}\label{generalc1st}&&
2\left(\omega_-\right)' + \left(1 +
i\hat{\EuScript{H}}\right)\left\{[\omega]' -
\frac{Nv^n_+\sigma_+\omega_+}{v^2_+} \right\} = 0\,.
\end{eqnarray}
\noindent Here, $\omega = u + iw$ is the complex velocity, $[\omega] = \omega_+ - \omega_-$ is its jump across the front, $v^n_+$ is the normal velocity of the burnt gas at the front, $\sigma_+$ is the on-shell value of the vorticity produced by the curved flame, and the operator $\hat{\EuScript{H}}$ is defined on $2$-periodic functions by
\begin{eqnarray}\label{hcurvedf}
\left(\hat{\EuScript{H}}a\right)(x) = \frac{1 + i
f'(x)}{2}~\fint\limits_{-1}^{+1}
d\eta~a(\eta)\cot\left\{\frac{\pi}{2}(\eta - x +
i[f(\eta) - f(x)])\right\}\,,
\end{eqnarray}
\noindent the slash denoting the principal value of integral. It satisfies the important identity
\begin{eqnarray}\label{hident}
\hat{\EuScript{H}}^2 = - 1\,.
\end{eqnarray}
\noindent
The functions $\sigma_+,v^n_+$, as well as the velocity jumps at the front, entering equation (\ref{generalc1st}), are all known functionals of the on-shell fresh gas velocity \citep[see, e.g.,][]{matalon,pelce}. For zero-thickness flames,
\begin{eqnarray}\label{jumps}
\bar{v}^n_+ &=& \theta\,, \quad [u] = \frac{\theta - 1}{N}\ ,
\quad [w] = - f'\frac{\theta - 1}{N}\,, \\
\sigma_+ &=& - \frac{\theta - 1}{2\theta N}(u^2_- + w^2_-)'\,.\label{vorticity}
\end{eqnarray}
\noindent Together with the evolution equation
(\ref{evolutiongen}), Eq.~(\ref{generalc1st}) constitutes a closed system of three equations for the three unknown functions $w_-(x), u_-(x),$ and $f(x).$
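The operator (\ref{hcurvedf}) and the identity (\ref{hident}) are easy to probe numerically. A minimal Python check for a flat front, $f \equiv 0$, with a simple test function (the quadrature grid is staggered by half a step relative to the evaluation grid, so the principal value never hits the pole of the cotangent):
\begin{verbatim}
import numpy as np

M = 256
x = -1.0 + 2.0 * np.arange(M) / M        # evaluation grid on [-1, 1)
eta = x + 1.0 / M                        # quadrature grid, staggered
deta = 2.0 / M

def apply_H(a_src, src, dst):
    """Midpoint-rule principal value of (hcurvedf) with f = 0."""
    diff = src[None, :] - dst[:, None]
    return 0.5 * deta * np.sum(a_src[None, :] / np.tan(0.5 * np.pi * diff),
                               axis=1)

a = np.cos(np.pi * eta)
Ha = apply_H(a, eta, x)                  # expect -sin(pi x)
HHa = apply_H(Ha, x, eta)                # expect -cos(pi eta), i.e. H^2 = -1
print(np.max(np.abs(Ha + np.sin(np.pi * x))))     # small
print(np.max(np.abs(HHa + np.cos(np.pi * eta))))  # small
\end{verbatim}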
Equation (\ref{generalc1st}) describes freely propagating flames, but it can easily be modified to take into account the presence of the rod. This generalization is given in \citep{jerk3}, and reads
\begin{eqnarray}\label{generalc1str}&&
2\left(\omega_-\right)' + \left(1 +
i\hat{\EuScript{H}}\right)\left\{[\omega]' -
\frac{Nv^n_+\sigma_+\omega_+}{v^2_+} \right\} = 2\left(\omega^d_-\right)'\,,
\end{eqnarray}
\noindent where $\omega^d$ is the complex velocity of the dipole (\ref{dipole}). Since this field satisfies the symmetry relations (\ref{antisym}), the boundary condition (\ref{bcpsiry}) is still met. We assume for definiteness that the rod is located in the downstream region (as is usually the case in practice; this assumption is inconsequential, Cf. footnote~\ref{footn1}). Then $\omega_-$ and $\omega^d_-$ satisfy
\begin{eqnarray}\label{chup}
\left(1 - i \hat{\EuScript{H}}\right)\left(\omega_-\right)' &=& 0\,,\\ \label{chupd}
\left(1 - i\hat{\EuScript{H}}\right)\left(\omega^d_-\right)' &=& 0\,,
\end{eqnarray}
\noindent which express analyticity and boundedness of the functions $\omega(z),$ $\omega^d(z),$ $z=x+iy,$ in the upstream region. Equations (\ref{chup}), (\ref{chupd}) are consistent with Eq.~(\ref{generalc1str}) by virtue of the identity (\ref{hident}).
We will now consider the problem of anchored flame propagation within the on-shell description in the light of the issues discussed in Sec.~\ref{critiques}.
\subsection{The role of gas pressure}\label{rolep}
It was already mentioned in the Introduction that the specifics of the two-dimensional case make the issue of flame flow nonlocality especially nontrivial. The region near the rod is characterized by large velocity gradients, and therefore significantly affects the pressure field far from the rod. Indeed, the pressure is determined by the Poisson equation $$\Delta p = -\rho \left(\nabla(\bm{v}\nabla)\bm{v}\right)\,,$$ while the Green function of the Laplacian is proportional to $\ln r.$ The flow near the anchor thus can be expected to have a strong nonlocal influence on the flame structure. It is one of the advantages of the on-shell formulation that it reveals the completely subordinate role pressure plays in determining the front structure. In fact, the variable $p$ does not appear in Eq.~(\ref{generalc1st}) at all. In particular, no assumption such as that contained in point 1) of the Zel'dovich-Scurlock-Tsien approach, or the scaling prescription (\ref{newvar}) of Cherny's limiting procedure, is needed in the on-shell description. Also, the exclusion of pressure from consideration weakens the above argument concerning the nonlocality of the anchor impact.
\subsection{Strategy of solving Eq.~(\ref{generalc1str}). Boundary conditions}\label{strategy}
We are concerned with the situation where the rod radius, $R,$ is much smaller than the channel width, and we are interested in the large-scale front structure, {\it i.e.,} its structure at distances large compared to $R$ (the outer solution). We observe, first of all, that the presence of the rod is described simply by adding to Eq.~(\ref{generalc1st}) the dipole field, which is noticeable only in a small vicinity of the origin. This fact indicates that, despite the pressure argument given in the preceding section, the detailed flow structure near the rod may actually be unimportant. This would mean that the rod can be considered point-like, thus greatly simplifying the account of its influence on the global flame structure. Considerations of the subsequent sections will show that this is indeed so in the high-velocity limit. Namely, it turns out that Eq.~(\ref{generalc1st}) reduces in this limit to an ordinary differential equation. This naturally opens the following way of solving Eq.~(\ref{generalc1str}): we consider this equation at $x$'s such that $r\gtrsim R_0,$ and extract the leading terms of the high-velocity expansion. Detailed analysis reveals that for such $x$'s, the dipole contribution is negligible, so that Eq.~(\ref{generalc1str}) reduces to Eq.~(\ref{generalc1st}). Exclusion of the region near the rod implies that this equation must be supplemented by an auxiliary condition at a point $x_0$ corresponding to the matching region $r\sim R_0$ (see Fig.~\ref{fig2}). Since Eq.~(\ref{generalc1st}) is complex, and involves only quantities defined at the flame front, it requires two real auxiliary conditions expressed in terms of on-shell quantities. As such, we take the relations (\ref{bcpsir}), (\ref{bcpsiru}), in which the numbers $w_0,U$ play the role of parameters specifying a given large-scale solution. In the limit $R\to 0,$ these relations take the form
\begin{eqnarray}\label{bcpsir1}
w_-(0^+) = w_0\,,\\
\label{bcpsiru1}
u_-(0^+) = U\,,
\end{eqnarray}
\noindent with the understanding that the functions $w_-(x)\equiv w(x,f(x)),$ $u_-(x)\equiv u(x,f(x))$ are solutions of Eq.~(\ref{generalc1st}) considered on the semi-open interval $x \in (0,1],$ and the quantities $w_-(0^+),$ $u_-(0^+)$ are their right limiting values for $x \to 0.$ Furthermore, taking the limit $R\to 0$ leads obviously to the following condition for the function $f(x)$
\begin{eqnarray}\label{bcpsirf}
f(0) = 0\,.
\end{eqnarray}
\noindent
Finally, there is also the usual condition of vanishing of $w$ at the wall $x = +1,$
\begin{eqnarray}\label{bcpsirw1}
w_-(+1) = 0\,.
\end{eqnarray}
\noindent If a solution of Eqs.~(\ref{evolutiongen}), (\ref{generalc1st}) satisfying the above conditions is found for $x \in (0,1],$ then the rule (\ref{antisym}) gives it on the semi-open interval $x \in [-1,0)$ as
\begin{eqnarray}\label{antisym1}
f(x) = f(-x)\,, \qquad w_-(x) = - w_-(-x)\,, \qquad u_-(x)
= u_-(-x)\,,
\end{eqnarray}
\noindent in particular, $w_-(x)$ satisfies $w_-(0^-) = -w_0,$ so that this function has a discontinuity at $x=0.$
In connection with the above procedure of finding the large-scale solution, the following circumstance should be emphasized. We are able to formulate this procedure in a closed form, without the need to construct the local solution near the rod explicitly, just because Eq.~(\ref{generalc1st}) relates only functions defined at the flame front. This would not be possible in the conventional approach based on explicitly solving the bulk equations. Indeed, as we saw in Sec.~\ref{boundarycond}, the transversal velocity of gas elements moving near the rod rapidly changes from zero to some finite value at the flame front. Thus, if we were to construct a bulk outer solution satisfying the matching and boundary conditions (\ref{bcpsir}), (\ref{bcpsiry}), this solution would have to describe the rapid change of $w$ from zero to $w_0 \sim 1$ over a distance $\sim R_0.$ One might try to avoid this complication by shifting the matching region farther from the rod, but then the dipole field would be completely neglected. In effect, the anchor disappears from the description, and the solution becomes trivial. This is exactly what happens in the Zel'dovich-Scurlock-Tsien approach. In our approach, on the contrary, only the longitudinal velocity induced by the dipole is negligible in the matching region, but not the transversal one.
At last, it is worth noting that the parameters $w_0, U$ specifying the large-scale solution are independent of each other. This fact becomes evident when we note that instead of the rod with circular cross-section we might use a more complicated cylindrical shape. Then for the same value of $U,$ matching of the inner and outer solutions would give a different value for $w_0.$ We conclude that the large-scale solutions form a two-parameter family. This is in contrast with the
Zel'dovich-Scurlock-Tsien approach where the single parameter $U$ completely determines the solution.
\subsection{Evolution equation}\label{evolutionequation}
Having formulated the outer problem in a closed form, we may now note that the experimentally observed anchored flames usually have fairly smooth, highly elongated front configurations. Hence, neglecting the small regions near the rod and the channel walls, where finite-thickness effects are important, we can use the evolution equation in the simplest form (\ref{evolutiongen}) applicable to zero-thickness flames. Restricting ourselves again to the right half of the channel where the front slope is positive, and taking into account that $f' = O(U)$ [cf.\ Eq.~(\ref{lu})], this equation can be rewritten as
\begin{eqnarray}\label{evolutiongen1}
u_- = f'(w_- + 1)\,, \quad x>0\,,
\end{eqnarray}
\noindent the correction term being of the relative order $O(1/U^2).$
It should be emphasized that neglecting the regions near the rod and the channel walls makes boundary conditions for the function $f'(x)$ itself unnecessary. Such conditions are only needed in establishing the detailed structure of these regions, which is directly related to the fact that the finite front-thickness effects determining this structure are described by equations of higher differential order.
This means that the slope of the function $f(x)$ describing the outer solution is to be considered large everywhere on the semi-open intervals $x \in (0,1]$ and $x \in [-1,0).$ The points $x = \pm 1$ are included here by continuity, but not the point $x=0$ which represents the true singularity of the flow, where $f'$ is undefined.
\section{Solution of Eq.~(\ref{generalc1st}) in the high-velocity limit}\label{solution}
\subsection{Large-slope expansion of the $\EuScript{H}$-operator}\label{largesl}
The nature of nonlocality of steady flames is encoded entirely in the structure of the operator $\hat{\EuScript{H}},$ and of major importance is the fact that this operator greatly simplifies in the high-velocity limit. For large $U,$ the front
slope is also large, $|f'|\sim U,$ so the argument of cotangent in
Eq.~(\ref{hcurvedf}) has a large imaginary part for almost all
values of the integration variable. Therefore, one can write
\begin{eqnarray}\label{approxcot}
\cot\left\{\frac{\pi}{2}\left(\eta - x + i[f(\eta) -
f(x)]\right)\right\} \approx - i\chi(|\eta| - |x|)\,,
\end{eqnarray}
\noindent where $\chi(x)$ is the sign function,
$$\chi(x) = \left\{
\begin{array}{cc}
+1,& x>0\,,\\
-1,& x<0\,.
\end{array}
\right.
$$ This approximation is valid for all $\eta$ except two
small regions near $\eta = \pm|x|\,.$ More precisely, taking
into account that, for real $a_{1,2},$ $$\cot(a_1+ia_2) =
-i\,\frac{e^{(a_2 - ia_1)} + e^{-(a_2 - ia_1)}}{e^{(a_2 - ia_1)} -
e^{-(a_2 - ia_1)}} = -i\chi(a_2) + O\left(e^{-2|a_2|}\right)\,,$$ we
see that Eq.~(\ref{approxcot}) holds true, with an exponential
accuracy, everywhere except $$\eta: |\eta| \in (|x| -
\delta, |x| + \delta),$$ where $\delta = O(1/U).$
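As a simple illustration, the exponential accuracy of the approximation (\ref{approxcot}) can be checked numerically. The following minimal sketch (in Python; the values of $a_1,$ $a_2$ are arbitrary) compares $\cot(a_1 + ia_2)$ with $-i\chi(a_2)$:
\begin{verbatim}
# Minimal numerical check (illustrative, not part of the derivation):
# cot(a1 + i*a2) approaches -i*sign(a2) up to corrections O(exp(-2|a2|)).
import numpy as np

a1 = 0.3
for a2 in (1.0, 3.0, 6.0):
    c = 1.0 / np.tan(a1 + 1j * a2)           # cot(a1 + i*a2)
    err = abs(c + 1j * np.sign(a2))          # deviation from -i*sign(a2)
    print(a2, err, np.exp(-2.0 * abs(a2)))   # err tracks the stated bound
\end{verbatim}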
To develop an asymptotic large-slope expansion of $\hat{\EuScript{H}},$ let us choose a real $\varepsilon > 0$
such that
\begin{eqnarray}\label{epsiloncond}
\varepsilon \ll 1\,, \quad U\varepsilon \gg 1\,.
\end{eqnarray}
\noindent Then the integral in Eq.~(\ref{hcurvedf}) can be
rewritten, for $x>0,$ as
\begin{eqnarray}\label{intred}
\fint\limits_{-1}^{+1}
&&d\eta~a(\eta)\cot\left\{\frac{\pi}{2}\left(\eta - x + i[f(\eta) -
f(x)]\right)\right\} \nonumber\\ = &&-
i\left[\int\limits_{-1}^{-x-\varepsilon} +
\int\limits_{-x+\varepsilon}^{0} + \int\limits_{0}^{x-\varepsilon} +
\int\limits_{x+\varepsilon}^{+1}\right]
d\eta~a(\eta)\chi(|\eta| - x) \nonumber\\&& +
\left[\int\limits_{-x-\varepsilon}^{-x+\varepsilon} +
\fint\limits_{x-\varepsilon}^{x+\varepsilon}\right]
d\eta~a(\eta)\cot\left\{\frac{\pi}{2}\left(\eta - x + i[f(\eta) - f(x)]\right)\right\}\,.
\end{eqnarray}
\noindent Notice that in the last term on the right hand side of Eq.~(\ref{intred}), only one of the two integrals is defined in the principal value sense. The principal-value integral is proportional to $a'(x).$ It is not difficult to see that contributions of this kind give rise to terms of the order $1/U^2.$ This is because expanding the function $a(\eta)$ around $x$ brings in an extra small factor $(\eta - x).$ Below, we will need $\hat{\EuScript{H}}$ expanded only up to $O(1)$-terms, so the principal-value integral can be neglected. The other integral can be evaluated as follows, using continuity of $a(\eta)$
\begin{eqnarray}&&
\int\limits_{-x-\varepsilon}^{-x+\varepsilon}d\eta~a(\eta)
\cot\left\{\frac{\pi}{2}\left(\eta - x + i[f(\eta) - f(x)]\right)\right\}
= - i a(-x) \int\limits_{-\varepsilon}^{+\varepsilon}dy\coth \left\{\frac{\pi f'(x)}{2}y + \pi ix\right\} \nonumber\\&& =
- i a(-x)\left.\frac{2}{\pi f'(x)} \ln{\rm sh} y\right|_{-\pi f'(x)\varepsilon/2 + \pi i x}^{+\pi f'(x)\varepsilon/2 + \pi i x}\,.\nonumber
\end{eqnarray}
\noindent In view of (\ref{epsiloncond}), $\ln{\rm sh} y$ can be replaced, with the exponential accuracy, by $y$ and $-y$ at the upper and lower integration limits, respectively. Taking into account also that for $x\to 0^+,$ ${\rm arg}({\rm sh} y)$ gains $-\pi,$ we find
\begin{eqnarray}&&
\left.\phantom{\frac{2}{\pi f'(x)}} \ln{\rm sh} y\right|_{-\pi f'(x)\varepsilon/2 + \pi i
x}^{+\pi f'(x)\varepsilon/2 + \pi i x} = \pi
i(2x - 1)\,.\nonumber
\end{eqnarray}
\noindent On the other hand, replacing the cotangent in the last term in Eq.~(\ref{intred}) by the sign function gives zero within the same accuracy
\begin{eqnarray}&&
\int\limits_{-x-\varepsilon}^{-x+\varepsilon}d\eta~a(\eta)
\chi(|\eta| - x) = a(-x)
\int\limits_{-\varepsilon}^{+\varepsilon}d\eta~\chi(\eta)
= 0\,.\nonumber
\end{eqnarray}
\noindent Using these formulas in Eq.~(\ref{intred}), and then
putting it in Eq.~(\ref{hcurvedf}) gives finally
\begin{eqnarray}\label{hcurvedf2}
\left(\hat{\EuScript{H}}a\right)(x) = (f'(x) -
i)~\int\limits_{0}^{+1} d\eta~\frac{a(\eta) +
a(-\eta)}{2}\chi(\eta - |x|)+ia(-x)(2|x| - 1) +
O\left(\frac{1}{U}\right)\,.\nonumber\\
\end{eqnarray}
\noindent This result is written in the form applicable to negative as well as positive $x,$ which can be verified by noting that $i\hat{\EuScript{H}}$ is invariant under the combined operation of coordinate inversion $(x\to -x)$ and complex conjugation, as is seen from Eq.~(\ref{hcurvedf}). In particular, the asymptotic action of $\hat{\EuScript{H}}$ on the derivative of a function $a(x)$ continuous at the origin and satisfying $a(-1) = a(+1),$ is
\begin{eqnarray}\label{hcurvedf3}
\left(\hat{\EuScript{H}}a'\right)(x) = (f'(x) -
i)\left\{a(-|x|) - a(|x|)\right\} + ia'(-x)(2|x| - 1) +
O\left(\frac{1}{U}\right),
\end{eqnarray}
\noindent where the prime now denotes derivative of the function
with respect to its argument, $a'(y) = da(y)/dy.$ It turns out, however, that the formula (\ref{hcurvedf3}) remains valid even if the conditions $a(0^-) = a(0^+),$ $a(-1) = a(+1)$ are not met. In particular, $a(x)$ can be discontinuous at $x=0,$ so that its derivative is singular at the origin. This important fact is proved in the Appendix.
Formula (\ref{hcurvedf3}) was derived for $x$'s where the front slope is large, {\it i.e.,} for all $x$ except small regions near the rod and the channel walls, where front curvature is large. Neglecting these regions as we did before, we can say that Eq.~(\ref{hcurvedf3}) is valid on the semi-open intervals $x\in (0,+1]$ and $x\in [-1,0)$ (see Sec.~\ref{evolutionequation}).
The following comments concerning the structure of Eqs.~(\ref{hcurvedf2}), (\ref{hcurvedf3}) will be useful in subsequent applications. First, it is
seen that the result of the action of $\hat{\EuScript{H}}$ depends
essentially on parity properties of the function $a(x),$ namely,
$\hat{\EuScript{H}}a = O(U),$ if $a(x)$ is even, and
$\hat{\EuScript{H}}a = O(1),$ if it is odd. Second, it should be noted
that although the identity $\hat{\EuScript{H}}^2 = - 1$ is valid whatever the shape of the flame-front, in particular, in the large-$U$ limit, it cannot be
verified using the expression on the right of Eq.~(\ref{hcurvedf2}), if only because the composition of its leading term with the undetermined remainder is $O(U)\cdot O(1/U) = O(1).$
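To make the parity property concrete, the following minimal numerical sketch (in Python) evaluates the leading expression in Eq.~(\ref{hcurvedf2}) for an even and an odd test function; the choice $f'(x) = U\chi(x)$ (a straight V-shaped front) and the test functions are purely illustrative:
\begin{verbatim}
# Illustrative sketch of the parity property of the leading formula:
# H a = O(U) for an even function a, but O(1) for an odd one.
# f'(x) = U*sign(x) is an assumed stand-in for a V-shaped front.
import numpy as np

U = 50.0
eta = np.linspace(0.0, 1.0, 2001)
deta = eta[1] - eta[0]

def H_leading(a, x):
    sym = 0.5 * (a(eta) + a(-eta))                        # even part of a
    integral = np.sum(sym * np.sign(eta - abs(x))) * deta
    return (U * np.sign(x) - 1j) * integral + 1j * a(-x) * (2 * abs(x) - 1)

x = 0.4
print(abs(H_leading(np.cos, x)))   # even input: result is O(U)
print(abs(H_leading(np.sin, x)))   # odd input: result stays O(1)
\end{verbatim}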
\subsection{Reduction to the system of ordinary differential equations}\label{reduction1}
\subsubsection{Equation for the transversal velocity component}
We now go over to proving the results announced in Sec.~\ref{strategy}. First of all, let us determine the orders of various terms in Eq.~(\ref{generalc1str}) within the high-velocity expansion. As we know, $u_- = O(U),$ $f'=O(U),$ so Eq.~(\ref{evolutiongen1}) tells us that $w_- = O(1).$ Using these estimates in the expressions (\ref{jumps}), (\ref{vorticity}) shows that
$$[u] = O(1/U)\,, \!\!\quad [w] = O(1)\,, \!\!\quad u_+ = O(U)\,, \!\!\quad w_+ = O(1)\,, \!\!\quad v_+ = O(U)\,, \!\!\quad N\sigma_+ = O(U^2)\,.$$ Therefore, one has for the braces in Eq.~(\ref{generalc1str})
$${\rm Re}\left\{[\omega]' -
\frac{Nv^n_+\sigma_+\omega_+}{v^2_+} \right\} = O(U)\,, \quad {\rm Im}\left\{[\omega]' -
\frac{Nv^n_+\sigma_+\omega_+}{v^2_+} \right\} = O(1)\,.$$ Since ${\rm Re}\{...\}$ is an odd function of $x,$ while ${\rm Im}\{...\}$ is even, formula (\ref{hcurvedf2}) shows that the leading term in the real part of $i\hat{\EuScript{H}}\{...\}$ is $O(U)\,.$ Thus, the real part of the left hand side of Eq.~(\ref{generalc1str}) is $O(U).$ At the same time, as we saw in Sec.~\ref{boundarycond}, the real part of the dipole velocity field is $O(1)$ in the matching region, and negligible far from the origin. Therefore, to the leading order of the high-velocity expansion, the real contribution to the right hand side of Eq.~(\ref{generalc1str}) can be omitted.
However, things are quite different for the imaginary contribution. In this case,
the above estimates and formula (\ref{hcurvedf2}) show that both sides of Eq.~(\ref{generalc1str}) are $O(1).$ Moreover, expansion of $\hat{\EuScript{H}}$ only up to $O(1)$-terms is actually insufficient for the purpose of extracting the imaginary part. Indeed, the undetermined remainder in Eq.~(\ref{hcurvedf2}) is $O(1/U),$ and may contain real as well as imaginary parts. Since the argument of the $\EuScript{H}$-operator is $O(U),$ this remainder gives rise to terms of the order $O(1/U)\cdot O(U) = O(1).$ These two complications can be overcome by resorting to Eq.~(\ref{chup}) which expresses potentiality of the upstream flow, and can be considered as a consistency condition for Eq.~(\ref{generalc1str}). Repeating the above arguments literally,\footnote{For a flame stabilized in the wake, Eq.~(\ref{chup}) is replaced by $\left(1 - i \hat{\EuScript{H}}\right)\left(\omega_-\right)' =
2\left(\omega^d_-\right)'$ \citep{jerk3}. This makes no difference for the present analysis, as the dipole field is negligible on the same grounds as in Eq.~(\ref{generalc1str}). \label{footn1}} one sees that the real part of Eq.~(\ref{chup}) can be consistently extracted with the help of the expansion obtained in the previous section. Namely, using the formula (\ref{hcurvedf2}) yields
$$u_-'(x) - f'(x)\{w_-(|x|) - w_-(-|x|)\} + u_-'(-x)(2|x| - 1) = 0\,,$$ or since $u_-'(-x) = - u_-'(x),$ $w_-(-x) = - w_-(x),$
\begin{eqnarray}\label{chup1}
w_-(|x|) = (1 - |x|)\frac{u_-'(x)}{f'(x)}\,.
\end{eqnarray}
\noindent By construction, this equation is valid for $x\in [-1,0)\cup (0,1].$ In particular, we see that the boundary condition (\ref{bcpsirw1}) is satisfied automatically.
Thus, we proved that in order to find the large-scale solutions of Eq.~(\ref{generalc1str}), it is sufficient to consider Eq.~(\ref{generalc1st}).
\subsubsection{Equation for the longitudinal velocity component}\label{eqlong}
Turning back to extracting the real part of Eq.~(\ref{generalc1st}), we have to consider the question concerning the contribution of the small region near the rod. As we saw in the preceding section, the dipole field on the right of Eq.~(\ref{generalc1str}) can be neglected for $x\ne 0.$ However, this does not settle the question, because integration on the left hand side extends over all $x$ including zero. The first term in the braces is a derivative, so that we can use Eq.~(\ref{hcurvedf3}) to find how it is transformed by $\hat{\EuScript{H}}.$ Note that since this term is the derivative of a function discontinuous at $x=0,$ it contains a contribution proportional to the Dirac $\delta$-function. Indeed, one has from Eq.~(\ref{jumps})
$$[\omega] = \frac{\theta - 1}{N}
- if'(x)\frac{\theta - 1}{N} \approx - i(\theta - 1)\chi(x)\,,$$ so that
$$[\omega]' = - 2i(\theta - 1)\delta(x)\,.$$ However, it is proved in the Appendix that the formula (\ref{hcurvedf3}) is still applicable in this case. As to the second term in the braces, it also contains a derivative factor, {\it viz.,} $(u^2_- + w^2_-)'$ coming from $\sigma_+,$ but this time it is the derivative of an even function. However, it would be premature to conclude that this term does not contain a $\delta$-contribution. The point is that $(u^2_- + w^2_-)'$ is multiplied by $\omega_+$ whose imaginary part is an odd function. Let us trace the development of the quantity $Q = Nw_+\sigma_+/v^2_+$ near the rod (see Fig.~\ref{fig3}). At the matching point $x = - x_0$ on the left of the rod, $v^2_+$ is large but changes slowly, while $w_+ = -w_0 + (\theta-1) = O(1).$ Also, if the bulk transversal velocity of fresh gas is not too large, and $|v_+|$ increases away from the anchor (pressure normally drops along the stream), then $w_+>0,$ $(v^2_+)'<0,$ and so
\begin{eqnarray}\label{q}
0 < Q = - w_+\frac{(\theta - 1)}{2\theta}\left(\ln v^2_+\right)' = O(1)\,.
\end{eqnarray}
\noindent
$|\sigma_+|$ increases near the rod as the result of gas slowdown caused by the rod, and for $|x|\lesssim R,$ $Q$ becomes $O(U/R).$ Here, the front curvature is large, and the zero-front-thickness expression (\ref{q}) can be used only for a rough estimate. It shows that $Q$ is negative in this region, because $w_+<0,$ and $\sigma_+>0.$ $Q$ rapidly drops to zero at $x=0,$ because both $w_+$ and $\sigma_+$ vanish at the origin. For positive $x \lesssim R,$ $Q$ is again a negative $O(U/R)$ quantity, since $w_+>0,$ $\sigma_+<0.$ At larger $x$'s its modulus decreases, and $Q$ becomes $O(1)$ at the matching point $x = x_0$ on the right of the rod. From the large-scale point of view, this behavior means that $Q$ contains a term $q\delta(x),$ with a negative coefficient $q.$ The exact value of $q$ can be found, of course, only if the inner solution is known. We arrive at the conclusion that the expression in the braces in Eq.~(\ref{generalc1st}) can be written as
$$- i\bar{q}\delta(x) + (\theta - 1)\frac{(u^2_- + w^2_-)'\omega_+}{2v^2_+}\,,$$
where $\bar{q} = q + 2(\theta - 1),$ and it is understood that the $\delta$-contribution is excluded from the second term. In other words, this term is calculated using the functions $u_-,w_+$ etc. that describe the outer solution. Taking into account that $v^2_+ = v^2_- + \theta^2 - 1,$ using Eqs.~(\ref{hcurvedf2}), (\ref{hcurvedf3}), and extracting the real part of Eq.~(\ref{generalc1st}) gives, to the leading order,
$$u'_-(x) (1 + \alpha |x|) - \frac{\bar{q}}{2}f'(x)
- \frac{\alpha}{2}f'(x)\int\limits_{0}^{1}d\eta \frac{u'_-(\eta)}{u_-(\eta)}[w_-(\eta) - \alpha]\chi(\eta - |x|) = 0\,, \quad \alpha \equiv \theta - 1\,.$$
This equation involves the unknown parameter $\bar{q}.$ To get rid of it, we divide the equation by $f',$ and then differentiate it with respect to $x.$ The result is the following ordinary differential equation
\begin{eqnarray}\label{main1}
\frac{d}{dx}\left[\frac{u'_-(x)}{f'(x)} (1 + \alpha |x|)\right] + \alpha\frac{u'_-(x)}{u_-(x)}[w_-(x) - \alpha] = 0\,.
\end{eqnarray}
\noindent
Together with Eqs.~(\ref{evolutiongen1}), (\ref{chup1}) it constitutes the system of three ordinary differential equations for the three functions $u_-(x),w_-(x),f(x).$ Evidently, it requires three initial conditions which are Eqs.~(\ref{bcpsir1}) -- (\ref{bcpsirf}).
\subsection{Reduction to a single differential equation. Numerical solutions}\label{reduction}
Introducing an auxiliary function $$\varphi = \frac{u'_-}{f'} = \frac{d u_-}{d f}\,,$$ the system (\ref{evolutiongen1}), (\ref{chup1}), (\ref{main1}) can be rewritten as an ordinary differential equation for $\varphi(x)$:
\begin{eqnarray}\label{main2}
\frac{d}{dx}\left[\varphi (1 + \alpha x)\right] + \alpha\varphi\frac{(1 - x)\varphi - \alpha}{(1 - x)\varphi + 1} = 0\,, \quad x>0\,.
\end{eqnarray}
\noindent The initial condition for $\varphi$ follows from Eqs.~(\ref{bcpsir1}), (\ref{chup1}): $\varphi(0^+) = w_0.$
We note also that the functions $u_-(x),$ $f(x)$ are related by a simple algebraic equation. One has from Eqs.~(\ref{evolutiongen1}), (\ref{chup1})
\begin{eqnarray}\label{uf}
u_- = (1-x)u'_- + f'\,,
\end{eqnarray}
\noindent
or $$[(1-x)u_-]' + f' = 0\,.$$ Integrating this equation, and using the initial conditions (\ref{bcpsiru1}), (\ref{bcpsirf}) gives
\begin{eqnarray}\label{main3}
f = U - (1 - x)u_-\,.
\end{eqnarray}
\noindent Finally, combining Eqs.~(\ref{uf}), (\ref{main3}) one can express $\varphi$ in terms of $f$
$$\varphi(x) = \frac{U - f(x)}{(1 - x)^2 f'(x)} - \frac{1}{1 - x}\,.$$ Substitution of this expression into Eq.~(\ref{main2}) leads to a second-order differential equation for the front position. The corresponding initial conditions follow from Eqs.~(\ref{bcpsir1}) -- (\ref{bcpsirf}), and (\ref{evolutiongen1}):
\begin{eqnarray}\label{initc}
f(0) = 0\,, \quad f'(0^+) = \frac{U}{w_0 + 1}\,.
\end{eqnarray}
\noindent
Numerical solutions of these equations for various values of $U,w_0$ and $\theta$ are plotted in Figs.~\ref{fig4} -- \ref{fig6}. They have the following general features. First of all, the function $f(x)$ is monotonic in all cases (in each of the two channel halves), as was assumed throughout our consideration. Second, flames in which the fresh-gas flow diverges near the rod ($w_0 >0$) are convex towards the incoming flow, while those with convergent fresh-gas flow ($w_0<0$) are concave (recall that we deal here with the large-scale solutions, characterized by distances $r\gtrsim R_0$ from the rod; for $|x|<R,$ of course, $w_0$ is positive in any case). Numerical analysis shows that in the latter case, solutions with the given negative $w_0$ exist only for sufficiently small values of the gas expansion coefficient. For example, solutions with the fairly small value $w_0 = - 0.1,$ one of which is shown in Fig.~\ref{fig6}, disappear at $\theta \approx 3.5,$ and this threshold value is independent of $U.$ Finally, solutions with $w_0 >0$ are characterized by a monotonic increase of $u_-$ as one moves from the rod to the wall. The overall velocity rise is substantial, typically by a factor of two to four. This is what is normally observed in experiments. Solutions with $w_0<0$ are anomalous in this respect, as $u_-$ slightly decreases away from the rod. They are most likely unstable.
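For the reader wishing to reproduce such solutions, a minimal numerical sketch follows (in Python, with arbitrary illustrative parameter values). Instead of the second-order equation for $f,$ it integrates the equivalent first-order system for $\varphi,$ $u_-,$ $f$ that follows from Eqs.~(\ref{evolutiongen1}), (\ref{chup1}), (\ref{main2}):
\begin{verbatim}
# Minimal sketch (illustrative parameters): integrate the first-order
# system phi' (Eq. (main2)), u' = phi*f', f' = u/((1-x)*phi + 1).
import numpy as np
from scipy.integrate import solve_ivp

theta, U, w0 = 5.0, 20.0, 0.5
alpha = theta - 1.0

def rhs(x, y):
    phi, u, f = y
    d = (1.0 - x) * phi + 1.0
    dphi = -alpha * phi / (1.0 + alpha * x) \
           * (1.0 + ((1.0 - x) * phi - alpha) / d)    # Eq. (main2)
    df = u / d                                        # Eq. (evolutiongen1)
    du = phi * df                                     # u' = phi * f'
    return [dphi, du, df]

sol = solve_ivp(rhs, (0.0, 1.0), [w0, U, 0.0], dense_output=True, rtol=1e-10)
x = np.linspace(0.0, 1.0, 201)
phi, u, f = sol.sol(x)
w = (1.0 - x) * phi                                   # Eq. (chup1); w(1) = 0
assert np.allclose(f, U - (1.0 - x) * u)              # Eq. (main3) holds
\end{verbatim}
For $w_0 > 0,$ the computed $u_-(x)$ indeed increases monotonically toward the wall, in accordance with the discussion above.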
At last, solutions with $w_0=0$ are trivial. Indeed, in this case $\varphi(0^+) = 0,$ and Eq.~(\ref{main2}) tells us that also $\varphi'(0^+)=0.$ Repeated differentiation of this equation then shows that all higher derivatives of $\varphi$ also vanish, {\it i.e.,} $\varphi(x)\equiv 0.$ Hence, $u_- = {\rm const} = U,$ and Eq.~(\ref{main3}) gives $f(x) = Ux,$ $x>0.$ In the light of the discussion given in Sec.~\ref{boundarycond}, it is natural that the case $w_0=0$ reproduces the result of Sec.~\ref{chernyf}.
\section{Conclusions}\label{conclusions}
The results obtained in this paper provide a consistent description of steady anchored flames in high-velocity gas streams in channels. Given the values of the incoming flow velocity and its transversal component near the anchor, the formulas derived in Sec.~\ref{reduction} allow simple determination of the flame front shape and the on-shell gas velocity. A more convenient parametrization in practice may be the ``geometrical'' one, using the ordinates of the front end-points and its slope at the origin, which is related to the original parametrization by Eqs.~(\ref{lu}), (\ref{initc}).
The remarkable fact revealed by the above investigation is that the flame structure in a high-velocity gas flow obeys an ordinary differential equation. In other words, this structure turns out to be local in the usual sense: the behavior of the flame front slope and gas velocity in an infinitesimal vicinity of a given point is determined by their values at this point. The sole role of the anchor is to provide an initial condition. This result answers the question as to the nature of nonlocality of the anchor influence on the flame structure: although the detailed structure of the flame holder is immaterial for the properties of the large-scale flow, the flow distortion it causes ultimately determines the whole flame configuration. We saw in Sec.~\ref{boundarycond} that, mathematically, the presence of the anchor with vanishingly small dimensions signifies the existence of a singularity in the bulk flow solution. The failure to recognize this fact is what makes it impossible to consistently describe nontrivial flame configurations within the Zel'dovich-Scurlock-Tsien approach. In fact, this defect is inherent in that approach, since it neglects the transversal gas velocity, whose role is crucial in describing the anchor impact.
Finally, the role of vorticity in the formation of curved flames is to be emphasized. It is described by the second term in the braces in Eq.~(\ref{generalc1st}), while the first term (the complex velocity jump) corresponds to a purely potential contribution. As we have seen in Sec.~\ref{eqlong}, the latter drops out of the equations describing the large-scale flame structure. One can say that the formation of the steady flame pattern in a high-velocity stream is governed by the vorticity generated in the curved flame front. Therefore, it cannot be described within potential-flow models such as that suggested in \citep{frankel1990}.
\acknowledgments{I am grateful to Guy Joulin and Hazem El-Rabii for discussions of various issues considered in the paper. Although this work was not discussed directly with my colleagues, our numerous conversations definitely influenced my understanding of the problem.}
\begin{appendix}
\section{Extension of Eq.~(\ref{hcurvedf3}) to discontinuous functions}
If the function $a(x)$ in Eq.~(\ref{hcurvedf3}) does not satisfy conditions
\begin{eqnarray}\label{contcond}
a(0^+) = a(0^-)\,, \quad a(+1) = a(-1)\,,
\end{eqnarray}
\noindent
its derivative is singular at $x=0,\pm 1,$ and the integration by parts used in the transition from Eq.~(\ref{hcurvedf2}) to Eq.~(\ref{hcurvedf3}) is ambiguous. We recall that the functions describing the true flame configuration are actually smooth and periodic, and hence satisfy the conditions (\ref{contcond}), whereas discontinuities arise as the result of the simplified description. Therefore, in order to correctly evaluate the integral, one has to turn back to the exact formula (\ref{hcurvedf}) in which all the functions involved are smooth, and apply it to a function $A(x)$ satisfying (\ref{contcond}), whose behavior near the rod or channel walls looks discontinuous from the large-scale point of view. More precisely, $A(x)$ is supposed to vary rapidly for $|x| < R \ll 1$ and near the walls, but normally on the intervals $x_0 < x < 1 - x_0$ and $-1 + x_0 < x < - x_0,$ where it coincides with $a(x).$ Here the positive numbers $R,x_0$ are such that $R < x_0\ll 1$; they have the same meaning as in Sec.~\ref{strategy}. Thus, $$\lim\limits_{R\to 0}A(x) = a(x)\,.$$ ($R \to 0$ implies that $x_0$ also goes to zero.)
Neglecting the anchor dimensions means that the action of $\hat{\EuScript{H}}$ on $a'$ is defined as
\begin{eqnarray}\label{limdef}
\left(\hat{\EuScript{H}}a'\right)(x) = \lim\limits_{R\to 0}\left\{\left(\hat{\EuScript{H}}A'\right)\right\}(x)\,.
\end{eqnarray}
\noindent
To find out how $\hat{\EuScript{H}}$ acts on the derivative of $A(x),$ we replace $a$ by $A$ in Eq.~(\ref{hcurvedf2}), and integrate the right hand side by parts
\begin{eqnarray}\label{hcurvedfa1}
\left(\hat{\EuScript{H}}A'\right)(x) &=& \frac{1 + i
f'(x)}{2}~\fint\limits_{-1}^{+1}
d\eta~A'(\eta)\cot\left\{\frac{\pi}{2}(\eta - x +
i[f(\eta) - f(x)])\right\} \nonumber\\ &=& \frac{1}{2}\frac{d}{dx}\fint\limits_{-1}^{+1}
d\eta~[1 + if'(\eta)]A(\eta)\cot\left\{\frac{\pi}{2}(\eta - x +
i[f(\eta) - f(x)])\right\}\,.
\end{eqnarray}
\noindent The boundary terms vanish here because the integral kernel is $2$-periodic, and $A(x)$ satisfies $A(-1)=A(+1),$ by assumption. The function $a(x)$ is allowed to have only a finite jump, and so is the slope, $f',$ of the limiting form of the front. Therefore, the last integral in Eq.~(\ref{hcurvedfa1}), in which all functions are replaced by their limiting expressions, is well-defined, representing a continuously differentiable function for all $|x|\in (0,1).$ Thus, we can write
$$\lim\limits_{R\to 0}\left\{\left(\hat{\EuScript{H}}A'\right)\right\}(x) = \frac{1}{2}\frac{d}{dx}\fint\limits_{-1}^{+1}
d\eta~[1 + if'(\eta)]a(\eta)\cot\left\{\frac{\pi}{2}(\eta - x +
i[f(\eta) - f(x)])\right\}\,,$$ it being understood that $f$ in the integrand is used in its limiting form.
Next, we go over to the large-slope limit. The right hand side of the last equation can be evaluated in this case in exactly the same way as we arrived at Eq.~(\ref{hcurvedf2}). Comparison with Eq.~(\ref{intred}) shows that the role of the function $a(\eta)$ in this equation is now played by $[1 + if'(\eta)] a(\eta),$ the only difference being that the large factor $f'$ comes from the integrand, rather than from the pre-integral factor in Eq.~(\ref{hcurvedf}). Taking this into account, we readily find
\begin{eqnarray}&&
\left(\hat{\EuScript{H}}a'\right)(x) = \frac{1}{2}\frac{d}{dx}\left[\int\limits_{0}^{1}d\eta \left\{a(\eta)[f'(\eta) - i] + a(-\eta)[f'(-\eta) - i]\right\}\chi(\eta - |x|) \right.\nonumber\\&& \left.- ia(-x)(2|x| - 1)\phantom{\int}\hspace{-0.4cm}\right] = - f'(|x|)\chi(x)\left\{a(|x|) - a(-|x|)\right\} + i\chi(x)\left\{a(|x|) + a(-|x|)\right\} \nonumber\\&& - 2ia(-x)\chi(x) + ia'(-x)(2|x| - 1)\,. \nonumber
\end{eqnarray}
\noindent Using the obvious identities $f'(|x|)\chi(x) = f'(x),$ $\chi(x)\{a(|x|) + a(-|x|) - 2a(-x) \} = a(|x|) - a(-|x|),$ we finally obtain
\begin{eqnarray}&&
\left(\hat{\EuScript{H}}a'\right)(x) =
(f'(x) - i)\left\{a(-|x|) - a(|x|)\right\} + ia'(-x)(2|x| - 1)\,, \nonumber
\end{eqnarray}
\noindent which is exactly Eq.~(\ref{hcurvedf3}), as was to be proved. We also observe that the result is independent of the particular choice of $A(x).$
Moreover, it turns out that this formula is valid not only on the open intervals $x\in (-1,0)\cup (0,1),$ but in the whole channel domain $x \in [-1,+1],$ if the derivatives of functions discontinuous at $x=0$ are understood in the sense of distributions. With possible future applications in mind, let us prove this fact. Note, first of all, that if $a(x)$ is discontinuous at $x=0,$ {\it i.e.,} $a(0^+) - a(0^-) \equiv [a]_0 \ne 0,$ then for the function $b(x) = a(x) - [a]_0\chi(x)/2,$ one has $[b]_0 = 0,$ so that Eq.~(\ref{hcurvedf3}) is valid for $b(x).$ Writing $a(x) = b(x) + [a]_0\chi(x)/2,$ we see that since $\hat{\EuScript{H}}$ is a linear operator, it is sufficient to prove the above statement only for the sign function. Let $X(x)$ be its smooth approximation. Take a test function $\phi(x),$ {\it i.e.,} a smooth function that slowly varies for $x\sim R,$ and integrate it with Eq.~(\ref{hcurvedfa1}) over the interval $-\Delta \leqslant x \leqslant + \Delta,$ where $\Delta$ is such that $R \ll \Delta < 1.$ We get
\begin{eqnarray}&&
\int\limits_{-\Delta}^{\Delta}dx\phi(x)\left(\hat{\EuScript{H}}X'\right)(x) = \frac{1}{2}\fint\limits_{-1}^{+1}
d\eta~[1 + if'(\eta)]X(\eta)\nonumber\\&&\times\left[\phi(\Delta)\cot\left\{\frac{\pi}{2}(\eta - \Delta +
i[f(\eta) - f(\Delta)])\right\} - \phi(-\Delta)\cot\left\{\frac{\pi}{2}(\eta + \Delta +
i[f(\eta) - f(\Delta)])\right\}\right] \nonumber\\&& - \frac{1}{2}\int\limits_{-\Delta}^{\Delta}dx\phi'(x)\fint\limits_{-1}^{+1}
d\eta~[1 + if'(\eta)]X(\eta)\cot\left\{\frac{\pi}{2}(\eta - x +
i[f(\eta) - f(x)])\right\}\,.\nonumber
\end{eqnarray}
\noindent All integrals on the right are well-defined in the limit $R \to 0,$ so that $X(\eta)$ can be replaced by $\chi(\eta).$ Then the $\eta$-integrations are readily done because the primitives are $\ln\sin\{\cdot\}$. For example,
\begin{eqnarray}&&
\fint\limits_{-1}^{+1} d\eta~[1 + if'(\eta)]X(\eta)\cot\left\{\frac{\pi}{2}(\eta - \Delta + i[f(\eta) - f(\Delta)])\right\} \nonumber\\&& = \frac{2}{\pi}\left[\fint\limits_{0}^{+1} - \int\limits_{-1}^{0}\right]d\ln\sin\left\{\frac{\pi}{2}(\eta - \Delta + i[f(\eta) - f(\Delta)])\right\} = 2[f(1) - i] - 4[f(\Delta) -i\Delta ]\,,\nonumber
\end{eqnarray}
\noindent where it is taken into account that the front slope is large for $|\eta|\gg R.$ A simple calculation gives
$$\int\limits_{-\Delta}^{\Delta}dx\phi(x)\left(\hat{\EuScript{H}}X'\right)(x) = -2i\phi(0) - 2\int\limits_{-\Delta}^{\Delta}dx\phi(x)[f'(x) - i]\,.$$
Finally, since $\phi(x)$ is independent of $R,$ the limit of this equation for $R \to 0$ can be written using the definitions of Dirac $\delta$-function and (\ref{limdef}) as
$$\int\limits_{-\Delta}^{\Delta}dx\phi(x)\left(\hat{\EuScript{H}}\chi'\right)(x) = -2i\int\limits_{-\Delta}^{\Delta}dx\delta(x)\phi(x) - 2\int\limits_{-\Delta}^{\Delta}dx\phi(x)[f'(x) - i]\,,$$
which in view of arbitrariness of $\phi(x)$ yields
$$\left(\hat{\EuScript{H}}\chi'\right)(x) = -2i\delta(x) - 2[f'(x) - i]\,.$$
By virtue of the relations $\chi'(x) = 2\delta(x),$ $|x|\delta(x) =0,$ understood in the sense of distributions, this is just Eq.~(\ref{hcurvedf3}) for $a=\chi.$
\end{appendix}
~\newpage
\section{Introduction to holographic space-time\cite{holost}}
This paper is the written version of a talk given at the conference celebrating the 80th birthday
of Murray Gell-Mann at the Nanyang Technological University in Singapore. I'd like to thank Harald Fritzsch and
the other organizers of the conference for inviting me to join in honoring one
of the greatest physicists of the 20th century.
String theory models are our only rigorously established models of quantum
gravity, but none of the known models apply to the real world. They do not
incorporate cosmology, and they do not explain the breaking of supersymmetry
(SUSY) that we observe.
The theory of Holographic Space-Time is an attempt to generalize string
theory in order to resolve these problems. Its basic premise is a strong
form of the holographic principle, formulated by myself and W. Fischler:
\begin{itemize} \item Each causal diamond in a $d$ dimensional Lorentzian
space-time has a maximal area space-like $d - 2$ surface in a foliation of
its boundary. The area of this {\it holographic screen} in Planck units is
$4$ times the logarithm of the dimension of the Hilbert space describing all
possible measurements within the diamond. \end{itemize}
Every pair of causal diamonds has a maximal area causal diamond in their
intersection. This is identified with a common tensor factor in the Hilbert
spaces of the individual diamonds
$${\cal H}_1 = {\cal O}_{12} \otimes {\cal N}_1$$
$${\cal H}_2 = {\cal O}_{12} \otimes {\cal N}_2 .$$ A holographic space-time
is defined by starting from a $d-1$ dimensional spatial lattice, which
specifies the {\it topology of a particular space-like slice}. To each
point ${\bf x}$ on this lattice, we associate a sequence of Hilbert spaces
$${\cal H} (n, {\bf x}) = {\cal P}^{\otimes n} .$$
The {\it single pixel Hilbert space}, ${\cal P}$ will be specified below,
and has to do with the geometry of compactified dimensions. These spaces
represent the sequence of causal diamonds of a time-like observer as the
proper time separation of its future tip from the point where it crosses the
space-like slice increases. $N ({\bf x})$ is the maximal value that $n$
attains as the proper time goes to infinity. In a future asymptotically dS
space-time, $N({\bf x})$ will be finite. In an asymptotically flat
space-time or an FRW universe which is matter or radiation dominated, $n$ will
go to infinity with the proper time, while in an asymptotically AdS universe
$n$ will go to infinity at finite proper time. In a Big Bang space-time the
past tip of each causal diamond lies on the Big Bang hypersurface. In a time
symmetric space-time we think of the diamonds as having past and future tips
which are equidistant in proper time from the slice on which the lattice is
placed. In either case we will refer to {\it the causal diamonds at a fixed time} as those
carrying the same label $n$.
In any theory of quantum gravity, the Hilbert space formulation will refer to
a particular time slicing. We have chosen slices such that the causal diamonds at
any fixed time have equal area holographic screens. Such equal area slicings exist
in all commonly discussed classical space-times.
The rest of the specification of holographic space-time consists of a
prescription of the overlap tensor factor $${\cal O} (m , {\bf x}; n, {\bf
y}),$$ in ${\cal H} (m, {\bf x}) $ and ${\cal H} (n, {\bf y})$. For nearest
neighbor points at $m = n$ this overlap is just ${\cal P}$. For other pairs
of points the specification of the overlap is part of the dynamical
consistency condition described below. The only kinematic restriction on it
is that the dimension of ${\cal O}$ is a non-increasing function of $d({\bf
x,y})$, the minimum number of lattice steps between the points.
We introduce dynamics as a sequence of unitary operators $U(n, {\bf x})$ in
${\cal H} (N({\bf x}), {\bf x}) $, with the property that $U(n, {\bf x}) =
V(n, {\bf x}) W(n, {\bf x})$, where $V(n)$ is a unitary in ${\cal H} (n,
{\bf x})$, while $W(n)$ is a unitary in the tensor complement of ${\cal H}
(n, {\bf x})$ in ${\cal H} (N({\bf x}), {\bf x}) $. This requirement
implements the idea that the dynamics inside a causal diamond affects only
those degrees of freedom associated with the diamond. In particular, in a
Big Bang space-time, it builds the concept of {\it particle horizon} into the
dynamics of the system. Note by the way that in Big Bang space time the
sequence of unitaries $U(n)$ may be thought of as a conventional time dependent
Hamiltonian system with a discrete time, while for a time symmetric space-time
they are instead ``approximate S-matrices'', $U(T, - T)$.
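The factorized form of the evolution operators is easy to illustrate. A toy sketch (in Python; the dimensions and random unitaries are arbitrary) builds $U = V \otimes W$ on the tensor product Hilbert space, so that the dynamics never mixes the causal diamond with its tensor complement:
\begin{verbatim}
# Toy sketch (illustrative dimensions): U(n,x) = V(n,x) W(n,x), with V
# acting on H(n,x) and W on its tensor complement.
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # phase-fixed QR

dn, dc = 3, 4                   # dim H(n,x) and dim of its complement
V, W = random_unitary(dn), random_unitary(dc)
U = np.kron(V, W)               # acts on H(N,x) = H(n,x) (x) complement
assert np.allclose(U @ U.conj().T, np.eye(dn * dc))
\end{verbatim}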
Starting from some initial pure state in ${\cal H} (N({\bf x}), {\bf x})$, the
unitaries $U(n, {\bf x})$ produce a sequence of density matrices $\rho (n, {\bf x})$ in
each overlap factor involving the point ${\bf x}$. {\it The key dynamical consistency
condition for a holographic space time is that
$$\rho (n, {\bf y}) = U(n, {\bf x}; {\bf y}) \rho (n, {\bf x}) U^{\dagger} (n, {\bf x}; {\bf y})
,$$ for every pair of points.} This staggeringly complicated set of consistency
conditions is the analog in this formalism of the Dirac-Schwinger commutation
relations, which guarantee the consistency of ``many fingered time''. The only known
solution of these conditions is the dense black hole fluid (DBHF) cosmology
described briefly below. In that example, the consistency conditions dictate both
the choice of overlap Hilbert spaces, and the dynamics at each point in the
lattice.
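As a toy illustration of the elementary building block of these conditions, note that two density matrices are related by a unitary transformation if and only if they have the same spectrum. A minimal sketch (in Python; the dimensions and the random state are arbitrary):
\begin{verbatim}
# Toy sketch: density matrices on an overlap factor are unitarily related
# iff their spectra coincide.
import numpy as np

rng = np.random.default_rng(1)

def reduced(psi, dO, dN):
    # trace out the dN-dimensional factor of a pure state on C^dO x C^dN
    M = psi.reshape(dO, dN)
    return M @ M.conj().T

dO, dN = 4, 6
psi = rng.normal(size=dO * dN) + 1j * rng.normal(size=dO * dN)
psi /= np.linalg.norm(psi)
rho1 = reduced(psi, dO, dN)
Q = np.linalg.qr(rng.normal(size=(dO, dO))
                 + 1j * rng.normal(size=(dO, dO)))[0]   # a random unitary
rho2 = Q @ rho1 @ Q.conj().T
assert np.allclose(np.sort(np.linalg.eigvalsh(rho1)),
                   np.sort(np.linalg.eigvalsh(rho2)))
\end{verbatim}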
There are a number of very important points to understand about this formalism
\begin{itemize}
\item Although we have used geometrical pictures to motivate our constructions,
they are entirely phrased in quantum mechanical language. The Lorentzian space-time
is an emergent property of these quantum systems, useful in the limit of large
causal diamonds (large dimension Hilbert spaces).
\item The emergent space-time geometry is {\it not} a fluctuating quantum variable.
Its causal structure is specified by the overlaps, and its conformal factor by
the Hilbert space dimensions.
\item The lattice specifies only the topology of a space-like slice in the non-compact
dimensions\footnote{In a holographic theory, dS space has non-compact spatial
sections because one restricts attention to the causal diamond of a fixed observer.}.
This topology does not change with time.
\end{itemize}
\section{SUSY and the holographic screens\cite{susyholo}}
Since space-time geometry is {\it not} a fluctuating quantum variable, it is natural to associate the
quantum variables with the properties of the holographic screen of the causal diamond. Intuitively, the
space-time orientation of an infinitesimal bit of screen is determined by the outgoing null direction, and the transverse
plane in which the screen lies. That information is encoded in the Cartan-Penrose equation
$$\bar{\psi} \gamma^{\mu} \psi (\gamma_{\mu})^{\alpha}_{\beta} \psi^{\beta} = 0.$$ Indeed this equation implies that
$\bar{\psi} \gamma^{\mu} \psi$ is a null vector, and that the hyperplanes
$$\bar{\psi} \gamma^{\mu_1 \ldots \mu_k} \psi,$$ with $k \geq 2$ all lie in a single $d-2$ plane.
More succinctly, $\psi = (0, S_a)$: $\psi$ is a transverse spinor in the light front frame defined by the null direction.
The Cartan-Penrose equation is conformally invariant, but our quantization procedure will violate that invariance.
This is simply the statement that the Bekenstein-Hawking area formula is being used to define the conformal factor of
our space-time geometry in terms of the dimension of the quantum Hilbert space. The holographic principle
now implies two constraints on the quantization procedure:
We want to have independent degrees of freedom for different points on the holographic screen. This is compatible with a finite
dimensional Hilbert space only if a finite area screen is ``pixelated": its function algebra must be replaced by
a finite dimensional algebra. If $n$ labels a basis of the algebra, the single pixel Hilbert space ${\cal P}$ is the lowest dimension
representation space of the algebra generated by the $S_a (n)$ variables. If we insist on transverse $SO(d-2)$ invariance, the only quantization
rule having a finite dimensional representation space is
$$[S_a (n), S_b (n) ]_+ = \delta_{ab}\,.$$ SUSY aficionados will recognize this as the algebra of a single massless supersymmetric particle with longitudinal momentum
proportional to $(1, \overrightarrow{\Omega})$, where $\Omega$ is the angular position of the pixel on $S^{d-2}$.
If $d=11$ the smallest representation of this algebra is the SUGRA multiplet. In fewer non-compact dimensions there are non-gravitational multiplets,
but, since we are trying to construct a theory of gravity, we should retain $16$ real spinor generators for each pixel, in order to guarantee that there is a helicity two particle in the
spectrum.
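Since the single pixel algebra is a real Clifford algebra, its finite dimensional representation spaces are easy to exhibit. A minimal sketch (in Python; four generators on a four dimensional pixel space, realized via tensor products of Pauli matrices purely for illustration) verifies the anticommutation relations:
\begin{verbatim}
# Sketch: {S_a, S_b} = delta_ab realized on a 4-dimensional pixel space.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

S = [np.kron(X, I2) / np.sqrt(2), np.kron(Y, I2) / np.sqrt(2),
     np.kron(Z, X) / np.sqrt(2), np.kron(Z, Y) / np.sqrt(2)]
for a in range(4):
    for b in range(4):
        anti = S[a] @ S[b] + S[b] @ S[a]
        assert np.allclose(anti, (a == b) * np.eye(4))
\end{verbatim}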
The anti-commutation relations postulated so far are invariant under $S_a (n) \rightarrow (-1)^{F(n)} S_a (n)$.
This is a remnant of the rescaling symmetry of the CP equation. We treat it as a gauge symmetry. Using it,
we can perform a Klein transformation so that the independent operators on different pixels anti-commute rather than
commute with each other.
A convenient way to pixelate the holographic screen is to use fuzzy geometry. We replace the algebra of functions by a sequence of finite dimensional matrix algebras.
The most famous example is the two sphere. The algebra of $n\times n$ matrices has a natural action of the group $SU(2)$ on it because the spin $\frac{n-1}{2}$ representation is
$n$ dimensional. The matrices carry every spin from zero up to $n-1$ and so can be thought of as a natural cutoff of the angular momentum on the sphere. Vector bundles over the sphere
are rectangular matrices. In particular $n \times n+1$ and $n+1 \times n$ matrices converge to the two chiral spinor bundles over the sphere. Many of the compactification spaces of string
theory are Kahler manifolds, or Kahler fibrations over a one (Horava-Witten) or three dimensional (G2 manifolds which are K3 fibrations) base. These are naturally thought of as limits of finite
dimensional matrix algebras. The pixel variables of such compactifications will have the quantum algebra
$$ [ (\psi^M )_i^A , (\psi^{\dagger N})^j_B ]_+ = \delta_i^j \delta^A_B B^{MN} ,$$ where $i,j = 1 \ldots K$ and $A,B = 1 \ldots K + 1$ , so that the fermionic matrices fill out the two spinor bundles
over the fuzzy two sphere. The indices $M,N$ also run over a set of rectangular matrices which approximate either the spinor bundle over a seven manifold, or two copies of the spinor bundle
over a six manifold (and there are two possibilities, according to whether the two copies have the same or different chiralities). The $B^{MN}$ should be interpreted as wrapped brane charges.
They can be further decomposed into sums over cycles of various dimensions. That is to say, we have an algebraic way of encoding the homology of the manifold\footnote{Though of course, we know from
string duality that the interpretation as homology of a particular manifold will only be valid in certain limits. The algebra of SUSY charges (of which our algebra is an analog) is valid
independently of the geometric interpretation.}. In this formalism, the problem of (kinematically) classifying four dimensional compactifications reduces to classifying superalgebras such that
in the limit, $K \rightarrow \infty$ they contain one copy of the $N=1$ SUSY algebra. Equivalently, in this limit, the representation space of the algebra should contain exactly one $N=1$ graviton
supermultiplet. The super-generators are constructed by using the conformal structure of the 2-sphere, whose invariance group is $SO(1,3)$. Conformal Killing spinors on the sphere transform as the
Dirac spinor of $SO(1,3)$.
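For concreteness, the statement that the $n\times n$ matrices carry the spin $\frac{n-1}{2}$ representation can be checked directly. A minimal sketch (in Python; standard angular momentum matrices):
\begin{verbatim}
# Sketch: SU(2) generators in the spin j = (n-1)/2 representation, the
# basic ingredient of the fuzzy two-sphere.
import numpy as np

def su2_generators(n):
    j = (n - 1) / 2.0
    m = j - np.arange(n)                  # m = j, j-1, ..., -j
    jz = np.diag(m)
    jp = np.zeros((n, n))
    for k in range(1, n):                 # raising operator matrix elements
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.T) / 2.0, (jp - jp.T) / 2.0j, jz

n = 5
jx, jy, jz = su2_generators(n)
assert np.allclose(jx @ jy - jy @ jx, 1j * jz)        # [Jx, Jy] = i Jz
casimir = jx @ jx + jy @ jy + jz @ jz                 # j(j+1) * identity
assert np.allclose(casimir, ((n - 1) / 2) * ((n + 1) / 2) * np.eye(n))
\end{verbatim}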
If $(q_{\alpha})^i_A$ are matrices that converge to the left handed conformal Killing spinor, then our kinematic condition on the algebra of pixel variables is that there exists a set of coefficients $F_M$
such that $$Q_{\alpha} \equiv F_M {\rm Tr}\ [\psi^M q_{\alpha} ] ,$$ satisfies the super-Poincare algebra as $K \rightarrow\infty$. The representation space of the pixel algebra should break up
into a finite number of single particle representations of the SUSY algebra, with only a single supergraviton multiplet. This is, in our formalism, the condition for a compactification with $N = 1$ SUSY.
Note that, for finite $K$, these constructions have no continuous moduli. They are finite dimensional unitary representations of finite dimensional non-abelian super-algebras.
\subsection{Particles}
If we suppose that we have found such an algebra, we can now make multiple copies of our single particle Hilbert space by replacing the algebra of functions, by the matrix algebra
$${\cal M}_K \otimes {\cal A}, $$ (where ${\cal A}$ is the algebra of matrices approximating the function algebra on the internal manifold), by a direct sum
$$\oplus_i {\cal M}_{Ki} \otimes {\cal A} , $$ and take the limit $K_i \rightarrow\infty$ with $\frac{K_i}{K_j}$ fixed. As in Matrix Theory\cite{bfss}, the ratios are interpreted as the ratios of
longitudinal momenta $P_i (1, {\bf \Omega_i})$ of a set of particles. Here however, each particle has its own null direction.
The $S_p$ gauge symmetry relating commuting operators to block diagonal matrices is interpreted as particle statistics. Note that a particle must have a large momentum in order to have
good angular localization, but for fixed holographic screen area one can only make a finite number of particles, and the larger the momentum each one carries, the fewer particles we can make.
One can argue\cite{bfm} that the states with all the momentum carried by one ``particle" should actually be thought of as black holes that
fill the causal diamond.
\subsection{Holographic cosmology}
Here we give a brief description of the Dense Black Hole Fluid model of holographic cosmology\cite{holocosm}. In this model,
one takes the overlap Hilbert spaces to be
$${\cal O} (n, {\bf x}; n, {\bf y}) = {\cal P}^{n - d({\bf x , y})}, $$ where $d({\bf x , y})$ is the minimum number of lattice steps between the two points. If the exponent is negative, we interpret it
as $0$. The time evolution operators are identical at each lattice point, and the time dependent Hamiltonian is chosen randomly at each time $n$ to be
$${\rm ln}\ V(n, {\bf x}) = \sum_{i,j} S_a (i) A(n; i, j) S_a (j) + I (n) .$$ Here the $S_a$ satisfy fermionic commutation relations\footnote{We have not yet made a cosmology compatible with
the more complicated superalgebras that arise for non-trivial compactifications.}, and $A(n; i,j)$ is a random $n \times n$ anti-symmetric matrix. For large $n$ the quadratic term converges to
the Hamiltonian for free massless fermions in $1+1$ dimensions, and $I(n)$ is chosen to be a random irrelevant perturbation of this CFT. One then argues that there is a coarse grained description of
this system as a flat FRW universe, with equation of state $p = \rho$, which saturates the covariant entropy bound.
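A toy sketch of a single time step of this random dynamics may be helpful (in Python; one lattice point, a single spinor label $a,$ and an illustrative size $n$). The quadratic fermion Hamiltonian is diagonalized by the paired imaginary eigenvalues of the random antisymmetric coupling matrix, which is the free-fermion structure invoked above:
\begin{verbatim}
# Toy sketch (illustrative size): the single-particle frequencies of the
# random bilinear Hamiltonian sum_{ij} S(i) A(i,j) S(j) are the paired
# imaginary eigenvalues +/- i*eps_k of the antisymmetric matrix A.
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
A = (A - A.T) / 2.0                      # random antisymmetric coupling
eps = np.abs(np.linalg.eigvals(A).imag)  # comes in degenerate pairs
print(np.sort(eps)[::2])                 # one frequency per pair
\end{verbatim}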
This model is used to construct more realistic cosmologies by using the Israel junction condition. We think of our own universe as a low entropy ``defect'' inside the DBHF. Consider first
a spherical volume of $p = w \rho$ universe with $-1 < w < 1$, embedded in a $p=\rho$ universe. Consider time slices of the two geometries of equal holographic area. This means the time coordinates are
proportional to each other, with a fixed constant of proportionality.
A coordinate volume of radius $L$ has a physical radius that grows as $t^{\frac{2}{3(1+w)}}$. Since the physical radius in the DBHF grows more slowly, we must let the coordinate $L$ shrink with time
in the $p = w \rho$ universe in order to satisfy the Israel condition that the geometry of the interface be the same in both embeddings. The exception is $w = -1$. In this case, a cosmological
horizon volume is bounded by a null surface of fixed holographic area. We can satisfy the Israel condition by matching to a black hole with the same area horizon, embedded in the $p = \rho$ background.
In \cite{holocosm} we argued that non-spherical defects could survive as $ - 1 < w < 1$ regions, but that the above
argument about the Israel condition implies that eventually the universe must approach $w = -1$. The late time cosmological constant is
determined by cosmological initial conditions, namely the number of degrees of freedom that are initially in a low entropy state. In this way of realizing dS space,
it is clear that only a single horizon volume of the classical geometry is necessary to the description, and that this is described as a quantum system with a finite
number of states: the representation space for the pixel algebra over the finite area holographic screen of the cosmological horizon.
\section{Cosmological SUSY breaking\cite{susyholo}\cite{bfm}\cite{holost}}
We have noted that in holographic cosmology, the cosmological
constant $\Lambda$ is a positive tunable parameter, determined by
cosmological initial conditions. To discuss particle physics, we can
replace the actual cosmological history with that of an eternal dS
space. In the limit $\Lambda \rightarrow 0$ the theory of stable dS
space approaches a super-Poincare invariant theory, similar to
conventional string theories, but with no moduli. The theory has a
discrete R symmetry, explaining the vanishing of the superpotential
at the supersymmetric point. However, the way in which this limit is
approached is interesting. One horizon volume of dS space approaches
all of Minkowski space. The logarithm of the total number of quantum
states of the dS theory is $\pi (RM_P)^2$, but only $\sim (RM_P)^{3/2}$
of that entropy can be modeled by field theory in the horizon
volume.
This entropy bound can be derived in two complementary ways\cite{nightmare}\cite{bfm}.
On the one hand, we can try to maximize the particle entropy in a horizon volume, subject to
the constraint that no black holes of size that scales like $R$ are formed. The maximal entropy particle states
are modeled as a cutoff CFT with cutoff $\mu$, so that the entropy,
$$S\sim (\mu R)^3 .$$ The condition that no horizon sized black holes are formed is
$$ \mu^4 R^3 < M_P^2 R ,$$ which leads to $\mu < (M_P / R)^{1/2}$, and the $(R M_P)^{3/2}$ scaling
of the particle entropy. On the other hand, if we model the holographic screen of dS space as a fuzzy sphere
with $K \propto R M_P $, and particle states by block diagonal fuzzy spheres of block sizes $K_i$ with $\sum K_i = K $, then the complementary
constraints of angular localization (maximizing each $K_i$) and maximizing the multiparticle entropy, lead to $K_i \sim \sqrt{K}$. If our basic
unit of longitudinal momentum is $1/R$, then this gives the same scaling for entropy, momentum cutoff, and average particle number as the previous argument.
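The first of these estimates is elementary enough to be verified symbolically. A minimal sketch (in Python/SymPy; symbols as in the text):
\begin{verbatim}
# Sketch: maximizing S ~ (mu*R)^3 subject to mu^4 R^3 < M_P^2 R gives
# S ~ (R*M_P)^(3/2).
import sympy as sp

mu, R, MP = sp.symbols('mu R M_P', positive=True)
mu_max = sp.solve(sp.Eq(mu**4 * R**3, MP**2 * R), mu)[0]  # saturate bound
S = (mu_max * R)**3
assert sp.simplify(S**2 - (R * MP)**3) == 0               # S = (R M_P)^(3/2)
\end{verbatim}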
These remarks also lead to a conjecture for what the other, off diagonal bands, of the matrices represent. The total entropy of dS space
allows us to have $(R M_P)^{1/2}$ independent copies of the field theoretic degrees of freedom in a single horizon volume, and it is an obvious conjecture that this is the
way the classical geometric result that at late global times dS space has an unbounded number of independent horizon volumes, is realized in the limit $RM_P \rightarrow\infty$.
The off block diagonal bands of the $K\times K$ matrix algebra approximation to the function algebra on the 2-sphere, represent the particle degrees of freedom in
different horizon volumes.
It is the field theoretic states in a single horizon volume which approach the
scattering states of the Minkowski theory. The exponentially
overwhelming majority of the states of the dS theory decouple in
this limit\footnote{In quantum field theory, this is the statement
that the dS temperature goes to zero.}. These states should be
viewed as living on the cosmological horizon. However, because their
number is so large, the effect of the interaction of localized
particles with the horizon states may be larger than one might have
imagined.
The discrete R symmetry of the $\Lambda = 0$ theory is broken by
interactions with the horizon. The lightest particle in the theory
carrying R charge is the gravitino. Thus, R violating interactions
will be dominated by Feynman diagrams in which a gravitino
propagates out to the horizon. These are
suppressed by a factor $e^{- 2 m_{3/2} R}$. The contribution from
the interaction with the horizon has the form
$$ \sum_n \frac{|< \tilde{g} | V | n >|^2}{\Delta E} . $$ Note that there is no $n$ dependence in the
energy denominators in this formula, because the horizon states are
approximately degenerate. To estimate the number of states that
contribute to this formula, we note that the horizon states, like
degenerate Landau levels, can be localized and have a fixed entropy
per unit area. The gravitino can propagate in the vicinity of the
horizon, a null surface, for a proper time of order
$\frac{1}{m_{3/2}}$. Quantum particles execute random walks in
proper time. If we take the step size to be Planck scale, the area covered will also scale like
$\frac{1}{m_{3/2} M_P}$. Thus, the contribution of this diagram is of
order $$e^{-2 m_{3/2} R + \frac{b M_P}{m_{3/2}}},$$ where $b$ is an
unknown constant. We know that $m_{3/2} \rightarrow 0$ as $R
\rightarrow\infty$. If it went to zero faster than $R^{-
\frac{1}{2}}$, then the diagram would blow up exponentially. If it
goes to zero more slowly than $R^{- \frac{1}{2}}$, then the diagram
is exponentially small. However, it is precisely the R violating
terms in the effective Lagrangian, which are supposed to be
responsible for the non-zero gravitino mass. So we have a
contradiction unless $m_{3/2} = K \Lambda^{1/4}$. In
\cite{pyramid1} we gave an argument that the constant $K$ is of
order $10$.
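The balancing of the two competing exponents can be made explicit. A minimal sketch (in Python/SymPy; it assumes $\Lambda \sim R^{-2}$ in Planck units, with $b$ the unknown constant introduced above):
\begin{verbatim}
# Sketch: the amplitude ~ exp(-2*m*R + b*M_P/m) stays finite as R -> oo
# only if the two exponents scale alike, giving m ~ R^(-1/2) ~ Lambda^(1/4).
import sympy as sp

m, R, MP, b = sp.symbols('m R M_P b', positive=True)
m_star = sp.solve(sp.Eq(2 * m * R, b * MP / m), m)[0]   # balance point
assert sp.simplify(m_star**2 - b * MP / (2 * R)) == 0   # m ~ R^(-1/2)
Lam = 1 / R**2                                          # Lambda ~ R^(-2)
ratio = m_star / Lam**sp.Rational(1, 4)
assert sp.simplify(ratio**2 - b * MP / 2) == 0          # m = K * Lambda^(1/4)
\end{verbatim}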
\section{CSB and phenomenology\cite{pyrma}}
The relationship $m_{3/2} = K \Lambda^{1/4}$, with $K$ of order
$10$, puts strong constraints on low energy phenomenological models.
In low energy effective SUGRA models, SUSY breaking is parametrized
by a non-vanishing F term for some chiral superfield, $X$. In order
to obtain gaugino masses, the model must generate couplings of the
form
$$ c_i \frac{\alpha_i}{4\pi} \frac{X}{M} {\rm tr} (W_{\alpha}^i)^2
.$$ Since, according to CSB, $F_X = K ({\rm TeV})^2 $, we cannot
have $M$ larger than a few TeV. Thus we {\it must} have a strongly
coupled hidden sector to generate the scale $M$, and that sector
must contain particles charged under the standard model gauge group.
That is, we have a model of direct gauge mediation.
If we wish to preserve the prediction of SUSY gauge coupling
unification, the new particles must be in complete multiplets of a
unified group, and transform under the hidden sector group $G$. If
the unified group contains $SU(5)$, we get at least $R$ copies of
the $5 + \bar{5}$, where $R$ is a $G$ representation. If $R \geq 5$,
this leads to Landau poles below the unification scale, which
implies at best a fuzzy prediction of unification. All hidden sector
groups with $R < 5$ appear to predict light pseudo-Goldstone bosons
that transform under the standard model and should have been seen in
experiments.
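The Landau-pole statement can be illustrated by a rough one-loop estimate. The following sketch (in Python; illustrative numerical inputs, with the extra matter crudely taken to run from $M_Z$) shows $\alpha_3^{-1}$ being driven through zero below the unification scale once five or more $5 + \bar{5}$ pairs are added:
\begin{verbatim}
# Rough one-loop sketch (not a precision analysis).  In the MSSM b_3 = -3;
# each vector-like 5 + 5bar shifts b_3 by +1, and with
# d(1/alpha)/dln(mu) = -b/(2*pi) a negative 1/alpha_3 at M_GUT signals a
# Landau pole below the unification scale.
import numpy as np

MZ, MGUT, alpha3_MZ = 91.0, 2.0e16, 0.118   # GeV; rough inputs
t = np.log(MGUT / MZ)
for Rcopies in (0, 4, 5):
    b3 = -3 + Rcopies
    inv_alpha_gut = 1.0 / alpha3_MZ - (b3 / (2.0 * np.pi)) * t
    print(Rcopies, inv_alpha_gut)           # negative for Rcopies >= 5
\end{verbatim}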
The only resolution I have found to these competing exigencies is to
employ {\it trinification}\cite{trin}, with a hidden sector group
$SU_P (3)$. The resulting model has a pyramidal quiver diagram and
is called The Pyramid Scheme. It has perturbative one loop
unification, and no unpleasant PNGBs. The gauge group is $SU_P (3)
\otimes SU_1 (3) \otimes SU_2 (3) \otimes SU_3 (3) \rtimes Z_3$, and
the matter content is $$3 \times [(1,1,\bar{3},3) \oplus
(1,3,1,\bar{3}) \oplus (1,\bar{3}, 3, 1)],$$ $$ (3,\bar{3},1,1)
\oplus (3,1,\bar{3},1) \oplus (3,1,1,\bar{3}) \oplus c.c. $$ The
$Z_3$ symmetry permutes the last 3 $SU(3)$ subgroups. The
$SU(2)\times SU(3)$ of the standard model is embedded in the
indicated $SU_{2,3} (3)$ groups of the Pyramid, and the $U(1)$ is a
combination of a generator in $SU_1 (3)$ and one in $SU_2 (3)$. In
addition, we introduce 3 singlet fields $S_i$. The three fields
that couple both to $SU_P (3)$ and to the standard model are called
{\it trianons} and are denoted $T_i + \tilde{T}_i$, with the index
$i$ indicating that the field is charged under $SU_i (3)$.
The underlying principle of CSB implies that the low energy
Lagrangian consists of two pieces. The first, ${\cal L}_R$,
preserves a discrete R symmetry and has a supersymmetric R symmetric
minimum of its effective potential. This is the low energy
Lagrangian for the supersymmetric S-matrix of the $\Lambda = 0$
limit. Experience with string theory suggests that it should satisfy
the demands of field theoretic naturalness: every term consistent
with hypothesized symmetries is allowed. Any term smaller or larger
than would be indicated by Planck scale dimensional analysis should
be explained in terms of an explicit low energy dynamical mechanism.
The second term $\delta {\cal L}$ arises, in a low energy effective
picture, from interactions of a single gravitino with degrees of
freedom on the cosmological horizon in dS space. These DOF {\it do
not have a field theoretic description and we do not yet have a
precise model of them.} We can only list some properties of these
terms, which follow from general principles:
\begin{itemize}
\item They violate the discrete R symmetry.
\item They must give us a low energy effective theory that violates
SUSY, incorporating the relation $m_{3/2} = K \Lambda^{1/4}$.
\item The low energy effective theory must be consistent with a
model of dS space as a system with a finite number of quantum
states. In particular, if the SUSY violating minimum with c.c.
$\Lambda$ is not the absolute minimum of the potential, then the
potential must be {\it Above the Great
Divide}\cite{pyrma}\cite{abj}.
\end{itemize}
The last item implies that the non-gravitational low energy dynamics
must have a {\it stable} SUSY violating ground state\footnote{If
the SUSY violating state is only meta-stable when $m_P \rightarrow
\infty$, and if the difference in energy density between the
meta-stable and ``stable'' minima is much larger than $\Lambda$, the
gravitational theory is below the Great Divide\cite{abj}\cite{bfl}.}. Results
of Nelson and Seiberg\cite{ns}, when combined with the requirement
that R symmetry is explicitly broken, then imply that the R
violating part of the Lagrangian {\it cannot} satisfy the demands of
naturalness. It must omit terms allowed by all symmetries. We have
however emphasized that the origin of these terms is novel and
corresponds to nothing in our experience with ordinary string theory
or quantum field models that emerge from quasi-local lattice
dynamics. In models of quantum gravity, the states on horizons,
whether black hole horizons or the cosmological horizon in dS space,
do not have a description in terms of localized bulk degrees of
freedom, obeying the rules of QFT. The R violating terms in the
Lagrangian for local degrees of freedom are the residuum of
interactions with a large number of horizon states, which decouple
as the dS radius is taken to infinity. These terms are important,
because they are the origin of supersymmetry breaking. They do not obey the constraints
of naturalness.
The R preserving part of the TeV scale superpotential is the
superpotential of the standard model plus
$$W_R = \sum_i g_i S_i \tilde{T}_i T_i + \sum_i \left( y_i T_i^3 +
\tilde{y}_i \tilde{T}_i^3 \right) + \sum_i g_{\mu i} S_i H_u H_d .$$ The R
symmetry, which must have no gauge anomalies, is chosen so that
either $g_1$ or $g_3$ vanishes, as well as one of the pairs
$y_{1,3}$ and $\tilde{y}_{1,3}$. It can also be chosen such that the
coefficients of all baryon and lepton number violating operators of
dimensions $4$ and $5$, apart from neutrino seesaw terms $(H_u
L)^2$, vanish. We require the vanishing of $g_{1}$ or $g_{3}$ in order
to eliminate SUSY preserving minima. The vanishing of one pair of
the $y$ couplings is introduced in order to have a dark matter
candidate.
The R violating superpotential, coming from interactions with the
horizon, is postulated to be
$$\delta W = W_0 + \sum_i \left( m_i T_i \tilde{T}_i + \mu_i^2 S_i \right) + \mu H_u
H_d .$$
I'll conclude with a brief list of the properties of the model:
\begin{itemize}
\item It has no Supersymmetric minimum at sub-Planckian field
values, and is compatible with an underlying model of dS space with
a finite number of states, incorporating the CSB relation $m_{3/2} =
K \Lambda^{1/4}$.
\item ${\cal L}_R$ has a discrete R symmetry and all R preserving
couplings appear with natural strength. Dimension $4$ and $5$
couplings that violate baryon number are absent, and the only
allowed lepton number violating couplings are the neutrino seesaw
terms $(H_u L)^2$. The $\mu$ term $H_u H_d$ is also forbidden by R
symmetry. All CP violating angles, apart from the CKM phase, can be
rotated away, and ${\cal L}_R$ has a dangerous axion. The
non-generic terms in $\delta W$ lift the axion. One can argue that
if the origin of CP violation is at energies below the Planck scale,
so that the thermal bath near the horizon is approximately CP
invariant, the CP violating phases in $\delta W$ are very small.
This is a novel solution of the strong CP problem. The NMSSM
couplings in ${\cal L}_R$ and the explicit $\mu$ term in $\delta W$
give an acceptable Higgs spectrum, without tuning.
\item All couplings are perturbative at the unification scale, and
the model generates a dynamical scale $\Lambda_3 \sim $ a few TeV,
which can explain the origin of gaugino masses. There is freedom to
separately tune different gaugino masses by using the parameters
$m_i$ in $\delta W$. The chargino decays promptly in this model, so
that the Fermilab trilepton analysis bounds its mass from below by
$\sim 270$ GeV. By making the parameter $m_3$ reasonably large, we
ensure that the gluino is not heavy enough to make dangerous
modifications to the Higgs potential.
\item Dark matter is the pyrma-baryon field $ (T_i )^M_a(T_i )^N_b (T_i
)^K_c \epsilon^{abc} \epsilon_{MNK}$, where $i = 1$ or $3$. It is
not a thermal relic, but can have the right relic density if an
appropriate pyrma-baryon asymmetry is generated in the early
universe. There will be no dark matter annihilation signals. The
dark matter particle weighs tens of TeV, and has a magnetic moment.
The magnetic moment leads to an interesting pattern of signals in
terrestrial dark matter detectors, rather different from the signal
for a conventional WIMP. The details of this are being worked out.
\end{itemize}
\section{Conclusions}
The theory of holographic space-time seeks to generalize string
theory to situations where the boundaries of space-time are not
asymptotically flat or anti-de Sitter, and the quantum theory does
not have a unique ground state. It builds space-time out of purely
quantum data, the dimensions of Hilbert spaces and common tensor
factors in a net of Hilbert spaces. The topology of a Cauchy surface
is part of the specification of the formalism, and does not change
with time. Space-time geometry is not a fluctuating quantum
variable. Instead the quantum degrees of freedom are quantized
orientations of pixels on the holographic screens of causal
diamonds. Their quantum kinematics is determined by a super-algebra,
whose structure incorporates the quantum remnants of the geometry of
compact dimensions. Compactifications are classified in terms of
possible superalgebras. {\it For finite area holographic screens, compactifications
are finite dimensional unitary representations of a superalgebra,
and have no continuous moduli. When dS space is modeled as the finite
holoscreen on the cosmological horizon, this automatically leads to a fixing
of moduli. Moduli stabilization is essentially kinematic, and has
nothing to do with an effective potential.}
In the limit that the holographic screen area goes to infinity, and
the screen approaches that of null infinity in Minkowski space, the
pixel variables describe supersymmetric multiplets, including the
gravity multiplet. There is, as yet, no general prescription for
calculating the scattering matrix of this super-Poincare invariant
theory.
The general formalism leads in particular to a completely
non-singular, mathematically complete quantum description of what one
might call the generic Big Bang universe. This is called the dense
black hole fluid (DBHF). Heuristically, at any time, the particle
horizon volume is completely filled with a single large black hole,
and causally disconnected black holes merge to preserve this
condition as the particle horizon expands. The coarse grained
description of this situation is a flat FRW geometry with equation
of state $p = \rho$. Note that flatness, homogeneity and isotropy
emerge automatically, without any inflation.
A heuristic model of our own universe based on the concept of a low
entropy defect in the DBHF implies that the universe {\it must}
approach an asymptotically dS future, with c.c. determined by
cosmological initial conditions. dS space is modeled as a quantum
system with a finite number of states, as first envisioned by
Fischler and the present author\cite{tbwf}.
The general formalism of holographic space-time implies that SUSY is
restored as $\Lambda \rightarrow 0$. Two arguments, one of which
was reviewed above, suggest that $m_{3/2} = K \Lambda^{1/4}$. The
constant $K$ has been argued to be of order $10$, and is related to
the ratio between the unification scale and the Planck scale. When
combined with the desire to explain the apparent unification of
standard model couplings, this low scale of SUSY breaking puts
strong constraints on the effective Lagrangian for particle physics
at TeV energy scales. So far, only a unique class of models that
can satisfy these constraints has been found. These are the Pyramid
Schemes, which differ from each other only in the values of a few
parameters. They all have a discrete R symmetry in the $\Lambda = 0$
limit, which is broken by interactions with states on the horizon,
which decouple in this limit. The R violating terms in the effective
Lagrangian do not satisfy the usual laws of naturalness.
The Pyramid Scheme resolves many of the puzzles of low energy
supersymmetric particle physics, some by a novel mechanism. It has
an acceptable level of flavor changing neutral currents and no
dangerous B and L violating operators. It has a novel dark matter
candidate, which carries an approximately conserved $U(1)$ quantum
number and can have the right relic density if an appropriate
asymmetry is generated in the early universe. There are no
annihilation signals. The dark matter candidate is quite heavy and
has a magnetic dipole moment. Its signals in terrestrial detectors
depend on the target nucleus, and are being worked
out\cite{tbjffst}. The supersymmetric and strong CP problems, the
$\mu$ problem, and the little hierarchy problem are all resolved by
the non-generic nature of the R violating part of the effective
Lagrangian.
The theory of holographic space-time thus provides a comprehensive
quantum mechanical framework for early and late universe cosmology,
as well as incorporating the surprising connection between the
asymptotic dS nature of the universe and low energy particle
physics. The particle physics implications will be checked, at least
in part, by the LHC. If the theory's predictions are verified, one
would be motivated to attack the unsolved problem of formulating
dynamical equations for holographic space-time.
\section{Acknowledgments}
This research was supported in part by DOE grant number
DE-FG03-92ER40689.
|
2,877,628,091,221 | arxiv | \section{Introduction}
\label{sec:intro}
The \textsc{Steiner Tree} (ST) problem is one of the earliest and
most fundamental problems in combinatorial optimization: given an
undirected graph $G=(V,E)$ and a set $T\subseteq V$ of terminals, the
objective is to find a tree of minimum size which connects all the
terminals. The ST problem is believed to have been
first formally defined by Gauss in a letter in 1836, and the first
combinatorial formulation is attributed
independently to Hakimi~\cite{hakimi} and Levin~\cite{levin} in
1971. The ST problem is known to be NP-complete, and was in fact part of
Karp's original list~\cite{karp1972reducibility} of 21
NP-complete problems. In the directed version of the ST problem,
called \textsc{Directed Steiner Tree} (DST), we are
also given a root vertex $r$ and the objective is to find a minimum
size arborescence which connects the root $r$ to each terminal from
$T$. An easy reduction from \textsc{Set Cover} shows that the DST
problem is also NP-complete.
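This reduction is simple enough to state explicitly. The following Python sketch (purely illustrative; the vertex and edge labels are ad hoc) builds the standard DST instance from a \textsc{Set Cover} instance: a root, a unit-cost edge to one vertex per set, and zero-cost edges from each set vertex to the elements it contains, so that minimum-cost arborescences correspond to minimum set covers.
\begin{verbatim}
def set_cover_to_dst(universe, sets):
    # Root -> set vertex (cost 1); set vertex -> element vertex
    # (cost 0) iff the element belongs to the set.  An arborescence
    # connecting the root to all element vertices with total cost c
    # exists iff 'universe' can be covered by c of the given sets.
    edges = []
    for i, s in enumerate(sets):
        edges.append(("root", ("set", i), 1))
        for x in s:
            edges.append((("set", i), ("elem", x), 0))
    terminals = [("elem", x) for x in universe]
    return edges, "root", terminals
\end{verbatim}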
Steiner-type problems arise in the design of networks. Since many networks are
symmetric, the directed versions of Steiner-type problems were mostly of
theoretical interest. However, in recent years, it has been
observed~\cite{ramanathan1996multicast, salama1997evaluation} that the
connection costs in various networks, such as satellite or radio networks, are not
symmetric. Therefore, directed graphs form the most suitable model for such
networks. In addition, Ramanathan~\cite{ramanathan1996multicast} also used the
DST problem to find low-cost multicast trees, which have applications in
point-to-multipoint communication in high bandwidth networks. We refer the
interested reader to Winter~\cite{winter1987steiner} for a survey on
applications of Steiner problems in networks.
In this paper we consider two
well-studied Steiner-type problems in directed graphs, namely the \textsc{Strongly Connected Steiner Subgraph} \xspace and the
\textsc{Directed Steiner Network}\xspace problems. In the (vertex-unweighted) \textsc{Strongly Connected Steiner Subgraph} \xspace (SCSS) problem, given a directed graph $G=(V,E)$ and
a set $T=\{t_{1}, t_{2}, \ldots, t_{k}\}$ of $k$ terminals, the objective is to
find a set $S\subseteq V$ of minimum size such that $G[S]$ contains a $t_{i}\rightarrow t_{j}$
path for each $1\leq i\neq j\leq k$. Thus, just as DST, the SCSS problem is
another directed version of the ST problem, where all terminals need to be
connected to each other. The (vertex-unweighted) \textsc{Directed Steiner Network}\xspace (DSN) problem generalizes both DST and SCSS:
given a directed graph $G=(V,E)$ and a set $T=\{(s_1, t_{1}), (s_{2}, t_{2}),
\ldots, (s_k, t_{k})\}$ of $k$ pairs of terminals, the objective is to find a
set $S\subseteq V$ of minimum size such that $G[S]$ contains an $s_{i}\rightarrow t_{i}$ path
for each $1\leq i\leq k$.
We first describe the known results for both SCSS and DSN before stating our
results and techniques.
\subsection{Previous work}
Since both DSN and SCSS are NP-complete, one can try to design polynomial-time
approximation
algorithms for these problems. An $\alpha$-approximation for DST implies a $2\alpha$-approximation for SCSS as follows: fix a
terminal $t\in T$ and take the union of the solutions of the DST instances $(G,t, T\setminus t)$ and $(G_{\textup{rev}}, t, T\setminus
t)$, where $G_{\textup{{rev}}}$ is the graph obtained from $G$ by reversing the orientations of all edges. The best known approximation
ratio in polynomial time for SCSS is $k^{\epsilon}$ for any $\epsilon>0$~\cite{DBLP:journals/jal/CharikarCCDGGL99}. A result
of Halperin and Krauthgamer~\cite{DBLP:conf/stoc/HalperinK03} implies that SCSS has no $\Omega(\log^{2-\epsilon} n)$-approximation
for any $\epsilon>0$, unless NP has quasi-polynomial Las Vegas algorithms. For
the more general DSN problem, the best approximation ratio known is $n^{2/3 +
\epsilon}$ for any $\epsilon>0$. Berman et
al.~\cite{DBLP:journals/iandc/BermanBMRY13} showed
that DSN has no $\Omega(2^{\log^{1-\epsilon} n})$-approximation for any $0< \epsilon <1$, unless NP has quasi-polynomial
time algorithms.
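To make the $2\alpha$-approximation for SCSS described above concrete, the following Python sketch implements it on top of a hypothetical black box \texttt{dst\_approx(edges, root, terminals)} that returns the edge set of an $\alpha$-approximate DST solution (illustrative only, not used in the rest of the paper):
\begin{verbatim}
def scss_via_dst(edges, terminals, dst_approx):
    # Union of a DST solution in G (t reaches every terminal) and a
    # DST solution in G_rev (every terminal reaches t); each costs at
    # most alpha * OPT(SCSS), so the union is a 2*alpha-approximation.
    t = terminals[0]                       # fix an arbitrary terminal
    rest = [x for x in terminals if x != t]
    rev = [(v, u, w) for (u, v, w) in edges]
    out_part = dst_approx(edges, t, rest)
    in_part = dst_approx(rev, t, rest)
    # un-reverse the second solution before taking the union
    return set(out_part) | {(v, u, w) for (u, v, w) in in_part}
\end{verbatim}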
Rather than finding approximate solutions in polynomial time, one can
look for exact solutions in time that is still better than the running
time obtained by brute force algorithms. For (unweighted versions of) both the SCSS and DSN
problems, brute force can be used to check in time $n^{O(p)}$ if a
solution of size at most $p$ exists: one can go through all sets of
size at most $p$. A more efficient algorithm would have runtime $f(p)\cdot n^{O(1)}$,
where $f$ is some computable function depending only on $p$. A problem is said to be \emph{fixed-parameter tractable} (FPT) with a particular parameter $p$ if it
admits such an algorithm;
see~\cite{fpt-book,downey-fellows,flum-grohe,niedermeier} for more
background on FPT algorithms. A natural parameter for our considered problems is the number $k$
of terminals or terminal pairs; with this parameterization, it is not
even clear if there is a polynomial-time algorithm for every fixed
$k$, much less if the problem is FPT. It is known that \textsc{Steiner Tree} \xspace on
undirected graphs is FPT parameterized by the number $k$ of terminals: the classical algorithm of Dreyfus and
Wagner \cite{DBLP:journals/networks/DreyfusW71} solves the problem in
time $3^k\cdot n^{O(1)}$. The
running time was recently improved to $2^k\cdot n^{O(1)}$ by
Bj\"orklund et al.~\cite{DBLP:conf/stoc/BjorklundHKK07}. The same
algorithms work for \textsc{Directed Steiner Tree} \xspace as well.
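For illustration, the Dreyfus--Wagner dynamic program can be sketched compactly in Python (undirected graphs with non-negative edge weights, terminals given as vertex indices; the sketch returns only the optimum weight). The state \texttt{dp[S][v]} is the minimum weight of a tree containing vertex $v$ and all terminals indexed by the bit set $S$; the subset-merge step is what yields the $3^k$ factor.
\begin{verbatim}
import heapq

def dreyfus_wagner(n, edges, terminals):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    k, INF = len(terminals), float("inf")
    dp = [[INF] * n for _ in range(1 << k)]
    for i, t in enumerate(terminals):
        dp[1 << i][t] = 0
    for mask in range(1, 1 << k):
        row = dp[mask]
        sub = (mask - 1) & mask
        while sub:                       # merge two subtrees at v
            rest = mask ^ sub
            if sub < rest:               # each split considered once
                for v in range(n):
                    c = dp[sub][v] + dp[rest][v]
                    if c < row[v]:
                        row[v] = c
            sub = (sub - 1) & mask
        pq = [(c, v) for v, c in enumerate(row) if c < INF]
        heapq.heapify(pq)                # Dijkstra-style relaxation
        while pq:
            c, v = heapq.heappop(pq)
            if c > row[v]:
                continue
            for u, w in adj[v]:
                if c + w < row[u]:
                    row[u] = c + w
                    heapq.heappush(pq, (c + w, u))
    return min(dp[(1 << k) - 1])
\end{verbatim}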
For the SCSS and DSN problems, we cannot expect fixed-parameter
tractability: Guo et al.~\cite{guo-et-al} showed
that SCSS is W[1]-hard parameterized by the number of terminals $k$,
and DSN is W[1]-hard parameterized by the number of terminal pairs
$k$. In fact, it is not even clear how to solve these problems in
polynomial time for small fixed values of the number $k$ of
terminals/pairs. The case of $k=1$ in DSN is the well-known shortest
path problem in directed graphs, which is known to be polynomial time
solvable. For the case $k=2$ in DSN, an $O(n^5)$ algorithm was given
by Li et al.~\cite{Li1992267} which was later improved to
$O(mn+n^{2}\log n)$ by Natu and Fang~\cite{Natu1997207}. The question
regarding the existence of a polynomial time algorithm for DSN when $k=3$
was open. Feldman and Ruhl~\cite{feldman-ruhl} solved this question by
giving an $n^{O(k)}$ algorithm for DSN, where $k$ is the number of
terminal pairs. They first designed an $n^{O(k)}$ algorithm for SCSS,
where $k$ is the number of terminals, and used it as a subroutine in
the algorithm for the more general DSN problem.
\subsection{Our results and techniques}
Given the amount of attention
the planar versions of Steiner-type problems have received from the
viewpoint of approximation (see, e.g.,
\cite{DBLP:conf/soda/BateniCEHKM11,DBLP:journals/jacm/BateniHM11,DBLP:journals/talg/BorradaileKM09,DBLP:conf/icalp/DemaineHK09a,DBLP:conf/soda/EisenstatKM12})
and the availability of techniques for parameterized algorithms on
planar graphs (see, e.g.,
\cite{DBLP:conf/focs/BodlaenderFLPST09,DBLP:journals/cj/DemaineH08,DBLP:journals/jacm/FrickG01,DBLP:conf/icalp/KleinM12,MarxPP-FOCS2018}),
it is natural to explore SCSS and DSN restricted to planar graphs\footnote{Planarity for directed graph problems refers to the underlying undirected graph being planar.}. In
general, one can have the expectation that the problems restricted to
planar graphs become easier, but sophisticated techniques might be
needed to exploit planarity. In particular, a certain {\em square root phenomenon} was observed for a wide range of algorithmic problems: the exponent of the running time can be improved from $O(k)$ to $O(\sqrt{k})$ (or to $O(\sqrt{k}\log k)$) and lower bounds indicate that this improvement is essentially best possible \cite{DBLP:conf/icalp/Marx12,DBLP:conf/icalp/KleinM12,MarxPP-FOCS2018,KleinM14,FominLMPPS16,DemaineFHT05,DBLP:conf/stacs/PilipczukPSL13,DBLP:conf/esa/MarxP15,DBLP:conf/fsttcs/LokshtanovSW12,DBLP:journals/corr/AboulkerBHMT15,FominKLPS16}. Our main algorithmic result is also an improvement of this form:
\begin{theorem}
\label{thm:algo-sqrt-k-h-minor-free} An instance $(G,T)$ of the vertex-weighted \textsc{Strongly Connected Steiner Subgraph} \xspace problem with $|G|=n$ and $|T|=k$ can be solved in $2^{O(k)}\cdot
n^{O(\sqrt{k})}$ time, when the underlying undirected graph of $G$ is planar.
\end{theorem}
This algorithm presents a major improvement over the Feldman-Ruhl algorithm for
SCSS in general graphs which runs in
$n^{O(k)}$ time. A preliminary version of this paper~\cite{DBLP:conf/soda/ChitnisHM14} by a subset of the authors contained a complicated algorithm with a worse running time of $2^{O(k\cdot \log k)}\cdot n^{O(\sqrt{k})}$. It relied on modifying the Feldman-Ruhl token game, and then using the excluded grid theorem for planar graphs followed by treewidth-based techniques. We briefly give some intuition behind this algorithm and the original $n^{O(k)}$ algorithm of Feldman-Ruhl. The algorithm of Feldman-Ruhl for SCSS is based on defining a game with $2k$ tokens and costs associated with the moves of the tokens such that the minimum cost of the game is equivalent to
the minimum cost of a solution of the SCSS problem; then the minimum cost of the game can be computed by exploring a state space of size
$n^{O(k)}$. The $2^{O(k\cdot \log k)}\cdot n^{O(\sqrt{k})}$ algorithm was obtained by generalizing the Feldman-Ruhl token game through the introduction of \emph{supermoves}, which are sequences of certain types of moves. The
generalized game still has a state space of $n^{O(k)}$, but it has the advantage that we can now give a bound of $O(k)$ on the number of
supermoves required for the game (such a bound is not possible for the original
version of the game). This gives an $O(k)$-sized summary of the token game,
which therefore has treewidth~$O(\sqrt{k})$. However, this summary is ``unlabeled'', i.e.,
we do not explicitly know which vertices occur where in the summary. Guessing by
brute force requires $n^{O(k)}$ time, and the improvement to $2^{O(k\cdot \log
k)}\cdot n^{O(\sqrt{k})}$ is obtained by using an embedding theorem of Klein and Marx~\cite{DBLP:conf/icalp/KleinM12}.
Unlike the $2^{O(k\cdot \log k)}\cdot n^{O(\sqrt{k})}$ algorithm
of~\cite{DBLP:conf/soda/ChitnisHM14}, the $2^{O(k)}\cdot n^{O(\sqrt{k})}$
algorithm from Theorem~\ref{thm:algo-sqrt-k-h-minor-free} does not depend on the
Feldman-Ruhl algorithm. It is conceptually much simpler: first we show
combinatorially (see Lemma~\ref{lem:tw-sqrt-k}) that there is a minimal solution
whose treewidth is $O(\sqrt{k})$, and then use the dynamic-programming based
algorithm for finding bounded-treewidth solutions for DSN due to Feldmann and
Marx~\cite[Theorem 5]{andreas-dm-arxiv}. The simplicity of our new approach
also allows transparent generalizations in two directions:
\begin{itemize}
\item \underline{From planar to $H$-minor-free graphs:} we may use
the excluded grid minor theorem for $H$-minor-free
graphs~\cite{H-minor-free-grid-theorem} instead of the excluded grid minor
theorem for planar graphs~\cite{planar-grid-theorem} to prove the existence of
a minimal solution of treewidth $O(\sqrt{k})$, which again implies a
$2^{O(k)}\cdot n^{O(\sqrt{k})}$ time algorithm.
\item \underline{Between restricted inputs and restricted solutions:}
our algorithm only exploits the $H$-minor-freeness of an optimum solution, and not of
the whole input graph. Thus, only the existence of one optimum $H$-minor-free solution in an otherwise
unrestricted input graph is enough to show that some optimum solution (which might not necessarily be $H$-minor-free) can be found in $2^{O(k)}\cdot n^{O(\sqrt{k})}$ time\footnote{This can be thought of as intermediate between the two extremes of forcing the input graph itself to be $H$-minor-free, and of finding an optimum solution which is $H$-minor-free. There has been some recent work in this direction~\cite{rajesh-andreas-ipec,rajesh-andreas-pasin}.}.
\end{itemize}
Can we get a better speedup in planar graphs than the improvement from $O(k)$ to $O(\sqrt{k})$ in the exponent of $n$?
Our main hardness result matches our algorithm: it shows that $O(\sqrt{k})$ is
best possible under the Exponential Time Hypothesis (ETH).
\begin{theorem}
\label{thm:scss-main-hardness-planar-graphs} The edge-unweighted version of the SCSS problem is W[1]-hard
parameterized by the number of terminals $k$, even when the underlying
undirected graph is planar. Moreover, under ETH, the SCSS problem on planar
graphs cannot be solved in $f(k)\cdot n^{o(\sqrt{k})}$ time where $f$ is any
computable function, $k$ is the number of terminals and $n$ is the number of
vertices in the instance.
\end{theorem}
This also answers the question of Guo et al.~\cite{guo-et-al}, who showed the
W[1]-hardness of these problems on general graphs and left the fixed-parameter
tractability status on planar graphs as an open question.
Recall that ETH has the consequence that
$n$-variable 3SAT cannot be solved in time $2^{o(n)}$~\cite{eth,eth-2}.
There are relatively few parameterized problems that
are W[1]-hard on planar
graphs~\cite{DBLP:conf/iwpec/BodlaenderLP09,DBLP:journals/mst/CaiFJR07,
DBLP:conf/iwpec/EncisoFGKRS09,DBLP:conf/icalp/Marx12}. The
reason for the scarcity of such hardness results is mainly that, for most problems, the fixed-parameter tractability of finding a solution
of size $k$ in a planar graph can be reduced to a bounded-treewidth problem by standard layering techniques. However, in our case the
parameter $k$ is the number of terminals, hence such a simple reduction to the bounded-treewidth case does not seem to be
possible. Our reduction is from the \textsc{Grid Tiling}\xspace problem formulated by Marx~\cite{daniel-grid-tiling,DBLP:conf/icalp/Marx12} (see also \cite{fpt-book}), which is a
convenient starting point for parameterized reductions for planar problems.
For our reduction we need to construct two types of gadgets, namely the
connector gadget and main gadget, which are then arranged in a grid-like
structure (see Figure~\ref{fig:big-picture}). The main technical
part of the reduction is the structural result regarding the existence and
construction of particular types of connector gadgets
and main gadgets (Lemma~\ref{lem:connector-gadget} and Lemma~\ref{lem:main-gadget}). Interestingly, the construction of the
connector gadget poses a greater challenge: here we exploit in a fairly
delicate way the fact that the $t_i \leadsto t_j$ and the reverse $t_j\leadsto
t_i$ paths appearing in the solution subgraph might need to share edges to reduce the weight.
We present additional results that put our algorithm and lower bound for SCSS in a wider context. Given our speedup for SCSS in planar
graphs, one may ask if it is possible to get any similar speedup in general graphs. Our next result shows that the $n^{O(k)}$ algorithm of
Feldman-Ruhl is almost optimal in general graphs:
\begin{theorem}
\label{thm:scss-main-hardness-general-graphs} Under ETH, the edge-unweighted version of the SCSS problem cannot be solved in time $f(k)\cdot n^{o(k/\log k)}$ where $f$ is an arbitrary
computable function, $k$ is the number of terminals and $n$ is the number of vertices in the instance.
\end{theorem}
Our proof of Theorem~\ref{thm:scss-main-hardness-general-graphs} is similar to the W[1]-hardness proof of Guo et al.~\cite{guo-et-al}. They showed the W[1]-hardness of SCSS on general graphs parameterized by the number $k$ of terminals by giving a reduction from \textsc{$k$-Clique}. However, this reduction uses ``edge selection gadgets'' and since a $k$-clique has
$\Theta(k^2)$ edges, the parameter is increased at least to $\Theta(k^2)$. Combining with the result of Chen et al.~\cite{chen-hardness} regarding the non-existence of an $f(k)\cdot
n^{o(k)}$ algorithm for ${k}$-Clique under ETH, this gives a lower bound of $f(k)\cdot n^{o(\sqrt{k})}$ for SCSS on general graphs.
To avoid the quadratic blowup in the parameter and thereby get a stronger lower bound, we use the \textsc{Partitioned Subgraph Isomorphism}\xspace (PSI) problem as the source problem of our reduction. For this problem, Marx~\cite{marx-beat-treewidth} gave an $f(k)\cdot n^{o(k/\log k)}$ lower bound under ETH, where $k=|E(G)|$ is the number of edges of the subgraph $G$ to be found in graph $H$. The reduction of Guo et al.~\cite{guo-et-al} from \textsc{Clique} can be turned into a reduction from PSI which uses only $|E(G)|$ edge selection
gadgets, and hence the parameter is $\Theta(|E(G)|)$. Then the lower bound of $f(k)\cdot n^{o(k/\log k)}$ transfers from PSI to SCSS.
A natural question is whether we can close the $O(\log k)$ factor in the exponent: however, our reduction is from the PSI problem and the best known lower bound for PSI also has such a gap~\cite{marx-beat-treewidth}. Note that there are many other parameterized problems for which the only known way of proving almost tight lower bounds is by a similar reduction from PSI, and hence an $O(\log k)$ gap appears for these problems as well~\cite{DBLP:conf/esa/MarxP15,DBLP:journals/jcss/JansenKMS13,DBLP:conf/stoc/CurticapeanDM17,DBLP:journals/siamdm/JonesLRSS17,DBLP:conf/focs/CurticapeanX15,DBLP:conf/esa/BonnetM16,DBLP:journals/algorithmica/GuoHNS13,DBLP:journals/dam/BonnetS17,DBLP:conf/esa/Bringmann0MN16,DBLP:journals/corr/abs-1808-02162,DBLP:journals/corr/LokshtanovRSZ17,DBLP:conf/iwpec/BonnetGL17,logk-1,rajesh-andreas-pasin,fahad-dsn,logk-2}.
Even though Feldman and Ruhl were able to generalize their $n^{O(k)}$ time algorithm from SCSS to DSN, we show that, surprisingly, such a
generalization is not possible for our $2^{O(k)}\cdot n^{O(\sqrt{k})}$ time algorithm for planar SCSS.
\begin{theorem}
\label{thm:dsn-w[1]-hardness} The edge-unweighted version of the \textsc{Directed Steiner Network}\xspace problem is W[1]-hard parameterized by
the number $k$ of terminal pairs, even when the input is restricted to planar
directed acyclic graphs (planar DAGs). Moreover, there is no $f(k)\cdot
n^{o(k)}$ time algorithm for any computable function $f$, unless the ETH fails.
\end{theorem}
This implies that the Feldman-Ruhl algorithm for DSN is optimal, even on planar directed acyclic graphs. As in our lower bound for planar SCSS, the
proof is by a reduction from an instance of the $k\times k$~\textsc{Grid Tiling}\xspace problem.
However, unlike in the reduction to SCSS where we needed
$O(k^2)$ terminals, the reduction to DSN needs only $O(k)$ pairs of terminals (see Figure~\ref{fig:dsn}). Since the parameter blowup is linear, the
$f(k)\cdot n^{o(k)}$ lower bound for \textsc{Grid Tiling}\xspace from~\cite{daniel-grid-tiling} transfers to DSN.
\textbf{Remark:} All our hardness results (Theorem~\ref{thm:scss-main-hardness-planar-graphs}, Theorem~\ref{thm:scss-main-hardness-general-graphs} and Theorem~\ref{thm:dsn-w[1]-hardness}) are presented for weighted-edge versions with polynomially-bounded integer weights (including edges with weight zero). By splitting each edge of weight $W$ into $W$ edges of weight one, all the results also hold for the unweighted-edge version. Our algorithm (Theorem~\ref{thm:algo-sqrt-k-h-minor-free}) is presented for the weighted-vertex version.
Appendix~\ref{appendix:vertex-general-than-edge} shows that the unweighted-vertex version is more general than the weighted-edge version. Hence all our lower bounds also hold for the (un)weighted-vertex version too.
Finally, instead of parameterizing by the number of terminals, we can consider parameterization by the number of edges/vertices of the solution. Let us
briefly and informally discuss this parameterization. Note that the number of terminals is a lower bound on the number of edges/vertices
of the solution (up to a factor of 2 in the case of DSN parameterized by the number of edges), thus fixed-parameter tractability could be
easier to obtain by parameterizing with the number of edges/vertices. However,
our lower bound for SCSS on general graphs
(as well as the W[1]-hardness of Guo et al.~\cite{guo-et-al}) actually proves hardness also
with these parameterizations, making fixed-parameter tractability unlikely. On the other hand, it follows from standard techniques that
both SCSS and DSN are FPT on planar graphs when parameterizing by the number $k$ of edges/vertices in the solution. The main argument here
is that the solution is fully contained in the $k$-neighborhood of the
terminals, whose number is at most $2k$.
It is known that the $k$-neighborhood of $O(k)$ vertices in a planar graph has
treewidth $O(k)$, and thus one can use standard techniques on bounded-treewidth
graphs (dynamic programming or Courcelle's Theorem). Alternatively, at least in the unweighted case, one can formulate the problem as a first
order formula of size depending only on $k$ and then invoke the result of Frick and Grohe~\cite{DBLP:journals/jacm/FrickG01} stating that
such problems are FPT. Therefore, as fixed-parameter tractability is easy to establish on planar graphs, the challenge here is to obtain
optimal dependence on $k$. One would expect that sub-exponential dependence on~$k$
(e.g., $2^{O(\sqrt{k})}$ or $k^{O(\sqrt{k})}$) should be possible at least for SCSS, but this is
not yet fully understood even for undirected
\textsc{Steiner Tree} \xspace~\cite{DBLP:conf/stacs/PilipczukPSL13}. A slightly different parameterization
is to consider the number $k$ of {\em non-terminal} vertices in the solution,
which can be much smaller than the number of terminals. This leads to problems
of somewhat different flavour, see
e.g.~\cite{DBLP:conf/stacs/DvorakFKMTV18,DBLP:journals/siamdm/JonesLRSS17}.
\subsection{Further related work}
Subsequent to the conference version~\cite{DBLP:conf/soda/ChitnisHM14} of this
paper, there have been several related results. Chitnis et al.~\cite{khandekar}
considered a variant of SCSS with only 2 terminals but with a requirement of
multiple paths. Formally, in the $2$-SCSS-$(k_1, k_2)$ problem we are given two
vertices $s,t$ and the goal is to find a min weight subset $H\subseteq E(G)$
such that $H$ has $k_1, k_2$ paths from $s\leadsto t, t\leadsto s$,
respectively. The objective function is given by $\text{cost}(H) = \sum_{e\in H}
\phi(e)\cdot \text{cost}(e)$ where $\phi(e)$ is the maximum number of times $e$
appears on $s\leadsto t$ paths and $t\leadsto s$ paths. Chitnis et
al.~\cite{khandekar} showed that the $2$-SCSS-$(k, 1)$ problem can be solved in
$n^{O(k)}$ time for any $k\geq 1$, and has an $f(k)\cdot n^{o(k)}$ lower bound
under ETH.
Such{\'y}~\cite{suchy-wg} introduced a generalization of DST and SCSS called
the {\sc $q$-Root Steiner Tree} ($q$-RST) problem. In this problem, given a set
of $q$ roots and a set of $k$ leaves, the task is to find a minimum-cost
network where the roots are in the same strongly connected component and every
leaf can be reached from every root. Generalizing the token game of Feldman and
Ruhl~\cite{feldman-ruhl}, Such{\'y}~\cite{suchy-wg} designed a $2^{O(q)}\cdot
n^{O(k)}$ algorithm for $q$-RST.
Recently, Chitnis et al.~\cite{rajesh-andreas-pasin} considered the SCSS and DSN
problems on bidirected graphs: these are directed graphs with the guarantee
that for every edge $(u,v)$ the reverse edge $(v,u)$ exists and has the same
weight. They showed that on bidirected graphs, the DSN problem stays W[1]-hard
parameterized by $k$ but SCSS becomes FPT (while still being NP-hard). In fact,
under ETH, no $f(k)n^{o(k/\log k)}$ time algorithm for DSN on bidirected graphs
exists, and thus the problem is essentially as hard as for general directed
graphs. For bidirected planar graphs however, Chitnis et
al.~\cite{rajesh-andreas-pasin} show that DSN can be solved in
$2^{O(k^{3/2}\log k)}\cdot n^{O(\sqrt{k})}$ time, which is in contrast to
Theorem~\ref{thm:dsn-w[1]-hardness}.
Some FPT approximability and inapproximability results for SCSS and DSN were
also shown in~\cite{rajesh-andreas-pasin,rajesh-andreas-ipec}.
\textbf{Pattern graphs and DSN:} The set of pairs $\{(s_i, t_i)\ :\ i\in [k]\}$ in the input of DSN can be
interpreted as a directed (unweighted) pattern graph on a set
$R=\bigcup_{i=1}^{k} \{s_i, t_i\}$ of terminals. For a graph class
$\mathcal{H}$, the $\mathcal{H}$-DSN problem takes as input a directed graph
$H\in \mathcal{H}$ on vertex set $R$ and the goal is to find a minimum cost
edge set $N\subseteq E(G)$ such that $N$ has an $s\leadsto t$ path for each
$(s,t)\in E(H)$. Thus for a fixed class $\mathcal{H}$ of pattern graphs, the $\mathcal{H}$-DSN problem is a restricted special case of the general DSN problem, and it is possible that $\mathcal{H}$-DSN is FPT (for example, if $\mathcal{H}$ is the class of out-stars). Feldmann and Marx~\cite{andreas-dm-arxiv} gave a complete
dichotomy for which graph classes the $\mathcal{H}$-DSN problem is FPT or
W[1]-hard parameterized by $|R|$.
Given an instance of DSN with the pattern graph $H=(R,A)$ on the terminal set
$R$ with $|A|=k$, the algorithm of Feldman and Ruhl~\cite{feldman-ruhl} runs in
$n^{O(k)}$ time. The $f(k)\cdot n^{o(k)}$ lower bound under ETH for DSN in this
paper (Theorem~\ref{thm:dsn-w[1]-hardness}) has $|A|=O(|R|)$. Hence, for the
parameter $|R|$ we have a lower bound of $f(|R|)\cdot n^{o(|R|)}$ and an upper
bound of $n^{O(|R|^2)}$ (since $|A|=O(|R|^2)$ in the worst case). Recently,
Eiben et al.~\cite{fahad-dsn} essentially closed this gap by showing a lower
bound of $f(|R|)\cdot n^{o(|R|^2 /\log |R|)}$ under ETH for DSN. They also gave
an algorithm for DSN on bounded genus graphs: for graphs of genus $g$, the
algorithm runs in $f(|R|)\cdot n^{O_{g}(|R|)}$ time where $O_{g}(\cdot)$ hides
constants depending only on $g$.
\section{Improved algorithm for SCSS on planar graphs}
In this section we describe the proof to
Theorem~\ref{thm:algo-sqrt-k-h-minor-free}, i.e., we present an algorithm to
solve SCSS on planar graphs in $2^{O(k)}\cdot n^{O(\sqrt{k})}$ time. The definitions
of some of the graph-theoretic notions used in this section such as treewidth
and minors are deferred to Appendix~\ref{appendix:gt-defns} to maintain the flow
of presentation. The key is to analyze the structure of \emph{edge-minimal
solutions}, i.e., subgraphs of the input graph $G$ (induced by some set
$S\subseteq V$) containing all terminals for which no edge can be removed
without also removing all $s\leadsto t$ paths for some terminal pair $(s,t)$. We
show that for an edge-minimal solution $M$ of the SCSS problem there is a vertex set
$W\subseteq V(M)$ of size $O(k)$ such that, after removing $W$ from $M$, each
component has constant treewidth. More formally, we define a
\emph{$W_M$-component} as a connected component of the (underlying undirected) graph induced by
$V(M)\setminus W$ in $M$, and prove the following.
\begin{lemma}\label{lem:W}
For any edge-minimal solution $M$ to the edge-weighted SCSS problem there is a set of at
most $9k$ vertices $W\subseteq V(M)$ for which every $W_M$-component has treewidth at most $2$.
\end{lemma}
We defer the proof of Lemma~\ref{lem:W} to Section~\ref{subsec:bounding-treewidth}. First, we see how we can use Lemma~\ref{lem:W} to bound the treewidth of the minimal solution $M$.
\begin{lemma}
If an edge-minimal solution $M$ to edge-weighted SCSS is planar (or excludes some fixed minor),
then its treewidth is~$O(\sqrt{k})$.
\label{lem:tw-sqrt-k}
\end{lemma}
\begin{proof}
By the planar grid theorem~\cite{planar-grid-theorem}, there is a constant $c_{\text{Planar}}$ such that any planar graph $G$ with treewidth at least $c_{\text{Planar}}\cdot \omega$ has an $\omega \times \omega$ grid minor.
If the treewidth of $M$ is at least $c_{\text{Planar}}\cdot \lceil 20\sqrt{k} \rceil$, then it follows that $M$ has a $\lceil 20\sqrt{k} \rceil\times
\lceil 20\sqrt{k} \rceil$ grid minor $M'$. It is easy to see that $M'$ contains at least $\lfloor \frac{\lceil 20\sqrt{k}\rceil}{3} \rfloor \cdot \lfloor \frac{\lceil 20\sqrt{k}\rceil}{3} \rfloor$ (pairwise vertex-disjoint) grids of size~$3\times 3$. For each $k\geq 1$, one can easily verify that $\lfloor \frac{\lceil 20\sqrt{k}\rceil}{3} \rfloor \geq \lceil 4\sqrt{k}\rceil$, and hence the number of pairwise vertex-disjoint $3\times 3$ grids is at least $\lceil 4\sqrt{k}\rceil \cdot \lceil 4\sqrt{k}\rceil \geq 4\sqrt{k}\cdot 4\sqrt{k} =16k$.
By Lemma~\ref{lem:W}, there is a set of vertices $W$ of size $9k$
whose deletion makes every $W_M$-component have treewidth at most $2$.
Since $16k>9k$, it follows that $W$ does not contain a vertex from at least one
of the (pairwise vertex-disjoint) $16k$ grid minors of size $3\times 3$ in~$M$. Hence, there is a
$W_{M}$-component which contains a $3\times 3$ grid minor, and thus has
treewidth at least~$3$, a contradiction.
For the case when the input graph is $H$-minor-free for some fixed graph $H$,
we can instead use the excluded grid-minor theorem of Demaine and
Hajiaghayi~\cite{H-minor-free-grid-theorem}: for any fixed graph $H$, there is a constant $c_{H}$ (which depends only on $|H|$) such that any $H$-minor-free graph of treewidth at least $c_{H}\cdot \omega$ has a
$\omega\times \omega$ grid as a minor.
\end{proof}
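For completeness, the elementary inequality used in the proof above can be checked directly: for every $k\geq 1$,
$$\left\lfloor \frac{\lceil 20\sqrt{k}\,\rceil}{3} \right\rfloor \;>\; \frac{20\sqrt{k}}{3} - 1 \;\geq\; 4\sqrt{k} + 1 \;\geq\; \left\lceil 4\sqrt{k}\,\right\rceil ,$$
where the middle inequality is equivalent to $\frac{8}{3}\sqrt{k}\geq 2$, which holds for all $k\geq 1$.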
To prove Theorem~\ref{thm:algo-sqrt-k-h-minor-free}, which is restated below,
we invoke an algorithm of~\cite{andreas-dm-arxiv} to find the optimum solution
of bounded treewidth. The algorithm of~\cite{andreas-dm-arxiv} is designed for the edge-weighted version, and we state below the corresponding statement for the more general vertex-weighted version (so that it may also be of future use).
\begin{theorem}
\label{thm:dm-andreas-vertex-version} (\textbf{generalization of~\cite[Theorem~5]{andreas-dm-arxiv}})
If there is an optimum solution to an instance on $k$ terminals of the vertex-weighted version of SCSS
which has treewidth at most $\omega$, then an optimum solution\footnote{Not necessarily the same optimum solution as the one mentioned in the first part of this theorem. For example, the actual optimum found by this algorithm might have treewidth much larger than $\omega$.} can be found in $2^{O(k+\omega \cdot \log
\omega)}\cdot n^{O(\omega)}$ time.
\end{theorem}
\begin{proof}
In the given graph $G$, we start by subdividing each edge by adding a non-terminal vertex of weight $0$ (note that this does not increase the treewidth). Let us call these added vertices dummy vertices, and let $G^*$ be the graph obtained at this point. Note that each dummy vertex has in-degree one and out-degree one. Now we reduce the vertex-weighted version of SCSS to the edge-weighted
version, using a standard reduction: substitute each non-terminal vertex $u\in G$ of weight $W$ with
two new non-terminal vertices $u^-$ and $u^+$ and an edge $(u^-,u^+)$ of the same weight $W$.
Every edge that had $u$ as its head will now have $u^-$ as its head instead, and
every edge that had $u$ as its tail will now have $u^+$ as its tail. We set the weight of all these
edges to be zero. Let the graph obtained after these modifications be $G'$.
Consider an optimum solution $S$ for the vertex-weighted version of SCSS, and without loss of generality we can assume that $S$ is minimal under vertex deletions (if it is not, then make it minimal by deleting unnecessary vertices). Let $S' = (S\cap T)\cup \{\{u^-, u^+\}\ : u\in S\cap (V\setminus T)\}$. We now show that the induced graph $G'[S']$ is an edge-minimal solution (with the same weight as $S$) for the edge-weighted version of SCSS: we do this by showing that the deletion of any edge from $G'[S']$ creates a non-terminal source or a non-terminal sink, which contradicts the fact that $S$ was a vertex-minimal solution for the vertex-weighted version of SCSS. The construction of the graph $G'$ from $G$ implies that any edge $e$ of $G'[S']$ must be of one of the following two types:
\begin{itemize}
\item Without loss of generality\footnote{The other case is the edge being $(v^+,y)$ for some dummy vertex $y$ and some non-terminal $v\in G$.}, the edge is $(y,v^-)$ for some dummy vertex $y$ and some non-terminal $v\in G$, in which case deleting this edge makes the non-terminal $y$ a sink.
\item The edge is $(z^-,z^+)$ for some non-terminal $z\in G$, in which case deleting this edge makes the non-terminal $z^+$ a source and the non-terminal $z^-$ a sink.
\end{itemize}
Note that $G'$ is not necessarily planar (or $H$-minor-free)
even if $G$ is. However, the treewidth of $G'[S']$ is at most twice the treewidth of
$G[S]$ since we can simply replace each non-terminal vertex $u$ in the bags of the tree
decomposition of $G[S]$ by the two vertices $u^-$ and $u^+$.
Feldmann and Marx~\cite[Theorem~5]{andreas-dm-arxiv} showed that if the optimum
solution to an instance on $k$ terminals of the edge-weighted version of SCSS
has treewidth $\omega$, then it can be found in $2^{O(k+\omega \cdot \log
\omega)}\cdot n^{O(\omega)}$ time. Hence, the claimed running time for the vertex-weighted version follows.
\end{proof}
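For illustration, the reduction used in the proof above can be sketched in a few lines of Python (vertices are arbitrary hashable labels; \texttt{weight} maps each non-terminal of $G$ to its weight; the returned dictionary maps the edges of $G'$ to their weights; this sketch is not part of the formal proof):
\begin{verbatim}
def vertex_to_edge_weighted(edges, weight, terminals):
    T = set(terminals)
    w = dict(weight)
    subdivided = []
    for idx, (u, v) in enumerate(edges):   # one dummy vertex per edge
        d = ("dummy", idx)
        w[d] = 0                           # dummies have weight 0
        subdivided += [(u, d), (d, v)]
    tail = lambda x: x if x in T else (x, "+")   # edges leave u+
    head = lambda x: x if x in T else (x, "-")   # edges enter v-
    out = {(tail(u), head(v)): 0 for (u, v) in subdivided}
    # splitting edges (u-, u+) carry the vertex weights; the dummy
    # vertices are treated uniformly as weight-0 non-terminals
    out.update({((u, "-"), (u, "+")): c for u, c in w.items()})
    return out
\end{verbatim}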
Finally, we are now ready to prove Theorem~\ref{thm:algo-sqrt-k-h-minor-free}.
\begin{reptheorem}{thm:algo-sqrt-k-h-minor-free}
An instance $(G,T)$ of the vertex-weighted \textsc{Strongly Connected Steiner Subgraph} \xspace problem with $|G|=n$ and $|T|=k$ can be solved
in $2^{O(k)}\cdot n^{O(\sqrt{k})}$ time, when the underlying undirected graph of
$G$ is planar (or more generally, $H$-minor-free for any fixed graph $H$).
\end{reptheorem}
\begin{proof}
Consider a subgraph $M$ of $G$ induced by the optimum solution $S\subseteq V$,
which is also minimal, i.e., no edge of $M$ can be removed without destroying
the connectivity between some terminal pair $(s,t)$. By
Lemma~\ref{lem:tw-sqrt-k} we know that the treewidth of $M$ is $O(\sqrt{k})$. Hence, the claimed running time follows from Theorem~\ref{thm:dm-andreas-vertex-version}.
\end{proof}
Note that Lemma~\ref{lem:tw-sqrt-k} only used the planarity (or
$H$-minor-freeness) of $M$, and not of the input graph. Hence, the algorithm of
Theorem~\ref{thm:algo-sqrt-k-h-minor-free} also works for the weaker restriction
of finding an optimal planar (or $H$-minor-free) solution in an otherwise
unrestricted input graph, rather than finding an optimal solution in a planar
(or $H$-minor-free respectively) graph. It only remains to prove
Lemma~\ref{lem:W}, which is done in the next section.
\subsection{Proof of Lemma~\ref{lem:W}}
\label{subsec:bounding-treewidth}
Fix an arbitrary terminal $r\in T$. It is easy to see (observed for example by Feldman and Ruhl~\cite{feldman-ruhl}) that any minimal SCSS
solution $M$ is the union of an in-arborescence $A_{\texttt{in}}$ and an
out-arborescence~$A_{\texttt{out}}$, both having the same root $r\in T$ and
only terminals as leaves, since every terminal of $T$ can be reached from $r$,
and conversely every terminal can reach $r$ in $M$. We construct the set $W_M$ by including
three different kinds of vertices. First, $W_M$ contains every \emph{branching
point} of $A_{\texttt{in}}$ and $A_{\texttt{out}}$, i.e.\ every vertex with in-degree at least
$2$ in $A_{\texttt{in}}$ and every vertex with out-degree at least $2$ in~$A_{\texttt{out}}$.
Since $A_{\texttt{in}}$ and $A_{\texttt{out}}$ are arborescences with at most $k$ leaves (the
terminals), they each have at most $k$ branching points. Secondly, $W_M$
contains all terminals of $T$, which adds another $k$ vertices to the
set~$W_M$. The third kind of vertices in $W_M$ is the following. Note that
every component of the intersection of $A_{\texttt{in}}$ and $A_{\texttt{out}}$ forms a path (possibly consisting only of a single vertex),
since
every vertex of $A_{\texttt{in}}$ has out-degree at most~$1$, while every vertex of
$A_{\texttt{out}}$ has in-degree at most~$1$. We call such a component a \emph{shared
path}. If a shared path contains a branching point or a terminal, we add the endpoints of the shared path to $W_M$. For a branching point or terminal~$v$ on such a shared path,
we can map the endpoints of the shared path to $v$. This maps at most two
endpoints of shared paths to each branching point or terminal, so that the
number of vertices of the third kind in $W_M$ is at most $6k$ (as there are $k$
terminals and at most $2k$ branching points). Thus the total size of $W_M$ is at
most~$9k$.
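In summary,
$$|W_M| \;\leq\; \underbrace{2k}_{\text{branching points}} \;+\; \underbrace{k}_{\text{terminals}} \;+\; \underbrace{6k}_{\text{shared-path endpoints}} \;=\; 9k .$$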
We claim that every $W_M$-component consists of at most two interacting paths,
one from $A_{\texttt{in}}$ and one from $A_{\texttt{out}}$. More formally, consider a $u\to v$
path $P$ of $A_{\texttt{in}}$ such that $u$ and $v$ are either terminals or branching
points of $A_{\texttt{in}}$, and such that no internal vertex of $P$ is a terminal or
branching point of $A_{\texttt{in}}$. We call any such path $P$ an \emph{essential path
of} $A_{\texttt{in}}$. Note that we ignore the branching points of $A_{\texttt{out}}$ in this
definition, and that the edge set of the arborescence~$A_{\texttt{in}}$ is the disjoint
union of the edge sets of its essential paths. Analogously we define the
\emph{essential paths of} $A_{\texttt{out}}$ as those $u\to v$ paths $P$ in~$A_{\texttt{out}}$
for which $u$ and $v$ are terminals or branching points of $A_{\texttt{out}}$, and no
internal vertices of $P$ are of such a type.
\begin{claim}\label{clm:2paths}
Every $W_M$-component contains edges of at most two essential paths, one
from~$A_{\texttt{in}}$ and one from~$A_{\texttt{out}}$.
\end{claim}
\begin{proof}
Any vertex at which two essential paths of the same arborescence intersect is a
terminal or branching point. These vertices are in $W_M$ and therefore not
contained in any $W_M$-component. Thus if a $W_M$-component $H$ contains at
least
two essential paths then they either coincide on every edge of $H$, in
which case the claim is clearly true, or $H$ contains the endpoint $v$ of a shared path, i.e., there are two essential paths, one
from each arborescence, that both contain vertex $v$. We will show that there is only one pair of
essential paths that can meet at an endpoint of a shared path in $H$, from which
the claim follows.
In order to prove this, we label every essential $u\to v$ path $P$ of $A_{\texttt{in}}$ with those
terminals $T_P\subseteq T$ that can reach the start vertex of $P$ in the in-arborescence, i.e.\
$t\in T_P$ if and only if there exists a $t\to u$ path in $A_{\texttt{in}}$. Note that no
two essential paths of $A_{\texttt{in}}$ can have the same label. We also label any
essential $u\to v$ path $Q$ of $A_{\texttt{out}}$ analogously, by setting the label
$T_Q\subseteq T$ to be the terminals which can be reached from the end vertex of $Q$ in the
out-arborescence, i.e.\ there is a $v\to t$ path in~$A_{\texttt{out}}$ if and only if
$t\in T_Q$. Even though no two essential paths of an individual arborescence
have the same label, there can be pairs of essential paths from $A_{\texttt{in}}$ and
$A_{\texttt{out}}$ with the same label. Let $P$ and $Q$ be essential paths of $A_{\texttt{in}}$
and $A_{\texttt{out}}$, respectively. We prove that if $P$ and $Q$ meet at an endpoint
$v$ of a shared path, then $v\in W_M$ or $T_P=T_Q$.
Assume this is not the case so that $v\notin W_M$ and $T_P\neq T_Q$. Let $I$ be
the shared path in the intersection of $A_{\texttt{in}}$ and $A_{\texttt{out}}$ for which $v$ is
an endpoint. If $u$ is the other endpoint of $I$, assume w.l.o.g.\ that $I$ is
a $u\to v$ path (the other case is symmetric). If there were any branching
points or terminals on $I$ then~$v\in W_M$, since $v$ would then be one of the
third kind of vertices in $W_M$. As this is not the case, $I$ lies in the
intersection of $P$ and $Q$, there are edges $e_v\in E(P)$ and $f_v\in E(Q)$
leaving $v$ such that $e_v\notin E(A_{\texttt{out}})$ and $f_v\notin E(A_{\texttt{in}})$, and
there are edges $e_u\in E(P)$ and $f_u\in E(Q)$ entering $u$ such that
$e_u\notin E(A_{\texttt{out}})$ and $f_u\notin E(A_{\texttt{in}})$.
As $T_P\neq T_Q$ there is a terminal $t$ contained in one of the two sets but
not the other. Consider the case when $t\in T_Q\setminus T_P$, i.e.\ there is a
$v\to t$ path in $A_{\texttt{out}}$ but no $t\to u$ path in~$A_{\texttt{in}}$. The latter implies
that $e_v$ cannot be reached from $t$ in $A_{\texttt{in}}$, as the $u\to v$ path $I$
contains no branching point of $A_{\texttt{in}}$. The in-arborescence $A_{\texttt{in}}$ does
however contain a $t\to r$ path from $t$ to the root~$r$. Since $e_v\notin
E(A_{\texttt{out}})$, this means that the root $r$ can be reached from $v$ through the
$v\to t$ path of $A_{\texttt{out}}$ and the $t\to r$ path without passing through~$e_v$.
Hence $e_v$ can safely be removed without making the solution $M$ infeasible.
This contradicts the minimality of $M$.
In case $t\in T_P\setminus T_Q$ a symmetric argument shows that the edge $f_u$
is redundant in $M$, which again contradicts its minimality. We have thus shown
that $P$ and $Q$ are the only essential paths that meet in any endpoint of a
shared path in the $W_M$-component $H$. Hence $H$ consists of exactly these two
paths $P$ and $Q$, and the claim follows.
\renewcommand{\qedsymbol}{$\lrcorner$}\end{proof}
Consider the case when there is at most one shared path of $M$ that intersects
with a $W_M$-component~$H$. Since by Claim~\ref{clm:2paths}, $H$ consists of at
most
two essential paths, it is easy to see that in this case $H$ is a tree, and
thus its treewidth is $1$. If at least two shared paths of $M$ intersect with
$H$, by Claim~\ref{clm:2paths}, $H$~contains edges of two essential paths $P$ and
$Q$ of $A_{\texttt{in}}$ and $A_{\texttt{out}}$ respectively. To show that in this case the
treewidth of $H$ is at most $2$, we need the following observation on $P$ and
$Q$:
\begin{claim}\label{clm:order}
Let $I_1,\ldots,I_h$ be the connected components in the intersection of $P$ and
$Q$, ordered in the way that $P$ visits them, i.e.\ for any $i\in\{1,\ldots,
h-1\}$ there is a subpath of $P$ with prefix $I_i$ and suffix~$I_{i+1}$. The
path $Q$ visits the shared paths in the opposite order, i.e.\ for any
$i\in\{1,\ldots, h-1\}$ there is a subpath of $Q$ with suffix $I_i$ and prefix
$I_{i+1}$.
\end{claim}
\begin{proof}
Assume this is not the case, so that there is an index $i\in\{1,\ldots, h-1\}$
such that both $P$ and $Q$ contain subpaths with prefix $I_i$ and suffix
$I_{i+1}$. This means that there are edges $e\in E(P)\setminus E(Q)$ and $f\in
E(Q)\setminus E(P)$ which share the last vertex $u$ of $I_i$. Hence $Q$ contains
a $u\to v$ subpath $Q'$ to the first vertex $v$ of $I_{i+1}$, which does not contain
the edge $e$, and also $P$ contains a $u\to v$ subpath $P'$, which does not
contain the edge $f$. As $Q$, and therefore also $Q'$, contains no branching
point of $A_{\texttt{out}}$, any terminal reachable from $u$ through $Q'$ in $A_{\texttt{out}}$ is
also reachable from $u$ via the detour $P'$. We can therefore remove edge $f\in
E(A_{\texttt{out}})$ without violating the feasibility of $M$. This however contradicts
its minimality.
\renewcommand{\qedsymbol}{$\lrcorner$}
\end{proof}
\begin{figure}[t]
\centering
\resizebox{10.0cm}{!}{
\begin{tikzpicture}[rotate=90]
\centering
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(4,0)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(8,0)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(4,-4)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(8,-4)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(4,-8)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(8,-8)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(4,-12)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\draw [black] plot [only marks, mark size=7, mark=*] coordinates {(8,-12)}
node[label={[xshift=0mm,yshift=0mm] $$}] {} ;
\path (4,0) node(a) {} (8,0) node(b) {};
\draw[line width=1.5mm,black,->] (a) -- (b);
\path (4,-4) node(a) {} (8,-4) node(b) {};
\draw[line width=1.5mm,black,->] (b) -- (a);
\path (4,-8) node(a) {} (8,-8) node(b) {};
\draw[line width=1.5mm,black,->] (a) -- (b);
\path (4,-12) node(a) {} (8,-12) node(b) {};
\draw[line width=1.5mm,black,->] (b) -- (a);
\path (4,4) node(a) {} (4,0) node(b) {};
\draw[line width=1.5mm,lightgray,->] (a) -- (4,0.2);
\path (8,0) node(a) {} (8,-4) node(b) {};
\draw[line width=1.5mm,lightgray,->] (8,-0.2) -- (8,-3.8);
\path (4,-4) node(a) {} (4,-8) node(b) {};
\draw[line width=1.5mm,lightgray,->] (4,-4.2) -- (4,-7.8);
\path (8,-8) node(a) {} (8,-12) node(b) {};
\draw[line width=1.5mm,lightgray,->] (8,-8.2) -- (8,-11.8);
\path (4,-12) node(a) {} (4,-16) node(b) {};
\draw[line width=1.5mm,lightgray,->] (4,-12.2) -- (4,-15.8);
\path (8,0) node(a) {} (8,4) node(b) {};
\draw[line width=1.5mm,loosely dotted,->] (8,0.3) -- (8,4);
\path (4,-4) node(a) {} (4,0) node(b) {};
\draw[line width=1.5mm,loosely dotted,->] (a) -- (4,-0.3);
\path (8,-8) node(a) {} (8,-4) node(b) {};
\draw[line width=1.5mm,loosely dotted,->] (a) -- (8,-4.3);
\path (4,-12) node(a) {} (4,-8) node(b) {};
\draw[line width=1.5mm,loosely dotted,->] (a) -- (4,-8.3);
\path (8,-16) node(a) {} (8,-12) node(b) {};
\draw[line width=1.5mm,loosely dotted,->] (a) -- (8,-12.3);
\end{tikzpicture}
}
\caption{Let $H$ be a $W_M$-component such that at least two shared paths of
$M$ intersect with $H$. By Claim~\ref{clm:2paths}, $H$~contains edges of two
essential paths $P$ and $Q$ of $A_{\texttt{in}}$ and $A_{\texttt{out}}$, respectively.
Claim~\ref{clm:order} shows that $H$ looks roughly as shown in the figure: the
path $P$ consists of the light and black edges, and $Q$ consists of the dotted
and black edges. The black paths are exactly the shared paths. Note, though,
that a shared path may have length~$0$.
}
\label{fig:tw-2}
\end{figure}
Claim~\ref{clm:order} implies that the structure of $H$ is roughly as shown in
Figure~\ref{fig:tw-2} (the black edges shown in Figure~\ref{fig:tw-2} correspond
to paths of length~$0$ or more, while the light and dotted edges correspond to paths
of length at least~$1$).
If we contract each path of length at least $1$ to a path of length $1$, then
the resulting graph is planar and all vertices belong to the outer face. Such
graphs are called outerplanar graphs. In other words, $H$ is a subdivision of
an outerplanar graph. Lemma~\ref{lem:tw-outerplanar} shows that the treewidth
of a subdivision of an outerplanar graph is at most~$2$,
which proves Lemma~\ref{lem:W}.
\section{W[1]-hardness for SCSS in planar graphs}
The goal of this section is to prove Theorem~\ref{thm:scss-main-hardness-planar-graphs}. We reduce from the \textsc{Grid Tiling}\xspace problem\footnote{The \textsc{Grid Tiling}\xspace problem has been defined in two (symmetrical) ways in the literature: either the first coordinate or the second coordinate remains the same in a row. Here, we follow the notation of~\cite{daniel-grid-tiling}, but the other definition also appears in some places (e.g.~\cite{fpt-book}).} introduced by Marx~\cite{daniel-grid-tiling}:
\begin{center}
\noindent\framebox{\begin{minipage}{6.00in}
\textbf{$k\times k$~\textsc{Grid Tiling}}\\
\emph{Input}: Integers $k, n$, and $k^2$ non-empty sets $S_{i,j}\subseteq [n]\times [n]$ where $1\leq i, j\leq k$\\
\emph{Question}: Does there exist, for each $1\leq i, j\leq k$, an entry $\gamma_{i,j}\in S_{i,j}$ such that
\begin{itemize}
\item If $\gamma_{i,j}=(x,y)$ and $\gamma_{i,j+1}=(x',y')$ then $x=x'$.
\item If $\gamma_{i,j}=(x,y)$ and $\gamma_{i+1,j}=(x',y')$ then $y=y'$.
\end{itemize}
\end{minipage}}
\end{center}
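To make the two compatibility conditions concrete, the following minimal sketch (ours, purely illustrative; the function name and the 0-based indexing are our own conventions) checks whether a candidate choice of entries is a solution:
\begin{verbatim}
# Illustrative sketch: verify the Grid Tiling conditions for a candidate
# k x k array gamma of pairs (x, y), where S[i][j] is a set of pairs.
def is_grid_tiling_solution(k, S, gamma):
    for i in range(k):
        for j in range(k):
            if gamma[i][j] not in S[i][j]:
                return False
            x, y = gamma[i][j]
            # first coordinates agree within a row
            if j + 1 < k and x != gamma[i][j + 1][0]:
                return False
            # second coordinates agree within a column
            if i + 1 < k and y != gamma[i + 1][j][1]:
                return False
    return True
\end{verbatim}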
Under ETH~\cite{eth,eth-2}, it was shown by Chen et
al.~\cite{chen-hardness} that $k$-\textsc{Clique}\footnote{The
$k$-\textsc{Clique} problem asks whether there is a clique of size $\geq k$.}
does not admit an algorithm running in time $f(k)\cdot n^{o(k)}$ for any computable
function $f$. There is a simple reduction~\cite[Theorem 14.28]{fpt-book} from
$k$-\textsc{Clique} to $k\times k$~\textsc{Grid Tiling}\xspace implying the same runtime lower bound
for the latter problem.
To prove Theorem~\ref{thm:scss-main-hardness-planar-graphs}, we give a reduction which transforms the problem of $k\times k$
\textsc{Grid Tiling}\xspace into a planar instance of SCSS with $O(k^2)$ terminals.
We design two types of gadgets: the \emph{connector gadget} and the \emph{main gadget}. The reduction from \textsc{Grid Tiling}\xspace represents each cell of the grid with a copy of the main gadget, with a connector gadget between main gadgets that are adjacent either horizontally or vertically (see Figure~\ref{fig:big-picture}).
The proof of Theorem~\ref{thm:scss-main-hardness-planar-graphs} is divided into the following steps: In
Section~\ref{sec:connector:exist}, we first introduce the connector gadget, and Lemma~\ref{lem:connector-gadget} states the
existence of a particular type of connector gadget. In Section~\ref{sec:main:exists}, we introduce the main gadget, and
Lemma~\ref{lem:main-gadget} states the existence of a particular type of main gadget. Section~\ref{sec:construction-of-g^*-scss-planar} describes the construction of the planar instance $(G^*, T^*)$ of SCSS. The two directions of the reduction from \textsc{Grid Tiling}\xspace to SCSS are proved in Section~\ref{proof:scss-planar-easy} and Section~\ref{proof:scss-planar-hard}, respectively. Using Lemmas~\ref{lem:connector-gadget}
and~\ref{lem:main-gadget} as a black box, we prove Theorem~\ref{thm:scss-main-hardness-planar-graphs} in
Section~\ref{sec:main-scss-planar-proof-subsec}. The proofs of Lemmas~\ref{lem:connector-gadget} and~\ref{lem:main-gadget} are given in
Sections~\ref{connector:proof} and \ref{main:proof}, respectively.
\subsection{Existence of connector gadgets}
\label{sec:connector:exist}
A connector gadget $CG_{n}$ is a directed (embedded) planar graph with $O(n^2)$ vertices and positive integer weights\footnote{Weights are polynomial in $n$.} on its edges. It has a total of
$2n+2$ distinguished vertices divided into the following 3 types:
\begin{itemize}
\item The vertices $p, q$ are called \emph{internal-distinguished} vertices
\item The vertices $p_1, p_2, \ldots, p_n$ are called \emph{source-distinguished} vertices
\item The vertices $q_1, q_2, \ldots, q_n$ are called \emph{sink-distinguished} vertices
\end{itemize}
Let $P=\{p_1, p_2,\ldots, p_n\}$ and $Q=\{q_1, q_2,\ldots, q_n\}$. The vertices $P\cup Q$ appear in the clockwise order $p_1$, $\dots$, $p_n$, $q_n$, $\dots$, $q_1$ on the boundary of the gadget. In the connector gadget $CG_n$, every vertex in $P$ is a source
and has exactly one outgoing edge. Also every vertex in $Q$ is a sink and has exactly one incoming edge.
\begin{definition}
An edge set $E'\subseteq E(CG_{n})$ satisfies the \textbf{\emph{connectedness}} property if
each of the following four conditions holds for the graph $CG_{n}[E']$:
\begin{enumerate}
\item $p$ can be reached from some vertex in $P$
\item $q$ can be reached from some vertex in $P$
\item $p$ can reach some vertex in $Q$
\item $q$ can reach some vertex in $Q$
\end{enumerate}
\label{defn:connectedness}
\end{definition}
\begin{definition}
An edge set $E'$ satisfying the connectedness property \textbf{represents} an integer $i\in [n]$ if in $E'$ the only
outgoing edge from $P$ is the one incident to $p_i$ and the only incoming edge into $Q$ is the one incident to $q_i$.
\label{defn:represents-connector}
\end{definition}
The next lemma shows we can construct a particular type of connector gadget:
\begin{lemma}
Given an integer $n$, one can construct in polynomial time a connector gadget $CG_{n}$ and an integer $C^*_{n}$ such that the following two properties hold:\footnote{We use the notation $C^*_n$ to emphasize that $C^*$ depends only on $n$.}
\begin{enumerate}
\item For every $i\in [n]$, there is an edge set $E_i \subseteq E(CG_{n})$ of weight $C^*_n$ such that $E_i$ satisfies the
connectedness property and represents $i$. Note that, in particular, $E_i$ contains a $p_i \leadsto q_i$ path (via $p$ or $q$).
\item If there is an edge set $E'\subseteq E(CG_{n})$ such that $E'$ has weight at most $C^*_n$ and $E'$ satisfies the connectedness
property, then $E'$ has weight exactly $C^*_n$ and it represents some $i\in [n]$.
\end{enumerate}
\label{lem:connector-gadget}
\end{lemma}
\subsection{Existence of main gadgets}
\label{sec:main:exists}
A main gadget $MG$ is a directed (embedded) planar graph with $O(n^3)$ vertices and positive integer weights on its edges. It has $4n$ distinguished
vertices given by the following four sets:
\begin{itemize}
\item The set $L=\{\ell_1, \ell_2, \ldots, \ell_n\}$ of \emph{left-distinguished} vertices.
\item The set $R=\{r_1, r_2, \ldots, r_n\}$ of \emph{right-distinguished} vertices.
\item The set $T=\{t_1, t_2, \ldots, t_n\}$ of \emph{top-distinguished} vertices.
\item The set $B=\{b_1, b_2, \ldots, b_n\}$ of \emph{bottom-distinguished} vertices.
\end{itemize}
The distinguished vertices appear in the (clockwise) order $t_1$, $\dots$, $t_n$,
$r_1$, $\dots$, $r_n$, $b_n$, $\dots$, $b_1$, $\ell_n$, $\dots$,
$\ell_1$ on the boundary of the gadget. In the main gadget $MG$, every
vertex in $L\cup T$ is a source and has exactly one outgoing
edge. Also each vertex in $R\cup B$ is a sink and has exactly one
incoming edge.
\begin{definition}
An edge set $E'\subseteq E(MG)$ satisfies the \textbf{\emph{connectedness}} property if
each of the following four conditions holds for the graph $MG[E']$:
\begin{enumerate}
\item There is a directed path from some vertex in $L$ to $R\cup B$
\item There is a directed path from some vertex in $T$ to $R\cup B$
\item Some vertex in $R$ can be reached from $L\cup T$
\item Some vertex in $B$ can be reached from $L\cup T$
\end{enumerate}
\label{defn:source-sink-connectivity}
\end{definition}
\begin{definition}
An edge set $E'\subseteq E(MG)$ \textbf{represents} a pair $(i,j)\in
[n]\times [n]$ if each of the following five conditions holds:
\begin{itemize}
\item The only edge of $E'$ leaving $L$ is the one incident to $\ell_i$
\item The only edge of $E'$ entering $R$ is the one incident to $r_i$
\item The only edge of $E'$ leaving $T$ is the one incident to $t_j$
\item The only edge of $E'$ entering $B$ is the one incident to $b_j$
\item $E'$ contains an $\ell_i \leadsto r_i$ path and a $t_j \leadsto b_j$ path
\end{itemize}
\label{defn:represents-main}
\end{definition}
The next lemma shows we can construct a particular type of main gadget:
\begin{lemma}
Given a subset $S\subseteq [n]\times [n]$, one can construct in polynomial time a main gadget $MG_{S}$ and an integer
$M^*_n$ such that the following two properties hold:\footnote{We use the notation $M^*_n$ to emphasize that $M^*$ depends only on $n$, and not on the set $S$.}
\begin{enumerate}
\item For every $(i,j)\in S$ there is an edge set $E_{i,j} \subseteq E(MG_{S})$ of weight $M^*_n$ such that $E_{i,j}$
represents $(i,j)$. Note that the last condition of Definition~\ref{defn:represents-main} implies that $E_{i,j}$ satisfies the connectedness property.
\item If there is an edge set $E'\subseteq E(MG_{S})$ such that $E'$ has weight at most $M^*_n$ and satisfies the connectedness
property, then $E'$ has weight exactly $M^*_n$ and represents some $(i,j)\in S$.
\end{enumerate}
\label{lem:main-gadget}
\end{lemma}
\subsection{Construction of the SCSS instance}
\label{sec:construction-of-g^*-scss-planar}
In order to prove Theorem~\ref{thm:scss-main-hardness-planar-graphs}, we reduce from the \textsc{Grid Tiling} problem. The following assumption will be helpful in handling some of the
border cases of the gadget construction. We may assume that $1 < x,y< n$ holds for every $(x,y)\in S_{i,j}$: indeed, we can increase $n$ by two
and replace every $(x,y)$ by $(x+1,y+1)$ without changing the problem. This is just a minor technical modification\footnote{For the interested reader, what this modification does is to ensure no shortcut edge added in Section~\ref{subsec:description-of-edges-in-main-gadget} has either endpoint on the unbounded face of the planar embedding of the main gadget provided in Figure~\ref{fig:main-gadget}. This helps to streamline the proofs by avoiding the need to consider any special cases.} which is introduced to make some of the arguments in Section~\ref{main:proof} cleaner.
Given an instance $(k,n, \{S_{i,j}\ :\ i,j\in [k]\})$ of \textsc{Grid Tiling}\xspace, we construct an instance $(G^*, T^*)$ of SCSS the following way (see Figure~\ref{fig:big-picture}):
\begin{figure}[t]
\centering
\includegraphics[height=5in]{big-picture-new}
\caption{An illustration of the reduction from \textsc{Grid Tiling} to SCSS on planar graphs.
\label{fig:big-picture}}
\end{figure}
\begin{itemize}
\item We introduce a total of $k^2$ main gadgets and $2k(k+1)$ connector gadgets.
\item For every set $S_{i,j}$ in the \textsc{Grid Tiling}\xspace instance, we construct a main gadget $MG_{i,j}$ using
Lemma~\ref{lem:main-gadget} for the subset $S_{i,j}$.
\item Half of the connector gadgets have the same orientation as in Figure~\ref{fig:connector-gadget} (with the $p_i$ vertices on the top side and the $q_i$ vertices on the bottom side), and we call them
$HCG$ to denote \emph{horizontal connector gadgets}\footnote{The horizontal connector gadgets are so called because they connect things horizontally as seen by the reader.}. The other half of the connector gadgets are rotated anti-clockwise by 90 degrees
with respect to the orientation of Figure~\ref{fig:connector-gadget}, and we call them $VCG$ to denote \emph{vertical
connector gadgets}. The internal-distinguished vertices of the connector gadgets are shown in
Figure~\ref{fig:big-picture}.
\item For each $1\leq i,j\leq k$, the main gadget $MG_{i,j}$ is surrounded by the following four connector gadgets:
\begin{enumerate}
\item The \emph{vertical connector gadget} $VCG_{i,j}$ is on the top and $VCG_{i+1,j}$ is on the bottom.
Identify (or glue together) each sink-distinguished vertex of $VCG_{i,j}$ with the top-distinguished vertex of
$MG_{i,j}$ of the same index. Similarly, identify each source-distinguished vertex of $VCG_{i+1,j}$ with the
bottom-distinguished vertex of $MG_{i,j}$ of the same index.
\item The \emph{horizontal connector gadget} $HCG_{i,j}$ is on the left and $HCG_{i,j+1}$ is on the right.
Identify (or glue together) each sink-distinguished vertex of $HCG_{i,j}$ with the left-distinguished vertex
of $MG_{i,j}$ of the same index. Similarly, identify each source-distinguished vertex of $HCG_{i,j+1}$ with
the right-distinguished vertex of $MG_{i,j}$ of the same index.
\end{enumerate}
\item We introduce two special vertices $x^*, y^*$ and add an edge $(x^*, y^*)$ of weight 0.
\item For each $1\leq i\leq k$, add an edge of weight 0 from $y^*$ to each source-distinguished vertex of the vertical connector gadget $VCG_{1,i}$.
\item For each $1\leq j\leq k$, add an edge of weight 0 from $y^*$ to each source-distinguished vertex of the horizontal connector gadget $HCG_{j,1}$.
\item For each $1\leq i\leq k$, add an edge of weight 0 from each sink-distinguished vertex of the vertical connector gadget $VCG_{k+1,i}$ to $x^*$.
\item For each $1\leq j\leq k$, add an edge of weight 0 from each sink-distinguished vertex of the horizontal connector gadget $HCG_{j,k+1}$ to $x^*$.
\item For each $i\in [k], j\in [k+1]$, denote the two internal-distinguished vertices of $HCG_{i,j}$ by $\{p^{h}_{i,j}, q^{h}_{i,j}\}$
\item For each $i\in [k+1], j\in [k]$, denote the two internal-distinguished vertices of $VCG_{i,j}$ by $\{p^{v}_{i,j},
q^{v}_{i,j}\}$
\item The set of terminals $T^*$ for the SCSS instance on $G^*$ is $\{x^*, y^*\}\cup \{p^{h}_{i,j}, q^{h}_{i,j}\ |\ 1\leq i\leq k,\
1\leq j\leq k+1\}\cup \{p^{v}_{i,j}, q^{v}_{i,j}\ |\ 1\leq i\leq k+1,\ 1\leq j\leq k\}$.
\item We note that the total number of terminals is $|T^*|=4k(k+1)+2 = O(k^2)$.
\item The edge set of $G^*$ is a disjoint union of
\begin{itemize}
\item Edges of main gadgets
\item Edges of horizontal connector gadgets
\item Edges of vertical connector gadgets
\item Edges from $y^*$ to source-distinguished vertices of the vertical connector gadgets $VCG_{1,i}$ (for each $i\in [k]$), and from $y^*$ to source-distinguished vertices of horizontal connector gadgets $HCG_{j,1}$ (for each $j\in [k]$)
\item Edges from sink-distinguished vertices of the vertical connector gadgets $VCG_{k+1,i}$ (for each $i\in [k]$) to $x^*$, and from sink-distinguished vertices of horizontal connector gadgets $HCG_{j,k+1}$ (for each $j\in [k]$) to $x^*$
\item The edge $(x^*, y^*)$
\end{itemize}
\end{itemize}
Define the following quantity:
\begin{equation}
\label{eqn:w-star} W^*_n = k^{2}\cdot M^*_n + 2k(k+1)\cdot C^*_n.
\end{equation}
In the next two sections, we show that \textsc{Grid Tiling}\xspace has a solution if and only if the SCSS instance $(G^*,T^*)$ has a solution of weight at
most $W^{*}_n$.
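The bookkeeping of the construction fits in a few lines. The following sketch (ours, purely illustrative; $C^*_n$ and $M^*_n$ are treated as opaque constants supplied by Lemma~\ref{lem:connector-gadget} and Lemma~\ref{lem:main-gadget}) computes the number of terminals and the budget $W^*_n$ of Equation~\ref{eqn:w-star}:
\begin{verbatim}
# Illustrative sketch of the sizes and the budget of the reduction.
def reduction_parameters(k, C_star, M_star):
    num_main = k * k                     # one main gadget per cell S_{i,j}
    num_connector = 2 * k * (k + 1)      # horizontal + vertical connectors
    num_terminals = 4 * k * (k + 1) + 2  # two per connector, plus x* and y*
    W_star = num_main * M_star + num_connector * C_star  # Equation (w-star)
    return num_terminals, W_star
\end{verbatim}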
\subsection{\textsc{Grid Tiling}\xspace has a solution $\Rightarrow$ SCSS has a solution of weight $\leq W^*_n$}
\label{proof:scss-planar-easy}
\begin{lemma}
If the \textsc{Grid Tiling}\xspace instance $(k,n, \{S_{i,j}\ :\ i,j\in [k]\})$ has a solution, then the SCSS instance $(G^*, T^*)$ has a solution of weight at most $W^*_n$.
\label{lem:scss-planar-easy}
\end{lemma}
\begin{proof}
Since \textsc{Grid Tiling}\xspace has a solution, for each $1\leq i,j\leq k$ there is an entry $(x_{i,j}, y_{i,j})=\gamma_{i,j}\in S_{i,j}$ such that
\begin{itemize}
\item For every $i\in [k]$, we have $x_{i,1}=x_{i,2}=x_{i,3}=\ldots=x_{i,k} = \alpha_i$
\item For every $j\in [k]$, we have $y_{1,j}=y_{2,j}=y_{3,j}=\ldots=y_{k,j} = \beta_j$
\end{itemize}
We build a solution $E^*$ for the SCSS instance $(G^*, T^*)$ and show that it has weight at most $W^*_n$. In the edge set $E^*$, we
take the following edges:
\begin{enumerate}
\item The edge $(x^*, y^*)$ which has weight 0.
\item For each $j\in [k]$ the edge of weight 0 from $y^*$ to the source-distinguished vertex of $VCG_{1,j}$ of index $\beta_j$, and the edge of weight 0 from the sink-distinguished vertex of $VCG_{k+1,j}$ of index $\beta_j$ to $x^*$.
\item For each $i\in [k]$ the edge of weight 0 from $y^*$ to the source-distinguished vertex of $HCG_{i,1}$ of index $\alpha_i$, and the edge of weight 0 from the sink-distinguished vertex of $HCG_{i,k+1}$ of index $\alpha_i$ to $x^*$
\item For each $1\leq i,j\leq k$ for the main gadget $MG_{i,j}$, use Lemma~\ref{lem:main-gadget}(1) to generate a solution
$E^{M}_{i,j}$ which has weight $M^*_n$ and represents $(\alpha_i, \beta_j)$.
\item For each $1\leq i\leq k$ and $1\leq j\leq k+1$ for the horizontal connector gadget $HCG_{i,j}$, use
Lemma~\ref{lem:connector-gadget}(1) to generate a solution $E^{HC}_{i,j}$ of weight $C^*_n$ which represents $\alpha_i$.
\item For each $1\leq j\leq k$ and $1\leq i\leq k+1$ for the vertical connector gadget $VCG_{i,j}$, use
Lemma~\ref{lem:connector-gadget}(1) to generate a solution $E^{VC}_{i,j}$ of weight $C^*_n$ which represents $\beta_j$.
\end{enumerate}
The weight of $E^*$ is $k^2\cdot M^*_n + k(k+1)\cdot C^*_n+ k(k+1)\cdot
C^*_n = W^*_n$. It remains to show that $E^*$ is a solution for the SCSS
instance $(G^*, T^*)$. Since we have already included the edge $(x^*,
y^*)$, it is enough to show that for any terminal $t\in T^{*}\setminus
\{x^*, y^*\}$, both a $t\leadsto x^*$ path and a $y^*\leadsto t$ path exist in
$E^*$. Then for any two terminals $t_1, t_2$, there is a $t_1 \leadsto
t_2$ path given by $t_1\leadsto x^*\rightarrow y^*\leadsto t_2$.
We now show the existence of both a $t\leadsto x^*$ path and a $y^*\leadsto t$ path in $E^*$ when $t$ is a terminal in a vertical connector gadget. Without loss of generality, let $t$ be the terminal $p^{v}_{i,j}$ for some $1\leq i\leq k+1$ and $1\leq j\leq k$.
\begin{itemize}
\item \underline{Existence of $p^{v}_{i,j}\leadsto x^*$ path in $E^*$}: By Lemma~\ref{lem:connector-gadget}(1), the terminal $p^{v}_{i,j}$ can
reach the sink-distinguished vertex of $VCG_{i,j}$ which has the index $\beta_{j}$. If $i=k+1$, then $E^*$ contains a weight-0 edge from this vertex to $x^*$ and we are done, so assume $i\leq k$. This vertex is the
top-distinguished vertex of the index $\beta_j$ of the main gadget $MG_{i,j}$. By Definition~\ref{defn:represents-main}, there
is a path from this vertex to the bottom-distinguished vertex of the index $\beta_j$ of the main gadget $MG_{i,j}$.
However this vertex is exactly the same as the source-distinguished vertex of the index $\beta_j$ of $VCG_{i+1,j}$. By
Lemma~\ref{lem:connector-gadget}(1), the source-distinguished vertex of the index $\beta_j$ of $VCG_{i+1,j}$ can reach
the sink-distinguished vertex of the index $\beta_j$ of $VCG_{i+1,j}$. This vertex is exactly the top-distinguished
vertex of $MG_{i+1,j}$. Continuing in this way we can reach the source-distinguished vertex of the index $\beta_j$ of
$VCG_{k+1,j}$. By Lemma~\ref{lem:connector-gadget}(1), this vertex can reach the sink-distinguished vertex of the
index $\beta_j$ of $VCG_{k+1,j}$. But $E^*$ also contains an edge (of weight 0) from this sink-distinguished vertex to $x^*$, and hence there is a
$p^{v}_{i,j}\leadsto x^*$ path in $E^*$.
\item \underline{Existence of $y^*\leadsto p^{v}_{i,j}$ path in $E^*$}: Recall that $E^*$ contains an edge (of weight 0) from $y^*$ to the source-distinguished
vertex of the index $\beta_j$ of $VCG_{1,j}$. If $i=1$, then by Lemma~\ref{lem:connector-gadget}(1), there is a path
from this vertex to $p^{v}_{1,j}$. If $i\geq 2$, then by Lemma~\ref{lem:connector-gadget}(1), there is a path from
the source-distinguished vertex of the index $\beta_j$ of $VCG_{1,j}$ to the sink-distinguished vertex of the index
$\beta_j$ of $VCG_{1,j}$. But this is the top-distinguished vertex of $MG_{1,j}$ of the index $\beta_j$. By
Definition~\ref{defn:represents-main}, from this vertex we can reach the bottom-distinguished vertex of the index $\beta_{j}$
of $MG_{1,j}$. However, this vertex is exactly the source-distinguished vertex of index $\beta_j$ of $VCG_{2,j}$. Continuing this way we can reach the source-distinguished vertex of the index $\beta_j$ of $VCG_{i,j}$.
By Lemma~\ref{lem:connector-gadget}(1), from this vertex we can reach $p^{v}_{i,j}$. Hence there is a $y^*\leadsto p^{v}_{i,j}$
path in $E^*$.
\end{itemize}
The arguments when $t$ is a terminal in a horizontal connector gadget are similar, and we omit the details here.
\end{proof}
\subsection{SCSS has a solution of weight $\leq W^*_n$ $\Rightarrow$ \textsc{Grid Tiling}\xspace has a solution }
\label{proof:scss-planar-hard}
First we prove the following preliminary claim:
\begin{claim}
Let $E'$ be any solution to the SCSS instance $(G^*, T^*)$. Then
\begin{itemize}
\item $E'$ restricted to each connector gadget satisfies the connectedness property (see Definition~\ref{defn:connectedness}).
\item $E'$ restricted to each main gadget satisfies the connectedness property (see Definition~\ref{defn:source-sink-connectivity}).
\end{itemize}
\label{claim:scss-planar-gadgets-are-connected}
\end{claim}
\begin{proof}
First we show that the edge set $E'$ restricted to each connector gadget satisfies the connectedness property.
Consider a horizontal connector gadget $HCG_{i,j}$ for some $1\leq j\leq k+1, 1\leq i\leq k$. This gadget contains two terminals: $p_{i,j}^{h}$ and $q_{i,j}^{h}$. The only incoming edges from $G^*\setminus HCG_{i,j}$ into $HCG_{i,j}$ are incident onto the source-distinguished vertices of $HCG_{i,j}$, and the only outgoing edges from $HCG_{i,j}$ into $G^*\setminus HCG_{i,j}$ are incident on the sink-distinguished vertices of $HCG_{i,j}$. Since $E'$ is a solution of the SCSS instance $(G^*, T^*)$, it follows that $E'$ contains a path from $p_{i,j}^{h}$ to every terminal in $T^* \setminus \{p_{i,j}^{h}, q_{i,j}^{h}\}$. Since the only outgoing edges from $HCG_{i,j}$ into $G^*\setminus HCG_{i,j}$ are incident on the sink-distinguished vertices of $HCG_{i,j}$, it follows that $p_{i,j}^{h}$ can reach some sink-distinguished vertex of $HCG_{i,j}$ in the solution $E'$. The other three conditions of Definition~\ref{defn:connectedness} can be verified similarly, and hence $E'$ restricted to each connector gadget satisfies the
connectedness property.
Next we argue that $E'$ restricted to each main gadget satisfies the connectedness property. Consider a main gadget
$MG_{i,j}$. Since $E'$ is a solution for the SCSS instance $(G^*, T^*)$, it follows that the terminal $p_{i,j}^{h}$ from $HCG_{i,j}$ is able to reach the other terminals of $T^*$. However, the only outgoing edges from $HCG_{i,j}$ into $G^*\setminus HCG_{i,j}$ are incident on the sink-distinguished vertices of $HCG_{i,j}$. Moreover, each sink-distinguished vertex of $HCG_{i,j}$ is identified with a left-distinguished vertex of $MG_{i,j}$ of the same index. Hence, these outward paths from $p_{i,j}^{h}$ to other terminals of $T^*$ must continue through the left-distinguished vertices of $MG_{i,j}$. However, the only outgoing edges from $MG_{i,j}$ into $G^*\setminus MG_{i,j}$ are incident on the right-distinguished vertices or bottom-distinguished vertices of $MG_{i,j}$. Hence, some left-distinguished vertex of $MG_{i,j}$ can reach some vertex in the set given by the union of right-distinguished and bottom-distinguished vertices of $MG_{i,j}$.
Hence the first condition of Definition~\ref{defn:source-sink-connectivity} is satisfied. Similarly it can be shown the other three conditions
of Definition~\ref{defn:source-sink-connectivity} also hold, and hence $E'$ restricted to each main gadget satisfies the
connectedness property.
\end{proof}
Now we are ready to prove the following lemma:
\begin{lemma}
If the SCSS instance $(G^*, T^*)$ has a solution $E''$ of weight at most $W^*_n$, then the \textsc{Grid Tiling}\xspace instance $(k,n, \{S_{i,j}\ :\ i,j\in [k]\})$ has a solution.
\label{lem:scss-planar-hard}
\end{lemma}
\begin{proof}
By Claim~\ref{claim:scss-planar-gadgets-are-connected}, the edge set $E''$ restricted to any connector gadget satisfies the
connectedness property and the edge set $E''$ restricted to any main gadget satisfies the connectedness property. Let $\mathcal{C}$ and $\mathcal{M}$ be the sets of connector and main gadgets respectively. Recall that $|\mathcal{C}|=2k(k+1)$ and $|\mathcal{M}|=k^2$. Recall that we have defined $W^*_n$ as $k^{2}\cdot M^*_n + 2k(k+1)C^*_n$. Let $\mathcal{C}'\subseteq \mathcal{C}$ be the set of connector gadgets that have weight at most $C^*_n$ in $E''$. By Lemma~\ref{lem:connector-gadget}(2), each connector gadget from the set $\mathcal{C}'$ has weight exactly $C^*_n$. Since all edge-weights in connector gadgets are positive integers, each connector gadget from the set $\mathcal{C}\setminus \mathcal{C}'$ has weight at least $C^{*}_n+1$. Similarly, let $\mathcal{M}'\subseteq \mathcal{M}$ be the set of main gadgets which have weight at most $M^*_n$ in~$E''$. By Lemma~\ref{lem:main-gadget}(2), each main gadget from the set $\mathcal{M}'$ has weight exactly $M^*_n$. Since all edge-weights in main gadgets are positive integers, each main gadget from the set $\mathcal{M}\setminus \mathcal{M}'$ has weight at least $M^{*}_n+1$. As any two gadgets are pairwise edge-disjoint, we have
\begin{align*}
W^*_n &= k^{2}\cdot M^*_n + 2k(k+1)C^*_n \\
&\geq |\mathcal{M}\setminus \mathcal{M}'|\cdot (M^{*}_n+1) + |\mathcal{M}'|\cdot M^{*}_n + |\mathcal{C}\setminus \mathcal{C}'|\cdot (C^{*}_n+1) + |\mathcal{C}'|\cdot C^{*}_n \\
&= |\mathcal{M}|\cdot M^*_n + |\mathcal{C}|\cdot C^*_n + |\mathcal{M}\setminus \mathcal{M}'| + |\mathcal{C}\setminus \mathcal{C}'| \\
&= k^{2}\cdot M^*_n + 2k(k+1)\cdot C^*_n + |\mathcal{M}\setminus \mathcal{M}'| + |\mathcal{C}\setminus \mathcal{C}'|\\
&= W^*_n + |\mathcal{M}\setminus \mathcal{M}'| + |\mathcal{C}\setminus \mathcal{C}'|.
\end{align*}
This implies $|\mathcal{M}\setminus \mathcal{M}'| = 0 = |\mathcal{C}\setminus \mathcal{C}'|$. However, we had $\mathcal{M'}\subseteq \mathcal{M}$ and $\mathcal{C'}\subseteq \mathcal{C}$. Therefore, $\mathcal{M'}= \mathcal{M}$ and $\mathcal{C'}= \mathcal{C}$. Hence in $E''$, each connector gadget has weight $C^*_n$ and each main gadget has weight $M^*_n$.
From
Lemma~\ref{lem:connector-gadget}(2) and Lemma~\ref{lem:main-gadget}(2), we have
\begin{itemize}
\item For each vertical connector gadget $VCG_{i,j}$, the restriction of the edge set $E''$ to $VCG_{i,j}$ represents an
integer $\beta_{i,j}\in [n]$ where $i\in [k+1], j\in [k]$.
\item For each horizontal connector gadget $HCG_{i,j}$, the restriction of the edge set $E''$ to $HCG_{i,j}$ represents an
integer $\alpha_{i,j}\in [n]$ where $i\in [k], j\in [k+1]$.
\item For each main gadget $MG_{i,j}$, the restriction of the edge set $E''$ to $MG_{i,j}$ represents an ordered pair $(\alpha'_{i,j}, \beta'_{i,j})\in S_{i,j}$ where $i, j\in [k]$.
\end{itemize}
Consider the main gadget $MG_{i,j}$ for any $1\leq i,j\leq k$. We can make the following observations:
\begin{itemize}
\item \underline{$\beta_{i,j}=\beta'_{i,j}$}: By Lemma~\ref{lem:connector-gadget}(2) and
Definition~\ref{defn:represents-connector}, the terminal vertices in $VCG_{i,j}$ can exit the vertical connector
gadget only via the unique edge entering the sink-distinguished vertex of index $\beta_{i,j}$. By
Lemma~\ref{lem:main-gadget}(2) and Definition~\ref{defn:represents-main}, the only edge in $E''$ incident to any
top-distinguished vertex of $MG_{i,j}$ is the unique edge leaving the top-distinguished vertex of the index
$\beta'_{i,j}$. Hence if $\beta_{i,j}\neq \beta'_{i,j}$ then the terminals in $VCG_{i,j}$ will not be able to exit $VCG_{i,j}$ and reach
other terminals.
\item \underline{$\beta'_{i,j}=\beta_{i+1,j}$}: By Lemma~\ref{lem:connector-gadget}(2) and
Definition~\ref{defn:represents-connector}, the unique edge entering $VCG_{i+1,j}$ is the edge entering the
source-distinguished vertex of the index $\beta_{i+1,j}$. By Lemma~\ref{lem:main-gadget}(2) and
Definition~\ref{defn:represents-main}, the only edge in $E''$ incident to any bottom-distinguished vertex of
$MG_{i,j}$ is the unique edge entering the bottom-distinguished vertex of index $\beta'_{i,j}$. Hence if
$\beta'_{i,j}\neq \beta_{i+1,j}$, then the terminals in $VCG_{i+1,j}$ cannot be reached from the other terminals.
\item \underline{$\alpha_{i,j}=\alpha'_{i,j}$}: By Lemma~\ref{lem:connector-gadget}(2) and
Definition~\ref{defn:represents-connector}, the paths starting at the terminal vertices in $HCG_{i,j}$ can leave the horizontal connector
gadget only via the unique edge entering the sink-distinguished vertex of index $\alpha_{i,j}$. By
Lemma~\ref{lem:main-gadget}(2) and Definition~\ref{defn:represents-main}, the only edge in $E''$ incident to any
left-distinguished vertex of $MG_{i,j}$ is the unique edge leaving the left-distinguished vertex of the index
$\alpha'_{i,j}$. Hence if $\alpha_{i,j}\neq \alpha'_{i,j}$ then the terminals in $HCG_{i,j}$ will not be able to reach
other terminals.
\item \underline{$\alpha'_{i,j}=\alpha_{i,j+1}$}: By Lemma~\ref{lem:connector-gadget}(2) and
Definition~\ref{defn:represents-connector}, the unique edge entering $HCG_{i,j+1}$ is the edge entering the
source-distinguished vertex of index $\alpha_{i,j+1}$. By Lemma~\ref{lem:main-gadget}(2) and
Definition~\ref{defn:represents-main}, the only edge in $E''$ incident to any right-distinguished vertex of $MG_{i,j}$
is the unique edge entering the right-distinguished vertex of index $\alpha'_{i,j}$. Hence if
$\alpha'_{i,j}\neq \alpha_{i,j+1}$, then the terminals in $HCG_{i,j+1}$ cannot be reached from the other terminals.
\end{itemize}
We claim that for $1\leq i,j\leq k$, the entries $(\alpha'_{i,j}, \beta'_{i,j})\in S_{i,j}$ form a solution for the \textsc{Grid Tiling}\xspace instance.
For this we need to check two conditions:
\begin{itemize}
\item \underline{$\alpha'_{i,j} = \alpha'_{i,j+1}$}: This holds because $\alpha_{i,j} = \alpha'_{i,j} = \alpha_{i,j+1} =
\alpha'_{i,j+1}$.
\item \underline{$\beta'_{i,j} = \beta'_{i+1,j}$}: This holds because $\beta_{i,j} = \beta'_{i,j} = \beta_{i+1,j} =
\beta'_{i+1,j}$.
\end{itemize}
This completes the proof of the lemma.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:scss-main-hardness-planar-graphs}}
\label{sec:main-scss-planar-proof-subsec}
Finally we are ready to prove Theorem~\ref{thm:scss-main-hardness-planar-graphs} which is restated below:
\begin{reptheorem}{thm:scss-main-hardness-planar-graphs}
The edge-unweighted version of the SCSS problem is W[1]-hard parameterized by the number of terminals $k$, even when the underlying undirected graph is planar. Moreover, under the ETH, the SCSS problem on planar graphs cannot be solved in $f(k)\cdot n^{o(\sqrt{k})}$ time where $f$ is any computable function, $k$ is the number of terminals
and $n$ is the number of vertices in the instance.
\end{reptheorem}
\begin{proof}
Each connector gadget has $O(n^2)$ vertices and $G^*$ has $O(k^2)$ connector gadgets. Each main gadget has $O(n^3)$ vertices and $G^*$ has $O(k^2)$ main gadgets. It is easy to see that the graph $G^*$ has $O(n^{3}k^{2})={\mathrm{poly}}(n,k)$ vertices. Moreover, the graph $G^*$ can be constructed in ${\mathrm{poly}}(n+k)$ time: recall that each connector gadget (Lemma~\ref{lem:connector-gadget}) and main gadget (Lemma~\ref{lem:main-gadget}) can be constructed in polynomial time. Each main gadget and connector gadget is planar, and any two gadgets are pairwise edge-disjoint. Moreover, the 0-weight edges incident on $x^*$ or $y^*$ do not affect planarity (see Figure~\ref{fig:big-picture} for a planar embedding). Hence, $G^*$ is planar.
It is known~\cite[Theorem 14.28]{fpt-book} that $k\times k$ \textsc{Grid Tiling}\xspace is W[1]-hard parameterized by $k$, and under ETH cannot be solved in $f(k)\cdot n^{o(k)}$ time for any computable function $f$. Combining the two directions from Section~\ref{proof:scss-planar-easy} and Section~\ref{proof:scss-planar-hard}, we get a parameterized reduction from $k\times k$ \textsc{Grid Tiling}\xspace to a planar instance of SCSS with $O(k^2)$ terminals. Hence, it follows that SCSS on planar graphs is W[1]-hard, and under ETH it cannot be solved in $f(k)\cdot n^{o(\sqrt{k})}$ time for any computable function $f$: an algorithm with running time $f(k)\cdot n^{o(\sqrt{k})}$, applied to instances with $O(k^2)$ terminals, would solve $k\times k$ \textsc{Grid Tiling}\xspace in $f'(k)\cdot n^{o(k)}$ time.
\end{proof}
This shows that the $2^{O(k)}\cdot n^{O(\sqrt{k})}$ algorithm for SCSS on planar graphs given in
Theorem~\ref{thm:algo-sqrt-k-h-minor-free} is asymptotically optimal.
\section{Proof of Lemma~\ref{lem:connector-gadget}: constructing connector gadgets} \label{connector:proof}
We prove Lemma~\ref{lem:connector-gadget} in this section, by constructing a connector gadget satisfying the specifications of Section~\ref{sec:connector:exist}.
\begin{figure}[H]
\centering
\def\svgwidth{\linewidth}\includesvg{connector-new}
\vspace{-35mm}
\caption{The connector gadget \label{fig:connector-gadget} for $n=3$. A set of edges representing 3 is highlighted in the figure.}
\end{figure}
\subsection{Different types of edges in connector gadget}
Before proving Lemma~\ref{lem:connector-gadget}, we first describe the construction of the connector gadget in more detail (see
Figure~\ref{fig:connector-gadget}). The connector gadget has $2n+4$ rows denoted by $R_0, R_1, R_2, \ldots, R_{2n+3}$ and $4n+1$ columns
denoted by $C_0, C_1, \ldots, C_{4n}$. Let us denote the vertex at the intersection of row $R_i$ and column $C_j$ by
$v_{i}^j$. We now describe the different kinds of edges present in the connector gadget.
\begin{enumerate}
\item \textbf{Source Edges}: For each $i\in [n]$, there is an edge $(p_i, v_{0}^{2i-1})$. These edges are together called source edges.
\item \textbf{Sink Edges}: For each $i\in [n]$, there is an edge $(v_{2n+3}^{2n+2i-1}, q_i)$. These edges are together called sink edges.
\item \textbf{Terminal Edges}: The edges incident to the terminals $p$ or $q$ are called terminal edges. The set of edges incident on $p$ is $\{(p, v_{2i+1}^{0})\ :\ i\in [n]\} \cup \{(v_{2i}^{0}, p)\ :\ i\in [n]\}$. The set of edges incident on $q$ is $\{(q, v_{2i+1}^{4n})\ :\ i\in [n]\} \cup \{(v_{2i}^{4n}, q)\ :\ i\in [n]\}$.
\item \textbf{Inrow Edges}:
\begin{itemize}
\item \underline{Inrow Up Edges}: For each $0\leq i\leq n+1$, we call the $\uparrow$ edges connecting vertices of row $R_{2i+1}$ to $R_{2i}$
inrow up edges. Explicitly, this set of edges is given by $\{(v_{2i+1}^{2j}, v_{2i}^{2j})\ :\ 0\leq j\leq 2n \}$.
\item \underline{Inrow Down Edges}: For each $0\leq i\leq n+1$, we call the $\downarrow$ edges connecting vertices of row $R_{2i}$ to
$R_{2i+1}$ inrow down edges. Explicitly, this set of edges is given by $\{(v_{2i}^{2j-1}, v_{2i+1}^{2j-1})\ :\ 1\leq j\leq 2n \}$.
\item \underline{Inrow Left Edges}: For each $0\leq i\leq 2n+3$, we call the $\leftarrow$ edges connecting vertices of row $R_{i}$ inrow
left edges. We explicitly list the set of inrow left edges for even-numbered and odd-numbered rows below:
\begin{itemize}
\item For each $0\leq i\leq n+1$, the set of inrow left edges for the row $R_{2i}$ is given by $\{(v_{2i}^{2j}, v_{2i}^{2j-1})\ :\ j\in [2n] \}$
\item For each $0\leq i\leq n+1$, the set of inrow left edges for the row $R_{2i+1}$ is given by $\{(v_{2i+1}^{2j-1}, v_{2i+1}^{2j-2})\ :\ j\in [2n] \}$
\end{itemize}
\item \underline{Inrow Right Edges}: For each $0\leq i\leq 2n+3$, we call the $\rightarrow$ edges connecting vertices of row $R_{i}$ inrow
right edges. We explicitly list the set of inrow right edges for even-numbered and odd-numbered rows below:
\begin{itemize}
\item For each $0\leq i\leq n+1$, the set of inrow right edges for the row $R_{2i}$ is given by $\{(v_{2i}^{2j-2}, v_{2i}^{2j-1})\ :\ j\in [2n] \}$
\item For each $0\leq i\leq n+1$, the set of inrow right edges for the row $R_{2i+1}$ is given by $\{(v_{2i+1}^{2j-1}, v_{2i+1}^{2j})\ :\ j\in [2n] \}$
\end{itemize}
\end{itemize}
\item \textbf{Interrow Edges}: For each $i\in [n+1]$ and each $j\in [2n]$, we subdivide the edge $(v_{2i-1}^{2j-1}, v_{2i}^{2j-1})$ by introducing a new vertex $w_{i}^{j}$ and adding the edges $(v_{2i-1}^{2j-1}, w_{i}^j)$ and $(w^{j}_i, v_{2i}^{2j-1})$. All these edges are together called interrow edges. Note that there are a total of $4n(n+1)$ interrow edges.
\item \textbf{Shortcuts}: There are $2n$ shortcut edges, namely $e_1, e_2, \ldots, e_n$ and $f_1, f_2, \ldots,f_n$. They are defined
as follows:
\begin{itemize}
\item The edge $e_i$ is given by $(v_{2n-2i+2}^{2i-2}, w_{n-i+1}^i)$.
\item The edge $f_i$ is given by $(w_{n-i+2}^{n+i}, v_{2n-2i+3}^{2n+2i})$.
\end{itemize}
\end{enumerate}
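As a sanity check on the enumeration above, the following sketch (ours, purely illustrative) tallies the number of edges of each kind, read off directly from the definitions:
\begin{verbatim}
# Illustrative edge counts for the connector gadget CG_n.
def connector_edge_counts(n):
    return {
        "source":      n,
        "sink":        n,
        "terminal":    4 * n,                  # 2n at p and 2n at q
        "inrow_up":    (n + 2) * (2 * n + 1),  # rows R_{2i+1} -> R_{2i}
        "inrow_down":  (n + 2) * 2 * n,        # rows R_{2i} -> R_{2i+1}
        "inrow_left":  (2 * n + 4) * 2 * n,    # 2n per row, 2n+4 rows
        "inrow_right": (2 * n + 4) * 2 * n,    # 2n per row, 2n+4 rows
        "interrow":    4 * n * (n + 1),        # n+1 types, 2n pairs each
        "shortcut":    2 * n,                  # e_1..e_n and f_1..f_n
    }
\end{verbatim}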
\subsection{Assigning weights in the connector gadget}
Fix the quantity $B=18n^2$. We assign weights to the edges as follows:
\begin{enumerate}
\item For $i\in [n]$, the source edge $(p_i, v_{0}^{2i-1})$ has weight $B^{5}+(n-i+1)$.
\item For $i\in [n]$, the sink edge $(v_{2n+3}^{2n+2i-1}, q_i)$ has weight $B^{5}+i$.
\item Each terminal edge has weight $B^4$.
\item Each inrow up edge has weight $B^3$.
\item Each interrow edge has weight $\dfrac{B^2}{2}$.
\item Each inrow right edge has weight $B$.
\item For each $i\in [n]$, the shortcut edge $e_i$ has weight $n\cdot i$.
\item For each $j\in [n]$, the shortcut edge $f_j$ has weight $n(n-j+1)$.
\item Each inrow left edge and inrow down edge has weight 0.
\end{enumerate}
Now we define the quantity $C^*_n$ appearing in the statement of Lemma~\ref{lem:connector-gadget}:
\begin{equation}
C^*_n = 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2.
\label{eqn:c-star}
\end{equation}
In the next two sections, we prove the two statements of Lemma~\ref{lem:connector-gadget}.
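For later reference, Equation~\ref{eqn:c-star} with $B=18n^2$ substituted can be evaluated mechanically; here is a sketch (ours, purely illustrative):
\begin{verbatim}
# Illustrative sketch: the budget C*_n of Equation (c-star) with B = 18n^2.
def connector_budget(n):
    B = 18 * n * n
    return (2 * B**5 + 4 * B**4 + (2 * n + 1) * B**3
            + (n + 1) * B**2 + (4 * n - 2) * B + (n + 1)**2)
\end{verbatim}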
\subsection{For every $i\in [n]$, there is a solution $E_i$ of weight $C^*_n$ that satisfies the connectedness property and represents $i$}
\label{subsection:connector-first-statement}
Let $E_i$ be the union of the following sets of edges:
\begin{itemize}
\item
Select the edges
$(p_i, v_{0}^{2i-1})$ and $(v_{2n+3}^{2n+2i-1}, q_i)$. This incurs a weight of $B^{5}+ (n-i+1) + B^{5}+
i = 2B^{5}+(n+1)$.
\item The two terminal edges $(p, v_{2n-2i+3}^{0})$ and $(v_{2n-2i+2}^{0}, p)$. This incurs a weight of $2B^4$.
\item The two terminal edges $(q, v_{2n-2i+3}^{4n})$ and $(v_{2n-2i+2}^{4n}, q )$. This incurs a weight of $2B^4$.
\item All $2n$ inrow right edges and $2n$ inrow left edges which occur between vertices of $R_{2n-2i+2}$. This incurs a
weight of $2n\cdot B$ since each inrow left edge has weight 0 and each inrow right edge has weight~$B$.
\item All $2n$ inrow right edges and $2n$ inrow left edges which occur between vertices of $R_{2n-2i+3}$. This incurs a weight of
$2n \cdot B$ since each inrow left edge has weight 0 and each inrow right edge has weight~$B$.
\item All the $2n+1$ inrow up edges that are between vertices of $R_{2n-2i+2}$ and $R_{2n-2i+3}$. These edges are given
by $(v_{2n-2i+3}^{2j}, v_{2n-2i+2}^{2j})$ for $0\leq j\leq 2n$. This incurs a weight of $(2n+1)B^3$.
\item All $2n$ inrow down edges that occur between vertices of row $R_{2n-2i+2}$ and row $R_{2n-2i+3}$. This incurs a weight of 0, since each inrow down edge has weight 0.
\item The vertically downward $v_{0}^{2i-1}\leadsto v_{2n-2i+3}^{2i-1}$ path $P_1$ formed by interrow edges and inrow down edges, and the vertically downward $v_{2n-2i+2}^{2n+2i-1}\leadsto v_{2n+3}^{2n+2i-1}$ path $P_2$ formed by interrow edges and inrow down edges.
These two paths together incur a total weight of $(n+1)B^2$, since the inrow down edges have weight 0.
\item The edges $e_i$ and $f_i$. This incurs a weight of $n\cdot i+ n(n-i+1) = n(n+1)$.
\end{itemize}
Finally, \textbf{remove} the two inrow right edges $(v_{2n-2i+2}^{2i-2},v_{2n-2i+2}^{2i-1})$ and $(v_{2n-2i+3}^{2n+2i-1}, v_{2n-2i+3}^{2n+2i})$ from
$E_i$. This saves a weight of $2B$. From the itemization above and Equation~\ref{eqn:c-star} it follows that the total weight of $E_i$
is exactly $C^*_n$. Note that even though we removed the edge $(v_{2n-2i+2}^{2i-2},v_{2n-2i+2}^{2i-1})$ we can still travel from $v_{2n-2i+2}^{2i-2}$ to $v_{2n-2i+2}^{2i-1}$ in $E_i$ using the edge $e_i$ as follows: take the path $v_{2n-2i+2}^{2i-2}\rightarrow w_{n-i+1}^i \rightarrow v_{2n-2i+2}^{2i-1}$. Similarly, even though we removed the edge
$(v_{2n-2i+3}^{2n+2i-1}, v_{2n-2i+3}^{2n+2i})$ we can still travel from $v_{2n-2i+3}^{2n+2i-1}$ to $v_{2n-2i+3}^{2n+2i}$ in
$E_i$ using the edge $f_i$ as follows: take the path $v_{2n-2i+3}^{2n+2i-1} \rightarrow w_{n-i+2}^{n+i} \rightarrow v_{2n-2i+3}^{2n+2i}$.
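The itemized accounting above can be double-checked mechanically. The following sketch (ours, purely illustrative) sums the contributions to the weight of $E_i$, which are independent of $i$, and compares against Equation~\ref{eqn:c-star}:
\begin{verbatim}
# Illustrative sanity check: the itemized weights of E_i sum to C*_n.
def weight_of_E_i(n):
    B = 18 * n * n
    total  = 2 * B**5 + (n + 1)  # one source edge and one sink edge
    total += 4 * B**4            # four terminal edges (two at p, two at q)
    total += (2 * n + 1) * B**3  # inrow up edges between the two rows
    total += (n + 1) * B**2      # interrow edges on the paths P_1 and P_2
    total += 4 * n * B           # inrow right edges in the two rows
    total += n * (n + 1)         # the shortcuts e_i and f_i
    total -= 2 * B               # the two removed inrow right edges
    return total

def connector_budget(n):         # Equation (c-star) with B = 18n^2
    B = 18 * n * n
    return (2 * B**5 + 4 * B**4 + (2 * n + 1) * B**3
            + (n + 1) * B**2 + (4 * n - 2) * B + (n + 1)**2)

assert all(weight_of_E_i(n) == connector_budget(n) for n in range(1, 100))
\end{verbatim}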
It remains to show that $E_i$ satisfies the connectedness property and that it represents $i$. It is easy to see that $E_i$ represents $i$,
since the only edge in $E_i$ which is incident to $P$ is the edge leaving $p_i$. Similarly, the only edge in $E_i$
incident to $Q$ is the one entering $q_i$. We show that the connectedness property holds as follows (recall Definition~\ref{defn:connectedness}):
\begin{enumerate}
\item There is a $p_i \leadsto p$ path in $E_i$: start with the source edge leaving $p_i$ and then follow the downward path $P_1$
from $v_{0}^{2i-1}$ to $v_{2n-2i+3}^{2i-1}$. Then travel towards the left from $v_{2n-2i+3}^{2i-1}$ to $p$ by
using inrow left, inrow up and inrow down edges from rows $R_{2n-2i+2}$ and $R_{2n-2i+3}$. Finally, use the edge
$(v_{2n-2i+2}^{0}, p)$.
\item For the existence of a $p_i \leadsto q$ path in $E_i$, we have seen above that there is a $p_i \leadsto
v_{2n-2i+3}^{2i-1}$ path. Then travel towards the right from $v_{2n-2i+3}^{2i-1}$ by using inrow right, inrow
up and inrow down edges from rows $R_{2n-2i+2}$ and $R_{2n-2i+3}$ to reach the vertex $v_{2n-2i+2}^{4n}$. The only potential issue is that the inrow right edge $(v_{2n-2i+3}^{2n+2i-1}, v_{2n-2i+3}^{2n+2i})$ is missing in $E_i$: however this is not a problem since we have the path $v_{2n-2i+3}^{2n+2i-1}\rightarrow w_{n-i+2}^{n+i} \rightarrow v_{2n-2i+3}^{2n+2i}$ in $E_i$. Finally, use the edge $(v_{2n-2i+2}^{4n}, q)$.
\item For the existence of a $p \leadsto q_i$ path in $E_i$, first use the edge $(p, v_{2n-2i+3}^{0})$. Then travel
towards the right by using inrow up, inrow right and inrow down edges from rows $R_{2n-2i+2}$ and $R_{2n-2i+3}$ to reach the vertex $v_{2n-2i+2}^{2n+2i-1}$. The only potential issue is that the inrow right edge $(v_{2n-2i+2}^{2i-2},v_{2n-2i+2}^{2i-1})$ is missing in $E_i$: however this is not a problem since we have the path $v_{2n-2i+2}^{2i-2}\rightarrow w_{n-i+1}^{i} \rightarrow v_{2n-2i+2}^{2i-1}$ in $E_i$. Then take the downward path $P_2$ from $v_{2n-2i+2}^{2n+2i-1}$ to
$v_{2n+3}^{2n+2i-1}$. Finally, use the sink edge $(v_{2n+3}^{2n+2i-1}, q_i)$ incident to $q_i$.
\item For the existence of a $q \leadsto q_i$ path in $E_i$, first use the terminal edge $(q, v_{2n-2i+3}^{4n})$. Then travel
towards the left by using inrow up, inrow left and inrow down edges from rows $R_{2n-2i+2}$ and $R_{2n-2i+3}$ until you reach
the vertex $v_{2n-2i+2}^{2n+2i-1}$. Then take the downward path $P_2$ from $v_{2n-2i+2}^{2n+2i-1}$ to
$v_{2n+3}^{2n+2i-1}$. Finally, use the sink edge $(v_{2n+3}^{2n+2i-1}, q_i)$ incident to $q_i$.
\end{enumerate}
Therefore, $E_i$ indeed satisfies the connectedness property.
\subsection{$E'$ satisfies the connectedness property and has weight at most $C^*_n \Rightarrow E'$ represents some $\beta\in [n]$ and has weight exactly $C^*_n$}
\label{subsection:connector-second-statement}
Next we show that if a set of edges $E'$ satisfies the connectedness property and has weight at most $C^*_n$, then in fact the weight of $E'$ is exactly $C^*_n$ and it represents some
$\beta\in [n]$.
We do this via the following series of claims and observations.
\begin{claim}
\label{claim:one-in-out} $E'$ contains exactly one source edge and one sink edge.
\end{claim}
\begin{proof}
Since $E'$ satisfies the connectedness property it must contain at least one source edge and at least one sink edge. Without
loss of generality, suppose that there are at least two source edges in $E'$. Then the weight of $E'$ is at least the sum of the weights
of these two source edges plus the weight of at least one sink edge.
Thus if $E'$ contains at least two source edges, then its weight is at least $3B^{5}$. However, from Equation~\ref{eqn:c-star} we get that
\begin{align*}
C^*_n &= 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2\\
&\leq 2B^5 + 4n\cdot B^4 + 3n\cdot B^4 + 2n\cdot B^4 + 4n\cdot B^4 +4n\cdot B^4\\
&\leq 2B^5 + 17n\cdot B^4\\
&< 3B^5,
\end{align*}
since $B=18n^2 > 17n$.
\end{proof}
Thus we know that $E'$ contains exactly one source edge and exactly
one sink edge. Let the source edge be incident to $p_{i'}$ and the
sink edge be incident to $q_{j'}$.
\begin{claim}
\label{claim:4-blue-dotted} $E'$ contains exactly four terminal edges.
\end{claim}
\begin{proof}
Since $E'$ satisfies the connectedness property, it must contain at
least one incoming and one outgoing edge for both $p$ and
$q$. Therefore, we need at least four terminal edges. Suppose we
have at least five terminal edges in $E'$. We already know that the
source and sink edges contribute at least $2B^{5}$ to the weight of
$E'$, hence the weight of $E'$ is at least $2B^{5}+ 5B^{4}$. However, from
Equation~\ref{eqn:c-star}, we get that
\begin{align*}
C^*_n &= 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2\\
&\leq 2B^5 + 4B^4 + 3n\cdot B^3 + 2n\cdot B^3 + 4n\cdot B^3 + 4n\cdot B^3\\
&= 2B^5 + 4B^4 + 13n\cdot B^3\\
&< 2B^5 + 5B^4,
\end{align*}
since $B=18n^2 > 13n$.
\end{proof}
Hence we know that $E'$ contains exactly four terminal edges.
\begin{claim}
\label{claim:one-red-inrow} $E'$ contains exactly $2n+1$ inrow up edges, one from each column $C_{2i}$ for $0\leq i\leq 2n$.
\end{claim}
\begin{proof}
Observe that for each $1\leq j\leq 2n-1$, the inrow up edges in column $C_{2j}$ form a cut between vertices from columns $C_{2j-1}$
and $C_{2j+1}$. Since $E'$ must have a $p_{i'} \leadsto p$ path, we need to use at least one inrow up edge from each of the
columns $C_0, C_2, \ldots, C_{2i'-2}$. Since $E'$ must have a $p_{i'} \leadsto q$ path, we need to use at least one inrow up
edge from each of the columns $C_{2i'}, C_{2i'+2}, \ldots, C_{4n}$. Hence $E'$ has at least $2n+1$ inrow up edges, as we
require at least one inrow up edge from each of the columns $C_{0}, C_{2}, \ldots, C_{4n}$.
Suppose $E'$ contains at least $2n+2$ inrow up edges. We already know that $E'$ has a contribution of $2B^{5}+4B^{4}$ from source,
sink, and terminal edges. Hence the weight of $E'$ is at least $2B^{5}+ 4B^{4}+ (2n+2)B^3$. However, from Equation~\ref{eqn:c-star}, we get that
\begin{align*}
C^*_n &= 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2\\
&\leq 2B^5 + 4B^4 + (2n+1)B^3 + 2n\cdot B^2 + 4n\cdot B^2 + 4n\cdot B^2\\
&= 2B^5 + 4B^4 + (2n+1)B^3 + 10n\cdot B^2\\
&< 2B^5 + 4B^4 + (2n+2)B^3,
\end{align*}
since $B=18n^2 > 10n$.
\end{proof}
Therefore, we know that $E'$ contains exactly one inrow up edge per column $C_{2i}$ for every $0\leq i\leq 2n$. By
Claim~\ref{claim:4-blue-dotted}, we know that exactly two terminal edges incident to $p$ are selected in $E'$. Observe that the terminal edge leaving $p$ must be immediately followed by an inrow up edge, and similarly, the terminal edge entering $p$ must be immediately preceded by an inrow up edge. Since we select exactly
one inrow up edge from column $C_0$, it follows that the two terminal edges in $E'$ incident to $p$ must be incident to the rows $R_{2\ell+1}$
and $R_{2\ell}$ respectively, for some $\ell \in [n]$. Similarly, the two terminal edges in $E'$ incident to $q$ must be
incident to the rows $R_{2\ell'+1}$ and $R_{2\ell'}$ for some $\ell' \in [n]$. We summarize this in the following observation:
\begin{observation}
\label{obs:same-for-s-and-t} There exist integers $\ell, \ell' \in [n]$ such that
\begin{itemize}
\item the only two terminal edges in $E'$ incident to $p$ are $(p, v_{2\ell+1}^{0})$ and $(v_{2\ell}^{0}, p)$, and
\item the only two terminal edges in $E'$ incident to $q$ are $(q, v_{2\ell'+1}^{4n})$ and $(v_{2\ell'}^{4n}, q)$.
\end{itemize}
\end{observation}
\begin{definition}
For $i\in [n+1]$, we call the $4n$ interrow edges which connect vertices from row $R_{2i-1}$ to vertices from $R_{2i}$
Type$(i)$ interrow edges. We can divide the Type$(i)$ interrow edges into $2n$ ``pairs'' of adjacent interrow edges, given by $(v_{2i-1}^{2j-1}, w_{i}^{j})$ and $(w_{i}^{j}, v_{2i}^{2j-1})$ for each $1\leq j\leq 2n$.
\end{definition}
Note that there are a total of $n+1$ types of interrow edges.
\begin{claim}
\label{claim:one-interrow} $E'$ contains a pair of interrow edges of Type$(r)$ for each $r\in [n+1]$. Moreover, these two edges are the only interrow edges of Type$(r)$ chosen in $E'$.
\end{claim}
\begin{proof}
First we show that $E'$ contains at least one pair of interrow edges of each type. Observation~\ref{obs:same-for-s-and-t} implies that we cannot avoid using interrow edges of any type by, for example, going into $p$ via an edge from some $R_{2i}$ and then exiting $p$ via an edge to some $R_{2j+1}$ for any $j> i$ (similarly for $q$). By the
connectedness property, the set $E'$ contains a $p_{i'}\leadsto p$ path $P_1$. By Observation~\ref{obs:same-for-s-and-t}, the only edge entering $p$ is $(v_{2\ell}^{0}, p)$. Hence $E'$ must contain at least one pair of interrow edges of Type$(r)$ for $1\leq r\leq \ell$, since the only way to travel from row $R_{2r-1}$ to $R_{2r}$ (for each $r\in [\ell]$) is by using a pair of interrow edges of Type$(r)$. Similarly, $E'$ contains a $p\leadsto q_{j'}$ path, and the only outgoing edge from $p$ is $(p, v_{2\ell+1}^{0})$. Hence
$E'$ must contain at least one pair of interrow edges of Type$(r)$ for each $\ell+1 \leq r\leq n+1$ since the only way to travel from row $R_{2r-1}$ to $R_{2r}$ is by using a pair of interrow edges of Type$(r)$. Therefore, the edge set $E'$ contains at least one pair of interrow edges of each Type$(r)$ for $1\leq r\leq n+1$.
Next we show that $E'$ contains exactly two interrow edges of Type$(r)$ for each $r\in [n+1]$. Suppose $E'$ contains at least three interrow edges of some Type$(r)$, $r\in [n+1]$. Since the weight of each interrow edge is $B^2/2$, this implies that $E'$ gets a weight of at least $(n+1 +\frac{1}{2})\cdot B^2$ from the interrow edges. We have already seen that $E'$ has a contribution of $2B^{5}+4B^{4}+(2n+1)B^3$ from source, sink, terminal, and inrow up
edges. Hence the weight of $E'$ is at least $2B^{5}+ 4B^{4}+ (2n+1)B^3+ (n+1 + \frac{1}{2})\cdot B^2$. However, from Equation~\ref{eqn:c-star}, we get that
\begin{align*}
C^*_n &= 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2\\
&\leq 2B^5 + 4B^4 + (2n+1)B^3 + (n+1)B^2 + 4n\cdot B + 4n\cdot B\\
&= 2B^5 + 4B^4 + (2n+1)B^3 + (n+1)B^2 + 8n\cdot B\\
&< 2B^5 + 4B^4 + (2n+1)B^3 + (n+1 + \frac{1}{2})B^2,
\end{align*}
since $\frac{B}{2}=9n^2 > 8n$. Hence, $E'$ contains exactly two interrow edges of Type$(r)$ for each $r\in [n+1]$.
\end{proof}
\begin{claim}
For each $r\in [n+1]$, let the unique pair of interrow edges in $E'$ (guaranteed by Claim~\ref{claim:one-interrow}) of Type$(r)$ belong
to column $C_{2\ell_{r}-1}$. If the unique source and sink edges in $E'$ (guaranteed by Claim~\ref{claim:one-in-out}) are incident to $p_{i'}$ and
$q_{j'}$, respectively, then we have $i'\leq \ell_1 \leq \ell_2 \leq \ldots \leq \ell_{n+1} \leq n+j'.$
\label{claim:interrow-monotone}
\end{claim}
\begin{proof}
Observation~\ref{obs:same-for-s-and-t} implies the only way to get from row $R_{2i-1}$ to $R_{2i}$ is to use a pair of interrow edges
of Type$(i)$. By Claim~\ref{claim:one-interrow}, we use exactly one pair of interrow edges of each type. Recall that there is a
walk $P=p_{i'}\leadsto p\leadsto q_{j'}$ in $E'$, and this walk must use each of these interrow edges.
First we show that $\ell_1 \geq i'$. Suppose $\ell_1 <i'\leq n$. Since we use the source edge incident to $p_{i'}$, we must reach
vertex $v_{0}^{2i'-1}$. Since $i'>\ell_1$, to use the pair of interrow edges to travel from $v_{1}^{2\ell_{1}-1}$ to $v_{2}^{2\ell_{1}-1}$, the walk $P$ must contain a
$v_{0}^{2i'-1}\leadsto v_{1}^{2\ell_{1}-1}$ subwalk $P'$. By the construction of the connector gadget this subwalk $P'$ must
use the inrow up edge $(v_{1}^{2i'-2}, v_{0}^{2i'-2})$. Now the walk $P$ has to reach column $C_{2n+2j'-1}$ from column
$C_{2\ell_{1}-1}$, and so it must use another inrow up edge from column $C_{2i'-2}$ (between rows $R_{2i}$ and $R_{2i+1}$ for some $i\geq 1$), which contradicts
Claim~\ref{claim:one-red-inrow}.
Now we prove $\ell_{n+1}\leq n+j'$. Suppose to the contrary that $\ell_{n+1}>n+j'$. Then by reasoning similar to the above, one
can show that at least two inrow up edges from column $C_{2n+2j'}$ are used, which contradicts Claim~\ref{claim:one-red-inrow}.
Finally suppose there exists $r\in [n]$ such that $\ell_{r} > \ell_{r+1}$. We consider the following three cases:
\begin{itemize}
\item \underline{$\ell_{r+1} < \ell_{r}\leq n$}: Then by using the fact that there is a $p_{i'}\leadsto q_{j'}$ walk in $E'$, we
get that at least two inrow up edges are used from column $C_{2\ell_{r}-2}$, which contradicts
Claim~\ref{claim:one-red-inrow}.
\item \underline{$n< \ell_{r} \leq n+j'$}: Then we need to use at least two inrow up edges from column $C_{2\ell_{r}-2}$, which
contradicts Claim~\ref{claim:one-red-inrow}.
\item \underline{$\ell_{r} > n+j'$}: Then we need to use at least two inrow up edges from column $C_{2n+2j'}$, which
contradicts Claim~\ref{claim:one-red-inrow}.
\end{itemize}
\end{proof}
\begin{claim}
$E'$ contains at most two shortcut edges. \label{claim:at-most-2-shortcuts}
\end{claim}
\begin{proof}
For the proof we will use Claim~\ref{claim:interrow-monotone}. We show that $E'$ can use at most one $e$-shortcut; the proof for the $f$-shortcuts is similar.
Suppose $E'$ uses two $e$-shortcuts, say $e_{x}$ and $e_{y}$ with $x> y$. Note that it makes sense to include a shortcut into $E'$ only if
we also use the interrow edge that continues it. Hence $\ell_{n-x+1} = x$ and $\ell_{n-y+1} = y$. Since $x>y$, we have $n-x+1 < n-y+1$, and so Claim~\ref{claim:interrow-monotone} gives $x=\ell_{n-x+1}\leq \ell_{n-y+1} = y$, which contradicts $x>y$.
\end{proof}
\begin{claim}
\label{claim:exactly-4n-2-inrow-right} $E'$ contains exactly $4n-2$ inrow right edges.
\end{claim}
\begin{proof}
Since $E'$ contains a $p\leadsto q_{j'}$ path, it follows that $E'$ has a path connecting some vertex from the column $C_{i}$ to some vertex from column $C_{i+1}$ for each $0\leq
i\leq 2n+2j'-2$. Since $E'$ contains a $p_{i'}\leadsto q$ path, it follows that $E'$ has a path connecting some vertex from the column $C_{j}$ to some vertex from the column
$C_{j+1}$ for each $2i'-1\leq j\leq 4n-1$.
Since $2n+2j'-2\geq 2n$ and $2i'-1\leq 2n$, it follows that for each $0\leq i\leq 4n-1$ the solution $E'$ must contain a path connecting some vertex from column $C_i$ to some vertex from column $C_{i+1}$. Each such path is either a path of one edge, which must be an inrow right edge, or a path of two edges consisting of a shortcut and an interrow edge.
But Claim~\ref{claim:at-most-2-shortcuts} implies $E'$ contains at most two shortcuts. Therefore, $E'$ contains at least $4n-2$ inrow right edges. Suppose $E'$ contains at
least $4n-1$ inrow right edges. We have already seen the contribution of source, sink, terminal, inrow up and interrow edges is
$2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2}$. If $E'$ contains at least $4n-1$ inrow right edges, then the weight of
$E'$ is at least $2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-1)B$.
However, from Equation~\ref{eqn:c-star}, we get that
\begin{align*}
C^*_n &= 2B^{5} + 4B^{4} + (2n+1)B^{3} + (n+1)B^{2} + (4n-2)B + (n+1)^2\\
&\leq 2B^5 + 4B^4 + (2n+1)B^3 + (n+1)B^2 + (4n-2)B + 4n^2\\
&< 2B^5 + 4B^4 + (2n+1)B^3 + (n+1)B^2 + (4n-1)B,
\end{align*}
since $B=18n^2 > 4n^2$.
\end{proof}
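The budget comparisons in Claims~\ref{claim:one-in-out}, \ref{claim:4-blue-dotted}, \ref{claim:one-red-inrow}, \ref{claim:one-interrow} and~\ref{claim:exactly-4n-2-inrow-right} all follow the same pattern. As an additional illustrative sanity check (ours), the following sketch verifies the five strict inequalities numerically for small $n$:
\begin{verbatim}
# Illustrative numeric check of the five budget bounds used above.
def connector_budget(n):         # Equation (c-star) with B = 18n^2
    B = 18 * n * n
    return (2 * B**5 + 4 * B**4 + (2 * n + 1) * B**3
            + (n + 1) * B**2 + (4 * n - 2) * B + (n + 1)**2)

for n in range(1, 200):
    B = 18 * n * n
    C = connector_budget(n)
    assert C < 3 * B**5                                    # 2 source edges
    assert C < 2 * B**5 + 5 * B**4                         # 5 terminal edges
    assert C < 2 * B**5 + 4 * B**4 + (2*n + 2) * B**3      # 2n+2 inrow up
    assert C < (2 * B**5 + 4 * B**4 + (2*n + 1) * B**3
                + (n + 1) * B**2 + B**2 // 2)              # 3 edges of Type(r)
    assert C < (2 * B**5 + 4 * B**4 + (2*n + 1) * B**3
                + (n + 1) * B**2 + (4*n - 1) * B)          # 4n-1 inrow right
\end{verbatim}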
From Claim~\ref{claim:at-most-2-shortcuts} and the proof of Claim~\ref{claim:exactly-4n-2-inrow-right}, we can make the
following observation:
\begin{observation}
\label{claim:exactly-2-shortcuts} $E'$ contains exactly two shortcuts.
\end{observation}
Let the shortcuts used be $e_{i''}$ and $f_{j''}$ (by Claim~\ref{claim:at-most-2-shortcuts}, $E'$ contains at most one $e$-shortcut and at most one $f$-shortcut, so it contains exactly one of each). Recall that Claim~\ref{claim:one-in-out} implies that exactly one edge incident to $P$ and
exactly one edge incident to $Q$ is used in $E'$. Therefore, if we show that $i'=j'$, then it follows that $E'$ represents $\beta=i'=j'$.
\begin{claim}
The following inequalities hold:
\begin{itemize}
\item $i''\geq i'$ and $j''\leq j'$
\item $i''\geq j''$
\end{itemize}
\label{claim:properties-of-shortcut-indices}
\end{claim}
\begin{proof}
To use the shortcut $e_{i''}$, we need to use the lower half of a pair of interrow edges from column $C_{2i''-1}$.
Claim~\ref{claim:interrow-monotone} implies $i'\leq \ell_1$ and the pairs of interrow edges used are monotone from left to right. Hence
$i''\geq i'$. Similarly, to use the shortcut $f_{j''}$, we need to use the upper half of an interrow edge from column
$C_{2n+2j''-1}$. Claim~\ref{claim:interrow-monotone} implies $n+j'\geq \ell_{n+1}\geq n+j''$. Hence $j''\leq j'$.
Since we use the shortcut $e_{i''}$ it follows that $\ell_{n-i''+1} = i''$. Similarly, since we use the shortcut $f_{j''}$ it follows that $\ell_{n-j''+2}=n+j''$. As $1\leq i'',j''\leq n$ it follows that $n+j''>i''$. By monotonicity of the $\ell$-sequence shown in Claim~\ref{claim:interrow-monotone}, we have $n-j''+2> n-i''+1$, i.e., $i''\geq j''$.
\end{proof}
\begin{theorem}
\label{thm:connector-represents} The weight of $E'$ is exactly $C^*_n$, and $E'$ represents some integer $\beta\in [n]$.
\end{theorem}
\begin{proof}
As argued above it is enough to show that $i'=j'$. We have already seen $E'$ has the following contribution to its weight:
\begin{itemize}
\item The source edge incident to $p_{i'}$ has weight $B^{5}+(n-i'+1)$ by Claim~\ref{claim:one-in-out}.
\item The sink edge incident to $q_{j'}$ has weight $B^{5}+ j'$ by Claim~\ref{claim:one-in-out}.
\item The terminal edges incur weight $4B^4$ by Claim~\ref{claim:4-blue-dotted}.
\item The inrow up edges incur weight $(2n+1)B^{3}$ by Claim~\ref{claim:one-red-inrow}.
\item The interrow edges incur weight $(n+1)B^2$ by Claim~\ref{claim:one-interrow}.
\item The inrow right edges incur weight $(4n-2)B$ by Claim~\ref{claim:exactly-4n-2-inrow-right}.
\item The shortcut $e_{i''}$ incurs weight $n\cdot i''$ and $f_{j''}$ incurs weight $n(n-j''+1)$ by
Observation~\ref{claim:exactly-2-shortcuts}.
\end{itemize}
Thus we already have a weight of
\begin{equation}
C^{**}=(2B^{5}+(n-i'+j'+1))+ 4B^4 + (2n+1)B^{3} + (n+1)B^2 + (4n-2)B + n(n-j''+i''+1)
\label{eqn:c-star-star}
\end{equation}
Observe that adding any edge of non-zero weight to $E'$ (other than the ones mentioned above) increases the weight $C^{**}$ by
at least $B$, since Claim~\ref{claim:at-most-2-shortcuts} does not allow us to use any more shortcuts.
Equation~\ref{eqn:c-star} and Equation~\ref{eqn:c-star-star} imply $C^{**}+B - C^*_n = B - (i'-j')-n(j''-i'') \geq 18n^2 - (n-1) - n(n-1) > 0$, since $i', i'', j', j''\in [n]$. This implies that the weight of $E'$ is exactly $C^{**}$. We now show that in fact $C^{**}-C^*_n \geq 0$, which will imply that $C^{**}=C^*_n$. From Equation~\ref{eqn:c-star} and Equation~\ref{eqn:c-star-star}, we have $C^{**} - C^*_n = (j'-i')+n(i''-j'')$.
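For completeness, this difference can be verified by direct expansion of Equation~\ref{eqn:c-star} and Equation~\ref{eqn:c-star-star}:
\begin{align*}
C^{**} - C^*_n &= \big(n-i'+j'+1\big) + n\big(n-j''+i''+1\big) - (n+1)^2 = (j'-i') + n(i''-j'').
\end{align*}
We now show that this quantity is non-negative. Recall that from Claim~\ref{claim:properties-of-shortcut-indices}, we have $i''\geq j''$.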
\begin{itemize}
\item If $i''>j''$ then $n(i''-j'')\geq n$. Since $j',i'\in [n]$, we have $j'-i'\geq 1-n$. Therefore, $(j'-i')+n(i''-j'')\geq n+(1-n) = 1$.
\item If $i''=j''$ then by Claim~\ref{claim:properties-of-shortcut-indices} we have $i'\leq i'' = j'' \leq j'$. Hence $(j'-i')\geq 0$ and so $(j'-i')+n(i''-j'')\geq 0$.
\end{itemize}
Therefore $C^{**}= C^{*}_n$, i.e., $E'$ has weight exactly $C^{*}_n$. However $C^*_n=C^{**}$ implies
\begin{equation}
j'-i' + n(i''-j'') = 0
\label{eqn:equality}
\end{equation}
Since $i', j'\in [n]$ we have $n-1\geq j'-i' \geq 1-n$. If $i''\neq j''$ then $n(i''-j'')\geq n$ and hence $j'-i' + n(i''-j'') \geq (1-n)+n = 1$, a contradiction. Hence, we have $j''=i''$, and therefore Equation~\ref{eqn:equality} implies $j'=i'$, i.e., $E'$ represents $i'=j'\in [n]$.
\end{proof}
\section{Proof of Lemma~\ref{lem:main-gadget}: constructing the main gadget}
\label{main:proof}
\begin{figure}[H]
\centering
{\footnotesize \def\svgwidth{0.7\linewidth}\includesvg{main-new-copy}}
\caption{The main gadget (for $n=4$) representing the set $\{(2,2),(2,3),(3,2)\}$. The highlighted edges represent the pair $(2,3)$. \label{fig:main-gadget}}
\end{figure}
We prove Lemma~\ref{lem:main-gadget} in this section, by constructing a main gadget satisfying the specifications of Section~\ref{sec:main:exists}.
Recall that, as discussed at the start of Section~\ref{sec:construction-of-g^*-scss-planar}, we may assume that $1< x,y< n$ holds for every $(x,y)\in S_{i,j}$.
\subsection{Different types of edges in main gadget}
\label{subsec:description-of-edges-in-main-gadget}
Before proving Lemma~\ref{lem:main-gadget}, we first describe the
construction of the main gadget in more detail (see
Figure~\ref{fig:main-gadget}). The main gadget has $n^2$ rows denoted
by $R_1, R_2, \ldots, R_{n^2}$ and $2n+1$ columns denoted by $C_0,
C_1, \ldots, C_{2n+1}$. Let us denote the vertex at intersection of
row $R_i$ and column $C_j$ by $v_{i}^j$. We now describe the various
different kinds of edges in the main gadget; an illustrative sketch of the grid structure follows the list.
\begin{enumerate}
\item \textbf{Left Source Edges}: For every $i\in [n]$, the edge $(\ell_i, \ell'_{i})$ is a left source edge.
\item \textbf{Right Sink Edges}: For every $i\in [n]$, the edge $(r'_i, r_{i})$ is a right sink edge.
\item \textbf{Top Source Edges}: For every $i\in [n]$, the edge $(t_i, v_{1}^{i})$ is a top source edge.
\item \textbf{Bottom Sink Edges}: For every $i\in [n]$, the edge $(v_{n^2}^{n+i}, b_{i})$ is a bottom sink edge.
\item \textbf{Source Internal Edges}: This is the set of $n^2$ edges of the form $(\ell'_{i}, v_{j}^{0})$ for $i\in [n]$ and $n(i-1)+1
\leq j\leq n\cdot i$. We number the source internal edges from top to bottom, i.e., the edge $(\ell'_{i}, v_{j}^{0})$ is called the
$j^{th}$ source internal edge, where $i\in [n]$ and $n(i-1)+1 \leq j\leq n\cdot i$.
\item \textbf{Sink Internal Edges}: This is the set of $n^2$ edges of the form $(v_{j}^{2n+1}, r'_i)$ for $i\in [n]$ and $n(i-1)+1
\leq j\leq n\cdot i$. We number the sink internal edges from top to bottom, i.e., the edge $(v_{j}^{2n+1}, r'_i)$ is called the
$j^{th}$ sink internal edge, where $i\in [n]$ and $n(i-1)+1 \leq j\leq n\cdot i$.
\item \textbf{Bridge Edges}: This is the set of $n^2$ edges of the form $(v_{i}^{n}, v_{i}^{n+1})$ for $1\leq i\leq n^2$. We number
the bridge edges from top to bottom, i.e., the edge $(v_{i}^{n}, v_{i}^{n+1})$ is called the $i^{th}$ bridge edge. These edges are shown in red color in Figure~\ref{fig:main-gadget}.
\item \textbf{Inrow Right Edges}: For each $i\in [n^2]$ we call the $\rightarrow$ edges (except the bridge edge $(v_{i}^{n},
v_{i}^{n+1})$) connecting vertices of row $R_{i}$ as inrow right edges. Formally, the set of inrow right edges of row $R_{i}$ are given by $\{(v_{i}^{j}, v_{i}^{j+1})\ :\ 0\leq j\leq n-1 \} \bigcup \{(v_{i}^{j}, v_{i}^{j+1})\ :\ n+1\leq j\leq 2n\}$
\item \textbf{Interrow Down Edges}: For each $i\in [n^2 - 1]$ we call the $2n$ $\downarrow$ edges connecting vertices of row $R_{i}$
to $R_{i+1}$ as interrow down edges. The $2n$ interrow edges between row $R_{i}$ and $R_{i+1}$ are $(v_{i}^{j},
v_{i+1}^{j})$ for each $1\leq j\leq 2n$.
\item \textbf{Shortcut Edges}: There are $2|S|$ shortcut edges, namely $e_{x,y}$ and $f_{x,y}$ for each $(x,y)\in S$.
The shortcut edges for a pair $(x,y)\in S$ (where $1 < x,y < n$) are defined the following way:
\begin{itemize}
\item Introduce a new vertex $g_{x}^y$ at the middle of the edge $(v_{n(x-1)+y-1}^{y}, v_{n(x-1)+y}^{y})$ to
create two new edges $(v_{n(x-1)+y-1}^{y}, g_{x}^y)$ and $(g_{x}^y, v_{n(x-1)+y}^{y})$. Then the edge
$e_{x,y}$ is $(v_{n(x-1)+y}^{y-1}, g_{x}^y)$.
\item Introduce a new vertex $h_{x}^y$ at the middle of the edge $(v_{n(x-1)+y}^{n+y}, v_{n(x-1)+y+1}^{n+y})$
to create two new edges $(v_{n(x-1)+y}^{n+y}, h_{x}^y)$ and $(h_{x}^y, v_{n(x-1)+y+1}^{n+y})$. Then the
edge $f_{x,y}$ is $(h_{x}^{y}, v_{n(x-1)+y}^{n+y+1})$.
\end{itemize}
\end{enumerate}
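For concreteness, the following minimal Python sketch (our own illustrative encoding, not part of the original construction) enumerates the grid edges described above. Source, sink, terminal, source internal and sink internal edges, the edge weights, and the subdivision of interrow edges by the vertices $g_x^y$ and $h_x^y$ are omitted for brevity.
\begin{verbatim}
# Illustrative sketch: enumerate the grid edges of the main gadget
# for given n and S (S is a set of pairs (x, y) with 1 < x, y < n).
def main_gadget_edges(n, S):
    edges = []
    for i in range(1, n * n + 1):          # rows R_1, ..., R_{n^2}
        for j in range(0, 2 * n + 1):      # columns C_0, ..., C_{2n+1}
            kind = "bridge" if j == n else "inrow right"
            edges.append((("v", i, j), ("v", i, j + 1), kind))
    for i in range(1, n * n):              # interrow down edges
        for j in range(1, 2 * n + 1):
            edges.append((("v", i, j), ("v", i + 1, j), "interrow down"))
    for (x, y) in S:                       # shortcut edges e_{x,y}, f_{x,y}
        z = n * (x - 1) + y
        edges.append((("v", z, y - 1), ("g", x, y), "e-shortcut"))
        edges.append((("h", x, y), ("v", z, n + y + 1), "f-shortcut"))
    return edges
\end{verbatim}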
\subsection{Assigning weights in the main gadget}
Define $B=11n^2$. We assign weights to the edges as follows:
\begin{enumerate}
\item Each left source edge has weight $B^4$.
\item Each right sink edge has weight $B^4$.
\item For every $1\leq i\leq n$, the $i^{th}$ top source edge $(t_i, v_{1}^{i})$ has weight $B^4$.
\item For every $1\leq i\leq n$, the $i^{th}$ bottom sink edge $(v_{n^2}^{n+i}, b_i)$ has weight $B^4$.
\item For each $i\in [n^2]$, the $i^{th}$ bridge edge $(v_{i}^{n}, v_{i}^{n+1})$ has weight $B^3$.
\item For each $i\in [n^2]$, the $i^{th}$ source internal edge has weight $B^{2}(n^2 - i)$.
\item For each $j\in [n^2]$, the $j^{th}$ sink internal edge has weight $B^{2}\cdot j$.
\item Each inrow right edge has weight $3 B$.
\item For each $(x,y)\in S$, both the shortcut edges $e_{x,y}$ and $f_{x,y}$ have weight $B$ each.
\item Each interrow down edge that does not have a shortcut incident to it has weight 2. If an interrow edge is split
into two edges by the shortcut incident to it, then we assign a weight 1 to each of the two parts.
\end{enumerate}
Now we define the quantity $M^*_n$ stated in Lemma~\ref{lem:main-gadget}:
\begin{equation}
M^*_n = 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1).
\label{eqn:m-star}
\end{equation}
Next we are ready to prove the statements of Lemma~\ref{lem:main-gadget}.
\subsection{For every $(x,y)\in S$, there is a solution $E_{x,y}$ of weight $M^*_n$ that represents $(x,y)$}
\label{subsection:main-first-statement}
For $(x,y)\in S\subseteq [n]\times [n]$ define $z=n(x-1)+y$. Let $E_{x,y}$ be the union of the following sets of edges, whose weights are summed below:
\begin{itemize}
\item The $x^{th}$ left source edge and $x^{th}$ right sink edge. This incurs a weight of $2B^4$.
\item The $y^{th}$ top source edge and the $y^{th}$ bottom sink edge.
This incurs a weight of $2B^4$.
\item The $z^{th}$ bridge edge. This incurs a weight of $B^3$.
\item The $z^{th}$ source internal edge and $z^{th}$ sink internal edge. This incurs a weight of $B^{2}n^2$.
\item All inrow right edges from row $R_{z}$ except $(v_{z}^{y-1}, v_{z}^{y})$ and $(v_{z}^{n+y}, v_{z}^{n+y+1})$. This
incurs a weight of $3B\cdot (2n-2)$.
\item The shortcut edges $e_{x,y}$ and $f_{x,y}$. This incurs a weight of $2B$.
\item The vertically downward path $v_{1}^{y}\rightarrow v_{2}^{y}\rightarrow \ldots \rightarrow v_{z}^{y}$ formed by interrow down edges in column $C_y$. This incurs a
weight of $2(z-1)$.
\item The vertically downward path $v_{z}^{n+y}\rightarrow v_{z+1}^{n+y}\rightarrow \ldots \rightarrow v_{n^2}^{n+y}$ formed by interrow down edges in column $C_{n+y}$.
This incurs a weight of $2(n^2 - z)$.
\end{itemize}
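Summing the contributions listed above (and recalling that $z=n(x-1)+y$), the weight of $E_{x,y}$ is
\begin{align*}
2B^4 + 2B^4 + B^3 + B^2 n^2 + 3B(2n-2) + 2B + 2(z-1) + 2(n^2-z) &= 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1)\\
&= M^*_n.
\end{align*}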
This confirms, via Equation~\ref{eqn:m-star}, that the total weight of $E_{x,y}$ is exactly $M^*_n$. Note that we did not include two inrow right edges, $(v_{z}^{y-1}, v_{z}^{y})$ and $(v_{z}^{n+y}, v_{z}^{n+y+1})$, from row $R_{z}$ in $E_{x,y}$. However, we can mimic the function of both these inrow right edges using shortcut edges and interrow down edges in $E_{x,y}$ as follows:
\begin{itemize}
\item We can still travel from $v_{z}^{y-1}$ to $v_{z}^{y}$ in $E_{x,y}$ as follows: take the path $(v_{z}^{y-1}\rightarrow g_{x}^y \rightarrow v_{z}^{y})$.
\item We can still travel from $v_{z}^{n+y}$ to $v_{z}^{n+y+1}$ in $E_{x,y}$ via the path $(v_{z}^{n+y}\rightarrow h_{x}^{y} \rightarrow
v_{z}^{n+y+1})$.
\end{itemize}
The following observation follows from the previous paragraph:
\begin{observation}
\label{obs:reachability}
In $E_{x,y}$ we can reach $v_{z}^{j}$ from $v_{z}^{i}$ for any $2n+1\geq j\geq i\geq 0$.
\end{observation}
It remains to show that $E_{x,y}$ represents $(x,y)\in S$. It is easy to see that the first four conditions of Definition~\ref{defn:represents-main} are satisfied since the definition of $E_{x,y}$ itself gives the following:
\begin{itemize}
\item In $E_{x,y}$ the only outgoing edge from $L$ is the one incident to $\ell_x$
\item In $E_{x,y}$ the only incoming edge to $R$ is the one incident to $r_x$
\item In $E_{x,y}$ the only outgoing edge from $T$ is the one incident to $t_y$
\item In $E_{x,y}$ the only incoming edge to $B$ is the one incident to $b_y$
\end{itemize}
We now show that the last condition of Definition~\ref{defn:represents-main} is also satisfied by $E_{x,y}$:
\begin{enumerate}
\item There is a $\ell_x \leadsto r_x$ path in $E_{x,y}$ obtained by taking the edges in the following order:
\begin{itemize}
\item the left source edge $(\ell_x, \ell'_x)$,
\item the source internal edge $(\ell'_x, v_{z}^{0})$,
\item the horizontal path $v_{z}^0 \rightarrow v_{z}^1 \rightarrow \ldots v_{z}^n$ given by Observation~\ref{obs:reachability},
\item the bridge edge $(v_{z}^n, v_{z}^{n+1})$,
\item the horizontal path $v_{z}^{n+1} \rightarrow v_{z}^{n+2} \rightarrow \ldots v_{z}^{2n+1}$ given by Observation~\ref{obs:reachability},
\item the sink internal edge $(v_{z}^{2n+1}, r'_x)$, and
\item the right sink edge $(r'_x, r_x)$.
\end{itemize}
\item There is a $t_y \leadsto b_y$ path in $E_{x,y}$ obtained by taking the edges in the following order:
\begin{itemize}
\item the top source edge $(t_y, v_{1}^{y})$,
\item the downward path $v_{1}^y \rightarrow v_{2}^y \rightarrow \ldots v_{z}^y$ given by interrow down edges
in column $C_y$,
\item the horizontal path $v_{z}^y \rightarrow v_{z}^{y+1} \rightarrow \ldots v_{z}^{n}$ given by Observation~\ref{obs:reachability},
\item the bridge edge $(v_{z}^n, v_{z}^{n+1})$,
\item the horizontal path $v_{z}^{n+1} \rightarrow v_{z}^{n+2} \rightarrow \ldots v_{z}^{n+y}$ given by Observation~\ref{obs:reachability},
\item the downward path $v_{z}^{n+y} \rightarrow v_{z+1}^{n+y} \rightarrow \ldots v_{n^2}^{n+y}$ given by
interrow down edges in column $C_{n+y}$, and
\item the bottom sink edge $(v_{n^2}^{n+y}, b_y)$.
\end{itemize}
\end{enumerate}
Therefore, $E_{x,y}$ has weight $M^*_n$ and represents $(x,y)$.
\subsection{$E'$ satisfies the connectedness property and has weight at most $M^*_n \Rightarrow E'$ represents some $(\alpha,\beta)\in S$ and has weight exactly $M^*_n$}
\label{subsection:main-second-statement}
In this section we show that if a set of edges $E'$ satisfies the connectedness property and has weight at most $M^*_n$, then it
represents some $(\alpha,\beta)\in S$. We do this via the following series of claims and observations.
\begin{claim}
\label{claim:one-in-out-main} $E'$ contains
\begin{itemize}
\item exactly one left source edge,
\item exactly one right sink edge,
\item exactly one top source edge, and
\item exactly one bottom sink edge.
\end{itemize}
\end{claim}
\begin{proof}
Since $E'$ satisfies the connectedness property, it must contain at least one edge of each of the above types.
Without loss of generality, suppose we have at least two left source edges in $E'$. Then the weight of the edge set $E'$ is
greater than or equal to the sum of weights of these two left source edges and the weight of a right sink edge, the weight
of a top source edge, and the weight of a bottom sink edge. Thus if $E'$ contains at least two left source edges, then its
weight is at least $5B^{4}$.
However, from Equation~\ref{eqn:m-star}, we get that
\begin{align*}
M^*_n &= 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1)\\
&\leq 4B^4 + n\cdot B^3 + n\cdot B^3 + 6n\cdot B^3 + 2n\cdot B^3\\
&= 4B^4 + 10n\cdot B^3 \\
&< 5B^4,
\end{align*}
since $B=11n^2 > 10n$.
\end{proof}
Therefore, we can set up the following notation:
\begin{itemize}
\item Let $i_{L}\in [n]$ be the unique index such that the left source edge in $E'$ is incident to $\ell_{i_L}$.
\item Let $i_{R}\in [n]$ be the unique index such that the right sink edge in $E'$ is incident to $r_{i_R}$.
\item Let $i_{T}\in [n]$ be the unique index such that the top source edge in $E'$ is incident to $t_{i_T}$.
\item Let $i_{B}\in [n]$ be the unique index such that the bottom sink edge in $E'$ is incident to $b_{i_B}$.
\end{itemize}
\begin{claim}
\label{claim:exactly-1-bridge} The edge set $E'$ contains exactly one bridge edge.
\end{claim}
\begin{proof}
To satisfy the connectedness property, we need at least one bridge edge, since the bridge edges form a cut
between the top-distinguished vertices and the right-distinguished vertices as well as between the top-distinguished vertices
and the bottom-distinguished vertices. Suppose that the edge set $E'$ contains at least two bridge edges. This contributes a weight of
$2B^3$. We already have a contribution of $4B^4$ to the weight of $E'$ from Claim~\ref{claim:one-in-out-main}. Therefore, the weight of $E'$ is at least $4B^4 + 2B^3$.
However, from Equation~\ref{eqn:m-star}, we get that
\begin{align*}
M^*_n &= 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1)\\
&\leq 4B^4 + B^3 + B^{2}n^{2} + 6n\cdot B + 2n^2\\
&\leq 4B^4 + B^3 + B^{2}n^{2} + 6n^{2}B^2 + 2n^{2}B^2 \\
&= 4B^4 + B^3 + 9B^{2}n^{2}\\
&< 4B^4 + 2B^3,
\end{align*}
since $B=11n^2 > 9n^2$.
\end{proof}
Let the index of the unique bridge edge in $E'$ (guaranteed by Claim~\ref{claim:exactly-1-bridge}) be $\gamma\in [n^2]$. The
connectedness property implies that we need to select at least one source internal edge incident to $\ell'_{i_L}$ and
at least one sink internal edge incident to $r'_{i_R}$. Let the index of the source internal edge incident to $\ell'_{i_L}$ be
$j_L$ and the index of the sink internal edge incident to $r'_{i_R}$ be $j_R$.
\begin{claim}
$i_L = i_R$ and $j_L = j_R=\gamma$. \label{claim:il=jl-and-ir=jr}
\end{claim}
\begin{proof}
By the connectedness property, there is a path from $\ell_{i_L}$ to some vertex in $r_{i_R}\cup b_{i_B}$. The path starts with $\ell_{i_L}\rightarrow \ell'_{i_L}\rightarrow v_{j_L}^{0}$ and has to use the $\gamma^{th}$ bridge edge. By the construction of
the main gadget (all edges are either downwards or towards the right), this path can never reach any row $R_{\ell}$ for $\ell<
j_L$. Therefore, $\gamma\geq j_L$. By similar logic, we get $j_R\geq \gamma$. Therefore $j_{R}\geq j_{L}$.
If $j_R > j_L$, then the combined weight of the source internal edge and the sink internal edge is $B^{2}(n^2 - j_L + j_R) \geq
B^{2}(n^2 + 1)$. We already have a contribution of $4B^{4}+B^3$ to the weight of $E'$ from
Claim~\ref{claim:one-in-out-main} and Claim~\ref{claim:exactly-1-bridge}. Therefore, the weight of $E'$ is at least $4B^4 + B^3 + B^{2}(n^{2}+1)$. However, from Equation~\ref{eqn:m-star}, we get that
\begin{align*}
M^*_n &= 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1)\\
&\leq 4B^4 + B^3 + B^{2}n^2 + 6n\cdot B + 2n^2\\
&\leq 4B^4 + B^3 + B^{2}n^2 + 6n^{2}\cdot B + 2n^{2}\cdot B \\
&= 4B^4 + B^3 + B^{2}n^{2} + 8n^{2}\cdot B\\
&< 4B^4 + B^3 + B^{2}(n^{2}+1),
\end{align*}
since $B=11n^2 > 8n^2$.
Hence $j_R = j_L=\gamma$. Observing that $i_L = \lceil \frac{j_L}{n} \rceil$ and $i_R = \lceil \frac{j_R}{n} \rceil$, we
obtain $i_L=i_R$.
\end{proof}
Let $i_L=i_R = \alpha$ and write $\gamma = n(\alpha-1)+\beta$; note that $\beta\in [n]$, since the $\gamma^{th}$ source internal edge is incident to $\ell'_{\alpha}$ and hence $n(\alpha-1)+1\leq \gamma\leq n\alpha$. We will now show that $E'$ represents the pair $(\alpha,\beta)$. By Definition~\ref{defn:represents-main}, we need to prove the following four conditions:
\begin{enumerate}
\item The only left source edge in $E'$ is the one incident to $\ell_{\alpha}$ and the only right sink edge in $E'$ is the
one incident to $r_{\alpha}$.
\item The pair $(\alpha,\beta)$ is in $S$.
\item The only top source edge in $E'$ is the one incident to $t_{\beta}$ and the only bottom sink edge in $E'$ is the one
incident to $b_{\beta}$.
\item $E'$ has an $\ell_{\alpha}\leadsto r_{\alpha}$ path and a $t_{\beta}\leadsto b_{\beta}$ path.
\end{enumerate}
The first statement above follows from Claim~\ref{claim:one-in-out-main} and Claim~\ref{claim:il=jl-and-ir=jr}.
We now continue with the proof of the other three statements mentioned above:
\begin{claim}
\label{claim:all-from-row-gamma} $E'$ contains exactly $2n-2$ inrow right edges, all of them from row $R_{\gamma}$. As a
corollary, we get that there are two shortcuts incident to row $R_{\gamma}$, i.e., $(\alpha,\beta)\in S$ and also that $E'$ uses both these shortcuts.
\end{claim}
\begin{proof}
Note that by the construction of the main gadget, there can be at most two shortcut edges incident on the vertices of row $R_{\gamma}$.
Claim~\ref{claim:il=jl-and-ir=jr} implies $j_L = j_R=\gamma$. Hence the $\ell_{\alpha}\leadsto r_{\alpha}\cup b_{i_B}$ path in
$E'$ contains a $v_{\gamma}^{0}\leadsto v_{\gamma}^{n}$ subpath $P_1$. By the construction of the main gadget, we cannot
reach an upper row from a lower row. Hence this subpath $P_1$ must be the path $v_{\gamma}^{0}\rightarrow
v_{\gamma}^{1}\rightarrow \ldots \rightarrow v_{\gamma}^{n}$. This path $P_1$ can at most use the unique shortcut edge
incident to row $R_{\gamma}$ and column $C_{\beta}$ to replace an inrow right edge. Hence $P_1$ uses at least $n-1$ inrow
right edges, with equality only if $R_{\gamma}$ has a shortcut incident to it.
Similarly, the $\ell_{\alpha}\cup t_{i_T}\leadsto r_{\alpha}$ path in $E'$ contains a $v_{\gamma}^{n+1}\leadsto v_{\gamma}^{2n+1}$
subpath $P_2$. By the construction of the main gadget, we cannot reach an upper row from a lower row. Hence this subpath
$P_2$ must be the path $v_{\gamma}^{n+1}\rightarrow v_{\gamma}^{n+2}\rightarrow \ldots \rightarrow v_{\gamma}^{2n+1}$. This
path $P_2$ can at most use the unique shortcut edge incident to row $R_{\gamma}$ and column $C_{n+\beta}$ to replace an inrow
right edge. Hence $P_2$ uses at least $n-1$ inrow right edges, with equality only if $R_{\gamma}$ has a shortcut incident to it.
Clearly, the sets of inrow edges used by $P_1$ and $P_2$ are disjoint, and hence $E'$ contains at least $2n-2$ inrow right
edges from row $R_{\gamma}$. Suppose $E'$ contains at least $2n-1$ inrow right edges. Then it incurs a weight of $3B\cdot
(2n-1)$. From Claim~\ref{claim:one-in-out-main}, Claim~\ref{claim:exactly-1-bridge} and Claim~\ref{claim:il=jl-and-ir=jr} we
already have a contribution of $4B^4 + B^3 + B^{2}n^2$. Therefore the weight of $E'$ is at least $4B^4 + B^3 + B^{2}n^2+
3B\cdot (2n-1)$.
However, from Equation~\ref{eqn:m-star}, we get that
\begin{align*}
M^*_n &= 4B^{4} + B^{3} + B^{2}n^{2} + B(6n-4) + 2(n^2-1)\\
&\leq 4B^4 + B^3 + B^{2}n^2 + B(6n-4) + 2n^2\\
&< 4B^4 + B^3 + B^{2}n^2 + 3B\cdot (2n-1),
\end{align*}
since $B=11n^2 > 2n^2$.
Therefore, $E'$ can only contain at most $2n-2$ inrow right edges. Hence there must be two shortcut edges incident to row $R_{\gamma}$, which are both used by $E'$. Since $\gamma=n(\alpha-1)+\beta$, the fact that row $R_{\gamma}$ has shortcut edges incident to it implies $(\alpha,\beta)\in S$.
\end{proof}
To prove the third condition it is sufficient to show that $i_T = i_B=\beta$, since Claim~\ref{claim:one-in-out-main} implies $E'$ contains exactly one top source edge and exactly one bottom sink edge. Note that the remaining budget for the weight of $E'$ is at most $2(n^2-1)$.
\begin{claim}
$i_T = i_B=\beta$ \label{claim:i_T=beta=i_B}
\end{claim}
\begin{proof}
Recall that the only bridge edge used is the one on row $R_{\gamma}$. Moreover, the bridge edges form a cut between $T$ and $R\cup B$. Hence, to satisfy the connectedness property it follows that the $t_{i_T}\leadsto r_{\alpha}\cup b_{i_B}$ path in $E'$
contains a $v_{1}^{i_T}\leadsto v_{\gamma}^{n}$ subpath $P_3$. By Claim~\ref{claim:all-from-row-gamma},
all inrow right edges of $E'$ are from row $R_{\gamma}$. As the only remaining budget is $2(n^2-1)$, we cannot use any other
shortcuts or inrow right edges, since $B=11n^2 > 2(n^2 -1)$. Therefore, $P_3$ contains a vertically downward subpath $P'_3$ from $v_{1}^{i_T}$ to $v_{\gamma}^{i_T}$.
If $i_T\neq \beta$, then $P'_3$ incurs weight $2(\gamma-1)$. Note that we also pay a weight of 1 to use half of the interrow
edge when we use the shortcut edge (which we have to use due to Claim~\ref{claim:all-from-row-gamma}) which is incident to row $R_{\gamma}$ and column $C_{\beta}$.
Similarly, the $\ell_{\alpha}\cup t_{i_T}\leadsto b_{i_B}$ path in $E'$ contains a $v_{\gamma}^{n+1}\leadsto v_{n^2}^{n+i_B}$
subpath $P_4$. By Claim~\ref{claim:all-from-row-gamma}, all inrow right edges of $E'$ are from row
$R_{\gamma}$. As the only remaining budget is $2(n^2-1)$, we cannot use any other shortcuts or inrow right edges. Therefore,
$P_4$ contains a vertically downward subpath $P'_4$ from $v_{\gamma}^{n+i_B}$ to $v_{n^2}^{n+i_B}$. If $i_B\neq \beta$, then $P'_4$ incurs
weight $2(n^2 - \gamma)$. Note that we also pay a weight of 1 to use (half of) the interrow edge when we use the shortcut edge (which we have to use due to Claim~\ref{claim:all-from-row-gamma})
which is incident to row $R_{\gamma}$ and column $C_{n+\beta}$.
Suppose without loss of generality that $i_T\neq \beta$. Then $P'_3$ incurs a weight of $2(\gamma-1)$, and the half interrow edge
used incurs an additional weight of 1. In addition, path $P'_4$ incurs a weight of $2(n^2-\gamma)$. Hence the total weight incurred is
$2(\gamma-1)+ 1 + 2(n^2-\gamma) = 2(n^2 - 1)+ 1$ which is greater than our allowed budget. Hence $i_T = \beta$. It can be
shown similarly that $i_B = \beta$.
\end{proof}
\begin{claim}
$E'$ has an $\ell_{\alpha}\leadsto r_{\alpha}$ path and a $t_{\beta}\leadsto b_{\beta}$ path.
\label{claim:hori-vert-connectivity}
\end{claim}
\begin{proof}
First we show that $E'$ has an $\ell_{\alpha}\leadsto r_{\alpha}$ path by taking the following edges (in order):
\begin{itemize}
\item The path $\ell_\alpha \rightarrow \ell'_{\alpha}\rightarrow v_{\gamma}^{0}$ which exists since $i_L = \alpha$ and $j_L = \gamma$
\item The $v_{\gamma}^{0}\leadsto v_{\gamma}^{n}$ path $P_1$ guaranteed in proof of Claim~\ref{claim:all-from-row-gamma}
\item The bridge edge $v_{\gamma}^{n}\rightarrow v_{\gamma}^{n+1}$ guaranteed by Claim~\ref{claim:exactly-1-bridge}
\item The $v_{\gamma}^{n+1}\leadsto v_{\gamma}^{2n+1}$ path $P_2$ guaranteed in proof of Claim~\ref{claim:all-from-row-gamma}
\item The path $v_{\gamma}^{2n+1}\rightarrow r'_{\alpha}\rightarrow r_{\alpha}$ which exists since $i_R = \alpha$ and $j_R = \gamma$
\end{itemize}
Next we show that $E'$ has a $t_{\beta}\leadsto b_{\beta}$ path by taking the following edges (in order):
\begin{itemize}
\item The edge $t_\beta \rightarrow v_{1}^{\beta}$ which exists since $i_T = \beta$
\item The $v_{1}^{\beta}\leadsto v_{\gamma}^{n}$ path $P_3$ guaranteed in proof of Claim~\ref{claim:i_T=beta=i_B}
\item The bridge edge $v_{\gamma}^{n}\rightarrow v_{\gamma}^{n+1}$ guaranteed by Claim~\ref{claim:exactly-1-bridge}
\item The $v_{\gamma}^{n+1}\leadsto v_{n^2}^{n+\beta}$ path $P_4$ guaranteed in proof of Claim~\ref{claim:i_T=beta=i_B}
\item The edge $v_{n^2}^{n+\beta}\rightarrow b_{\beta}$ which exists since $i_B = \beta$
\end{itemize}
\end{proof}
Claim~\ref{claim:one-in-out-main}, Claim~\ref{claim:il=jl-and-ir=jr}, Claim~\ref{claim:all-from-row-gamma}, Claim~\ref{claim:i_T=beta=i_B} and Claim~\ref{claim:hori-vert-connectivity} together imply that $E'$ represents $(\alpha,\beta)\in S$ (see Definition~\ref{defn:represents-main}). We now show that the weight of $E'$ is exactly $M^*_n$.
\begin{lemma}
The weight of $E'$ is exactly $M^*_n$.
\end{lemma}
\begin{proof}
Claim~\ref{claim:one-in-out-main} contributes a weight of $4B^{4}$ to $E'$. Claim~\ref{claim:exactly-1-bridge} contributes a weight of $B^3$ to $E'$. From the proof of Claim~\ref{claim:il=jl-and-ir=jr}, we can see that $E'$ incurs weight $B^{2}n^2$ from the source
internal edge and sink internal edge. Claim~\ref{claim:all-from-row-gamma} implies that $E'$ contains exactly $2n-2$ inrow right
edges from row $R_{\gamma}$ and also both shortcuts incident to row $R_{\gamma}$. This incurs a cost of $3B(2n-2)+2B = B(6n-4)$. By arguments similar to those in the proof of Claim~\ref{claim:i_T=beta=i_B}, $E'$ contains at least $(\gamma-1)$ interrow down edges from column $C_{\beta}$ and at least $(n^2 -\gamma)$ interrow down edges from column $C_{n+\beta}$. Therefore, the weight of $E'$ is at least $4B^4 + B^3 + B^{2}n^2 + B\cdot(6n-4)+ 2(\gamma-1) + 2(n^2 -\gamma)= 4B^4 + B^3 + B^{2}n^2 + B\cdot(6n-4)+ 2(n^2 -1) = M^*_n$. Hence the weight of $E'$ is exactly $M^*_n$.
\end{proof}
This completes the proof of the second statement of
Lemma~\ref{lem:main-gadget}.
\section{W[1]-hardness for SCSS in general graphs}
The main goal of this section is to prove Theorem~\ref{thm:scss-main-hardness-general-graphs}.
We note that Guo et al.~\cite{guo-et-al}
give a reduction from \textsc{Multicolored Clique} \xspace which builds an equivalent instance of \textsc{Strongly Connected Steiner Subgraph} \xspace with a quadratic blowup in the number of terminals.
Hence, using the reduction of Guo et al.~\cite{guo-et-al}, only an $f(k)\cdot n^{o(\sqrt{k})}$ algorithm for SCSS can be ruled out under
ETH. We are able to improve upon this hardness by using the \textsc{Partitioned Subgraph Isomorphism}\xspace (PSI) problem introduced by Marx~\cite{marx-beat-treewidth}. Our
reduction is also slightly simpler than the one given by Guo et al.
\begin{center}
\noindent\framebox{\begin{minipage}{6.00in}
\textbf{\textsc{Partitioned Subgraph Isomorphism}\xspace} (PSI)\\
\emph{Input }: Undirected graphs $G=(V_G=\{g_1,g_2,\ldots,g_{\ell}\},E_G)$ and $H=(V_H,E_H)$, and a partition of $V_H$ into
disjoint subsets
$H_1,H_2,\ldots, H_{\ell}$ \\
\emph{Question}: Is there an injection $\phi: V_G\rightarrow V_H$ such that
\begin{enumerate}
\item For every $i\in [\ell]$ we have $\phi(g_i)\in H_i$.
\item For every edge $\{g_i,g_j\}\in E_G$ we have $\{\phi(g_i),\phi(g_j)\}\in E_H$.
\end{enumerate}
\end{minipage}}
\end{center}
The PSI problem is so-called because the vertices of $H$ are partitioned into parts: one part corresponding to every vertex of $G$. Marx~\cite{marx-beat-treewidth} showed the following hardness result:
\begin{theorem}
\label{thm:marx-csi}
Unless ETH fails, \textsc{Partitioned Subgraph Isomorphism}\xspace cannot be solved in time $f(r)\cdot n^{o(r/\log r)}$ where $f$ is any
computable function, $r$ is the number of edges in $G$ and $n$ is the number of vertices in~$H$.
\end{theorem}
\begin{figure}[t]
\centering
\includegraphics[width=6in]{scss}
\vspace{-15mm}
\caption{An illustration of the reduction from PSI to SCSS described in Theorem~\ref{thm:scss-main-hardness-general-graphs} for the special case when $V_G=\{g_1, g_2\}$, $E_G$ consists of the single edge $g_1 g_2$, and $H$ is a path on three vertices $v-u-w$ with $H_1 = \{v,w\}$ and $H_2 = \{u\}$.
\label{fig:scss}}
\end{figure}
By giving a reduction from \textsc{Partitioned Subgraph Isomorphism}\xspace to \textsc{Strongly Connected Steiner Subgraph} \xspace where $k=O(|E_G|)$, we will obtain an $f(k)\cdot n^{o(k/\log k)}$ hardness for SCSS under the ETH,
where $k$ is the number of terminals. Consider an instance $(G,H)$ of \textsc{Partitioned Subgraph Isomorphism}\xspace. We now build an instance $(G^*, T^*)$ of \textsc{Strongly Connected Steiner Subgraph} \xspace as follows:
\begin{itemize}
\item $B = \{b_{i}\ |\ i\in [\ell]\}$
\item $C = \{c_v\ |\ v\in V_H\}$
\item $H = \{h_v\ |\ v\in V_H\}$
\item $D = \{d_{uv}, d_{vu}\ |\ \{u,v\}\in E_H\}$
\item $A = \{a_{uv}, a_{vu}\ |\ \{u,v\}\in E_H\}$
\item $F = \{ f_{ij}\ |\ 1\leq i,j\leq \ell,\ g_{i}g_{j}\in E_G \}$
\item $V^{*}= B\cup C\cup H\cup D\cup A\cup F$
\item $E_1 = \{ (c_{v},b_{i})\ |\ v\in H_i, 1\leq i\leq \ell \}$
\item $E_2 = \{ (b_{i},h_{v})\ |\ v\in H_i, 1\leq i\leq \ell \}$
\item $E_3 = \{ (h_{v},c_{v})\ |\ v\in V_H \}$
\item $E_4 = \{ (c_{v},d_{vu})\ |\ \{u,v\}\in E_H \}$
\item $E_5 = \{ (a_{vu},h_{u})\ |\ \{u,v\}\in E_H \}$
\item $E_6 = \{ (d_{vu},a_{vu})\ |\ \{u,v\}\in E_H \}$
\item $E_7 = \{ (f_{ij},d_{vu}), (a_{vu},f_{ij})\ |\ \{u,v\}\in E_H;\ v\in H_i;\ u\in H_j;\ 1\leq i,j \leq \ell \}$
\item $E^{*} = E_1 \cup E_2 \cup E_3 \cup E_4 \cup E_5\cup E_6\cup E_7$
\item The set of terminals is $T^*=B\cup F$.
\end{itemize}
This completes the construction of the graph $G^* = (V^*, E^*)$.
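For concreteness, the construction can be summarized by the following Python sketch (our own illustrative encoding, not from the original text). The assumed inputs are: $\ell$; the vertex set $V_H$; the edge sets $E_H$ and $E_G$ given as 2-tuples (unordered edges, with $E_G$ holding index pairs $(i,j)$ for the edges $g_ig_j$); and a map \texttt{part} sending each $v\in V_H$ to the index $i$ with $v\in H_i$.
\begin{verbatim}
def build_scss_instance(ell, V_H, E_H, E_G, part):
    V = {("b", i) for i in range(1, ell + 1)}
    E = set()
    for v in V_H:                                    # black edges
        V |= {("c", v), ("h", v)}
        E |= {(("c", v), ("b", part[v])),            # E_1
              (("b", part[v]), ("h", v)),            # E_2
              (("h", v), ("c", v))}                  # E_3
    for (i, j) in E_G:                               # terminals f_ij, f_ji
        V |= {("f", i, j), ("f", j, i)}
    for (u, v) in E_H:
        for (s, t) in [(u, v), (v, u)]:              # both orientations
            V |= {("d", s, t), ("a", s, t)}
            E |= {(("c", s), ("d", s, t)),           # E_4 (light)
                  (("a", s, t), ("h", t)),           # E_5 (light)
                  (("d", s, t), ("a", s, t))}        # E_6 (dotted)
            i, j = part[s], part[t]
            if ("f", i, j) in V:                     # g_i g_j is in E_G
                E |= {(("f", i, j), ("d", s, t)),    # E_7 (dotted)
                      (("a", s, t), ("f", i, j))}
    T = {x for x in V if x[0] in ("b", "f")}         # T* = B union F
    return V, E, T
\end{verbatim}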
An illustration of the construction for a small graph is given in Figure~\ref{fig:scss}. In the instance of \textsc{Partitioned Subgraph Isomorphism}\xspace we can assume the graph $G$ is connected,
otherwise we can solve the problem for each connected component. Therefore, we have that $k=|T^*| = \ell + 2|E_G| = O(|E_G|)$. For ease of argument, we distinguish the different types of edges of $G^*$ as follows (see Figure~\ref{fig:scss}):
\begin{itemize}
\item Edges of $E_1 \cup E_2\cup E_3$ are denoted using black edges
\item Edges of $E_4 \cup E_5$ are denoted using light/gray edges
\item Edges of $E_6 \cup E_7$ are denoted using dotted edges
\end{itemize}
We now show two lemmas which complete the reduction from \textsc{Partitioned Subgraph Isomorphism}\xspace to \textsc{Strongly Connected Steiner Subgraph} \xspace.
\begin{lemma}
\label{lemma:scss-hardness-general-graphs-reduction-easy} If the instance $(G,H)$ of \textsc{Partitioned Subgraph Isomorphism}\xspace answers YES then the instance $(G^*,T^*)$ of \textsc{Strongly Connected Steiner Subgraph} \xspace has a solution of size $\leq 3\ell+10|E_G|$.
\end{lemma}
\begin{proof}
Suppose the instance $(G,H)$ of \textsc{Partitioned Subgraph Isomorphism}\xspace answers YES and let $\phi: V_G \rightarrow V_H$ be the corresponding injection. Then we claim that the
following set $M'$ of $3\ell+10|E_G|$ edges (the count is verified after the list) forms a solution for the \textsc{Strongly Connected Steiner Subgraph} \xspace instance:
\begin{itemize}
\item $M_1 = \{ (h_{\phi(g_i)},c_{\phi(g_i)})\ |\ i\in [\ell] \}$
\item $M_2 = \{ (b_{i},h_{\phi(g_i)})\ |\ i\in [\ell] \}$
\item $M_3 = \{ (c_{\phi(g_i)},b_{i})\ |\ i\in [\ell] \}$
\item $M_4 = \{ (c_{\phi(g_i)},d_{\phi(g_i)\phi(g_j)}), (d_{\phi(g_i)\phi(g_j)},a_{\phi(g_i)\phi(g_j)}),
(a_{\phi(g_i)\phi(g_j)},h_{\phi(g_j)})\ |\ g_{i}g_{j}\in E_G; 1\leq i,j\leq \ell \}$.
\item $M_5 = \{ (f_{ij},d_{\phi(g_i)\phi(g_j)}), (a_{\phi(g_i)\phi(g_j)},f_{ij}) \ |\ g_{i}g_{j}\in E_G; 1\leq
i,j\leq \ell \}$.
\item $M' = M_1 \cup M_2 \cup M_3 \cup M_4\cup M_5$
\end{itemize}
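Counting, these sets contribute
\begin{equation*}
|M_1|+|M_2|+|M_3|+|M_4|+|M_5| \;=\; \ell + \ell + \ell + 6|E_G| + 4|E_G| \;=\; 3\ell+10|E_G|
\end{equation*}
edges, where $M_4$ and $M_5$ contribute three and two edges, respectively, for each of the two ordered pairs arising from an edge of $E_G$.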
First consider $i\neq j$ such that $g_{i}g_{j}\in E_G$. Then there is a $b_i\leadsto b_j$ path in $M'$, namely $b_i
\rightarrow h_{\phi(g_i)}\rightarrow c_{\phi(g_i)}\rightarrow d_{\phi(g_i)\phi(g_j)}\rightarrow
a_{\phi(g_i)\phi(g_j)}\rightarrow h_{\phi(g_j)}\rightarrow c_{\phi(g_j)}\rightarrow b_j$. Generalizing this and observing
that $G$ is connected, we can see that any two terminals in $B$ are strongly connected. Now consider two terminals $f_{ij}$ and $b_{q}$
such that $1\leq i,j,q\leq \ell$. The existence of the terminal $f_{ij}$ implies $g_{i}g_{j}\in E_G$ and hence
$\phi(g_i)\phi(g_j)\in E_H$. There is a path in $M'$ from $f_{ij}$ to $b_q$: use the path $f_{ij}\rightarrow
d_{\phi(g_i)\phi(g_j)}\rightarrow a_{\phi(g_i)\phi(g_j)}\rightarrow h_{\phi(g_j)}\rightarrow c_{\phi(g_j)}\rightarrow b_j$
followed by the $b_{j}\leadsto b_{q}$ path (which was shown to exist above). Similarly there is a path in $M'$ from $b_q$ to
$f_{ij}$: use the $b_q\leadsto b_i$ path (which was shown to exist above) followed by the path $b_i \rightarrow
h_{\phi(g_i)}\rightarrow c_{\phi(g_i)}\rightarrow d_{\phi(g_i)\phi(g_j)}\rightarrow a_{\phi(g_i)\phi(g_j)}\rightarrow
f_{ij}$. Hence each terminal in $B$ can reach every terminal in $F$ and vice versa. Finally, consider any two terminals $f_{ij}$ and $f_{st}$ in $F$: the terminal $f_{ij}$ can first reach $b_{i}$, and we have seen above that $b_i$ can reach any terminal in $F$.
This shows $M'$ forms a solution for the \textsc{Strongly Connected Steiner Subgraph} \xspace instance.
\end{proof}
\begin{lemma}
\label{lemma:scss-hardness-general-graphs-reduction-hard}If the instance $(G^*,T^*)$ of \textsc{Strongly Connected Steiner Subgraph} \xspace has a solution of size $\leq 3\ell+10|E_G|$ then the instance $(G,H)$ of \textsc{Partitioned Subgraph Isomorphism}\xspace answers YES.
\end{lemma}
\begin{proof}
Let $X$ be a solution of size $3\ell+10|E_G|$ for the instance $(G^*, T^*)$ of SCSS. Consider a terminal $f_{ij}\in F$. The only out-neighbors of $f_{ij}$ are vertices from $D$, and hence $X$ must contain an
edge $(f_{ij},d_{vu})$ such that $v\in H_i$ and $u\in H_j$. However, the only out-neighbor of $d_{vu}$ is $a_{vu}$, and hence $X$ has to contain this edge as well. Finally, $X$ must also contain one incoming edge into $f_{ij}$ since we desire strong connectivity. So each terminal $f_{ij}$ needs three ``private'' dotted edges, i.e., three dotted edges that are not shared with any other terminal of $F$ in any
optimum solution. This uses up $6|E_G|$ of the budget since $|F|=2|E_G|$. Referring to Figure~\ref{fig:scss}, we can see any $f_{ij}\in F$ needs
two ``private'' light edges in $X$: one edge coming out of some vertex in $A$ and some edge going into a vertex of $D$. This uses up
$4|E_G|$ more from the budget, leaving us with only $3\ell$ edges.
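In other words, the budget decomposes as
\begin{equation*}
3\ell+10|E_G| \;=\; \underbrace{6|E_G|}_{\text{dotted edges for terminals in } F} \;+\; \underbrace{4|E_G|}_{\text{light edges for terminals in } F} \;+\; \underbrace{3\ell}_{\text{edges for terminals in } B}.
\end{equation*}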
Consider $b_i$ for $i\in [\ell]$. First we claim that $X$ must contain at least three black edges for $b_i$ to have incoming and outgoing
paths to the other terminals. The only outgoing edge from $b_i$ is to vertices of $H$, and hence we need to pick an edge $(b_i,h_{v})$ such that $v\in H_i$. Since the only out-neighbor of $h_{v}$ is $c_{v}$, it follows that $X$ must pick this edge as well. Additionally, $X$
also needs to contain at least one incoming edge into $b_i$ to account for incoming paths from other terminals to $b_i$. So each $b_i$ needs to
have at least three edges selected in order to have incoming and outgoing paths to other terminals. Moreover, all these edges
are clearly ``private'', i.e., different for each $b_i$. But as seen in the previous paragraph, our remaining budget was at most $3\ell$. Hence $X$ selects exactly three such edges for each $b_i$. We now claim that once $X$ contains the edges $(b_i,h_{v})$ and $(h_{v},c_{v})$ for some $v\in H_i$, then $X$ must also contain the edge $(c_{v},b_i)$. Suppose not, and to provide an incoming edge into $b_i$ the solution $X$
selects the edge $(c_{w},b_i)$ for some $w\in H_i$ such that $w \neq v$. Then since $h_{w}$ is the only in-neighbor of $c_{w}$, the solution $X$ would
be forced to select the edge $(h_{w},c_{w})$ as well. This implies that at least four edges have been selected for $b_i$, which is a contradiction. So for every $i\in [\ell]$, there
is a vertex $v_i\in H_i$ such that the edges $(b_i,h_{v_i}), (h_{v_i},c_{v_i})$ and $(c_{v_i},b_i)$ are selected in the
solution for the \textsc{Strongly Connected Steiner Subgraph} \xspace instance. Further these are the only black edges in $X$ corresponding to $b_i$ (refer to
Figure~\ref{fig:scss}). It also follows for each $f_{ij}\in F$, the solution $X$ contains exactly three of the dotted edges (we argued above that each $f_{ij}$ needs three dotted edges, and the budget now implies that this is the maximum we can allow).
Define $\phi: V_G\rightarrow V_H$ by $\phi(g_i)=v_i$ for each $i\in [\ell]$. Since $v_i\in H_i$ and the sets $H_{1}, H_2, \ldots, H_{\ell}$ form a
disjoint partition of $V_H$, it follows that the function $\phi$ is an injection. Consider any edge $g_{i}g_{j}\in E_G$. We have seen above
that the solution $X$ contains exactly three dotted edges per $f_{ij}\in F$. Suppose for $f_{ij}\in F$ the solution $X$ contains the edges $(f_{ij}, d_{vu}),
(d_{vu},a_{vu})$ and $(a_{vu},f_{ij})$ for some $v\in H_i, u\in H_j$. The only incoming path for $f_{ij}$ is via $d_{vu}$.
Also the only outgoing path from $b_i$ is via $c_{v_i}$. If $v_{i}\neq v$ then we would need two other dotted edges to
reach $f_{ij}$, which is a contradiction since we have already used the allocated budget of three such edges. Hence, $v_i=v$. Similarly, it follows that $v_j=u$.
Finally, the existence of the vertex $d_{vu}$ implies $vu\in E_H$, i.e., $\phi(g_i)\phi(g_j)\in E_H$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:scss-main-hardness-general-graphs}}
Finally, we are now ready to prove Theorem~\ref{thm:scss-main-hardness-general-graphs} which is restated below:
\begin{reptheorem}{thm:scss-main-hardness-general-graphs}
Under ETH, the edge-unweighted version of the SCSS problem cannot be solved in time $f(k)\cdot n^{o(k/\log k)}$ where $f$ is any
computable function, $k$ is the number of terminals and $n$ is the number of vertices in the instance.
\end{reptheorem}
\begin{proof}
Lemma~\ref{lemma:scss-hardness-general-graphs-reduction-easy} and Lemma~\ref{lemma:scss-hardness-general-graphs-reduction-hard} together give a parameterized reduction from PSI to SCSS.
Observe that the number of terminals $k$ of the SCSS instance is $|B\cup F|=|V_G|+ 2|E_G| = O(|E_G|)$ since we had the assumption
that $G$ is connected. The number of vertices in the SCSS instance is $|V^*| = |V_G|+2|V_H|+4|E_H|+2|E_G| = O(|E_H|)$.
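Explicitly, this vertex count decomposes as
\begin{equation*}
|V^*| \;=\; \underbrace{|V_G|}_{B} + \underbrace{|V_H|}_{C} + \underbrace{|V_H|}_{H} + \underbrace{2|E_H|}_{D} + \underbrace{2|E_H|}_{A} + \underbrace{2|E_G|}_{F}.
\end{equation*}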
Therefore from Theorem~\ref{thm:marx-csi} we can conclude that under ETH there is no $f(k)\cdot n^{o(k/\log k)}$ algorithm for SCSS
where $n$ is the number of vertices in the graph and $k$ is the number of terminals.
\end{proof}
\section{W[1]-hardness for DSN in planar DAGs}
The main goal of this section is to prove Theorem~\ref{thm:dsn-w[1]-hardness} which is restated below
\begin{reptheorem}{thm:dsn-w[1]-hardness}
The edge-unweighted version of the \textsc{Directed Steiner Network}\xspace problem is W[1]-hard parameterized by the number $k$ of terminal pairs, even when the input is restricted to planar directed acyclic graphs (DAGs). Moreover, there is no $f(k)\cdot n^{o(k)}$ algorithm for any computable function $f$, unless the ETH fails.
\end{reptheorem}
Note that this shows that the $n^{O(k)}$ algorithm of Feldman-Ruhl is asymptotically optimal. To prove Theorem~\ref{thm:dsn-w[1]-hardness}, we reduce from the
\textsc{Grid Tiling} problem introduced by Marx~\cite{marx-beat-treewidth}.
\begin{center}
\noindent\framebox{\begin{minipage}{6.00in}
\textbf{$k\times k$ \textsc{Grid Tiling}}\\
\emph{Input }: Integers $k, n$, and $k^2$ non-empty sets $S_{i,j}\subseteq [n]\times [n]$ where $1\leq i, j\leq k$\\
\emph{Question}: For each $1\leq i, j\leq k$ does there exist an entry $s_{i,j}\in S_{i,j}$ such that
\begin{itemize}
\item If $s_{i,j}=(x,y)$ and $s_{i,j+1}=(x',y')$ then $x=x'$.
\item If $s_{i,j}=(x,y)$ and $s_{i+1,j}=(x',y')$ then $y=y'$.
\end{itemize}
\end{minipage}}
\end{center}
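As a toy illustration (our own example, not part of the original text), take $k=n=2$ with $S_{1,1}=\{(1,1)\}$, $S_{1,2}=\{(1,2)\}$, $S_{2,1}=\{(2,1)\}$ and $S_{2,2}=\{(2,2)\}$. Choosing $s_{i,j}$ as the unique element of each set satisfies both conditions: the entries of each row share their first coordinate, and the entries of each column share their second coordinate, so this is a YES instance.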
\begin{figure}[t]
\centering
\includegraphics[height=5in]{dsn-hardness}
\caption{The instance of \textsc{Directed Steiner Network}\xspace created from an instance of \textsc{Grid Tiling}\xspace.
\label{fig:dsn}}
\end{figure}
Consider an instance $(k,n, \{S_{i,j}\ :\ 1\leq i,j \leq k\})$ of \textsc{Grid Tiling}. We now build an instance $(G,T)$ of edge-weighted \textsc{Directed Steiner Network}\xspace as shown in Figure~\ref{fig:dsn}.
Set $T= \{(a_i,b_i), (c_i,d_i)\ :\ i\in [k]\}$, i.e., we have $2k$ terminal pairs. We introduce $k^2$ red gadgets where
each gadget is an $n\times n$ grid. Set the weight of each black edge to $2$.
\begin{definition}
An $a_i \leadsto b_i$ \emph{canonical} path is a path from $a_i$ to $b_i$ which starts with a blue edge coming out of $a_i$,
then follows a horizontal path of black edges and finally ends with a blue edge going into $b_i$. Similarly, a $c_j\leadsto
d_j$ \emph{canonical} path is a path from $c_j$ to $d_j$ which starts with a blue edge coming out of $c_j$, then follows a
vertically downward path of black edges and finally ends with a blue edge going into $d_j$.
\end{definition}
There are $n$ edge-disjoint $a_i \leadsto b_i$ canonical paths: let us call them $P^{1}_{i}, P^{2}_{i}, \ldots, P^{n}_i$ as
viewed from top to bottom. They are labeled in magenta in Figure~\ref{fig:dsn}. Similarly we call the canonical paths
from $c_j$ to $d_j$ as $Q^{1}_{j}, Q^{2}_{j}, \ldots, Q^{n}_j$ when viewed from left to right. For each $i\in [k]$ and
$\ell\in [n]$ we assign a weight of $\Delta(n+1-\ell)$ and $\Delta\ell$ to the first and last blue edges of $P^{\ell}_{i}$,
respectively. Similarly for each $j\in [k]$ and $\ell\in [n]$ we assign a weight of $\Delta(n+1-\ell)$ and $\Delta\ell$ to the
first and last blue edges of $Q^{\ell}_{j}$, respectively. Thus the total weight of the first and the last blue edges on any canonical path
is exactly $\Delta(n+1)$. The idea is to choose $\Delta$ large enough such that in any optimum solution the paths between the
terminals will be exactly the canonical paths. We will see later that $\Delta=5n^2$ will suffice for this purpose. Any canonical path consists of the following set of edges:
\begin{itemize}
\item Two blue edges (which sum up to $\Delta(n+1)$)
\item $(k+1)$ black edges not inside the gadgets
\item $(n-1)$ black edges inside each gadget
\end{itemize}
Since the number of gadgets each canonical path visits is $k$ and the weight of each black edge is 2, we have the
total weight of any canonical path is $\Delta(n+1)+2(k+1)+2k(n-1)$.
\begin{figure}
\centering
\includegraphics[height=3in]{savings}
\vspace{-40mm}
\caption{Let $u, v$ be two consecutive vertices on the canonical path $P^{\ell}_{i}$. Let $v$ be on the canonical path
$Q^{\ell'}_{j}$ and let $y$ be the vertex preceding it on this path. If $v$ is a green vertex then we subdivide the edge $(y,v)$ by introducing a new vertex $x$
and adding two edges $(y,x)$ and $(x,v)$ of weight 1. We also add an edge $(u,x)$ of weight 1. The idea is that if both the edges $(y,v)$ and $(u,v)$ were
being used initially, then we can now save a weight of 1 by making the horizontal path choose $(u,x)$, after which we get $(x,v)$ for free, as it is already being used
by the vertical canonical path.
\label{fig:savings}}
\end{figure}
Intuitively the $k^2$ gadgets correspond to the $k^2$ sets in the \textsc{Grid Tiling} instance. Let us denote by $G_{i,j}$ the gadget
which contains all vertices which lie on the intersection of any $a_i \leadsto b_i$ path and any $c_j \leadsto d_j$ path. If $(x,y)\in
S_{i,j}$ then we color green the vertex in the gadget $G_{i,j}$ which is the unique intersection of the canonical paths
$P_{i}^{x}$ and $Q_{j}^{y}$. Then we add a shortcut as shown in Figure~\ref{fig:savings}. The idea is that if both the $a_i
\leadsto b_i$ path and the $c_j \leadsto d_j$ path pass through the green vertex, then the $a_i \leadsto b_i$ path can save a weight
of 1 by using the green edge and a vertical edge to reach the green vertex, instead of paying a weight of 2 to use the
horizontal edge reaching the green vertex. It is easy to see that there is a solution (without using green edges) for the DSN
instance of weight $B^* = 2k \Big ( \Delta(n+1)+2(k+1)+2k(n-1)\Big)$: each terminal pair just uses a canonical path and these
canonical paths are pairwise edge-disjoint.
The following assumption will be helpful in handling some of the border cases of the gadget construction. We may assume that $1 < \min\{x,y\}$ holds for every $(x,y)\in S_{i,j}$: indeed, we can increase $n$ by one
and replace every $(x,y)$ by $(x+1,y+1)$ without changing the problem. Hence, no green vertex can be in the first row or first column of any gadget. Combining this
fact with the orientation of the edges we get the only gadgets which can intersect any $a_i\leadsto b_i$ path are $G_{i,1},
G_{i,2}, \ldots, G_{i,k}$. Similarly the only gadgets which can intersect any $c_j\leadsto d_j$ path are $G_{1,j}, G_{2,j},
\ldots, G_{k,j}$. This completes the construction of the instance $(G,T)$ of \textsc{Directed Steiner Network}\xspace.
Lemmas~\ref{lem:dsn-redn-easy} and \ref{lem:dsn-redn-hard} below prove that the reduction described above is indeed a correct reduction from \textsc{Grid Tiling} to DSN.
\begin{lemma}
\label{lem:dsn-redn-easy} If the instance $(k,n, \{S_{i,j}\ :\ 1\leq i,j \leq k\})$ of \textsc{Grid Tiling} has a solution then the instance $(G,T)$ of \textsc{Directed Steiner Network}\xspace has a solution of weight at most $B^* - k^2$.
\end{lemma}
\begin{proof}
For each $1\leq i,j\leq k$ let $s_{i,j}\in S_{i,j}$ be the entry in the solution of the \textsc{Grid Tiling} instance.
Therefore for every $i\in [k]$ we know that each of the $k$ entries $s_{i,1}, s_{i,2}, \ldots, s_{i,k}$ has the same first coordinate $\alpha_{i}$. Similarly for every $j\in [k]$ each of the $k$ entries $s_{1,j}, s_{2,j}, \ldots, s_{k,j}$
has the same second coordinate $\gamma_j$. For each $j\in [k]$ we use the canonical path $Q^{\gamma_j}_{j}$ to satisfy the terminal for $(c_j,d_j)$. For each $i\in [k]$, we essentially use the canonical path $P_{i}^{\alpha_i}$ with the following modifications: for each $j\in [k]$, take the shortcut green edge (as shown in Figure~\ref{fig:savings}) when we encounter the green vertex (this is guaranteed to happen since $(\alpha_i, \gamma_j)=s_{i,j}\in S_{i,j}$) in $G_{i,j}$ at intersection of the canonical paths $P_{i}^{\alpha_i}$ and $Q_{j}^{\gamma_j}$. Hence, overall we save a total of $k^2$: a saving of one per gadget. Thus, we have produced a solution for the instance $(G,T)$ of weight $2k \Big ( \Delta(n+1)+2(k+1)+2k(n-1)\Big) - k^2 = B^*- k^2$.
\end{proof}
We now prove the other direction which is more involved. First we show some preliminary claims:
\begin{claim}
\label{claim:vertical-canonical} Any optimum solution for $(G,T)$ contains a $c_j\leadsto d_j$ canonical path for each $j\in
[k]$.
\end{claim}
\begin{proof}
Suppose to the contrary that there is an optimum solution $N$ for $(G,T)$ which does not contain a canonical $c_{j}\leadsto d_{j}$ path for some $j\in [k]$.
From the orientation of the edges, we know that there is a $c_{j}\leadsto d_j$ path in $N$ that starts with the
blue edge from $Q_{j}^{\ell}$ and ends with a blue edge from $Q_{j}^{\ell'}$ for some $\ell'
> \ell$. We create a new set of edges $N'$ from $N$ as follows:
\begin{itemize}
\item Add all those edges of $Q_{j}^{\ell}$ which were not present in $N$. In particular, we add the last blue edge of $Q_{j}^{\ell}$ since $\ell'>\ell$
\item Delete the last blue edge of $Q^{\ell'}_{j}$.
\end{itemize}
It is easy to see that $N'$ is also a solution for $(G,T)$: this is because $N'$ contains the canonical path $Q_{j}^{\ell}$ to satisfy the pair $(c_j, d_j)$, and the last (blue) edge of any $c_j\leadsto d_j$ canonical
path cannot be on any $a_i\leadsto b_i$ path for any $i\in [k]$. Changing the last blue edge saves us $(\ell'-\ell)\Delta \geq \Delta = 5n^2$. However
we have to be careful since we added some edges to the solution. But these edges are the internal (black) edges of
$Q^{\ell}_{j}$, and their weight is $\leq 2(k+1) + 2k(n-1) =2kn+2 < 5n^2 = \Delta$ since $1\leq k\leq n$. Therefore we are able to create a new
solution $N'$ whose weight is less than that of an optimum solution $N$, which is a contradiction.
\end{proof}
\begin{definition}
\label{defn-almost-canonical} An $a_i\leadsto b_i$ path is called an \emph{almost canonical} path if its first and last edges are blue edges from the same $a_i \leadsto b_i$ canonical path.
\end{definition}
Hence, an $a_i\leadsto b_i$ almost canonical path looks very similar to an $a_i \leadsto b_i$ canonical path, except that it can replace some of the horizontal black edges by green edges and vertical black edges as shown in Figure~\ref{fig:savings}. Note, however, that by definition an almost canonical path must end on the same horizontal level on which it began. The proof of the next claim is very similar to that of Claim~\ref{claim:vertical-canonical}.
\begin{claim}
\label{claim:horizontal-canonical} Any optimum solution for DSN contains an $a_i\leadsto b_i$ \emph{almost canonical} path for every
$i\in [k]$.
\end{claim}
\begin{proof}
Suppose to the contrary that there is an optimum solution $N$ which does not contain an almost canonical $a_{i}\leadsto b_{i}$ path for some $i\in [k]$. Hence, the $a_{i}\leadsto b_{i}$ path in $N$ starts and ends at different levels. From the orientation of the edges, we know that there is a $a_{i}\leadsto b_i$ path in the optimum solution that starts
with the blue edge from $P_{i}^{\ell}$ and ends with a blue edge from $P_{i}^{\ell'}$ for some $\ell'
> \ell$ (note that the construction in Figure~\ref{fig:savings} does not allow any $a_i\leadsto b_i$ path to climb onto an upper level).
We create a new set of edges $N'$ from $N$ as follows:
\begin{itemize}
\item Add all those edges of $P_{i}^{\ell}$ which were not present in $N$.
Note that in particular, we add the last blue edge of $P_{i}^{\ell}$ since $\ell'>\ell$.
\item Delete the last blue edge of $P^{\ell'}_{i}$.
\end{itemize}
It is easy to see that $N'$ is also a solution for $(G,T)$: this is because $N'$ contains the canonical path $P_{i}^{\ell}$ to satisfy the pair $(a_i, b_i)$, and the last (blue) edge of any $a_i\leadsto b_i$ canonical
path cannot be on any $c_j\leadsto d_j$ path for any $j\in [k]$. Changing the last edge saves us $(\ell'-\ell)\Delta \geq \Delta = 5n^2$. But we
have to be careful since we also added some edges to the solution. The total weight of edges added is $\leq 2(k+1) + 2k(n-1) = 2kn+2 < 5n^2 = \Delta$ since $1\leq k\leq n$. So we are able to create a new solution $N'$ whose weight is less than that of an optimum solution $N$, which is a contradiction.
\end{proof}
\begin{lemma}
\label{lem:dsn-redn-hard} If the instance $(G,T)$ of \textsc{Directed Steiner Network}\xspace has a solution of weight at most $B^* - k^2$ then the instance $(k,n, \{S_{i,j}\ :\ 1\leq i,j \leq k\})$ of \textsc{Grid Tiling} has a solution.
\end{lemma}
\begin{proof}
Consider any optimum solution $X$. By Claim~\ref{claim:vertical-canonical} and
Claim~\ref{claim:horizontal-canonical} we know that $X$ has an $a_i\leadsto b_i$ almost canonical path and a $c_j\leadsto
d_j$ canonical path for every $1\leq i,j\leq k$. Moreover, this set of $2k$ paths forms a solution for DSN. Since any optimum
solution is minimal, $X$ is the union of these $2k$ paths: one for each terminal pair. For each $i,j\in [k]$ let the $a_i \leadsto b_i$ almost canonical path in $X$ be $\overline{P}_{i}^{\alpha_i}$ and the $c_j \leadsto d_j$ canonical path in $X$ be $Q_{j}^{\gamma_j}$.
The $a_i\leadsto b_i$ almost canonical path $\overline{P}_{i}^{\alpha_i}$ and the $c_j\leadsto d_j$ canonical path $Q_{j}^{\gamma_j}$ in $X$ intersect in a unique vertex in the gadget $G_{i,j}$. If each $a_i \leadsto b_i$ path were canonical instead of almost canonical, then the weight of $X$ would have been exactly $B^*$. However, we know that the weight of $X$ is at most $B^* -k^2$, so the total saving must be at least $k^2$. It is easy to see that any $a_i\leadsto b_i$ almost canonical path and any $c_j\leadsto d_j$ canonical path can have at most one edge in common: the edge which comes vertically downwards into the green
vertex (see Figure~\ref{fig:savings}), and each such common edge saves weight exactly 1. There are $k^2$ gadgets, and at most one edge per gadget can be shared by two paths in $X$. Hence for each gadget $G_{i,j}$ there is exactly one edge which is used by both the $a_i\leadsto b_i$ almost canonical path and the $c_j\leadsto d_j$ canonical path in $X$. So the endpoint of each of these common edges must be a green vertex, i.e., $(\alpha_i, \gamma_j)\in S_{i,j}$ for each $i,j\in [k]$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:dsn-w[1]-hardness}}
Finally, we are now ready to prove Theorem~\ref{thm:dsn-w[1]-hardness}, which is restated below:
\begin{reptheorem}{thm:dsn-w[1]-hardness}
The edge-unweighted version of the \textsc{Directed Steiner Network}\xspace problem is W[1]-hard parameterized by the number $k$ of terminal pairs, even when the input is restricted to planar directed acyclic graphs (DAGs). Moreover, there is no $f(k)\cdot n^{o(k)}$ algorithm for any computable function $f$, unless the ETH fails.
\end{reptheorem}
\begin{proof}
Given an instance $(k,n, \{S_{i,j}\ :\ 1\leq i,j \leq k\})$ of \textsc{Grid Tiling}, we use the reduction described earlier in this section to build an instance $(G,T)$ of edge-weighted \textsc{Directed Steiner Network}\xspace (see Figure~\ref{fig:dsn} for an illustration). It is easy to see that the total number of vertices in $G$ is $O(n^{2}k^{2})$ and moreover $G$ can be constructed in ${\mathrm{poly}}(n,k)$ time. Each gadget is planar (the green shortcut edges do not destroy planarity), and the gadgets are themselves arranged in a grid-like manner; Figure~\ref{fig:dsn} in fact gives a planar embedding of $G$. Moreover, it is not hard to observe that $G$ is a DAG.
It is known~\cite[Theorem 14.28]{fpt-book} that $k\times k$ \textsc{Grid Tiling}\xspace is W[1]-hard parameterized by $k$, and under ETH cannot be solved in $f(k)\cdot n^{o(k)}$ for any computable function $f$. Combining the two directions from Lemma~\ref{lem:dsn-redn-easy} and Lemma~\ref{lem:dsn-redn-hard}, we get a parameterized reduction from $k\times k$ \textsc{Grid Tiling}\xspace to an instance of DSN which is a planar DAG and has $O(k)$ terminal pairs. Hence, it follows that DSN on planar DAGs is W[1]-hard and under ETH cannot be solved in $f(k)\cdot n^{o(k)}$ time for any computable function $f$.
\end{proof}
Note that Theorem~\ref{thm:dsn-w[1]-hardness} shows that the $n^{O(k)}$ algorithm of
Feldman-Ruhl~\cite{feldman-ruhl} for DSN is asymptotically optimal.
\newpage
\bibliographystyle{splncs03}
\section{Introduction}
\label{1}
Detection of intermediate- and high-energy neutrons is important to identify reaction channels and to extract nuclear structure information in experiments using nuclear reactions, e.g. the ($p$,$ppn$) \cite{p2pn}, ($p$,$nd$) \cite{pnd} and ($e$,$e'pn$) \cite{eepn1,eepn2} reactions. However, neutron detection in an experimental environment has remained a challenge, due mainly to $\gamma$-ray background and high counting rates. The most dominant sources of background are prompt and delayed $\gamma$ rays from the reaction target, and $\gamma$ rays from the surroundings. Such background cannot be easily eliminated by shielding materials and, in some instances, may result in a high counting rate.
One method to suppress prompt $\gamma$ rays from the reaction target is the time-of-flight (TOF) method, which exploits the fact that all $\gamma$ rays travel at the speed of light regardless of their energies. This method is usually adopted by neutron detectors that use large-area plastic scintillators, such as LAND \cite{LAND} at GSI and HAND \cite{SubediPhD,HANDPRL} at JLab. Although plastic scintillators can operate at a high rate, a longer flight path, which may lead to a limited solid angle, is usually necessary both to achieve a better $\it{n}$-$\gamma$ discrimination and to reduce the counting rate produced mainly near the reaction target. The TOF method, moreover, cannot eliminate the time-uncorrelated $\gamma$ rays which also exist in most experimental conditions. Such $\gamma$-ray background can be suppressed by increasing the pulse-height detection threshold, but this comes at the expense of a reduced neutron detection efficiency. An alternative way to separate neutrons from $\gamma$ rays is the pulse shape discrimination (PSD) technique, which utilizes the difference in the slow decay components of the induced light output of organic scintillators \cite{PSD}. Neutron detectors with PSD capability are now commercially available in the form of liquid scintillators \cite{liquid}, e.g. DEMON \cite{DEMON} and the Neutron Shell \cite{NeutronShell}, as well as the recently developed solid scintillator EJ-299-33 \cite{EJ299}. These scintillation detectors offer neutron detection with $\it{n}$-$\gamma$ discrimination capability at a low detection energy threshold, but may suffer from pile-up in a high-rate environment because of the longer tail components of the induced pulses compared with normal plastic scintillators. Neutron detectors employing the two conventional techniques described above are not always able to fulfill our demands for the detection of high-energy neutrons, especially in an overwhelming background environment. Therefore, a neutron detector with fast counting and $\it{n}$-$\gamma$ discrimination capabilities at a low energy threshold is highly desirable.
In this article, we report on the development of a new type of neutron detector, named Stack Structure Solid organic Scintillator (S$^4$), which has the capability to suppress low-energy $\gamma$ rays efficiently. The detector consists of plastic scintillators with a short decay time, and is thus highly affordable and flexible compared with liquid scintillators. The detector discriminates neutrons from $\gamma$ rays by exploiting the difference in range of the secondary charged particles, typically protons and electrons, in a plastic scintillator. Because it does not require timing information for $\it{n}$-$\gamma$ separation, it can be placed closer to the reaction target to gain solid-angle coverage. The principle of the discrimination technique and related simulations are presented in Section \ref{2}. Sections \ref{3} and \ref{4} describe the configuration and experimental performance of the detector. A summary and future prospects are given in Sections \ref{5} and \ref{6}.
\section{Principle of $\it{n}$-$\gamma$ discrimination and design concept}
\label{2}
\subsection{Principle of $\it{n}$-$\gamma$ discrimination}
\label{2.1}
The main secondary particles generated by neutrons and $\gamma$ rays in an organic scintillator material are protons (or carbon ions) and electrons, respectively. Given the same energy deposit in the scintillator, the ranges of electrons and the other particles are significantly different. Figure \ref{range} shows the ranges of electrons and protons in a plastic scintillator calculated within the Continuous Slowing Down Approximation (CSDA) \cite{CSDA}. Below 100 MeV, the CSDA range of a proton (or a carbon ion) is about one to two orders of magnitude shorter than that of an electron. Such a difference provides an important means to distinguish neutron and $\gamma$-ray events.
\begin{figure}
\centering
\includegraphics[width=7cm,clip]{range.pdf}
\caption{\label{range}
Ranges of electron and proton in plastic scintillator calculated within the Continuous Slowing Down Approximation (CSDA) \cite{CSDA}. The density of plastic scintillator is 1.03 g/cm$^{3}$.}
\end{figure}
To this end, we consider a detector consisting of multiple layers of plastic scintillators. The most straightforward way to obtain the range of a secondary particle is to read out the signal from every single layer and identify the number of layers with signals. This method, however, requires a large number of readout channels. As shown in Fig. \ref{range}, the recoil protons from neutrons of energy below 100 MeV have ranges from sub-millimeter up to centimeter scale; thus most of them can be stopped in a scintillator layer of sub-centimeter thickness. The secondary electrons of the same energy, on the other hand, have ranges from a few to more than a hundred times longer than protons and can penetrate many layers of scintillators of the same thickness. To exploit this difference, only information on how the energy loss is shared between neighboring scintillators is necessary. The simplest way to detect this sharing is to take two signals, one from the odd layers and the other from the even layers, as illustrated in Fig. \ref{multilayer}. By reading out signals from the odd and even layers, denoted as $E_{\rm odd}$ and $E_{\rm even}$, respectively, one can define the total signal $E_{\rm all}$ and the balance ratio $\rho$ as,
\begin{eqnarray}
\label{eq1}
\begin{aligned}
& E_{\rm all}=E_{\rm even}+E_{\rm odd}, \\
& \rho=(E_{\rm even}-E_{\rm odd})/(E_{\rm even}+E_{\rm odd}).
\end{aligned}
\end{eqnarray}
$\rho$ equals $-1$ or 1 if a secondary particle stops within the layer where the conversion takes place. The $\rho$ values are expected to be around zero if a secondary particle penetrates many layers, since the deposited energy will be shared by both odd and even layers. Thus, neutrons and $\gamma$ rays can be separated by $\rho$ when an appropriate thickness of the plastic scintillator is selected.
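As a minimal illustration of Eq. \ref{eq1}, the following Python sketch computes $E_{\rm all}$ and $\rho$ from a list of per-layer energy deposits; the numerical values in the two example events are hypothetical.
\begin{verbatim}
import numpy as np

def balance_ratio(deposits):
    """Return (E_all, rho) of Eq. (1) from per-layer energy deposits.

    deposits[0] belongs to the first (odd) layer; rho is set to 0
    by convention if no energy was deposited at all.
    """
    deposits = np.asarray(deposits, dtype=float)
    e_odd = deposits[0::2].sum()   # layers 1, 3, 5, ...
    e_even = deposits[1::2].sum()  # layers 2, 4, 6, ...
    e_all = e_even + e_odd
    rho = (e_even - e_odd) / e_all if e_all > 0 else 0.0
    return e_all, rho

# Neutron-like event: the recoil proton stops in a single odd layer.
print(balance_ratio([7.2, 0.0, 0.0, 0.0]))   # -> (7.2, -1.0)
# Gamma-like event: the secondary electron crosses several layers.
print(balance_ratio([2.1, 2.0, 1.9, 1.8]))   # rho close to 0
\end{verbatim}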
\begin{figure}
\centering
\includegraphics[width=7cm,clip]{multilayer.pdf}
\caption{\label{multilayer}
Illustration of the typical ranges of secondary particles induced by neutrons and $\gamma$ rays with the same energy, and the suggested readout method for the multi-layer plastic scintillators.}
\end{figure}
\subsection{Design concept based on Monte Carlo simulations}
\label{2.2}
To investigate the feasibility of the $\it{n}$-$\gamma$ discrimination and to determine the appropriate thickness of the plastic scintillator, we performed Monte Carlo simulations using the Geant4 toolkit version 10.2.p02 \cite{geant4,10.2}. We employed the conventional electromagnetic and elastic hadronic scattering models, coupled with the Li\`{e}ge Intranuclear Cascade model (INCL++) \cite{INCL1,INCL2} for inelastic channels above 20 MeV, and the high-precision neutron model (NeutronHP) for all hadronic processes below 20 MeV. Multiple scattering of neutrons was also considered in all simulations of the present work.
For simplicity as well as for practical reasons, we fixed the total thickness of the plastic scintillators to 80 mm, and assumed the scintillators to be of sufficiently large area in all simulations. The simulations were performed assuming several different layer thicknesses: 1, 5 and 10 mm, which correspond to 80, 16 and 8 layers of scintillators in total, respectively. A pencil beam of generated particles was injected perpendicular to the center of the detector. We assumed incident neutrons and $\gamma$ rays with uniform energy distributions from 20 to 60 MeV, and from 0 to 10 MeV, respectively. The selected 20--60 MeV range is the typical energy range of the recoil neutrons, e.g. in the ($p$,$nd$) reaction \cite{pnd}, while 10 MeV is almost the maximum energy of $\gamma$ rays from nuclear excited states below particle-emission thresholds. Here, we generated an equal number of neutrons and $\gamma$ rays per unit of energy, namely $N^{n}=4\times N^{\gamma}$ in total, where the superscript indicates the type of particle. The energy response of the plastic scintillators is expressed in terms of the electron-equivalent energy (MeV$_{\rm ee}$) using the equations taken from Ref.~\cite{MeVee}, since the light outputs of the scintillator produced by an electron and a proton are different functions of the energy loss.
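The essence of these simulations can be reproduced with a deliberately crude toy model, sketched below in Python. It is not a substitute for the Geant4 calculations: the range parameterisations are illustrative stand-ins for the CSDA curves of Fig. \ref{range} (not fits), a flat recoil-proton spectrum stands in for ideal $n$-$p$ scattering, a flat electron spectrum crudely stands in for the Compton spectrum, conversion depths are taken uniform, and quenching and the Bragg peak are ignored.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T_LAYER, N_LAYER = 5.0, 16      # mm; 16 x 5 mm = 80 mm total depth

# Illustrative range parameterisations in plastic (mm); crude
# stand-ins for the CSDA range curves, not fitted values.
def proton_range(E):   return 0.02 * E**1.8   # Bragg-Kleeman-like
def electron_range(E): return 4.0 * E         # roughly linear above ~1 MeV

def event_rho(E_sec, R, z0):
    """Deposit E_sec uniformly along [z0, z0 + R] (clipped to the
    stack) and return (E_all, rho)."""
    total = T_LAYER * N_LAYER
    z1 = min(z0 + R, total)
    deposits = np.zeros(N_LAYER)
    if R <= 0.0:
        deposits[min(int(z0 // T_LAYER), N_LAYER - 1)] = E_sec
    else:
        for k in range(N_LAYER):
            lo, hi = k * T_LAYER, (k + 1) * T_LAYER
            deposits[k] = E_sec / R * max(0.0, min(z1, hi) - max(z0, lo))
    e_odd, e_even = deposits[0::2].sum(), deposits[1::2].sum()
    e_all = e_odd + e_even
    return e_all, ((e_even - e_odd) / e_all if e_all > 0 else 0.0)

def run(kind, E_in, n=20000):
    """Flat secondary-particle spectrum from 0 to E_in, uniform
    conversion depth; kind is 'n' (neutron) or 'g' (gamma)."""
    res = []
    for _ in range(n):
        E = rng.uniform(0.0, E_in)
        R = proton_range(E) if kind == "n" else electron_range(E)
        res.append(event_rho(E, R, rng.uniform(0.0, T_LAYER * N_LAYER)))
    return np.array(res)

n_ev, g_ev = run("n", 40.0), run("g", 10.0)
for name, ev in [("n", n_ev), ("g", g_ev)]:
    sel = ev[:, 0] > 5.0        # 5 MeV(ee)-like analysis threshold
    print(name, np.mean(np.abs(ev[sel, 1]) > 0.9))
\end{verbatim}
Even this toy model reproduces the qualitative picture: above the threshold, most neutron-induced events concentrate at $|\rho|$ close to 1, while the longer electron tracks spread the $\gamma$-ray events around $\rho \approx 0$.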
In the following subsections, we investigate the characteristics of the $\it{n}$-$\gamma$ discrimination qualitatively and quantitatively, and choose the appropriate layer thickness for the prototype S$^{4}$ detector for our experimental purpose.
\subsubsection{$\rho$ distributions and principle for layer-thickness determination}
\label{2.2.1}
To demonstrate the principle of the $\it{n}$-$\gamma$ discrimination, we show the detector responses to neutrons and $\gamma$ rays in Fig. \ref{SRNG}, i.e. the scatter plots of the total pulse height $E_{\rm all}$ from the scintillators in electron-equivalent energy versus the balance ratio $\rho$, defined by Eq. \ref{eq1}, for all detected events obtained with different thicknesses. The $\it{n}$-$\gamma$ discrimination by $\rho$ can be examined by selecting the same region of energy deposit from 5 to 10 MeV$_{\rm ee}$, as shown by the black histograms in Fig. \ref{RNG}. Clear differences in the $\rho$ distributions for neutrons and $\gamma$ rays are observed. The neutrons tend to concentrate around $\rho$=$-1$ or 1, whereas most of the $\gamma$ rays fall in the region around $\rho$=0. These trends demonstrate the practical use of $\rho$ to discriminate neutrons and $\gamma$ rays.
The simplest method of $\it{n}$-$\gamma$ discrimination is to define the neutron and $\gamma$-ray events as follows: an event is identified as a neutron candidate if $\mid$$\rho$$\mid$$>$0.9, otherwise it is regarded as a $\gamma$-ray candidate. This discrimination method, however, has certain probabilities of mis-identification. Depending on the reaction points and layer thicknesses, some neutrons may penetrate more than one layer, resulting in their $\rho$ values being distributed between $-1$ and 1, and thus are mis-identified as $\gamma$ rays. Some low-energy $\gamma$ rays, on the other hand, may appear at around $\rho$=$-1$ or 1, and are incorrectly identified as neutrons. The mis-identifications of neutron and $\gamma$ ray have opposite thickness dependencies. Although the simulations suggest that a lower-threshold operation and a better identification of $\gamma$ rays can be achieved with thinner scintillators, the mis-identification of neutrons increases at the same time. On the contrary, the mis-identification of $\gamma$ rays increases when the layers become too thick for the secondary electron to enter the next layer, although thicker layers do help to provide better identification of neutrons. Furthermore, the identification efficiencies for neutron and $\gamma$ ray depend strongly on the energy deposit, as can be seen in Fig. \ref{SRNG} and the red-dashed histograms in Fig. \ref{RNG} (a)--(c). Namely, better $\gamma$-ray but worse neutron identifications are observed for all thicknesses, with increased energy deposit.
\begin{figure}
\centering
\includegraphics[width=9cm,clip]{SRNG.pdf}
\caption{\label{SRNG}
The scatter plots for the total pulse height $E_{\rm all}$ in electron-equivalent energy and the balance ratio $\rho$ of all detected events with neutrons of 20 to 60 MeV in (a)--(c) and $\gamma$ rays of 0 to 10 MeV in (d)--(f). The assumed thickness of the layer is shown in each panel.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm,clip]{RNG.pdf}
\caption{\label{RNG}
The $\rho$ distributions for events with $E_{\rm all}$ between 5 and 10 MeV$_{\rm ee}$ (black histograms for neutrons in (a)--(c) and $\gamma$ rays in (d)--(f)) and above 10 MeV$_{\rm ee}$ (red-dashed histograms for neutrons in (a)--(c)), obtained by slicing the corresponding scatter plots in Fig. \ref{SRNG}.}
\end{figure}
The appropriate layer thickness should not only offer a clear separation between neutrons and $\gamma$ rays, but also provide a good compromise between mis-identifications of $\gamma$ rays and neutrons. On one hand, good $\gamma$-ray identification (i.e. suppression) at a low threshold is important to optimize the purity of the identified neutrons. On the other hand, mis-identification of neutrons should be minimized (i.e. neutron survival rate should be maximized) to ensure sufficient efficiency. Thus the thickness should be optimized to achieve minimal mis-identifications of neutrons and $\gamma$ rays at thresholds as low as reasonably possible.
\subsubsection{Efficiency of $n$-$\gamma$ discrimination and determination of layer thickness}
\label{2.2.2}
To evaluate the performances for different layer thicknesses quantitatively, we calculated the efficiencies of $n$-$\gamma$ discrimination. The detected particles in the simulations were identified using the $\rho$ difference defined above at different pulse height thresholds. The identification efficiencies of neutrons $\epsilon_{n}^{\rm ID}$ and $\gamma$ rays $\epsilon_{\gamma}^{\rm ID}$ can be expressed as
\begin{eqnarray}
\label{eq2}
\begin{aligned}
\epsilon_{\it n}^{\rm ID} &=N^{\it n}_{\it n}/(N^{\it n}_{\it n}+N^{\it n}_{\gamma}),\\
\epsilon_{\gamma}^{\rm ID} &=N^{\gamma}_{\gamma}/(N^{\gamma}_{\gamma}+N^{\gamma}_{\it n}),
\end{aligned}
\end{eqnarray}
where $N_{b}^{a}$ is the number of detected particles ``$\it a$'' that are identified as ``$\it b$'' at each threshold. Figure \ref{1D} shows the dependence of $\epsilon_{n}^{\rm ID}$ and $\epsilon_{\gamma}^{\rm ID}$ on the pulse height threshold for the different thicknesses. It can be inferred that the 1-mm thickness is not suitable for our purpose. Although the desired suppression of $\gamma$ rays is achieved at a relatively low threshold ($\epsilon_{\gamma}^{\rm ID}$$>$99$\%$ at the 1-MeV$_{\rm ee}$ threshold), most of the neutrons are mis-identified as $\gamma$ rays, resulting in a very poor $\epsilon_{n}^{\rm ID}$. The 10-mm design seems to offer a better option with a rather high $\epsilon_{n}^{\rm ID}$ at all thresholds, but the $\it{n}$-$\gamma$ discrimination is not satisfactory at low thresholds, namely $\epsilon_{\gamma}^{\rm ID}\le$90$\%$ below the 5-MeV$_{\rm ee}$ threshold. The 5-mm design offers a balanced solution with 99.9$\%$ $\gamma$-ray suppression and 53.5$\%$ neutron survival at the 5-MeV$_{\rm ee}$ threshold.
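A sketch of how these efficiencies can be evaluated from labeled events (for instance, the toy events generated above) is given below; the threshold grid and the $|\rho|>0.9$ rule mirror the text, while the function and variable names are our own.
\begin{verbatim}
import numpy as np

def id_efficiencies(n_ev, g_ev, thresholds, rho_cut=0.9):
    """Identification efficiencies of Eq. (2) for the simple 1D rule
    |rho| > rho_cut -> neutron candidate, else gamma-ray candidate.

    n_ev, g_ev: arrays of (E_all, rho) rows for true neutrons and
    true gamma rays."""
    eff_n, eff_g = [], []
    for thr in thresholds:
        n = n_ev[n_ev[:, 0] > thr]     # detected true neutrons
        g = g_ev[g_ev[:, 0] > thr]     # detected true gamma rays
        eff_n.append(np.mean(np.abs(n[:, 1]) > rho_cut) if len(n) else np.nan)
        eff_g.append(np.mean(np.abs(g[:, 1]) <= rho_cut) if len(g) else np.nan)
    return np.array(eff_n), np.array(eff_g)

# e.g.: eps_n, eps_g = id_efficiencies(n_ev, g_ev, np.arange(1.0, 11.0))
\end{verbatim}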
\begin{figure}
\centering
\includegraphics[width=11.5cm,clip]{1D.pdf}
\caption{\label{1D}
The identification efficiencies of (a) neutrons $\epsilon_{n}^{\rm ID}$ and (b) $\gamma$ rays $\epsilon_{\gamma}^{\rm ID}$ as functions of pulse height threshold with different thicknesses of layers. See text for details.}
\end{figure}
The neutron survival efficiency is relatively low with the above discrimination method. This occurs because the separation by $\rho$ strongly depends on the region of energy deposit, and the mis-identification of neutrons is biased toward high energy deposits, as mentioned in Sec. \ref{2.2.1}. For practical use, it is necessary to determine a two-dimensional discrimination cut in the $E_{\rm all}$-$\rho$ plot. Particle selection can be made by a discrimination curve $f(\rho)$: an event with $E_{\rm all}$$<$$f(\rho)$ is identified as a $\gamma$-ray candidate, whereas one with $E_{\rm all}$$\ge$$f(\rho)$ is identified as a neutron candidate.
For simplicity, we adopted a rectangular cut defined by $\rho$=$\pm$$\rho^{\rm cut}$ and $E_{\rm all}$=$E_{\rm all}^{\rm cut}$ for the two-dimensional discrimination. We investigated $\epsilon_{\gamma}^{\rm ID}$ and $\epsilon_{n}^{\rm ID}$ as functions of incident energy with $\rho^{\rm cut}$=0.9 and $E_{\rm all}^{\rm cut}$=10 MeV$_{\rm ee}$, as shown in Fig. \ref{2D}; a sketch of this cut as an event classifier is given below. The values on the $x$-axis are the mean values per incident-energy step (4-MeV steps for neutrons and 1-MeV steps for $\gamma$ rays, respectively). No pulse height threshold was applied in this analysis. One immediately sees improved $\epsilon^{\rm ID}_n$'s for all thicknesses. $\epsilon^{\rm ID}_n$ improves considerably with thickness, from about 76$\%$ at 1 mm to about 87$\%$ at 5 mm, but increases only marginally after that. Note also that for neutrons with incident energies from 20 to 60 MeV, $\epsilon^{\rm ID}_n$ does not depend strongly on the incident energy, because the proportion of the mis-identified neutrons (enclosed by the rectangular cut), which mainly result from conversions near the interfaces between layers, depends less on the incident energy than on the geometry of the detector. The $\epsilon^{\rm ID}_\gamma$ values, on the other hand, decrease with thickness. Similar to the case of the one-dimensional ($\rho$) discrimination, thinner layers are preferable to achieve better $\gamma$-ray suppression and operation at a lower threshold.
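The rectangular cut can be written as a one-line classifier; how events exactly on the boundary are assigned is our own convention.
\begin{verbatim}
import numpy as np

def is_neutron(e_all, rho, rho_cut=0.9, e_cut=10.0):
    """Rectangular two-dimensional cut of Sec. 2.2.2: events inside
    the box |rho| < rho_cut and E_all < e_cut are gamma-ray
    candidates; everything else is a neutron candidate."""
    e_all, rho = np.asarray(e_all), np.asarray(rho)
    return ~((np.abs(rho) < rho_cut) & (e_all < e_cut))
\end{verbatim}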
Therefore, considering the optimal $\epsilon^{\rm ID}_n$ and $\epsilon^{\rm ID}_\gamma$, we have chosen to use 5-mm-thick plastic scintillators for our prototype S$^{4}$ detector. We note that the selection of $E^{\rm cut}_{\rm all}$ = 10 MeV$_{\rm ee}$, which is the maximal energy deposit attributed to de-excitation $\gamma$ rays, is sufficient in the above example for the following reason. Below 10 MeV$_{\rm ee}$, most of the neutrons concentrate at around $\rho$ = $\pm 1$, and those within $| \rho |$ $<$ 0.9 are distributed uniformly for all layer thicknesses (see Fig.~\ref{SRNG}). Hence, a further reduction in $E^{\rm cut}_{\rm all}$ would not alter the conclusion. In practical use, however, the values of $E_{\rm all}^{\rm cut}$ and $\rho^{\rm cut}$ as well as the shape of the two-dimensional discrimination cut need to be optimized based on the realistic experimental conditions, i.e. the energy distributions and fluxes of neutrons and $\gamma$ rays, as well as the detector response.
From Fig. \ref{2D}(b), it is apparent that $\gamma$ rays of very low energies are difficult to separate from neutrons, which is a limitation of this technique, because a low-energy secondary electron behaves just like a proton, i.e. the electron is stopped in the same layer where it is generated. We note that further suppression of $\gamma$ rays, especially the prompt $\gamma$ rays, may be achieved by additionally considering the TOF measurement. The details of the $n$-$\gamma$ discrimination with the experimental data will be described in Sec. \ref{4.3}.
\begin{figure}
\centering
\includegraphics[width=11.5cm,clip]{2D.pdf}
\caption{\label{2D}
The identification efficiencies of (a) neutrons $\epsilon_{n}^{\rm ID}$ and (b) $\gamma$ rays $\epsilon_{\gamma}^{\rm ID}$ as functions of incident energy with a rectangular discrimination cut of $\rho^{\rm cut}$=0.9 and $E_{\rm all}^{\rm cut}$=10 MeV$_{\rm ee}$. No pulse height threshold is applied here. See text for details.}
\end{figure}
\section{Detector configuration and readout system}
\label{3}
The constructed neutron detector, named Beihang-Osaka university Stack Structure Solid organic Scintillator (BOS$^4$) detector, consists of two sizes of 5-mm-thick BC408-equivalent plastic scintillator plates, stacked together to form a sixteen-layer scintillator with a total thickness of 80 mm. Figure \ref{detector} shows the schematic view of the BOS$^4$ detector. The dimensions of the detector are 320 mm and 160 mm in horizontal and vertical directions, respectively. An odd layer is horizontally segmented into four plastic scintillator plates, each with an active area of 80$\times$160 mm$^{2}$. An even layer is vertically segmented into two plates, each with a 320$\times$80 mm$^{2}$ active area. Each plate is wrapped with a 12-$\mu$m-thick aluminized Mylar along the long sides to reflect light and prevent light leakage into adjacent plate(s). To read out the odd or even layers collectively, the short (80 mm) ends of the odd or even plates are attached to light guides by optical grease, as shown in Fig. \ref{detector}.
The segmentation of the odd and even layers of the BOS$^4$ detector allows position determination; the detector has eight sub-sections defined by four sets of odd (vertical) plates denoted as 1o, 2o, 3o, 4o and two sets of even (horizontal) plates denoted as 1e, 2e. Every sub-section can be used as a self-contained unit for neutron detection and $\it{n}$-$\gamma$ discrimination. Moreover, such segmentation offers a good means to reduce background by utilizing the ``firing'' information of the odd and even layers. Detailed discussions are given in Secs. \ref{4.1} and \ref{4.2}. In the following text, 1o-4o and 1e-2e are defined as the odd and even sub-components, respectively.
All sub-components are read out by six pairs of photomultipliers (PMTs; Hamamatsu H7195/H6410); the odd sub-components are read out by four pairs of PMTs connected vertically, and the even sub-components by two pairs of horizontal PMTs attached at both ends of the long side. The analogue output of every PMT was divided into two parts, one of which was fed into a Fast Encoding and Readout Analogue-to-Digital Converter (FERA; LeCroy 4300B) module and the other one into a constant fraction discriminator (CFD; ORTEC 935) to generate timing signals. One of the CFD outputs was sent to a Time-to-Digital Converter (TDC) system which consisted of Time-to-FERA Converter (TFC; LeCroy 4303) and FERA modules. CFD output signals from both ends of the same sub-component were fed into a Mean Timer (REPIC; RPN-070) module to generate a coincident signal. The timings of the two signals were averaged to minimize the position dependence of the output coincidence timing. The trigger signal of the BOS$^{4}$ detector was made by the logic OR of the coincident signals of the six sub-components.
\begin{figure}
\centering
\includegraphics[width=3.5cm,clip]{lightguide.pdf}
\includegraphics[width=7.3cm,clip]{exploded.pdf}
\includegraphics[width=6.5cm,clip]{section.pdf}
\caption{\label{detector}
Assembly diagrams of (a) the light guide and (b) plastic arrangement. (c) Cross-sectional view of the assembled BOS$^4$ detector.
In the cross-sectional view, the odd and even layers as well as the attached light guides are displayed in white and grey, respectively.}
\end{figure}
\section{Experiment}
\label{4}
The BOS$^4$ detector was tested using cosmic rays and ion beams in a series of experiments at the cyclotron facility of the Research Center for Nuclear Physics (RCNP), Osaka University. Cosmic rays and protons produced by the $^{12}$C($\it{p}$,$\it{pd}$) reaction were used to calibrate the light output. Monoenergetic neutrons generated by the $\it{d}$+$\it{d}\to\it{n}$+$^{3}$He reaction, denoted simply as ($d$,$^3$He) hereafter, were used to measure the neutron detection efficiency as well as to investigate the $\it{n}$-$\gamma$ discrimination. It is worth noting that the $^{12}$C($\it{p}$,$\it{pd}$) and ($d$,$^3$He) experiments were performed at the recently constructed Grand Raiden Forward mode beam line (GRAF) \cite{GRAF}. The use of the GRAF beam line, whose Faraday cup is located more than 20 meters downstream of the scattering chamber, helped to reduce the $\gamma$-ray background significantly.
\subsection{Experimental setup}
\label{4.0}
In the ($d$,$^3$He) experiment, a deuterated polyethylene (CD$_{2}$) target with a thickness of 24.3 mg/cm$^{2}$ was irradiated by a 196-MeV deuteron beam with an intensity of 10 nA. To obtain monoenergetic neutrons, the scattered $^{3}$He particles at 5.5$^{\circ}$ and 13.5$^{\circ}$ were momentum-analyzed by the Grand Raiden (GR) spectrometer \cite{GR}. Recoil neutrons with kinetic energies centered at 16.5 and 31.2 MeV were detected by the BOS$^4$ detector at the corresponding kinematical central angles of 148$^{\circ}$ and 112$^{\circ}$, respectively. The experimental setup around the target is displayed in Fig. \ref{setup}. A membrane made of 400-${\rm \mu}$m-thick stainless steel, which was installed on the scattering chamber for vacuum isolation, worked as a shield to eliminate low-energy electron background from the target. A set of $\Delta E$ and $E$ plastic scintillation detectors, with thicknesses of 3 and 60 mm respectively, was placed between the CD$_{2}$ target and the BOS$^4$ detector, mainly for other experimental purposes which will not be discussed in this article. In the present work, the thin $\Delta E$ scintillator was used as a veto detector to reject charged particles. The total active area of the telescope system is 240$\times$90 mm$^{2}$, with the same angular coverage as the BOS$^4$ as seen from the target. The flight paths from the target to the surfaces of the $\Delta E$, $E$ and BOS$^4$ detectors were 45, 49 and 71 cm, respectively, for both settings. The kinetic energies of the neutrons were determined by the TOF between the target and the BOS$^4$ detector, as described later in Sec. \ref{4.2}.
\begin{figure}
\centering
\includegraphics[width=9cm,clip]{setup.pdf}
\caption{\label{setup}
Schematic view of the experimental setup around the target for the ($d$,$^3$He) measurements at 5.5$^{\circ}$ and 13.5$^{\circ}$.}
\end{figure}
\subsection{Light output calibration}
\label{4.1}
To define the energy threshold and thus determine the neutron detection efficiency, we calibrated the light output in units of MeV$_{\rm ee}$. The light output ($Q^{i{\rm o(e)}}$; $i$=1,2,3,4 (or 1,2)) measured in the $i$-th odd (even) sub-component is constructed by taking the geometric mean of the analogue signals from the PMT outputs at both ends, $Q^{i{\rm U(L)}}$ and $Q^{i{\rm D(R)}}$, i.e.
\begin{equation}
\label{eq3}
Q^{i{\rm o(e)}}=\sqrt{Q^{i{\rm U(L)}}\cdot Q^{i{\rm D(R)}}},
\end{equation}
where U (D) and L (R) represent the upper (lower) signal for the odd and the left (right) signal for the even sub-components, respectively.
The relationship between the deposited energy in MeV$_{\rm ee}$ and the ADC outputs $Q^{i{\rm o(e)}}$ was calibrated for each sub-component. For the BOS$^{4}$ detector, the pulse height thresholds of the odd and even layers would have to be lowered considerably to observe the Compton edges of $\gamma$ rays. Such a low-threshold condition is hard to fulfill for most $\gamma$-ray sources. Therefore, the usual calibration method using standard $\gamma$-ray sources is not practical. In the present work, we used cosmic rays and intermediate-energy protons to study the energy response of the detector.
The cosmic rays were measured with the BOS$^4$ detector laid face up on a table. Using the observed peak of muons, we adjusted the relative gain between the two PMTs at both ends of each sub-component. Next, we used protons with continuous energies up to 140 MeV, produced by the $^{12}$C($\it{p}$,$\it{pd}$) reaction. The $E$ detector in front of the BOS$^4$ was removed during this measurement. When charged particles with continuous energies enter the BOS$^4$ detector, the deposited energies are distributed among all of the penetrated layers, resulting in a zig-zag structure in the $E_{\rm all}$-$\rho$ plot. Figure \ref{proton}(a) shows the $E_{\rm all}$-$\rho$ plot obtained with a Monte Carlo simulation using the Geant4 code, taking into account the realistic geometry of the experimental setup. Each turning point in the plot indicates that a proton of a certain energy stops at the end of a certain layer. Here, the theoretical light output of the maximum stopping energy in each layer was estimated by identifying each turning point. The zig-zag structure in the experimental data was observed for each sub-section by using $Q^{i{\rm o}}$ and $Q^{i{\rm e}}$. Since we had no means to calibrate the light output layer by layer, a linear coefficient $C^{i{\rm o(e)}}$ between the theoretical values (MeV$_{\rm ee}$) and the ADC outputs (channel) was determined for the $i$-th odd (even) sub-component based on the first eight turning points. The turning points of the deeper layers are hard to identify in the experimental data, due mainly to the energy resolution and multiple scattering effects.
After the calibration of each sub-component, the sum of the energy deposits from the constitutive odd (even) sub-components is taken as the total energy deposit in the odd (even) layers, $E_{\rm odd (even)}$. In determining $E_{\rm odd(even)}$, we suppressed background signals by taking advantage of the layer segmentation as described below. We consider the $i$-th odd (even) sub-component to be ``fired'', denoted as $F^{i{\rm o(e)}}$=1, if the recorded timings from both ends are within a reasonable range; otherwise it is considered ``unfired'' and denoted as $F^{i{\rm o(e)}}$=0. At a normal event rate, only one of the odd and/or even sub-components (one of the sub-sections) can be fired, except for some events that occur near the borders of adjacent sub-components. Instead of summing up all sub-components and in the process adding electronic noise from the unfired ones, we determine $E_{\rm odd(even)}$ by adding the outputs of the constitutive odd (even) sub-components depending on their firing conditions, namely,
\begin{eqnarray}
\label{eq4}
\begin{aligned}
E_{\rm odd}&=\sum_{i=1}^{4}C^{i{\rm o}} Q^{i{\rm o}} F^{i{\rm o}},\\
E_{\rm even}&=\sum_{i=1}^{2}C^{i{\rm e}} Q^{i{\rm e}} F^{i{\rm e}}.
\end{aligned}
\end{eqnarray}
In the case where only the odd (even) sub-components are fired ($E_{\rm even(odd)}$=0), the light output of the even (odd) sub-component with the highest pulse height is taken as $E_{\rm even(odd)}$. The definitions of $E_{\rm all}$ and $\rho$ are given by Eq. \ref{eq1}. Unless otherwise stated, the $E_{\rm all}$-$\rho$ plots shown throughout this article refer to the sum of all fired sub-component(s).
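A sketch of Eqs. \ref{eq3} and \ref{eq4} as they might appear in an offline analysis is shown below; the special rule for events in which only odd (or only even) sub-components fire is omitted for brevity, and all names are our own.
\begin{verbatim}
import numpy as np

def sub_component_light(q_a, q_b):
    """Geometric-mean light output of one sub-component, Eq. (3);
    q_a, q_b are the ADC outputs of the PMTs at the two ends."""
    return np.sqrt(q_a * q_b)

def stack_outputs(q_odd, q_even, c_odd, c_even, f_odd, f_even):
    """E_odd, E_even, E_all and rho from Eqs. (1) and (4).

    q_odd / q_even : (Q_up, Q_down) / (Q_left, Q_right) pairs
    c_odd / c_even : calibration constants (MeV_ee per channel)
    f_odd / f_even : firing flags F (0 or 1) per sub-component
    """
    e_odd = sum(c * sub_component_light(*q) * f
                for c, q, f in zip(c_odd, q_odd, f_odd))
    e_even = sum(c * sub_component_light(*q) * f
                 for c, q, f in zip(c_even, q_even, f_even))
    e_all = e_odd + e_even
    rho = (e_even - e_odd) / e_all if e_all > 0 else 0.0
    return e_odd, e_even, e_all, rho
\end{verbatim}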
The zig-zag structure of the $E_{\rm all}$-$\rho$ plot for the measurement of protons is shown in Fig. \ref{proton}(b). Minor differences are observed for some turning points that may be attributed to the difference of light collection between layers, due probably to the contact condition with the light guides. Such differences are considered and discussed when determining the energy resolution in the following section.
\begin{figure}
\centering
\includegraphics[width=11.5cm,clip]{proton.pdf}
\caption{\label{proton}
The zig-zag structure of the $E_{\rm all}$-$\rho$ plots for protons with continuous energy in (a) simulation (sim.) and (b) experimental data (exp.) after calibration. The simulation result is the original output without any adjustment.}
\end{figure}
\subsection{Energy, timing and position resolutions}
\label{4.2}
The intrinsic energy resolution $\sigma(L)$ of a scintillator attributed to photon statistics can be simply described as follows \cite{Leo},
\begin{equation}
\label{eq5}
\frac{\sigma(L)}{L}=\frac{\alpha}{\sqrt{L}},
\end{equation}
where $L$ is the light output in MeV$_{\rm ee}$, and $\alpha$ is a proportionality factor. As mentioned in the last section, there exist light-collection differences between layers, possibly due to contact problems. Such fluctuations of the detected photon number also contribute to the spread of the total light output $E_{\rm all}$. Therefore, we considered both contributions in the simulations to fully understand the energy resolution of the experimental data. The proportionality factor $\alpha$ of the intrinsic resolution was determined to be 0.33 by fitting the width of the output from the first and second layers in Fig. \ref{proton}(b). We assume that $\alpha$ is a common factor for all layers. Next, we introduced layer-by-layer photon number fluctuations into the original outputs of the simulation. The fluctuation of each of the shallower eight layers was determined by comparing the simulated and experimental turning points in Fig. \ref{proton}(a) and (b), respectively. The fluctuations of the deeper layers are assumed to be the same as those of the shallower ones. After this adjustment of the simulation, the thickness of the zig-zag line is well reproduced, as shown in Fig. \ref{resolution}(a). The total light output response of all layers was simulated for the cosmic rays assuming only muons, and compared with the experimental data, as displayed in Fig. \ref{resolution}(b). Here, we added an exponential background to the simulated response function of cosmic rays. The peak position and structure of the cosmic rays with a Landau tail are well reproduced, which confirms the validity of the energy calibration.
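In such a simulation, the photon-statistics term of Eq. \ref{eq5} amounts to a Gaussian smearing of each light output, for instance as in the minimal sketch below; the per-layer light-collection fluctuations discussed above are not included.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.33   # proportionality factor fitted to the data in the text

def smear(light, alpha=ALPHA):
    """Photon-statistics resolution of Eq. (5), sigma(L) = alpha*sqrt(L),
    applied as a Gaussian smearing of a light output L in MeV_ee."""
    light = np.asarray(light, dtype=float)
    return rng.normal(light, alpha * np.sqrt(light))
\end{verbatim}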
\begin{figure}
\centering
\includegraphics[width=11.5cm,clip]{resolution.pdf}
\caption{\label{resolution}
(a) The simulated (sim.) $E_{\rm all}$-$\rho$ plot for protons and (b) the experimental (exp.) and simulated (sim.) light outputs of cosmic rays. The energy resolution and light collection fluctuations of layers are considered in the simulations. An additional exponential background (b.g.) is added to the simulated response function of cosmic rays. See text for details.}
\end{figure}
The time steps of the TDC outputs were calibrated by the radio-frequency (RF) period of the cyclotron in the ($d$,$^3$He) experiment. The timing signals from two PMTs attached to both ends of the $i$-th odd(even) sub-component are denoted as $t^{i{\rm U(L)}}$ and $t^{i{\rm D(R)}}$. For simplicity, we omit the index $i$ hereafter. The width of the time difference between $t^{\rm U(L)}$ and $t^{\rm D(R)}$ is about 300 psec in $\sigma$. It was estimated by restricting the spatial distribution of the neutrons (to minimize contribution from the position dependence) using two-body kinematics with the position spectrum of $^{3}$He. The width reflects the intrinsic time resolution of the detector associated with the PMTs and electronics.
The TOF information from the reaction target to each odd(even) sub-component $t^{\rm o(e)}$ is determined by the average of $t^{\rm U(L)}$ and $t^{\rm D(R)}$ as well as the RF signal $t_{\rm RF}$, i.e.
\begin{equation}
\label{eq6}
t^{\rm o(e)}=\frac{t^{\rm U(L)}+t^{\rm D(R)}}{2}-t_{\rm RF}+t_{0}^{\rm o(e)},
\end{equation}
where $t_{0}^{\rm o(e)}$ is an offset, which can be calibrated with the prompt $\gamma$ rays for each sub-component. Slewing corrections using the pulse height information of the prompt $\gamma$ rays were applied in this analysis. The typical TOF resolution for prompt $\gamma$ rays is 350 psec in $\sigma$, which includes the intrinsic time resolution, the time fluctuation of the RF signal and the effect of the finite thickness of the detector; all of these components are consistent with the measured width. The TOF width corresponds to an energy resolution of about 8$\%$ for a neutron kinetic energy of 40 MeV.
To deduce the angular distribution of neutrons, information of the hit position on the detector is necessary. The position on each sub-component $x^{\rm o(e)}$ is given by the difference between $t^{\rm U(L)}$ and $t^{\rm D(R)}$, as expressed below,
\begin{equation}
\label{eq7}
x^{\rm o(e)}=c_{\rm eff}\frac{t^{\rm U(L)}-t^{\rm D(R)}}{2}+x_{0}^{\rm o(e)},
\end{equation}
where $c_{\rm eff}$ is the effective propagation speed of light in the detector, and $x_{0}^{\rm o(e)}$ is a calibration constant for each sub-component. $c_{\rm eff}$ is about 16 cm/ns, as deduced from the distribution of the time difference $t^{\rm U(L)}-t^{\rm D(R)}$ (about 2 nsec over the 32-cm distance). The position resolution derived from the time difference width of 300 psec is around 2.4 cm in $\sigma$. Firing of both odd and even sub-components allows a two-dimensional position determination. In cases where only the odd (even) sub-component is fired, the position in the orthogonal direction is determined by the segmentation.
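Equations \ref{eq6} and \ref{eq7} translate directly into a small reconstruction routine; the sketch below assumes all times are already in ns and uses our own names.
\begin{verbatim}
C_EFF = 16.0   # effective light propagation speed in cm/ns (from the text)

def tof_and_position(t_a, t_b, t_rf, t0=0.0, x0=0.0):
    """TOF (Eq. 6) and hit position (Eq. 7) of one sub-component from
    the PMT times t_a, t_b at the two ends and the RF time t_rf;
    t0 and x0 are the calibration offsets."""
    tof = 0.5 * (t_a + t_b) - t_rf + t0
    pos = 0.5 * C_EFF * (t_a - t_b) + x0
    return tof, pos
\end{verbatim}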
\subsection{Neutron detection efficiency determined by ($\it{d}$,$^{3}$He) measurement}
\label{4.3}
To measure the neutron detection efficiency, we irradiated the BOS$^4$ detector with monoenergetic neutron beams centered at two different energies, produced by the $\it{d}$+$\it{d}\to\it{n}$+$^{3}$He reaction. The measurements covered the two kinetic energy regions from 15.4 to 17.6 MeV and from 28.6 to 33.8 MeV, at the 5.5$^\circ$ and 13.5$^\circ$ settings of the ($\it{d}$,$^{3}$He) experiment, respectively. The detection efficiency was determined as the ratio of the number of detected and identified neutrons in coincidence with $^3$He to the incoming neutron flux, given by the number of $^{3}$He events. The $^{3}$He particles were detected and identified using the GR spectrometer. By employing the $\rho$ and TOF information simultaneously, mis-identified $\gamma$ rays were estimated and subtracted, and the number of neutrons in coincidence with the $^3$He events was deduced. In the following paragraphs, we describe two deduction methods, used to check the consistency and to estimate the systematic error of the neutron yield.
In the first method, we identified the neutrons using the TOF and estimated the number of mis-identified $\gamma$ rays using $\rho$. Thanks to the low background environment, the monoenergetic neutrons from the $d$($d$,$^3$He) reaction and the prompt $\gamma$ rays from the $^{12}$C($\it{d}$,$^{3}$He) reaction, denoted by ``$n$-mono'' and ``$\gamma$-prom'', respectively, can be easily identified in the $^3$He-gated TOF spectrum for the 13.5$^\circ$ setting, as shown in Fig. \ref{discrimination}(a). For convenience, we define these neutron and $\gamma$-ray events as TOF$_{n{\mhyphen}{\rm mono}}$ and TOF$_{\gamma{\mhyphen}{\rm prom}}$ events, respectively. The $^3$He-gated TOF$_{n{\mhyphen}{\rm mono}}$- and $^3$He-ungated TOF$_{\gamma{\mhyphen}{\rm prom}}$-selected $E_{\rm all}$-$\rho$ plots are shown in Fig. \ref{discrimination}(b) and (c), respectively. The reason for showing the $^3$He-ungated plot in Fig. \ref{discrimination}(c) will become clear later. As expected, the difference in $\rho$ distribution between neutrons and $\gamma$ rays is clearly observed. The broadenings at $\rho$ around $-1$ and 1 are due to pedestal fluctuation. We should note the possible presence of $\gamma$ rays, which are expected to distribute especially around $\rho$ = 0 at $E_{\rm all}$ below 10 MeV$_{\rm ee}$, in Fig. \ref{discrimination}(b). In this analysis, we defined a two-dimensional discrimination cut for the neutrons and $\gamma$ rays, as shown by the red-dashed parabolic curves in Fig. \ref{discrimination}(b) and (c), with roots $\rho^{\rm cut}$ = $\pm 0.6$ and a vertex point at ($\rho$ = 0, $E^{\rm cut}_{\rm all}$ = 10 MeV$_{\rm ee}$). The events enclosed by the red-dashed lines, denoted by the ``$\gamma$-like'' region, are taken as the $\gamma$-ray-like events, whereas those outside the ``$\gamma$-like'' region as the neutron-like events, denoted by ``$n$-like''. We chose the values of $\rho^{\rm cut}$ and $E_{\rm all}^{\rm cut}$ to cover the $\gamma$ rays in the middle as much as possible, while simultaneously reducing the loss of neutrons at the side regions. The non-hatched histogram in Fig. \ref{discrimination}(d) is the $\rho$ spectrum of the TOF$_{n{\mhyphen}{\rm mono}}$-selected events, which was obtained by the projection of Fig. \ref{discrimination}(b). Assuming that the $\gamma$-like region consists of predominantly $\gamma$-ray events, the number of neutrons $N^n$ was determined by integrating the $\rho$ spectrum of the TOF$_{n{\mhyphen}{\rm mono}}$-selected events after subtracting the accidental $\gamma$ rays as follows:
\begin{equation}
\label{eq8}
\begin{split}
N^n=&\int_{\rho}[N_{\rho}(^{3}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\rm TOF}_{n{\mhyphen}{\rm mono}})\\
&-N_{\rho}(^{3}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})\times
\frac{N(\gamma{\mhyphen}{\rm like}\hspace{-0.7 mm}\cap^{3}\hspace{-0.7 mm}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\rm TOF}_{n{\mhyphen}{\rm mono}})}{N(\gamma{\mhyphen}{\rm like}\hspace{-0.7 mm}\cap^{3}\hspace{-0.7 mm}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}]d\rho,
\end{split}
\end{equation}
where the terms $N$ with and without the subscript ``$\rho$'' represent the $\rho$ distributions and $\rho$-integrated events obtained with the conditions in the brackets, respectively. The accidental $\gamma$ rays are given by the second integral on the right hand side of Eq. \ref{eq8}. Since the $E_{\rm all}$-$\rho$ distributions for all $^3$He-gated or $^3$He-ungated $\gamma$ rays, including the prompt $\gamma$ rays, are expected to be almost similar, i.e. $\frac{N(\gamma{\mhyphen}{\rm like}\cap^{3}{\rm He}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}{N(\gamma{\mhyphen}{\rm like}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}$$\cong$$\frac{N_{\rho}(^{3}{\rm He}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}{N_{\rho}({\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}$, we have replaced $\frac{N_{\rho}(^{3}{\rm He}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}{N(\gamma{\mhyphen}{\rm like}\cap^{3}{\rm He}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}$ in Eq. \ref{eq8} by $\frac{N_{\rho}({\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}{N(\gamma{\mhyphen}{\rm like}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}$ to reduce statistical uncertainty. The hatched histogram in Fig. \ref{discrimination}(d) shows the $\rho$ spectrum of the estimated accidental $\gamma$ rays, which was obtained by multiplying the projected spectrum of Fig. \ref{discrimination}(c) by the normalization factor $\frac{N(\gamma{\mhyphen}{\rm like}\cap^{3}{\rm He}\cap{\rm TOF}_{n{\mhyphen}{\rm mono}})}{N(\gamma{\mhyphen}{\rm like}\cap{\rm TOF}_{\gamma{\mhyphen}{\rm prom}})}$.
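Schematically, the subtraction of Eq. \ref{eq8} is a scaled histogram subtraction, as in the Python sketch below; the binning and the definition of the $\gamma$-like region are placeholders, and the inputs are assumed to have been selected by the TOF gates described above.
\begin{verbatim}
import numpy as np

def net_neutrons(rho_n_mono, rho_gamma_prom,
                 n_gamma_like_mono, n_gamma_like_prom,
                 bins=np.linspace(-1.0, 1.0, 81)):
    """Accidental-gamma subtraction of Eq. (8).

    rho_n_mono        : rho values of 3He-gated TOF_n-mono events
    rho_gamma_prom    : rho values of the TOF_gamma-prom template
    n_gamma_like_mono : counts in the gamma-like region of the
                        3He-gated TOF_n-mono sample
    n_gamma_like_prom : counts in the gamma-like region of the template
    """
    h_n, _ = np.histogram(rho_n_mono, bins=bins)
    h_g, _ = np.histogram(rho_gamma_prom, bins=bins)
    scale = n_gamma_like_mono / n_gamma_like_prom
    return float((h_n - scale * h_g).sum())
\end{verbatim}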
\begin{figure}
\centering
\includegraphics[width=11cm,clip]{discrimination.pdf}
\caption{\label{discrimination}
(a) Typical TOF spectrum with $^{3}$He coincidence at 13.5$^{\circ}$ setting in the ($\it{d}$,$^{3}$He) experiment. Monoenergetic neutrons ($n$-mono) and prompt $\gamma$ rays ($\gamma$-prom) are indicated by the blue and red regions, respectively. $E_{\rm all}$-$\rho$ plots of (b) neutrons and (c) $\gamma$ rays, obtained by selecting the corresponding regions, denoted by TOF$_{n{\mhyphen}{\rm mono}}$ and TOF$_{\gamma{\mhyphen}{\rm prom}}$, in the TOF spectra. For practical reason, we show the $^3$He-gated and $^3$He-ungated plots in (b) and (c), respectively. The red-dashed lines in (b) and (c) describe the two-dimensional discrimination cut of neutrons and $\gamma$ rays. The ``$n$-like'' region represents all events outside the ``$\gamma$-like'' region. (d) The $\rho$ distributions of neutrons (blue histogram) and $\gamma$ rays (red-dashed histogram with hatched area), obtained by the projections of the plot in (b) and the normalized plot in (c), respectively. See text for details.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=11.5cm,clip]{TOFsub.pdf}
\caption{\label{TOFsub}
(a) $E_{\rm all}$-$\rho$ plot of the detected neutral-particle events in coincidence with $^3$He for the 13.5$^{\circ}$ setting, with the same two-dimensional discrimination cut as Fig. \ref{discrimination}, shown by the red-dashed line. (b) The corresponding TOF spectra of a sub-component by selecting the two regions. See text for details.}
\end{figure}
The number of neutrons $N^n$ can also be determined using procedures opposite to that in the first method, namely neutrons are selected using the $E_{\rm all}$-$\rho$ information and integrated after subtracting the mis-identified $\gamma$ rays estimated from the TOF spectrum. Figure \ref{TOFsub} (a) shows the $E_{\rm all}$-$\rho$ plot of the detected neutral-particle events in coincidence with $^3$He for the 13.5$^{\circ}$ setting of the ($\it{d}$,$^{3}$He) reaction. Here the same two-dimensional discrimination cut in the first method, shown by the red-dashed parabolic curve, was used. As discussed in Sec. \ref{2.2.2}, the $\it n$-$\gamma$ discrimination by $\rho$ has a non-zero probability of mis-identification of $\gamma$ rays below 5 MeV$_{\rm ee}$, due to the possible mixture of neutrons and $\gamma$ rays. The mis-identified $\gamma$ rays in the $n$-like region of the $E_{\rm all}$-$\rho$ plot can be estimated and subtracted by comparing the TOF spectra of the $n$-like and $\gamma$-like regions as follows:
\begin{equation}
\label{eq9}
\begin{split}
N^n=&\sum_{{\rm sub}{\mhyphen}{\rm component}}\int_{\rm TOF}[N_{\rm TOF}(^{3}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{n{\mhyphen}{\rm like}})\\
&-N_{\rm TOF}(^{3}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\gamma{\mhyphen}{\rm like}})\times
\frac{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\hspace{-0.7 mm}\cap^{3}\hspace{-0.7 mm}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{n}{\mhyphen}{\rm like})}{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\hspace{-0.7 mm}\cap^{3}\hspace{-0.7 mm}{\rm He}\hspace{-0.7 mm}\cap\hspace{-0.7 mm}{\gamma}{\mhyphen}{\rm like})}]d({\rm TOF}),
\end{split}
\end{equation}
where the terms $N$ with and without the subscript ``TOF'' represent the TOF distributions and TOF-integrated events obtained with the conditions in the brackets, respectively. Since the TOF$_{\gamma{\mhyphen}{\rm prom}}$ events consist mostly of $\gamma$ rays, and the TOF$_{\gamma{\mhyphen}{\rm prom}}$ events found in the $n$-like region are due to the ``leaked'' $\gamma$ rays, we expect the ratios of the leaked $\gamma$ rays to those in the $\gamma$-like region to be similar for the $^3$He-gated and $^3$He-ungated measurements, i.e. $\frac{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\cap^{3}{\rm He}\cap{n}{\mhyphen}{\rm like})}{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\cap^{3}{\rm He}\cap{\gamma}{\mhyphen}{\rm like})}$$=$$\frac{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\cap{n}{\mhyphen}{\rm like})}{N({\rm TOF}_{\gamma{\mhyphen}{\rm prom}}\cap{\gamma}{\mhyphen}{\rm like})}$. Hence, to reduce the statistical uncertainty attributed to the normalization factor in Eq. \ref{eq9}, we determined and adopted the ratio for the $^3$He-ungated measurements. Figure \ref{TOFsub} (b) shows a typical TOF spectrum for the $n$-like region (black histogram) and a normalized TOF spectrum for the $\gamma$-like region (red-dashed histogram with hatched area) of a sub-component with $^{3}$He coincidence. We subtracted the normalized TOF spectrum of the $\gamma$-like region from the TOF spectrum of the $n$-like region with $^{3}$He coincidence, and then integrated the number of net neutrons. The same procedure was applied to the TOF spectrum of each sub-component. The net neutrons were later summed up taking into consideration the event multiplicities, i.e. the number of sub-components that were fired by one event, to avoid double counting.
We deduced the neutron detection efficiencies for different energy thresholds. The values obtained with the two methods are consistent with each other within the statistical uncertainty. The difference between the two, which is much smaller than the statistical error of the neutron events, is taken as the systematic error. The deduction procedures described above were performed for both angular settings. Table \ref{tab1} shows the experimental efficiencies of the BOS$^{4}$ detector at two energies for different thresholds, with each value followed by the statistical and systematic errors, respectively.
\begin{table}
\centering
\caption{\label{tab1} The measured detection efficiencies at two neutron energies for different thresholds from 5 to 10 MeV$_{\rm ee}$. The statistical and systematic errors of the efficiencies are also listed. For simplicity, we show the superscripts that describe the errors only for one data point.}
\begin{tabular}{ c | c c }
\toprule
T$_{n}$ [MeV] &Threshold [MeV$_{\rm ee}$] &Detection efficiency [$\%$]\\
\hline
\multirow{6}{*}{15.4$\sim$17.6}
&5.0 &3.82$\pm$0.20$^{\rm (stat)}$$\pm$0.04$^{\rm (syst)}$\\
&6.0 &3.13$\pm$0.18$\pm$0.03 \\
&7.0 &2.32$\pm$0.16$\pm$0.02 \\
&8.0 &1.68$\pm$0.16$\pm$0.02 \\
&9.0 &1.19$\pm$0.11$\pm$0.02 \\
&10.0 &0.70$\pm$0.08$\pm$0.01 \\
\hline
\multirow{6}{*}{28.6$\sim$33.8}
&5.0 &6.71$\pm$0.24$\pm$0.02\\
&6.0 &5.97$\pm$0.22$\pm$0.02 \\
&7.0 &5.37$\pm$0.21$\pm$0.01 \\
&8.0 &4.74$\pm$0.20$\pm$0.01\\
&9.0 &4.25$\pm$0.19$\pm$0.01\\
&10.0 &3.86$\pm$0.17$\pm$0.01\\
\bottomrule
\end{tabular}
\end{table}
The experimental efficiencies are compared with simulations employing the INCL++ model \cite{INCL1,INCL2}. For a direct comparison, the same conditions for the neutron selection and the subtraction methods were applied in the analysis of the simulation results, to include the possible loss of neutrons during the discrimination process. To examine any possible model dependence, we also performed simulations employing the Bertini intranuclear cascade model \cite{BERT}. All materials between the target and the BOS$^4$ detector were included in the simulations. The simulation results using the two models and the experimental data are shown in Fig. \ref{efficiency}. Here, we have taken the quadratic sum of the statistical and systematic errors as the total experimental error. The average kinetic energies at the two angular settings are calculated to be 16.5$\pm1.1$ MeV and 31.2$\pm2.6$ MeV, respectively. Good agreement is observed between simulations and experiment at 16.5$\pm1.1$ MeV. In the energy region beyond 30 MeV, systematic discrepancies are seen between the two models. Both models agree with the experimental data at 31.2$\pm2.6$ MeV within one to three standard deviations. Due to the small difference between the two models at this energy and the sizable experimental error bars, it is difficult to determine the more appropriate model with the present data. A dedicated measurement of the neutron detection efficiency will be needed to determine absolute cross sections involving higher-energy neutrons in the future.
\begin{figure}
\centering
\includegraphics[width=6cm,clip]{efficiency.pdf}
\caption{\label{efficiency}
Comparison between the measured efficiencies and simulation results for different thresholds.
The solid curves were calculated using the INCL++ model coupled with NeutronHP model,
while the dashed ones were by the Bertini model coupled with NeutronHP model. }
\end{figure}
\section{Summary}
\label{5}
A multi-layer plastic scintillation detector, the S$^4$ detector, is proposed for the detection of neutrons with kinetic energies up to 100 MeV. We constructed a sixteen-layer prototype detector with a 5-mm layer thickness, named BOS$^4$, which has an active volume of 320$\times$160$\times$80 mm$^3$. The BOS$^4$ detector was tested in a series of experiments using cosmic rays, protons, $\gamma$ rays and neutrons. Both simulations and experimental data indicate that a good $\it{n}$-$\gamma$ discrimination is achieved above 5 MeV$_{\rm ee}$ by means of the range difference of the secondary particles induced by neutrons and $\gamma$ rays in the plastic scintillator. By employing the range discrimination, accidental (time-uncorrelated) $\gamma$ rays can be efficiently suppressed, thus enabling a lower detection threshold. This discrimination technique has an advantage over existing methods for plastic scintillators, which usually employ the conventional TOF method and suffer from accidental $\gamma$ rays. With its high-rate capability, the S$^4$ detector is well suited to a high-background environment, and can make an ideal neutron detector especially when neutron events are overwhelmed by $\gamma$-ray background. The detection efficiency of the detector was determined at two neutron energies using the $\it{d}$+$\it{d}\to\it{n}$+$^{3}$He reaction, and good agreement was obtained between the experimental data and Monte Carlo simulations using the commonly-used models in the Geant4 code.
\section{Future prospect}
\label{6}
As shown in Fig. \ref{SRNG}, there are always some neutrons distributed randomly between $\rho$ values of $-1$ and 1 and thus mis-identified as $\gamma$ rays, even at a very low threshold. One possible reason is that some of the conversions occur near the interface between layers, so that low-energy secondary particles from neutrons can easily reach the next layer. In order to reduce such mis-identification of neutrons, especially for low-statistics experiments, we suggest using three separate readouts rather than only two (odd and even), namely connecting every third layer to one readout. Instead of the balance ratio $\rho$, a Dalitz plot can then be defined by the three readouts. Considering such boundary-crossing events, protons or carbon ions from neutrons below 100 MeV are expected to stop within two layers after conversion, whereas electrons from $\gamma$ rays of the same energy most likely penetrate three layers or more. By differentiating the number of penetrated layers up to three, the three-separate-readout method is expected to provide further versatility for the identification of neutrons.
\section*{Acknowledgement}
We thank the RCNP Ring Cyclotron staff for delivering the proton and deuteron beams stably throughout the experiments. H.J.Ong and D.T.Tran appreciate the support of the Hirose and Nishimura International Scholarship Foundations, respectively. I.Tanihata acknowledges the support of the PR China government and Beihang University under the Thousand Talent Program. This work was supported in part by Grant-in-Aid for Scientific Research No. 23224008 from Monbukagakusho, Japan.
\section*{References}
\section{Introduction}\label{Sec_01}
The one--sample log--rank test is the method of choice for single--arm Phase II trials with a time--to--event endpoint. It allows the survival of the patients to be compared to a prespecified reference survival curve that typically represents the expected survival under standard of care. First proposed by \cite{Breslow:1975}, its practical implementation, including sample size calculation, has been described by \cite{Finkelstein:2003}.
The one--sample log--rank test has been criticized in several respects. First, it has been reported repeatedly in the literature that the classical one--sample log--rank statistic tends to be conservative (see \cite{Wu:2014, Schmidt:2015}).
One reason for the test's inaccuracy is the dependence between the estimators of the mean and the variance of the original one--sample log--rank statistic when the sample size is small.
Several attempts have been made in the literature to correct for this (see \cite{Wu:2014, Schmidt:2015, Sun:2011, Wu:2015, Wu:2016, Kerschke:2020}).
Amongst those, the proposal made by \cite{Wu:2015} is presently implemented in the commercial software PASS \cite{PASS:2018} for sample size calculation for the one--sample log--rank test.
Another, more conceptual point of criticism against the one--sample log--rank test relates to the process of selecting the reference survival curve. It is common practice to choose the reference survival curve in the light of historic data on the standard treatment. This implies that the choice of the reference survival curve is itself prone to statistical error, which, however, is ignored in the classical one--sample log--rank statistic. As pointed out in \cite{Korn:2006}, this is a general problem in clinical trials with historical controls. Accordingly, common one--sample log--rank tests rather assume that the reference survival curve is a priori known and deterministic, as in \cite{Finkelstein:2003, Wu:2014, Schmidt:2015, Sun:2011, Wu:2015, Wu:2016, Kerschke:2020}. This ignores the fact that the reference curve resulted from an estimation process, and complicates the interpretation of the test results. Moreover, historic data often suffer from not reflecting recent advances in diagnostics and/or concomitant therapy for standard of care.
To overcome these interpretative limitations, we propose a new one--sample log--rank test that explicitly accounts for the statistical error made in the process of estimating and fixing the reference survival curve. In principle, the new test applies to both historic and prospective comparisons of a new treatment to a standard in the framework of Phase II survival trials. In the latter case, the new test may also be interpreted as a two--sample test for survival distributions.
The paper is organized as follows. After settling the notation and the testing problem, we describe the test statistic and its distributional properties. Additionally, we provide sample size calculation methods. Calculation of rejection regions and sample size is based on the approximate distribution of the new test statistic in the large sample limit. Therefore, small sample properties of the new test regarding type I and type II error rate control are studied by simulation, and compared to the classical one--sample log--rank test / two--sample log--rank test. These simulations and a case study shed light on the inflation of the type I error rate that results from ignoring the sampling variability of the reference curve in the planning phase of a new single-arm trial. We conclude with a discussion of future research. Mathematical proofs are deferred to Appendix A.
\section{General Aspects}\label{Sec_02}
\subsection{Notation}\label{Sec_02_01}
We consider a survival trial with survival data from two treatment groups A (control intervention, prospectively collected or historic data) and B (experimental intervention, prospectively collected data). Let ${\cal{N}}_x$ denote the set of patients from group $x=A,B$, $n_x\coloneqq|{\cal{N}}_x|$ the number of such patients, and $n\coloneqq n_A+n_B$ the total number of patients. In particular, we denote by $\pi=n_B/n_A$ the treatment group allocation ratio. We denote by $T_{x,i}$ or $C_{x,i}$ the time from entry to event or censoring for patient $i$ from group $x=A,B$, respectively. Let $X_{x,i}\coloneqq T_{x,i} \wedge C_{x,i}$ denote the minimum of both. As usual, we assume that the $T_{x,i}$ and $C_{x,i}$ are mutually independent (non--informative censoring). For each $s \geq 0$, we denote by ${\cal{F}}_s$ the $\sigma$--algebra of information available by study time $s$:
\begin{align}\label{Sec_02_01_eq01}
{\cal{F}}_s\coloneqq
\sigma \left( I{(T_{x,i} \leq s) }, T_{x,i} \cdot I{(T_{x,i} \leq s)}, I{(C_{x,i} \leq s)}, C_{x,i} \cdot I{(C_{x,i} \leq s)}; i=1,\ldots,n_x, x=A,B \right).
\end{align}
Based on the observed data, we calculate the \emph{number of events} from treatment group $x=A,B$ up to study time $s \geq 0$ as
\begin{align}\label{Sec_02_01_eq02}
\begin{split}
N_{x}(s)\coloneqq \sum_{i \in {\cal{N}}_{x}} N_{x,i}(s), \quad N_{x,i}(s)\coloneqq I( T_{x,i} \leq s, T_{x,i} \leq C_{x,i}),
\end{split}
\end{align}
and the \emph{number at risk} $Y_{x}(s)\coloneqq \sum_{i \in {\cal{N}}_{x}} I( T_{x,i} \wedge C_{x,i} \geq s )$ by study time $s \geq 0$ in treatment group $x=A,B$.
Let $J_{x}(s)\coloneqq I( Y_{x}(s)> 0)$ indicate whether there are still patients at risk in treatment group $x$ by study time $s$. As usual, we let $\lambda_x(s)\coloneqq \lim_{\Delta \to 0} P(s \leq T_{x,i} < s + \Delta| T_{x,i} \geq s)/\Delta$ denote the hazard of a patient $i$ from treatment group $x=A,B$. We denote by $\Lambda_x(s)\coloneqq \int_0^s \lambda_x(u)du$ the corresponding cumulative hazard function for treatment group $x=A,B$, respectively.
Finally, we denote by $f_{T_x}$, $F_{T_x}$, $S_{T_x}$ (or $f_{C_x}$, $F_{C_x}$, $S_{C_x}$) the density, distribution function and survival function of the time to event $T_{x,i}$ (or time to censoring $C_{x,i}$) in treatment group $x=A,B$. Notice that $\lambda_x$, $\Lambda_x$, $f_{T_x}$, $F_{T_x}$, $S_{T_x}$ and $f_{C_x}$, $F_{C_x}$, $S_{C_x}$ are assumed to coincide for all patients from the same treatment group $x=A,B$.
We will also need the Nelson--Aalen estimator
\begin{align}\label{Sec_02_01_eq04}
\begin{split}
\widehat{\Lambda}_x(s)
\coloneqq \int_0^s \frac{J_x(u)}{Y_x(u)} dN_x(u)
\equiv \ \sum_{\substack{ i \in {\cal{N}}_{x}, \\ N_{x,i} (s) = 1 }} \frac{J_x(T_{x,i})}{Y_x(T_{x,i})}
\end{split}
\end{align}
of the cumulative hazard function $\Lambda_x(s)$ for group $x=A,B$, and the corresponding estimator of the variance function
\begin{align}\label{Sec_02_01_eq05}
\begin{split}
\widehat{\sigma}_x(s)
\coloneqq n_x \cdot \int_0^s \frac{J_x(u)}{Y_x^2(u)} dN_x(u) \
\equiv \ n_x \cdot \sum_{\substack{ i \in {\cal{N}}_{x}, \\ N_{x,i} (s) = 1 }} \frac{J_x(T_{x,i})}{Y_x^2(T_{x,i})}.
\end{split}
\end{align}
We consider $N_{x}$, $Y_{x}$, $J_{x}$, $\widehat{\Lambda}_x$ and $\widehat{\sigma}_x$ as stochastic processes in study time $s \geq 0$, adapted to the filtration $({\cal{F}}_s)_{s \geq 0}$.
Notice that we define $0/0\coloneqq0$ whenever formal division of zero by zero occurs in a mathematical expression.
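For illustration, the Nelson--Aalen estimator (\ref{Sec_02_01_eq04}) and the variance estimator (\ref{Sec_02_01_eq05}) can be computed in a few lines of R. The following minimal sketch is ours (the function name and the representation of the data as vectors of observed times and event indicators are our own choices); since each event contributes one summand, ties are handled by summing over individual events:
\begin{verbatim}
# Minimal sketch: Nelson--Aalen estimate and variance process for one
# group at study time s; 'time' holds the observed times X_i and
# 'status' the event indicators.
na_estimates <- function(time, status, s) {
  n  <- length(time)
  ev <- time[status == 1 & time <= s]                       # event times <= s
  y  <- vapply(ev, function(t) sum(time >= t), numeric(1))  # numbers at risk
  list(Lambda = sum(1 / y),        # each event contributes 1/Y(t)
       sigma  = n * sum(1 / y^2))  # each event contributes n/Y(t)^2
}
\end{verbatim}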
\subsection{The Testing Problem}\label{Sec_02_02}
We consider testing the null hypothesis that the survival function of patients from the experimental treatment group $B$ coincides with the reference curve that is given by the \emph{true} survival function under standard of care, i.e.
\begin{equation*}
H_0: \Lambda_B(s) = \Lambda_A(s) \text{ for all } s \in [0,s_{max}]
\end{equation*}
for some maximum observation time $s_{max}>0$.\\
Notice that $H_0$ deviates from the null hypothesis of the classical one--sample log--rank tests (see \cite{Breslow:1975} or \cite{Finkelstein:2003}), which assume a known reference survival curve. Nevertheless, $H_0$ is typically the null hypothesis of actual interest also in a one--sample setting.
\section{The Testing Procedure}\label{Sec_03}
\subsection{Motivation}\label{Sec_03_01}
The starting point is the stochastic process $M_0(s)\coloneqq n_B^{-1/2}[N_B(s) - \sum_{i \in {\cal{N}}_{B}} \Lambda_A(s \wedge X_{B,i})]$. When $H_0$ holds true, $M_0$ is (known to be) a mean--zero ${\cal{F}}_s$--martingale. $M_0$ depends only on data from the experimental treatment arm $B$ and is commonly used as a basis to construct one--sample log--rank tests (see \cite{Aalen:2008}). Notice, however, the difficulty that $M_0$ depends on the true unknown cumulative hazard function $\Lambda_A$ under standard of care. In the context of the classical one--sample log--rank test it is common practice to estimate $\Lambda_A$ from historic data, and to identify the obtained estimate $\widetilde{\Lambda}_A$ with $\Lambda_A$, while treating $\widetilde{\Lambda}_A$ as a deterministic function. That is, the classical one--sample log--rank test effectively assesses the null hypothesis $\widetilde{H}_0:\Lambda_B = \widetilde{\Lambda}_A$ using the test statistic $\widetilde{M}_0(s)\coloneqq n_B^{-1/2}[N_B(s) - \sum_{i \in {\cal{N}}_{B}} \widetilde{\Lambda}_A(s \wedge X_{B,i})]$, while pretending that $\widetilde{\Lambda}_A$ is an a priori known deterministic reference function representing the expected survival under standard of care. This, however, may detract from the actually interesting null hypothesis $H_0: \Lambda_B = \Lambda_A$ when the random deviation of $\widetilde{\Lambda}_A$ from $\Lambda_A$ is large.
To avoid those interpretive difficulties, we here propose to incorporate the process of reference curve estimation into the one--sample log--rank statistic: Replacing $\Lambda_A$ with its Nelson--Aalen estimate $\widehat{\Lambda}_A$ (see \cite{Nelson:1969, Nelson:1972,Aalen:1978}) in the definition of $M_0$, while treating $\widehat{\Lambda}_A$ as random, we obtain a new stochastic process $\widehat{M}_0(s)\coloneqq n_B^{-1/2}[N_B(s) - \sum_{i \in {\cal{N}}_{B}} \widehat{\Lambda}_A(s \wedge X_{B,i})]$ that (i) can be calculated from the data, and (ii) may be used as a test statistic for the original null hypothesis $H_0$, as we will see below. Notice that replacing $\Lambda_A$ with $\widehat{\Lambda}_A$ in $M_0$ increases the variance of the stochastic process, since $\widehat{\Lambda}_A$ contributes additional variability. Deriving the correct rejection regions thus requires separate consideration, which is not covered by the underlying one--sample test methodology. The resulting significance test may also be interpreted as a two--sample survival test, as the reference curve coincides with the true survival function under standard of care. Our approach defines a general strategy to lift existing methodology for one--sample survival tests to a multi--sample setting for a variety of different design settings, as will be further discussed.
\subsection{Test Statistic and Significance Test}\label{Sec_03_02}
Consider the ${\cal{F}}_s$--adapted stochastic processes
\begin{align}\label{Sec_03_02_eq01}
\begin{split}
\widehat{M}_0(s) &\coloneqq n_B^{-1/2}\left[N_B(s) - \sum_{i \in {\cal{N}}_{B}} \widehat{\Lambda}_A(s \wedge X_{B,i})\right] \\
\widehat{\Sigma}^2(s) &\coloneqq n_B^{-1} N_B(s) + n_B^{-1} n_A^{-1} \sum_{i,j \in {\cal{N}}_{B}} \widehat{\sigma}_A(s \wedge X_{B,i} \wedge X_{B,j})
\end{split}
\end{align}
with $N_B$, $\widehat{\Lambda}_A$ and $\widehat{\sigma}_A$ according to (\ref{Sec_02_01_eq02}), (\ref{Sec_02_01_eq04}) and (\ref{Sec_02_01_eq05}). Assume that the null hypothesis $H_0$ holds true. Then by Theorem 1 (see Appendix A) the following applies: (i) $\widehat{M}_0$ is a mean--zero ${\cal{F}}_s$--martingale with asymptotically independent increments, i.e. for any $0<s_1<s_2 < s_{max}$, $\widehat{M}_0(s_1)$ and $\widehat{M}_0(s_2)-\widehat{M}_0(s_1)$ are approximately independent when the sample size $n$ is sufficiently large, and (ii) for each fixed $s>0$ we have $\widehat{M}_0(s) \stackrel{d}{\to} {\cal{N}}(0,\Sigma^2(s))$ in distribution as $n\to\infty$, where $\Sigma(s)\coloneqq\text{plim}_{n \to \infty} \widehat{\Sigma}(s) = \lim_{n\to\infty} E[\widehat{\Sigma}(s)]$ (see Appendix A, Lemma 1).
In particular, the random variable
\begin{align}\label{Sec_03_02_eq02}
\begin{split}
Z\coloneqq \frac{\widehat{M}_0(\infty)}{\widehat{\Sigma}(\infty)} \stackrel{H_0}{\sim} {\cal{N}}(0,1)
\end{split}
\end{align}
is approximately standard normally distributed under the null hypothesis $H_0$. Notice that the parameters $n_A$ and $n_B$ cancel out in the definition of $Z$, so that $Z$ can be calculated from the observed data. Thus an approximate level $\alpha$ test of $H_0$ is defined by rejecting $H_0$ whenever $|Z| \geq \Phi^{-1}(1-\alpha/2)$, where $\alpha$ is the desired two--sided significance level and $\Phi$ is the standard normal distribution function.
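For illustration, the following minimal R sketch computes $Z$ and the corresponding two--sided $p$--value directly from the definitions in (\ref{Sec_03_02_eq01}) and (\ref{Sec_03_02_eq02}). It is a transparent but unoptimized rendering of the formulas, not the validated implementation that accompanies this paper; the function name and the representation of the data as vectors of observed times and event indicators per group are our own choices:
\begin{verbatim}
# Sketch of the test statistic Z; timeA/statusA and timeB/statusB hold
# the observed times and event indicators of groups A and B.
z_test <- function(timeA, statusA, timeB, statusB) {
  nA <- length(timeA); nB <- length(timeB)
  evA <- timeA[statusA == 1]                           # event times in A
  yA  <- vapply(evA, function(t) sum(timeA >= t), numeric(1))
  LambdaA <- function(s) sum((evA <= s) / yA)          # Nelson--Aalen
  sigmaA  <- function(s) nA * sum((evA <= s) / yA^2)   # variance process
  M0 <- (sum(statusB) - sum(vapply(timeB, LambdaA, numeric(1)))) / sqrt(nB)
  mins <- as.vector(outer(timeB, timeB, pmin))         # pairs X_Bi ^ X_Bj
  S2 <- sum(statusB) / nB +
        sum(vapply(mins, sigmaA, numeric(1))) / (nA * nB)
  Z <- M0 / sqrt(S2)
  c(Z = Z, p.value = 2 * pnorm(-abs(Z)))               # two-sided p-value
}
\end{verbatim}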
To enable easy application of the proposed significance test in clinical practice, we provide R code (see \cite{R}) that calculates the value of the test statistic $Z$ and the corresponding two--sided $p$--value for a given input data set. The R code as well as instructions on how to prepare the input data are given in \nameref{S1_File}.
\section{Sample Size Calculation}\label{Sec_04}
Sample size is calculated under the proportional hazards planning alternative $K_0: \Lambda_B(s) = \omega_0 \cdot \Lambda_A(s)$ for some prefixed hazard ratio $0<\omega_0<1$. By Theorem 2 (see Appendix A), the test statistic $Z$ from (\ref{Sec_03_02_eq02}) is approximately normally distributed under planning alternative hypothesis $K_0$ with unit variance and mean $\log(\omega_0) \cdot \mu \cdot \sigma^{-1}$ where
\begin{align}\label{Sec_04_eq01}
\begin{split}
\mu &= n^{1/2} \sqrt{\frac{\pi}{1+\pi}} \int_0^{\infty} F_{T_A}(u) f_{C_A}(u) du
\\
\sigma^2 &= \int_0^{\infty} F_{T_A}(u) f_{C_A}(u) du + 2 \pi \cdot \int_0^{\infty} \sigma_A(u) [f_{T_A}(u)S_{C_A}(u)+S_{T_A}(u)f_{C_A}(u)]S_{T_A}(u)S_{C_A}(u) du
\end{split}
\end{align}
with ${\sigma}_A(s) \coloneqq \int_0^s \frac{\lambda_A(u)}{S_{T_A}(u) \cdot S_{C_A}(u)} du$ and $\pi=n_B/n_A$ denoting the treatment arm allocation ratio. Large negative values of the test statistic $Z$ support validity of the planning alternative $K_0$. The power $1-\beta\coloneqq P_{K_0}(Z \leq \Phi^{-1}(\alpha/2))$ of the trial is thus approximately given by
\begin{align}\label{Sec_04_eq02}
\begin{split}
1- \beta = \Phi \left( \Phi^{-1}(\alpha/2) - \log(\omega_0) \cdot \mu \cdot \sigma^{-1} \right).
\end{split}
\end{align}
In practice, the following assumptions on accrual and censoring are commonly made when calculating the required sample size of a survival trial:
\begin{itemize}
\item Patients enter the trial uniformly between year $0$ and year $a$ with prefixed constant accrual rate $r$, say, and are then followed up for further $f \geq s_{max}$ years until the time of final analysis in year $a+f$.
\item No loss to follow--up, i.e. $C_{x,i}\sim {\cal{U}}(f,a+f)$ is uniformly distributed on $[f,a+f]$.
\end{itemize}
These assumptions amount to
\begin{align}\label{Sec_04_eq03}
\begin{split}
n &= r \cdot a \\
f_{C_x}(s) &= a^{-1} \cdot I\left( s \in [f,a+f] \right), \qquad \text{for } s \geq 0, \\
S_{C_x}(s) &= \min\left(1, \frac{a+f-s}{a}\right) \cdot I\left( s \leq a+f \right), \qquad \text{for } s \geq 0, \\
\sigma_A(s) &= \int_0^{s} \frac{\lambda_A(u)}{S_{T_A}(u)} \max\left(1, \frac{a}{a+f-u}\right) du \qquad \text{for } 0 \leq s < a+f.
\end{split}
\end{align}
which further simplifies the above expressions for $\mu$ and $\sigma$.
For prefixed two--sided significance level $\alpha$, hazard ratio $0<\omega_0<1$, treatment group allocation ratio $\pi=n_B/n_A$, overall accrual rate $r$, length of the follow--up period $f$, and control arm cumulative hazard function $\Lambda_A(s)$, it remains to choose the only remaining free parameter $a$ in (\ref{Sec_04_eq02}) such that a desired power $1-\beta$ is achieved. With the parameter $a$ calculated this way, the required number of patients $n$ to achieve a power of $1-\beta$ under planning alternative $K_0$ is $n=r \cdot a$.
In \nameref{S1_File} we provide R code (see \cite{R}) that calculates the required number of patients $n$ for settings in which the survival times in the reference group $A$ are Weibull distributed, $\Lambda_A(s)\coloneqq - \log(S_1) \cdot s^{\kappa}$, with prefixed shape parameter $\kappa$ and prefixed 1-year survival rate $S_{T_A}(1)=S_1$.
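For illustration, the following R sketch implements the power formula (\ref{Sec_04_eq02}) under the simplifications (\ref{Sec_04_eq03}) and solves for the accrual length $a$ by a root search. It is an illustrative re--implementation under the stated Weibull and uniform--accrual assumptions, not the code provided in \nameref{S1_File}; the function names are our own choices, and the small trimming constants guard the numerical integration against the integrable singularity of $1/S_{C_A}$ at $u=a+f$:
\begin{verbatim}
# Sketch: approximate power as a function of the accrual length a, then
# solve power(a) = 1 - beta for a; the required sample size is n = r * a.
power_for_a <- function(a, r, f, S1, kappa, omega0, alloc, alpha) {
  lam <- function(s) -log(S1) * kappa * s^(kappa - 1)  # lambda_A
  ST  <- function(s) S1^(s^kappa)                      # S_{T_A}
  SC  <- function(s) pmin(1, (a + f - s) / a)          # S_{C_A} on [0,a+f]
  fC  <- function(s) (s >= f & s <= a + f) / a         # f_{C_A}
  I1  <- integrate(function(u) (1 - ST(u)) * fC(u), f, a + f)$value
  sigA <- Vectorize(function(s)
    integrate(function(u) lam(u) / (ST(u) * SC(u)), 0, s)$value)
  I2  <- integrate(function(u) sigA(u) *
           (lam(u) * ST(u) * SC(u) + ST(u) * fC(u)) * ST(u) * SC(u),
           1e-6, a + f - 1e-6)$value
  mu    <- sqrt(r * a * alloc / (1 + alloc)) * I1
  sigma <- sqrt(I1 + 2 * alloc * I2)
  pnorm(qnorm(alpha / 2) - log(omega0) * mu / sigma)   # power
}
a_star <- uniroot(function(a)
  power_for_a(a, r = 100, f = 3, S1 = 0.5, kappa = 1, omega0 = 0.67,
              alloc = 1, alpha = 0.05) - 0.8, c(0.5, 20))$root
n <- ceiling(100 * a_star)   # required total number of patients
\end{verbatim}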
\section{Simulation Study I: Comparison with the Classical One--Sample Log--Rank Test}\label{Sec_06}
\subsection{Design}\label{Sec_06_01}
In the application of the classical one--sample log--rank test from \cite{Breslow:1975,Finkelstein:2003} it is common practice to estimate the standard arm cumulative hazard function $\Lambda_A$ from historic data, and to choose the obtained estimate $\widetilde{\Lambda}_A$ as the reference curve, while treating $\widetilde{\Lambda}_A$ as deterministic. This may lead to type I error rate inflation when the underlying null hypothesis to be tested is $H_0: \Lambda_B = \Lambda_A$, because the random deviation of $\widetilde{\Lambda}_A$ from $\Lambda_A$ is neglected and the variance of the involved test statistic is thus underestimated. The objective of this simulation study I is to quantify the amount of type I error rate inflation in settings of clinical relevance: We study and compare the empirical type I error rates when the null hypothesis $H_0: \Lambda_B = \Lambda_A$ is tested with (i) the classical one--sample log--rank test (without correction for sampling variability of the reference curve) and (ii) the new one--sample log--rank test (with correction for sampling variability of the reference curve).
In our simulations, patients were assumed to enter the trial uniformly between year $0$ and year $a$ with overall accrual rate of $r=100$ per year.
Accordingly, the calendar times of entry were generated according to a uniform distribution $Y_{x,i} \sim {\cal{U}}(0,a)$ on $[0,a]$.
After the end of the accrual period, patients were assumed to be followed up for further $f=3$ years, while assuming no loss to follow--up.
Accordingly, we set $C_{x,i}\coloneqq a+f-Y_{x,i}\sim {\cal{U}}(f,a+f)$.
Survival times $T_{A,i}$ in the control intervention group $A$ were generated according to a Weibull distribution $\Lambda_A(s)\coloneqq - \log(S_1) \cdot s^{\kappa}$ with prefixed shape parameter $\kappa$ and 1-year survival rate $S_{T_A}(1)=S_1=0.5$. Survival times $T_{B,i}$ in the experimental intervention group $B$ were generated according to a Weibull distribution with $\Lambda_B(s)\coloneqq \omega \cdot \Lambda_A(s)$, where $\omega$ is the true hazard ratio.
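For concreteness, such survival times can be generated by inverse transform sampling: if $U \sim {\cal{U}}(0,1)$, solving $\exp(-\omega \cdot (-\log(S_1)) \cdot T^{\kappa}) = U$ for $T$ yields a draw with the desired cumulative hazard. A minimal R sketch of this data generation step (one of several possible implementations) reads:
\begin{verbatim}
# Sketch: inverse transform sampling from
# S(t) = exp(-omega * (-log(S1)) * t^kappa).
rsurv <- function(n, S1, kappa, omega = 1)
  (-log(runif(n)) / (omega * (-log(S1))))^(1 / kappa)
T_A <- rsurv(100, S1 = 0.5, kappa = 1)               # control arm times
T_B <- rsurv(100, S1 = 0.5, kappa = 1, omega = 0.8)  # experimental arm times
\end{verbatim}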
To perform the classical one--sample log--rank test, the standard arm data was used to calculate the Nelson--Aalen estimate $\widehat{\Lambda}_A$ of $\Lambda_A$. The obtained estimate $\widehat{\Lambda}_A$ was then treated as a deterministic function and used as (prefixed, deterministic) reference cumulative hazard function in the classical one--sample log--rank statistic (see \cite{Breslow:1975,Finkelstein:2003}). In contrast, the new test was performed as described in Section~\ref{Sec_03_02}.
To study the impact of sample size and allocation ratio on the amount of type I error rate inflation, the total sample size $n = r \cdot a$ of the virtual data sets was chosen as $n= 100, 500, 1000$. For each of these total sample sizes we considered allocation ratios $\pi \in \{2, 1, 1/2, 1/4, 1/8, 1/16\}$. Scenarios with $\pi \leq 1/2$ are more likely to reflect common practice as the size of the experimental cohort is typically smaller than the size of the historical control cohort. To study the impact of different shapes of the survival distribution, we considered different values for the Weibull shape parameter from the interval $[0.1,5]$.
For each parameter constellation, we generated 10000 samples of size $n$ to which we applied both the new test as well as the classical one--sample log--rank test. The desired two--sided significance level was $5\%$. Results are shown in Table \ref{tab:AlphaInflation} and discussed below.
\begin{table}[!ht]
\small
\centering
\caption{\bf Empirical type I error rates under consideration of sampling variability}
\begin{tabular}{cc||cc|cc|cc|cc|cc|cc}
\hline
& &
\multicolumn{2}{c|}{$\pi = 2$ } &
\multicolumn{2}{c|}{$\pi = 1$ } &
\multicolumn{2}{c|}{$\pi = 1/2$ } &
\multicolumn{2}{c|}{$\pi = 1/4$ } &
\multicolumn{2}{c|}{$\pi = 1/8$ } &
\multicolumn{2}{c}{$\pi = 1/16$ } \\[0.14cm]
$\kappa$ & $n$ & $\alpha_{new}$ & $\alpha_{LR}$& $\alpha_{new}$ & $\alpha_{LR}$& $\alpha_{new}$ & $\alpha_{LR}$& $\alpha_{new}$ & $\alpha_{LR}$& $\alpha_{new}$ & $\alpha_{LR}$& $\alpha_{new}$ & $\alpha_{LR}$ \\[0.14cm]
\hline
0.1 & 100 & 0.041 & 0.262 & 0.044 & 0.170 & 0.047 & 0.113 & 0.055 & 0.084 & 0.064 & 0.075 & 0.107 & 0.071 \\
0.1 & 500 & 0.044 & 0.249 & 0.047 & 0.163 & 0.052 & 0.112 & 0.053 & 0.084 & 0.050 & 0.062 & 0.056 & 0.062 \\
0.1 & 1000 & 0.047 & 0.257 & 0.046 & 0.160 & 0.049 & 0.106 & 0.051 & 0.079 & 0.054 & 0.068 & 0.051 & 0.059 \\[0.14cm]
0.25 & 100 & 0.043 & 0.267 & 0.045 & 0.172 & 0.047 & 0.114 & 0.053 & 0.086 & 0.062 & 0.075 & 0.086 & 0.070 \\
0.25 & 500 & 0.047 & 0.252 & 0.049 & 0.162 & 0.049 & 0.111 & 0.052 & 0.082 & 0.050 & 0.065 & 0.057 & 0.065 \\
0.25 & 1000 & 0.048 & 0.259 & 0.048 & 0.161 & 0.049 & 0.106 & 0.052 & 0.081 & 0.051 & 0.066 & 0.053 & 0.058 \\[0.14cm]
0.5 & 100 & 0.044 & 0.278 & 0.048 & 0.174 & 0.049 & 0.118 & 0.052 & 0.090 & 0.058 & 0.077 & 0.069 & 0.078 \\
0.5 & 500 & 0.049 & 0.260 & 0.050 & 0.168 & 0.051 & 0.112 & 0.052 & 0.083 & 0.056 & 0.068 & 0.056 & 0.067 \\
0.5 & 1000 & 0.051 & 0.263 & 0.052 & 0.164 & 0.052 & 0.112 & 0.055 & 0.083 & 0.053 & 0.068 & 0.055 & 0.061 \\[0.14cm]
0.75 & 100 & 0.043 & 0.289 & 0.050 & 0.184 & 0.049 & 0.126 & 0.053 & 0.093 & 0.056 & 0.080 & 0.056 & 0.082 \\
0.75 & 500 & 0.047 & 0.272 & 0.051 & 0.173 & 0.051 & 0.116 & 0.052 & 0.084 & 0.058 & 0.076 & 0.052 & 0.066 \\
0.75 & 1000 & 0.053 & 0.269 & 0.053 & 0.171 & 0.052 & 0.114 & 0.054 & 0.084 & 0.050 & 0.067 & 0.051 & 0.062 \\[0.14cm]
1 & 100 & 0.036 & 0.300 & 0.051 & 0.194 & 0.049 & 0.130 & 0.052 & 0.097 & 0.052 & 0.082 & 0.052 & 0.085 \\
1 & 500 & 0.047 & 0.273 & 0.050 & 0.176 & 0.052 & 0.116 & 0.052 & 0.085 & 0.053 & 0.073 & 0.052 & 0.069 \\
1 & 1000 & 0.053 & 0.269 & 0.052 & 0.170 & 0.051 & 0.115 & 0.052 & 0.084 & 0.051 & 0.069 & 0.050 & 0.063 \\[0.14cm]
1.25 & 100 & 0.026 & 0.293 & 0.043 & 0.197 & 0.043 & 0.133 & 0.050 & 0.097 & 0.049 & 0.084 & 0.039 & 0.087 \\
1.25 & 500 & 0.047 & 0.275 & 0.051 & 0.177 & 0.049 & 0.116 & 0.052 & 0.086 & 0.050 & 0.072 & 0.049 & 0.066 \\
1.25 & 1000 & 0.051 & 0.270 & 0.051 & 0.171 & 0.052 & 0.114 & 0.053 & 0.085 & 0.049 & 0.068 & 0.050 & 0.063 \\[0.14cm]
1.5 & 100 & 0.023 & 0.270 & 0.036 & 0.185 & 0.037 & 0.128 & 0.043 & 0.097 & 0.037 & 0.082 & 0.026 & 0.085 \\
1.5 & 500 & 0.046 & 0.276 & 0.052 & 0.177 & 0.049 & 0.116 & 0.051 & 0.086 & 0.050 & 0.072 & 0.048 & 0.066 \\
1.5 & 1000 & 0.050 & 0.271 & 0.051 & 0.172 & 0.051 & 0.116 & 0.053 & 0.084 & 0.048 & 0.067 & 0.048 & 0.064 \\[0.14cm]
2 & 100 & 0.024 & 0.261 & 0.034 & 0.174 & 0.032 & 0.122 & 0.035 & 0.090 & 0.029 & 0.077 & 0.015 & 0.082 \\
2 & 500 & 0.045 & 0.276 & 0.051 & 0.177 & 0.049 & 0.116 & 0.050 & 0.087 & 0.049 & 0.072 & 0.047 & 0.065 \\
2 & 1000 & 0.049 & 0.271 & 0.052 & 0.172 & 0.050 & 0.116 & 0.053 & 0.084 & 0.048 & 0.067 & 0.048 & 0.064 \\[0.14cm]
5 & 100 & 0.024 & 0.261 & 0.034 & 0.173 & 0.032 & 0.122 & 0.035 & 0.090 & 0.029 & 0.077 & 0.014 & 0.082 \\
5 & 500 & 0.045 & 0.275 & 0.051 & 0.177 & 0.049 & 0.116 & 0.050 & 0.087 & 0.049 & 0.071 & 0.047 & 0.065 \\
5 & 1000 & 0.049 & 0.271 & 0.052 & 0.172 & 0.050 & 0.116 & 0.052 & 0.084 & 0.048 & 0.067 & 0.048 & 0.064 \\
\hline
\end{tabular}
\begin{flushleft} Empirical type I error rates $\alpha_{new}$ and $\alpha_{LR}$ for testing $H_0:\Lambda_B = \Lambda_A$ using the new test statistic $Z$ and the classical one--sample log--rank statistic, respectively, for Weibull distributed survival times with shape parameter $\kappa$ and 1--year survival rate $S_1=0.5$ in the control arm. Theoretical two--sided significance level: $5 \%$. Underlying total sample size of $n$ with allocation ratio $\pi$.
\end{flushleft}
\label{tab:AlphaInflation}
\end{table}
\subsection{Results}\label{Sec_06_02}
The classical one--sample log--rank test does not account for sampling variability of the reference curve estimate. This leads to type I error rate inflation when the underlying null hypothesis to be tested is $H_0: \Lambda_B = \Lambda_A$. As expected, our simulations support that the amount of type I error rate inflation of the classical one--sample log--rank test is most pronounced when the allocation ratio $\pi$ is large. For any fixed allocation ratio, the inflation slightly decreases with increasing overall sample size but remains on a similar level. For ratios $\pi \geq 1$, the true type I error rate is more than three times higher than the desired one ($\sim 17\%$ instead of $5\%$ for $\pi=1$). For low allocation ratios such as $1/8$ or $1/16$, the actual type I error still exceeds the nominal level, but to an extent that might be acceptable for a phase II trial ($\sim 6.5\%$ for $\pi = 1/16$ and $n = 1000$). With a view to the classical one--sample log--rank test, this supports that the choice of the reference curve should be based on a historic control that is at least 10 times larger than the new experimental trial cohort. Reassuringly, the new test that explicitly accounts for reference curve variability realizes an empirical type I error rate close to the desired $5\%$ in almost all scenarios. Notice that the new test would hardly be applied in the scenario with $n=100$ and $\pi = 1/16$, as this implies a trial with only $n_B=100/17 \approx 6$ experimental patients. The entries for $n=100$ and $\pi = 1/16$ thus have to be interpreted with care, but are shown for reasons of transparency and completeness.
The simulations thus support that neglecting the reference curve variability relevantly compromises type I error rate control when testing null hypotheses $H_0: \Lambda_B = \Lambda_A$. Notice that the classical one--sample log--rank test only realizes strict type I error rate control for testing the null hypothesis $\widetilde{H}_0:\Lambda_B = \widetilde{\Lambda}_A$ which, however, detracts from the null hypothesis $H_0: \Lambda_B = \Lambda_A$ when random deviation of $\widetilde{\Lambda}_A$ from $\Lambda_A$ is large.
\section{Simulation Study II: Comparison with the Two--Sample Log--Rank Test}\label{Sec_05}
\subsection{Design}\label{Sec_05_01}
We proposed a significance test for the null hypothesis $H_0$ based on the approximate large sample distribution of the test statistic $Z$ introduced before. Despite being derived as a one--sample log--rank test with consideration of reference curve variability, the new test may also be interpreted as a two--sample survival test. This simulation study therefore aims to assess the performance of the new survival test for sample sizes of practical relevance, as compared to the classical two--sample log--rank test (see \cite{Mantel:1966, Peto:1972}). Asymptotically (i.e. for sufficiently large sample size), the classical two--sample log--rank test is known to be the optimal test under proportional hazards (PH) alternatives. It is thus of particular interest to compare the performance of both tests under PH alternatives.
In our simulations, patients were assumed to enter the trial uniformly between year $0$ and year $a$ with overall accrual rate of $r=100$ per year.
Accordingly, the calendar times of entry were generated according to a uniform distribution $Y_{x,i} \sim {\cal{U}}(0,a)$ on $[0,a]$.
Patients were allocated equally to both treatment arms $A$ and $B$ (allocation ratio $\pi =1$), corresponding to an annual accrual rate of $50$ patients per group.
After the end of the accrual period, patients were assumed to be followed up for further $f=3$ years, while assuming no loss to follow--up.
Accordingly, we set $C_{x,i}\coloneqq a+f-Y_{x,i}\sim {\cal{U}}(f,a+f)$.
Survival times $T_{A,i}$ in the control intervention group $A$ were generated according to a Weibull distribution $\Lambda_A(s)\coloneqq - \log(S_1) \cdot s^{\kappa}$ with prefixed shape parameter $\kappa$ and 1-year survival rate $S_{T_A}(1)=S_1$. To implement the PH condition, survival times $T_{B,i}$ in the experimental intervention group $B$ were generated according to a Weibull distribution with $\Lambda_B(s)\coloneqq \omega \cdot \Lambda_A(s)$, where $\omega$ is the true hazard ratio. The true hazard ratio $\omega$ has to be distinguished from the expected hazard ratio $\omega_0$, which defines the planning alternative $K_0: \Lambda_B = \omega_0 \cdot \Lambda_A$ underlying sample size calculation.
The classical two--sample log--rank test serves as reference. Sample size $n$ of the virtual trials was thus calculated as follows:
In a first step, we used Schoenfeld's formula from \cite{Schoenfeld:1981} to calculate the required number of events $d$ for the two--sample log--rank test to achieve a power of $1- \beta$ under the planning alternative $K_0: \Lambda_B = \omega_0 \cdot \Lambda_A$ at allocated two--sided significance level $\alpha$. The expected number of events under the planning alternative $K_0: \Lambda_B = \omega_0 \cdot \Lambda_A$ by calendar time $a+f$ is $E(d_A)=r/2 \cdot \int_f^{a+f}[1-(S_1)^{t^{\kappa}}] dt$ in the standard treatment group A, and $E(d_B)=r/2 \cdot \int_f^{a+f}[1-(S_1)^{\omega_0 \cdot t^{\kappa}}] dt$ in the experimental treatment group B. Solving the condition $d=E(d_A)+E(d_B)$ for the unknown $a$ yields the required length of the accrual period. The required total sample size is $n\coloneqq r \cdot a$ (i.e. $n/2$ per treatment group).
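For illustration, this two--step calculation can be sketched in a few lines of R (a minimal sketch for the equal allocation used here; the function names are our own choices, and Schoenfeld's event number is stated for allocation ratio $\pi = 1$):
\begin{verbatim}
# Sketch: required events d by Schoenfeld's formula (equal allocation),
# then accrual length a solving d = E(d_A) + E(d_B), with r = 100.
d_schoenfeld <- function(alpha, beta, omega0)
  4 * (qnorm(1 - alpha / 2) + qnorm(1 - beta))^2 / log(omega0)^2
expected_events <- function(a, f, S1, kappa, omega0, r = 100) {
  eA <- integrate(function(t) 1 - S1^(t^kappa), f, a + f)$value
  eB <- integrate(function(t) 1 - S1^(omega0 * t^kappa), f, a + f)$value
  r / 2 * (eA + eB)
}
d <- d_schoenfeld(alpha = 0.05, beta = 0.2, omega0 = 0.67)
a <- uniroot(function(a)
  expected_events(a, f = 3, S1 = 0.5, kappa = 1, omega0 = 0.67) - d,
  c(0.1, 50))$root
n <- ceiling(100 * a)   # total sample size, n/2 per treatment group
\end{verbatim}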
To cover scenarios of larger and smaller sample sizes, we let the expected hazard ratio $\omega_0$ range over the set $\{0.5, 0.67, 0.8\}$.
To study the impact of different shapes of the survival distribution, we considered different values for the Weibull shape parameter from the interval $[0.1,5]$.
To study the impact of the event rate, we chose a reference arm 1-year survival rate $S_1$ of $0.5$ (Table \ref{tab:IntEventRate}), $0.8$ (see Appendix C) and $0.2$ (see Appendix D).
\begin{table}[!ht]
\small
\caption{\bf Comparison of empirical type I and II errors of new procedure and two-sample log-rank test}
\begin{tabular}{cc|ccccc|ccccc}
\hline
& & \multicolumn{5}{c|}{Scenario 1} & \multicolumn{5}{c}{Scenario 2} \\[0.14cm]
$\kappa$ & $\omega_0$ & $n$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ & $n'$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ \\
\hline
0.1 & 0.50 & 150 & 0.048 & 0.049 & 0.802 & 0.798 & 123 & 0.046 & 0.050 & 0.708 & 0.710 \\
0.1 & 0.67 & 402 & 0.049 & 0.049 & 0.805 & 0.799 & 359 & 0.047 & 0.048 & 0.757 & 0.752 \\
0.1 & 0.80 & 1180 & 0.045 & 0.045 & 0.803 & 0.798 & 1116 & 0.044 & 0.045 & 0.781 & 0.776 \\ [0.14cm]
0.25 & 0.50 & 122 & 0.048 & 0.051 & 0.803 & 0.800 & 111 & 0.045 & 0.049 & 0.717 & 0.720 \\
0.25 & 0.67 & 346 & 0.051 & 0.050 & 0.809 & 0.802 & 319 & 0.050 & 0.050 & 0.772 & 0.764 \\
0.25 & 0.80 & 986 & 0.049 & 0.048 & 0.814 & 0.806 & 957 & 0.049 & 0.049 & 0.798 & 0.790 \\[0.14cm]
0.5 & 0.50 & 110 & 0.047 & 0.051 & 0.809 & 0.800 & 96 & 0.045 & 0.050 & 0.749 & 0.743 \\
0.5 & 0.67 & 284 & 0.049 & 0.050 & 0.815 & 0.802 & 270 & 0.051 & 0.051 & 0.795 & 0.780 \\
0.5 & 0.80 & 798 & 0.050 & 0.050 & 0.818 & 0.803 & 789 & 0.051 & 0.051 & 0.811 & 0.797 \\[0.14cm]
0.75 & 0.50 & 94 & 0.049 & 0.051 & 0.811 & 0.801 & 84 & 0.049 & 0.056 & 0.765 & 0.756 \\
0.75 & 0.67 & 244 & 0.050 & 0.051 & 0.816 & 0.796 & 236 & 0.050 & 0.050 & 0.802 & 0.782 \\
0.75 & 0.80 & 702 & 0.053 & 0.050 & 0.826 & 0.802 & 698 & 0.053 & 0.049 & 0.823 & 0.799 \\[0.14cm]
1 & 0.50 & 82 & 0.050 & 0.056 & 0.810 & 0.799 & 76 & 0.048 & 0.056 & 0.778 & 0.766 \\
1 & 0.67 & 220 & 0.051 & 0.052 & 0.821 & 0.798 & 216 & 0.051 & 0.052 & 0.815 & 0.792 \\
1 & 0.80 & 658 & 0.051 & 0.050 & 0.826 & 0.803 & 657 & 0.050 & 0.051 & 0.823 & 0.801 \\[0.14cm]
1.25 & 0.50 & 76 & 0.037 & 0.059 & 0.803 & 0.801 & 70 & 0.036 & 0.058 & 0.758 & 0.768 \\
1.25 & 0.67 & 208 & 0.051 & 0.054 & 0.817 & 0.803 & 203 & 0.047 & 0.054 & 0.758 & 0.768 \\
1.25 & 0.80 & 640 & 0.051 & 0.052 & 0.816 & 0.800 & 639 & 0.050 & 0.052 & 0.811 & 0.800 \\[0.14cm]
1.5 & 0.50 & 72 & 0.029 & 0.059 & 0.731 & 0.801 & 67 & 0.024 & 0.059 & 0.616 & 0.764 \\
1.5 & 0.67 & 200 & 0.045 & 0.055 & 0.799 & 0.799 & 198 & 0.045 & 0.055 & 0.796 & 0.792 \\
1.5 & 0.80 & 634 & 0.051 & 0.053 & 0.814 & 0.800 & 633 & 0.050 & 0.052 & 0.809 & 0.792 \\[0.14cm]
2 & 0.50 & 68 & 0.027 & 0.058 & 0.496 & 0.793 & 66 & 0.026 & 0.059 & 0.464 & 0.779 \\
2 & 0.67 & 198 & 0.042 & 0.056 & 0.757 & 0.797 & 196 & 0.041 & 0.055 & 0.748 & 0.792 \\
2 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 631 & 0.050 & 0.052 & 0.806 & 0.798 \\[0.14cm]
5 & 0.50 & 66 & 0.026 & 0.059 & 0.402 & 0.779 & 66 & 0.026 & 0.059 & 0.402 & 0.779 \\
5 & 0.67 & 196 & 0.041 & 0.055 & 0.742 & 0.792 & 195 & 0.037 & 0.055 & 0.724 & 0.788 \\
5 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 631 & 0.050 & 0.052 & 0.805 & 0.798 \\[0.14cm]
\hline
\end{tabular}
\begin{flushleft} Empirical type I error rates ($\alpha_{new}$ and $\alpha_{LR}$) and powers ($1-\beta_{new}$ and $1-\beta_{LR}$) for the new test and for the classical two--sample log--rank test, respectively, under proportional hazards alternatives for Weibull distributed survival times with shape parameter $\kappa$ and 1--year survival rate $S_1=0.5$ in the control arm. Theoretical two--sided significance level: $5 \%$. Underlying total sample size $n$ (or $n'$) in Scenario 1 (or Scenario 2) calculated to achieve a theoretical power of $80 \%$ under the planning alternative $K_0:\Lambda_B = \omega_0 \cdot \Lambda_A$ for the classical log--rank test using Schoenfeld's formula (or for the new test using formula (\ref{Sec_04_eq02})).
\end{flushleft}
\label{tab:IntEventRate}
\end{table}
For each parameter constellation, we generated 10000 samples of size $n$ to which we applied both the new test as well as the classical two--sample log--rank test. We finally also used formula (\ref{Sec_04_eq02}) to calculate the sample size $n'$ such that our new test achieves a power of $1 - \beta$ under the planning alternative $K_0: \Lambda_B = \omega_0 \cdot \Lambda_A$ at an allocated two--sided significance level of $5\%$, and then repeated the above simulations based on a total sample size of $n'$ instead of $n$. Reported in Table \ref{tab:IntEventRate} and in the corresponding tables in Appendices C and D are the empirical type I and type II error rates for each parameter constellation and test, based on a sample size of $n$ (Scenario 1) or $n'$ (Scenario 2).
\subsection{Results of the Main Setting (Table \ref{tab:IntEventRate}, Scenario 1)}\label{Sec_05_02}
Reassuringly, for large sample sizes ($\omega_0 = 0.8$), both tests preserve the desired significance level and achieve similar power levels close to the desired $80\%$ for all shape parameter values $\kappa$. On closer inspection, one notices that both tests tend to be conservative for small values of $\kappa$, and slightly anti--conservative for larger values of $\kappa$, to an acceptable degree (empirical type I error ranging between $4.5\%$ for $\kappa=0.1$ and $5.3\%$ for $\kappa=5$ when $\omega_0 = 0.8$). For the classical two--sample log--rank test, this effect is compounded by a general tendency towards anti--conservativeness when sample size is small ($\omega_0 = 0.5$), resulting in an empirical type I error of up to $5.9\%$ for the classical two--sample log--rank test when $\kappa=5$ and $\omega_0=0.5$.
For shape parameters $\kappa \leq 1$, both tests perform similarly well with empirical type I error rates close to $5\%$. Interestingly, the new test even surpasses the classical two--sample log--rank test regarding power when $\kappa \leq 1$. This effect is most pronounced for exponentially distributed survival times ($\kappa = 1$), where the new test achieves a power of up to $83\%$ as compared to $80\%$ for the classical two--sample log--rank test. For shape parameters close to the exponential case ($\kappa \approx 1$), the new test is observed to show even better type I error rate control than the classical two--sample log--rank test when sample size is small ($\omega_0 = 0.5$).
For the extreme scenario of large shape parameters $\kappa \geq 2$ in combination with small sample size ($\omega_0 = 0.5$), however, the new test is observed to become quite conservative with a profound loss in power ($40\%$ instead of $80\%$ for $\kappa = 5$ and $\omega_0 = 0.5$). This is due to the fact that the new test requires estimation of the control arm cumulative hazard function, which seems to fail when the sample size of the control arm is small and early events are rare at the same time ($n_A=34$ for $\kappa = 5$ and $\omega_0 = 0.5$). In contrast, the classical two--sample log--rank test maintains its power also in these extreme scenarios, with a tendency towards anti--conservativeness, though.
This behavior of both tests is consistently observed amongst scenarios with different event rates (see the tables in Appendices C and D).
\section{Case Study}\label{sec:case_study}
As seen in the preceding simulations, the type I error of the classical one-sample log-rank test always exceeds the nominal type I error level if the sampling variability of the reference curve is not taken into account. However, the magnitude of this excess depends on the data from the reference cohort as well as the sample size in the new, experimental cohort.\\
The only difference between the test statistic of the classical one-sample log-rank test
\begin{equation*}
Z_{\text{OSLR}} \coloneqq \frac{\widehat{M}_0(\infty)}{\widehat{\Sigma}_{\text{OSLR}}(\infty)} \qquad \text{with} \qquad {\widehat{\Sigma}^2_{\text{OSLR}}(s)}\coloneqq n_B^{-1} N_B(s)
\end{equation*}
and the new test is the denominator. Let $R\coloneqq {\widehat{\Sigma}^2_{\text{OSLR}}(\infty)}/\widehat{\Sigma}^2(\infty)$ denote the ratio of the variance estimates without and with consideration of the sampling variability. The expected level of a two-sided classical one-sample log-rank test with nominal level $\alpha$ neglecting the sampling variability is then given by $E[2\cdot\Phi\left( \sqrt{R} \cdot z_{\frac{\alpha}{2}} \right)]$, which can be approximated by $2\cdot\Phi\left( E[\sqrt{R}] \cdot z_{\frac{\alpha}{2}} \right)$. In turn, $E[\sqrt{R}]$ can be approximated via a first-order Taylor expansion by
\begin{equation*}
\sqrt{E[{\widehat{\Sigma}^2_{\text{OSLR}}(\infty)}]/E[\widehat{\Sigma}^2(\infty)]}
\end{equation*}
which is now a quantity we can estimate from given historical control data and design parameters of a trial.\\
From the computations in \cite{Wu:2015} we get
\begin{equation}\label{eq:expectation_classical_variance}
E[{\widehat{\Sigma}^2_{\text{OSLR}}(\infty)}]=\int_0^{\infty} F_{T_B}(u) dF_{C_B}(u).
\end{equation}
After another approximation and some computations (see Appendix B), we also get
\begin{align*}
E[\widehat{\Sigma}^2(\infty)]\approx &\int_0^{\infty} F_{T_B}(u) dF_{C_B}(u) \\&+ 2\pi \cdot\left( \int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}^2(u)S_{C_B}(u) dF_{C_B}(u) + \int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}(u)S_{C_B}^2(u) dF_{T_B}(u)\right).
\end{align*}
Under the null hypothesis, this can be estimated by plugging in Kaplan-Meier estimates from the control group for $F_{T_B}$ and $S_{T_B}$, respectively. For a given historical control group, these formulas can now be used to compute the type I error inflation that results when sampling variability is not taken into account. Of course, the treatment group allocation ratio $\pi$ is essential for the extent of this inflation.\\
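As a minimal sketch, the final plug-in step can be written in R as follows; the two arguments are the numerically evaluated expectations of $\widehat{\Sigma}^2_{\text{OSLR}}(\infty)$ and $\widehat{\Sigma}^2(\infty)$ from the two formulas above, and the function name is our own choice:
\begin{verbatim}
# Sketch: approximate actual level of the classical one-sample log-rank
# test with nominal two-sided level alpha, given the two expected
# variances obtained from the plug-in formulas above.
actual_level <- function(E_var_oslr, E_var_new, alpha = 0.05)
  2 * pnorm(sqrt(E_var_oslr / E_var_new) * qnorm(alpha / 2))
actual_level(0.50, 0.65)   # approx. 0.086 instead of the nominal 0.05
\end{verbatim}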
We will now illustrate the influence of basic design parameters on the type I error inflation with a practical example. We employ the setting of the Mayo Clinic trial in primary biliary cirrhosis of the liver (PBC), a rare but fatal chronic disease whose cause is still unknown (see \cite{Fleming:2005}). In this double-blinded randomized trial the drug D-penicillamine (DPCA) was compared with a placebo. The study data is publicly available via the survival package in R \cite{Therneau:2020, R}.\\
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{"control_cohort_distributions.pdf"}
\caption{{\bf Distribution of survival and censoring variable.}
Distribution of overall survival and censoring in the cohort treated with DPCA of the Mayo Clinic trial in primary biliary cirrhosis. Left: Cumulative hazards according to the Nelson-Aalen estimator. Right: Survival distributions according to the Kaplan-Meier estimator.}
\label{fig:control_cohort}
\end{figure}
Among the 158 patients of the cohort treated with DPCA, 65 died during the trial. The Kaplan-Meier survival curve of these patients can be found in Fig \ref{fig:control_cohort}. The time scale is given in years. There, we also display the empirical distribution of the censoring variable $C$ in this cohort. As we will see below, this distribution also plays a substantial role in our computations. We now suppose that a new treatment becomes available and that the data from this trial shall be used to compare the survival under the new treatment to the survival under treatment with DPCA. This shall be accomplished in a trial in which patients are recruited uniformly over an accrual period of length $a$ and followed up in an additional period of length $f$. The allocation ratio will again be denoted by $\pi$. If one cannot find a suitable parametric model to be fitted to the data, the Kaplan-Meier and Nelson-Aalen estimates (see Fig \ref{fig:control_cohort}) are employed as reference curves for the one-sample log-rank test.\\
Similar to our first simulation study (see \nameref{Sec_06}), we investigate the influence of the allocation ratio on the inflation of the type I error level in the first part of our study. We choose $\pi\in \{0.01, 0.02, 0.03,\dots, 1\}$, $a=2$ and $f\in\{2, 4, 6, 8\}$. The results in terms of the actual type I error level of the one-sample log-rank test can be found on the left hand side of Fig \ref{fig:type_1_errors}. For any fixed $f$, the actual type I error level increases nearly linearly in the range of allocation ratios considered here. So, as a rule of thumb, each additional trial patient raises the level by a fixed number of percentage points. This number, however, seems to depend on the length of the follow-up period: a longer follow-up leads to a steeper increase.\\
In the second part of this case study, we take a closer look at the role of the trial duration. As already seen in the first part, longer trials lead to a larger inflation of the error levels. To analyse this dependence, we now choose $\pi=0.5$, $a\in\{2,4,6\}$ and $f\in\{0, 0.05, 0.1,\dots, 6\}$. The results can be found on the right hand side of Fig \ref{fig:type_1_errors}. As we can see, trials with a longer total duration ($a+f$) tend to lead to a higher type I error inflation. This effect is most substantial if the total trial duration is close to the longest observation in the reference data set, which amounts to about 12.5 years. In this case, the testing procedure needs to utilize parts of the Nelson-Aalen estimator which are affected by a high amount of variability because of the high number of censored observations. However, the inflated type I error behaves monotonically neither in the accrual duration $a$ nor in the follow-up duration $f$. Even though the variance of the classical one-sample log-rank test and the additional variance due to the sampling variability (see Appendix A) both increase monotonically in $a$ and $f$, the ratio $R$ can increase if the increase of the former is steeper than that of the latter. Nevertheless, there is a clear tendency towards a larger inflation of the type I error if either $a$ or $f$ increases.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{"type_1_errors.pdf"}
\caption{{\bf Type I error inflation.}
Actual type I error levels of the classical one-sample log-rank test. Left: Variation of the allocation ratio with fixed accrual duration and four different durations of the follow-up period. Right: Variation of the length of the follow-up period for a fixed allocation ratio and three different durations of the accrual period.}
\label{fig:type_1_errors}
\end{figure}
\section{Discussion}\label{sec:discussion}
Traditional one--sample log--rank tests compare the survival function of an experimental treatment to a prefixed reference survival curve, which typically represents the expected survival under standard of care. The choice of the reference survival curve is typically based on historic data on standard therapy and is thus prone to statistical error. Nevertheless, traditional one--sample log--rank tests do not account for the variance of the reference curve estimator. Here we propose and study a non--parametric one--sample log--rank test that explicitly accounts for the sampling variability of the reference curve.
The new test may also be interpreted as a two--sample test for survival distributions, while inheriting the interpretability of the underlying one--sample log--rank test. Admittedly, our simulations suggest that it may be advisable to compare the data of a historical control cohort with the new data of a single--arm Phase II trial via the two--sample log--rank test if one wants to account for sampling variability of the reference curve, or in the case of allocation ratios close to 1. Nevertheless, in Phase II settings with fast events (Weibull shape parameter $\kappa \leq 1$), our simulations reveal the potential of the new test to outperform the classical two--sample log--rank test even under PH alternatives. Ignoring the sampling variability leads to an inflation of the type I error rate. The extent of this inflation depends in particular on the size of the control cohort. A major objective of this work was to investigate how large this control cohort must be so that the type I error inflation remains within an acceptable range. In this regard, our simulations support that the classical one--sample log--rank test is adequate if the historical control cohort is at least about 10 times larger than the new cohort ($\pi \leq 1/10$) and the maximum follow--up in the new trial is reasonably small in view of the follow--up duration in the historic cohort (see \nameref{sec:case_study}).
Conceptually, the proposed new test also sheds light on a general strategy for lifting existing methodology for single--arm survival trials to a randomized, multi--arm setting. This might be of interest for designing confirmatory survival trials with interim analyses. Performing interim analyses in clinical trials is of ethical and economic interest. On the one hand, interim analyses enable faster decisions regarding rejection or acceptance of the underlying null hypothesis when the treatment effect is larger or smaller than initially expected. Moreover, interim analyses offer the possibility of data dependent modifications of the trial (e.g. sample size recalculation) in the case of new insights, thus increasing the prospects of the trial. Trial designs with interim analyses offering such kind of flexibility at full type I error rate control are commonly referred to as \emph{confirmatory adaptive designs} \cite{Bauer:1989, Bauer:1994}. Whereas methodology for confirmatory adaptive designs is well understood for trials with short--term endpoints as in \cite{BPB02, Hommel:2001}, subtle problems arise for adaptive survival trials. With standard methodology for group--sequential adaptive survival trials from \cite{Wassmer:2006}, the degree of flexibility is highly limited. For example, in a survival trial with primary endpoint \emph{overall survival (OS)}, essentially only interim information on the survival status of the patients may be used for design modifications (e.g. sample size recalculation). Further interim information, e.g. on the progression status of the patients, must not be used for design modifications in these classical adaptive Phase III survival trials, because otherwise type I error rate inflation may occur (see \cite{Bauer:2004}). This situation is clinically unsatisfactory. If a larger degree of flexibility is desired, the patient--wise separation approach as initially proposed by \cite{Jenkins:2011} has to be chosen, which, however, either implies neglecting some part of the observed survival data in the test statistic or requires some worst--case adjustments resulting in a conservative design, as shown in \cite{Magirr:2016}. Until today, no satisfactory methodology for adaptive Phase III survival trials exists that offers larger flexibility while avoiding those problems involved with the patient--wise separation approach.
Recently, however, such methodology was proposed for single arm Phase II survival trials. In \cite{Danzer:2021b}, an adaptive one--sample log--rank test was suggested that allows the simultaneous use of several time--to--event endpoints for data--dependent design modifications, while avoiding those problems involved with the patient--wise separation approach. In the same way that the common one--sample log--rank test was lifted to a two--sample setting in this paper, we expect that the multivariate adaptive one--sample log--rank test proposed by \cite{Danzer:2021b} may be lifted to a two--sample setting, thus solving an outstanding problem in the theory of adaptive design methodology. Implementation of this idea, however, is beyond the scope of this paper and will be the subject of an upcoming paper. The objective of this paper is to provide methodology for accounting for sampling variability of the reference curve in classical one--sample log--rank tests, and to show feasibility of the underlying lifting procedure regarding type I and type II error rate control.
\section{Acknowledgments}
The work of Moritz Fabian Danzer was funded by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG, grant number 413730122).
\newpage
\section*{Appendix A: Proof of Distributional Properties}\label{appA}
\textbf{Theorem 1.}
\textit{Let $s_0 > 0$ be given s.t. $S_{X_A}(s_0)=S_{T_A}(s_0)S_{C_A}(s_0)\eqqcolon p_0 > 0$ and assume
that the null hypothesis $H_0: \Lambda_A(s)=\Lambda_B(s)$ for all $0 \leq s \leq s_0$ is true. Set
\begin{align}\label{appA_eq01}
\begin{split}
\widehat{M}_0(s) &\coloneqq n_B^{-1/2}\left[N_B(s) - \sum_{i \in {\cal{N}}_{B}} \widehat{\Lambda}_A(s \wedge X_{B,i})\right] \\
\widehat{\Sigma}^2(s) &\coloneqq n_B^{-1} N_B(s) + n_B^{-1} n_A^{-1} \sum_{i,j \in {\cal{N}}_{B}} \widehat{\sigma}_A(s \wedge X_{B,i} \wedge X_{B,j})
\end{split}
\tag{A.1}
\end{align}
Then the following is true:
\begin{itemize}
\item $\widehat{M}_0|_{[0,s_0]}$ has asymptotically independent increments, i.e. for all $0 \leq s_1 \leq s_2 \leq s_0$ and sufficiently large sample size $n$, the random variables $\widehat{M}_0(s_1)$ and $\widehat{M}_0(s_2)-\widehat{M}_0(s_1)$ are approximately independent.
\item Pointwise, for each $0 \leq s \leq s_0$, we have $\widehat{M}_0(s) \stackrel{\cal{D}}{\rightarrow} {\cal{N}}(0,\Sigma^2(s))$ as $n \to \infty$, where $\Sigma(s)\coloneqq \emph{plim}_{n \to \infty} \widehat{\Sigma}(s) = \lim_{n \to \infty} E[\widehat{\Sigma}(s)]$ is the large sample limit of $\widehat{\Sigma}(s)$ (which exists according to Lemma 1 below).
\end{itemize}
}
{\small{\textsc{Proof.}}}
It is well known that $M_{x,i}(s)\coloneqq N_{x,i}(s) - \int_0^s I(X_{x,i} \geq u) \lambda_x(u) du$ is a mean--zero ${\cal{F}}_s$--martingale with optional covariation $[M_{x,i}](s) = N_{x,i}(s)$. By independence of the summands, it follows that $M_{x}(s)\coloneqq N_{x}(s) - \int_0^s Y_x(u) \lambda_x(u) du$ is a mean--zero ${\cal{F}}_s$--martingale with optional covariation $[M_{x}](s) = N_{x}(s)$. In particular, for any left--continuous ${\cal{F}}_s$--adapted process $H(s)$, $M_H(s)\coloneqq\int_0^s H(u) dM_A(u)$ is a mean--zero ${\cal{F}}_s$--martingale with optional covariation $\int_0^sH^2(u)dN_A(u)$. Choosing $H(u)\coloneqq J_A(u)/Y_A(u)$ we recover the well--known result that $M_{J_A/Y_A}(s)\coloneqq\int_0^s \frac{J_A(u)}{Y_A(u)} dM_A(u)$ is a mean--zero ${\cal{F}}_s$--martingale with optional covariation $[M_{J_A/Y_A}](s)=\int_0^s \frac{J_A(u)}{Y_A^2(u)} dN_A(u)$. Notice that $M_{J_A/Y_A}(s)=\widehat{\Lambda}_A(s)-\Lambda_A^*(s)$, where $\widehat{\Lambda}_A(s)\coloneqq\int_0^s \frac{J_A(u)}{Y_A(u)} dN_A(u)$ is the Nelson--Aalen estimate of $\Lambda_A(s)$, and $\Lambda_A^*(s)\coloneqq\int_0^s J_A(u)\lambda_A(u) du$. By independence of the treatment groups it follows that
\begin{align}\label{appA_eq02}
M_i(s)\coloneqq M_{B,i}(s) - M_{J_A/Y_A}(s)
\tag{A.2}
\end{align}
is a mean--zero ${\cal{F}}_s$--martingale with optional covariation $[M_{i}](s) = [M_{B,i}](s) + [M_{J_A/Y_A}](s)$. Since $\tau_i\coloneqq X_{B,i}$ is an ${\cal{F}}_s$--stopping--time (see Lemma 2 below), we conclude from appeal to the \emph{optional stopping theorem} and \emph{compatibility of stopping with covariation} that the stopped process $M_i^{\tau_i}(s)\coloneqq M_i(s \wedge \tau_i)$
is a mean--zero ${\cal{F}}_s$--martingale with
\begin{align}\label{appA_eq03}
\begin{split}
&\bullet \quad [M_i^{\tau_i}](s)=[M_{i}](s \wedge \tau_i) = [M_{B,i}](s\wedge \tau_i) + [M_{J_A/Y_A}](s\wedge \tau_i), \\
&\bullet \quad [M_i^{\tau_i},M_j^{\tau_j}](s) = [M_{J_A/Y_A}](s\wedge \tau_i \wedge \tau_j) \quad \text{for } i \neq j.
\end{split}
\tag{A.3}
\end{align}
To see the last assertion in (\ref{appA_eq03}), use bilinearity of the covariation operator $[\cdot,\cdot]$ together with $[M_{B,i}^{\tau_i},M_{B,j}^{\tau_j}]=0$ and $[M_{B,i}^{\tau_i},M_{J_A/Y_A}^{\tau_j}]=0$ for $i \neq j$ by independence of the patients and treatment groups, where for any ${\cal{F}}_s$--adapted process $Q$ and any ${\cal{F}}_s$--stopping--time $\tau$ we use the common notation $Q^{\tau}(s)\coloneqq Q(s \wedge \tau)$.
We are finally interested in the large sample properties of the mean zero ${\cal{F}}_s$--martingale
\begin{align}\label{appA_eq03b}
\begin{split}
\widehat{M}(s)\coloneqq n_B^{-1/2} \sum_{i \in {\cal{N}}_B} M_i^{\tau_i}(s).
\end{split}
\tag{A.4}
\end{align}
First notice that the jump size of $\widehat{M}$ is bounded by $2n_B^{-1/2}$ and thus vanishes in the large sample limit $n \to \infty$, because the jump sizes of $M_{B,i}$ and $M_{J_A/Y_A}$ are bounded by $1$ and no two event indicators $N_{x,i}$ and $N_{x',j}$ jump simultaneously a.s.
Making use of $N_{B,i}(s \wedge \tau_i) = N_{B,i}(s)$ (see Lemma 3 below) and noticing that $n_A \cdot [M_{J_A/Y_A}](s) \equiv \widehat{\sigma}_A(s)$ from (\ref{Sec_02_01_eq05}), some algebra shows that $\widehat{M}$ has optional covariation $[\widehat{M}](s)=\widehat{\Sigma}^2(s)$. So, by Lemma 1 below, $[\widehat{M}](s)$ converges pointwise in $s$ in probability to the strictly increasing, deterministic function $\Sigma^2(s) = \lim_{n \to \infty} E[\widehat{\Sigma}^2(s)]$.
All in all, we conclude from appeal to Rebolledo's martingale central limit theorem that $\widehat{M}$ converges on $[0,s_0]$ in distribution to a mean zero Gaussian martingale $M^{(\infty)}$ with independent increments and variance function $\Sigma^2(s)$.
To finish the proof, it suffices to notice that the processes $\widehat{M}$ from (\ref{appA_eq03b}) and $\widehat{M}_0$ from (\ref{appA_eq01}) coincide in the large sample limit when the null hypothesis $H_0: \Lambda_A = \Lambda_B$ holds true.
\quad $\Box$
\ \\ \\
\textbf{Theorem 2.}
\textit{Fix $s_0>0$. Under the contiguous alternatives $\Lambda_B(\cdot)= \omega_n \Lambda_A(\cdot)$ with $\omega_n=\exp(-n^{-1/2}\gamma)$ for some $\gamma \geq 0$, the process $\widehat{M}_0$ defined in (\ref{appA_eq01}) converges on $[0,s_0]$ in distribution to a Gaussian process with independent increments, drift function $\mu(s)\coloneqq - \gamma \sqrt{\frac{\pi}{1+\pi}} \int_0^{\infty} F_{T_A}(s \wedge u) f_{C_A}(u) du$ and variance function $\Sigma^2(s)$ from Theorem 1.
}
{\small{\textsc{Proof.}}}
Under the contiguous alternatives, the difference between the mean--zero martingale $\widehat{M}$ from (\ref{appA_eq03b}) and $\widehat{M}_0$ is
\begin{align}\label{appA_eq14}
\begin{split}
\widehat{M}_0 - \widehat{M}
= n_B^{-1/2} \sum_{i \in {\cal{N}}_B} [ \Lambda_B(s \wedge X_{B,i}) - \Lambda_A(s \wedge X_{B,i})]
= n_B^{1/2}[ 1 - \exp(n^{-1/2} \gamma) ] \cdot n_B^{-1} \sum_{i \in {\cal{N}}_B} \Lambda_B(s \wedge X_{B,i}).
\end{split}
\tag{A.5}
\end{align}
As $n \to \infty$, the first factor converges to $- \gamma \sqrt{\frac{\pi}{1+\pi}}$, since $n_B^{1/2}[ 1 - \exp(n^{-1/2} \gamma) ] = -\gamma \sqrt{n_B/n} + O(n^{-1/2})$ and $n_B/n \to \pi/(1+\pi)$. Since $\Lambda_B \approx \Lambda_A$ under the contiguous alternatives when $n$ is large, the second factor converges in probability to $E[\Lambda_A(s \wedge X_{A,1})]$ by the law of large numbers as $n \to \infty$, which in turn coincides with $E[N_{A,1}(s)] = \int_0^{\infty} F_{T_A}(s \wedge u) f_{C_A}(u) du$ due to the martingale property of $M_{A,1}$. So the assertion follows from Theorem 1 and appeal to Slutsky's theorem.
\quad $\Box$
\ \\ \\
\textbf{Lemma 1.}
\textit{Let $s_0 > 0$ be given s.t. $S_{X_A}(s_0)=S_{T_A}(s_0)S_{C_A}(s_0)\eqqcolon p_0 > 0$. Let $\widehat{\Sigma}$ be the process defined in (\ref{appA_eq01}). Then there is a strictly increasing, deterministic function $\Sigma(s)$ with $\Sigma(0)=0$ such that pointwise for each $0 \leq s \leq s_0$, we have $\widehat{\Sigma}(s) \stackrel{{\cal{P}}}{\rightarrow} {\Sigma}(s)$ as $n \to \infty$. More specifically, $\Sigma(s) = \lim_{n \to \infty} E[\widehat{\Sigma}(s)]$.
}
\ \\ \\
{\small{\textsc{Proof.}}}
For this proof, we introduce the following abbreviations:
\begin{align}
\begin{split}
\Theta&\coloneqq n_B^{-1} n_A^{-1} \sum_{i,j \in {\cal{N}}_{B}} \widehat{\sigma}_A(s \wedge X_{B,i} \wedge X_{B,j}), \\
\Psi_{N_A}(s)&\coloneqq \int_0^s \frac{J_A(u)}{Y_A^2(u)} dN_A(u), \\
\Psi_{M_A}(s)&\coloneqq \int_0^s \frac{J_A(u)}{Y_A^2(u)} dM_A(u), \\
\Psi_{\Lambda_A}(s)&\coloneqq \int_0^s \frac{J_A(u)}{Y_A(u)} \lambda_A(u) du.
\end{split}
\tag{A.6}
\end{align}
Since $n_B^{-1} N_B(s) \stackrel{{\cal{P}}}{\rightarrow} E[N_{B,1}(s)]$ as $n \to \infty$ by law of large numbers, it remains to prove that $\Theta \stackrel{{\cal{P}}}{\rightarrow} \lim_{n \to \infty} E[\Theta]$ as $n \to \infty$. The proof decomposes into several steps.
From appeal to triangle inequality we conclude that for any $\varepsilon > 0$
\begin{align}\label{appA_eq04}
\begin{split}
P\left(|\Theta - \lim_{n \to \infty} E[\Theta]| \geq 2 \varepsilon \right)
\leq P\left(|\lim_{n \to \infty} E[\Theta] - E[\Theta]| \geq \varepsilon \right) + P\left(|\Theta - E[\Theta]| \geq \varepsilon \right) .
\end{split}
\tag{A.7}
\end{align}
By Lemma 6, $\lim_{n \to \infty} E[\Theta]$ exists, i.e. the first summand on the right hand side of equation (\ref{appA_eq04}) vanishes in the limit $n \to \infty$. By conditioning on the outcomes in group B, the second summand can be rewritten as
\begin{align*}
P\left(|\Theta - E[\Theta]| \geq \varepsilon \right) &= E\left[P\left(|\Theta - E[\Theta]| \geq \varepsilon \,\middle|\, X_B\right)\right]\\
&=\int_{[0,\infty)^{n_B}} P\left(|\Theta - E[\Theta]| \geq \varepsilon \,\middle|\, X_B=x_B\right) \cdot f_{X_B}(x_B) \, dx_B
\end{align*}
where $X_B=(X_{B,1}, \dots, X_{B,n_B})$ and $f_{X_B}(x_B)=\prod_{i=1}^{n_B} f_{X_{B,1}}(x_{B,i})$, as the observations from group B are independent and identically distributed. Now, if $P\left(|\Theta - E[\Theta]| \geq \varepsilon \,\middle|\, X_B=x_B\right)\leq c(n_A)$ for some function $c$ that depends only on $n_A$ and not on $x_B$, with $c(n_A) \to 0$ as $n_A \to \infty$, we also have $P\left(|\Theta - E[\Theta]| \geq \varepsilon \right) \to 0$ as $n \to \infty$.\\
By Chebyshev's inequality, the Cauchy-Schwarz inequality and since $\Psi_{M_A}(s) = \Psi_{N_A}(s) - \Psi_{\Lambda_A}(s)$ we have for any $x_B\in [0, \infty)^{n_B}$ and any $\varepsilon > 0$
\begin{align}\label{appA_eq05}
\begin{split}
P(|\Theta - E[\Theta]| \geq \varepsilon|X_B = x_B)
&\leq \frac{\text{Var}[\Theta|X_B = x_B]}{\varepsilon^2}\\
&= \text{Var}[n_B^{-1} n_A^{-1} \sum_{i,j \in \mathcal{N_B}} \widehat{\sigma}_A(s \wedge x_{B,i} \wedge x_{B,j})]\\
&\leq \frac{1}{n_A^2} \cdot \sum_{i,j \in \mathcal{N}_B} \text{Var} \left[\int_0^{s \wedge x_{B,i} \wedge x_{B,j}} n_A \cdot \frac{J_A(u)}{Y_A^2(u)} dN_A(u) \right]\\
&\leq \pi^2 \sup_{s^{\star} \in [0,s_0]} \text{Var} \left[\int_0^{s^{\star}} n_A \cdot \frac{J_A(u)}{Y_A^2(u)} dN_A(u) \right]\\
&\leq \pi^2 \left( \sup_{s^{\star} \in [0,s_0]} \sqrt{\text{Var}[n_A \cdot \Psi_{M_A}(s^{\star})]} + \sup_{s^{\star} \in [0,s_0]} \sqrt{\text{Var}[n_A \cdot \Psi_{\Lambda_A}(s^{\star})]} \right)^2
\end{split}
\tag{A.8}
\end{align}
where $\pi=n_B/n_A$ is the prefixed treatment group allocation ratio. Notice that the third inequality holds because $x_{B}$ is a fixed value and not a random variable. To finish the proof, we show that $\sup_{s^{\star} \in [0,s_0]} \text{Var}[n_A \cdot \Psi_{\Lambda_A}(s^{\star})] \to 0$ (Step I) and $\sup_{s^{\star} \in [0,s_0]} \text{Var}[n_A \cdot \Psi_{M_A}(s^{\star})] \to 0$ (Step II) as $n_A \to \infty$.
\emph{Proof of Step I}:
\begin{align*}
\text{Var}\left[n_A \cdot \Psi_{\Lambda_A}(s)\right] & = n_A^2 \cdot \text{Var}\left[ \int_0^s \frac{J_A(u)}{Y_A(u)} \lambda_A(u) du \right]\\
&= n_A^2 \cdot \int_0^s \int_0^s \lambda_A(u)\lambda_A(v)\cdot \underbrace{\text{Cov}\left[\frac{J_A(u)}{Y_A(u)}, \frac{J_A(v)}{Y_A(v)}\right]}_{\leq \sup_{s^{\star} \in [0, s_0]} \text{Var}\left[ \frac{J_A(s^{\star})}{Y_A(s^{\star})} \right]} du \, dv.
\end{align*}
For any $s \in[0,s_0]$, we also have
\begin{align*}
\text{Var} \left[ \frac{J_A(s)}{Y_A(s)} \right]&=E\left[\left( \frac{J_A(s)}{Y_A(s)} - E\left[ \frac{J_A(s)}{Y_A(s)}\right] \right)^2\right]\\
&\leq E\left[\left( \frac{J_A(s)}{Y_A(s)} - \frac{1}{n_A\cdot S_{X_A}(s)} \right)^2\right]\\
&\leq \frac{1}{n_A^2} \left|E\left[ \frac{n_A^2\cdot J_A(s)}{Y^2_A(s)} - \frac{1}{S_{X_A}(s)^2} \right]\right| + \frac{2}{n_A^2 \cdot S_{X_A}(s)} \left|E\left[ \frac{n_A\cdot J_A(s)}{Y_A(s)} - \frac{1}{S_{X_A}(s)} \right]\right|
\end{align*}
For both summands we can apply the results from Lemma 4.2 (i) of \cite{McKeague:1990}, as the probability that $Y_A(s)=0$ goes to zero uniformly on $[0,s_0]$, to get
\begin{equation*}
\text{Var} \left[ \frac{J_A(s)}{Y_A(s)} \right] \leq \frac{K}{n_A^3} + \frac{2K}{n_A^3\cdot p_0^3}
\end{equation*}
We can finally plug this estimate in to obtain
\begin{equation*}
\sup_{s^{\star} \in [0,s_0]} \text{Var}\left[n_A \cdot \Psi_{\Lambda_A}(s^{\star}) \right] \leq \Lambda_A^2(s_0) \cdot \left( \frac{K}{n_A} + \frac{2K}{n_A\cdot p_0^3} \right)
\end{equation*}
which goes to zero as $n_A \to \infty$.\\
\emph{Proof of Step II}:
$\Psi_{M_A}(s)$ is a mean zero ${\cal{F}}_s$--martingale since $M_A(s)$ is one. Consequently, $\Psi_{M_A}^2(s) - [\Psi_{M_A}](s)$ is a mean zero ${\cal{F}}_s$--martingale, where $[\Psi_{M_A}](s) = \int_0^s \frac{J_A(u)}{Y_A^4(u)} dN_A(u)$ is the optional variation process of $\Psi_{M_A}(s)$. Thus
\begin{align}\label{appA_eq07}
\begin{split}
\text{Var}\left[ \Psi_{M_A}(s) \right]
&= E \left[ \Psi_{M_A}^2(s) \right]
= E \left[ [\Psi_{M_A}](s) \right]
= E \left[ \int_0^s \frac{J_A(u)}{Y_A^4(u)} dN_A(u) \right] \\
&= E \left[ \int_0^s \frac{J_A(u)}{Y_A^3(u)} \lambda_A(u) du \right]
=\int_0^s E\left[\frac{J_A(u)}{Y^3_A(u)}\right] \lambda_A(u)\, du \\
&\leq \int_0^s \frac{4^3}{n_A^3 \cdot S^3_{X_A}(u)} \lambda_A(u) du
\leq \frac{4^3}{n_A^3 \cdot p_0^3} \cdot \Lambda_A(s_0).
\end{split}
\tag{A.9}
\end{align}
where the first inequality in the third row follows from Lemma 1 in \cite{Aalen:1976}. These inequalities hold for all $s\leq s_0$. We thus conclude that
\begin{align}\label{appA_eq08}
\begin{split}
\sup_{s^{\star} \in [0,s_0]}\text{Var}\left[ n_A \cdot \Psi_{M_A}(s^{\star}) \right]
\leq n_A^{-1} \Lambda_A(s_0) \cdot \frac{4^3}{p_0^3}.
\end{split}
\tag{A.10}
\end{align}
which finishes the proof.
\quad $\Box$
\ \\ \\
\textbf{Lemma 2.}
\textit{$\tau_i\coloneqq X_{B,i}$ is an ${\cal{F}}_s$--stopping time.
}
{\small{\textsc{Proof.}}}
$\{\tau_i > s\} = \{ T_{B,i} \wedge C_{B,i} > s\} = \{ T_{B,i} > s\} \cap \{ C_{B,i} > s\} \in {\cal{F}}_s$.
\quad $\Box$
\ \\ \\
\textbf{Lemma 3.}
\textit{For any two random variables $S$ and $T$ we have $I(T \leq S) = I(T \leq S \wedge T)$. In particular, $N_{B,i}(s \wedge \tau_i) = N_{B,i}(s)$ for the ${\cal{F}}_s$--stopping time $\tau_i\coloneqq X_{B,i}$.
}
{\small{\textsc{Proof.}}}
$\{ T \leq T \wedge S \} = \{ T \leq T \wedge S, T \leq S \} \cup \{ T \leq T \wedge S, T > S \} = \{ T \leq T, T \leq S \} \cup \{ T \leq S, T > S \} = \{ T \leq S \}$. The claim about $N_{B,i}$ follows since the counting process $N_{B,i}$ cannot jump after time $\tau_i = X_{B,i}$.
\quad $\Box$
\ \\ \\
\textbf{Lemma 4.}
\textit{Consider $\widehat{\sigma}_x(s)$ from Eq. (\ref{Sec_02_01_eq05}). Then, as $n \to \infty$,
\begin{align}\label{appA_eq09}
\widehat{\sigma}_x(s) \stackrel{{\cal{P}}}{\rightarrow} {\sigma}_x(s) \coloneqq \int_0^s \frac{\lambda_x(u)}{S_{T_x}(u) \cdot S_{C_x}(u)} du,
\tag{A.11}
\end{align}
where $S_{T_x}$ ($S_{C_x}$) is the survival function of the time to event $T_{x,i}$ (time to censoring $C_{x,i}$) in treatment group $x=A,B$.
}
{\small{\textsc{Proof.}}}
This can easily be shown with Helland's proposition \cite{Andersen} and the fact that $Y_{x}(u)/n_x \stackrel{{\cal{P}}}{\rightarrow} y_{x}(u) \equiv S_{T_x}(u) \cdot S_{C_x}(u)$.
\quad $\Box$
\ \\ \\
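We remark that the convergence in Lemma 4 is easy to probe numerically. The following minimal Monte Carlo sketch is not part of the formal development: it assumes the representation $\widehat{\sigma}_A(s)=\int_0^s n_A\, J_A(u)/Y_A^2(u)\, dN_A(u)$ as it appears in (A.8), exponential event and censoring times with illustrative rates \texttt{lam} and \texttt{mu}, and compares the estimator with the resulting closed-form limit $\sigma_A(s)=\lambda_A\{e^{(\lambda_A+\mu_A)s}-1\}/(\lambda_A+\mu_A)$ implied by (A.11).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, mu, s = 1.0, 0.5, 1.0    # hazard of T, hazard of C, evaluation point

def sigma_hat(n):
    """sigma_hat(s) = sum over event times t <= s of n / Y(t)^2,
    i.e. the integral n * J/Y^2 dN evaluated on simulated data."""
    T = rng.exponential(1 / lam, n)
    C = rng.exponential(1 / mu, n)
    X, event = np.minimum(T, C), T <= C
    order = np.argsort(X)
    Xs, ev = X[order], event[order]
    Y = n - np.arange(n)      # size of the risk set at the ordered times
    keep = ev & (Xs <= s)
    return (n / Y[keep] ** 2).sum()

sigma = lam * (np.exp((lam + mu) * s) - 1) / (lam + mu)  # limit (A.11)
for n in (100, 1000, 10000):
    est = np.mean([sigma_hat(n) for _ in range(100)])
    print(f"n = {n:5d}:  mean sigma_hat = {est:.4f},  sigma = {sigma:.4f}")
\end{verbatim}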
\textbf{Lemma 5.}
\textit{Let $X_{x,i}\coloneqq T_{x,i} \wedge C_{x,i}$. Then, for any $i \neq j$, the density $f_{X_{x,i}}(u)$ and survival function $S_{X_{x,i}}(u)$ of $X_{x,i}$ as well as the density $f_{X_{x,i} \wedge X_{x,j}}(u)$ of $X_{x,i} \wedge X_{x,j}$ are
\begin{align}\label{appA_eq10}
\begin{split}
f_{X_{x,i}}(u) &= f_{T_x}(u)S_{C_x}(u)+S_{T_x}(u)f_{C_x}(u) \\
S_{X_{x,i}}(u) &= S_{T_x}(u)S_{C_x}(u)\\
f_{X_{x,i} \wedge X_{x,j}}(u) &= 2 [f_{T_x}(u)S_{C_x}(u)+S_{T_x}(u)f_{C_x}(u)]S_{T_x}(u)S_{C_x}(u).
\end{split}
\tag{A.12}
\end{align}
where $f_{T_x}$ and $S_{T_x}$ ($f_{C_x}$ and $S_{C_x}$) are density and survival function of the time to event $T_{x,i}$ (time to censoring $C_{x,i}$) in treatment group $x=A,B$.
}
{\small{\textsc{Proof.}}}
Follows from elementary calculation with probability distributions using the independence of $T_{x,i}$, $T_{x,j}$, $C_{x,i}$ and $C_{x,j}$ for $i \neq j$.
\quad $\Box$
\ \\ \\
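As a quick sanity check of the third formula in (A.12), one can compare the closed-form density of $X_{x,1}\wedge X_{x,2}$ with an empirical estimate; the exponential rates, the evaluation point and the bandwidth in the following sketch are illustrative assumptions, not quantities from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, mu, N = 1.0, 0.5, 200_000   # illustrative rates and sample size
X1 = np.minimum(rng.exponential(1/lam, N), rng.exponential(1/mu, N))
X2 = np.minimum(rng.exponential(1/lam, N), rng.exponential(1/mu, N))
M = np.minimum(X1, X2)           # the minimum X_{x,1} ^ X_{x,2}

u, h = 0.7, 0.02                 # evaluation point, half-bandwidth
ST, SC = np.exp(-lam*u), np.exp(-mu*u)
fT, fC = lam*ST, mu*SC
f_closed = 2*(fT*SC + ST*fC)*ST*SC              # formula (A.12)
f_emp = ((M > u - h) & (M < u + h)).mean() / (2*h)
print(f_closed, f_emp)           # agree up to Monte Carlo error
\end{verbatim}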
\textbf{Lemma 6.}
\textit{Let $\widehat{\Sigma}$ be the process defined in (\ref{appA_eq01}). Then, pointwise in $s$, the limit $\Sigma^2(s) \coloneqq \lim_{n \to \infty} E[\widehat{\Sigma}^2(s)]$ exists. Under the contiguous alternatives $\Lambda_B(\cdot)= \omega_n \Lambda_A(\cdot)$ with $\omega_n=\exp(-n^{-1/2}\gamma)$ for some $\gamma \geq 0$, $\Sigma^2(s)$ amounts to
\begin{align}\label{appA_eq11}
\begin{split}
\Sigma^2(s) = \int_0^{\infty} F_{T_A}(s \wedge u) f_{C_A}(u) du + 2 \pi \cdot \int_0^{\infty} \sigma_A(s \wedge u) [f_{T_A}(u)S_{C_A}(u)+S_{T_A}(u)f_{C_A}(u)]S_{T_A}(u)S_{C_A}(u) du
\end{split}
\tag{A.13}
\end{align}
where $f_{T_x}$, $F_{T_x}$, $S_{T_x}$ ($f_{C_x}$, $F_{C_x}$, $S_{C_x}$) are density, distribution function and survival function of the time to event $T_{x,i}$ (time to censoring $C_{x,i}$) in treatment group $x=A,B$, where $\sigma_A(\cdot)$ is the function from (\ref{appA_eq09}), and $\pi = n_B/n_A$ the prefixed treatment arm allocation ratio.
}
{\small{\textsc{Proof.}}}
Since each of the families of random variables $\{N_{B,i}(s)\}_{i \in {\cal{N}}_B}$, $\{\widehat{\sigma}_A(s\wedge X_{B,i})\}_{i \in {\cal{N}}_B}$ and $\{\widehat{\sigma}_A(s \wedge X_{B,i} \wedge X_{B,j})\}_{i \neq j \in {\cal{N}}_B}$ is identically distributed, we have
\begin{align}\label{appA_eq12}
\begin{split}
E[\widehat{\Sigma}^2(s)] = E[N_{B,1}(s)] + \frac{1}{n_A}\, E[\widehat{\sigma}_A(s\wedge X_{B,1})] + \frac{n_B-1}{n_A}\, E[\widehat{\sigma}_A(s\wedge X_{B,1} \wedge X_{B,2})].
\end{split}
\tag{A.14}
\end{align}
Thus, the limit $\Sigma^2(s) = \lim_{n \to \infty} E[\widehat{\Sigma}^2(s)]$ exists.
Now additionally assume the contiguous alternatives. Then $\lim_{n \to \infty} E[N_{B,1}(s)] = E[N_{A,1}(s)]$, because $\lim_{n \to \infty} \Lambda_B = \Lambda_A$ under the contiguous alternatives. Moreover, by independence of the treatment groups and by (\ref{appA_eq09}), we conclude that
\begin{align}\label{appA_eq13}
\begin{split}
\lim_{n \to \infty} E[\widehat{\sigma}_A(s\wedge X_{B,i} \wedge X_{B,j})]
&= \lim_{n \to \infty} E_u[E[\widehat{\sigma}_A(s\wedge X_{B,i} \wedge X_{B,j})|X_{B,i} \wedge X_{B,j}=u]] \\
&= \lim_{n \to \infty} \int_0^{\infty} E[\widehat{\sigma}_A(s\wedge u)] f_{X_{B,i} \wedge X_{B,j}}(u) du \\
&= \int_0^{\infty} {\sigma}_A(s\wedge u) \lim_{n \to \infty} f_{X_{B,i} \wedge X_{B,j}}(u) du \\
&= \int_0^{\infty} {\sigma}_A(s\wedge u) f_{X_{A,i} \wedge X_{A,j}}(u) du.
\end{split}
\tag{A.15}
\end{align}
where the third equality holds by dominated convergence under application of the estimate from Lemma 1 in \cite{McKeague:1990} and the convergence of the expectation holds by Lemma 4.2 in \cite{Aalen:1976}. In particular, the second summand on the right hand side of (\ref{appA_eq12}) vanishes as $n \to \infty$. So we are done by supplying the explicit value of the density function $f_{X_{A,i} \wedge X_{A,j}}$ from (\ref{appA_eq10}). Notice that the last equality in (\ref{appA_eq13}) holds, because $\lim_{n \to \infty} \Lambda_B = \Lambda_A$ under the contiguous alternatives.
\quad $\Box$
\ \\ \\
\section*{Appendix B: Computation of the Expectation of $\widehat{\Sigma}^2(\infty)$ }\label{appB}
In this section we derive the expectation of $\widehat{\Sigma}^2(\infty)$ as shown in Section \ref{sec:case_study}. Firstly, $E[\widehat{\Sigma}^2(\infty)]$ can be decomposed into
\begin{equation*}
n_B^{-1} E[N_B(\infty)] + n_B^{-1} n_A^{-1} \sum_{i,j \in {\cal{N}}_{B}} E[\widehat{\sigma}_A(X_{B,i} \wedge X_{B,j})]
\end{equation*}
where the first summand is the same as the quantity in \eqref{eq:expectation_classical_variance}. For the second summand, we have
\begin{align*}
n_B^{-1} n_A^{-1} \sum_{i,j \in {\cal{N}}_{B}} E[\widehat{\sigma}_A(X_{B,i} \wedge X_{B,j})]
&=n_A^{-1} E[\widehat{\sigma}_A(X_{B,1})] + (n_B - 1) n_A^{-1} E[\widehat{\sigma}_A(X_{B,1} \wedge X_{B,2})]\\
&\rightarrow \pi \cdot E[\widehat{\sigma}_A(X_{B,1} \wedge X_{B,2})]
\end{align*}
as $n\to\infty$. The expectation on the right hand side is given by
\begin{align*}
&\int_0^{\infty} \widehat{\sigma}_A(u)\, dF_{X_{B,1} \wedge X_{B,2}}(u)\\
=&-\int_0^{\infty} \widehat{\sigma}_A(u)\, d(S_{T_B}S_{C_B})^2(u)
= -2\int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}(u)S_{C_B}(u)\, d(S_{T_B}S_{C_B})(u)\\
=&- 2\int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}^2(u)S_{C_B}(u)\, dS_{C_B}(u) - 2\int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}(u)S_{C_B}^2(u)\, dS_{T_B}(u)\\
=&\,2\int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}^2(u)S_{C_B}(u)\, dF_{C_B}(u) + 2\int_0^{\infty} \widehat{\sigma}_A(u)S_{T_B}(u)S_{C_B}^2(u)\, dF_{T_B}(u)
where we used $F_{X_{B,1} \wedge X_{B,2}} = 1-(S_{T_B}S_{C_B})^2$ from Lemma 5 and applied the product rule.
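This manipulation can be verified numerically. The following quadrature sketch is illustrative only: it substitutes the limit $\sigma_A$ from (A.11) for $\widehat{\sigma}_A$ and assumes exponential event and censoring times with illustrative rates; both sides of the display then agree up to quadrature error.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lA = mA = lB = mB = 1.0   # hazards of T_A, C_A, T_B, C_B (illustrative)
UB = 50.0                 # truncation of the infinite range; the
                          # integrands decay exponentially, so the
                          # neglected tail is negligible
sigma = lambda u: lA * (np.exp((lA + mA) * u) - 1) / (lA + mA)
ST = lambda u: np.exp(-lB * u); fT = lambda u: lB * np.exp(-lB * u)
SC = lambda u: np.exp(-mB * u); fC = lambda u: mB * np.exp(-mB * u)

# left-hand side: integral of sigma_A against dF of X_{B,1} ^ X_{B,2}
f_min = lambda u: 2 * (fT(u) * SC(u) + ST(u) * fC(u)) * ST(u) * SC(u)
lhs = quad(lambda u: sigma(u) * f_min(u), 0, UB)[0]
# right-hand side: the two integrals in the last line of the display
rhs = 2 * quad(lambda u: sigma(u) * ST(u)**2 * SC(u) * fC(u), 0, UB)[0] \
    + 2 * quad(lambda u: sigma(u) * ST(u) * SC(u)**2 * fT(u), 0, UB)[0]
print(lhs, rhs)           # agree up to quadrature error
\end{verbatim}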
\newpage
\section*{Appendix C: Empirical type I error rates in scenarios with high survival rates}\label{appC}
\begin{table}[!h]
\caption{Empirical type I error rates ($\alpha_{new}$ and $\alpha_{LR}$) and powers ($1-\beta_{new}$ and $1-\beta_{LR}$) for the new test and for the classical two--sample log--rank test, respectively, under proportional hazards alternatives for Weibull distributed survival times with shape parameter $\kappa$ and 1--year survival rate $S_1=0.8$ in the control arm. Theoretical two--sided significance level: $5 \%$. Underlying total sample size $n$ (or $n'$) in Scenario 1 (or Scenario 2) calculated to achieve a theoretical power of $80 \%$ under the planning alternative $H_1:\Lambda_B = \omega_0 \cdot \Lambda_A$ for the classical log--rank test using Schoenfeld's formula (or for the new test using formula (\ref{Sec_04_eq03})). \label{tab:LowEventRate}}
\begin{center}
\small
\begin{tabular}{cc|ccccc|ccccc}
\toprule
& & \multicolumn{5}{c|}{Scenario 1} & \multicolumn{5}{c}{Scenario 2} \\[0.14cm]
$\kappa$ & $\omega_0$ & $n$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ & $n'$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ \\
\midrule
0.1 & 0.50 & 372 & 0.048 & 0.049 & 0.785 & 0.784 & 294 & 0.048 & 0.050 & 0.677 & 0.675 \\
0.1 & 0.67 & 968 & 0.049 & 0.049 & 0.799 & 0.797 & 845 & 0.049 & 0.049 & 0.738 & 0.737 \\
0.1 & 0.80 & 2736 & 0.049 & 0.049 & 0.799 & 0.798 & 2554 & 0.050 & 0.050 & 0.771 & 0.770 \\[0.14cm]
0.25 & 0.50 & 308 & 0.049 & 0.048 & 0.787 & 0.785 & 253 & 0.047 & 0.049 & 0.696 & 0.696 \\
0.25 & 0.67 & 766 & 0.047 & 0.049 & 0.800 & 0.798 & 693 & 0.045 & 0.046 & 0.756 & 0.754 \\
0.25 & 0.80 & 2032 & 0.049 & 0.050 & 0.803 & 0.800 & 1995 & 0.049 & 0.049 & 0.786 & 0.783 \\[0.14cm]
0.5 & 0.50 & 232 & 0.050 & 0.052 & 0.796 & 0.792 & 200 & 0.047 & 0.048 & 0.724 & 0.721 \\
0.5 & 0.67 & 554 & 0.048 & 0.048 & 0.806 & 0.800 & 523 & 0.047 & 0.048 & 0.775 & 0.770 \\
0.5 & 0.80 & 1386 & 0.045 & 0.045 & 0.806 & 0.800 & 1376 & 0.045 & 0.046 & 0.800 & 0.795 \\[0.14cm]
0.75 & 0.50 & 182 & 0.047 & 0.048 & 0.801 & 0.798 & 161 & 0.047 & 0.050 & 0.731 & 0.732 \\
0.75 & 0.67 & 426 & 0.048 & 0.049 & 0.810 & 0.801 & 412 & 0.048 & 0.049 & 0.790 & 0.781 \\
0.75 & 0.80 & 1048 & 0.048 & 0.048 & 0.814 & 0.803 & 1054 & 0.047 & 0.048 & 0.817 & 0.806 \\[0.14cm]
1 & 0.50 & 146 & 0.048 & 0.051 & 0.805 & 0.800 & 141 & 0.046 & 0.050 & 0.784 & 0.782 \\
1 & 0.67 & 344 & 0.050 & 0.050 & 0.814 & 0.802 & 354 & 0.049 & 0.048 & 0.827 & 0.814 \\
1 & 0.80 & 984 & 0.053 & 0.052 & 0.818 & 0.801 & 892 & 0.051 & 0.050 & 0.837 & 0.819 \\[0.14cm]
1.25 & 0.50 & 120 & 0.047 & 0.052 & 0.800 & 0.794 & 110 & 0.046 & 0.050 & 0.760 & 0.756 \\
1.25 & 0.67 & 288 & 0.050 & 0.051 & 0.817 & 0.800 & 283 & 0.050 & 0.052 & 0.806 & 0.791 \\
1.25 & 0.80 & 748 & 0.052 & 0.050 & 0.823 & 0.801 & 792 & 0.052 & 0.051 & 0.826 & 0.804 \\[0.14cm]
1.5 & 0.50 & 102 & 0.047 & 0.051 & 0.806 & 0.799 & 94 & 0.046 & 0.053 & 0.766 & 0.759 \\
1.5 & 0.67 & 252 & 0.049 & 0.050 & 0.818 & 0.798 & 246 & 0.051 & 0.051 & 0.809 & 0.790 \\
1.5 & 0.80 & 688 & 0.051 & 0.051 & 0.817 & 0.799 & 691 & 0.050 & 0.050 & 0.814 & 0.799 \\[0.14cm]
2 & 0.50 & 80 & 0.043 & 0.057 & 0.797 & 0.798 & 71 & 0.036 & 0.056 & 0.707 & 0.735 \\
2 & 0.67 & 212 & 0.048 & 0.056 & 0.803 & 0.799 & 202 & 0.046 & 0.054 & 0.785 & 0.776 \\
2 & 0.80 & 644 & 0.051 & 0.052 & 0.815 & 0.802 & 631 & 0.049 & 0.052 & 0.800 & 0.793 \\[0.14cm]
5 & 0.50 & 66 & 0.026 & 0.059 & 0.402 & 0.779 & 66 & 0.026 & 0.059 & 0.402 & 0.779 \\
5 & 0.67 & 196 & 0.041 & 0.055 & 0.742 & 0.792 & 196 & 0.041 & 0.055 & 0.742 & 0.792 \\
5 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 631 & 0.050 & 0.052 & 0.805 & 0.798 \\[0.14cm]
\bottomrule
\end{tabular}
\end{center}
\end{table}
\newpage
\section*{Appendix D: Empirical type I error rates in scenarios with low survival rates}\label{appD}
\begin{table}[!h]
\caption{Empirical type I error rates ($\alpha_{new}$ and $\alpha_{LR}$) and powers ($1-\beta_{new}$ and $1-\beta_{LR}$) for the new test and for the classical two--sample log--rank test, respectively, under proportional hazards alternatives for Weibull distributed survival times with shape parameter $\kappa$ and 1--year survival rate $S_1=0.2$ in the control arm. Theoretical two--sided significance level: $5 \%$. Underlying total sample size $n$ (or $n'$) in Scenario 1 (or Scenario 2) calculated to achieve a theoretical power of $80 \%$ under the planning alternative $H_1:\Lambda_B = \omega_0 \cdot \Lambda_A$ for the classical log--rank test using Schoenfeld's formula (or for the new test using formula (\ref{Sec_04_eq03})). \label{tab:HighEventRate}}
\begin{center}
\small
\begin{tabular}{cc|ccccc|ccccc}
\toprule
& & \multicolumn{5}{c|}{Scenario 1} & \multicolumn{5}{c}{Scenario 2} \\[0.14cm]
$\kappa$ & $\omega_0$ & $n$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ & $n'$ & $\alpha_{new}$ & $\alpha_{LR}$ & $1-\beta_{new}$ & $1-\beta_{LR}$ \\
\midrule
0.1 & 0.50 & 92 & 0.051 & 0.053 & 0.815 & 0.804 & 79 & 0.047 & 0.055 & 0.736 & 0.736 \\
0.1 & 0.67 & 252 & 0.049 & 0.051 & 0.814 & 0.799 & 235 & 0.048 & 0.052 & 0.782 & 0.770 \\
0.1 & 0.80 & 768 & 0.051 & 0.051 & 0.817 & 0.804 & 743 & 0.050 & 0.048 & 0.802 & 0.790 \\[0.14cm]
0.25 & 0.50 & 86 & 0.052 & 0.049 & 0.818 & 0.801 & 75 & 0.045 & 0.057 & 0.745 & 0.743 \\
0.25 & 0.67 & 234 & 0.052 & 0.049 & 0.818 & 0.801 & 222 & 0.054 & 0.053 & 0.803 & 0.781 \\
0.25 & 0.80 & 706 & 0.055 & 0.052 & 0.824 & 0.806 & 694 & 0.053 & 0.052 & 0.815 & 0.798 \\[0.14cm]
0.5 & 0.50 & 78 & 0.040 & 0.058 & 0.818 & 0.808 & 71 & 0.033 & 0.057 & 0.748 & 0.761 \\
0.5 & 0.67 & 214 & 0.055 & 0.054 & 0.833 & 0.805 & 207 & 0.053 & 0.055 & 0.821 & 0.788 \\
0.5 & 0.80 & 654 & 0.053 & 0.051 & 0.832 & 0.803 & 650 & 0.055 & 0.051 & 0.830 & 0.800 \\[0.14cm]
0.75 & 0.50 & 72 & 0.030 & 0.059 & 0.750 & 0.799 & 68 & 0.029 & 0.059 & 0.709 & 0.776 \\
0.75 & 0.67 & 204 & 0.051 & 0.056 & 0.823 & 0.801 & 200 & 0.048 & 0.055 & 0.816 & 0.795 \\
0.75 & 0.80 & 636 & 0.052 & 0.051 & 0.822 & 0.799 & 635 & 0.052 & 0.053 & 0.817 & 0.798 \\[0.14cm]
1 & 0.50 & 68 & 0.027 & 0.059 & 0.599 & 0.789 & 66 & 0.026 & 0.060 & 0.568 & 0.775 \\
1 & 0.67 & 198 & 0.044 & 0.055 & 0.784 & 0.797 & 197 & 0.039 & 0.054 & 0.764 & 0.792 \\
1 & 0.80 & 632 & 0.053 & 0.052 & 0.815 & 0.798 & 632 & 0.053 & 0.052 & 0.815 & 0.798 \\[0.14cm]
1.25 & 0.50 & 68 & 0.027 & 0.058 & 0.506 & 0.793 & 66 & 0.026 & 0.059 & 0.476 & 0.779 \\
1.25 & 0.67 & 198 & 0.042 & 0.056 & 0.760 & 0.798 & 196 & 0.041 & 0.055 & 0.751 & 0.792 \\
1.25 & 0.80 & 632 & 0.053 & 0.052 & 0.812 & 0.799 & 631 & 0.050 & 0.052 & 0.808 & 0.798 \\[0.14cm]
1.5 & 0.50 & 66 & 0.026 & 0.059 & 0.424 & 0.779 & 66 & 0.026 & 0.059 & 0.424 & 0.779 \\
1.5 & 0.67 & 196 & 0.041 & 0.055 & 0.744 & 0.792 & 196 & 0.041 & 0.055 & 0.744 & 0.792 \\
1.5 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 631 & 0.050 & 0.052 & 0.805 & 0.798 \\[0.14cm]
2 & 0.50 & 66 & 0.026 & 0.059 & 0.403 & 0.779 & 66 & 0.026 & 0.059 & 0.403 & 0.779 \\
2 & 0.67 & 196 & 0.041 & 0.055 & 0.742 & 0.792 & 196 & 0.041 & 0.055 & 0.742 & 0.792 \\
2 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 630 & 0.052 & 0.052 & 0.809 & 0.798 \\[0.14cm]
5 & 0.50 & 66 & 0.026 & 0.059 & 0.402 & 0.779 & 66 & 0.026 & 0.059 & 0.402 & 0.779 \\
5 & 0.67 & 196 & 0.041 & 0.055 & 0.742 & 0.792 & 195 & 0.037 & 0.055 & 0.724 & 0.788 \\
5 & 0.80 & 632 & 0.053 & 0.052 & 0.811 & 0.799 & 631 & 0.050 & 0.052 & 0.805 & 0.798 \\[0.14cm]
\bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Supporting information}
\paragraph{S1 File.}
\label{S1_File}
{\bf R code.} Supplementary R code for the calculation of the test statistic $Z$ from \eqref{Sec_03_02_eq02} and corresponding two-sided $p$-values as well as functions for the analytic calculation of power and sample size (see \eqref{Sec_04_eq02} and \eqref{Sec_04_eq03}). Additionally, we provide another function to compute empirical type I and II errors on which our simulation study is based.
|
2,877,628,091,224 | arxiv | \section{Introduction}\label{s:intro}
\subsection{Interval and circular-arc hypergraphs}
An \emph{interval ordering} of a hypergraph $\ensuremath{\mathcal{H}}$ with a finite vertex set $V=V(\ensuremath{\mathcal{H}})$ is
a linear ordering $v_1,\ldots,v_n$ of $V$ such that every hyperedge of $\ensuremath{\mathcal{H}}$ is an interval of
consecutive vertices. This notion admits generalization to an \emph{arc ordering} where
$v_1,\dots,v_n$ is \emph{circularly ordered} (i.e., $v_1$ succeeds $v_n$)
so that every hyperedge is an \emph{arc} of consecutive vertices.
An \emph{interval hypergraph} is a hypergraph
admitting an interval ordering.
Similarly, if a hypergraph admits an arc ordering, we call it \emph{circular-arc}
(using also the shorthand \emph{CA}).
In the terminology stemming from computational genomics,
interval hypergraphs are exactly those hypergraphs
whose incidence matrix has the \emph{consecutive ones property}; e.g.,~\cite{Dom09}.
Similarly, a hypergraph is CA exactly when its
incidence matrix has the \emph{circular ones property}; e.g.,~\cite{HM03,GPZ08,OBS11}.
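In computational terms, verifying the consecutive (or circular) ones property for a \emph{given} ordering is straightforward. The following Python sketch is ours and purely illustrative: it checks whether an ordering of the vertices is an interval or an arc ordering of a hypergraph whose hyperedges are given as sets.
\begin{verbatim}
def is_interval_ordering(order, hyperedges):
    pos = {v: i for i, v in enumerate(order)}
    for H in hyperedges:
        p = sorted(pos[v] for v in H)
        if p and p[-1] - p[0] + 1 != len(p):     # not consecutive
            return False
    return True

def is_arc_ordering(order, hyperedges):
    n, pos = len(order), {v: i for i, v in enumerate(order)}
    for H in hyperedges:
        p = sorted(pos[v] for v in H)
        if 0 < len(p) < n:
            # an arc has exactly one circular gap between its positions
            gaps = sum((p[(i + 1) % len(p)] - p[i]) % n != 1
                       for i in range(len(p)))
            if gaps != 1:
                return False
    return True

H = [{'a', 'b'}, {'b', 'c'}, {'c', 'd'}, {'d', 'a'}]
print(is_interval_ordering(['a', 'b', 'c', 'd'], H))  # False: {d,a} breaks
print(is_arc_ordering(['a', 'b', 'c', 'd'], H))       # True: all are arcs
\end{verbatim}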
Our goal is to study the conditions under which interval and circular-arc hypergraphs
are \emph{rigid} in the sense that they have a unique interval or, respectively, arc
ordering. Since any interval (or arc) ordering can be changed to another interval (or arc)
ordering by reversing, we always mean uniqueness \emph{up to reversal}.
An obvious necessary condition for uniqueness
is that a hypergraph has no \emph{twins}, that is, no two vertices such that
every hyperedge contains either both or none of them.
We say that two sets~$A$ and~$B$ \emph{overlap} and write $A\between B$ if
$A$ and $B$ have nonempty intersection and neither of the two sets includes the other.
To facilitate notation, we use the same character $\ensuremath{\mathcal{H}}$ to denote a hypergraph
and the set of its hyperedges.
We call $\ensuremath{\mathcal{H}}$ \emph{overlap-connected} if it has no isolated vertex
(i.e., every vertex is contained in a hyperedge) and the graph
$(\ensuremath{\mathcal{H}},\between)$ is connected.
As a starting point, we refer to the following rigidity result.
\begin{theorem}[Chen and Yesha~\cite{ChenY91}]\label{thm:unique-overlap-1}
A twin-free, overlap-connected interval hypergraph has,
up to reversal, a unique interval ordering.
\end{theorem}
If we want to extend this result to CA hypergraphs,
the overlap-connectedness obviously does not suffice.
For example, the twin-free overlap-connected hypergraph $\ensuremath{\mathcal{H}}=\big\{\{a,b\},\{a,b,c\},\{b,c,d\}\big\}$
has essentially different arc orderings.
We, therefore, need a stronger
notion of connectedness. When $A$ and $B$ are overlapping subsets of $V$
(i.e.,~$A\between B$) that additionally satisfy $A\cup B\ne V$, we say that
$A$ and~$B$ \emph{strictly overlap} and write $A\between^* B$.
Quilliot~\cite{Quilliot84} proves that
a CA hypergraph~$\ensuremath{\mathcal{H}}$ on~$n$ vertices has a unique
arc ordering if and only if for every set $X\subset V(\ensuremath{\mathcal{H}})$ with
$1<|X|< n-1$ there exists a hyperedge $H\in\ensuremath{\mathcal{H}}$ such that $H\between^* X$.
Note that this criterion does not admit efficient verification
as it involves quantification over all subsets~$X$.
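On small instances the criterion can nevertheless be checked by brute force. The following sketch (ours, for illustration only; it inlines the arc-ordering checker from the previous sketch so that the block is self-contained) re-examines the example hypergraph from above: Quilliot's condition fails, and the enumeration exhibits essentially different arc orderings.
\begin{verbatim}
from itertools import combinations, permutations

def is_arc_ordering(order, hyperedges):
    n, pos = len(order), {v: i for i, v in enumerate(order)}
    for H in hyperedges:
        p = sorted(pos[v] for v in H)
        if 0 < len(p) < n and sum(
                (p[(i + 1) % len(p)] - p[i]) % n != 1
                for i in range(len(p))) != 1:
            return False
    return True

def quilliot_rigid(V, hyperedges):
    """True iff every X with 1 < |X| < n-1 strictly overlaps a hyperedge."""
    for k in range(2, len(V) - 1):
        for X in map(set, combinations(V, k)):
            if not any(H & X and not H <= X and not X <= H
                       and H | X != set(V) for H in hyperedges):
                return False
    return True

V = ['a', 'b', 'c', 'd']
H = [{'a', 'b'}, {'a', 'b', 'c'}, {'b', 'c', 'd'}]
print(quilliot_rigid(V, H))                     # False: not rigid
# enumerate arc orderings: fix 'a' first (circular orders are
# rotation-invariant) and identify each ordering with its reversal
found = {min(o, (o[0],) + o[:0:-1])
         for o in ((('a',) + t) for t in permutations(V[1:]))
         if is_arc_ordering(list(o), H)}
print(sorted(found))   # more than one class: essentially different
\end{verbatim}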
We call a hypergraph~$\ensuremath{\mathcal{H}}$
\emph{strictly overlap-connected} if it has no isolated vertex and the graph
$(\ensuremath{\mathcal{H}},\between^*)$ is connected.
We prove the following analog of Theorem~\ref{thm:unique-overlap-1}
for CA hypergraphs.
\begin{theorem}\label{thm:unique-overlap-2}
A twin-free, strictly overlap-connected CA hypergraph has,
up to reversal, a unique arc ordering.
\end{theorem}
\subsection{Tight orderings}
Let us use notation $A\bowtie B$ to say that sets $A$ and $B$
have a non-empty intersection. By the standard terminology,
a hypergraph $\ensuremath{\mathcal{H}}$ is \emph{connected} if it has no isolated vertex
and the graph $(\ensuremath{\mathcal{H}},\bowtie)$ is connected.
Note that the assumption made in Theorem~\ref{thm:unique-overlap-1}
cannot be weakened just to connectedness;
consider $\ensuremath{\mathcal{H}}=\big\{\{a\},\{a,b,c\}\big\}$ as the simplest example.
Thus, if we want to weaken the assumption,
we have also to weaken the conclusion.
Call an arc ordering of a hypergraph $\ensuremath{\mathcal{H}}$ \emph{tight} if,
for any two hyperedges $A$ and $B$ such that
$\emptyset\ne A\subseteq B\ne V$,
the corresponding arcs share an endpoint.
The definition of a \emph{tight interval ordering}
is similar: We require that the arcs corresponding
to hyperedges $A$ and $B$ share an endpoint whenever $\emptyset\ne A\subseteq B$
(the condition $B\ne V$ is now dropped as the complete interval $V$ has two endpoints, while the
complete arc $V$ has none).\footnote{%
The class of hypergraphs admitting a tight interval ordering
is characterized in terms of forbidden subhypergraphs in~\cite{Moore77};
such a characterization of interval hypergraphs is given in~\cite{TrotterM76}.}
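For concreteness, tightness is also easy to test mechanically. The following sketch is ours and presupposes that the given circular order is already an arc ordering; it extracts the endpoints of each hyperedge's arc and checks the endpoint condition.
\begin{verbatim}
def endpoints(H, order):
    """Endpoints of the arc of H (assumed a nonempty, proper arc)."""
    n, pos = len(order), {v: i for i, v in enumerate(order)}
    p = sorted(pos[v] for v in H)
    for i in range(len(p)):          # locate the unique circular gap
        if (p[(i + 1) % len(p)] - p[i]) % n != 1:
            return {order[p[(i + 1) % len(p)]], order[p[i]]}

def is_tight_arc_ordering(order, hyperedges):
    V = set(order)
    return all(endpoints(A, order) & endpoints(B, order)
               for A in hyperedges for B in hyperedges
               if A and A < B and B != V)
\end{verbatim}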
For nonempty $A$ and~$B$, note that
$A\bowtie B$ iff $A\between B$ or $A\subseteq B$ or $A\supseteq B$.
By similarity, we define
\[
A\bowtie^* B\text{ iff }A\between^* B\text{ or }A\subseteq B\text{ or }A\supseteq B
\]
and say that such two nonempty sets \emph{strictly intersect}.
We call a hypergraph~$\ensuremath{\mathcal{H}}$
\emph{strictly connected} if it has no isolated vertex and the graph
$(\ensuremath{\mathcal{H}},\bowtie^*)$ is connected.
In Section~\ref{s:hgs} we establish the following result.
\begin{theorem}\label{thm:unique}
\begin{bfenumerate}
\item
A twin-free, connected hypergraph has, up to reversal,
at most one tight interval ordering.
\item
A twin-free, strictly connected hypergraph has, up to reversal,
at most one tight arc ordering.
\end{bfenumerate}
\end{theorem}
\subsection{The neighborhood hypergraphs of proper interval and proper circular-arc graphs}
For a vertex $v$ of a graph $G$, the set of vertices adjacent to $v$
is denoted by $N(v)$. Furthermore, $N[v]=N(v)\cup\{v\}$.
We define the \emph{closed neighborhood hypergraph} of~$G$
by $\ensuremath{\mathcal{N}}[G]=\{N[v]\}_{v\in V(G)}$.
Roberts~\cite{Roberts71} discovered that $G$ is a proper interval graph
if and only if $\ensuremath{\mathcal{N}}[G]$ is an interval hypergraph.
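As a small illustration of Roberts' characterization, consider the following sketch (ours; graphs are encoded as adjacency dictionaries, and the interval test is brute force over all orderings, so it is suitable only for tiny examples).
\begin{verbatim}
from itertools import permutations

def closed_neighborhood_hypergraph(adj):
    """N[G]: the hyperedges N[v] = {v} union N(v)."""
    return [{v, *adj[v]} for v in adj]

def has_interval_ordering(vertices, hyperedges):
    def ok(order):
        pos = {v: i for i, v in enumerate(order)}
        for H in hyperedges:
            p = sorted(pos[v] for v in H)
            if p and p[-1] - p[0] + 1 != len(p):
                return False
        return True
    return any(ok(o) for o in permutations(vertices))

path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}         # P_3
cyc4 = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}  # C_4
print(has_interval_ordering(list(path),
                            closed_neighborhood_hypergraph(path)))
print(has_interval_ordering(list(cyc4),
                            closed_neighborhood_hypergraph(cyc4)))
# True for the path (a proper interval graph) and False for the
# 4-cycle (not an interval graph), as the characterization predicts
\end{verbatim}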
The case of proper circular-arc (PCA) graphs is more complex.\footnote{For a
definition of proper interval and PCA graphs, see the beginning of
Section~\ref{s:Nhgs}.}
If $G$ is a PCA graph, then $\ensuremath{\mathcal{N}}[G]$ is a CA hypergraph.
The converse is not always true.
The class of graphs with circular-arc closed neighborhood hypergraphs,
known as \emph{concave-round graphs}~\cite{Bang-JHY00},
contains PCA graphs as a proper subclass.
Taking a closer look at the relationship between PCA graphs
and CA hypergraphs, Tucker~\cite{Tucker71}\footnote{%
Tucker~\cite{Tucker71} uses an equivalent language of matrices.}
distinguishes the case when
the complement graph $\overline{G}$ is non-bipartite and shows that then
$G$ is PCA exactly when $\ensuremath{\mathcal{N}}[G]$ is~CA.
Our interest in tight orderings has the following motivation.
In fact, $G$ is a proper interval graph
if and only if the hypergraph $\ensuremath{\mathcal{N}}[G]$ has a tight interval ordering
(this follows from the Roberts theorem and Lemma~\ref{lem:geomistight} in
Section~\ref{s:Nhgs}). Moreover, $G$ is a PCA graph
if and only if $\ensuremath{\mathcal{N}}[G]$ has a tight arc ordering
(we observed this in~\cite{fsttcs} based on Lemma~\ref{lem:geomistight}
and Tucker's analysis in~\cite{Tucker71}).
Now, it is natural to consider the connectedness properties of $\ensuremath{\mathcal{N}}[G]$
for proper interval and PCA graphs and derive from here
rigidity results. For proper interval graphs this issue has been
studied in the literature earlier, but we discuss also this class of graphs for
expository purposes.
We call two vertices~$u$ and~$v$ of a graph $G$ \emph{twins} if $N[u]=N[v]$.
Note that $u$ and~$v$ are twins in the graph $G$ if
and only if they are twins in the hypergraph $\ensuremath{\mathcal{N}}[G]$.
Thus, the absence of twins in $G$ is a necessary condition for rigidity of $\ensuremath{\mathcal{N}}[G]$.
Another obvious necessary condition is the connectedness of $G$ (and, hence, of
$\ensuremath{\mathcal{N}}[G]$).\footnote{Small graphs are an exception, as all interval orderings
of at most two vertices are the same up to reversal, and all arc orderings of
up to three vertices are the same up to reversal and rotation.}
By Theorem~\ref{thm:unique}.1, if a proper interval graph $G$ is
twin-free and connected, then $\ensuremath{\mathcal{N}}[G]$ has a unique tight interval ordering.
Making the same assumptions, Roberts~\cite{Roberts71} proves that
even an interval ordering of $\ensuremath{\mathcal{N}}[G]$ is unique.
Suppose now that $G$ is a PCA graph. Consider first the case
when $\overline{G}$ is non-bipartite. In Section~\ref{s:Nhgs} we prove
that then $\ensuremath{\mathcal{N}}[G]$ is strictly connected. Theorem~\ref{thm:unique}.2
applies and shows that, if $G$ is also twin-free and connected, then
$\ensuremath{\mathcal{N}}[G]$ has a unique tight arc ordering.
Moreover, we prove that any arc ordering of $\ensuremath{\mathcal{N}}[G]$
is tight and, hence, unique as well.
If $\overline{G}$ is bipartite, it is convenient to switch to
the complement hypergraph $\overline{\ensuremath{\mathcal{N}}[G]}=\{V(G)\setminus N[v]\}_{v\in V(G)}$.
This hypergraph is interval. Applying Theorem~\ref{thm:unique}.1
to the connected components of $\overline{\ensuremath{\mathcal{N}}[G]}$,
we conclude that $\ensuremath{\mathcal{N}}[G]$ has, up to reversing, exactly two tight arc orderings
provided $\overline{G}$ is connected.
In~\cite{KoeblerKLV11} we noticed that,
if a proper interval graph $G$ is connected, then the hypergraph
$\ensuremath{\mathcal{N}}[G]\setminus\{V(G)\}$ is overlap-connected.
This allows us to derive Roberts' aforementioned rigidity result
from Theorem~\ref{thm:unique-overlap-1}.
In Section~\ref{s:ov-conn}, we use Theorem~\ref{thm:unique-overlap-2}
to obtain a similar result for PCA graphs:
If $G$ is an $n$-vertex connected PCA graph with non-bipartite complement,
then removal of all $(n-1)$-vertex hyperedges from $\ensuremath{\mathcal{N}}[G]$
gives a strictly overlap-connected hypergraph.
\subsection{Intersection representations of graphs}
A proper interval representation $\alpha$ of a graph $G$
determines a linear ordering of $V(G)$
accordingly to the appearance of the left (or, equivalently, right)
endpoints of the intervals $\alpha(v)$, $v\in V(G)$, in the intersection model.
We call it the \emph{geometric order} associated with $\alpha$.
Similarly, a PCA representation of $G$ determines the \emph{geometric}
circular order on the vertex set. Any geometric order is a tight
interval or, respectively, arc ordering of $\ensuremath{\mathcal{N}}[G]$
(see Lemma~\ref{lem:geomistight}). The rigidity results
overviewed above imply that the geometric order is unique
for twin-free, connected proper interval graphs and
twin-free, connected PCA graphs with non-bipartite complement.
In Section~\ref{s:repr} we show that this holds true also
in the case of PCA graphs with bipartite connected complement.
Let us impose reasonable restrictions on proper interval and PCA
models of graphs. Specifically, we always suppose that a model of an $n$-vertex
graph has $2n$ points and consists of intervals/arcs that never
share an endpoint. It turns out that such intersection representations
are determined by the associated geometric order uniquely up to
reflection (and rotation in the case of arc representations).
This implies that any twin-free, connected proper interval
or PCA graph has a unique intersection representation.
The last result is implicitly contained in the work by Deng, Hell, and Huang~\cite{DengHH96}, which relies on
a theory of local tournaments~\cite{Huang95}; see the discussion in the end of Section~\ref{s:repr}.
\section{Interval and circular-arc hypergraphs}\label{s:hgs}
Let $V=\{v_1,\dots,v_n\}$.
Saying that the sequence $v_1,\dots,v_n$
is \emph{circularly ordered},
we mean that $V$ is endowed with
the circular successor relation~$\prec$ under which
$v_i\prec v_{i+1}$ for $i<n$ and $v_n\prec v_1$.
An ordered pair of elements $a^-,a^+\in V$ determines an \emph{arc}~$A=[a^-,a^+]$
that consists of the vertices
appearing in the directed path from~$a^-$ to~$a^+$.
This notation will be used under the assumption that $A\ne V$,
though we also allow the
\emph{complete arc} $A=V$ and the \emph{empty arc} $A=\emptyset$.
By $\mathbb{C}_n$ we denote the set $\{1,\ldots,n\}$
endowed with the circular order $1\prec 2\prec \ldots\prec n\prec 1$.
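A tiny helper (ours, not from the paper) realizing the arc $[a^-,a^+]$ on the circle $\mathbb{C}_n$ may make the convention concrete; the successor of $i$ is $i \bmod n + 1$.
\begin{verbatim}
def arc(a_minus, a_plus, n):
    """The arc [a_minus, a_plus] on C_n with successor i -> i % n + 1."""
    A, i = [a_minus], a_minus
    while i != a_plus:
        i = i % n + 1
        A.append(i)
    return A

print(arc(5, 2, 6))   # [5, 6, 1, 2]: an arc wrapping around C_6
\end{verbatim}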
We here prove Theorems~\ref{thm:unique-overlap-2} and~\ref{thm:unique}.
Since the proofs are very similar, we treat in detail the most
complicated of these statements, namely part~2 of Theorem~\ref{thm:unique},
and argue that the argument also covers the other statements (including also
Theorem~\ref{thm:unique-overlap-1}).
We will use an inductive argument. To establish the rigidity of
a connected hypergraph $\ensuremath{\mathcal{H}}$, we prove this property for every
connected subhypergraph $\ensuremath{\mathcal{K}}\subseteq\ensuremath{\mathcal{H}}$ by induction
on the number of hyperedges in $\ensuremath{\mathcal{K}}$.
Since subhypergraphs of a twin-free hypergraph $\ensuremath{\mathcal{H}}$
can contain twins, we need an appropriate generalization
of our rigidity concept. To this end, we switch to an equivalent language.
An \emph{arc representation of a hypergraph~$\ensuremath{\mathcal{H}}$} is a hypergraph isomorphism~$\rho$
from~$\ensuremath{\mathcal{H}}$ to an arc system~$\ensuremath{\mathcal{A}}$ on the circle~$\mathbb{C}_n$.
Note that $\ensuremath{\mathcal{H}}$ has an arc representation exactly when it admits an arc ordering~$\prec$.
Indeed, if $\rho\function{V(\ensuremath{\mathcal{H}})}{\{1,\ldots,n\}}$ is an arc representation of~$\ensuremath{\mathcal{H}}$,
we can define $\prec_\rho$ by $\rho^{-1}(1)\prec_\rho\rho^{-1}(2)\prec_\rho\ldots\prec_\rho\rho^{-1}(n)\prec_\rho\rho^{-1}(1)$.
Conversely, if $v_1\prec v_2\prec\ldots\prec v_n\prec v_1$ is an arc ordering of~$\ensuremath{\mathcal{H}}$,
then $\rho_\prec(v_i)=i$ is an arc representation of~$\ensuremath{\mathcal{H}}$.
Furthermore, we call $\rho$ \emph{tight} if the circular order of $\mathbb{C}_n$
is tight for $\ensuremath{\mathcal{A}}$. Obviously, tight arc representations correspond to
tight arc orderings and vice versa.
The notions of an \emph{interval representation},
corresponding to an interval ordering, is introduced similarly.
The rotations $x\mapsto((x+s-1)\bmod n)+1$ and the reflection $x\mapsto n+1-x$
will be considered \emph{symmetries} of the circle~$\mathbb{C}_n$.
The linearly ordered segment $\{1,\ldots,n\}$ has a unique non-trivial symmetry, namely the reflection.
Note that, if $\rho$~is an interval or arc representation of~$\ensuremath{\mathcal{H}}$
and $\sigma$~is a symmetry of the circle or the interval, respectively,
then the composition $\sigma\circ\rho$ is an
interval or arc representation of~$\ensuremath{\mathcal{H}}$ as well.
Turning back to the equivalence between arc representations and orderings,
note that while $\rho$ determines $\prec_\rho$ uniquely, $\prec$ determines $\rho_\prec$
up to a rotation.
Now, notice that $\ensuremath{\mathcal{H}}$ admits a unique, up to reversing, (tight) interval ordering
if and only if $\ensuremath{\mathcal{H}}$ has a unique, up to reflection, (tight) interval representation.
Furthermore, $\ensuremath{\mathcal{H}}$ admits a unique, up to reversing, (tight) arc ordering
if and only if $\ensuremath{\mathcal{H}}$ has a unique, up to reflection and rotation, (tight) arc representation.
A \emph{transposition of twins} in a hypergraph $\ensuremath{\mathcal{H}}$
is a one-to-one map of~$V(\ensuremath{\mathcal{H}})$ onto itself
that fixes each vertex except, possibly, a pair of twins of~$\ensuremath{\mathcal{H}}$.
Any composition~$\pi$ of transpositions of twins is called a \emph{permutation of twins}.
Note that, if $\rho$~is an interval or arc representation of~$\ensuremath{\mathcal{H}}$,
then the composition $\rho\circ\pi$ is, respectively, an
interval or arc representation of~$\ensuremath{\mathcal{H}}$ as well.
We call two representations~$\rho$ and~$\rho'$ equivalent \emph{up to permutation of twins}
if $\rho'=\rho\circ\pi$ for some permutation of twins $\pi$.
For twin-free hypergraphs this relation just coincides with equality of representations.
\begin{lemma}\label{lem:arcreprequi}
Arc representations~$\rho$ and~$\rho'$ are equivalent up to permutation of twins
iff $\rho(H)=\rho'(H)$ for every hyperedge~$H\in\ensuremath{\mathcal{H}}$.
\end{lemma}
\begin{proof}
Suppose that $\rho'=\rho\circ\pi$ for a permutation of twins $\pi$.
Let $H$ be an arbitrary hyperedge of~$\ensuremath{\mathcal{H}}$. Since $\tau(H)=H$ for any
transposition of twins $\tau$, we have $\pi(H)=H$ and, hence, $\rho(H)=\rho'(H)$.
To prove the claim in the backward direction, define
a \emph{twin-class} of a hypergraph~$\ensuremath{\mathcal{H}}$ to be an inclusion-maximal subset~$S$ of~$V(\ensuremath{\mathcal{H}})$ such that
each hyperedge~$H\in\ensuremath{\mathcal{H}}$ contains either all of~$S$ or none of it.
Thus, two vertices of~$\ensuremath{\mathcal{H}}$ are twins iff they are in the same twin-class.
It follows that $\pi$~is a permutation of twins exactly when
$\pi(S)=S$ for every twin-class $S$ of~$\ensuremath{\mathcal{H}}$.
Suppose that $\rho(H)=\rho'(H)$ for every hyperedge~$H\in\ensuremath{\mathcal{H}}$.
Since a twin-class is a Boolean combination of hyperedges,
we have $\rho(S)=\rho'(S)$ for every twin-class $S$ of~$\ensuremath{\mathcal{H}}$ and,
therefore, $\rho^{-1}\circ\rho'(S)=S$ for all twin-classes.
It follows that $\rho^{-1}\circ\rho'$ is a permutation of twins and,
since $\rho'=\rho\circ(\rho^{-1}\circ\rho')$, the representations $\rho$ and~$\rho'$ are equivalent up to permutation of twins.
\end{proof}
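The twin-classes used in this proof are easy to compute in practice: one simply groups the vertices by their incidence vectors. A minimal sketch (ours, for illustration only):
\begin{verbatim}
from collections import defaultdict

def twin_classes(vertices, hyperedges):
    pattern = defaultdict(list)
    for v in vertices:
        key = tuple(v in H for H in hyperedges)  # incidence vector of v
        pattern[key].append(v)
    return list(pattern.values())

# {a,b} are twins: every hyperedge contains both or neither of them
print(twin_classes(['a', 'b', 'c'], [{'a', 'b'}, {'a', 'b', 'c'}]))
# [['a', 'b'], ['c']]
\end{verbatim}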
Let $\rho\function{V(\ensuremath{\mathcal{H}})}{\{1,\ldots,n\}}$ and $\rho'\function{V(\ensuremath{\mathcal{H}})}{\{1,\ldots,n\}}$
be arc or interval representations of~$\ensuremath{\mathcal{H}}$. We call $\rho$ and~$\rho'$
equivalent \emph{up to symmetry and permutation of twins} if
$\rho'=\sigma\circ\rho\circ\pi$ for some symmetry $\sigma$ and
permutation of twins $\pi$. This is an equivalence relation between
representations because both symmetries and permutations of twins form groups.
The following lemma translates Theorems~\ref{thm:unique-overlap-1},~\ref{thm:unique-overlap-2}, and~\ref{thm:unique}
into the language of interval/arc representations and generalizes these results
to hypergraphs with twins. Theorems~\ref{thm:unique-overlap-1},~\ref{thm:unique-overlap-2}, and~\ref{thm:unique}
follow from here immediately by the equivalence between representations and orderings.
\begin{lemma}\label{lem:unique}
\begin{bfenumerate}
\item
An overlap-connected interval hypergraph~$\ensuremath{\mathcal{H}}$~has,
up to symmetry (i.e.\ reflection) and permutation of twins,
a unique interval representation.
\item
A strictly overlap-connected CA hypergraph~$\ensuremath{\mathcal{H}}$~has,
up to symmetry (i.e., reflection and rotation) and permutation of twins,
a unique arc representation.
\item
A connected hypergraph has, up to symmetry and permutation of twins,
at most one tight interval representation.
\item
A strictly connected hypergraph has, up to symmetry
and permutation of twins, at most one tight arc representation.
\end{bfenumerate}
\end{lemma}
\begin{proof}
We will prove part~4. The proof of the interval variant (part~3) is virtually the same
and even somewhat simpler as not all arc configurations can occur on the line (like one of the
dashed configurations in Fig.~\ref{fig:proof:unique2} that closes the circle).
Parts 1 and 2 are actually covered by the argument as well;
they correspond exactly to the case shown in Fig.~\ref{fig:proof:unique2}
as all the other cases involve inclusions.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{camodel}[every carc/.style={}]
\carc[labelpos=.33]{180}{60}{3}{$\rho(A)$}
\carc[labelpos=.66]{115}{-20}{2}{$\rho(B)$}
\carc{150}{85}{1}{}
\carc[dashed]{85}{20}{1}{}
\carc[dashed,labelangle=-150]{-40}{-230}{1}{$\rho(H)$}
\end{camodel}
\begin{camodel}[xshift=7cm,every carc/.style={}]
\carc[labelpos=.33]{180}{60}{3}{$\rho(A)$}
\carc[labelpos=.66]{115}{-20}{2}{$\rho(B)$}
\carc[dashed]{85}{20}{1}{}
\carc{20}{-30}{1}{}
\carc[dashed,labelangle=-150]{-30}{-220}{1}{$\rho(H)$}
\end{camodel}
\end{tikzpicture}
\caption{Proof of Lemma~\protect\ref{lem:unique}.4, case 1: $A\between^*B$ and
$\rho(B)$~intersects~$\rho(A)$ clockwise; $B\between^*H$.
On the left side:
$\rho(H)$~intersects~$\rho(B)$ counter-clockwise.
On the right side:
$\rho(H)$~intersects~$\rho(B)$ clockwise.}\label{fig:proof:unique2}
\end{figure}
Let us first explain the strategy of the proof.
A set of hyperedges $\ensuremath{\mathcal{K}}\subseteq\ensuremath{\mathcal{H}}$ will be regarded as
a \emph{spanning subhypergraph} of $\ensuremath{\mathcal{H}}$, that is, $V(\ensuremath{\mathcal{K}})=V(\ensuremath{\mathcal{H}})$
(note that $\ensuremath{\mathcal{K}}$ can have isolated vertices).
Given a hypergraph~$\ensuremath{\mathcal{H}}$ with $n$ vertices,
we will prove that any strictly connected subhypergraph $\ensuremath{\mathcal{K}}\subseteq\ensuremath{\mathcal{H}}$ with $k$ hyperedges
has, up to symmetry and permutation of twins, a unique representation on the circle of size~$n$.
This will be done by induction on~$k$. The base cases are $k=1,2$.
In order to make the inductive step, it suffices to show that, whenever $k\ge2$,
any representation~$\rho$ of~$\ensuremath{\mathcal{K}}$ has, up to permutation of twins, a unique
extension to a representation of $\ensuremath{\mathcal{K}}\cup\{H\}$,
for any $H\in\ensuremath{\mathcal{H}}\setminus\ensuremath{\mathcal{K}}$ such that $\ensuremath{\mathcal{K}}\cup\{H\}$ is strictly connected.
By Lemma~\ref{lem:arcreprequi} this actually means to show that the whole arc~$\rho(H)$
(though not necessarily each point~$\rho(v)$ for $v\in H$) is uniquely determined.
Moreover, it suffices to do this job in the case of $k=2$.
The reason is that $\ensuremath{\mathcal{K}}$ always contains two hyperedges~$A$ and~$B$ such that
the sequence $A,B,H$ forms a strictly connected path.
Before going into detail, we introduce some terminology.
Consider two arcs $[a^-,a^+]$ and $[b^-,b^+]$ and suppose that
$[b^-,b^+]\between^*[a^-,a^+]$ or $[b^-,b^+]\subset[a^-,a^+]$.
We will say that $[b^-,b^+]$ \emph{intersects} $[a^-,a^+]$ \emph{clockwise}
if $a^+\in[b^-,b^+]$ and \emph{counter-clockwise} if $a^-\in[b^-,b^+]$.
All possible positions of a single hyperedge~$A$ on the circle are congruent by rotation.
All possible positions of two strictly overlapping hyperedges~$A$ and~$B$
are congruent by rotation and reflection because the intersection of the corresponding
arcs~$\rho(A)$ and~$\rho(B)$ is always an arc of length $|A\cap B|$.
If $A$ and~$B$ are comparable under
inclusion, recall that we only consider tight representations.
For the inductive step, consider three hyperedges~$A$, $B$, and~$H$
such that $A\bowtie^*B$ and $B\bowtie^*H$.
We have to show that the arc~$\rho(H)$ is completely determined by
the arcs~$\rho(A)$ and~$\rho(B)$.
The relation $B\bowtie^*H$ fixes the length of the intersection $\rho(H)\cap\rho(B)$
and, hence, leaves for~$\rho(H)$ exactly two possibilities depending on whether
this intersection is clockwise or counter-clockwise.
We split our analysis into three cases.
\Case1{$A\between^*B$.}
Without loss of generality, we suppose that $\rho(B)$ intersects~$\rho(A)$
clockwise; the case of counter-clockwise intersection is symmetric.
Consider first the subcase in which $B\between^*H$.
Looking at the possible configurations
for the arc system $\{\rho(A),\rho(B),\rho(H)\}$,
all shown in Fig.~\ref{fig:proof:unique2},
we see that $\rho(H)$ intersects~$\rho(B)$ counter-clockwise
exactly if the sets $A\setminus B$ and $H\setminus B$ are comparable under inclusion, i.e.,
\begin{equation}
\label{eq:ccw}
A\setminus B\subseteq H\setminus B\text{\ \ or\ \ }H\setminus B\subseteq A\setminus B.
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}[baseline=0cm]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.5cm,1.75cm) {$B\subset H$:};
\carc[nodepos=.33]{180}{60}{3}{$\rho(A)$}
\carc[nodepos=.66]{115}{-20}{2}{$\rho(B)$}
\carc{115}{-40}{1}{}
\carc[dashed,nodepos=.66]{-40}{-210}{1}{$\rho(H)$}
\end{camodel}
\begin{camodel}[yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.33]{180}{60}{3}{$\rho(A)$}
\carc[nodepos=.66]{115}{-20}{2}{$\rho(B)$}
\carc{150}{-20}{1}{}
\carc[dashed]{-80}{-230}{1}{$\rho(H)$}
\end{camodel}
\end{tikzpicture}\hfil
\begin{tikzpicture}[baseline=0cm]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.5cm,1.75cm) {$H\subset B$:};
\carc[nodepos=.33]{180}{60}{3}{$\rho(A)$}
\carc[nodepos=.66]{115}{-20}{2}{$\rho(B)$}
\carc{115}{60}{1}{}
\carc[dashed,swap,nodepos=.25]{60}{15}{1}{$\rho(H)$}
\end{camodel}
\begin{camodel}[yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.33]{180}{60}{3}{$\rho(A)$}
\carc[nodepos=.66]{115}{-20}{2}{$\rho(B)$}
\carc[swap]{25}{-20}{1}{$\rho(H)$}
\carc[dashed]{90}{20}{1}{}
\end{camodel}
\end{tikzpicture}
\caption{Proof of Lemma~\protect\ref{lem:unique}.4, case 1: $A\between^*B$ and
$\rho(B)$ intersects~$\rho(A)$ clockwise;
$B$ and~$H$ are comparable under inclusion.}\label{fig:proof:unique4a}
\end{figure}
For the remaining subcases, when $B$ and~$H$ are comparable under inclusion,
all possible configurations for the arc system $\{\rho(A),\rho(B),\rho(H)\}$
are shown in Fig.~\ref{fig:proof:unique4a}. If $B\subset H$,
we see that $\rho(B)$ intersects~$\rho(H)$ clockwise
exactly under the condition~\refeq{ccw}.
If $H\subset B$, then $\rho(H)$ intersects~$\rho(B)$ counter-clockwise iff
$A\cap B\subseteq H$.
\begin{figure}
\centering
\begin{tikzpicture}[baseline=0cm]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$B\between^*H$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(A)$}
\carc{90}{0}{3}{$\rho(B)$}
\carc{135}{45}{1}{}
\carc[dashed]{280}{135}{1}{$\rho(H)$}
\end{camodel}
\begin{camodel}[yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.25]{180}{0}{2}{$\rho(A)$}
\carc{90}{0}{3}{$\rho(B)$}
\carc[labelangle=-30]{45}{-45}{1}{$\rho(H)$}
\carc[dashed]{-45}{-215}{1}{}
\end{camodel}
\end{tikzpicture}\hfill
\begin{tikzpicture}[baseline=0cm]
\begin{camodel}[xshift=5cm,every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$B\subset H$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(A)$}
\carc{90}{0}{3}{$\rho(B)$}
\carc{135}{0}{1}{}
\carc[dashed]{280}{135}{1}{$\rho(H)$}
\end{camodel}
\begin{camodel}[xshift=5cm,yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.25]{180}{0}{2}{$\rho(A)$}
\carc{90}{0}{3}{$\rho(B)$}
\carc[labelangle=-30]{90}{-45}{1}{$\rho(H)$}
\carc[dashed]{-45}{-215}{1}{}
\end{camodel}
\end{tikzpicture}\hfill
\begin{tikzpicture}[baseline=0cm]
\begin{camodel}[xshift=10cm,every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$H\subset B$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(A)$}
\carc{90}{0}{3}{$\rho(B)$}
\carc[swap]{55}{0}{1}{$\rho(H)$}
\end{camodel}
\end{tikzpicture}
\caption{Proof of Lemma~\protect\ref{lem:unique}.4, case 2: $B\subset A$ and
$\rho(B)$ intersects~$\rho(A)$ clockwise.}\label{fig:proof:unique4b}
\end{figure}
\Case2{$A\supset B$.}
Without loss of generality, we suppose that $\rho(B)$ intersects~$\rho(A)$
clockwise; see Fig.~\ref{fig:proof:unique4b}.
If $B\between^*H$, then $\rho(H)$ intersects~$\rho(B)$ counter-clockwise
exactly when the familiar condition~\refeq{ccw} holds true.
If $B\subset H$, then $\rho(B)$ intersects~$\rho(H)$ clockwise
exactly under the same condition. Thus, these two subcases do not differ much
from the corresponding subcases of Case 1.
If $H\subset B$, then $\rho(H)$~is forced to intersect~$\rho(B)$ clockwise
by the condition that $\{\rho(A),\rho(B),\rho(H)\}$ is a tight arc system.
\begin{figure}
\centering
\begin{tikzpicture}[baseline=0pt]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$B\between^*H$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(B)$}
\carc{90}{0}{3}{$\rho(A)$}
\carc[labelpos=.4]{280}{135}{1}{$\rho(H)$}
\carc[dashed]{135}{45}{1}{}
\end{camodel}
\begin{camodel}[yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.25]{180}{0}{2}{$\rho(B)$}
\carc{90}{0}{3}{$\rho(A)$}
\carc{50}{-100}{1}{$\rho(H)$}
\carc[dashed]{90}{50}{1}{}
\end{camodel}
\end{tikzpicture}\hfill
\begin{tikzpicture}[baseline=0pt]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$H\subset B$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(B)$}
\carc{90}{0}{3}{$\rho(A)$}
\carc[swap]{180}{130}{1}{$\rho(H)$}
\carc[dashed]{130}{45}{1}{}
\end{camodel}
\begin{camodel}[yshift=-2.75cm,every carc/.style={},cabase=.75cm]
\carc[nodepos=.25]{180}{0}{2}{$\rho(B)$}
\carc{90}{0}{3}{$\rho(A)$}
\carc[swap]{50}{0}{1}{$\rho(H)$}
\carc[dashed]{135}{50}{1}{}
\end{camodel}
\end{tikzpicture}\hfill
\begin{tikzpicture}[baseline=0pt]
\begin{camodel}[every carc/.style={},cabase=.75cm]
\node[anchor=west] at (-2.25cm,1.75cm) {$B\subset H$:};
\carc[nodepos=.25]{180}{0}{2}{$\rho(B)$}
\carc{90}{0}{3}{$\rho(A)$}
\carc[nodepos=.25]{270}{0}{1}{$\rho(H)$}
\end{camodel}
\end{tikzpicture}
\caption{Proof of Lemma~\protect\ref{lem:unique}.4, case 3: $A\subset B$ and
$\rho(A)$ intersects~$\rho(B)$ clockwise.}\label{fig:proof:unique4c}
\end{figure}
\Case3{$A\subset B$.}
Without loss of generality, we suppose that $\rho(A)$ intersects~$\rho(B)$
clockwise; see Fig.~\ref{fig:proof:unique4c}.
If $B\between^*H$, then $\rho(H)$ intersects~$\rho(B)$ clockwise
iff $H\cap B\subseteq A$.
If $H\subset B$, then $\rho(H)$ intersects~$\rho(B)$ clockwise
iff the sets~$H$ and~$A$ are comparable under inclusion.
Finally, if $B\subset H$, then $\rho(B)$~is forced to intersect~$\rho(H)$ clockwise
by the tightness condition.
\end{proof}
\section{The neighborhood hypergraphs}
\label{s:Nhgs}
Let $\ensuremath{\mathcal{A}}$ be a family of arcs on the circle $\mathbb{C}_m$.
A bijection $\alpha\function{V(G)}{\ensuremath{\mathcal{A}}}$ is an \emph{arc representation
of a graph $G$} if two vertices $u$ and $v$ are adjacent in $G$ exactly when
the arcs $\alpha(u)$ and $\alpha(v)$ intersect.
A representation $\alpha$ is \emph{proper} if $\alpha(u)\subseteq\alpha(v)$
for no two vertices $u$ and $v$. Restriction to \emph{intervals}
in the linearly ordered set $\{1,\ldots,m\}$ gives the notion of
a \emph{proper interval representation} of $G$.
Graphs having such intersection representations
are known as \emph{proper interval} and \emph{proper circular-arc (PCA) graphs}.
The aim of this section is to prove the rigidity properties for the closed
neighborhood hypergraphs of PCA graphs that will be given in
Theorem~\ref{thm:Nrigid}.
Recall that a proper arc (resp.\ interval)
representation $\alpha\function{V(G)}{\ensuremath{\mathcal{A}}}$ of a graph~$G$
determines the circular (resp.\ linear) geometric order~$\prec_\alpha$ on the vertex set $V(G)$
accordingly to the appearance of the left (or, equivalently, right)
endpoints of the arcs $\alpha(v)$, $v\in V(G)$, in the circle $\mathbb{C}_m$.
The following lemma implies that the closed neighborhood hypergraph of any PCA
graph admits a tight arc ordering.
\begin{lemma}[see~\cite{fsttcs}]\label{lem:geomistight}
The geometric order~$\prec_\alpha$ on~$V(G)$ associated with a proper
arc (resp.\ interval) representation~$\alpha$ of a graph~$G$ is a tight arc
(resp.\ interval) ordering of the hypergraph~$\ensuremath{\mathcal{N}}[G]$.
\end{lemma}
For the remainder of this section, we will consider two subclasses of PCA
graphs, namely those with bipartite and those with non-bipartite complement.
The \emph{complement of a graph~$G$} is the graph~$\overline{G}$ with
$V(\overline{G})=V(G)$
such that two vertices are adjacent in~$\overline{G}$
if and only if they are not adjacent in~$G$.
In what follows, we will repeatedly need the following property of
non-co-bipartite PCA graphs.
A vertex~$u$ is \emph{universal} if $N[u]=V(G)$.
\begin{lemma}\label{lem:nouniv}
A PCA graph $G$ with non-bipartite complement
contains no universal vertex.
\end{lemma}
\begin{proof}
Let~$\alpha\colon V(G)\to\ensuremath{\mathcal{A}}$ be a
proper arc representation of~$G$ and
denote~$\alpha(u)=[a^-,a^+]$.
Notice now that $N[u]$ is
covered by two cliques $\setdef{x\in
N[u]}{a^-\in\alpha(x)}$ and $\setdef{x\in
N[u]}{a^+\in\alpha(x)}$,
which excludes $N[u]=V(G)$ as $\overline{G}$ is not bipartite.
\end{proof}
If $\prec$ is an arc ordering of the hypergraph $\ensuremath{\mathcal{N}}[G]$ for
a non-co-bipartite PCA graph $G$, the closed neighborhood $N[u]$
of a vertex $u$ is an arc on the directed cycle $(V(G),\prec)$.
Lemma~\ref{lem:nouniv} allows us to use notation $N[u]=[u^-,u^+]$,
since the left endpoint $u^-$ and the right endpoint $u^+$
are uniquely determined.
In the following three lemmas, we establish several facts about arc orderings of
non-co-bipartite PCA graphs.
\begin{lemma}\label{lem:twocliques}
Let $G$ be a PCA graph with non-bipartite complement.
For any arc ordering $\prec$ of~$\ensuremath{\mathcal{N}}[G]$,
every vertex $u\in V(G)$ has the following property:
$u$ divides $N[u]=[u^-,u^+]$ into two parts $[u^-,u]$~and~$[u,u^+]$
that both are cliques in~$G$.
\end{lemma}
\begin{proof}
\begin{figure} \centering
\begin{tikzpicture}[baseline=0cm]
\node at (-1.5cm,1.5cm) {(a)};
\begin{camodel}[cabase=.75cm,castep=.15cm,capoints=true]
\carc[swap,nodeangle=20]{150}{410}{2}{$y$}
\carc[swap,nodeangle=180]{-20}{200}{4}{$u$}
\carc[nodeangle=0]{120}{390}{5}{$x$}
\end{camodel}
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[baseline=0cm]
\node at (-1.5cm,1.5cm) {(b)};
\begin{camodel}[cabase=.75cm,castep=.15cm]
\carc[startlabel=$u^-$,endlabel=$u^+$,startlabelpos=inside,endlabelpos=inside]{180}{0}{1}{}
\carc[swap,nodeangle=90,every label/.style={inner sep=5pt}]{90}{90}{1}{$u$}
\carc[swap,every label/.style={inner sep=5pt,anchor=80}]{70}{70}{1}{$v$}
\carc{180}{-80}{2}{}
\carc{120}{0}{3}{}
\carc{120}{-80}{4}{}
\end{camodel}
\end{tikzpicture}
\caption{(a) Proof of Lemma~\protect\ref{lem:twocliques}: Vertices
$u\in V(G)$, $x\in D^+[u]$, and $y\in D^+[u]\cap[u,x]$ along
with their neighborhoods in $(V(G),\prec)$.
\quad
(b) Proof of Lemma~\protect\ref{lem:CA-order}.3:
Possible mutual positions of~$N[u]$ and~$N[v]$. The most
inward arc $[u^-,u^+]$ represents~$N[u]$; the other four arcs show
possible positions of $[v^-,v^+]=N[v]$.}
\label{fig:geomistight+proof}
\end{figure}
Call a vertex $x\in N[u]$ a
\emph{close neighbor} of~$u$ if either
\[
x\in [u^-,u]\text{ and }u\in [x,x^+]\text{\quad or\quad }x\in [u,u^+]\text{ and
}u\in [x^-,x]
\]
and a \emph{distant neighbor} otherwise.
Denote the sets of close neighbors of~$u$ in~$[u^-,u]$
and~$[u,u^+]$ by~$C^-[u]$ and~$C^+[u]$,
respectively.
Similarly, the sets of distant neighbors of~$u$ in~$[u^-,u]$
and~$[u,u^+]$ will be denoted by~$D^-[u]$ and~$D^+[u]$.
Each of the four sets~$C^-[u]$, $C^+[u]$, $D^-[u]$,
and~$D^+[u]$ is a clique. Indeed, if for example $x$ and~$y$ are
two vertices in~$D^+[u]$, then $u$ belongs to both $[x,x^+]$
and~$[y,y^+]$. Since $x$ and $y$ are both in~$[u,u^+]$, this
implies that either $x$ belongs to~$[y,y^+]$ or $y$ belongs
to~$[x,x^+]$ (depending on whether $y\in[u,x]$ or
$x\in[u,y]$; see Fig.~\ref{fig:geomistight+proof}).
Hence, $x$ and $y$ are adjacent and~$D^+[u]$ is a
clique. The other three cases are similar.
To complete the proof we show that,
if $\overline{G}$ is non-bipartite, then
$D^-[u]=D^+[u]=\emptyset$. Assume to the contrary that $D^+[u]$
contains a vertex~$x$; see Fig.~\ref{fig:geomistight+proof} (the case $x\in
D^-[u]$ is similar). Since $u\in [x,x^+]$, the sets
$[u,u^+]\cap[u,x]=[u,x]$ and $[x,x^+]\cap[x,u]=[x,u]$ cover $V(G)$. Splitting
the former set into $C^+[u]\cap[u,x]$ and $D^+[u]\cap[u,x]$ and
the latter into $C^+[x]\cap[x,u]$ and $D^+[x]\cap[x,u]$, consider
the cover of~$V(G)$ by two sets
$(C^+[u]\cap[u,x])\cup(D^+[x]\cap[x,u])$ and
$(D^+[u]\cap[u,x])\cup(C^+[x]\cap[x,u])$ and show that they are
cliques. This will give us a contradiction since $\overline{G}$~is not
bipartite. By symmetry, it suffices to prove that
$(D^+[u]\cap[u,x])\cup(C^+[x]\cap[x,u])$ is a clique. Since both
$D^+[u]$ and~$C^+[x]$ are cliques, we have to show that any
vertex~$y$ in $D^+[u]\cap[u,x]$ is adjacent to all vertices in
$C^+[x]\cap[x,u]$. This is true because we have $
N[y]\supseteq[y,u]\supseteq[x,u] $ by the definition of~$D^+[u]$.
\end{proof}
Lemma~\ref{lem:twocliques} allows us to derive the following lemma, which will
be needed for the main rigidity result of this section and also is of
independent interest.
\begin{lemma}\label{lem:anyistight}
If $G$ is a PCA graph with non-bipartite complement,
then any arc ordering $\prec$ of~$\ensuremath{\mathcal{N}}[G]$ is tight.
\end{lemma}
This complements Lemma~\ref{lem:geomistight}, which implies that there are tight
arc orderings of $\ensuremath{\mathcal{N}}[G]$.
\begin{proof}
Let $N[u]\subseteq N[v]$.
Consider the arcs $N[u]=[u^-,u^+]$ and $N[v]=[v^-,v^+]$ w.r.t.~$\prec$.
Suppose that $u\in [v,v^+]$ (the case that $u\in [v^-,v]$ is symmetric).
By Lemma~\ref{lem:twocliques}, $[v,v^+]$ is a clique in $G$.
Therefore, this arc is contained in~$N[u]$, which implies
that $u^+=v^+$.
\end{proof}
With the next lemma we show that all arc orderings of the closed neighborhood
hypergraphs of non-co-bipartite PCA graphs follow several simple patterns.
\begin{lemma}\label{lem:CA-order}
Let $G$ be a PCA graph with non-bipartite complement.
For any arc ordering $\prec$ of~$\ensuremath{\mathcal{N}}[G]$ and any vertices $u,v\in V(G)$,
the following conditions are met.
\begin{bfenumerate}
\item $v\in [u,u^+]$ if and only if $u\in [v^-,v]$.
\item If $v\in [u,u^+]$, then $v^-\in [u^-,u]$ and $u^+\in [v,v^+]$.
\item If $u\prec v$ and these vertices are adjacent, then $u^-$, $v^-$,
$u$, $v$, $u^+$, and $v^+$ occur under the
order~$\prec$ exactly in this circular sequence, where some
of the neighboring vertices except $u^-$~and~$v^+$ may coincide.
Moreover, it is impossible that $v^+\prec u^-$.
\end{bfenumerate}
\end{lemma}
\begin{proof}
{\bf 1.} Let $u\ne v$ and assume that $v\in [u,u^+]$. If $u\in
[v,v^+]$, then Lemma~\ref{lem:twocliques} shows that
$V(G)$ is covered by two cliques
$[u,u^+]$~and~$[v,v^+]$, contradicting the assumption
that~$\overline{G}$~is not bipartite. Therefore, $u\in [v^-,v]$. The
other implication follows by symmetry.
{\bf 2.} If the two conditions in~part~1 are true, then $v^-$,
$u$, and $v$ occur in this circular order. Since $[v^-,v]$ is a
clique, all vertices in $[v^-,u)$ are adjacent to~$u$ and hence,
$v^-\in [u^-,u]$. The second containment follows
by symmetry.
{\bf 3.} Parts~1~and~2 imply that $u^-$, $v^-$, $u$, $v$,
$u^+$, and $v^+$ occur in this circular order; see Fig.~\ref{fig:geomistight+proof}.
However, it is not yet excluded that $v^+$~and~$u^-$ coincide or are swapped.
We have to show that the condition $u\prec v$ rules out the last two possibilities
as well as the possibility of $v^+\prec u^-$. Indeed, any of these configurations
would give a covering of $V(G)$ by the two cliques $[u^-,u]$ and~$[v,v^+]$.
\end{proof}
The following theorem allows us to invoke Theorem~\ref{thm:unique}.2 for the
closed neighborhood hypergraphs of connected non-co-bipartite PCA graphs,
proving that these hypergraphs have a unique tight arc ordering.
\begin{theorem}\label{thm:strict-conn}
If $G$~is a connected PCA graph with non-bipartite complement,
then $\ensuremath{\mathcal{N}}[G]$~is strictly connected.
\end{theorem}
\begin{proof}
Let $\alpha$ be a proper arc representation of~$G$.
By Lemma~\ref{lem:geomistight}, $\ensuremath{\mathcal{N}}[G]$~has an arc
(geometric) ordering $\prec_\alpha$.
Since the complement of $G$ is not bipartite,
$G$ has at least two vertices and, by Lemma~\ref{lem:nouniv}, no universal vertex.
Since $G$~is connected, there is at most one pair of non-adjacent vertices~$x$ and~$y$
with $y$ immediately following $x$ in the circular order $\prec_\alpha$
(two such gaps would disconnect~$G$). Therefore, all vertices of~$G$
can be arranged into a path $v_1,\ldots,v_n$ such that $v_i$ and~$v_{i+1}$
are adjacent and $v_i\prec_\alpha v_{i+1}$ for every $1\le i<n$.
By Lemma~\ref{lem:CA-order}.3, we have $N[v_i]\bowtie^* N[v_{i+1}]$,
which gives us a strictly connected path passing through all hyperedges of~$\ensuremath{\mathcal{N}}[G]$.
\end{proof}
Now we are ready to prove our rigidity result for the closed neighborhood
hypergraphs of PCA graphs.
\begin{theorem}\label{thm:Nrigid}
Let $G$ be a twin-free, connected PCA graph.
\begin{bfenumerate}
\item
If $\overline{G}$ is non-bipartite, then $\ensuremath{\mathcal{N}}[G]$ has, up to reversal,
a unique arc ordering.
\item
If $\overline{G}$ is bipartite and connected, then $\ensuremath{\mathcal{N}}[G]$ has, up to reversal,
exactly two tight arc orderings.
\end{bfenumerate}
\end{theorem}
\begin{proof}
{\bf 1.}
$\ensuremath{\mathcal{N}}[G]$ has an arc ordering by Lemma~\ref{lem:geomistight}.
By Lemma~\ref{lem:anyistight}, any arc ordering of $\ensuremath{\mathcal{N}}[G]$
is tight. The uniqueness follows from Theorem~\ref{thm:strict-conn}
by Theorem~\ref{thm:unique}.2.
{\bf 2.}
The \emph{open neighborhood hypergraph} of a graph $G$ is defined by
$\ensuremath{\mathcal{N}}(G)=\{N(v)\}_{v\in V(G)}$.
In place of $\ensuremath{\mathcal{N}}[G]$, it is now practical to consider
the complement hypergraph $\overline{\ensuremath{\mathcal{N}}[G]}=\Set{V(G)\setminus N[v]}_{v\in V(G)}$.
Note that $\overline{\ensuremath{\mathcal{N}}[G]}=\ensuremath{\mathcal{N}}(\overline{G})$.
This hypergraph is disconnected, and any (tight) arc ordering
of $\ensuremath{\mathcal{N}}[G]$ induces a (tight) interval ordering of
each connected component of $\ensuremath{\mathcal{N}}(\overline{G})$.
Conversely, arbitrary (tight) interval orderings
of the components of $\ensuremath{\mathcal{N}}(\overline{G})$ can be merged
into a (tight) arc ordering of $\ensuremath{\mathcal{N}}[G]$.
Since $\overline{G}$ is connected, $\ensuremath{\mathcal{N}}(\overline{G})$ has exactly two components.
Applying Theorem~\ref{thm:unique}.1 to each of them, we
conclude that each of the two components has, up to reversing,
a single interval ordering. Since there are exactly two essentially
different ways to merge them, we see
that $\ensuremath{\mathcal{N}}[G]$ has, up to reversing, exactly two tight arc orderings.
\end{proof}
Let us stress that part 2 of Theorem~\ref{thm:Nrigid} concerns
only tight orderings. To show that it
cannot be strengthened to the class of all arc orderings,
consider the \emph{half-graph}~$H_m$, that is, the bipartite graph
with vertex classes $\{u_1,\ldots,u_m\}$ and $\{v_1,\ldots,v_m\}$
where $u_i$ is adjacent to $v_j$ if $i\le j$.
The complement $G_m=\overline{H_m}$ is a twin-free, connected PCA graph.
Note that, besides two pairs of mutually reversed tight interval orderings,
the hypergraph $\ensuremath{\mathcal{N}}(H_3)$ has another interval ordering
and, hence, $\ensuremath{\mathcal{N}}[G_3]$ has a non-tight arc ordering.
If we increase the parameter $m$, the number
of non-tight arc orderings of $\ensuremath{\mathcal{N}}[G_m]$ grows exponentially.
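For readers who want to experiment with this family of examples, the following
minimal Python sketch (our own illustration; the helper names are ours and not
part of any standard library) builds $H_m$ and $G_m$ as adjacency sets:
\begin{verbatim}
def half_graph(m):
    # Half-graph H_m: u_i is adjacent to v_j iff i <= j.
    U = [("u", i) for i in range(1, m + 1)]
    V = [("v", j) for j in range(1, m + 1)]
    adj = {x: set() for x in U + V}
    for (_, i) in U:
        for (_, j) in V:
            if i <= j:
                adj[("u", i)].add(("v", j))
                adj[("v", j)].add(("u", i))
    return adj

def complement(adj):
    # Complement graph on the same vertex set (no loops).
    vs = list(adj)
    return {x: {y for y in vs if y != x and y not in adj[x]}
            for x in vs}

G3 = complement(half_graph(3))  # the twin-free, connected PCA graph G_3
\end{verbatim}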
\section{Overlap-connectedness}\label{s:ov-conn}
Let $G$ be a twin-free, connected PCA graph with non-bipartite complement.
In Theorem~\ref{thm:strict-conn} we established that $\ensuremath{\mathcal{N}}[G]$
is strictly connected. This implies the uniqueness of a tight arc
ordering of this hypergraph by Theorem~\ref{thm:unique}.2.
Since by Lemma~\ref{lem:anyistight} any arc ordering is actually tight,
we obtain the uniqueness for the class of all arc orderings.
\begin{figure}
\centering
\begin{tikzpicture}[baseline=0cm]
\node at (-1.75cm,1.75cm) {(a)};
\begin{camodel}[cabase=.75cm,castep=.175cm]
\carc{140}{40}{1}{$A$}
\carc{-40}{-140}{2}{\strut$X$}
\carc[labelpos=.67]{50}{-50}{3}{$\!C$}
\carc[labelpos=.33]{230}{130}{3}{$Y$}
\carc[swap,labelpos=.67]{60}{-5}{2}{$B$}
\carc[swap,labelpos=.33]{185}{120}{2}{$Z$}
\end{camodel}
\end{tikzpicture}\hfil
\begin{tikzpicture}[baseline=0cm]
\node at (-1.5cm,1.75cm) {(b)};
\path[every node/.style={circle,fill,inner sep=1pt},every label/.append style=rectangle]
(90:1cm) node[label=above:$a$] (a) {}
(30:1cm) node[label=right:\strut$b$] (b) {} edge (a)
(-30:1cm) node[label=right:$c$] (c) {} edge (a) edge (b)
(-90:1cm) node[label=below:$x$] (x) {} edge (c)
(-150:1cm) node[label=left:\strut$y$] (y) {} edge (a) edge (x)
(150:1cm) node[label=left:$z$] (z) {} edge (a) edge (y);
\end{tikzpicture}\hfil
\begin{tikzpicture}[baseline=0cm]
\node at (-2cm,1.75cm) {(c)};
\begin{camodel}[capoints=true,cabase=.75cm,castep=.175cm,
spaced/.style={every label/.append style={inner sep=4pt}},
close/.style={every label/.append style={inner sep=1pt}}]
\carc[nodeangle=90,thick,swap,spaced]{210}{-30}{1}{$a$}
\carc[nodeangle=-90,swap,spaced]{-30}{-150}{2}{$x$}
\carc[nodeangle=150,swap,close]{210}{90}{3}{$z$}
\carc[nodeangle=30,swap,close]{90}{-30}{4}{$b$}
\carc[nodeangle=-150]{-90}{-270}{5}{$y$}
\carc[nodeangle=-30]{90}{-90}{6}{$c$}
\end{camodel}
\end{tikzpicture}%
\caption{An example: (a) A proper arc system; (b) The corresponding intersection graph~$G$.
Its complement $\overline{G}$~is non-bipartite; (c) The closed neighborhood hypergraph~$\ensuremath{\mathcal{N}}[G]$ is not strictly
overlap-connected: the hyperedge~$N[a]$ forms a component on its own.
Nevertheless, since after removal of~$N[a]$ the hypergraph~$\ensuremath{\mathcal{N}}[G]$ stays twin-free
and becomes strictly overlap-connected, it has a unique, up to reversal, arc ordering.}\label{fig:example}
\end{figure}
The same conclusion could be derived more directly
from Theorem~\ref{thm:unique-overlap-2} if $\ensuremath{\mathcal{N}}[G]$
were strictly overlap-connected.
However, the latter condition does not always hold
because $\ensuremath{\mathcal{N}}[G]$ can have hyperedges of size $n-1$ and
each such hyperedge forms a separate
strictly overlap-connected component; see an example in Fig.~\ref{fig:example}.
Nevertheless, if we remove the $(n-1)$-element hyperedge
from $\ensuremath{\mathcal{N}}[G]$ in this example, the remaining hypergraph becomes
strictly overlap-connected and stays twin-free.
It turns out that this is a general phenomenon.
In fact, we derive this result from the uniqueness of an arc ordering,
which we already established for $\ensuremath{\mathcal{N}}[G]$ in the preceding section,
and from a uniqueness criterion given below.
\begin{lemma}\label{lem:ov-conn-gen}
Given a hypergraph~$\ensuremath{\mathcal{H}}$ on $n\ge4$ vertices, let $\ensuremath{\mathcal{H}}'$
be the hypergraph on the same vertex set
obtained from~$\ensuremath{\mathcal{H}}$ by removing all hyperedges of size $0$, $1$, $n-1$, and~$n$.
Then $\ensuremath{\mathcal{H}}$~has a unique, up to reversing, arc ordering if and only if
$\ensuremath{\mathcal{H}}'$~is twin-free and either is strictly overlap-connected
or has a single isolated vertex and becomes strictly overlap-connected
after its removal.
\end{lemma}
\begin{proof}
Assume that $\ensuremath{\mathcal{H}}'$~is twin-free and either is strictly overlap-connected
or becomes so after removing a single isolated vertex.
By Theorem~\ref{thm:unique-overlap-2}, $\ensuremath{\mathcal{H}}'$~has a unique arc ordering.
Recall that uniqueness is always meant up to reversal.
This holds also for~$\ensuremath{\mathcal{H}}$ because the two hypergraphs have
the same arc orderings.
Let us prove the lemma in the other direction.
Assume that $\ensuremath{\mathcal{H}}$~has a unique arc ordering.
Since $n\ge4$, $\ensuremath{\mathcal{H}}$~has no twins (for otherwise transposing two twins would give
another arc ordering).
Fix an arbitrary vertex~$x$ of~$\ensuremath{\mathcal{H}}$. For a hyperedge $H\in\ensuremath{\mathcal{H}}$,
set $H_x=H$ if $x\notin H$ and $H_x=V(\ensuremath{\mathcal{H}})\setminus H$ otherwise.
Define the interval hypergraph $\ensuremath{\mathcal{H}}_x=\Set{H_x}_{H\in\ensuremath{\mathcal{H}}}$
where any empty hyperedge $H_x=\emptyset$ is removed.
If vertices~$u$ and~$v$ are distinguished by the incidence relation to a hyperedge~$H$,
they are distinguished as well by the complement of $H$. This shows that $\ensuremath{\mathcal{H}}_x$~is, like $\ensuremath{\mathcal{H}}$, twin-free.
In particular, $x$~is the only isolated vertex of~$\ensuremath{\mathcal{H}}_x$.
Let $\ensuremath{\mathcal{H}}_x^\circ$ denote the hypergraph obtained from~$\ensuremath{\mathcal{H}}_x$ by removing the vertex~$x$.
Any arc ordering for $\ensuremath{\mathcal{H}}_x$~is also an arc ordering for~$\ensuremath{\mathcal{H}}$.
Therefore $\ensuremath{\mathcal{H}}_x$~has a unique arc ordering.
It readily follows that $\ensuremath{\mathcal{H}}_x^\circ$~has a unique interval ordering.
The set of all interval orderings of a given hypergraph is described in~\cite{KoeblerKLV11}
in terms of the tree of its overlap-connected components, which is an analog of
the classical structure known as a \emph{PQ-tree}~\cite{BoothL76}.
This description is based on an observation that any two overlap-connected components
of a hypergraph either are vertex-disjoint or one is contained in a twin-class of the other.
With respect to this containment relation, the overlap-connected components
form a directed forest. By Theorem~\ref{thm:unique-overlap-1}, every overlap-connected component
admits a unique interval ordering, and we have a freedom to reverse it within each of the
components. It follows that $\ensuremath{\mathcal{H}}_x^\circ$~is connected
and that the root overlap-connected component $\ensuremath{\mathcal{R}}$ of~$\ensuremath{\mathcal{H}}_x^\circ$ is twin-free.
Therefore, any other overlap-connected component of~$\ensuremath{\mathcal{H}}_x^\circ$
must consist of a single one-vertex hyperedge (possibly obtained by complementing
an $(n-1)$-vertex hyperedge of~$\ensuremath{\mathcal{H}}$).
It follows that $\ensuremath{\mathcal{R}}$
is equal to the hypergraph~$(\ensuremath{\mathcal{H}}')_x^\circ$ obtained from~$\ensuremath{\mathcal{H}}'$ by complementing
all hyperedges that contain~$x$ and removing~$x$.
Thus, $(\ensuremath{\mathcal{H}}')_x^\circ$ is twin-free and overlap-connected.
Since $(\ensuremath{\mathcal{H}}')_x$~is obtainable from the connected hypergraph~$(\ensuremath{\mathcal{H}}')_x^\circ$ by adding
an isolated vertex, $(\ensuremath{\mathcal{H}}')_x$~is twin-free as well.
By the same argument as used above for~$\ensuremath{\mathcal{H}}$ and~$\ensuremath{\mathcal{H}}_x$, the hypergraph~$\ensuremath{\mathcal{H}}'$ must be twin-free too.
In particular, $\ensuremath{\mathcal{H}}'$~has at most one isolated vertex.
Note that if hyperedges~$H_x$ and~$K_x$ of the hypergraph~$(\ensuremath{\mathcal{H}}')_x$
overlap, then they strictly overlap in the full circle $V(\ensuremath{\mathcal{H}})$ and, therefore,
the corresponding hyperedges~$H$ and~$K$ of the hypergraph~$\ensuremath{\mathcal{H}}'$
strictly overlap. It now follows from the overlap-connectedness of~$(\ensuremath{\mathcal{H}}')_x^\circ$ that
any two hyperedges in~$\ensuremath{\mathcal{H}}'$ are connected by a $\between^*$-path.
This readily implies that $\ensuremath{\mathcal{H}}'$~is strictly overlap-connected if it has no isolated
vertex or becomes such after removal of the (single) isolated vertex.
\end{proof}
Recall that $\ensuremath{\mathcal{K}}$ is a \emph{spanning subhypergraph} of
a hypergraph~$\ensuremath{\mathcal{H}}$ if
$V(\ensuremath{\mathcal{K}})=V(\ensuremath{\mathcal{H}})$ and any hyperedge of~$\ensuremath{\mathcal{K}}$ is a hyperedge of~$\ensuremath{\mathcal{H}}$.
\begin{theorem}\label{thm:ov-conn-str}
If $G$~is a twin-free, non-co-bipartite, connected PCA graph on~$n$ vertices, then
the spanning subhypergraph of~$\ensuremath{\mathcal{N}}[G]$ obtained by removing
all hyperedges of size $n-1$ is twin-free and strictly overlap-connected.
\end{theorem}
\begin{proof}
Assume that a graph~$G$ satisfies all the assumptions.
Note that then $G$~has $n\ge4$ vertices.
Denote $\ensuremath{\mathcal{H}}=\ensuremath{\mathcal{N}}[G]$.
Since $G$~is connected, $\ensuremath{\mathcal{H}}$~has no hyperedge of size~$1$.
It also has no hyperedge of size $n$
because $G$~has no universal vertex by Lemma~\ref{lem:nouniv}.
Remove all hyperedges of size $n-1$ and denote the result by~$\ensuremath{\mathcal{H}}'$.
We have to show that $\ensuremath{\mathcal{H}}'$~is twin-free and strictly overlap-connected.
By Theorem~\ref{thm:Nrigid}.1,
$\ensuremath{\mathcal{H}}$~has a unique, up to reversing, arc ordering.
By Lemma~\ref{lem:ov-conn-gen},
$\ensuremath{\mathcal{H}}'$~is twin-free and strictly overlap-connected unless it has an isolated vertex.
It remains to exclude the last possibility.
Suppose that all hyperedges of~$\ensuremath{\mathcal{H}}$ containing a vertex~$v$ are of size $n-1$.
In particular, $N[v]=V(G)\setminus\{v'\}$, where $v'$~is the unique vertex non-adjacent
to~$v$. Note that $v$~is contained in the hyperedge~$N[u]$ for any $u\ne v'$.
For each of these hyperedges we therefore have $|N[u]|=n-1$.
Moreover, $v'\in N[u]$, for otherwise $u$ and~$v$ would be twins.
It follows that $|N[v']|=n-1$ as well. We conclude that $\overline{G}$~is a perfect matching,
contradicting the assumption that $\overline{G}$~is not bipartite.
\end{proof}
Note that Theorem~\ref{thm:ov-conn-str} gives us a complete description of the decomposition of~$\ensuremath{\mathcal{N}}[G]$
into strictly overlap-connected components because each hyperedge of size $n-1$ forms such a
component alone. Figure~\ref{fig:manylargehyperedges} shows that the number of
such single-hyperedge components can be linear in the number~$n$ of vertices.
\begin{figure}
\centering
\numdef\k{3}
\numdef\n{3*\k-1}
\numdef\p{2*\k}
\numdef\km{\k-1}
\numdef\rot{-180/\p}
\dimdef\base{1.2cm}
\begin{tikzpicture}
\begin{camodel}[cabase=\base,caanglestep=360/\p,castep=.1cm]
\node[anchor=east] at (-2,2) {$\ensuremath{\mathcal{A}}_3$};
\begin{scope}[rotate=\rot]
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
(360/\p:1cm+\base) -- (360/\p:\base)
node[black,anchor=\rot+360/\p,inner sep=1pt] {$u_1$};
\end{pgfonlayer}
\CArc{start=1,end=(\k-1),startlevel=2,endlevel=5.5}
\foreach \i in {2,...,\k} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
(\i*360/\p:1cm+\base) -- (\i*360/\p:\base)
node[black,anchor=\rot+\i*360/\p,inner sep=1pt] {$u_\i$};
\end{pgfonlayer}
\CArc{start=\i,end=(\k+\i-1),startlevel=2,endlevel=(3*\k)}
}
\foreach \i in {1,...,\k} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
({(\k+\i)*360/\p}:1cm+\base) -- ({(\k+\i)*360/\p}:\base)
node[black,anchor=\rot+(\k+\i)*360/\p,inner sep=1pt] {$v_\i$};
\end{pgfonlayer}
\CArc{start=(\k+\i),end=(\p+\i-1),startlevel=2,endlevel=(3*\k)}
}
\foreach \i in {1,...,\km} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
({(\k+\i+1/2)*360/\p}:1cm+\base) -- ({(\k+\i+1/2)*360/\p}:\base)
node[black,anchor=\rot+(\k+\i+1/2)*360/\p,inner sep=1pt] {$w_\i$};
\end{pgfonlayer}
\CArc{start=(\k+\i+1/2),end=(\p+\i-1),startlevel=2,endlevel=(3*\k-1.5)}
}
\end{scope}
\end{camodel}
\begin{camodel}[cabase=\base,xshift=7cm,caanglestep=360/\p,castep=.1cm,capoints=true]
\node[anchor=east] at (-2,2) {$\ensuremath{\mathcal{N}}[G_3]$};
\begin{scope}[rotate=\rot]
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
(360/\p:1.25cm+\base) -- (360/\p:\base)
node[black,anchor=\rot+360/\p,inner sep=1pt] {$u_1$};
\end{pgfonlayer}
\CArc{start=(1+1-\k),end=(\k-1),startlevel=2,endlevel=(4*\k-2.5),labelangle=1}
\foreach \i in {2,...,\k} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
(\i*360/\p:1.25cm+\base) -- (\i*360/\p:\base)
node[black,anchor=\rot+\i*360/\p,inner sep=1pt] {$u_\i$};
\end{pgfonlayer}
\CArc{start=(\i+1-\k),end=(\k+\i-1),startlevel=2,endlevel=(4*\k)} }
\foreach \i in {1,...,\k} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
({(\k+\i)*360/\p}:1.25cm+\base) -- ({(\k+\i)*360/\p}:\base)
node[black,anchor=\rot+(\k+\i)*360/\p,inner sep=1pt] {$v_\i$};
\end{pgfonlayer}
\CArc{gray,start=(\i+1),end=(\p+\i-1),startlevel=2,endlevel=(4*\k)} }
\foreach \i in {1,...,\km} {%
\begin{pgfonlayer}{background}
\draw[black!25,line width=3mm,line cap=round]
({(\k+\i+1/2)*360/\p}:1.25cm+\base) -- ({(\k+\i+1/2)*360/\p}:\base)
node[black,anchor=\rot+(\k+\i+1/2)*360/\p,inner sep=1pt] {$w_\i$};
\end{pgfonlayer}
\CArc{start=(\i+2),end=(\p+\i-1),startlevel=3.25,endlevel=(4*\k-1.25)}
}
\end{scope}
\end{camodel}
\end{tikzpicture}
\caption{A non-co-bipartite, twin-free, connected PCA graph on~$n$ vertices
can have more than $n/3$ vertices of degree $n-2$ (a vertex~$v$ has degree $n-2$ iff
$\left|N[v]\right|=n-1$). For each $k\ge2$, define a graph~$G_k$ on $3k-1$
vertices by its arc model~$\ensuremath{\mathcal{A}}_k$ such that $k$ of the vertices
will have degree $n-2$. On the circle
$\mathbb{C}=\{u_1,u_2,\dotsc,u_k,v_1,w_1,v_2,w_2,\dotsc,v_{k-1},w_{k-1},v_k\}$,
whose points go in this circular order, consider arcs
$U_1=[u_1,u_{k-1}]$, $U_i=[u_i,v_{i-1}]$ for $2\le i\le k$,
$V_1=[v_1,v_k]$, $V_i=[v_i,u_{i-1}]=\mathbb{C}\setminus U_i$ for $2\le i\le k$, and
$W_i=V_i\setminus\{v_i\}$ for $i\le k-1$. As $\ensuremath{\mathcal{A}}_k$~is tight, it can easily
be made proper and, hence, $G_k$~is PCA. Since the arcs have pairwise
different left endpoints, we can identify $V(G_k)=\mathbb{C}$. The graph can be described
by listing the pairs of non-adjacent vertices, namely $v_iu_i$, $w_iu_i$, $w_iu_{i+1}$,
and~$u_1u_k$. There are no twins, and $G_k$~is not co-bipartite
because its complement contains an odd cycle, namely $u_1w_1u_2w_2\ldots u_k$.
In accordance with Theorem~\protect\ref{thm:ov-conn-str},
when we remove from~$\ensuremath{\mathcal{N}}[G_k]$ the hyperedges $N[v_i]=\mathbb{C}\setminus\{u_i\}$,
the hypergraph stays twin-free and becomes strictly overlap-connected.
This can be seen by looking at the following path in the complement hypergraph:
$\overline{N[u_1]}=\{u_k,v_1,w_1\}$, $\overline{N[u_2]}=\{w_1,v_2,w_2\}$, \ldots,
$\overline{N[u_{k-1}]}=\{w_{k-2},v_{k-1},w_{k-1}\}$, $\overline{N[u_k]}=\{w_{k-1},v_k,u_1\}$,
$\overline{N[w_1]}=\{u_1,u_2\}$, \ldots, $\overline{N[w_{k-1}]}=\{u_{k-1},u_k\}$.
The figure shows~$\ensuremath{\mathcal{A}}_3$
and an arc model for~$\ensuremath{\mathcal{N}}[G_3]$ which has the arcs of size $n-1$ grayed
out.}\label{fig:manylargehyperedges}
\end{figure}
\section{Intersection representations of graphs}\label{s:repr}
Part 1 of Theorem~\ref{thm:Nrigid} implies that a twin-free, connected PCA graph $G$
with non-bipartite complement has a unique geometric order.
If $\overline{G}$ is bipartite and connected, part~2 (combined with Lemma~\ref{lem:geomistight})
leaves two different possibilities. It turns out that, nevertheless,
a geometric order is unique also in this case. We state this fact
in Theorem~\ref{thm:Aunique2} below, proving an auxiliary lemma
beforehand.
\begin{lemma}\label{lem:one-endpoint}
If a graph $G$ contains at most one universal vertex, then
any proper arc representation $\alpha$ of $G$ has the following property:
If $v$ and $v'$ are adjacent vertices of $G$,
then the arcs~$\alpha(v)$ and~$\alpha(v')$
strictly overlap, that is, contain exactly one endpoint of each other.
\end{lemma}
\begin{proof}
Otherwise we would have either $\alpha(v)\subset\alpha(v')$,
or $\alpha(v)\supset\alpha(v')$, or the union $\alpha(v)\cup\alpha(v')$
would cover the whole circle.
The first two conditions would contradict the assumption that $\alpha$~is proper.
The last condition would imply that every $\alpha(u)$ intersects both $\alpha(v)$
and~$\alpha(v')$: by properness, $\alpha(u)$ is contained in neither of them,
so it has a point outside each of the two arcs, and such a point lies in the other arc.
Therefore, both $v$ and $v'$ would be universal, a contradiction.
\end{proof}
\begin{theorem}\label{thm:Aunique2}
Let $G$ be a twin-free, connected PCA graph.
If its complement $\overline{G}$ is non-bipartite or connected,
then all geometric orders associated with proper arc representations of~$G$
are equal up to reversing.
\end{theorem}
\begin{proof}
As we just discussed, it is enough to consider the case that $\overline{G}$
is bipartite and connected.
Let $V(G)=U\cup W$ be the bipartition of~$\overline{G}$ into two independent sets.
Note that $U$ and $W$ span two connected components of the hypergraph $\ensuremath{\mathcal{N}}(\overline{G})$.
Consider an arbitrary proper arc representation~$\alpha$ of~$G$ and the corresponding
geometric order $\prec_\alpha$ on~$V(G)$.
As in the proof of Theorem~\ref{thm:Nrigid}.2,
notice that $\prec_\alpha$~is a tight
interval order for each connected component of~$\ensuremath{\mathcal{N}}(\overline{G})$ and, therefore,
the restrictions of~$\prec_\alpha$ to~$U$ and~$W$,
each considered up to reversing, do not depend on~$\alpha$ by Theorem~\ref{thm:unique}.1.
This still leaves two distinct possibilities of merging them into~$\prec_\alpha$.
We now show that one of them is actually ruled out, and this will
prove that $\prec_\alpha$ does not depend on~$\alpha$,
if this order is considered up to reversing.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{camodel}
\carc{140}{45}{1}{$\alpha(w')$}
\carc[endlabel=$\alpha(u)$,endlabelpos=inside]{0}{-90}{1}{}
\carc{70}{-30}{2}{$\alpha(w)$}
\carc{-45}{-140}{2}{$\alpha(u')$}
\end{camodel}
\end{tikzpicture}
\caption{Proof of Theorem~\protect\ref{thm:Aunique2}.}\label{fig:wwuu}
\end{figure}
Note that, since $\overline{G}$ is connected, $G$ cannot have universal vertices
and we can use Lemma~\ref{lem:one-endpoint}.
The assumptions of the theorem imply that both $U$ and~$W$ contain at least two
vertices. Let $w$ and~$w'$ be two vertices in~$W$. Since $w$ and~$w'$ are not twins,
they are distinguished by adjacency to some vertex $u\in U$. W.l.o.g.\
suppose that $w$ and~$u$ are adjacent, while $w'$ and~$u$ are not.
Since $w$~is not universal in~$G$, there is a vertex $u'$
in~$U$ non-adjacent to~$w$. Note that $\alpha(w')$ and~$\alpha(u)$
contain different endpoints of~$\alpha(w)$, and $\alpha(u')$ and~$\alpha(w)$
contain different endpoints of~$\alpha(u)$; see Fig.~\ref{fig:wwuu}.
It follows that the arcs~$\alpha(w')$, $\alpha(w)$, $\alpha(u)$, and~$\alpha(u')$ occur
in the arc model exactly in this circular order, irrespective of whether $\alpha(w')$
and~$\alpha(u')$ intersect or not.
Since the quadruple $w',w,u,u'$ was chosen in terms of the graph~$G$ alone
and $\alpha$ was supposed to be arbitrary, this conclusion holds true for
any proper arc representation of~$G$. Therefore, there is a unique way
of merging the restrictions of~$\prec_\alpha$ to~$U$ and~$W$ so as to obtain
an ordering of~$V(G)$ consistent with some geometric order.
This proves the desired uniqueness result.
\end{proof}
If a graph has an interval or arc representation,
we can obtain many other representations, for example, by cloning some points
of the circle. To disallow such trivial modifications, let us
impose some nonrestrictive conditions on interval and arc models.
Let $x$ be a point in an arc system~$\ensuremath{\mathcal{A}}$. We call $x$ \emph{inner}
if no arc of~$\ensuremath{\mathcal{A}}$ begins or ends at~$x$.
Suppose now that $\ensuremath{\mathcal{A}}$~is a proper intersection model of a graph~$G$.
We can modify $\ensuremath{\mathcal{A}}$ so that it remains a proper model of~$G$
while the following two conditions are true:
\begin{itemize}
\item
no two arcs share an endpoint; this can always be achieved by cloning
a shared endpoint (if $x$ is the right endpoint of~$A_1$ and the left endpoint of~$A_2$,
replace $x$ with the pair $x_1,x_2$ so that $A_1$ ends at $x_1$, $A_2$ starts at $x_2$,
and $x_1$~is the right neighbor of~$x_2$);
\item
there is no inner point (just remove all of them).
\end{itemize}
We call such arc models and representations \emph{sharp}.
Sharp interval representations are defined similarly.
Note that, if $G$~has $n$ vertices, any sharp
model of~$G$~has $m=2n$ points.
We now show that a sharp proper representation is
reconstructible from the associated geometric order.
Given two sharp arc representations $\alpha$ and $\alpha'$
of a graph $G$, we say that they are equal \emph{up to rotation}
of the circle $\mathbb{C}_{m}$ if there is a rotation $\sigma$
of $\mathbb{C}_{m}$ such that $\alpha'=\sigma\circ\alpha$.
If $\sigma$ is allowed to be also the reflection of $\mathbb{C}_{m}$,
then we say that $\alpha$ and $\alpha'$ are equal \emph{up to rotation and reflection}.
\begin{lemma}\label{lem:ReprOrder}
\begin{bfenumerate}
\item
If sharp proper interval representations of a graph~$G$ determine the same
geometric order, then they are equal; that is, if $\prec_\alpha=\prec_{\alpha'}$,
then $\alpha=\alpha'$.
\item
Suppose that $G$~has at most one universal vertex.
If sharp proper arc representations of~$G$ determine the same geometric order,
then they are equal up to rotation.
\end{bfenumerate}
\end{lemma}
\begin{proof}
{\bf 1.}
Let $\alpha$ be a sharp proper interval representation of a graph~$G$ and
$v_1,\ldots,v_n$ be the ordering of~$V(G)$ according to the geometric order $\prec_\alpha$
associated with $\alpha$. It suffices to notice that
$\alpha$~is completely determined by this order. Indeed, let $\alpha(v_i)=[a_i,b_i]$.
Then we must have
\begin{eqnarray}
b_i&=&a_i+1+|N(v_i)|\text{\ \ and}\label{eq:bi}\\
a_i&=&i+\left|\setdef{j<i}{v_j\notin N[v_i]}\right|.\label{eq:ai}
\end{eqnarray}
{\bf 2.}
Given the geometric order of the vertex set
associated with a sharp proper arc representation~$\alpha$ of~$G$,
we have to show that it determines $\alpha$ up to rotation.
Fix a sequence $v_1,\ldots,v_n$ according to this order.
From this sequence we can determine the clockwise neighborhood~$N^+[v_i]$
and the counter-clockwise neighborhood~$N^-[v_i]$ of any vertex~$v_i$.
Suppose that $\alpha(v_i)=[a_i,b_i]$ and $a_1=1$. The latter condition
can be ensured by shifting (renaming) the points in the circle~$\mathbb{C}_{2n}$,
which results just in a rotation of~$\alpha$.
By Lemma~\ref{lem:one-endpoint}, if
$v_i$~is adjacent to~$v_j$ then $\alpha(v_i)$ contains exactly
one of the endpoints~$a_j$ and~$b_j$.
We can now see that the start points~$a_i$
and end points~$b_i$ are uniquely determined. Equality~\refeq{bi} holds true
exactly as in part~1; whenever the right hand side exceeds $2n$, it has
to be decreased by this number. Equality~\refeq{ai} has to be adjusted.
If $v_i\notin N^+[v_1]$, then
\[
[1,a_i]=[1,b_1]\cup\setdef{a_j}{1<j\le i,a_j\notin[1,b_1]}\cup
\setdef{b_j}{1<j<i,b_j\notin[a_i,b_i]}.
\]
It follows that
\[
a_i=|[1,a_i]|=2+|N(v_1)|+\left|\setdef{j}{1<j\le i,v_j\notin N^+[v_1]}\right|+
\left|\setdef{j}{1<j<i,v_j\notin N^-[v_i]}\right|.
\]
If $v_i\in N^+[v_1]$, then
\[
[1,a_i]=\setdef{a_j}{1\le j\le i}\cup\setdef{b_j}{v_j\in N^-[v_1]\setminus N^-[v_i]},
\]
hence
\[
a_i=|[1,a_i]|=i+|N^-[v_1]\setminus N^-[v_i]|,
\]
completing the proof.
\end{proof}
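To make equalities~\refeq{bi} and~\refeq{ai} concrete, here is a small Python
sketch (our own illustration, not part of the proof) that reconstructs the sharp
proper interval representation of part~1 from a geometric order and the
adjacency structure of~$G$:
\begin{verbatim}
def sharp_interval_representation(order, adj):
    # order: the vertices v_1, ..., v_n in the geometric order;
    # adj: dict mapping each vertex to its set of neighbors.
    alpha = {}
    for i, v in enumerate(order, start=1):
        # a_i = i + #{ j < i : v_j not in N[v_i] }   (eq:ai)
        a = i + sum(1 for w in order[:i - 1] if w not in adj[v])
        # b_i = a_i + 1 + |N(v_i)|                   (eq:bi)
        b = a + 1 + len(adj[v])
        alpha[v] = (a, b)
    return alpha

# Example: the path 1-2-3 with geometric order 1, 2, 3.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(sharp_interval_representation([1, 2, 3], adj))
# {1: (1, 3), 2: (2, 5), 3: (4, 6)}
\end{verbatim}
The resulting intervals use the points $1,\ldots,2n$, each exactly once as an
endpoint, as required of a sharp model.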
Combining Lemma~\ref{lem:ReprOrder} with Theorem~\ref{thm:Aunique2}
(or with Roberts' uniqueness theorem for proper interval graphs),
we immediately obtain the following result.
\begin{theorem}\label{thm:prop-repr-unique}\mbox{}
\begin{bfenumerate}
\item
A twin-free, connected proper interval graph has, up to reflection,
a unique sharp proper interval representation.
\item
A twin-free, connected PCA graph with non-bipartite or connected complement has,
up to rotation and reflection, a unique sharp proper arc representation.
\end{bfenumerate}
\end{theorem}
\paragraph{Straight and round orientations.}
We conclude by discussing how Theorem~\ref{thm:prop-repr-unique} is related
to the results of Deng, Hell, and Huang~\cite{DengHH96}.
We begin with an overview of some concepts introduced in~\cite{DengHH96}.
Recall that an \emph{orientation} of a graph $G$ is a directed graph
obtained from $G$ by directing each edge. A directed graph obtained
in such a way is called \emph{oriented}.
Let $D$ be an orientation of a graph $G$.
A \emph{straight enumeration} of $D$ is an interval ordering of $\ensuremath{\mathcal{N}}[G]$
such that every vertex $v$ splits the interval $N[v]=[v^-,v^+]$
into the set $[v^-,v)$ of in-neighbors of $v$ and the set
$(v,v^+]$ of out-neighbors of $v$. Similarly,
a \emph{round enumeration} of $D$ is an arc ordering of $\ensuremath{\mathcal{N}}[G]$
such that $[v^-,v)$ and $(v,v^+]$ consist of, respectively, the in- and the out-neighbors
of $v$. If the orientation $D$ admits a straight (resp.\ round) enumeration,
it is called \emph{straight (resp.\ round)}.
Deng, Hell, and Huang~\cite{DengHH96} note a close connection between
proper interval/arc representations of a graph $G$ and its straight/round orientations:
$G$ is proper interval (resp.\ PCA) if and only if it has a straight (resp.\ round)
orientation. This connection enables obtaining Theorem~\ref{thm:prop-repr-unique}
from the following result in~\cite{DengHH96}.
\begin{theorem}[cf.\ Deng, Hell, and Huang~\cite{DengHH96}]\label{thm:orient-unique}\mbox{}
\begin{bfenumerate}
\item
A twin-free, connected proper interval graph has, up to reversal,
a unique straight orientation.
\item
A twin-free, connected PCA graph with non-bipartite or connected complement has,
up to reversal, a unique round orientation.
\end{bfenumerate}
\end{theorem}
Though obtained in our paper and in~\cite{DengHH96} by completely different methods,
Theorems~\ref{thm:prop-repr-unique} and~\ref{thm:orient-unique}
are equivalent statements of the same nature. For completeness we now
derive the latter result from the former.
\begin{proofof}{Theorem~\ref{thm:orient-unique}}
We show part 2; part 1 is similar.
Following~\cite{DengHH96}, we first notice that a proper arc representation of
a twin-free graph $G$ determines a round orientation of $G$ and vice versa.
Given a proper arc representation $\alpha$ of $G$,
define an orientation $D_\alpha$ of $G$ by directing each edge $\{u,v\}$
as $uv$ if $\alpha(u)$ contains the left endpoint of $\alpha(v)$.
The definition is unambiguous by Lemma~\ref{lem:one-endpoint}.
Note that the geometric order $\prec_\alpha$ is a round enumeration of~$D_\alpha$.
Note also that, if $\alpha'$ is a rotation of $\alpha$, then
$D_{\alpha'}=D_\alpha$. If $\alpha'$ is the reflection of $\alpha$, then
$D_{\alpha'}$ is the reversed version of $D_\alpha$.
As observed in~\cite[Proposition 2.6]{DengHH96} (see also~\cite[Theorem 3.1]{HellH95}),
every round orientation $D$ of $G$ is obtainable in this way,
that is, $D=D_\alpha$ for some proper arc representation $\alpha$ of $G$.
Under the assumptions made for $G$, Theorem~\ref{thm:prop-repr-unique}
states that $\alpha$ is unique up to rotation and reflection.
It follows that $D=D_\alpha$ is unique up to reversal.
\end{proofof}
\section{Related Work}
\label{sec:related}
Related work to this paper consists of approaches for source code summarization. As with many research areas, data-driven AI-based approaches have superseded heuristic/template-based techniques, though overall the field is quite new. Work by Haiduc~\emph{et al.}~\cite{haiduc2010supporting, haiduc2010use} in 2010 coined the term ``source code summarization'', and several heuristic/template-based techniques followed including work by Sridhara~\emph{et al.}~\cite{sridhara2010towards, sridhara2011automatically}, McBurney~\emph{et al.}~\cite{mcburney2016automatic}, and Rodeghero~\emph{et al.}~\cite{rodeghero2015eye}.
More recent techniques are data-driven, though the overall size of the field is small. Literature includes work by Hu~\emph{et al.}~\cite{hu2018deep, hu2018summarizing} and Iyer~\emph{et al.}~\cite{iyer2016summarizing}. Projects targeting problems similar to code summarization have been published widely, including on commit message generation~\cite{jiang2017automatically, loyola2017neural}, method name generation~\cite{allamanis2016convolutional}, pseudocode generation~\cite{oda2015learning}, and code search~\cite{gu2018deep}. Nazar~\emph{et al.}~\cite{nazar2016summarizing} provide a survey.
Of note is that no standard datasets for code summarization have yet been published. Each of the above papers takes an ad hoc approach, in which the authors download large repositories of code and apply their own preprocessing. There are few standard practices, leading to major differences in the reported results in different papers, as discussed in the previous section. For example, the works by LeClair~\emph{et al.}~\cite{leclair2019icse} and Hu~\emph{et al.}~\cite{hu2018deep} both modify the CODENN model from Iyer~\emph{et al.}~\cite{iyer2016summarizing} to work on Java methods and comments. LeClair~\emph{et al.} and Hu~\emph{et al.} report very disparate results: a BLEU-4 score of 6.3 for CODENN on one dataset, and 25.3 on another, even though both datasets were generated from Java source code repositories.
These disparate results happen for a variety of reasons, such as a difference in data set sizes and tokenization schemes. LeClair~\emph{et al.} use a data set of 2.1 million Java method-comment pairs while Hu~\emph{et al.} use a total of 69,708. Hu~\emph{et al.} also replace out of vocabulary (OOV) tokens in the comments with $<$UNK$>$ in the training, validation, and testing sets, while LeClair~\emph{et al.} remove OOV tokens from the training set only.
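To make the second difference concrete, here is a small Python sketch (ours;
the papers' actual pipelines differ in which splits each scheme is applied to)
contrasting the two out-of-vocabulary handling schemes:
\begin{verbatim}
def replace_oov(tokens, vocab, unk="<UNK>"):
    # Hu et al. style: substitute OOV tokens with a placeholder.
    return [t if t in vocab else unk for t in tokens]

def remove_oov(tokens, vocab):
    # LeClair et al. style: drop OOV tokens entirely.
    return [t for t in tokens if t in vocab]

vocab = {"returns", "the", "sum"}
print(replace_oov(["returns", "the", "checksum"], vocab))
# ['returns', 'the', '<UNK>']
print(remove_oov(["returns", "the", "checksum"], vocab))
# ['returns', 'the']
\end{verbatim}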
\section{Dataset Preparations}
\vspace{0.6cm}
The dataset we use in this paper is based on the dataset provided by LeClair~\emph{et al.}~\cite{leclair2019icse} in a pre-release. We used this dataset because it is both the largest and most recent in source code summarization. That dataset has its origins in the Sourcerer project by Lopes~\emph{et al.}~\cite{Lopes+Bajracharya+Ossher+Baldi:2010}, which includes over 51 million Java methods. LeClair~\emph{et al.} provided the dataset after minimal initial processing that filtered for Java methods with JavaDoc comments in English, and removed methods over 100 words long, as well as comments longer than 13 or shorter than 3 words. The result is a dataset of 2.1m Java methods and associated comments. LeClair~\emph{et al.} do additional processing, but do not quantify the effects of their decisions -- this is a problem because other researchers would not know which of the decisions to follow. We explore the following research questions to help provide guidelines and justifications for our design decisions in creating the dataset.
\subsection{Research Questions}
Our research objective and contribution in this paper is to quantify the effect of key dataset processing configurations, with the aim to make recommendations on which configurations should be used. We ask the following Research Questions:
\vspace{-0.1cm}
\begin{description}
\item[RQ$_{1}$] What is the effect of splitting by method versus splitting by project?
\vspace{-0.2cm}
\item[RQ$_{2}$] What is the effect of removing automatically generated Java methods?
\end{description}
\vspace{-0.1cm}
The scope of the dataset in this paper is source code summarization of Java methods -- the dataset contains pairs of Java methods and JavaDoc descriptions of those methods. However, we believe these RQs will provide guidance for similar datasets, e.g., C/C++ functions and descriptions, or other units of granularity, e.g., code snippets instead of methods/functions.
The rationale behind RQ$_1$ is that many papers split the dataset into training, validation, and test sets at the unit of granularity under study. For example, dividing all Java methods in the dataset into 80\% in training, 10\% in validation, and 10\% in testing. However, this results in a situation where it is possible for code from one project to be in both the testing set and the training set. It is possible that similar vocabulary and code patterns are used in methods from the same project, and even worse, it is possible that overloaded methods appear in both the training and test sets. However, this possibility is theoretical and a negative effect has never been shown. In contrast, we split by project: randomly divide the Java projects into training/validation/test groups, then place all methods from e.g. test projects into the test set.
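As a sketch of the split-by-project strategy (with illustrative names; this is
not the exact pipeline used in our experiments), projects rather than
individual methods are shuffled:
\begin{verbatim}
import random

def split_by_project(methods_by_project, ratios=(0.8, 0.1, 0.1), seed=0):
    # methods_by_project: dict project_id -> list of (method, comment) pairs
    projects = sorted(methods_by_project)
    random.Random(seed).shuffle(projects)
    n = len(projects)
    cut1 = int(ratios[0] * n)
    cut2 = cut1 + int(ratios[1] * n)
    def gather(ids):
        return [pair for p in ids for pair in methods_by_project[p]]
    return (gather(projects[:cut1]),       # training
            gather(projects[cut1:cut2]),   # validation
            gather(projects[cut2:]))       # testing
\end{verbatim}
Because whole projects are assigned to one side of the split, no method from a
test project can leak into the training set.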
The rationale behind RQ$_2$ is that automatically generated code is common in many Java projects~\cite{shimonaka2016identifying}, and that it is possible that very similar code is generated for projects in the training set and the testing/validation sets. Shimonaka~\emph{et al.}~\cite{shimonaka2016identifying} point out that the typical approach for identifying auto-generated code is a simple case-insensitive text search for the phrase ``generated by'' in the comments of the Java files. LeClair~\emph{et al.}~\cite{leclair2019icse} report that this search turns out to be quite aggressive, catching nearly all auto-generated code in the repository. However, as with RQ$_1$, the effect of this filter is theoretical and has not been measured in practice.
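The heuristic amounts to a one-line predicate; a sketch (ours) of the resulting
file filter is:
\begin{verbatim}
def looks_auto_generated(file_text):
    # Case-insensitive search for the phrase "generated by".
    return "generated by" in file_text.lower()

# Example: filtering a corpus given as {path: file_text}.
corpus = {"A.java": "/* Generated by protoc */ class A {}",
          "B.java": "/** Parses the input. */ class B {}"}
kept = {p: t for p, t in corpus.items() if not looks_auto_generated(t)}
# kept now contains only "B.java"
\end{verbatim}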
\begin{table*}[t!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{SplittingStrategy}&Set1&Set2&Set3&Set4 \\
\hline
Split by project&17.81&16.73&17.11&17.99 \\
\hline
Split by function&20.97&23.74&23.67&23.68 \\
\hline
Auto-generated code included&19.11&19.09&18.04&15.66\\
\hline
\end{tabular}
\caption{Average BLEU Scores from 15 epochs for each of the four sets.}
\label{table:1}
\end{table*}
\vspace{-0.2cm}
\subsection{Methodology}
\begin{figure}[t!]
\vspace{-0.5cm}
\centering
\includegraphics[width=8cm]{figures/rq3wc.png}
\vspace{-0.8cm}
\caption{Word count histogram for code, comment, and the book summaries. About 22\% of words occur one time across all Java methods, versus 35\% in the book summaries.}
\label{fig:rq3wc}
\vspace{-0.5cm}
\end{figure}
Our methodology for answering RQ$_1$ is to compare the results of a standard NMT algorithm with the dataset split by project, to the results of the same algorithm on the same dataset, except with the dataset split by function. But because random splits could be ``lucky'', we created four random datasets split by project, and four split by function, seen in Table~\ref{table:2}. We then use an off-the-shelf, standard NMT technique called {\small\texttt{attendgru}} provided pre-release by LeClair~\emph{et al.}~\cite{leclair2019icse} and used as a baseline approach in their recent paper. The technique is just an attentional encoder/decoder based on single-layer GRUs, and represents a strong NMT baseline used by many papers. We train {\small\texttt{attendgru}} with each of the four training sets, find the best-performing model using the validation set associated with that training set (out of 10 maximum epochs), and then obtain test performance for that model. We report the average of the results over the four random splits. Note that we used the same configuration for {\small\texttt{attendgru}} as LeClair~\emph{et al.} report, except that we reduced the output vocabulary to 10k to reduce model size.
Our process for RQ$_2$ is similar. We created four random split-by-project sets in which automatically generated code was \emph{not} removed. Then we compared them to the four random split-by-project sets we created for RQ$_1$ (in which auto-generated code was removed).
\subsection{Dataset Characteristics}
\begin{figure}[t!]
\vspace{-0.5cm}
\centering
\includegraphics[width=8cm]{figures/rq3dc.png}
\vspace{-0.8cm}
\caption{Histogram of word occurrences per document. Approximately 34\% of words occur in only one Java method, 20\% occur in two methods, etc.}
\label{fig:rq3dc}
\vspace{-0.5cm}
\end{figure}
We make three observations about the dataset that, in our view, are likely to affect how researchers design source code summarization algorithms. First, as depicted in Figure~\ref{fig:rq3wc}, words appear to be used more often in code as compared to natural language -- there are fewer words used only one or two times, and in general more used 3+ times. At the same time (Figure~\ref{fig:rq3dc}), the pattern for word occurrences per document appears similar, implying that even though words in code are repeated, they are repeated often in the same method and not across methods. Even though this may suggest that the occurrence of unique words in source code is isolated enough to have little effect on BLEU score, we show in Section~\ref{sec:results} that this word overlap causes BLEU score inflation when splitting by function. This is important because the typical MT use case assumes that a ``dictionary'' can be created (e.g., via attention) to map words in a source to words in a target language. An algorithm applied to code summarization needs to tolerate multiple occurrences of the same words. To compare the source code, comments, and natural language datasets, we tokenized our data by removing all special characters, lower casing, and, for source code, splitting camel case into separate tokens.
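A sketch of this tokenization (one plausible implementation; the scripts
shipped with the dataset may differ in details such as digit handling) is:
\begin{verbatim}
import re

def split_camel(token):
    # "getMaxValue" -> ["get", "Max", "Value"]; "URLParser" -> ["URL", "Parser"]
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", token)

def tokenize(text, is_code=False):
    words = re.sub(r"[^A-Za-z0-9 ]", " ", text).split()
    if is_code:
        words = [part for w in words for part in split_camel(w)]
    return [w.lower() for w in words]

tokenize("getMaxValue(int x)", is_code=True)
# ['get', 'max', 'value', 'int', 'x']
\end{verbatim}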
A related observation is that Java methods tend to be much longer than comments (Figure~\ref{fig:rq3box} areas (c) and (d)). Typically, code summarization tools take inspiration from NMT algorithms designed for cases of similar encoder/decoder sequence length. Many algorithms such as recurrent networks are sensitive to sequence length, and may not be optimal off-the-shelf.
\begin{figure}[b!]
\vspace{-0.5cm}
\centering
\includegraphics[width=8cm]{figures/rq3boxplots.png}
\vspace{-0.7cm}
\caption{Overlap of words between methods and comments (areas a and b). Over 30\% of words in comments, on average, also occur in the method they describe. About 11\% of words in code, on average, also occur in the comment describing it. Also, word lengths of methods and comments (areas c and d). Methods average around 30 words, versus 10 for comments.}
\label{fig:rq3box}
\vspace{-0.6cm}
\end{figure}
A third observation is that the words in methods and comments tend to overlap, but in fact a vast majority of words are different (70\% of words in code summary comments do not occur in the code method, see Figure~\ref{fig:rq3box} area (b)). This situation makes the code summarization problem quite difficult because the words in the comments represent high level concepts, while the words in the source code represent low level implementation details -- a situation known as the ``concept assignment problem''~\cite{biggerstaff1993concept}. A code summarization algorithm cannot merely learn a word dictionary, as it might in a typical NMT setting, nor select summarizing words from the method, as a natural language summarization tool might. A code summarization algorithm must learn to identify concepts from code details, and assign high level terms to those concepts.
\section{Discussion}
\vspace{-0.1cm}
This paper provides benefits to researchers in the field of automatic source code summarization in two areas. First, we provide insight into the effects of splitting a Java method and comment dataset by project or by function, and how these different splitting methods affect the task of source code summarization. Second, we provide a dataset of 2.1m pairs of Java methods and one-sentence method descriptions in a cleaned and tokenized format (discussed in Section~\ref{sec:dataset}) as well as a training, validation, and testing split.
Note however that there may be cases where researchers wish to adapt our recommendations for a specific context. For example, when generating comments in an IDE. The problem of code summarization in an IDE is slightly different than what we have presented, and would benefit from including code-comment pairs from the same project. IDEs have the advantage of access to a programmer's source code and edit history in real time -- they do not rely on a repository collected post-hoc. Moreno~\emph{et al.}~\cite{Moreno:2013:Jsummarizer} take advantage of this information to generate Java class summaries in an eclipse plugin -- their tool uses both the class and project level information from completed projects to generate these summaries, while not using any information from outside sources.
However, even in this case, care must be taken to avoid unrealistic scenarios, such as ensuring that the training set consists only of code older than the code in the test set. For example, consider a programmer at revision 75 of his or her project who requests automatically generated comments from the IDE, then goes on to write a total of 100 revisions for the project. An experiment simulating this situation should only use revisions 1-74 as training data -- revisions 76+ are ``in the future'' from the perspective of the real world situation.
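A sketch of this ``no data from the future'' rule (assuming a hypothetical
record layout in which each sample carries the revision number it came from):
\begin{verbatim}
def temporal_split(samples, request_revision):
    # Keep for training only samples strictly older than the request time.
    train = [s for s in samples if s["revision"] < request_revision]
    future = [s for s in samples if s["revision"] >= request_revision]
    return train, future

samples = [{"revision": r} for r in range(1, 101)]   # revisions 1..100
train, future = temporal_split(samples, request_revision=75)
# len(train) == 74: revisions 75+ never enter the training data
\end{verbatim}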
\section{Downloadable Dataset}
\label{sec:dataset}
In our online appendix we have made three downloadable sets available, as seen in Table \ref{table:3}. The first is our SQL database, generated using the tool from McMillan~\emph{et al.}~\cite{mcmillan2011portfolio}, that contains the file name, method comment, and start/end lines for each method; we call this dataset our ``Raw Dataset''. We also provide a link to the Sourcerer dataset \cite{springerlink:10.1007/s10618-008-0118-x}, which is used as a base for the dataset in LeClair~\emph{et al.}~\cite{leclair2019icse}. In addition to the Raw Dataset, we also provide a ``Filtered Dataset'' that consists of a set of 2.1m method-comment pairs. In the Filtered Dataset we removed auto-generated source code files, as well as all methods that do not have an associated comment. No preprocessing was applied to the source code and comment strings in the Filtered Dataset. The third downloadable set we supply is the ``Tokenized Dataset''. In the Tokenized Dataset, we processed the source code and comments from the Filtered Dataset identically to the tokenization scheme described in Section 5 of~\cite{leclair2019icse}. This set also provides a training, validation, and test set as well as a script to easily reshuffle these sets.
\vspace{0.2cm}
The URL for download is:
\textbf{\url{http://leclair.tech/data/funcom}}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
Dataset&Methods&Comments \\
\hline
Raw Dataset&51,841,717&7,063,331 \\
\hline
Filtered Dataset&2,149,121&2,149,121 \\
\hline
Tokenized Dataset&2,149,121&2,149,121\\
\hline
\end{tabular}
\caption{Datasets available for download.}
\vspace{-0.6cm}
\label{table:3}
\end{table}
\section{Introduction}
\label{sec:intro}
Source Code Summarization is the task of writing short, natural language descriptions of source code~\cite{eddy2013evaluating}. The most common use for these descriptions is in software documentation, such as the summaries of Java methods in JavaDocs~\cite{kramer1999api}. Automatic generation of code summaries is a rapidly-expanding research area at the crossroads of Computational Linguistics and Software Engineering, as a growing tally of new workshops and NSF-sponsored meetings have recognized~\cite{NL4SEAAAI:2018, nlse2015}. The reason, in a nutshell, is that the vast majority of code summarization techniques are adaptations of techniques originally designed to solve NLP problems.
A major barrier to ongoing research is a lack of standardized datasets. In many NLP tasks such as Machine Translation there are large, curated datasets (e.g. Europarl~\cite{europarl}) used by several research groups. The benefit of these standardized datasets is twofold: First, scientists are able to evaluate new techniques using the same test conditions as older techniques. And second, the datasets tend to conform to community customs of best practice, which avoids errors during evaluation. These benefits are generally not yet available to code summarization researchers; while large, public code repositories do exist, most research projects must parse and process these repositories on their own, leading to significant differences on one project to another. The result is that research progress is slowed as reproducibilty of earlier results is difficult.
Inevitably, differences in dataset creation also occur that can mislead researchers and over- or understate the performance of some techniques. For example, a recent source code summarization paper reports achieving 25 BLEU when generating English descriptions of Java methods with an existing technique~\cite{gu2018deep}, which is 5 points higher than the original paper reports~\cite{iyer2016summarizing}. The paper also reports 35+ BLEU for a vanilla seq2seq NMT model, which is 16 points higher than what we are able to replicate. While it is not our intent to single out any one paper, we do wish to call attention to a problem in the research area generally: a lack of standard datasets leads to results that are difficult to interpret and replicate.
In this paper, we propose a set of guidelines for building datasets for source code summarization techniques. We support our guidelines with related literature or experimentation where strong literary consensus is not available. We also compute several metrics related to word usage to guide future researchers who use the dataset. We have made a dataset of over 2.1m Java methods and summaries from over 28k Java projects available via an online appendix (URL in Section~\ref{sec:dataset}).
\section*{Acknowledgments}
{This work is supported in part by the NSF CCF-1452959, CCF-1717607, and CNS-1510329 grants. Any opinions, findings, and conclusions expressed herein are the authors' and do not necessarily reflect those of the sponsors.}
\section{Experimental Results \& Conclusion}
\label{sec:results}
\vspace{-0.3cm}
In this section, we answer our Research Questions and provide supporting evidence and rationale.
\vspace{-0.1cm}
\subsection{RQ$_1$: Splitting Strategy}
\begin{figure}[b!]
\centering
\includegraphics[width=8cm]{figures/rq1.png}
\vspace{-0.8cm}
\caption{{\small Boxplots of BLEU scores from {\small\texttt{attendgru}} for four runs under configurations for RQ$_1$ and RQ$_2$.}}
\label{fig:rq1rq2}
\end{figure}
We observe a large ``false'' boost in BLEU score when splitting by function instead of by project (see Figure~\ref{fig:rq1rq2}). We consider this boost false because it involves placing functions from projects in the test set into the training set -- an unrealistic scenario. The average of four runs when split by project was 17.41 BLEU, a result relatively consistent across the splits (maximum was 18.28 BLEU, minimum 16.10). In contrast, when split by function, the average BLEU score was 23.02, an increase of nearly one third, as seen in Table~\ref{table:1}. Our conclusion is that splitting by function is to be avoided during dataset creation for source code summarization. Beyond this narrow answer to the RQ, in general, any leakage of information from test set projects into the training or validation sets ought to be strongly avoided, even if the unit of granularity is smaller than a whole project. We reiterate from Section~\ref{sec:intro} that this is not a theoretical problem: many papers published using data-driven techniques for code summarization and other research problems split their data at the level of granularity under study.
\subsection{RQ$_2$: Removing Autogen. Code}
\begin{table*}[t!]
\centering
\begin{tabular}{|c|c|c|c|c||c|}
\hline
&BP Set1&BP Set2&BP Set3&BP Set4&BF All Sets \\
\hline
Training Set&1,935,860&1,950,026&1,942,291&1,933,677&1,943,723 \\
\hline
Validation Set&105,693&100,920&104,837&105,997&107,984 \\
\hline
Testing Set&107,568&98,175&101,993&109,447&107,984\\
\hline
\end{tabular}
\caption{Number of method-comment pairs in the train, validation, test sets used in each random split set when split by project (BP) and by function (BF).}
\label{table:2}
\vspace{-0.4cm}
\end{table*}
We also found a boost in BLEU score when not removing automatically generated code, though the difference was less than observed for RQ$_1$. The baseline performance increased to 18 BLEU when not removing auto-generated code, and it varied much more depending on the split (some projects have much more auto-generated code than others). Our recommendation is that, in general, reasonable precautions should be implemented to remove auto-generated code from the dataset because we do find evidence that auto-generated code can affect the results of experiments.
\section{Introduction}
One of the purposes of spintronics is to control the magnetization direction of a ferromagnet by using electric currents instead of an external magnetic field.
Recently the magnetization reversal due to spin-transfer-torque effect \cite{Slonczewski96, Berger96} has been intensively investigated, which requires multilayer structures such as spin valves, tunnel junctions or domain walls.
On the other hand, another method to switch magnetization direction, which is due to the spin-orbit coupling (SOC), was suggested theoretically \cite{Tan07, Obata08, Manchon08, Manchon09, Matos-Abiague09, Tan11} and verified experimentally \cite{Chernyshov09, Miron10}.
For instance, a giant spin torque is observed in an asymmetric ferromagnetic metal layer AlO${}_x$/Co/Pt, and an effective current-induced magnetic field of $1$ T for a current density of $10^8$ A/cm${}^2$ is reported \cite{Miron10}.
In the above systems, the Rashba-type spin splitting is dominant owing to the interplay between the asymmetric structure and a strong SOC derived from the Pt atom.
Rashba systems \cite{Rashba60} under an external electric field or a current injection become spin-polarized because the spin distribution on the Fermi surfaces becomes imbalanced.
In ferromagnetic metals, on the other hand, there exists an exchange coupling between the conduction-electron spins and the localized spins.
In the non-equilibrium state, the magnetization in ferromagnetic Rashba systems therefore experiences a macroscopic spin torque.
If we assume a single-domain ferromagnet, the stability of the magnetic ordering is characterized by the anisotropy field.
Thus the magnetization can be reversed when the spin torque overcomes the anisotropy field.
Contrary to a spin-transfer torque, there is no transfer of spin angular momentum from outside.
Rather, orbital angular momentum in a crystal is converted to spin angular momentum via the spin-orbit coupling, and is transfered to a ferromagnet.
This mechanism is intrinsic to the band structure and does not require two non-collinear ferromagnets.
Moreover, it is theoretically reported that compared with some systems with the spin-orbit coupling due to impurities or Luttinger spin-orbit bands, the Rashba system presents a giant spin torque to reverse the magnetization owing to the inversion asymmetry \cite{Manchon09}.
Therefore, a spin torque due to the Rashba spin-orbit coupling attracts many interests as a realistic candidate in spintronic applications.
In this paper, we explore the possibility of current-induced magnetization reversal by theoretically examining three-dimensional models with the Rashba SOC (3D Rashba models), and compare the results with those for 2D Rashba models.
We focus on the low-density regime, where only the lower band crosses the Fermi energy.
This regime has not been studied even for 2D Rashba models, and it becomes realistic when the Rashba parameter is large, as in the recently discovered bulk Rashba semiconductor BiTeI \cite{Ishizaka11,Bahramy11}.
In this material, the Rashba effect is induced by the structural inversion asymmetry in the bulk crystal structure, and therefore the Rashba SOC is much stronger than the typical value of the Rashba SOC in 2D semiconductor heterostructures \cite{Nitta97} or that in metal surfaces \cite{LaShell96, Ast07}.
As we vary the Fermi energy, the topology of the Fermi surface changes.
Correspondingly, we find that the spin torque as a function of the Fermi energy $E_F$ behaves differently on the two sides of the topological transition.
Moreover, we examine the spin-torque efficiency, i.e., the spin torque divided by the applied electric current, in order to discuss how to enhance the efficiency in connection with the current-induced magnetization reversal.
\section{3D Rashba model}
We calculate the spin torque on the magnetization of a ferromagnet driven by the spin polarization of conduction electrons in systems with a strong SOC.
We consider three-dimensional models with the Rashba effect, i.e., 3D Rashba models \cite{Ishizaka11, Bahramy11}.
We take the direction of structural inversion symmetry breaking as the $z$ axis, and the conduction electrons move in three dimensions.
In our model, we also include localized spins coupled to the conduction electrons via the exchange coupling.
Our Hamiltonian is thus described by
\begin{equation}
{\cal H}=\frac{\hbar^2}{2m_{xy}^*}(k_x^2+k_y^2)+\frac{\hbar^2}{2m_{z}^*}k_z^2+\alpha_R\bm{e}_z\cdot (\bm{k}\times \bm{\sigma})-J_{sd}\bm{M}\cdot \bm{\sigma},
\end{equation}
where $m_{xy}^*$ and $m_z^*$ represent the effective masses of conduction electrons within the $xy$ plane and along the $z$ axis, respectively, $\alpha_R$ represents the Rashba parameter, $J_{sd}$ is an exchange coupling between conduction electrons and the magnetization, $\bm{M}=(\cos\varphi_M,\sin\varphi_M,0)$ is the (in-plane) direction of the magnetization, and $\bm{\sigma}=(\sigma_x, \sigma_y, \sigma_z)$ is the vector of Pauli matrices.
$\bm{e}_z$ denotes the unit vector in the $z$ direction.
The eigenenergies and eigenstates of the above Hamiltonian are respectively given by
\begin{eqnarray}
E_s(\bm{k}) &=& \frac{\hbar^2}{2m_{xy}^*}(k_x^2+k_y^2)+\frac{\hbar^2}{2m_{z}^*}k_z^2 \nonumber \\
&+& s \sqrt{(\alpha_R k_y-J_{sd}\cos\varphi_M)^2+(\alpha_R k_x+J_{sd}\sin\varphi_M)^2}, \label{Es} \nonumber \\
& & \\
\Psi_{\bm{k},s} &=& \frac{1}{\sqrt{2}}\left(
\begin{array}{c}
se^{i\gamma_{\bm{k}}} \\
1
\end{array}
\right)e^{i\bm{k}\cdot \bm{r}}, \label{wf}
\end{eqnarray}
where $s$ is a band index with $+1$ for the upper band and $-1$ for the lower band and $\tan\gamma_{\bm{k}}\equiv \frac{\alpha_R k_x+J_{sd}\sin\varphi_M}{\alpha_R k_y-J_{sd}\cos\varphi_M}$.
Equations (\ref{Es}) and (\ref{wf}) look similar to those of 2D Rashba models, but the shape of the Fermi surfaces is nontrivial in 3D Rashba models, as shown in Fig. \ref{FS} (a).
In the regime of strong exchange coupling, i.e., $J_{sd}\gg k_F\alpha_R$, the Fermi surfaces are mainly governed by the Zeeman splitting, where $k_F$ denotes the Fermi wave number.
When the Fermi energy crosses both the upper and lower bands (which we refer to as the high-density regime), the system has two ellipsoidal Fermi surfaces, a larger and a smaller one.
When the Fermi energy lies on only the lower band (which we refer to as the low-density regime), the system has a single ellipsoidal Fermi surface.
In the regime of weak exchange coupling, i.e., $k_F\alpha_R\gg J_{sd}$, on the other hand, the Fermi surfaces are mainly determined by the Rashba-type spin splitting.
In the high-density regime, an apple-shaped and a lemon-shaped Fermi surface are obtained.
In the low-density regime, we obtain a torus-shaped (doughnut-like) Fermi surface, and the spin density in wave-number space points along the azimuthal direction in the $xy$ plane.
From Fig. \ref{FS} (a), we find that there exists a topological transition of the Fermi surfaces, namely a Lifshitz transition \cite{Lifshitz60}, at an intermediate value of the exchange coupling.
In the low-density regime, as $J_{sd}/\alpha_R$ grows, the topology of the Fermi surface changes from the torus $T^2$ to the sphere $S^2$.
We remark that the topological transition occurs on the curve
\begin{equation*}
E=\begin{cases} J_{sd}-m^*_{xy}\alpha_R^2/(2\hbar^2), & \alpha_R<\hbar \sqrt{J_{sd}/m^*_{xy}},\\ \hbar^2 J_{sd}^2/(2m^*_{xy} \alpha_R^2), & \alpha_R\geq \hbar \sqrt{J_{sd}/m^*_{xy}}, \end{cases}
\end{equation*}
and that the band bottom lies on the curve $E=-J_{sd}-m^*_{xy}\alpha_R^2/(2\hbar^2)$, as shown in Fig. \ref{FS} (b).
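These piecewise expressions can be verified numerically; note also that the two branches of the transition curve agree at $\alpha_R=\hbar\sqrt{J_{sd}/m^*_{xy}}$, where both reduce to $J_{sd}/2$. The following sketch (with hypothetical parameter values; the mass is chosen at a BiTeI-like scale) scans $E_{\pm}(\bm{k})$ along the line $k_y=k_z=0$ for $\varphi_M=\pi/2$, which contains the band extrema for this geometry, and compares the minima of the two bands with the curves quoted above:
\begin{verbatim}
import numpy as np

# Sanity check of the band-extremum curves (hypothetical parameters).
hbar2_over_m = 1.0 / 0.014   # hbar^2/m*_{xy} in eV AA^2 (BiTeI-like)
J = 0.1                      # exchange coupling J_sd in eV (hypothetical)

def bands(kx, alpha):
    # E_{+-} of Eq. (2) along k_y = k_z = 0 with phi_M = pi/2
    eps = 0.5 * hbar2_over_m * kx**2
    gap = np.abs(alpha * kx + J)
    return eps - gap, eps + gap

for alpha in (1.0, 4.0):     # eV AA: below / above hbar*sqrt(J/m)
    kx = np.linspace(-1.0, 1.0, 200001)
    Em, Ep = bands(kx, alpha)
    E0 = alpha**2 / (2.0 * hbar2_over_m)    # = m* alpha^2/(2 hbar^2)
    bottom = -J - E0                        # predicted band bottom
    crit = J - E0 if 2 * E0 < J else J**2 / (4 * E0)  # transition
    print(alpha, Em.min() - bottom, Ep.min() - crit)  # both ~ 0
\end{verbatim}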
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{FS5.eps}
\end{center}
\caption{(Color online) Fermi surfaces in the 3D Rashba model and their topological change. (a) Schematic Fermi surfaces in the strong ($J_{sd}\gg k_F\alpha_R$), intermediate, and weak ($k_F\alpha_R\gg J_{sd}$) exchange coupling regimes, in the high- and low-density regimes.
(b) Phase diagram of the high- and low-density regimes as a function of $\alpha_R$.
}
\label{FS}
\end{figure}
\section{Spin torque}
From the Heisenberg equation of motion for the conduction-electron spin, the spin-continuity equation is deduced:
\begin{equation}
\frac{d \langle \bm{s}\rangle}{dt}+\nabla\cdot {\cal J}_s=-\frac{J_{sd}}{\hbar} \bm{M}\times \langle \bm{s}\rangle+\frac{\alpha_R}{\hbar} \langle (\bm{k}\times \bm{\sigma})\times \bm{e}_z \rangle, \label{sce}
\end{equation}
where $\bm{s}$ refers to the spin-density operator, ${\cal J}_s$ refers to the spin current tensor and $\langle\cdots \rangle$ denotes the quantum average.
The first term on the right-hand side of Eq. (\ref{sce}) represents the current-induced spin torque acting on the magnetization, which we denote by $\bm{T}$.
The second term represents the torque due to the effective magnetic field introduced by the Rashba SOC.
We calculate the spin polarization and the electric current of the conduction electrons under an electric field using the Boltzmann transport equation
\begin{equation}
-\frac{e}{\hbar}\bm{E}\cdot \frac{\partial f^0_{\bm{k},s}}{\partial \bm{k}}=\sum_{\bm{k}',s'} W^{s s'}_{\bm{k} \bm{k}'} (f_{\bm{k},s}-f_{\bm{k}',s'}), \label{Blz1}
\end{equation}
where $e$ represents the electric charge, $\bm{E}=(E\sin\theta_E\cos\varphi_E,E\sin\theta_E\sin\varphi_E,E\cos\theta_E)$ is an external electric field, and $f^0_{\bm{k},s}=1/(e^{\beta(E_s-\mu)}+1)$ is the Fermi distribution function with band index $s$.
Within the Boltzmann transport theory, the electric current and the spin density respectively read $\bm{j}_{3D}=-\frac{e}{V} \sum_{\bm{k},s=\pm 1} f_{\bm{k},s} \bm{v}_{\bm{k},s}$, where $\hbar \bm{v}_{\bm{k},s}=\frac{\partial E_s}{\partial \bm{k}}=(\frac{\hbar^2}{m^*_{xy}}k_x+s\alpha_R \sin\gamma_{\bm{k}}, \frac{\hbar^2}{m^*_{xy}}k_y+s\alpha_R \cos\gamma_{\bm{k}}, \frac{\hbar^2}{m^*_z}k_z)$ and $\langle \bm{s}\rangle=\frac{1}{V} \sum_{\bm{k},s=\pm 1} f_{\bm{k},s} \bm{s}_{\bm{k},s}$, where $\bm{s}_{\bm{k},s}=\Psi^\dagger_{\bm{k},s} \bm{s} \Psi_{\bm{k},s}=s(\cos\gamma_{\bm{k}},-\sin\gamma_{\bm{k}},0)$.
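As a quick symbolic check, the expression $\bm{s}_{\bm{k},s}=s(\cos\gamma_{\bm{k}},-\sin\gamma_{\bm{k}},0)$ follows directly from the spinor in Eq. (\ref{wf}); a minimal sketch:
\begin{verbatim}
import sympy as sp

# Spin expectation values of the spinor (s*exp(i*gamma), 1)/sqrt(2).
g, s = sp.symbols('gamma s', real=True)
psi = sp.Matrix([s * sp.exp(sp.I * g), 1]) / sp.sqrt(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
print([sp.simplify((psi.H * m * psi)[0, 0]) for m in (sx, sy, sz)])
# -> [s*cos(gamma), -s*sin(gamma), s**2/2 - 1/2];
#    the last entry vanishes for the band indices s = +1, -1.
\end{verbatim}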
Under the short-range impurity potential $V(\bm{r})\equiv V\delta(\bm{r})$, the scattering probability reads
\begin{equation}
W^{s s'}_{\bm{k} \bm{k}'}=\frac{\pi n_i}{\hbar} V^2 (1+ss' \cos(\gamma_{\bm{k}}-\gamma_{\bm{k}'})) \delta(E_s(\bm{k})-E_{s'}(\bm{k}')), \label{Blz2}
\end{equation}
where $n_i$ is the impurity concentration.
In this study, we adopt the approximation of a constant relaxation time and discuss effects beyond the present approximation later.
In the following, we consider the two limiting cases of $J_{sd}\gg \alpha_R k_F$ and $\alpha_R k_F \gg J_{sd}$, and calculate the spin torque in high-density and low-density regimes for each limiting case.
We emphasize that in the 2D Rashba models, the Rashba SOC is typically small, and only the high-density regime is considered in general; in contrast, in the 3D Rashba models such as BiTeI, the Rashba SOC is strong and the low-density regime is significant for experiments.
\subsection{Strong exchange coupling regime}
We consider the regime of a strong exchange coupling, i.e., $J_{sd}\gg k_F \alpha_R$.
For simplicity, we retain terms up to first order in $k_F\alpha_R /J_{sd}$, that is, $E_s(\bm{k})\simeq \frac{\hbar^2}{2m_{xy}^*}(k_x^2+k_y^2)+\frac{\hbar^2}{2m_{z}^*}k_z^2+s\left( J_{sd}+\frac{1}{2}\alpha_R(k_x\sin\varphi_M-k_y\cos\varphi_M) \right)$.
To the same order, we have
\begin{eqnarray}
\cos\gamma_{\bm{k}} &\simeq& -\cos\varphi_M \nonumber \\
& & +\frac{\alpha_R}{J_{sd}}(k_y\sin^2\varphi_M+k_x\sin\varphi_M \cos\varphi_M), \\
\sin\gamma_{\bm{k}} &\simeq& \sin\varphi_M \nonumber \\
& & +\frac{\alpha_R}{J_{sd}}(k_x\cos^2\varphi_M+k_y\sin\varphi_M \cos\varphi_M).
\end{eqnarray}
Fermi surfaces change topologically when $E_F=J_{sd}$ as shown in Fig. \ref{zee} (a).
In the high-density regime ($J_{sd}<E_F$), integrating over the Fermi surfaces, the spin torque and the electric conductivities within the $xy$ plane and along the $z$ axis are calculated as
\begin{eqnarray}
\bm{T}_{3D} &=& \frac{\sqrt{2}}{3\pi^2}\frac{e\tau}{\hbar^5}\alpha_R m^*_{xy}\sqrt{m_z^*} \left[ (E_F-J_{sd})^{3/2}-(E_F+J_{sd})^{3/2} \right] \nonumber \\
& & \times E_{\parallel} \cos(\varphi_M-\varphi_E) \bm{e}_z, \label{hig} \\
\sigma^{\parallel}_{3D} &=& \frac{\sqrt{2}}{3\pi^2} \frac{e^2\tau}{\hbar^3} \sqrt{m_z^*} \left[ (E_F-J_{sd})^{3/2}+(E_F+J_{sd})^{3/2} \right], \\
\sigma^z_{3D} &=& \frac{m_{xy}^*}{m_z^*}\sigma^{\parallel}_{3D},
\end{eqnarray}
where $E_{\parallel}$ denotes the component of the external electric field in the $xy$ plane.
In Eq. (\ref{hig}), the first and second terms come from the upper and lower bands, respectively.
These two contributions to $\bm{T}_{3D}$ partially cancel each other.
We also calculate the spin torque in 2D Rashba models for a comparison between 3D and 2D Rashba models.
The Hamiltonian in 2D is described by ${\cal H}_{2D}=\frac{\hbar^2}{2m^*}(k_x^2+k_y^2)+\alpha_R\bm{e}_z\cdot (\bm{k}\times \bm{\sigma})-J_{sd}\bm{M}\cdot \bm{\sigma}$.
In a similar way, we obtain the spin torque as $\bm{T}_{2D}=-\frac{1}{\pi}\frac{e\tau E_{\parallel}}{\hbar^4} m^*\alpha_R J_{sd}\cos(\varphi_M-\varphi_E) \bm{e}_z$ and the electric conductivity as $\sigma^{\parallel}_{2D}=\frac{1}{\pi}\frac{e^2\tau}{\hbar^2} E_F$.
In the low-density regime ($-J_{sd}<E_F<J_{sd}$), on the other hand, the electric current and the spin density respectively read $\bm{j}_{3D}=-\frac{e}{V} \sum_{\bm{k}} f_{\bm{k},-1} \bm{v}_{\bm{k},-1}$ and $\langle \bm{s}\rangle=\frac{1}{V} \sum_{\bm{k}} f_{\bm{k},-1} \bm{s}_{\bm{k},-1}$.
With a similar calculation, we obtain
\begin{eqnarray}
\bm{T}_{3D} &=& -\frac{\sqrt{2}}{3\pi^2}\frac{e\tau}{\hbar^5}\alpha_R m^*_{xy}\sqrt{m_z^*} (E_F+J_{sd})^{3/2} \nonumber \\
& & \times E_{\parallel} \cos(\varphi_M-\varphi_E) \bm{e}_z, \label{Ts1} \\
\sigma^{\parallel}_{3D} &=& \frac{\sqrt{2}}{3\pi^2} \frac{e^2\tau}{\hbar^3} \sqrt{m_z^*} (E_F+J_{sd})^{3/2}, \label{ss1} \\
\sigma^z_{3D} &=& \frac{m_{xy}^*}{m_z^*}\sigma^{\parallel}_{3D}.
\end{eqnarray}
In 2D Rashba models, the spin torque is $\bm{T}_{2D}=-\frac{1}{2\pi}\frac{e\tau E_{\parallel}}{\hbar^4} m^*\alpha_R (E_F+J_{sd})\cos(\varphi_M-\varphi_E) \bm{e}_z$ and the electric conductivity is $\sigma^{\parallel}_{2D}=\frac{1}{2\pi}\frac{e^2\tau}{\hbar^2} (E_F+J_{sd})$.
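We note that the high-density and low-density expressions match continuously at $E_F=J_{sd}$, where the upper-band terms vanish; for instance,
\begin{equation*}
\lim_{E_F\to J_{sd}^{+}}\bm{T}_{3D}=-\frac{\sqrt{2}}{3\pi^2}\frac{e\tau}{\hbar^5}\alpha_R m^*_{xy}\sqrt{m_z^*}\,(2J_{sd})^{3/2} E_{\parallel}\cos(\varphi_M-\varphi_E)\,\bm{e}_z,
\end{equation*}
which coincides with Eq. (\ref{Ts1}) evaluated at $E_F=J_{sd}$; the conductivities and the 2D expressions match in the same way.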
\subsection{Weak exchange coupling regime}
Let us consider the regime of a weak exchange coupling, i.e., $\alpha_R k_F \gg J_{sd}$.
To zeroth order in $J_{sd}/k_F \alpha_R$, $E_s(\bm{k})\simeq \frac{\hbar^2}{2m^*_{xy}}(k_{\parallel}+sk_0)^2+\frac{\hbar^2}{2m_{z}^*}k_z^2-E_0$, $\cos\gamma_{\bm{k}}\simeq k_y/k_{\parallel}$ and $\sin\gamma_{\bm{k}}\simeq k_x/k_{\parallel}$, where $k_{\parallel}\equiv \sqrt{k_x^2+k_y^2}$, $k_0\equiv \frac{\alpha_R m^*_{xy}}{\hbar^2}$ and $E_0\equiv \frac{\alpha_R^2 m_{xy}^*}{2\hbar^2}$.
Within this approximation, the eigenvectors have the same form as those of the pure 3D Rashba Hamiltonian, and therefore the spin density of the total Hamiltonian reduces to the spin polarization induced by the 3D Rashba SOC alone.
As shown in Fig. \ref{rash} (a), the low-density regime is given by $-E_0<E_F<0$ and Fermi surfaces change topologically at $E_F=0$.
In the high-density regime ($0<E_F$), the spin torque and the electric conductivity are obtained as
\begin{widetext}
\begin{eqnarray}
\bm{T}_{3D} &=& -\frac{1}{2\pi^2}\frac{e\tau}{\hbar^4}\sqrt{m^*_{xy} m_z^*} J_{sd}\left[ \left( E_F+E_0\right) \arcsin \left( \sqrt{\frac{E_0}{E_F+E_0}} \right)+\sqrt{E_0 E_F} \right] E_{\parallel} \cos(\varphi_M-\varphi_E) \bm{e}_z, \\
\sigma^{\parallel}_{3D} &=& \frac{1}{2\pi^2}\frac{e^2\tau}{\hbar^4} \sqrt{m^*_{xy} m_z^*} \alpha_R \left[ (E_F+E_0)\arcsin \left( \sqrt{\frac{E_0}{E_F+E_0}} \right)+\left( \frac{4}{3}E_F+E_0 \right)\sqrt{\frac{E_F}{E_0}} \right], \\
\sigma^z_{3D} &=& \frac{1}{\pi^2}\frac{e^2\tau}{\hbar^4} \frac{m_{xy}^*}{m_z^*} \sqrt{m^*_{xy} m_z^*} \alpha_R \left[ (E_F+E_0)\arcsin \left( \sqrt{\frac{E_0}{E_F+E_0}} \right)+\left( \frac{2}{3}E_F+E_0 \right)\sqrt{\frac{E_F}{E_0}} \right].
\end{eqnarray}
\end{widetext}
As in the strong-exchange-coupling regime, only $E_{\parallel}$ contributes to the spin torque.
This is because the direction of spatial inversion symmetry breaking is the $z$ axis, reflecting the Rashba-type nature of the SOC in three dimensions.
In 2D Rashba models, the spin torque reads $\bm{T}_{2D}=-\frac{1}{2\pi}\frac{e\tau E_{\parallel}}{\hbar^4} m^*\alpha_R J_{sd}\cos(\varphi_M-\varphi_E) \bm{e}_z$ and the electric conductivity reads $\sigma^{\parallel}_{2D}=\frac{1}{\pi}\frac{e^2\tau}{\hbar^2} (E_F+E_0)$.
In the low-density regime ($-E_0<E_F<0$), on the other hand, we obtain
\begin{eqnarray}
\bm{T}_{3D} &=& -\frac{1}{4\pi}\frac{e\tau}{\hbar^4}\sqrt{m^*_{xy} m_z^*} J_{sd}\left( E_F+E_0\right) \nonumber \\
& & \times E_{\parallel} \cos(\varphi_M-\varphi_E) \bm{e}_z, \label{Ts2} \\
\sigma^{\parallel}_{3D} &=& \frac{1}{4\pi}\frac{e^2\tau}{\hbar^4} \sqrt{m^*_{xy} m_z^*} \alpha_R (E_F+E_0), \label{ss2} \\
\sigma^z_{3D} &=& \frac{2m_{xy}^*}{m_z^*}\sigma^{\parallel}_{3D}.
\end{eqnarray}
In 2D Rashba models, the spin torque is $\bm{T}_{2D}=-\frac{1}{\sqrt{2}\pi}\frac{e\tau}{\hbar^3} E_{\parallel}\sqrt{m^*} J_{sd}\sqrt{E_F+E_0} \cos(\varphi_M-\varphi_E) \bm{e}_z$ and the electric conductivity is $\sigma^{\parallel}_{2D}=\frac{1}{\sqrt{2}\pi}\frac{e^2\tau}{\hbar^3}\sqrt{m^*} \alpha_R \sqrt{E_F+E_0}$.
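As a consistency check, the high-density expressions reduce to the low-density ones at the transition: letting $E_F\rightarrow 0^{+}$, the arcsine tends to $\pi/2$ and the square-root terms vanish, so that
\begin{eqnarray*}
\bm{T}_{3D} &\rightarrow& -\frac{1}{4\pi}\frac{e\tau}{\hbar^4}\sqrt{m^*_{xy} m_z^*}\, J_{sd} E_0\, E_{\parallel}\cos(\varphi_M-\varphi_E)\,\bm{e}_z,\\
\sigma^{\parallel}_{3D} &\rightarrow& \frac{1}{4\pi}\frac{e^2\tau}{\hbar^4}\sqrt{m^*_{xy} m_z^*}\,\alpha_R E_0,
\end{eqnarray*}
in agreement with Eqs. (\ref{Ts2}) and (\ref{ss2}) evaluated at $E_F=0$.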
\subsection{Discussion}
We show the Fermi-energy dependence of the spin torque, the electric conductivity, and the spin-torque efficiency, defined as the ratio of the spin torque to the electric-current density, in panels (b), (c), and (d) of Figs. \ref{zee} and \ref{rash}.
Here we set $\varphi_M=\varphi_E$ for simplicity.
Let $E_F^{(0)}$ denote the value of the Fermi energy, where the topology of the Fermi surface changes.
This is the Fermi energy that separates the low- and high-density regimes.
In these figures, we scale each quantity by its value at $E_F= E_F^{(0)}$.
As a result, each quantity is dimensionless, which facilitates the comparison between the various cases.
In both limits, the slope of the spin torque as a function of $E_F$ in the low-density regime is suppressed in the high-density regime (Figs. \ref{zee} (b) and \ref{rash} (b)), while the slope of the electric conductivity in the low-density regime is enhanced in the high-density regime (Figs. \ref{zee} (c) and \ref{rash} (c)).
This is because, within our analysis, the spin torque is proportional to the spin-density component perpendicular to the magnetization, and the spin distribution in the upper band is opposite to that in the lower band.
Since the topology of the Fermi surface in the low-density regime differs from that in the high-density regime, the behaviors differ between the two regimes.
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{zee5.eps}
\end{center}
\caption{(Color online) Strong exchange coupling regime of the 3D Rashba model. (a) Schematic of the band structure; Fermi-energy dependence of (b) the spin torque, (c) the electric conductivity, and (d) the spin-torque efficiency. In each plot, the results for 2D Rashba models are shown for comparison. The dotted lines indicate the topological change of the Fermi surfaces. The insets in (a) show the Fermi surfaces and the spin density in the $k_x k_y$ plane ($k_z=0$) in the low- and high-density regimes. The arrows are parallel to the magnetization vector of the ferromagnet. The respective quantities are shown as ratios to those at the topological transition $E_F=J_{sd}$, i.e., $T_0^{3D}\equiv \frac{4}{3\pi^2}\frac{e\tau}{\hbar^5} m^*_{xy}\sqrt{m_z^*}\alpha_R J_{sd}^{3/2}$, $T_0^{2D}\equiv \frac{1}{\pi}\frac{e\tau}{\hbar^4} m^*\alpha_R J_{sd}$, $\sigma_{3D,0}^{\parallel}\equiv \frac{4}{3\pi^2} \frac{e^2\tau}{\hbar^3} \sqrt{m_z^*} J_{sd}^{3/2}$ and $\sigma_{2D,0}^{\parallel}\equiv \frac{1}{\pi}\frac{e^2\tau}{\hbar^2}J_{sd}$.}
\label{zee}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{rash5.eps}
\end{center}
\caption{(Color online) Weak exchange coupling regime of the 3D Rashba model. (a) Schematic of the band structure; Fermi-energy dependence of (b) the spin torque, (c) the electric conductivity, and (d) the spin-torque efficiency. In each plot, the results for 2D Rashba models are shown for comparison. The dotted lines indicate the topological change of the Fermi surfaces. The insets in (a) show the Fermi surfaces and the spin density in the $k_x k_y$ plane ($k_z=0$) in the low- and high-density regimes. The arrows are along the Fermi surfaces. The respective quantities are shown as ratios to those at the topological transition $E_F=0$, i.e., $T_0^{3D}\equiv \frac{1}{4\pi}\frac{e\tau}{\hbar^4}\sqrt{m^*_{xy} m_z^*} J_{sd}E_0$, $T_0^{2D}\equiv \frac{1}{\sqrt{2}\pi}\frac{e\tau}{\hbar^3} \sqrt{m^*} J_{sd}\sqrt{E_0}$, $\sigma_{3D,0}^{\parallel}\equiv \frac{\sqrt{2}}{4\pi}\frac{e^2\tau}{\hbar^3} \sqrt{m_z^*} E_0^{3/2}$ and $\sigma_{2D,0}^{\parallel}\equiv \frac{1}{\pi}\frac{e^2\tau}{\hbar^2}E_0$.}
\label{rash}
\end{figure}
The spin torque as well as the electric conductivity differs between the 3D and 2D Rashba models owing to the difference in dimensionality, e.g., in the density of states.
If we assume that $m^*_{xy}, \alpha_R, \tau$ in 3D are equal to $m^*, \alpha_R, \tau$ in 2D, the ratio of $|\bm{T}_{3D}|$ to $|\bm{T}_{2D}|$ in the low-density regime is proportional to the square root of $m^*_z$ times the Fermi energy measured from the bottom of the conduction band.
The same scaling holds asymptotically in the limiting case $E_F\gg J_{sd}, k_F\alpha_R$.
We also find that, within the present approximation, the spin-torque efficiency $|\bm{T}|/j^{\parallel}$ is constant in the low-density regime (Figs. \ref{zee} (d) and \ref{rash} (d)).
In the high-density regime, on the other hand, the spin-torque efficiency decreases monotonically in both 2D and 3D.
An analogy can be drawn from the present simple models to generic types of spin-split bands.
The spin-torque efficiency is expected to be larger when only one of the spin-split bands lies at the Fermi energy.
When the Fermi energy becomes larger and crosses both spin-split bands, their contributions are expected to cancel partially.
So far we have assumed the constant-relaxation-time approximation in both limiting regimes.
Since the relaxation time generally depends on the Fermi energy, the relaxation times $\tau$ in the expressions for the spin torque and the electric conductivity should be replaced by $\tau_s(E_F)$.
However, the spin-torque efficiency remains independent of the Fermi energy in the low-density regime, because there the spin torque and the electric current are both proportional to the same relaxation time $\tau_-(E_F)$.
In particular, the spin-torque efficiency in the low-density regime is unchanged under the replacement $\tau \to \tau_s(E_F)$.
According to the formalism of Schliemann and Loss \cite{Schliemann03}, on the other hand, the effects of the anisotropic scattering due to the SOC are incorporated in the distribution function:
\begin{eqnarray}
f_{\bm{k},s} &=& f^0_{\bm{k},s}+\frac{e}{\hbar} \frac{\tau^{\parallel}_{\bm{k},s}}{1+(\tau^{\parallel}_{\bm{k},s}/\tau^{\perp}_{\bm{k},s})^2} \bm{E}\cdot \frac{\partial f^0_{\bm{k},s}}{\partial \bm{k}} \nonumber \\
& & +\frac{e}{\hbar} \frac{\tau^{\perp}_{\bm{k},s}}{1+(\tau^{\perp}_{\bm{k},s}/\tau^{\parallel}_{\bm{k},s})^2} (\bm{e}_z\times \bm{E})\cdot \frac{\partial f^0_{\bm{k},s}}{\partial \bm{k}},
\end{eqnarray}
where $\tau^{\parallel}_{\bm{k},s}$ and $\tau^{\perp}_{\bm{k},s}$ denote the longitudinal and the transverse relaxation time respectively.
For the strong exchange-coupling regime, the distribution function has the simple form $f_{\bm{k},s}=f^0_{\bm{k},s}+\frac{e}{\hbar}\tau_s(E_F) \bm{E}\cdot \frac{\partial f^0_{\bm{k},s}}{\partial \bm{k}}$ owing to the isotropic scattering probability.
For the weak exchange-coupling regime, the scattering probability becomes anisotropic, but $\tau^{\perp}_{\bm{k},s}$ vanishes to zeroth order in $J_{sd}/k_F \alpha_R$, and the distribution function has a form similar to that of the isotropic-scattering case \cite{Manchon08}.
Therefore, we can neglect anisotropic scattering effects due to the SOC in both limits.
\section{Enhancement of spin-torque efficiency}
To realize magnetization-switching devices, it is indispensable to enhance the spin-torque efficiency so as to minimize the threshold electric current for the magnetization reversal.
So far we have found that the spin-torque efficiency in the low-density regime is constant in both the weak and strong exchange-coupling regimes.
Numerical analysis in the intermediate exchange-coupling regime shows, however, that the spin-torque efficiency as a function of $E_F$ is no longer constant in the low-density regime.
As shown in Fig. \ref{tre} (a), as $E_F$ approaches the band bottom, the spin-torque efficiency is strongly enhanced in a non-monotonic fashion.
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{tre10.eps}
\end{center}
\caption{(Color online) Numerical results for the spin-torque efficiency.
(a) Spin-torque efficiency for various values of $\alpha_R$ as a function of $E_F$.
The crosses denote topological changes of the Fermi surfaces.
(b) Spin-torque efficiency at the topological transition as a function of $\alpha_R$, and (c) that as a function of $J_{sd}$.
(d) Spin-torque efficiency as a function of $\alpha_R$ and $J_{sd}$ at the topological transition, and (e) that at the band bottom.
In (d) and (e), we adopted $m^*_{xy}/\hbar^2=0.014 \ \mathrm{/eV\AA^2}$, $m^*_z/m^*_{xy}=0.5$ and $\varphi_E=\varphi_M$.
The spin-torque efficiencies in (d) and (e) are plotted in units of $e$.
The broken curves in (b) and (c) represent the asymptotic forms of the spin-torque efficiency for $J_{sd}\gg k_F\alpha_R$ and $k_F\alpha_R\gg J_{sd}$.
The white broken curves in (d) and (e) represent the optimal condition $2E_0\sim J_{sd}$, and the dot-dashed curve in (e) denotes the modified optimal condition $2E_0\sim 9J_{sd}$.}
\label{tre}
\end{figure}
The spin-torque efficiencies in the low-density regime also depend on $m^*_{xy}$, $\alpha_R$ and $J_{sd}$.
Let us consider how to optimize the spin-torque efficiency by varying $\alpha_R$.
From Eqs. (\ref{Ts1}) and (\ref{ss1}) for $J_{sd}\gg k_F\alpha_R$, and from Eqs. (\ref{Ts2}) and (\ref{ss2}) for $k_F\alpha_R\gg J_{sd}$, the spin-torque efficiencies in the low-density regime read
\begin{eqnarray}
|\bm{T}_{3D}|/j^{\parallel}_{3D} &=& \frac{m_{xy}^*}{e\hbar^2}\alpha_R \ :J_{sd}\gg k_F\alpha_R, \label{Tj1} \\
|\bm{T}_{3D}|/j^{\parallel}_{3D} &=& \frac{1}{e} \frac{J_{sd}}{\alpha_R} \ :k_F\alpha_R\gg J_{sd}. \label{Tj2}
\end{eqnarray}
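These expressions follow directly by dividing the magnitude of the spin torque by $j^{\parallel}_{3D}=\sigma^{\parallel}_{3D}E_{\parallel}$ (for $\varphi_M=\varphi_E$); in both limits the $E_F$-dependent factors cancel:
\begin{eqnarray*}
\frac{|\bm{T}_{3D}|}{j^{\parallel}_{3D}} &=& \frac{\frac{\sqrt{2}}{3\pi^2}\frac{e\tau}{\hbar^5}\alpha_R m^*_{xy}\sqrt{m_z^*}\,(E_F+J_{sd})^{3/2}}{\frac{\sqrt{2}}{3\pi^2}\frac{e^2\tau}{\hbar^3}\sqrt{m_z^*}\,(E_F+J_{sd})^{3/2}}=\frac{m^*_{xy}\alpha_R}{e\hbar^2},\\
\frac{|\bm{T}_{3D}|}{j^{\parallel}_{3D}} &=& \frac{\frac{1}{4\pi}\frac{e\tau}{\hbar^4}\sqrt{m^*_{xy}m_z^*}\,J_{sd}\,(E_F+E_0)}{\frac{1}{4\pi}\frac{e^2\tau}{\hbar^4}\sqrt{m^*_{xy}m_z^*}\,\alpha_R\,(E_F+E_0)}=\frac{1}{e}\frac{J_{sd}}{\alpha_R},
\end{eqnarray*}
for the strong and weak exchange-coupling limits, respectively.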
When $J_{sd}$ is kept constant and $\alpha_R$ is varied, the spin-torque efficiency increases linearly in $\alpha_R$ for $J_{sd}\gg k_F\alpha_R$ and decreases as $1/\alpha_R$ for $k_F\alpha_R\gg J_{sd}$.
This implies that the spin-torque efficiency becomes maximal at an intermediate value of $\alpha_R$.
From the asymptotics (\ref{Tj1}) and (\ref{Tj2}), we find that the optimal condition is expected to be $\alpha_R\sim \hbar \sqrt{J_{sd}/m^*_{xy}}$, i.e., $2E_0\sim J_{sd}$.
Namely, the spin-splitting energy $E_0$ due to the SOC is comparable to the exchange energy $J_{sd}$ with the localized spins.
The maximum spin-torque efficiency is $k_0/e$, proportional to the Rashba momentum $k_0$.
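Explicitly, equating the two asymptotics gives
\begin{equation*}
\frac{m^*_{xy}\alpha_R}{e\hbar^2}=\frac{1}{e}\frac{J_{sd}}{\alpha_R}
\quad\Longrightarrow\quad
\alpha_R=\hbar\sqrt{J_{sd}/m^*_{xy}},\quad\text{i.e.,}\quad 2E_0=J_{sd},
\end{equation*}
at which both expressions reduce to $k_0/e=\sqrt{m^*_{xy}J_{sd}}/(e\hbar)$.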
To confirm our expectations, we show the $\alpha_R$ dependence of the spin-torque efficiency at the topological transition $E_F=E_F^{(0)}$ in Fig. \ref{tre} (b).
The spin-torque efficiency indeed becomes maximal near $\alpha_R\sim \hbar \sqrt{J_{sd}/m^*_{xy}}$, which confirms our expectation.
When $\alpha_R$ is kept constant and $J_{sd}$ is varied, on the other hand, the spin-torque efficiency increases linearly in $J_{sd}$ for $k_F\alpha_R\gg J_{sd}$ and saturates at the constant $k_0/e$ for $J_{sd}\gg k_F\alpha_R$, as shown in Fig. \ref{tre} (c).
Using realistic parameters, we show numerically the spin-torque efficiency as a function of $\alpha_R$ and $J_{sd}$ at the topological transition and at the band bottom in Figs. \ref{tre} (d) and (e), respectively.
At the band bottom, the broken curve in Fig. \ref{tre} (e) shows that the optimal condition is shifted considerably toward larger values of $\alpha_R$, roughly given by $2E_0\sim 9J_{sd}$.
This shift originates from the strong enhancement of the spin-torque efficiency at the band bottom for larger $\alpha_R$, shown in Fig. \ref{tre} (a).
Finally, we suggest ways to realize the magnetization reversal in 3D Rashba systems experimentally.
One way is to synthesize ferromagnetic bulk Rashba semiconductors by doping magnetic impurities into bulk materials with a strong SOC, e.g., BiTeI.
The synthesis of such a new ferromagnetic semiconductor is quite difficult, however, and remains a challenging but promising issue for materials science.
On the other hand, it is known that wide-gap semiconductors such as Mn-doped GaN, Co-doped ZnO, and V-doped ZnO exhibit ferromagnetism even above room temperature \cite{Dietl00, Saeki01, Sonoda02}.
GaN and ZnO have the wurtzite-type crystal structure and are conventional Rashba semiconductors.
Since these semiconductors exhibit much smaller Rashba spin splittings (about $\alpha_R=1.1$ meV$\AA$ in ZnO and $\alpha_R=9$ meV$\AA$ in GaN) \cite{Voon96, Majewski05}, BiTeI is the more suitable platform for demonstrating the magnetization reversal in ferromagnetic Rashba semiconductors.
Another way is to fabricate layered materials with a Rashba semiconductor film and a ferromagnetic metal as shown in Fig. \ref{mnbite}.
Such layered structures are macroscopically similar to a magnetically doped Rashba semiconductor.
We also remark that in conventional ferromagnetic 2DEGs only the magnetization near the interface is switched, whereas in ferromagnetic 3D Rashba semiconductors the magnetization of the whole crystal is switched.
Let us evaluate the physical quantities using realistic parameters.
We take BiTeI doped with a magnetic element as an example, and use the values $\alpha_R=3.85 \ \mathrm{eV \AA}$ and $m^*_{xy}/\hbar^2=0.014 \ \mathrm{/eV \AA^2}$ for BiTeI \cite{Ishizaka11}.
For the numerical estimate, we assume Mn as the dopant, and take the saturation magnetization $M_s=10^4 \ \mathrm{J/T\cdot m^3}$ and the anisotropy field $H_K=200 \ \mathrm{Oe}$ adopted from Mn-doped semiconductors \cite{Manchon08}.
The estimated maximum spin-torque efficiency is $k_0/e\sim 3\times 10^{27}\ \mathrm{/C\cdot m}$ in the low-density regime, which is realized under the optimal condition for $J_{sd}\geq 20\ \mathrm{meV}$ if the Fermi energy lies in the vicinity of the band bottom.
Therefore, the critical electric current density is estimated as $j_c\sim 6\times 10^4\ \mathrm{A/cm^2}$, which is much lower than that observed in 2D Rashba systems such as the Pt/Co/AlO$_x$ junction \cite{Miron10}.
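The order of magnitude of these estimates can be reproduced with a short script; the threshold criterion used for $j_c$ below (balancing the spin-injection rate against the anisotropy-field precession of the local-spin density, with $g=2$) is a rough assumption of ours rather than a derivation from the text, so only the order of magnitude is meaningful.
\begin{verbatim}
import numpy as np

e = 1.602e-19           # elementary charge, C
muB = 9.274e-24         # Bohr magneton, J/T
gamma_e = 1.761e11      # electron gyromagnetic ratio, 1/(s T)
alpha_R = 3.85          # Rashba parameter, eV AA
m_over_hbar2 = 0.014    # m*_{xy}/hbar^2, 1/(eV AA^2)

k0 = alpha_R * m_over_hbar2 * 1e10   # Rashba momentum, 1/m
eff = k0 / e                         # maximum efficiency k0/e
print(f"k0/e ~ {eff:.1e} /(C m)")    # ~ 3e27, as quoted

Ms, HK = 1e4, 0.02                   # J/(T m^3) and T (200 Oe)
n = Ms / muB                         # moment density in muB / m^3
jc = 0.5 * n * gamma_e * HK / eff    # A/m^2 (factor 1/2: spin 1/2)
print(f"j_c ~ {jc / 1e4:.1e} A/cm^2")  # ~ 6e4, as quoted
\end{verbatim}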
\begin{figure}[H]
\begin{center}
\includegraphics[width=60mm]{mnbite.eps}
\end{center}
\caption{(Color online) Candidate geometry for the magnetization reversal by spin torque from 3D Rashba system BiTeI.
Bulk BiTeI is sandwiched between Mn-based ferromagnetic layers, so that the structure is macroscopically similar to a Rashba semiconductor doped with magnetic elements.
}
\label{mnbite}
\end{figure}
\section{Summary}
We have investigated a spin torque and its efficiency induced by the Rashba spin-orbit coupling in three dimensions using the Boltzmann transport theory.
It was shown that in the high-density regime the increase of the spin torque as a function of the Fermi energy is slower than in the low-density regime.
This is because in the high-density regime the contributions of the two spin-split bands partially cancel.
We also found that a high spin-torque efficiency is achieved when the Fermi energy lies only on the lower band, and that there exists an optimal value of the Rashba parameter.
The spin-torque efficiency becomes maximal when the Rashba spin-splitting energy is comparable to the exchange energy with the localized spins, and its maximum value is then set by the Rashba momentum.
Such optimization might be useful for the magnetization reversal.
\begin{acknowledgments}
This work was supported by Grants-in-Aid from MEXT, Japan (No. 21000004 and No. 22540327) and by the Kurata Grant from The Kurata Memorial Hitachi Science and Technology Foundation.
\end{acknowledgments}
\begin{center}
\section{\textnormal{Introduction}}
\end{center}
The Selberg and Ruelle zeta functions are dynamical zeta functions, which can be associated with the geodesic flow on the unit sphere bundle $S(X)$ of
a compact hyperbolic riemannian manifold $X$. Namely, they provide information about the lengths of the closed geodesics, also
called the length spectrum. If we consider the geodesic flow $\phi$ on $S(X)$ and its differential $d\phi$, then $\phi$
has the Anosov property, i.e.,
there exists a $d\phi_{t}$-invariant continuous splitting
\begin{equation*}
TS(X)=E^{s}\oplus E^{c}\oplus E^{u},
\end{equation*}
where $E^{s}$ and $E^{u}$ consist of vectors that shrink, respectively expand, exponentially with respect to the riemannian metric as $t\rightarrow\infty$,
and $E^{c}$ is the one-dimensional subspace of vectors tangent to the flow.
These zeta functions can be represented by Euler products, which converge
in some right half-plane of $\mathbb{C}$. The main goal of this paper is to prove the meromorphic continuation of these functions to the whole complex plane.
The Ruelle zeta function associated with the geodesic flow on the unit sphere bundle of
a closed manifold with $C^{\omega}$-riemannian metric of negative curvature has been studied
by Fried in \cite{Friedmero}. It is defined by
\[
R(s)=\prod_{\gamma}(1-e^{-sl(\gamma)}),
\]
where $\gamma$ runs over all the prime closed geodesics
and $l(\gamma)$ denotes the length of $\gamma$.
In \cite[Corollary, p.180]{Friedmero}, it is proved that it
admits a meromorphic continuation to the whole complex plane.
We consider a compact hyperbolic manifold $X$ of odd dimension $d$, obtained as follows.
Let $G=\SO^{0}(d,1)$ and $K=\SO(d)$. Then, $K$ is a maximal compact subgroup of $G$.
Let $\widetilde{X}:=G/K$. $\widetilde{X}$ can be equipped with a $G$-invariant metric, which is unique up to scaling
and is of constant negative curvature.
If we normalize this metric such that it has constant negative curvature $-1$, then
$\widetilde{X}$, equipped with this metric, is isometric to $\H^{d}$.
Let $\Gamma$ be a discrete torsion-free subgroup of $G$ such that $\Gamma\backslash G$ is compact.
Then $\Gamma$ acts by isometries on $\widetilde X$ and $X=\Gamma\backslash \widetilde X$ is a
compact oriented hyperbolic manifold of dimension $d$.
This is a case of a locally symmetric space of non-compact type of real rank 1. This means that in the Iwasawa
decomposition $G=KAN$, $A$ is a multiplicative torus of dimension 1, i.e.,
$A\cong\mathbb{R}^+$.
For a given $\gamma\in\Gamma$ we denote by $[\gamma]$ the $\Gamma$-conjugacy class
of $\gamma$. If $\gamma\neq e$, then there is a unique closed geodesic
$c_{\gamma}$ associated with $[\gamma]$. Let $l(\gamma)$ denote the length of
$c_{\gamma}$.
The conjugacy class $[\gamma]$ is called prime if
there exist no $k\in\mathbb{N}$ with $k>1$ and no $\gamma_{0}\in\Gamma$ such that $\gamma=\gamma_{0}^{k}$.
The prime geodesics correspond to the prime conjugacy classes and are those
geodesics that trace out their image exactly once.
Let $M$ be the centralizer of $A$ in $K$. We define the zeta functions associated
with unitary irreducible representations $\sigma$ of $M$ and
finite dimensional representations $\chi$ of $\Gamma$.
Let $\chi\colon\Gamma\to \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$. Let
$\sigma\in\widehat M$.
Then the twisted Selberg zeta function $Z(s;\sigma,\chi)$ is defined by the infinite product
\begin{equation*}
Z(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq e,\\ [\gamma]\prim}} \prod_{k=0}^{\infty}\det\Big(\Id-\big(\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})\big)e^{-(s+|\rho|)l(\gamma)}\Big),
\end{equation*}
where $s\in \mathbb{C}$, $\overline{\mathfrak{n}}$ is the sum of the negative root spaces of the system $(\mathfrak{g},\mathfrak{a})$
and $S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})$ denotes the $k$-th
symmetric power of the adjoint map $\Ad(m_\gamma a_\gamma)$ restricted to $\overline{\mathfrak{n}}$.
The twisted Ruelle zeta function $R(s;\sigma,\chi)$ is defined by the infinite product
\begin{equation*}
R(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}}\det(\Id-\chi(\gamma)\otimes\sigma(m_{\gamma})e^{-sl(\gamma)})^{(-1)^{d-1}}.
\end{equation*}
Both $Z(s;\sigma,\chi)$, $R(s;\sigma,\chi)$ converge absolutely and uniformly on compact subsets of some half-plane of $\mathbb{C}$.
In our case, where $X=\Gamma\backslash\H^{d}$,
the dynamical zeta functions
are twisted by a representation $\chi$ of $\Gamma$.
For unitary representations of $\Gamma$,
these zeta functions have been studied by Fried (\cite{Fried}) and Bunke and Olbrich (\cite{BO}).
In \cite[Theorem 1]{Fried}, Fried proved that the Ruelle zeta function on a closed oriented hyperbolic manifold associated with an
acyclic orthogonal representation of the fundamental group admits a meromorphic continuation to the whole complex plane
and furthermore the
absolute value of the Ruelle zeta function
evaluated at zero equals the Ray-Singer
analytic torsion, as introduced in \cite{RS}. Besides its intrinsic importance, since it
provides a connection between the geometry of the manifold (the geodesic flow, expressed by the Ruelle zeta function) and a
spectral invariant (the analytic torsion), the theorem of Fried gave rise to further applications in the field of spectral geometry;
see for example \cite{MMar}, \cite{M2}.
In \cite{M2}, M\"{u}ller considered the asymptotic behavior of the analytic torsion of a closed hyperbolic $3$-manifold $X$,
associated with the $m$-th symmetric power of the standard representation $\tau_{m}$ of the group $\SL(2,\mathbb{C})$.
In particular, he proved that for a closed hyperbolic oriented $3$-manifold,
\begin{equation*}
-\log T_{X}(\tau_{m})=\frac{\Vol(X)}{4\pi}m^2+O(m),
\end{equation*}
as $m\rightarrow \infty$ (\cite[Theorem 1.1]{M2}).
In order to prove this theorem, he used the expression of the analytic torsion in terms of the Ruelle zeta function attached to $\tau_{m}$, based on the results of Wotzke (\cite{Wo}).
In his thesis, Wotzke generalized the theorem of Fried to the case of the representation $\tau|_{\Gamma}$ of $\Gamma$ obtained by restricting a finite dimensional complex representation $\tau\colon G\rightarrow \GL(V)$ of $G$.
By \cite[Proposition 3.1]{MM}, there exists an isomorphism between the locally homogeneous vector bundle $E_{\tau}$ over $X$
associated with $\tau|_{K}$ and the flat vector bundle $E_{\fl}$ over $X$
associated with $\tau|_{\Gamma}$, i.e.,
\begin{equation*}
\Gamma\backslash(G/K\times V)\cong (\Gamma\backslash G\times V)/K.
\end{equation*}
Once this isomorphism is obtained, a hermitian fiber metric on $E_{\tau}$ (\cite[Lemma 3.1]{MM}) descends to a fiber metric on $E_{\fl}$.
Therefore, the Laplace operator associated with $\tau|_{\Gamma}$ is a formally self-adjoint operator, and the full machinery of harmonic analysis on locally symmetric spaces is available.
On the other hand, Bunke and Olbrich (\cite{BO}) considered unitary representations of the fundamental group for all compact locally symmetric spaces of real rank 1 and of non-compact type, i.e.,
the compact manifolds whose universal coverings are the real, complex, or quaternionic hyperbolic space, or the Cayley plane.
Using the Selberg trace formula as the main tool, applied to wave operators induced by certain Laplace-type and Dirac operators, they proved that the Selberg and Ruelle zeta functions admit meromorphic continuations to the whole complex plane and, furthermore, satisfy functional equations.
Our main aim is to generalize the results of Bunke and Olbrich to the case of non-unitary representations of
the fundamental group.
Contrary to the setting of Wotzke, we can no longer use the results in \cite{MM},
since we treat the case of an arbitrary finite dimensional representation $\chi\colon\Gamma\to \GL(V_{\chi})$ of $\Gamma$.
Let $E_{\chi}$ be the flat vector bundle over $X$ associated with $\chi$, equipped with a flat connection $\nabla^{\chi}$.
In general, there is no hermitian metric $h^{\chi}$ compatible with the flat connection.
To overcome this problem we use the flat Laplacian, first introduced by M\"{u}ller in \cite{M1}.
In fact, we use a more general operator, the twisted Bochner-Laplace operator defined as follows.
Let $\tau\colon K\rightarrow \GL(V_{\tau})$ be a complex finite dimensional unitary representation of $K$. Let $\widetilde{E}_{\tau}:=G\times_{\tau}V_{\tau}\rightarrow \widetilde{X}$
be the associated homogeneous vector bundle over $\widetilde{X}$, and let
$E_{\tau}:=\Gamma\backslash(G\times_{\tau}V_{\tau})\rightarrow X$ be the corresponding locally homogeneous vector bundle over $X$.
Let $\Delta_{\tau}$ be the Bochner-Laplace operator associated with $\tau$.
We define the operator $\Delta^{\sharp}_{\tau,\chi}$ acting on $C^{\infty}(X,E_{\tau}\otimes E_{\chi})$.
Locally, it can be described as
\begin{equation*}
\widetilde{\Delta}^{\sharp}_{\tau,\chi}=\widetilde{\Delta}_{\tau}\otimes\Id_{V_{\chi}},
\end{equation*}
where $\widetilde{\Delta}^{\sharp}_{\tau,\chi}$ and $\widetilde{\Delta}_{\tau}$
are the lifts to $\widetilde{X}$ of
$\Delta^{\sharp}_{\tau,\chi}$ and $\Delta_{\tau}$, respectively.
This operator is no longer self-adjoint.
However, it has the same principal symbol as $\Delta_{\tau}$ and hence retains nice spectral properties; i.e., the spectrum of
${\Delta}^{\sharp}_{\tau,\chi}$ is a discrete subset of a positive cone in $\mathbb{C}$.
We consider the corresponding heat semi-group $e^{-t\Delta^{\sharp}_{\tau,\chi}}$ acting on the space
of smooth sections of the vector bundle $E_{\tau}\otimes E_{\chi}$.
By \cite[Lemma 2.4]{M1}, it is an integral operator with smooth kernel.
Hence, we can consider the trace of the
operator $e^{-t\Delta^{\sharp}_{\tau,\chi}}$ and derive a corresponding trace formula.
We have already associated the Selberg and Ruelle zeta functions with irreducible representations $\sigma$ of $M$. These representations are chosen precisely to be
those arising as restrictions of representations of $K$.
The key point of the proof of the meromorphic continuation of the Selberg zeta function is the Selberg trace formula for the operator
$e^{-tA_{\chi}^{\sharp}(\sigma)}$,
where $A_{\chi}^{\sharp}(\sigma)$ is the twisted differential operator associated with
$\sigma\in \widehat{M}$, induced by $\Delta^{\sharp}_{\tau,\chi}$.
\begin{thm}
For every $\sigma \in \widehat{M}$ we have
\begin{align*} \label{f:trace formula 1}
\Tr(e^{-tA_{\chi}^{\sharp}(\sigma)})=&\dim(V_{\chi})\Vol(X)\int_{\mathbb{R}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda\\
&+\sum_{[\gamma]\neq e} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
\frac{e^{-l(\gamma)^{2}/(4t)}}{(4\pi t)^{1/2}},
\end{align*}
where
\begin{equation*}
L_{sym}(\gamma;\sigma)= \frac{\tr(\sigma(m_{\gamma})\otimes\chi(\gamma))e^{-|\rho|l(\gamma)}}{\det(\Id-\Ad(m_{\gamma}a_{\gamma}))_{\overline{\mathfrak{n}}}}.
\end{equation*}
\end{thm}
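To illustrate the structure of the identity contribution, consider $d=3$, where the Plancherel polynomial is, up to a positive normalization constant, of the form $P_{\sigma}(i\lambda)\propto\lambda^{2}+c_{\sigma}^{2}$ for a constant $c_{\sigma}$ determined by $\sigma$ (see section 2). The Gaussian integrals then give
\begin{equation*}
\int_{\mathbb{R}}e^{-t\lambda^{2}}(\lambda^{2}+c_{\sigma}^{2})\,d\lambda
=\frac{\sqrt{\pi}}{2}\,t^{-3/2}+c_{\sigma}^{2}\sqrt{\pi}\,t^{-1/2},
\end{equation*}
so the identity contribution diverges like $t^{-d/2}$ as $t\rightarrow 0^{+}$, as expected from the heat kernel expansion on a $d$-dimensional manifold.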
Our main results are stated in the following theorems.
\begin{thm}
The Selberg zeta function $Z(s;\sigma,\chi)$ admits a meromorphic continuation to the whole complex plane $\mathbb{C}$. The set of singularities
equals $\{s_{k}^{\pm}=\pm i \sqrt{t_{k}}:t_{k}\in \spec(A^{\sharp}_{\chi}(\sigma)), k\in\mathbb{N}\}$.
The order of the singularity at $s_{k}^{\pm}$ equals $m(t_{k})$, where $m(t_{k})\in\mathbb{N}$ denotes the algebraic multiplicity of the eigenvalue $t_{k}$.
For $t_{0}=0$, the order of the singularity at $s_{0}=0$ equals $2m(0)$.
\end{thm}
\begin{thm}
For every $\sigma\in\widehat{M}$, the Ruelle zeta function $R(s;\sigma,\chi)$ admits a meromorphic continuation to the whole complex plane $\mathbb{C}$.
\end{thm}
As mentioned before, the operator $A^{\sharp}_{\chi}(\sigma)$ is induced by the twisted Bochner-Laplace operator and therefore enjoys similarly nice spectral properties; i.e.,
its spectrum is discrete and contained in a translate of a positive cone in $\mathbb{C}$.
By Theorem 1.2, the singularities of the Selberg zeta function are located precisely at the points determined by the eigenvalues
$t_{k}$ in the discrete spectrum of the twisted operator $A^{\sharp}_{\chi}(\sigma)$.
Hence, this theorem provides an interesting relation between the dynamics of hyperbolic manifolds and the spectral theory
of twisted Bochner-Laplace operators.
Basic notions and facts from representation theory and the theory of locally symmetric spaces are presented in section 2.
In section 3, we introduce the Selberg and Ruelle zeta functions associated with the geodesic flow of a compact hyperbolic manifold
and prove that they converge in some right half-plane of $\mathbb{C}$ for an arbitrary finite dimensional representation of $\Gamma$.
Section 4 is devoted to the study of the twisted Bochner-Laplace operator on a compact odd-dimensional hyperbolic
manifold $X$ and of its spectral properties.
Next, in section 5, we obtain a trace formula for the heat operator, induced by specific twisted Bochner-Laplace operators.
Finally, in section 6, we provide the proof of the meromorphic continuation of the dynamical zeta functions to the whole complex plane,
as it is stated in Theorems 1.2 and 1.3.
\textbf{Acknowledgement.} The present paper is part of the author's PhD thesis, and she is
grateful to her supervisor, Werner M\"{u}ller, for his helpful suggestions, remarks, and advice.
\begin{center}
\section{\textnormal{Preliminaries}}
\end{center}
A special case of a locally symmetric space of real rank 1 is a compact hyperbolic locally symmetric manifold with universal
covering the real hyperbolic space
\begin{equation*}
\H^{d}=\{(x_{1},\ldots,x_{d+1})\in\mathbb{R}^{d+1}:x_{1}^{2}-x_{2}^{2}\ldots-x_{d+1}^{2}=1,x_{1}>0\},
\end{equation*}
where $d=2n+1$, with $n\in\mathbb{N}_{>0}$, is an odd integer.
We consider the universal coverings $G=\Spin(d,1)$ of $\mathrm{SO}^0(d,1)$ and $K=\Spin(d)$ of $\mathrm{SO}(d)$, respectively.
We set $\widetilde{X}:=G/K$.
Let $\mathfrak{g},\mathfrak{k}$ be the Lie algebras of $G$ and $K$, respectively. Let $ \mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$
be the Cartan decomposition of $\mathfrak{g}$.
We denote by $\Theta$ the Cartan involution of $G$ and by $\theta$ its differential at the identity element $e=e_{G}$ of $G$.
Let $\mathfrak{a}$ be a Cartan subalgebra of $\mathfrak{p}$, i.e., a maximal abelian subalgebra of $\mathfrak{p}$.
There exists a canonical isomorphism $T_{eK}\cong \mathfrak{p}$.
We consider the subgroup $A$ of $G$ with Lie algebra $\mathfrak{a}$. Let $M:=\cent_{K}(A)$ be the centralizer of $A$ in $K$.
Then, $M=\Spin(d-1)$ or $M=\SO(d-1)$. Let $\mathfrak{m}$ be its Lie algebra
and $\mathfrak{b}$ a Cartan subalgebra of $\mathfrak{m}$. Let $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{g}$.
We consider the complexifications $\mathfrak{g}_{\mathbb{C}}:=\mathfrak{g}\oplus i\mathfrak{g}$, $\mathfrak{h}_{\mathbb{C}}:=\mathfrak{h}\oplus i\mathfrak{h}$
and $\mathfrak{m}_{\mathbb{C}}:=\mathfrak{m}\oplus i\mathfrak{m}$.
Let $B(X,Y)$ be the Killing form on $\mathfrak{g}\times \mathfrak{g}$ defined by $ B(X,Y)=\Tr (\ad(X)\circ \ad(Y))$.
It is a symmetric bilinear form.
We consider the inner product
\begin{equation}
\langle Y_1, Y_2\rangle_{0}:=\frac{1}{2(d-1)}B(Y_1,Y_2),\quad Y_1,Y_2 \in \mathfrak{g},
\end{equation}
induced by the Killing form.
The restriction of $\langle\cdot,\cdot\rangle_{0}$ to $\mathfrak{p}$ defines
an inner product on $\mathfrak{p}$ and
hence induces a $G$-invariant riemannian metric on $\widetilde{X}$, which
has constant curvature $-1$. Then, $\widetilde{X}$, equipped with this metric,
is isometric to $\H^{d}$.
Let $\Gamma\subset G$ be a lattice, i.e., a discrete subgroup of $G$ such that $\Vol(\Gamma\backslash G)<\infty$.
$\Gamma$ acts properly discontinuously on $\widetilde{X}$ and
$X:=\Gamma\backslash \widetilde{X}$
is a locally symmetric space of finite volume.
We assume that $\Gamma$ is torsion-free, i.e.,
there exists no $\gamma\in\Gamma$ with $\gamma\neq e$ such that $\gamma^{k}=e$ for some $k=2,3,\ldots$.
Then, $X$ is a locally symmetric manifold. If in addition $\Gamma$ is cocompact, then $X$ is a locally symmetric compact
hyperbolic manifold of odd dimension $d$.
Let $G=KAN$
be the standard Iwasawa decomposition of $G$.
Let $\Delta^{+}(\mathfrak{g},\mathfrak{a})$ be the set of positive roots of the system $(\mathfrak{g},\mathfrak{a})$.
Then, $\Delta^{+}(\mathfrak{g},\mathfrak{a})=\{\alpha\}$. Let $M'=\Norm_{K}(A)$ be the normalizer of $A$ in $K$.
We define the restricted Weyl group as the quotient $ W_{A}:=M'/M$.
Then, $W_{A}$ has order 2. Let $w\in W_{A}$ be a non-trivial element of $W_{A}$, and
$m_{w}$ a representative of $w$ in $M'$.
The action of $W_{A}$ on $\widehat{M}$ is defined by
\begin{equation*}
(w\sigma)(m):=\sigma(m_{w}^{-1}mm_{w}),\quad m\in M, \sigma\in\widehat{M}.
\end{equation*}
Let $H_{\mathbb{R}}\in\mathfrak{a}$ such that $\alpha(H_{\mathbb{R}})=1$. With respect to the inner product (2.1), $H_{\mathbb{R}}$ has norm 1.
We define
\begin{equation}
A^{+}:=\{\exp(tH_{\mathbb{R}})\colon t\in\mathbb{R}^{+}\}.
\end{equation}
We define also
\begin{align}
&\rho:=\frac{1}{2}\sum_{\alpha\in \Delta^+(\mathfrak{g},\mathfrak{a})}\dim(\mathfrak{g}_\alpha)\alpha,\\
&\rho_{\mathfrak{m}}:=\frac{1}{2}\sum_{\alpha\in \Delta^+(\mathfrak{m}_{\mathbb{C}},\mathfrak{b})}\alpha.
\end{align}
The inclusion $i\colon M\hookrightarrow K$ induces the restriction map $i^{*}\colon R(K)\rightarrow R(M)$, where
$R(K),R(M)$ are the representation rings over $\mathbb{Z}$ of $K$ and $M$, respectively.
Let $\widehat{K},\widehat{M}$ be the sets of equivalent classes of
irreducible unitary representations of $K$ and $M$, respectively. Then,
for the highest weight $\nu_{\tau}$ of $\tau\in\widehat{K}$ we have
\begin{align*}
\nu_{\tau}=(\nu_{1},\ldots,\nu_{n}),
\end{align*}
where $\nu_{1}\geq\ldots\geq\nu_{n}$ and $\nu_{i},i=1,\ldots,n$ are all integers or all half integers
(that is $\nu_{i}=q_{i}/2, q_{i}\in\mathbb{Z}$) and for the highest weight $\nu_{\sigma}$ of $\sigma\in\widehat{M}$,
\begin{align}
\nu_{\sigma}=(\nu_{1},\ldots,\nu_{n-1},\nu_{n}),
\end{align}
where $\nu_{1}\geq\ldots\geq\nu_{n-1}\geq\lvert\nu_{n}\rvert$ and $\nu_{i},i=1,\ldots,n$ are all integers or all half integers (see \cite[p. 20]{BO}).
Let $s$ be the spin representation of $K$\index{spin representation}, given by
\begin{equation*}
s\colon K\rightarrow\End(\Delta_{2n})\oplus\End(\Delta_{2n})\xrightarrow{pr} \End(\Delta_{2n})
\end{equation*}
where $\Delta_{2n}:=\mathbb{C}^{2^{n}}$, and $pr$ denotes the projection onto the first component (\cite[p.14]{Friedb}).
We set for abbreviation $S=\Delta_{2n}$. Let $(s^{+},S^{+})$, $(s^{-},S^{-})$ be the half spin representations of $M$\index{spin representation!half spin representations},
where $S^{\pm}:=\Delta^{\pm}$ (\cite[p.22]{Friedb}).
The highest weight of $s$ is given by $
\nu_{s}=(\frac{1}{2},\ldots,\frac{1}{2})$,
and the highest weights of $s^{+},s^{-}$ are
$\nu_{s^{+}}=(\frac{1}{2},\ldots,\frac{1}{2})$,
$\nu_{s^{-}}=(\frac{1}{2},\ldots,-\frac{1}{2})$,
respectively (\cite[p. 20]{BO}).
We consider now the parametrization of the principal series representation.
Let $P=MAN$ be the standard parabolic subgroup of $G$. For $(\sigma,V_{\sigma})\in \widehat{M}$,
we define the space $\mathcal{H}_{\sigma}$ of continuous functions on $G$ by
\begin{equation*}
\mathcal{H}_{\sigma}:=\{f\colon G\rightarrow V_{\sigma}: f(gm\exp(tH_{\mathbb{R}})n)=e^{-(i\lambda+|\rho|)t}\sigma^{-1}(m)f(g),\ \forall g\in G,\ m\in M,\ t\in\mathbb{R},\ n\in N\},
\end{equation*}
where $\lambda\in\mathbb{C}$ and $\rho$ as in (2.3), with norm
\begin{equation}
\lVert f \rVert_{c}^{2}=\int_{K}\lVert f(k) \rVert^{2}dk.
\end{equation}
We define the principal series representation as the induced representation
\begin{equation*}
\pi_{\sigma,\lambda}:=\Ind_{P}^{G}(\sigma\otimes e^{i\lambda}\otimes \Id),
\end{equation*}
with representation space the Hilbert space, obtained by completion of $\mathcal{H}_{\sigma}$ with respect to the norm
$\lVert \cdot\rVert_{c}$ in (2.6).
For $f\in\mathcal{H}_{\sigma}$, the action of $G$ on $f$ is given by $ \pi_{\sigma,\lambda}(g)f(g')=f(g^{-1}g')$.
Here, $\mathfrak{a}_{\mathbb{C}}^{*}$ denotes the space of linear functionals on $\mathfrak{a}_{\mathbb{C}}$.
Since, in the definition of the space $\mathcal{H}_{\sigma}$, $\lambda$ is a complex number, $\mathfrak{a}_{\mathbb{C}}^{*}$
is identified with $\mathbb{C}$ using the positive root.
If $\lambda\in\mathbb{R}$, then the representation $\pi_{\sigma,\lambda}$ is unitary.
In addition, if $\lambda\in \mathbb{R}-\{0\}$, then $\pi_{\sigma,\lambda}$ is irreducible.
Let $\mu_{PL}(\pi_{\sigma,\lambda})$ be the Plancherel measure, viewed as a measure on the set of
principal series representations $\pi_{\sigma,\lambda}$.
Since $\rank(G)>\rank(K)$, by a classical result of Harish-Chandra (\cite{HC2}), the set of
discrete series representations of $G$ is empty.
Then, by \cite[Theorem 13.2]{Knapp},
\begin{equation*}
d\mu_{PL}(\pi_{\sigma,\lambda})=P_{\sigma}(i\lambda)d\lambda.
\end{equation*}
Here, $P_{\sigma}(i\lambda)$ is the Plancherel polynomial
given by
\begin{equation*}
P_{\sigma}(i\lambda)=\prod_{\alpha\in\Delta^{+}(\mathfrak{g}_{\mathbb{C}},\mathfrak{h})}\frac{\langle i\lambda+\nu_{\sigma}+\rho_{\mathfrak{m}},\alpha\rangle}{\langle \rho_{\mathfrak{g}},\alpha\rangle},
\end{equation*}
where $\langle\cdot,\cdot\rangle$ is the inner product on $\mathfrak{a}^*$, $\nu_{\sigma}$ is the highest weight of $\sigma$ as in (2.5),
$\rho_{\mathfrak{m}}$ is defined as in (2.4),
and $\rho_{\mathfrak{g}}$ is defined by
\begin{equation*}
\rho_{\mathfrak{g}}:=\frac{1}{2}\sum_{\alpha\in\Delta^{+}(\mathfrak{g}_{\mathbb{C}},\mathfrak{h})}\alpha,
\end{equation*}
(see \cite[p.46]{BO}).
Let $z=i\lambda\in\mathbb{C}$.
Then, by \cite[p.264-265]{Mi2}, $P_{\sigma}(z)$ is an even polynomial in $z$,
i.e., $P_{\sigma}(z)=P_{\sigma}(-z)$.
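For instance, for $d=3$, i.e., $n=1$, one has $\Delta^{+}(\mathfrak{g}_{\mathbb{C}},\mathfrak{h})=\{e_{1}+e_{2},\,e_{1}-e_{2}\}$, $\rho_{\mathfrak{m}}=0$, and for $\sigma\in\widehat{M}$ with highest weight $\nu_{\sigma}=k$ one finds, up to a positive normalization constant,
\begin{equation*}
P_{\sigma}(i\lambda)\propto (i\lambda+k)(i\lambda-k)\propto\lambda^{2}+k^{2},
\end{equation*}
which is manifestly even in $z=i\lambda$.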
\newpage
\begin{center}
{\section{\textnormal{Twisted Selberg and Ruelle zeta functions}}}
\end{center}
We consider the twisted Ruelle and Selberg zeta functions associated with the geodesic flow on the
sphere vector bundle $S(X)$ of $X=\Gamma\backslash G/ K$. Since $K$ acts transitively on the unit vectors of $\mathfrak{p}$,
by the adjoint representation, $S(\widetilde{X})$ can be represented by the homogenous space $G/M$.
Therefore, $S(X)=\Gamma\backslash G/M$.
We recall the Cartan decomposition $G=KA^{+}K$ of $G$,
where $A^{+}$ is as in (2.2).
Then, every element $g\in G$ can be written as $g=ha_{+}k$, where $h,k\in K$ and $a_{+}=\exp(tH_{\mathbb{R}})$ for some $t\in \mathbb{R}^{+}$.
The positive real number $t$ equals $ t=d(eK,gK),$
where $d$ denotes the geodesic distance on $\widetilde{X}$.
It is a well known fact (\cite{GKM}) that there is a 1-1 correspondence between the closed geodesics on a manifold $X$ with negative sectional curvature
and the non-trivial conjugacy classes of the fundamental group $\pi_{1}(X)$ of $X$.
The hyperbolic elements of $\Gamma$ are its semisimple elements, i.e., the diagonalizable elements of $\Gamma$.
Since $\Gamma$ is a discrete, cocompact, and torsion-free subgroup of $G$, every element $\gamma\in\Gamma-\{e\}$ is hyperbolic.
We denote by $c_{\gamma}$ the closed geodesic on $X$, associated with the hyperbolic conjugacy class $[\gamma]$.
We denote also by $l(\gamma)$ the length of $c_{\gamma}$.
Since $\Gamma$ is torsion-free, $l(\gamma)$ is always positive, and therefore the length spectrum $\spec(\Gamma):=\{l(\gamma):\gamma\in\Gamma\}$ is bounded away from zero.
An element $\gamma\in\Gamma$ is called primitive if there exists no $n\in\mathbb{N}$ with $n>1$ and $\gamma_{0}\in\Gamma$ such that $\gamma=\gamma_{0}^{n}$.
We associate to a primitive element $\gamma_{0}\in\Gamma$ a prime geodesic on $X$.
The prime geodesics correspond to the periodic orbits of minimal length.
Hence, if a hyperbolic element $\gamma$ in $\Gamma$ is generated by a primitive element $\gamma_{0}$, then
there exists a $n_{\Gamma}(\gamma)\in \mathbb{N}$ such that $\gamma=\gamma_{0}^{n_{\Gamma}(\gamma)}$. The corresponding
closed geodesic is of length $l(\gamma)=n_{\Gamma}(\gamma)l(\gamma_{0})$.
We lift now the closed geodesic $c_{\gamma}$ to the universal covering $\widetilde{X}$.
For $\gamma\in\Gamma$, $l(\gamma):=\inf \{d(x,\gamma x):x\in\widetilde{X}\}$,
and $l(\gamma)=\inf \{d(eK,g^{-1}\gamma gK):g\in G\}$.
Hence, we see that the length of the closed geodesic $l(\gamma)$ depends only on $\gamma\in\Gamma$.
Let $\gamma \in \Gamma$, with $ \gamma\neq e$ and $\gamma$ hyperbolic. Then, by \cite[Lemma 6.5]{Wa} there exist a $g\in G$, a $m_{\gamma} \in M$, and an
$a_{\gamma} \in A^{+}$, such that $ g^{-1}\gamma g=m_{\gamma}a_{\gamma}$.
The element $m_{\gamma}$ is determined up to conjugacy classes in $M$, and the element $a_{\gamma}$ depends only on $\gamma$.
Analogous to the consideration of \cite[Section 3.1]{BO},
we define the geodesic flow $\phi$ on $S(X)$ by the map
\begin{equation*}
\phi\colon\mathbb{R}\times S(X)\ni (t,\Gamma gM)\rightarrow \Gamma g\exp(-tH_{\mathbb{R}}) M \in S(X).
\end{equation*}
A closed orbit of $\phi$ is described by the set $c:=\{\Gamma g\exp(-tH_{\mathbb{R}}) M\colon t\in\mathbb{R}\}$,
where $g\in G$ is such that $g^{-1}\gamma g:=m_{\gamma}a_{\gamma}\in MA^{+}$.
The Anosov property of the geodesic flow $\phi$ on $S(X)$ is expressed by the following
$d\phi$-invariant splitting of $TS(X)$:
\begin{equation}
TS(X)=T^sS(X)\oplus T^cS(X)\oplus T^uS(X),
\end{equation}
where $T^sS(X)$ consists of vectors that shrink exponentially and $T^uS(X)$ of vectors that expand exponentially, with respect to the riemannian metric as $t\rightarrow\infty$,
and $T^{c}S(X)$ is the one-dimensional subspace of vectors tangent to the flow.
The splitting in (3.1) corresponds to the splitting
\begin{equation}
TS(X)=\Gamma\backslash G\times_{\Ad}(\overline{\mathfrak{n}}\oplus \mathfrak{a} \oplus \mathfrak{n}),
\end{equation}
where $\Ad$ denotes the adjoint action of $\exp(-tH_{\mathbb{R}})$ on $\overline{\mathfrak{n}}\oplus \mathfrak{a}\oplus \mathfrak{n}$,
and $\overline{\mathfrak{n}}=\theta \mathfrak{n}$ is the sum of the negative root spaces of
the system $(\mathfrak{g},\mathfrak{a})$.
Let $(\sigma,V_\sigma)\in \widehat{M}$ and $(\chi, V_\chi)$ be a finite dimensional representation of $\Gamma$.
Let $E(\sigma,\chi):=\Gamma\backslash (G\times_{\sigma\otimes\chi}(V_\sigma\otimes V_\chi))\rightarrow S(X)$ be the vector bundle
over $S(X)$ associated with the representations $\sigma$ and $\chi$. Here, the equivalence class of $(g,v\otimes w)\in G\times(V_\sigma\otimes V_\chi)$ under the actions of $M$ and $\Gamma$
is given by
\begin{equation*}
[g, v\otimes w]=\{(\gamma g m, \sigma(m)^{-1}v\otimes\chi(\gamma)w)\colon \gamma\in\Gamma,\ m\in M \}.
\end{equation*}
We can lift the flow $\phi$ to a flow $\phi_{\sigma,\chi}$ on $E(\sigma,\chi)$ by the map
\begin{equation*}
\phi_{\sigma,\chi}\colon\mathbb{R} \times E(\sigma, \chi)\ni (t, [g,v\otimes w])\mapsto[g\exp(-tH_{\mathbb{R}}), v\otimes w]\in E(\sigma, \chi).
\end{equation*}
\begin{defi}
Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$ and $\sigma\in \widehat{M}$.
The twisted Selberg zeta function $Z(s;\sigma,\chi)$ for
$X$ is defined by the infinite product
\begin{equation}
Z(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}} \prod_{k=0}^{\infty}\det\big(\Id-(\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)|_{\overline{\mathfrak{n}}})) e^{-(s+|\rho|)l(\gamma)}\big),
\end{equation}
where $s\in \mathbb{C}$, $\overline{\mathfrak{n}}=\theta \mathfrak{n}$ is the sum of the negative root spaces of the system $(\mathfrak{g},\mathfrak{a})$,
$S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})$ denotes the $k$-th
symmetric power of the adjoint map $\Ad(m_\gamma a_\gamma)$ restricted to $\mathfrak{\overline{n}}$, and $\rho$ is as in (2.3).
\end{defi}
\begin{defi} Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$ and $\sigma\in \widehat{M}$.
The twisted Ruelle zeta function $ R(s;\sigma,\chi)$ for $X$ is defined by the infinite product
\begin{equation}
R(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}}\det\big(\Id-\chi(\gamma)\otimes\sigma(m_{\gamma})e^{-sl(\gamma)}\big)^{(-1)^{d-1}}.
\end{equation}
\end{defi}
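As a toy numerical illustration, note that for $d=3$ and trivial $\sigma$ and $\chi$ the Ruelle product reduces to $R(s)=\prod_{[\gamma_{0}]\,\mathrm{prim}}(1-e^{-sl(\gamma_{0})})$, whose logarithm can be approximated, for $\Re(s)$ large, by truncating the expansion $\log(1-x)=-\sum_{j\geq 1}x^{j}/j$, the same expansion used in the convergence proof below. The length data in the sketch are hypothetical.
\begin{verbatim}
import numpy as np

prim_lengths = [1.3, 1.7, 2.1, 2.4]   # hypothetical primitive lengths

def log_ruelle(s, prim_lengths, j_max=200):
    # log R(s) = - sum over primitive gamma_0 and powers j of
    #            exp(-s*j*l(gamma_0))/j  (d = 3, trivial sigma, chi)
    total = 0.0
    for l0 in prim_lengths:
        j = np.arange(1, j_max + 1)
        total -= np.sum(np.exp(-s * j * l0) / j)
    return total

print(log_ruelle(3.0, prim_lengths))  # converges rapidly for Re(s) large
\end{verbatim}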
In order to prove the convergence of the zeta functions,
we need to find an upper bound for the character of any finite dimensional representation of $\Gamma$.
\begin{lem}
Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$.
Then, there exist positive constants $K,k>0$ such that
\begin{equation}
\lvert\tr(\chi(\gamma))\rvert\leq Ke^{kl(\gamma)},\quad \forall\gamma\in\Gamma-\{e\}.
\end{equation}
\end{lem}
\begin{proof}
We fix a finite set of generators $L=\{\gamma_1,\ldots,\gamma_{r}\}$ of $\Gamma$ and choose a norm $\lVert\cdotp\rVert$ on $V_{\chi}$.
Let $d_{W}(\cdot,\cdot)$ be the word metric on $\Gamma$ (see \cite{LMR} for further details). Let $l_{W}(\gamma)=d_{W}(\gamma,e)$ be the length of
$\gamma\in\Gamma$ with respect to this metric.
Then, if we put $C=\max\{\lVert\chi(\gamma_i)\rVert: \gamma_{i}\in L\cup L^{-1}\}$, we get for $c=\log C$,
\begin{equation}
\lVert\chi(\gamma)\rVert\leq e^{cl_{W}(\gamma)}.
\end{equation}
By \cite[Prop. 3.2]{LMR}, it follows that there exist positive constants $c_{1},c_{2}>0$ such that
\begin{equation}
c_{1}d(x_{0},\gamma x_{0})\leq d_{W}(e,\gamma)\leq c_{2}d(x_{0}, \gamma x_{0}),
\end{equation}
where $x_{0}:=eK$ is the base point of $\widetilde{X}$.
Combining (3.6) with (3.7), we obtain
\begin{equation*}
\lVert\chi(\gamma)\rVert\leq e^{cc_{2}d(x_{0},\gamma x_{0})}.
\end{equation*}
It follows that
\begin{equation}
\lvert\tr\chi(\gamma)\rvert\leq\dim(V_{\chi})\lVert\chi(\gamma)\rVert\leq C_{3}e^{c_{2}d(x_{0},\gamma x_{0})}.
\end{equation}
Now by definition,
\begin{equation*}
l(\gamma):=\min\{d(x,\gamma x):x\in\widetilde{X}\}.
\end{equation*}
We choose a fundamental domain $F\subset\widetilde{X}$ for $\Gamma$ such that $x_{0}\in F$.
Given $\gamma\in\Gamma$, let $x_{1}$ be in $\widetilde{X}$, such that $l(\gamma)=d(x_{1},\gamma x_{1})$. Then, there exists a $\gamma_1\in\Gamma$
such that $ x_{1}\in \gamma_{1}F$. Let $x_{2}\in F$ such that $x_{1}=\gamma_{1}x_{2}$.
By compactness of the fundamental domain, $\diam(F)$ is finite. If we put $\delta:=\diam(F)$, then
\begin{equation}
d(x_{0},x_{2})\leq \delta.
\end{equation}
We see that
\begin{align}
d(x_{0},\gamma_{1}^{-1}\gamma\gamma_{1} x_{0})\notag&\leq d(x_{0},x_{2})+d(x_2,\gamma_{1}^{-1}\gamma\gamma_{1}x_{0})\\
&\leq \delta+d(x_2,\gamma_{1}^{-1}\gamma\gamma_{1}x_{0}).
\end{align}
In addition,
\begin{align}
d(x_2,\gamma_{1}^{-1}\gamma\gamma_{1}x_{0})\notag&\leq d(x_{2},\gamma_{1}^{-1}\gamma\gamma_{1}x_{2})+d(\gamma_{1}^{-1}\gamma\gamma_{1}x_{2},\gamma_{1}^{-1}\gamma\gamma_{1}x_{0})\\\notag
&\leq d(x_{2},\gamma_{1}^{-1}\gamma\gamma_{1}x_{2})+d(x_{0},x_{2})\\
&\leq d(x_{2},\gamma_{1}^{-1}\gamma\gamma_{1}x_{2})+\delta.
\end{align}
Hence, by (3.10) and (3.11) we get
\begin{equation*}
d(x_{0},\gamma_{1}^{-1}\gamma\gamma_{1} x_{0})\leq 2\delta+ d(x_{2},\gamma_{1}^{-1}\gamma\gamma_{1}x_{2}).
\end{equation*}
Recall that $x_{1}=\gamma_{1}x_{2}$. Therefore, we have
\begin{align}
d(x_{0},\gamma_{1}^{-1}\gamma\gamma_{1}x_{0})\notag&\leq 2\delta+d(\gamma_{1}^{-1}x_{1},\gamma_{1}^{-1}\gamma x_{1})\\
&\leq 2\delta+d(x_{1},\gamma x_{1}).
\end{align}
Using (3.8) and (3.12) we obtain the following inequalities.
\begin{align*}
\lvert\tr(\chi(\gamma))\rvert&=\lvert\tr(\chi(\gamma_{1}^{-1}\gamma\gamma_{1}))\rvert\notag\\
&\leq C_{3}e^{c_{2}d(x_{0},\gamma_{1}^{-1}\gamma\gamma_{1} x_{0})}\notag\\
&\leq C_{3}e^{c_{2}(2\delta+d(x_{1},\gamma x_{1}))}\notag\\
&=C_{4}e^{c_{2}d(x_{1},\gamma x_{1})}=C_{4}e^{c_{2}l(\gamma)}.
\end{align*}
The assertion follows.
\end{proof}
We are ready now to prove the convergence of the Selberg and Ruelle zeta functions.
\begin{prop}
Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$ and $\sigma\in \widehat{M}$. Then, there exists a constant $c>0$ such that
\begin{equation}
Z(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}}\prod_{k=0}^{\infty}\det\big(\Id-(\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)|_{\overline{\mathfrak{n}}}))e^{-(s+|\rho|)l(\gamma)}\big),
\end{equation}
converges absolutely and uniformly on compact subsets of the half-plane $\Re(s)>c$.
\end{prop}
\begin{proof}
We observe that
\begin{align}
\log Z(s;\sigma,\chi)\notag=&\sum_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}} \sum_{k=0}^{\infty}\tr\log(1-(\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}}))e^{-(s+|\rho|)l(\gamma)})\\\notag
=&-\sum_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}}\sum_{k=0}^{\infty}\sum_{j=1}^{\infty}\frac{\tr((\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}}))e^{-(s+|\rho|)l(\gamma)})^{j}}{j}\\\notag
=&-\sum_{[\gamma]\neq{e}}\sum_{k=0}^{\infty} \frac{1}{n_{\Gamma}(\gamma)}\tr(\chi(\gamma)\otimes\sigma(m_\gamma)\otimes S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}}))e^{-(s+|\rho|)l(\gamma)}\\
=&-\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}\tr(\chi(\gamma)\otimes\sigma(m_\gamma))\frac{e^{-(s+|\rho|)l(\gamma)}}{\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})},
\end{align}
where in the last equation we made use of the identity
\begin{equation*}
\sum_{k=0}^{\infty} \tr S^k(\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})=\frac{1}{\det(\Id-\Ad(m_{\gamma}a_{\gamma})_{\overline{\mathfrak{n}}})}.
\end{equation*}
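This identity is the usual symmetric power expansion of an inverse determinant; for a diagonalizable map it can be verified in one line. If $A$ is diagonalizable with eigenvalues $\mu_{1},\ldots,\mu_{m}$, $|\mu_{i}|<1$ (here $m=\dim\overline{\mathfrak{n}}$ and the eigenvalues of $\Ad(m_{\gamma}a_{\gamma})_{\overline{\mathfrak{n}}}$ have absolute value $e^{-l(\gamma)}<1$), then
\begin{equation*}
\sum_{k=0}^{\infty}\tr S^{k}(A)=\sum_{k=0}^{\infty}\;\sum_{k_{1}+\cdots+k_{m}=k}\mu_{1}^{k_{1}}\cdots\mu_{m}^{k_{m}}=\prod_{i=1}^{m}\frac{1}{1-\mu_{i}}=\frac{1}{\det(\Id-A)}.
\end{equation*}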
First, we observe that
\begin{equation*}
\lvert \tr\sigma(m_{\gamma})\rvert\leq \dim(\sigma), \quad\forall \sigma\in \widehat{M}.
\end{equation*}
We need an upper bound for the growth of the length spectrum $l(\gamma)$.
Using the normalization of the Haar measure on $G$ as in \cite[Proposition 7.6.4]{Wab} we see that there exists a
positive constant $C>0$ such that for every $R>0$
\begin{equation*}
\Vol(B(x_{0},R))\leq Ce^{2|\rho| R},
\end{equation*}
where $\rho$ is as in (2.3).
Since $\Gamma$ is a cocompact lattice of $G$, there exists a positive constant $C'$ such that
\begin{equation}
\sharp \{[\gamma]:l(\gamma)\leq R\}\leq C'e^{2|\rho |R}.
\end{equation}
We need also an upper bound for the quantity
\begin{equation*}
\frac{1}{\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})}.
\end{equation*}
Since the eigenvalues of $\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}}$ have absolute value $e^{-l(\gamma)}$ and
$\det(\Ad(a_{\gamma})_{\overline{\mathfrak{n}}})=\exp(-2|\rho| l(\gamma))$,
the determinant $\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})$ is uniformly bounded away from zero:
by (3.15) the length spectrum is discrete, so there is a conjugacy class $[\gamma_{min}]$ of minimal length $l(\gamma_{min})>0$.
Hence, there exists a positive constant $C''>0$ such that
\begin{equation*}
\frac{1}{\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})}<C''.
\end{equation*}
By Lemma 3.3, it follows that there exist positive constants $C,c_{1}>0$ such that
\begin{align*}
\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}\Big|\tr(\chi(\gamma)\otimes&\sigma(m_\gamma))\frac{e^{-(s+|\rho|)l(\gamma)}}
{\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})}\Big|\\
&\leq C \sum_{[\gamma]\neq{e}}e^{(c_{1}-\Re(s))l(\gamma)}\\
&= C\sum_{k=0}^{\infty}\sum_{\substack{[\gamma]\neq {e}\\ k\leq l(\gamma)\leq k+1}}e^{(c_{1}-\Re(s)) l(\gamma)}\\
&\leq C\sum_{k=0}^{\infty}\mathcal{N}(k+1) e^{(c_{1}-\Re(s))k},
\end{align*}
where
\begin{equation*}
\mathcal{N}(R):=\sharp\{[\gamma] : \gamma\in\Gamma,\ l(\gamma)\leq R\},\quad R\geq 0.
\end{equation*}
By (3.15), we have
\begin{equation*}
\sum_{k=0}^{\infty}\mathcal{N}(k+1) e^{(c_{1}-\Re(s))k}\leq C' \sum_{k=0}^{\infty} e^{(2|\rho|+c_{1}-\Re(s))k}.
\end{equation*}
Hence, there exists a positive constant $c>0$ such that for $s\in\mathbb{C}$ with $\Re(s)>c$,
\begin{equation*}
\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}\Big|\tr(\chi(\gamma)\otimes\sigma(m_\gamma))\frac{e^{-(s+|\rho|)l(\gamma)}}
{\det(\Id-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})}\Big|\\
<\infty.
\end{equation*}
The assertion follows from (3.14).
\end{proof}
A similar approach will be used to establish the convergence of the Ruelle zeta function.
\begin{prop}
Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be a finite dimensional representation of $\Gamma$ and $\sigma\in\widehat{M}$. Then, there exists a constant $r>0$ such that
\begin{equation}
R(s;\sigma,\chi):=\prod_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim}}\det\big(\Id-\chi(\gamma)\otimes\sigma(m_{\gamma})e^{-sl(\gamma)}\big)^{(-1)^{d-1}}
\end{equation}
converges absolutely and uniformly on compact subsets of the half-plane $\Re(s)>r$.
\end{prop}
\begin{proof}
We observe that
\begin{align}
\log R(s;\sigma,\chi)=\notag&(-1)^{d-1}\sum_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim }} \tr\log(1-\chi(\gamma)\otimes \sigma(m_{\gamma})e^{-sl(\gamma)})\\\notag
&=(-1)^{d}\sum_{\substack{[\gamma]\neq{e}\\ [\gamma]\prim }}\sum_{j=1}^{\infty}\frac{\tr((\chi(\gamma)\otimes \sigma(m_{\gamma})e^{-sl(\gamma)})^{j})}{j}\\
&=(-1)^{d}\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)} \tr(\chi(\gamma)\otimes \sigma(m_{\gamma}))e^{-sl(\gamma)}.
\end{align}
By Lemma 3.3, it follows that there exist positive constants $C,c_{1}>0$ such that
\begin{align*}
\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)} \Big|\tr(\chi(\gamma)\otimes &\sigma(m_{\gamma}))e^{-sl(\gamma)}\Big |\\
&\leq C\sum_{[\gamma]\neq{e}} e^{(c_{1}-\Re(s))l(\gamma)}\\
&=C\sum_{k=0}^{\infty}\sum_{\substack{[\gamma]\neq e\\ k\leq l(\gamma)\leq k+1}} e^{(c_{1}-\Re(s))l(\gamma)}\\
&\leq C\sum_{k=0}^{\infty}\mathcal{N}(k+1)e^{(c_{1}-\Re(s))k}.
\end{align*}
By (3.15), we have
\begin{equation*}
\sum_{k=0}^{\infty}\mathcal{N}(k+1)e^{(c_{1}-\Re(s))k}\leq C'\sum_{k=0}^{\infty}e^{(2|\rho|+c_{1}-\Re(s))k}.
\end{equation*}
Hence, there exists a positive constant $r>0$ such that for $s\in\mathbb{C}$ with $\Re(s)>r$,
\begin{equation}
\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)} \Big|\tr(\chi(\gamma)\otimes \sigma(m_{\gamma}))e^{-sl(\gamma)}\Big |<\infty.
\end{equation}
The assertion follows from (3.17).
\end{proof}
\begin{lem}We set
\begin{equation}
L_{sym}(\gamma;\sigma):=\frac{\tr(\chi(\gamma)\otimes\sigma(m_{\gamma}))e^{-|\rho|l(\gamma)}}
{\det\big(\Id-\Ad(m_{\gamma}a_{\gamma})_{\overline{\mathfrak{n}}}\big)}.
\end{equation}
Then, the logarithmic derivative of the Selberg zeta function $Z(s;\sigma,\chi)$ is given by
\begin{equation}\label{log der selberg}
L(s):=\frac{d}{ds}\log(Z(s;\sigma,\chi))=\sum_{[\gamma]\neq{e}}\frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
e^{-sl(\gamma)}.
\end{equation}
\end{lem}
\begin{proof}
By equation (3.14), we see that
\begin{align*}
\frac{d}{ds}\log(Z(s;\sigma,\chi))&=\sum_{[\gamma]\neq{e}}\frac{l(\gamma)}{n_{\Gamma}(\gamma)}\tr(\chi(\gamma)\otimes\sigma(m_\gamma))
\frac{e^{-sl(\gamma)}e^{-|\rho|l(\gamma)}}{\det(1-\Ad(m_\gamma a_\gamma)_{\overline{\mathfrak{n}}})}\\
&=\sum_{[\gamma]\neq e}\frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
e^{-sl(\gamma)}.
\end{align*}
\end{proof}
\begin{center}
\section{\textmd{The twisted Bochner-Laplace operator}}
\end{center}
Let $E\rightarrow X$ be a complex vector bundle with covariant derivative $\nabla$. We define the second covariant derivative $\nabla^2$ by
\begin{equation*}
\nabla^2_{V,W}:=\nabla_{V}\nabla_{W}-\nabla_{\nabla^{LC}_{V}W},
\end{equation*}
where $V,W\in C^{\infty}(X,TX)$ and $\nabla^{LC}$ denotes the Levi-Civita connection on $TX$.
We define the connection Laplacian $\Delta_{E}$ to be the negative of the trace of the second covariant derivative, i.e.,
\begin{equation*}
\Delta_{E}:=-\Tr\nabla^2.
\end{equation*}
By \cite[p.154]{LM}, the connection Laplacian is equal to the Bochner-Laplace operator, i.e.,
\begin{equation*}
\Delta_{E}=\nabla^{*}\nabla.
\end{equation*}
In terms of a local orthonormal frame field $(e_{1},\ldots,e_{d})$ of $T_{x}X$, for $x\in X$,
the connection Laplacian is given by
\begin{equation*}
\Delta_{E}=-\sum_{j=1}^{d}\nabla^{2}_{e_{j},e_{j}}.
\end{equation*}
$\Delta_{E}\colon C^{\infty}(X,E)\circlearrowright$ is a second order differential operator.
Let $h$ be a metric in $E$. Then, $\Delta_{E}$ acts in $L^{2}(X,E)$ with domain $C^{\infty}(X,E)$.
Since the principal symbol of $\Delta_{E}$ is computed to be $\sigma_{\Delta_{E}}(x,\xi)
=\lVert \xi \rVert^ {2}_{x} \Id_{E_{x}}$, we can conclude that $\Delta_{E}$ is an elliptic operator
and hence it has nice spectral properties. Namely, its spectrum is discrete and contained in a translate of a positive cone $C\subset \mathbb{C}$ such that $\mathbb{R}^{+}\subset C$
(\cite[Theorem 8.4 and Theorem 9.3]{Sh}).
Furthermore, if the metric is compatible with the connection $\nabla$, $\Delta_{E}$ is formally self-adjoint.
Let $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ be an arbitrary finite dimensional representation of $\Gamma$.
Let $E_{\chi}\rightarrow X$ be the associated flat vector bundle
over $X$, equipped with a flat connection $\nabla^{E_{\chi}}$.
We specialize now to the twisted case $E=E_{0}\otimes E_{\chi}$, where $E_{0}\rightarrow X$ is a complex vector bundle equipped with a connection $\nabla^{E_{0}}$ and
a metric, which is compatible with this connection.
Let $\nabla^{E}=\nabla^{E_{0}\otimes E_{\chi}}$ be the product connection, defined by
\begin{equation*}
\nabla^{E_{0}\otimes E_{\chi}}:=\nabla^{E_{0}}\otimes 1+1\otimes\nabla^{E_{\chi}}.
\end{equation*}
We define the operator $\Delta_{E_{0},\chi}^\sharp$ by
\begin{equation}
\Delta_{E_{0},\chi}^{\sharp}=-\Tr\big((\nabla^{E_{0}\otimes E_{\chi}})^2\big).
\end{equation}
We choose a hermitian metric in $E_{\chi}$.
Then, $\Delta_{E_{0},\chi}^{\sharp}$ acts on $L^{2}(X,E_{0}\otimes E_{\chi})$. However, it is not a formally self-adjoint operator in general.
We want to describe this operator locally. Following the analysis in \cite{M1}, we consider an open subset $U$ of $X$ such that $E_{\chi}\lvert_{U}$ is trivial.
Then, $E_{0}\otimes E_{\chi}\lvert_{U}$ is isomorphic to the direct sum of $m$-copies of $E_{0}\lvert_{U}$, i.e.,
\begin{equation*}
(E_{0}\otimes E_{\chi})\lvert_{U}\cong\oplus_{i=1}^{m}E_{0}\lvert_{U},
\end{equation*}
where $m:=\rank(E_\chi)=\dim V_{\chi}$.
Let $(e_{i}),i=1,\cdots,m$ be a basis of flat sections of $E_{\chi}\lvert_{U}$.
Then, each $\phi \in C^{\infty}(U,(E_{0}\otimes E_{\chi})\lvert_{U})$ can be written as
\begin{equation*}
\phi=\sum_{i=1}^{m}\phi_{i} \otimes e_{i},
\end{equation*}
where $\phi_{i}\in C^{\infty}(U, E_{0}\lvert_{U}), i=1,\ldots,m$.
The product connection is given by
\begin{equation*}
\nabla_{Y}^{E_{0}\otimes E_{\chi}}\phi=\sum_{i=1}^{m}(\nabla_{Y}^{E_{0}})(\phi_{i})\otimes e_{i},
\end{equation*}
where $Y\in C^{\infty}(X,TX)$.\\
By (4.1) we obtain the twisted Bochner-Laplace operator acting on $C^{\infty}(X,E_{0}\otimes E_{\chi})$, defined by
\begin{equation}
\Delta_{E_{0},\chi}^{\sharp}\phi=\sum_{i=1}^{m}(\Delta_{E_{0}}\phi_{i})\otimes e_{i},
\end{equation}
where $\Delta_{E_{0}}$ denotes the Bochner-Laplace operator $\Delta^{E_{0}}=(\nabla^{E_{0}})^*\nabla^{E_{0}}$
associated to the connection $\nabla^{E_{0}}$.
Let now $\widetilde{E}_{0}, \widetilde{E}_{\chi}$ be the pullbacks to $\widetilde{X}$ of $E_{0},E_{\chi}$, respectively.
Then,
\begin{equation*}
\widetilde{E}_{\chi}\cong \widetilde{X}\times V_{\chi},
\end{equation*}
and
\begin{equation}
C^{\infty}(\widetilde{X}, \widetilde{E}_{0}\otimes \widetilde{E}_{\chi})\cong C^{\infty}(\widetilde{X}, \widetilde{E}_{0})\otimes V_{\chi}.
\end{equation}
With respect to the isomorphism (4.3), it follows from (4.2) that the lift of $\Delta_{E_{0},\chi}^{\sharp}$ to $\widetilde{X}$ takes the form
\begin{equation}
\widetilde{\Delta}^{\sharp}_{E_{0},\chi}=\widetilde{\Delta}_{E_{0}}\otimes \Id_{V_{\chi}},
\end{equation}
where $\widetilde{\Delta}_{E_{0}}$ is the lift of $\Delta_{E_{0}}$ to $\widetilde{X}$.
By (4.2), $\Delta_{E_{0},\chi}^{\sharp}$ has principal symbol
\begin{equation*}
\sigma_{\Delta_{E_{0},\chi}^\sharp}(x,\xi)=\lVert \xi \rVert^{2}_{x} \Id_{(E_{0}\otimes E_{\chi})_{x}}, \quad x\in X, \xi\in T_{x}^{*}X, \xi\neq0.
\end{equation*}
Hence, since the principal symbol is self-adjoint with respect to
the fiber metrics on $E_{\chi}$ and $E_{0}$,
it has nice spectral properties, i.e.,
its spectrum is discrete and contained in a translate of a positive cone $C\subset \mathbb{C}$ such that $\mathbb{R}^{+}\subset C$
(\cite[Theorem 8.4 and Theorem 9.3]{Sh}).
We include here some definitions, which are needed to study the spectrum of the
twisted Laplace-Bochner operator.
For further details see \cite[p.203-206]{BK2}.
\begin{defi}
A spectral cut is a ray
\begin{equation*}
R_{\theta}:=\{\rho e^ {i\theta}: \rho\in[0,\infty)\},
\end{equation*}
where $\theta\in[0,2\pi)$.
\end{defi}
\begin{defi}
The angle $\theta$ is a principal angle for the elliptic operator $\Delta_{E_{0},\chi}^{\sharp}$ if
\begin{equation*}
\spec(\sigma_{\Delta}(x,\xi))\cap R_{\theta}=\emptyset,\quad \forall x\in X,\forall\xi\in T_{x}^{*}X,\xi\neq 0.
\end{equation*}
\end{defi}
\begin{defi}
We define the solid angle $L_{I}$ associated with a closed interval $I$ of $\mathbb{R}$ by
\begin{equation*}
L_{I}:=\{\rho e^ {i\theta}: \rho\in(0,\infty), \theta\in I \}.
\end{equation*}
\end{defi}
\begin{defi}
The angle $\theta$ is an Agmon angle for the elliptic operator $\Delta_{E_{0},\chi}^{\sharp}$ if it is a principal angle for $\Delta_{E_{0},\chi}^{\sharp}$
and there exists $\varepsilon>0$ such that
\begin{equation*}
\spec(\Delta_{E_{0},\chi}^{\sharp})\cap L_{[\theta-\varepsilon,\theta+\varepsilon]}=\emptyset.
\end{equation*}
\end{defi}
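For instance, if an elliptic operator is formally self-adjoint and non-negative, so that its spectrum and the spectrum of its principal symbol are contained in $[0,\infty)$, then any angle $\theta\in(0,2\pi)$, e.g. $\theta=\pi$, is a principal angle, and it is also an Agmon angle: for $0<\varepsilon<\min\{\theta,2\pi-\theta\}$ one has
\begin{equation*}
[0,\infty)\cap L_{[\theta-\varepsilon,\theta+\varepsilon]}=\emptyset.
\end{equation*}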
\begin{lem}
Let $\varepsilon\in(0,\frac{\pi}{2})$ be an angle such that the principal symbol $\sigma_{{\Delta}^{\sharp}_{E_{0},\chi}}(x,\xi)$ of ${\Delta}^{\sharp}_{E_{0},\chi}$,
for $\xi\in T_{x}^{*}X,\xi\neq 0$, does not take values in $ L_{[-\varepsilon,\varepsilon]}$.
Then, the spectrum $\spec({\Delta}^{\sharp}_{E_{0},\chi})$ of the operator ${\Delta}^{\sharp}_{E_{0},\chi}$
is discrete, and there exists $R>0$ such that $\spec({\Delta}^{\sharp}_{E_{0},\chi})$
is contained in the set $B(0,R)\cup L_{[-\varepsilon,\varepsilon]}\subset \mathbb{C}$.
\end{lem}
\begin{proof}
The discreteness of the spectrum follows from \cite[Theorem 8.4]{Sh}. For the second statement see \cite[Theorem 9.3]{Sh}.
\end{proof}
Let $\lambda_{k}$ be an eigenvalue of ${\Delta}^{\sharp}_{E_{0},\chi}$ and
$V_{\lambda_{k}}$ be the corresponding generalized eigenspace.
This is a finite dimensional subspace of $C^{\infty}(X,E)$ invariant under ${\Delta}^{\sharp}_{E_{0},\chi}$.
For every $k\in\mathbb{N}$, there exists $N_{k}\in\mathbb{N}$ such that
\begin{equation*}
({\Delta}^{\sharp}_{E_{0},\chi}-\lambda_{k}\Id)^{N_{k}}V_{\lambda_{k}}=0,
\end{equation*}
and, moreover, $\lim_{k\rightarrow \infty}\lvert \lambda_{k}\rvert=\infty$.
By \cite{Mk}, the space $L^{2}(X,E)$ can be decomposed as
\begin{equation*}
L^{2}(X,E)=\overline{\bigoplus_{k\geq 1}V_{\lambda_{k}}}.
\end{equation*}
This is the generalization of the eigenspace decomposition of a self-adjoint operator.
We note here that in general the above decomposition is not a sum of mutually orthogonal subspaces (see \cite[p. 7]{M1}).
\begin{defi}
The algebraic multiplicity $m(\lambda_{k})$ of the eigenvalue $\lambda_{k}$ is defined to be the dimension of the corresponding
generalized eigenspace $V_{\lambda_{k}}$.
\end{defi}
We want to define the heat operator $e^{-t\Delta_{E_{0},\chi}^\sharp}$ associated to the twisted Bochner-Laplace operator.
Let $\theta$ be an Agmon angle for the operator $\Delta_{E_{0},\chi}^\sharp$. Then, by definition of the Agmon angle and Lemma 4.5, there exists $\varepsilon>0$
such that
\begin{equation*}
\spec(\Delta_{E_{0},\chi}^\sharp)\cap L_{[\theta-\varepsilon,\theta+\varepsilon]}=\emptyset.
\end{equation*}
Since $\Delta_{E_{0},\chi}^\sharp$ has discrete spectrum, there exists also an $r_{0}>0$ such that
\begin{equation*}
\spec(\Delta_{E_{0},\chi}^\sharp)\cap\{z\in\mathbb{C}:\lvert z+1\rvert\leq 2r_{0}\}=\emptyset.
\end{equation*}
We define a contour $\Gamma_{\theta,r_{0}}$ as follows.
\begin{equation*}
\Gamma_{\theta,r_{0}}=\Gamma_{1}\cup\Gamma_{2}\cup\Gamma_{3},
\end{equation*}
where $\Gamma_{1}=\{-1+re^ {i\theta}\colon\infty>r\geq r_{0}\}$, $\Gamma_{2}=\{-1+r_{0}e^ {ia}\colon\theta\leq a\leq \theta+2\pi\}$,
$\Gamma_{3}=\{-1+re^{i(\theta+2\pi)}\colon r_{0}\leq r< \infty\}$.
On $\Gamma_{1}$, $r$ runs from $\infty$ to $r_{0}$, $\Gamma_{2}$ is oriented counterclockwise, and on $\Gamma_{3}$,
$r$ runs from $r_{0}$ to $\infty$.
We put
\begin{equation}
e^{-t\Delta_{E_{0},\chi}^{\sharp}}=\frac{i}{2\pi}\int_{\Gamma_{\theta,r_{0}}}e^{-t\lambda}(\Delta_{E_{0},\chi}^{\sharp}-\lambda\Id)^{-1} d\lambda.
\end{equation}
By \cite[Corollary 9.2]{Sh} and the fact that $\lvert e^{-t\lambda}\rvert\leq e^{-t\text{\Re}(\lambda)}$, the integral in equation (4.5)
is well defined.
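As a plausibility check of the definition (4.5), ignoring all domain questions, one may consider its scalar analogue. If $\mu\in\mathbb{C}$ lies to the right of the contour $\Gamma_{\theta,r_{0}}$, then, closing the contour to the right, which is permitted since $\lvert e^{-t\lambda}\rvert$ decays as $\Re(\lambda)\rightarrow\infty$, the residue theorem gives
\begin{equation*}
\frac{i}{2\pi}\int_{\Gamma_{\theta,r_{0}}}\frac{e^{-t\lambda}}{\mu-\lambda}\,d\lambda
=\frac{1}{2\pi i}\int_{\Gamma_{\theta,r_{0}}}\frac{e^{-t\lambda}}{\lambda-\mu}\,d\lambda=e^{-t\mu}.
\end{equation*}
This is consistent with $e^{-t\Delta_{E_{0},\chi}^{\sharp}}$ acting, up to a nilpotent correction, as $e^{-t\lambda_{j}}$ on the generalized eigenspace $V_{\lambda_{j}}$.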
\begin{center}
\section{\textnormal{Trace Formulas}}
\end{center}
We want now to define a more special case of a twisted Bochner-Laplace operator.
Namely, we consider the operator $\Delta^{\sharp}_{\tau,\chi}$
acting on $C^{\infty}(X,E_{\tau}\otimes E_{\chi})$, where $E_{\tau}$
is the locally homogeneous vector bundle
associated with a complex finite dimensional unitary representation $(\tau,V_{\tau})$ of $K$.
The key point is that the lift of the twisted Bochner-Laplace operator to the universal covering acts as the identity on the flat factor $V_{\chi}$.
Recall that by formula (4.4), we get
\begin{equation*}
\widetilde{\Delta}^{\sharp}_{\tau,\chi}=\widetilde{\Delta}_{\tau}\otimes \Id_{V_{\chi}},
\end{equation*}
where $\widetilde{\Delta}_\tau$ is the lift to $\widetilde{X}$ of the Bochner-Laplace operator $\Delta_{\tau}$, associated with the
representation $\tau$ of $K$.
We give here an explicit description of the operator $\Delta_{\tau}$.
We regard the Lie group $G$ as principal $K$-fiber bundle over $\widetilde{X}$. Let $\pi\colon G\rightarrow G/K$ be the canonical projection.
Then, since $\mathfrak{p}$ is invariant under the adjoint action $\Ad(k), k\in K$, the assignment
\begin{equation*}
T_{g}^{hor}:=\Big\{\frac{d}{dt}\bigg|_{t=0}g\exp(tX)\colon X\in \mathfrak{p}\Big\}
\end{equation*}
defines a horizontal distribution on $G$ (\cite[Chapter \MakeUppercase{\romannumeral 3}]{KNo}). This is the canonical connection in the principal bundle $G$.
Let $\tau:K\rightarrow \GL(V_\tau)$ be a complex finite dimensional unitary representation of $K$ on a vector space $V_\tau$, equipped with an inner product $\langle\cdotp,\cdotp\rangle_{\tau}$.
Let $\widetilde{E}_{\tau}$ be the homogenous vector bundle associated with $(\tau,V_{\tau})$, defined by
\begin{equation*}
\widetilde{E}_{\tau}:=G\times_{\tau} V_{\tau}\rightarrow\widetilde{X},
\end{equation*}
where $K$ acts on $G\times V_{\tau}$ on the right by
\begin{equation*}
(g,v)k=(gk,\tau^{-1}(k)v), \quad g\in G, k\in K, v\in V_{\tau}.
\end{equation*}
The inner product $\langle \cdotp, \cdotp \rangle_{\tau}$ on the vector space $V_{\tau}$
induces a $G$-invariant metric $h^{E_\tau}$ on $\widetilde{E}_{\tau}$.
We denote by $C^{\infty}(\widetilde{X},\widetilde{E}_{\tau})$ the space of the smooth sections
of the vector bundle $\widetilde{E}_{\tau}$.
We define the space
\begin{equation}
C^{\infty}(G;\tau)=\{f:G\rightarrow V_{\tau}\colon f\in C^{\infty}, f(gk)=\tau(k)^{-1}f(g), \forall g\in G, \forall k\in K\}.
\end{equation}
Similarly, we denote by $C^{\infty}_{c}(G;\tau)$ the subspace of $C^{\infty}(G;\tau)$ of compactly supported functions and
$L^{2}(G;\tau)$ the completion of $C^{\infty}_{c}(G;\tau)$ with respect to the inner product
\begin{equation*}
\langle f,h\rangle=\int_{G/K}\langle f(g),h(g)\rangle_{\tau} d\dot{g}.
\end{equation*}
Let $A\colon C^{\infty}(\widetilde{X},\widetilde{E}_{\tau})\rightarrow C^{\infty}(G;\tau)$ be the operator, defined by
\begin{equation*}
Af(g)=g^{-1}f(gK).
\end{equation*}
Then, the canonical connection on $\widetilde{E}_{\tau}$ is given by
\begin{align*}
A(\nabla_{d\pi(g)X}^{\tau}f)(g)&=\frac{d}{dt}\bigg|_{t=0}Af(g\exp(tX))\\
&=\frac{d}{dt}\bigg|_{t=0}(g\exp(tX))^{-1}f(g\exp(tX)K),
\end{align*}
where $g\in G, X\in \mathfrak{p}$, and $f\in C^{\infty}(\widetilde{X},\widetilde{E}_{\tau})$.
By \cite[p. 4]{Mi1}, $A$ induces a canonical isomorphism
\begin{equation}
C^{\infty}(\widetilde{X},\widetilde{E}_{\tau})\cong C^{\infty}(G;\tau).
\end{equation}
Similarly, there exist the following isomorphisms
\begin{align}
&C_{c}^{\infty}(\widetilde{X},\widetilde{E}_{\tau})\cong C^{\infty}_{c}(G;\tau);\\\notag
&L^{2}(\widetilde{X},\widetilde{E}_{\tau})\cong L^{2}(G;\tau).
\end{align}
We consider the Bochner-Laplace operator associated with $\widetilde{\nabla}^{\tau}$,
\begin{equation*}
\widetilde{\Delta}_{\tau}=(\widetilde{\nabla}^{\tau})^{*}\widetilde{\nabla}^{\tau}:C_{c}^{\infty}(\widetilde{X},\widetilde{E}_{\tau})\rightarrow L^{2}(\widetilde{X},\widetilde{E}_{\tau}).
\end{equation*}
Let now $\Omega \in Z(\mathfrak{g}_{\mathbb{C}})$ be the Casimir element of $G$. We assume that $\tau$ is irreducible.
Let $\Omega_{K}\in Z(\mathfrak{k}_{\mathbb{C}})$ be the Casimir element of $K$ and $\lambda_{\tau}$ the Casimir eigenvalue of $\tau$, i.e., $\tau(\Omega_{K})=\lambda_{\tau}\Id$,
where $Z(\mathfrak{k}_{\mathbb{C}})$ denotes the center of the universal enveloping algebra of $\mathfrak{k}_{\mathbb{C}}$.
Then, with respect to the isomorphism (5.2), the Bochner-Laplace operator acting on $C^{\infty}_{c}(G;\tau)$ is given by
\begin{equation}
\widetilde{\Delta}_{\tau}=-R(\Omega)+\lambda_{\tau}\Id.
\end{equation}
This is proved in \cite[Proposition 1.1]{Mi1}.
The operator $\widetilde{\Delta}_{\tau}$ is an elliptic formally self-adjoint differential operator of second order.
By \cite{Ch}, it is an essentially self-adjoint operator. Its self-adjoint extension will be also denoted by $\widetilde{\Delta}_{\tau}$.\\
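For instance, if $\tau$ is the trivial representation of $K$, then $V_{\tau}=\mathbb{C}$, $\lambda_{\tau}=0$, and (5.4) reduces to
\begin{equation*}
\widetilde{\Delta}_{\tau}=-R(\Omega),
\end{equation*}
which, with the normalization of $\Omega$ used here, is the Laplace-Beltrami operator acting on $C^{\infty}(\widetilde{X})$.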
We consider the corresponding heat semi-group $e^{-t\widetilde{\Delta}_{\tau}}$ acting on the space $L^2(\widetilde{X}, \widetilde{E}_{\tau})$.
\begin{equation*}
e^{-t\widetilde{\Delta}_{\tau}}\colon L^2(\widetilde{X}, \widetilde{E}_{\tau})\rightarrow L^2(\widetilde{X}, \widetilde{E}_{\tau}).
\end{equation*}
By \cite[p.467]{CY}, $e^{-t\widetilde{\Delta}_{\tau}},t>0$ is an infinitely smoothing operator with a $C^{\infty}$-kernel,
i.e. there exists a smooth function $k_{t}^{\tau}\colon G\times G\rightarrow \End(V_{\tau})$ such that
\begin{enumerate}
\item it is symmetric in the $G$-variables and for each $g\in G$, the map
$g'\mapsto k_{t}^{\tau}(g,g')$
belongs to $L^2(\widetilde{X}, \widetilde{E}_{\tau})$;
\item it satisfies the covariance property,
\begin{equation*}
k_{t}^{\tau}(gk,g'k')=\tau^{-1}(k)k_{t}^{\tau}(g,g') \tau(k'), \quad
\forall g,g'\in G, k,k'\in K;
\end{equation*}
\item for $f\in L^{2}(\widetilde{X}, \widetilde{E}_{\tau})$,
\begin{equation}
e^{-t\widetilde{\Delta}_{\tau}}f(g)=\int_{G}k_{t}^{\tau}(g,g')f(g')dg'.
\end{equation}
\end{enumerate}
The Casimir element is invariant under the action of $G$. Hence, $\widetilde{\Delta}_{\tau}$ is $G$-invariant, and $e^{-t\widetilde{\Delta}_{\tau}}$ is an integral
operator which commutes with the right regular representation of $G$ in $L^{2}(\widetilde{X},\widetilde{E}_{\tau})$.
Then there exists a function $H_{t}^{\tau}\colon G\rightarrow\End(V_{\tau})$, such that
\begin{enumerate}
\item $H_{t}^{\tau}(g^{-1}g')=k_{t}^{\tau}(g,g'),\quad \forall g,g'\in G$;
\item it satisfies the covariance property
\begin{equation}
H_{t}^{\tau}(kgk')=\tau^{-1}(k)H_{t}^{\tau}(g) \tau(k'),\quad \forall g\in G, \forall k,k'\in K;
\end{equation}
\item for $f\in L^{2}(\widetilde{X}, \widetilde{E}_{\tau})$,
\begin{equation}
e^{-t\widetilde{\Delta}_{\tau}}f(g)=\int_{G}H_{t}^{\tau}(g^{-1}g')f(g')dg'.
\end{equation}
\end{enumerate}
We denote by $(\mathcal{C}^{q}(G)\otimes \End(V_{\tau}))^{K\times K}$ the Harish-Chandra $L^{q}$-Schwartz space of functions on $G$
with values in $\End(V_{\tau})$, defined as in \cite[p.161-162]{BM}, such that the covariance property (5.6) is satisfied.
\begin{thm}
Let $t>0$. Then, for every $q>0$
\begin{equation*}
H_{t}^{\tau}\in (\mathcal{C}^{q}(G)\otimes \End(V_{\tau}))^{K\times K}.
\end{equation*}
\end{thm}
\begin{proof}
This is proved in \cite[Proposition 2.4]{BM}.
\end{proof}
In \cite[p.161]{BM}, it is proved that
\begin{equation*}
e^{{-t\widetilde{\Delta}_{\tau}}}=R_{\Gamma}(H_{t}^{\tau}),
\end{equation*}
where $R_{\Gamma}(H_{t}^{\tau})$ denotes the bounded trace class operator,
induced by the right regular representation of $G$, acting on $C^{\infty}(G;\tau)$.
It is described by the formula
\begin{equation*}
e^{-t\widetilde{\Delta}_{\tau}}f(g)=\int_{G}H_{t}^{\tau}(g^{-1}g')f(g')dg'.
\end{equation*}
More generally, we consider a unitary admissible representation $\pi$ of $G$ in a Hilbert space $\mathcal{H}_{\pi}$. We set
\begin{equation*}
\widetilde{\pi}(H_{t}^{\tau})=\int_{G}\pi(g)\otimes H_{t}^{\tau}(g)dg.
\end{equation*}
This defines a bounded trace class operator on $\mathcal{H}_{\pi}\otimes V_{\tau}$.
By \cite[p.160-161]{BM}, relative to the splitting
\begin{equation*}
\mathcal{H}_{\pi}\otimes V_{\tau}=(\mathcal{H}_{\pi}\otimes V_{\tau})^K\oplus [(\mathcal{H}_{\pi}\otimes V_{\tau})^K]^{\perp},
\end{equation*}
$ \widetilde{\pi}(H_{t}^{\tau})$ has the form
\begin{equation}
\widetilde{\pi}(H_{t}^{\tau})= \begin{pmatrix}
\pi(H_{t}^{\tau}) & 0 \\
0 & 0
\end{pmatrix},
\end{equation}
with $\pi(H_{t}^{\tau})$ acting on $(\mathcal{H}_{\pi}\otimes V_{\tau})^K$.
Then, it follows that
\begin{equation}
e^{-t(-\pi(\Omega)+\lambda_{\tau})}\Id=\pi(H_{t}^{\tau}),
\end{equation}
where $\Id$ denotes the identity on the space $(\mathcal{H}_{\pi}\otimes V_{\tau})^K$ (\cite[Corollary 2.2]{BM}).
We let
\begin{equation*}
h_{t}^{\tau}(g):=\tr H_{t}^{\tau}(g).
\end{equation*}
We consider orthonormal bases $(\xi_{n}), n\in \mathbb{N}, (e_{j}), j=1,\cdots,k$ of the vector spaces $\mathcal{H}_{\pi}, V_{\tau}$, respectively, where $k:=\dim(V_{\tau})$.
By (5.8),
\begin{equation}
\Tr(\pi(H_{t}^{\tau}))=\Tr(\widetilde{\pi}(H_{t}^{\tau})).
\end{equation}
We have
\begin{align}
\Tr(\widetilde{\pi}(H_{t}^{\tau}))=\notag&\sum_{n}\sum_{j}\langle \widetilde{\pi}(H_{t}^{\tau})(\xi_n\otimes e_j),(\xi_n\otimes e_j)\rangle\\\notag
&=\sum_{n}\sum_{j}\int_{G}\langle \pi(g)\xi_n,\xi_n\rangle\langle H_{t}^{\tau}(g)e_j,e_j\rangle dg\\\notag
&=\sum_{n}\int_{G}\langle\pi(g)\xi_n,\xi_n\rangle h_{t}^{\tau}(g)dg\\\notag
&=\sum_{n}\langle\pi(h_{t}^{\tau})\xi_n,\xi_n\rangle\\
&=\Tr\pi(h_{t}^{\tau}).
\end{align}
Hence, if we combine equations (5.9), (5.10) and (5.11), we get
\begin{equation}
\Tr\pi(h_{t}^{\tau})=e^{-t(-\pi(\Omega)+\lambda_{\tau})}\dim(\mathcal{H}_{\pi}\otimes V_{\tau})^K.
\end{equation}
Now we want to specify the unitary representation $\pi$ of $G$. We consider the unitary principal series representation $\pi_{\sigma,\lambda}$,
defined in section 2.
Our goal is to compute the Fourier transform of $h_{t}^{\tau}$,
\begin{equation*}
\Theta_{\sigma,\lambda}(h_{t}^{\tau})=\Tr\pi_{\sigma,\lambda}(h_{t}^{\tau}).
\end{equation*}
\begin{prop}
For $\sigma \in \widehat{M}$ and $\lambda \in \mathbb{R}$, let $\Theta_{\sigma, \lambda}$ be the global character of $\pi_{\sigma,\lambda}$.
Let $\tau\in \widehat{K}$. Then,
\begin{equation}
\Theta_{\sigma,\lambda}(h_{t}^{\tau})=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}.
\end{equation}
\end{prop}
\begin{proof}
We have
\begin{equation*}
\Theta_{\sigma,\lambda}(h_{t}^{\tau})=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}\dim(\mathcal{H}_{\pi_{\sigma,\lambda}}\otimes V_{\tau})^K
=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}[\pi_{\sigma,\lambda}:\check{\tau}],
\end{equation*}
where $\check{\tau}$ denotes the contragredient representation of $\tau$.
We have
\begin{align*}
\Theta_{\sigma,\lambda}(h_{t}^{\tau})&=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}[\pi_{\sigma,\lambda}:\check{\tau}]\\
&=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}[\pi_{\sigma,\lambda}:{\tau}]=e^{-t(-\pi_{\sigma,\lambda}(\Omega)+\lambda_{\tau})}[\tau\mid_{M}:\sigma].
\end{align*}
In the last line in the equation above we use the Frobenius reciprocity principle,
which is described for compact Lie groups in \cite[Theorem 1.14]{Knapp}.
By \cite[p.208]{Knapp}, one has
\begin{equation*}
[\pi_{\sigma,\lambda}|_{K}:{\tau}]=\sum_{\omega\in \widehat{M\cap K}}n_{\omega}
[\tau\mid_{M\cap K}:\omega],
\end{equation*}
where $n_{\omega}$ are positive integers.
But, in our case $M\subset K$ and therefore $M\cap K=M$. \\
Hence,
\begin{equation*}
[\pi_{\sigma,\lambda}|_{K}:{\tau}]=[\tau\mid_{M}:\sigma].
\end{equation*}
By \cite[Theorem 8.1.3]{GW}, $K$ is multiplicity free in $G$, i.e. $[\pi_{\sigma,\lambda}|_{K}:{\tau}]\leq 1$.
The assertion follows.
\end{proof}
We pass now to $X=\Gamma\backslash \widetilde{X}$. We consider the
locally homogeneous vector bundle
\begin{equation*}
E_{\tau}:=\Gamma\backslash\widetilde{E}_{\tau}\rightarrow X.
\end{equation*}
Let $E_{\chi}$ be the flat vector bundle over $X$.
We want to derive a trace formula for the heat operator $e^{-t\Delta_{\tau,\chi}^{\sharp}}$.
By Lemma 2.4 and Proposition 2.5 in \cite{M1}, $e^{-t\Delta_{\tau,\chi}^{\sharp}}$
is an integral operator with smooth kernel and of trace class.
We can apply Lidskii's theorem,
which gives a general expression for the trace of a trace class (not necessarily self-adjoint) operator
in terms of its eigenvalues.
By \cite[Theorem 3.7]{SB},
\begin{equation}
\Tr e^{-t\Delta_{\tau,\chi}^\sharp}=\sum_{\lambda_{j}\in\spec(\Delta_{\tau,\chi}^\sharp)}m(\lambda_{j})e^{-t\lambda_{j}},
\end{equation}
where $m(\lambda_{j})$ is as in Definition 4.6.
The kernel function $H^{\tau,\chi}_{t}$ of the integral operator $e^{-t\Delta_{\tau,\chi}^\sharp}$ is a smooth section of
$(E_{\tau}\otimes E_{\chi})\otimes(E_{\tau}\otimes E_{\chi})^{*}$,
i.e.,
\begin{equation*}
H^{\tau,\chi}_{t}\in C^{\infty}(X,(E_{\tau}
\otimes E_{\chi})\otimes(E_{\tau}\otimes E_{\chi})^{*}).
\end{equation*}
It can be expressed as
\begin{equation*}
H^{\tau,\chi}_{t}(x,y)=\sum_{\gamma \in \Gamma}\widetilde{H}^{\tau}_{t}(\widetilde{x},\gamma\widetilde{y})\otimes \chi(\gamma),
\end{equation*}
where $\widetilde{x},\widetilde{y}$ are lifts of $x,y$ to $\widetilde{X}$, respectively, and
$\widetilde{H}^{\tau}_{t}$ is the kernel of $e^{-t\widetilde{\Delta}_{\tau}}$.
By \cite[Proposition 4.1]{M1}, we have the following proposition.
\begin{prop}
Let $E_{\chi}$ be a flat vector bundle over $X=\Gamma\backslash \widetilde{X}$ associated with a finite dimensional complex
representation $\chi\colon\Gamma\rightarrow \GL(V_{\chi})$ of $\Gamma$. Let $\Delta_{\tau,\chi}^{\sharp}$ be the twisted Bochner-Laplace operator
acting on $C^{\infty}(X,E_{\tau}\otimes E_{\chi})$.
Then,
\begin{align}
\Tr(e^{-t\Delta_{\tau,\chi}^{\sharp}})=\notag&\sum_{\lambda_{j}\in\spec(\Delta_{\tau,\chi}^\sharp)}m(\lambda_{j})e^{-t\lambda_{j}}\\
&=\sum_{\gamma \in \Gamma}\tr \chi(\gamma)\int_{\Gamma\backslash G}\tr H_{t}^{\tau}(g^{-1}\gamma g)d\dot{g}.
\end{align}
\end{prop}
We proceed further to obtain a more refined version of the trace formula, expanding the above identity into orbital integrals.
We group the summands according to the conjugacy classes $[\gamma]$ of $\Gamma$, and
we write the contribution of the conjugacy class of the identity element $e$ separately to get
\begin{align}
\Tr(e^{-t\Delta_{\tau,\chi}^{\sharp}})=\notag&\dim(V_{\chi})\Vol(X)\tr H_{t}^{\tau}(e)\\
&+\sum_{[\gamma]\neq e}\tr\chi(\gamma)\Vol(\Gamma_{\gamma}\backslash G_{\gamma})\int_{G_{\gamma}\backslash G} \tr H_{t}^{\tau}(g^{-1}\gamma g)d\dot{g},
\end{align}
where $\Gamma_{\gamma}$ and $G_{\gamma}$ are the centralizers of $\gamma$ in $\Gamma$ and $G$, respectively.
We are interested in the zeta functions associated with the geodesic flow
on the bundle $E(\sigma,\chi)=\Gamma\backslash(G\times_{\sigma\otimes\chi}(V_\sigma\otimes V_\chi))\rightarrow S(X)$,
as it is explained in section 3.
Let $R(M)^{+}$ and $R(M)^{-}$ be the subspaces of the elements of $R(M)$ that are invariant, respectively not invariant, under the action of the Weyl group $W_{A}$.
More precisely, since the order of the Weyl group $W_{A}$ is two, there is an eigenspace decomposition of $R(M)$ into $R(M)^{+}$ and $R(M)^{-}$.
The subspaces $R(M)^{\pm}$ correspond to the $(\pm1)$-eigenspaces, respectively.
\begin{prop}
\begin{enumerate}
\item The map $i^{*}$ is a bijection between $R(K)$ and $R(M)^{+}$.
\item If $\sigma\in R(M)^{-}$, then there exists a unique element $\tau(\sigma)\in \widehat{K}$,
with highest weight $\nu_{\tau}=\big((\nu_{1}-\frac{1}{2})e_{1},\ldots,(\nu_{n}-\frac{1}{2})e_{n}\big)$ and
$\nu_{n}(\sigma)>0$,
such that
\begin{equation}
\sigma-w\sigma=(s^{+}-s^{-})i^{*}(\tau(\sigma)),
\end{equation}
where $s^{+}, s^{-}$ are the half spin representations of $M$.
More precisely, if $s$ is the spin representation of $K$, then $\tau(\sigma)\otimes s$ splits into
\begin{equation}
\tau(\sigma)\otimes s=\tau^{+}(\sigma)\oplus\tau^{-}(\sigma)
\end{equation}
such that
\begin{equation}
\sigma+w\sigma=i^{*}(\tau^{+}(\sigma)-\tau^{-}(\sigma)),
\end{equation}
with
\begin{equation}
\tau^{\pm}(\sigma)=\sum_{\substack{\mu\in\{0,1\}^{n}\\(-1)^{c(\mu)}=\pm 1}}(-1)^{c(\mu)}\tau_{\nu_{\mu}}(\sigma),
\end{equation}
where $c(\mu):=\sharp\{i\colon\mu_{i}=1\}$, $\tau_{\nu_{\mu}}(\sigma)$ is the representation
of $K$ with highest weight $\nu_{\mu}(\sigma)=\nu_{\sigma}-\mu$, and $\nu_{\sigma}$
is given by \emph{(2.5)}.
\end{enumerate}
\end{prop}
\begin{proof}
See \cite[Proposition 1.1]{BO}.
\end{proof}
Let $\tau_{\sigma}\in R(K)$ with $\tau_{\sigma}:=\tau^{+}(\sigma)-\tau^{-}(\sigma)$.
By Proposition 5.4, there exist unique integers
$m_{\tau}(\sigma)\in\{-1,0,1\}$, which are equal to zero except for finitely many $\tau\in \widehat{K}$,
such that
\begin{equation}
\sigma=\sum_{\tau\in\widehat{K}}m_{\tau}(\sigma)i^{*}(\tau).
\end{equation}
Then, the locally homogeneous vector bundle $E(\sigma)$ associated with $\sigma$ is of the form
\begin{equation}
E(\sigma)=\bigoplus_{\substack{\tau\in\widehat{K}\\m_{\tau}(\sigma)\neq 0}}E_{\tau},
\end{equation}
where $E_{\tau}$ is the locally homogeneous vector bundle associated with $\tau\in\widehat{K}$.
Therefore, the vector bundle $E(\sigma)$ has a grading $E(\sigma)=E(\sigma)^{+}\oplus E(\sigma)^{-}$.
This grading is defined exactly by the positive or negative sign of $m_{\tau}(\sigma)$.
Let $\widetilde{E}(\sigma)$ be the pullback of $E(\sigma)$ to $\widetilde{X}$.
Then,
\begin{equation*}
\widetilde{E}(\sigma)=\bigoplus_{\substack{\tau\in\widehat{K}\\m_{\tau}(\sigma)\neq0}}\widetilde{E}_{\tau}.
\end{equation*}
We consider now the lift $\widetilde{\Delta}_{\tau}$ of the Bochner-Laplace operator ${\Delta_{\tau}}$ associated to $\tau\in\widehat{K}$ to $\widetilde{X}$, acting on smooth sections of $\widetilde{E}_{\tau}$.
Recall equation (5.4):
\begin{equation*}
\widetilde{\Delta}_{\tau}=-R(\Omega)+\lambda_{\tau}\Id.
\end{equation*}
We put
\begin{equation}
\widetilde{A}_{\tau}:=\widetilde{\Delta}_{\tau}-\lambda_{\tau}\Id.
\end{equation}
Hence, by (5.4) the operator $\widetilde{A}_{\tau}$ acts like $-R(\Omega)$ on the space of smooth sections of $\widetilde{E}_{\tau}$.
It is an elliptic formally self-adjoint operator of second order. By \cite{Ch}, it is an essentially self-adjoint operator.
Its self-adjoint extension will be also denoted by $\widetilde{A}_{\tau}$.
We consider the lift $\widetilde{\Delta}^{\sharp}_{\tau,\chi}$ of the twisted Bochner-Laplace operator to the universal covering $\widetilde{X}$.
We get then the operator $\widetilde{A}^{\sharp}_{\tau,\chi}$ acting on the space $C^{\infty}(\widetilde{X},\widetilde{E}_{\tau}\otimes\widetilde{E}_{\chi})$.
Since $\widetilde{A}^{\sharp}_{\tau,\chi}$ is induced by the operator $\widetilde{\Delta}^{\sharp}_{\tau,\chi}$,
it can be locally described as
\begin{equation}
\widetilde{A}^{\sharp}_{\tau,\chi}=\widetilde{A}_{\tau}\otimes\Id_{V_{\chi}}.
\end{equation}
We pass to $X=\Gamma\backslash \widetilde{X}$.
We put
\begin{equation}
c(\sigma):=-\lvert \rho \rvert^{2}-\lvert\rho_{m}\rvert^{2}+\lvert \nu_{\sigma}+\rho_{m}\rvert^{2},
\end{equation}
where $\nu_{\sigma}$ is the highest weight of $\sigma\in\widehat{M}$ as in (2.5)
and $\rho,\rho_{m}$ are defined by (2.3) and (2.4), respectively.
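As a consistency check, for the trivial representation $\sigma$ one has $\nu_{\sigma}=0$, so that
\begin{equation*}
c(\sigma)=-\lvert \rho \rvert^{2}-\lvert\rho_{m}\rvert^{2}+\lvert\rho_{m}\rvert^{2}=-\lvert \rho \rvert^{2},
\end{equation*}
in accordance with the Casimir eigenvalue $\pi_{\sigma,\lambda}(\Omega)=-\lambda^{2}-\lvert\rho\rvert^{2}$ on the spherical principal series, cf. (5.34) below.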
We define the operator $A_{\chi}^{\sharp}(\sigma)$ acting on $C^{\infty}(X,E(\sigma)\otimes E_{\chi})$ by
\begin{equation}
A_{\chi}^{\sharp}(\sigma):=\bigoplus_{m_{\tau}(\sigma)\neq 0}A_{\tau,\chi}^{\sharp}+c(\sigma).
\end{equation}
Obviously, $A_{\chi}^{\sharp}(\sigma)$ preserves the grading.
It is an elliptic operator of order two. However, the situation is now different,
because it is no longer a self-adjoint operator. This non-self-adjointness is inherited from the operators $A^{\sharp}_{\tau,\chi}$.
We deal first with the heat semi-group $e^{-tA_{\tau,\chi}^{\sharp}}$ generated by the operator $-A_{\tau,\chi}^{\sharp}$.
Since $A_{\tau,\chi}^{\sharp}$ is induced by $\Delta^{\sharp}_{\tau,\chi}$, it is an integral operator with smooth kernel.
By Proposition 5.3, its trace is given by
\begin{equation}
\Tr(e^{-tA_{\tau,\chi}^{\sharp}})=\sum_{\gamma \in \Gamma}\tr \chi(\gamma)\int_{\Gamma\backslash G}\tr Q_{t}^{\tau}(g^{-1}\gamma g)d\dot{g},
\end{equation}
where $Q^{\tau}_{t}\in (\mathcal{C}^{q}(G)\otimes \End(V_{\tau}))^{K\times K}$ is the kernel associated to the operator
$e^{-t\widetilde{A}_{\tau}}$.
We put
\begin{equation*}
q_{t}^{\tau}(g):=\tr Q_{t}^{\tau}(g),\quad g\in G.
\end{equation*}
We set
\begin{align}
q_{t}^{\sigma}=&\sum_{\tau \in \widehat{K}}m_{\tau}(\sigma) q_{t}^{\tau},\\
K(t;\sigma)=&\sum_{\tau \in \widehat{K}}m_{\tau}(\sigma) \Tr(e^{-tA^{\sharp}_{\tau,\chi}}).
\end{align}
We use now the trace formula from \cite[p.177-178]{Wa}, which expands (5.27). We have
\begin{align}
K(t;\sigma)=\notag&\dim(V_{\chi})\Vol(X)q_{t}^{\sigma}(e)\\
&+\frac{1}{2\pi}\sum_{[\gamma]\neq e} \frac{l(\gamma)\tr(\chi(\gamma))}{n_{\Gamma}(\gamma)D(\gamma)}\sum_{\sigma'\in\widehat{M}}
\overline{\tr\sigma'(m_{\gamma})}\int_{\mathbb{R}}\Theta_{\sigma',\lambda}(q_{t}^{\sigma})e^{-il(\gamma)\lambda}d\lambda.
\end{align}
We continue analysing the trace formula above in terms of characters. For the identity contribution we have
\begin{equation}
q_{t}^{\sigma}(e)=\sum_{\sigma'\in\widehat{M}}\int_{\mathbb{R}}\Theta_{\sigma',\lambda}(q_{t}^{\sigma})P_{\sigma'}(i\lambda)d\lambda,
\end{equation}
where $P_{\sigma'}(i\lambda)$ denotes the Plancherel polynomial, defined in section 2.
By equation (5.28) we get
\begin{equation}
\Theta_{\sigma,\lambda}(q_{t}^{\sigma})=
\sum_{\tau \in \widehat{K}}m_{\tau}(\sigma) \Theta_{\sigma,\lambda}(q_{t}^{\tau}).
\end{equation}
By Proposition 5.2,
\begin{equation}
\Theta_{\sigma,\lambda}(q_{t}^{\tau})=e^{-t(-\pi_{\sigma,\lambda}(\Omega))}[\tau\rvert_{M}:\sigma].
\end{equation}
The term $\lambda_{\tau}$ does not occur here, since our operator $A^{\sharp}_{\tau,\chi}$ is induced by the operator $A_{\tau}=\Delta_{\tau}-\lambda_{\tau}\Id$.
We recall also
\begin{equation}
\pi_{\sigma,\lambda}(\Omega)=-\lambda^{2}+c(\sigma).
\end{equation}
This is proved in \cite[p.48]{Ar}.\\
Combining equations (5.32), (5.33) and (5.34) we get
\begin{equation}
\Theta_{\sigma,\lambda}(q_{t}^{\sigma})=\sum_{\tau \in \widehat{K}}m_{\tau}(\sigma)
e^{-t(\lambda^{2}-c(\sigma))}[\tau\rvert_{M}:\sigma].
\end{equation}
Equivalently, for $\sigma,\sigma'\in \widehat{M}$
\begin{equation*}
\Theta_{\sigma',\lambda}(q_{t}^{\sigma})=e^{tc(\sigma)}e^{-t\lambda^{2}}\bigg[\sum_{\tau \in \widehat{K}}m_{\tau}(\sigma)i^{*}(\tau):\sigma'\bigg].
\end{equation*}
Hence, by (5.21), for $\sigma'=\sigma$,
\begin{align}
\Theta_{\sigma,\lambda}(q_{t}^{\sigma})=e^{tc(\sigma)}e^{-t\lambda^{2}}.
\end{align}
If we put everything together and insert (5.31) and (5.36) in (5.30), we obtain
\begin{align*}
K(t;\sigma)=&e^{tc(\sigma)}\bigg(\dim(V_{\chi})\Vol(X)\int_{\mathbb{R}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda\\
&+\sum_{[\gamma]\neq[e]} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)\frac{e^{-l(\gamma)^{2}/4t}}{(4\pi t)^{1/2}}\bigg),
\end{align*}
where
\begin{equation}
L_{sym}(\gamma;\sigma)= \frac{\tr(\chi(\gamma)\otimes\sigma(m_{\gamma}))e^{-|\rho|l(\gamma)}}{\det(\Id-\Ad(m_{\gamma}a_{\gamma})_{\overline{\mathfrak{n}}})}.
\end{equation}
\newline
By the definition of the operator $A_{\chi}^{\sharp}(\sigma)$ in (5.26), we get the following theorem.
\begin{thm}
For every $\sigma \in \widehat{M}$,
\begin{align} \label{f:trace formula 1}
\Tr(e^{-tA_{\chi}^{\sharp}(\sigma)})=&\dim(V_{\chi})\Vol(X)\int_{\mathbb{R}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda\\\notag
&+\sum_{[\gamma]\neq e} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
\frac{e^{-l(\gamma)^{2}/4t}}{(4\pi t)^{1/2}},
\end{align}
where $L_{sym}(\gamma;\sigma)$ is as in \emph{(5.37)}.
\end{thm}
\begin{center}
\section{\textnormal{Meromorphic continuation of the zeta functions}}
\end{center}
Let $A$ be a closed linear operator, defined on a dense subspace $\mathcal{D}(A)$ of a Hilbert space $\mathcal{H}$. Let $a\in\mathbb{C}-\spec(-A)$.
We set $R(a):=(A+a\Id)^{-1}=(A+a)^{-1}$.
Then, the resolvent identity states
\begin{equation*}
R(a)-R(b)=(b-a)R(a)R(b),
\end{equation*}
for all $a,b\in\mathbb{C}-\spec(-A)$.
The generalized resolvent identity is described in the following lemma.
\begin{lem}
Let $s_{1},\ldots,s_{N}\in\mathbb{C}-\spec(-A)$, $N\in\mathbb{N}$, such that $s_{i}\neq s_{j}$
for all $i\neq j$.
Then,
\begin{equation}
\label{generalized resolvent}
\prod_{i=1}^{N}R(s_{i})=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}-s_{i}}\bigg)R(s_{i}).
\end{equation}
\end{lem}
\begin{proof}
This is proved in \cite[Lemma 3.5]{BO}.
\end{proof}
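For $N=2$, (6.1) is precisely the resolvent identity: the right-hand side reads
\begin{equation*}
\frac{1}{s_{2}-s_{1}}R(s_{1})+\frac{1}{s_{1}-s_{2}}R(s_{2})=\frac{R(s_{1})-R(s_{2})}{s_{2}-s_{1}}=R(s_{1})R(s_{2}).
\end{equation*}
The general case follows from this by induction.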
We will use also the following lemmata.
\begin{lem}
Let $s_{1},\ldots,s_{N}\in\mathbb{C}$, $N\in\mathbb{N}$,
such that $s_{i}\neq s_{j}$ for all $i\neq j$ and let $l=0,1,\ldots, N-2$. Then, we have
\begin{equation}
\sum_{i=1}^{N}s_{i}^{2l}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)=0.
\end{equation}
\end{lem}
\begin{proof}
This follows from \cite[Lemma 3.6]{BO}, applied to $s_{i}^{2}$.
\end{proof}
\begin{lem}
Let $s_{1},\ldots,s_{N}\in\mathbb{C}$, $N\in\mathbb{N}$,
such that $s_{i}\neq s_{j}$ for all $i\neq j$.
Then,
\begin{equation}
\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg) e^{-ts_{i}^{2}}=O(t^{N-1}),
\end{equation}
as $t\rightarrow 0^{+}$.
\end{lem}
\begin{proof}
We will use the Taylor expansion of the exponential function $e^{-ts_{i}^{2}}$.\\
We have as $t\rightarrow 0^{+}$
\begin{align*}
\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg) e^{-ts_{i}^{2}}&=\sum_{k=0}^{N-2}\sum_{i=1}^{N}\frac{(-t)^k}{k!}s_{i}^{2k}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)+O(t^{N-1})\\
&=\sum_{k=0}^{N-2}\frac{(-t)^k}{k!}\sum_{i=1}^{N} s_{i}^{2k}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)+O(t^{N-1})=O(t^{N-1}),
\end{align*}
where in the last equality we used Lemma 6.2.
\end{proof}
\begin{lem}
Let $s_{1},\ldots,s_{N}\in \mathbb{C}$ with $s_{i}^{2}\neq s_{j}^{2}$ for $i\neq j$, such that $\Re(s_{i}^{2})>0$ for all $i=1,\ldots, N$. Then, the following integral
\begin{equation}
\int_{0}^{\infty}\int_{\mathbb{R}}\sum_{k=1}^{N}
\bigg(\prod_{\substack{j=1\\ j\neq k}}^{N}\frac{1}{s_{j}^{2}-s_{k}^{2}}\bigg)
e^{-t(s_{k}^{2}+\lambda^{2})} P_{\sigma}(i\lambda)d\lambda dt
\end{equation}
converges absolutely.
\end{lem}
\begin{proof}
We have as $t\rightarrow\infty$,
\begin{equation}
\int_{\mathbb{R}}\bigg|\sum_{k=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq k}}^{N}\frac{1}{s_{j}^{2}-
s_{k}^{2}}\bigg) e^{-t(s_{k}^{2}+\lambda^{2})} P_{\sigma}(i\lambda)\bigg| d\lambda=O(e^{-t\epsilon}),
\end{equation}
for some $\epsilon> 0$.\\
We use now the fact that $P_{\sigma}(i\lambda)$ is an even polynomial of degree $2n$ (\cite[p.264-265]{Mi2}).
If we make the change of variables $\lambda'=\lambda \sqrt{t}$, we get as $t\rightarrow 0^{+}$,
\begin{align}
\int_{\mathbb{R}}\left|e^{-t\lambda^{2}}P_{\sigma}(i\lambda)\right| d\lambda =O(t^{-d/2}).
\end{align}
Hence, if we combine (6.3) and (6.6) we have that as $t\rightarrow 0^{+}$,
\begin{equation}
\int_{\mathbb{R}}\bigg|\sum_{k=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq k}}^{N}\frac{1}{s_{j}^{2}-s_{k}^{2}}
\bigg) e^{-t(s_{k}^{2}+\lambda^{2})} P_{\sigma}(i\lambda)\bigg| d\lambda=O(t^{-d/2+N-1}).
\end{equation}
The assertion follows from (6.5) and (6.7).
\end{proof}
Let $N\in\mathbb{N}$. Let $s_{i},i=1,\ldots,N$ be complex numbers such that $s_{i}\in\mathbb{C}-\spec(-{A^{\sharp}_{\chi}(\sigma)})$.
We consider the resolvent operator
$
R(s_{i}^{2})=({A^{\sharp}_{\chi}(\sigma)}+s_{i}^{2})^{-1}.
$
We want to obtain the trace class property of the operator
$
\prod_{i=1}^{N}R(s_{i}^{2}).
$
In order to obtain this property, we take $N\in\mathbb{N}$ sufficiently large, namely $N>\frac{d}{2}$, so that
\begin{equation}
\Tr\Big(\prod_{i=1}^{N}R(s_{i}^{2})\Big)<\infty.
\end{equation}
We denote the space of pseudodifferential operators of order $k$ by $\psi DO^{k}$.
To prove the trace class property of the operators above, we
observe at first that $\prod_{i=1}^{N}R(s_{i}^{2})\in\psi DO^{-2N}$.
Let $\Delta$ be the Bochner-Laplace operator with respect
to some metric, acting on $C^{\infty}(X, E_{\tau_{\sigma}}\otimes E_{\chi})$.
Then, $\Delta$ is a second-order elliptic differential operator,
which is formally self-adjoint and non-negative, i.e.,
$\Delta\geq 0$. Then, by Weyl's law, we have that
for $N>\frac{d}{2}$,
$
(\Delta+\Id)^{-N}
$
is a trace class operator.
Moreover,
$
B:=(\Delta+\Id)^{N}\prod_{i=1}^{N}R(s_{i}^{2})
$
is a $\psi DO$ of order zero.
Hence, it defines a bounded operator in $L^{2}(X, E_{\tau_{\sigma}}\otimes E_{\chi})$.\\
Thus,
$
\prod_{i=1}^{N}R(s_{i}^{2})=(\Delta+\Id)^{-N}B
$
is a trace class operator.
We recall here the following expressions of the resolvents. Let $s_{1},\ldots,s_{N}\in\mathbb{C}$ such that
$\Re(s_{i}^{2})>-c$, for all $i=1,\ldots,N$, where $c$ is a real number such that $\spec\big(A_{\chi}^{\sharp}(\sigma)\big)\subset
\{z\in\mathbb{C}\colon \Re(z)>c\}$.
Then,
\begin{align}
(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}&=\int_{0}^{\infty}e^{-ts_{i}^{2}}e^{-t{A_{\chi}^{\sharp}(\sigma)}}dt.
\end{align}
Let $N\in\mathbb{N}$ with $N>d/2$.
Let $s_{1},\ldots,s_{N}\in\mathbb{C}$ with
$s_{i}\neq s_{j}$ for all $i\neq j$ such that
$\Re(s_{i}^{2})>-C$, for all $i=1,\ldots,N$, where $C$ is a real number such that $\spec\big(A_{\chi}^{\sharp}(\sigma)\big)\subset
\{z\in\mathbb{C}\colon \Re(z)>C\}$.\\
By Lemma 6.1 and equation (6.9), we get
\begin{align*}
\prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}=
\int_{0}^{\infty}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)e^{-ts_{i}^{2}}e^{-t{A_{\chi}^{\sharp}(\sigma)}}dt.\\
\end{align*}
Then,
\begin{align*}
\Tr \prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}=
\int_{0}^{\infty}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)e^{-ts_{i}^{2}}\Tr e^{-t{A_{\chi}^{\sharp}(\sigma)}}dt.\\
\end{align*}
We insert now the right-hand side of the trace formula (5.38) for the operator $A_{\chi}^{\sharp}(\sigma)$ and get
\begin{align*}
\Tr \prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+&s_{i}^{2})^{-1}=\int_{0}^{\infty}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)e^{-ts_{i}^{2}}\\
&\bigg\{\dim(V_{\chi})\Vol(X)\int_{\mathbb{R}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda
+\sum_{[\gamma]\neq e} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
\frac{e^{-l(\gamma)^{{2}}/(4t)}}{(4\pi t)^{1/2}}\bigg\}dt.
\end{align*}
Hence,
\begin{align}
\Tr \prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}\notag&=\dim(V_{\chi})\Vol(X)\int_{0}^{\infty}\int_{\mathbb{R}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)
e^{-ts_{i}^{2}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda dt\\
&+\sum_{i=1}^{N}\int_{0}^{\infty}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)e^{-ts_{i}^{2}}\bigg\{\sum_{[\gamma]\neq[e]} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
\frac{e^{-l(\gamma)^{2}/(4t)}}{(4\pi t)^{1/2}}\bigg\}dt.
\end{align}
The first sum in the right-hand side of (6.10), which involves the double integral, can be explicitly calculated.
We set
\begin{equation*}
I:=\int_{0}^{\infty}\int_{\mathbb{R}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)
e^{-ts_{i}^{2}}e^{-t\lambda^{2}}P_{\sigma}(i\lambda)d\lambda dt.
\end{equation*}
By Lemma 6.4, we can interchange the order of the integration and get
\begin{align*}
I=\int_{\mathbb{R}}\int_{0}^{\infty}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)
e^{-t(s_{i}^{2}+\lambda^{2})}P_{\sigma}(i\lambda)dtd\lambda.\\
\end{align*}
By \cite[Lemma 3.5]{BO} and since $P_{\sigma}$ is an even polynomial, we obtain the following convergent integral
\begin{equation*}
I=\int_{\mathbb{R}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)
\bigg(\frac{1}{\lambda^{2}+s_{i}^{2}}\bigg)P_{\sigma}(i\lambda)d\lambda.
\end{equation*}
Using the Cauchy integral formula we have
\begin{align}
I=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)
\frac{\pi}{s_{i}}P_{\sigma}(s_{i}).
\end{align}
For the second sum in the right-hand side of (6.10) we use the formula (see \cite[p.146,(27)]{Er})
\begin{equation}
\int_{0}^{\infty}e^{-ts^{2}} \frac{e^{-l(\gamma)^{{2}}/(4t)}}{(4\pi t)^{1/2}}dt=\frac{1}{2s}e^{-sl(\gamma)}.
\end{equation}
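This Laplace transform can also be verified directly; we sketch the standard argument. Set $f(b):=\int_{0}^{\infty}e^{-ts^{2}}\frac{e^{-b^{2}/4t}}{(4\pi t)^{1/2}}\,dt$ for $b>0$ and $\Re(s)>0$. The heat equation $\partial_{t}\big((4\pi t)^{-1/2}e^{-b^{2}/4t}\big)=\partial_{b}^{2}\big((4\pi t)^{-1/2}e^{-b^{2}/4t}\big)$ together with an integration by parts in $t$ yields $f''(b)=s^{2}f(b)$. Since $f$ is bounded for $b>0$ and
\begin{equation*}
f(0^{+})=\int_{0}^{\infty}e^{-ts^{2}}\frac{dt}{(4\pi t)^{1/2}}=\frac{1}{2s},
\end{equation*}
we get $f(b)=\frac{1}{2s}e^{-sb}$, which is (6.12) with $b=l(\gamma)$.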
Hence, equation (6.10) becomes by (6.11) and (6.12)
\begin{align}
\Tr \prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}\notag&=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{\pi}{s_{i}} \dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\
&+\sum_{i=1}^{N}\frac{1}{2s_{i}}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\sum_{[\gamma]\neq e} \frac{l(\gamma)}{n_{\Gamma}(\gamma)}L_{sym}(\gamma;\sigma)
e^{-s_{i}l(\gamma)}.
\end{align}
By Lemma 3.6, the sum over the conjugacy classes $[\gamma]$ of $\Gamma$ in the right-hand side of (6.13) is equal to the
logarithmic derivative $L(s_{i})=\frac{d}{ds}\log(Z(s;\sigma,\chi))\big|_{s=s_{i}}$ of the Selberg zeta function.
Hence,
\begin{align}
\Tr \prod_{i=1}^{N}(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}\notag&=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{\pi}{s_{i}} \dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\
&+\sum_{i=1}^{N}\frac{1}{2s_{i}}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)L(s_{i}).
\end{align}
In Theorem 6.5 below, we choose the branch of the square root $\sqrt{t_{k}}$
of a complex number $t_{k}$ to have positive real part.
If $t_{k}$ is a negative real number, we choose the branch
with positive imaginary part.
\begin{thm}
The Selberg zeta function $Z(s;\sigma,\chi)$ admits a meromorphic continuation to the whole complex plane $\mathbb{C}$. The set of the singularities
equals $\{s_{k}^{\pm}=\pm i \sqrt{t_{k}}:t_{k}\in \spec(A^{\sharp}_{\chi}(\sigma)), k\in\mathbb{N}\}$.
The orders of the singularities are equal to $m(t_{k})$, where $m(t_{k})\in\mathbb{N}$ denotes the algebraic multiplicity of the eigenvalue $t_{k}$.
For $t_{0}=0$, the order of the singularity $s_{0}$ is equal to $2m(0)$.
\end{thm}
\begin{proof}
By (6.1) and (6.14) we get
\begin{align*}
\Tr\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)(A_{\chi}^{\sharp}(\sigma)+s_{i}^{2})^{-1}&=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{\pi}{s_{i}} \dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\\notag
&+\sum_{i=1}^{N}\frac{1}{2s_{i}}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)L(s_{i}).
\end{align*}
Equivalently,
\begin{align}
\label{eq selberg mer}
\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{1}{2s_{i}}L(s_{i})\notag=&-\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{\pi}{s_{i}}\dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\
&+\sum_{t_{k}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{m(t_{k})}{t_{k}+s_{i}^{2}}.
\end{align}
If we multiply equation (6.15) by $2s_{1}$, we obtain
\begin{align}
\notag\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{s_{1}}{s_{i}}L(s_{i})=&-\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{2s_{1}}{s_{i}}\pi\dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\
&+\sum_{t_{k}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)2s_{1}\frac{m(t_{k})}{t_{k}+s_{i}^{2}}.
\end{align}
Let $ \varPsi(s_{1},\ldots,s_{N})$ be the function of the complex numbers $s_{1},\ldots,s_{N}$, defined by
\begin{equation*}
\varPsi(s_{1},\ldots,s_{N}):=\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{s_{1}}{s_{i}}L(s_{i}).
\end{equation*}
We fix the complex numbers $s_{i}, i=2,\ldots,N$ with $s_{i}\neq s_{j}$ for $i,j=2,\ldots, N$
and let the complex number $s=s_{1}$ vary.\\
Put
\begin{equation*}
\varPsi(s,\ldots,s_{N})=\varPsi(s).
\end{equation*}
Then, equation (6.16) becomes
\begin{align}
\notag\varPsi(s)=&-\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)\frac{2s}{s_{i}}\pi\dim(V_{\chi})\Vol(X)P_{\sigma}(s_{i})\\
&+\sum_{t_{k}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)2s\frac{m(t_{k})}{t_{k}+s_{i}^{2}},
\end{align}
where $s_{1}=s$.
The term that contains the logarithmic derivative $L(s)$ in $\varPsi(s)$ is of the form
\begin{equation*}
\bigg(\prod_{j=2}^{N}\frac{1}{s_{j}^{2}-s^{2}}\bigg)L(s).
\end{equation*}
The term of
\begin{equation*}
\sum_{t_{k}}\sum_{i=1}^{N}\bigg(\prod_{\substack{j=1\\ j\neq i}}^{N}\frac{1}{s_{j}^{2}-s_{i}^{2}}\bigg)2s\frac{m(t_{k})}{t_{k}+s_{i}^{2}},
\end{equation*}
which is singular at $\pm i\sqrt{t_{k}}$, $k\in\mathbb{N}$, is
\begin{equation*}
\bigg(\prod_{j=2}^{N}\frac{1}{s_{j}^{2}-s^{2}}\bigg)2s\frac{m(t_{k})}{t_{k}+s^{2}}.
\end{equation*}
We multiply both sides of the
equality (6.17) by
\begin{equation*}
\prod_{j=2}^N(s_j^2-s^2).
\end{equation*}
Then, the residues of $L(s)$ at the points $\pm i\sqrt{t_{k}}$
are $m(t_{k})$, for $k\neq0$, and $2m(0)$ for $k=0$.
By (3.20), $L(s)$ decreases exponentially as $\Re(s)\rightarrow \infty$. Hence, the integral
\begin{equation*}
\int_{s}^{\infty}L(w)dw
\end{equation*}
over a path connecting $s$ and infinity is well defined and
\begin{equation}
\log Z(s;\sigma,\chi)=-\int_{s}^{\infty}L(w)dw.
\end{equation}
The integral above depends on the choice of the path, because $L(s)$
has singularities.
Since the residues of the singularities
are integers, it follows that the
exponential of the integral on the right-hand side of (6.18)
is independent of the choice of the path. The meromorphic continuation of the
Selberg zeta function $Z(s;\sigma,\chi)$ to the whole complex plane follows.
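To see this independence explicitly, let $\Gamma_{1}$ and $\Gamma_{2}$ be two admissible paths joining $s$ and infinity; by the residue theorem, the two integrals differ by $2\pi i$ times an integer combination of the residues of $L$, so that for some $n\in\mathbb{Z}$,
\begin{equation*}
\exp\Big(-\int_{\Gamma_{1}}L(w)\,dw\Big)
=\exp\Big(-\int_{\Gamma_{2}}L(w)\,dw-2\pi i n\Big)
=\exp\Big(-\int_{\Gamma_{2}}L(w)\,dw\Big).
\end{equation*}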
\end{proof}
In view of the previous results, we will use the meromorphic continuation of the Selberg zeta function to obtain
the meromorphic continuation of the Ruelle zeta function to the whole complex plane $\mathbb{C}$.
Following the analysis in \cite[p. 93-94]{BO}, we
consider the identification $\mathfrak{a}_{\mathbb{C}}^{*}\cong \mathbb{C}\ni \lambda$.
Let $\alpha>0$ be the unique positive
root of the system $(\mathfrak{g},\mathfrak{a})$. Let $\lambda\colon A\to \mathbb{C}^\times$ be the character,
defined by $\lambda(a)=e^{\alpha(\log a)}$.
Let $\mathfrak{n}_{\mathbb{C}}$ be the complexification of the Lie algebra $\mathfrak{n}$.
Let $\nu_{p}:=\Lambda^{p}\Ad_{\mathfrak{n}_{\mathbb{C}}}(MA)$ be the representation of $MA$ in $\Lambda^{p}\mathfrak{n}_{\mathbb{C}}$ given by the $p$-th exterior power of the adjoint representation:
\begin{equation*}
\nu_{p}:=\Lambda^{p}\Ad_{\mathfrak{n}_{\mathbb{C}}}\colon MA\rightarrow \GL(\Lambda^{p}\mathfrak{n}_{\mathbb{C}}),\quad p=0,1,\ldots,d-1.
\end{equation*}
For $p=0,1,\ldots,d-1$, let $J_{p}\subset\{(\psi_{p},\lambda)\colon\psi_{p}\in\widehat{M},\lambda\in\mathbb{C}\}$ be the subset consisting of all pairs of unitary irreducible representations of $M$
and one dimensional representations of $A$ such that, as $MA$-modules,
the representations $\nu_{p}$ decompose as
\begin{equation*}
\Lambda^{p}\mathfrak{n}_{\mathbb{C}}=\bigoplus_{(\psi_{p},\lambda)\in J_{p}}V_{\psi_{p}}\otimes \mathbb{C}_{\lambda},
\end{equation*}
where $\mathbb{C}_{\lambda}\cong \mathbb{C}$ denotes the representation space of $\lambda$.
For $\sigma\in \widehat{M}$ we define
\begin{equation}
Z_{p}(s;\sigma,\chi):=\prod_{(\psi_{p},\lambda)\in J_{p}}Z(s+\rho-\lambda;\psi_{p}\otimes\sigma,\chi).
\end{equation}
We have then the following theorem, which gives a representation of $R(s;\sigma,\chi)$ as a product of $Z_{p}(s;\sigma,\chi)$ over $p$.
\begin{thm}
Let $\sigma\in\widehat{M}$. Then the Ruelle zeta function has the representation
\begin{equation}
R(s;\sigma,\chi)=\prod_{p=0}^{d-1} Z_{p}(s;\sigma,\chi)^{(-1)^{p}}.
\end{equation}
\end{thm}
\begin{proof}
By (3.14), we have
\begin{equation}
\log Z(s+\rho-\lambda;\psi_{p}\otimes\sigma,\chi)=-\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}
\tr(\chi(\gamma)\otimes\sigma(m_\gamma))\frac{e^{(-s-2\rho+\lambda)l(\gamma)}\tr(\psi_{p}(m_\gamma))}{\det(1-\Ad(m_\gamma a_\gamma)|_{\overline{\mathfrak{n}}})}.
\end{equation}
We use now the fact that
\begin{equation}
{\det(1-\Ad(m_\gamma a_\gamma)|_{\overline{\mathfrak{n}}})}=(-1)^{d-1}a_{\gamma}^{-2\rho}{\det(1-\Ad(m_\gamma a_\gamma)|_{{\mathfrak{n}}})}.
\end{equation}
Hence, if we insert (6.22) in (6.21), we get
\begin{equation}
\log Z(s+\rho-\lambda;\psi_{p}\otimes\sigma,\chi)=(-1)^{d}\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}
\tr(\chi(\gamma)\otimes\sigma(m_\gamma))\frac{e^{-sl(\gamma)} e^{\lambda l(\gamma)}\tr(\psi_{p}(m_\gamma))}{\det(1-\Ad(m_\gamma a_\gamma)|_{{\mathfrak{n}}})}.
\end{equation}
We have
\begin{align}
\log \prod_{p=0}^{d-1}Z_{p}(s;\sigma,\chi)^{(-1)^{p}}&=\sum_{p=0}^{d-1}\log Z_{p}(s;\sigma,\chi)^{(-1)^{p}}\notag\\
&=\sum_{p=0}^{d-1}{(-1)^{p}}\log Z_{p}(s;\sigma,\chi)\notag\\
&=\sum_{p=0}^{d-1}(-1)^{p}\log\prod_{(\psi_{p},\lambda)\in J_{p}}Z(s+\rho-\lambda;\psi_{p}\otimes\sigma,\chi)\notag\\
&=\sum_{p=0}^{d-1}(-1)^{p} \sum_{(\psi_{p},\lambda)\in J_{p}}\log Z(s+\rho-\lambda;\psi_{p}\otimes\sigma,\chi)\notag\\
&=\sum_{p=0}^{d-1}(-1)^{p}\bigg((-1)^{d}\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}\tr(\chi(\gamma)\otimes\sigma(m_\gamma))e^{-sl(\gamma)}\notag\\
&\qquad\times\frac{\sum_{(\psi_{p},\lambda)\in J_{p}} e^{\lambda l(\gamma)}\tr(\psi_{p}(m_\gamma))}{\det(1-\Ad(m_\gamma a_\gamma)|_{{\mathfrak{n}}})}\bigg),
\end{align}
where in the third line we used the definition (6.19), and in the last line, equation (6.23).
We recall now that, for any endomorphism $A$ of a finite dimensional vector space $W$, we have
\begin{equation*}
\det(\Id_{W}-A)=\sum_{p=0}^{\dim W}(-1)^{p}\tr(\Lambda^{p}A).
\end{equation*}
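For instance, if $\dim W=2$ this is the elementary identity
\begin{equation*}
\det(\Id_{W}-A)=1-\tr(A)+\det(A)=\tr(\Lambda^{0}A)-\tr(\Lambda^{1}A)+\tr(\Lambda^{2}A),
\end{equation*}
since $\tr(\Lambda^{0}A)=1$, $\tr(\Lambda^{1}A)=\tr(A)$ and $\tr(\Lambda^{2}A)=\det(A)$.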
If we apply this identity to $\Ad(m_\gamma a_\gamma)|_{{\mathfrak{n}}}$ we get
\begin{equation*}
\sum_{p=0}^{d-1}(-1)^{p} \frac{\sum_{(\psi_{p},\lambda)\in J_{p}} e^{\lambda l(\gamma)}\tr(\psi_{p}(m_\gamma))}{\det(1-\Ad(m_\gamma a_\gamma)|_{{\mathfrak{n}}})}=1.
\end{equation*}
Hence, by (6.24) we have
\begin{align*}
\log \prod_{p=0}^{d-1}Z_{p}(s;\sigma,\chi)^{(-1)^{p}}&=(-1)^{d}\sum_{[\gamma]\neq{e}}\frac{1}{n_{\Gamma}(\gamma)}\tr(\chi(\gamma)\otimes\sigma(m_\gamma)){e^{-sl(\gamma)}}\\
&=\log R(s;\sigma,\chi),
\end{align*}
where in the last line we used equation (3.17) in the proof of Proposition 3.5.
\end{proof}
\begin{thm}
For every $\sigma\in\widehat{M}$, the Ruelle zeta function $R(s;\sigma,\chi)$ admits a meromorphic continuation to the whole complex plane $\mathbb{C}$.
\end{thm}
\begin{proof}
The assertion follows from Theorem 6.5 together with Theorem 6.6.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction} The Raviart-Thomas finite element spaces were
introduced in \cite{RT,T}, and extended to the three-dimensional case by
N\'ed\'elec \cite{N}, to approximate second order elliptic problems in mixed form.
Since the publication of those papers there has been an increasing
interest in the analysis of these spaces and in the approximation
properties of the associated Raviart-Thomas interpolation
operator. This interest has been motivated by the fact that, apart
from the original motivation, these spaces (or rotated versions of
them in the two dimensional case) arise in several interesting applications. For example in
mixed methods for plates (see \cite{BF,BFS,DL}) and
in the numerical approximation of fluid-structure interaction
problems \cite{BDMRS}. Also, it is well known that mixed methods
are related to non-conforming methods \cite{AB,M}; therefore, the
Raviart-Thomas interpolation operator can be useful in some cases
to analyze this kind of method (see for example \cite{AD1}, where
a non-conforming method for the Stokes problem is analyzed).
The original error analysis developed in \cite{N,RT,T} is based on the
so-called regularity assumption on the elements and therefore, the constants
arising in the error estimates in those works depend on the ratio between
outer and inner diameter of the elements. In this way narrow or anisotropic elements,
which are very important in many applications, are excluded.
For the standard Lagrange interpolation it is known, since the pioneering works
\cite{BA,J,Synge} and many generalizations of them (see \cite{Ap} and its references),
that the regularity assumption can be relaxed to a \emph{maximum angle condition} in many cases.
Error estimates for the Raviart-Thomas interpolation under conditions weaker
than the regularity have been proved in several papers. In \cite{AD1} the lowest
order case $k=0$ was considered and optimal order error estimates were proved
under the maximum angle condition for triangles and a suitable generalization
of it for tetrahedra, called \emph{regular vertex property}. This result was extended in \cite{FNP} to prismatic elements and functions from weighted Sobolev spaces.
It is not straightforward to extend the arguments given in \cite{AD1}
to higher order Raviart-Thomas approximations. In \cite{D2} it was proved
that the maximum angle condition is also sufficient to obtain optimal
error estimates for the case $k=1$ and $n=2$ and in \cite{DL3} that result was
generalized to any $k\ge 0$. Also in \cite{DL3}, error estimates for any $k\ge 0$
and $n=3$ were proved assuming the regular vertex property.
The error estimates obtained in \cite{DL3}
require ``maximum regularity''. To be precise, let $\Pi_k{\bf u}$ be
the Raviart-Thomas interpolation of degree $k$ of ${\bf u}$ on a triangle $T$; then, it was proved
in \cite{DL3} that
\begin{equation}
\label{DL3}
\|{\bf u}-\Pi_k{\bf u}\|_{L^2(T)}
\le\frac{C}{\sin\alpha}\,\,h_T^{k+1} |{\bf u}|_{H^{k+1}(T)}
\end{equation}
where we have used the standard notation for Sobolev seminorms,
$\alpha$ and $h_T$ are the maximum angle and the diameter of $T$ respectively,
and the constant $C$ is independent of $T$. However, an estimate like (\ref{DL3}) but
with $k$ replaced by $j<k$, only on the right hand side, cannot be proved by the arguments given in \cite{DL3}
and therefore a different approach is needed. Let us remark that this kind of estimate
is important in many situations. In particular, the lowest order estimate
$$
\|{\bf u}-\Pi_k{\bf u}\|_{L^2(T)}
\le\frac{C}{\sin\alpha}\,\,h_T |{\bf u}|_{H^1(T)}
$$
is fundamental in the error analysis for the scalar variable in mixed
approximations of second order elliptic problems. In particular
the inf-sup condition can be obtained from this estimate (see for example \cite{DR,D3}).
The maximum angle condition was originally introduced for triangles.
For the three dimensional case two different generalizations have been given.
One is the K\v{r}\'{\i}\v{z}ek maximum angle condition
introduced in \cite{K}: the angles between faces and the angles in the faces are bounded
away from $\pi$.
Another possible extension is the regular vertex property
introduced in \cite{AD1}: a family of tetrahedral elements satisfies this
condition if for each element there is at least one vertex such that
the unit vectors in the direction of the edges sharing that vertex
are ``uniformly'' linearly independent,
in the sense that the volume determined by them is uniformly bounded
away from zero.
These two conditions are equivalent in two dimensions but not
in three. Indeed, the K\v{r}\'{\i}\v{z}ek maximum angle condition
allows for more general elements. This can be seen in the following way:
consider the two families of elements
given in Figure \ref{prismas}, where $h_1$, $h_2$ and $h_3$ are arbitrary positive numbers.
Both families satisfy the K\v{r}\'{\i}\v{z}ek condition
but the second family
does not satisfy the regular vertex property.
\begin{figure}[h]
\begin{center}
\label{prismas}
\begin{tabular}{cc}
\epsfxsize 5 cm \epsfbox{Prisma1.eps} & \epsfxsize 5 cm
\epsfbox{Prisma2.eps}
\\ (a) & (b)
\end{tabular}
\\ Figure 1
\end{center}
\end{figure}
Essentially these two families of elements give all possibilities.
Indeed, the family of all elements satisfying the K\v{r}\'{\i}\v{z}ek
condition with a constant $\bar\psi<\pi$ (i.e., angles between
faces and angles in the faces less than or equal to $\bar\psi$)
can be obtained transforming both families in the figure by
``good'' affine transformations (see Theorem \ref{mac} for the
precise meaning of this). This result was obtained in \cite{AD1}
in the proof of Lemma 5.9. For the sake of clarity we will include
this result as a theorem. On the other hand, the family of all
elements satisfying the regular vertex property with a given
constant (see Section 3 for the formal definition of this) is
obtained by transforming in the same way only the first family in
the figure.
Therefore, to obtain general results under the K\v{r}\'{\i}\v{z}ek maximum angle condition
({\sl resp. regular vertex property})
it is enough to prove error estimates for both families ({\sl resp. the first family})
in Figure \ref{prismas} with constants independent of the relations
between $h_1$, $h_2$ and $h_3$.
The error estimates in \cite{DL3} for the general ${\mathcal RT}_k$ were obtained
assuming the regular vertex property and the arguments given in
that paper cannot be extended to treat the more general case of
elements satisfying the K\v{r}\'{\i}\v{z}ek condition. On the other hand,
as we have mentioned above, the arguments in \cite{DL3} cannot
be applied to obtain error estimates for functions in $H^{j+1}(T)^n$
with $j<k$. For these reasons we need to introduce here a different approach.
In this paper we complete the error analysis for the Raviart-Thomas interpolation
of arbitrary order $k\ge 0$.
We develop the analysis in the general case of $L^p$ based norms, generalizing
also in this aspect the results of previous papers.
Our arguments are different from those used in previous papers.
The main point is to prove sharp estimates in reference elements.
Let us explain the idea in the two dimensional case. Consider
the reference triangle $\widehat T$ which has vertices at $(0,0)$, $(1,0)$ and
$(0,1)$. A stability estimate on $\widehat T$ can be used to obtain
the stability in a general triangle by using the Piola transform.
Afterwards, error estimates can be proved combining stability
with polynomial approximation results.
The original proof given in \cite{RT} uses that
$$
\|\Pi_k{\bf u}\|_{L^2(\widehat T)} \le C \|{\bf u}\|_{H^1(\widehat T)}.
$$
In this way, the constant arising in the estimate for a general
element depends on the minimum angle and so the regularity
assumption is needed. The reason for that dependence is that
the complete $H^1$-norm appears on the right hand side.
Therefore, to improve this result one may try to
obtain sharper estimates on $\widehat T$ for each component of $\Pi_k{\bf u}$.
Denote with $u_j$ and $\Pi_{k,j}{\bf u}$, $j=1,2$, the components of
${\bf u}$ and its Raviart-Thomas interpolation respectively and
consider for example $j=1$. Ideally, we would like to have the estimate
$$
\|\Pi_{k,1}{\bf u}\|_{L^2(\widehat T)} \le C \|u_1\|_{H^1(\widehat T)}.
$$
However, an easy computation shows that if, for example,
${\bf u}=(0,x_2^2)$ then, $\Pi_k{\bf u}=\frac13(x_1,x_2)$ and therefore
the above estimate is not true. In other words, even for a right
triangle $\widehat T$, $\Pi_{k,1}{\bf u}$ depends on both components of ${\bf u}$.
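For the lowest order case $k=0$ this can be checked by hand. Write $\Pi_0{\bf u}=(a+cx_1,\,b+cx_2)$ and match the integrals of the normal components on the three edges: on $\{x_2=0\}$ and $\{x_1=0\}$ the normal component of ${\bf u}=(0,x_2^2)$ vanishes, which gives $b=0$ and $a=0$, while on the hypotenuse $\{x_1+x_2=1\}$, with normal $\frac1{\sqrt2}(1,1)$,
$$
\int_0^1\big(a+c(1-x_2)+b+cx_2\big)\,dx_2=\int_0^1 x_2^2\,dx_2,
\qquad\mbox{i.e.,}\quad a+b+c=\frac13,
$$
so that $c=\frac13$ and $\Pi_0{\bf u}=\frac13(x_1,x_2)$, as claimed.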
Now, the question is: what are the essential degrees of freedom
defining $\Pi_{k,1}{\bf u}$?
To answer this question one can try to ``kill'' degrees of freedom
by modifying ${\bf u}$ without changing $\Pi_{k,1}{\bf u}$. A key observation
is that if ${\bf r}=(0,g(x_1))$ then $\Pi_{k,1}{\bf r}=0$ (we will give the proof of this
result for appropriate reference elements in the three dimensional case).
Therefore, if ${\bf v}=(u_1(x_1,x_2),u_2(x_1,x_2)-u_2(x_1,0))$ then,
$\Pi_{k,1}{\bf v}=\Pi_{k,1}{\bf u}$. But the normal component of ${\bf v}$ on the
edge $\ell_2$ contained in the line $\{x_2=0\}$, i.e. $v_2$, vanishes,
and so do all the degrees of freedom defining $\Pi_k$ associated with that edge. Moreover,
if we now modify the second component defining
${\bf w}=(u_1(x_1,x_2),u_2(x_1,x_2)-u_2(x_1,0)-x_2\alpha)$, for some
$\alpha\in {\mathcal P}_{k-1}$, we still have that $w_2$ vanishes on $\ell_2$
and that $\Pi_{k,1}{\bf w}=\Pi_{k,1}{\bf u}$ (because we are modifying ${\bf v}$ by adding
a vector field belonging to the Raviart-Thomas space of order $k$).
But, as we will see, it is possible to choose $\alpha$ in such a way
that the degrees of freedom corresponding to integrals over $\widehat T$ also vanish.
Of course, it will be necessary to estimate some norm of $\alpha$.
We will give the details of the proofs in the three dimensional
case. It is easy to see that the same arguments can be used to complete
the arguments explained above for the two dimensional case.
The new contributions of this paper can be summarized as follows:
\begin{itemize}
\item We prove error estimates under the maximum angle condition
with order $j+1$ if the approximated function is in $W^{j+1,p}(T)^n$,
$n=2,3$, where $0\le j \le k$ and $1\le p \le\infty$.
\item Under the regular vertex property we obtain estimates
of anisotropic type also for general $k\ge 0$ and $1\le p \le\infty$.
We also show that this kind of estimate is not valid
under the maximum angle condition.
\end{itemize}
Let us finally mention that the interpolation error estimates of
anisotropic type are necessary when one wishes to exploit the
independent element sizes $h_1$, $h_2$ and $h_3$ to treat edge
singularities in elliptic problems or layers in singularly perturbed
problems. The dilemma is that such estimates hold, as we show,
only for tetrahedra with the regular vertex property but it seems to
be impossible to fill space by using this type of elements only. An
anisotropic triangular prism (pentahedron) can, for example, be
subdivided into three tetrahedra, from which only two satisfy the
regular vertex property while the third is of the type of the element
at the right hand side of Figure 1. The only known way out so far is
discussed in \cite{FNP}. These authors use pentahedral meshes or
tetrahedral meshes which are obtained by a suitable subdivision of a
pentahedral mesh. Pentahedra based on a regular triangular face
satisfy the regular vertex property by construction. For the
approximation on tetrahedral elements they use a composition of two
interpolation operators in order to avoid the above mentioned
insufficiency with the tetrahedra which do not satisfy the regular
vertex property. This approach is restricted to prismatic domains so far.
The rest of the paper is organized as follows. In Section 2 we introduce notation
and give some preliminary results on the conditions on tetrahedra that
we will work with.
Then, we prove stability in $L^p(T)^3$ for the
Raviart-Thomas interpolation of arbitrary degree for functions in
$W^{1,p}(T)^3$. These stability results are proved in Section 3
for elements satisfying the regular vertex property, and in Section 4
for elements satisfying the maximum angle condition.
The estimates obtained under both hypotheses are essentially different but
the results are sharp. Indeed,
in Section 5 we show that
anisotropic type stability estimates cannot
be obtained for the larger class of elements satisfying the
maximum angle condition.
Finally, in Section 6, we derive
the error estimates from the stability results
and standard approximation arguments.
\setcounter{equation}{0}
\section{Notation and Preliminary Results}
In this section we recall some known results involving geometric
properties of certain degenerate tetrahedra. Most of these results
were proved in \cite{AD1} and \cite{Ap}.
Given a general tetrahedron $T\subset {\rm I}\!{\rm R}^3$, ${\bf p}_0$ will
denote an arbitrary vertex
and, for $1\le i \le 3$, $\ell_i$, with $\|\ell_i\|=1$, will be the directions
of the edges sharing ${\bf p}_0$ and $h_i$ the lengths of those edges.
In other words, $T$ is the convex hull of
$\{{\bf p}_0\}\cup \{{\bf p}_0+h_i\ell_i\}_{1\le i\le 3}$.
We will use the standard notation for Sobolev spaces $W^{k,p}(\Omega)$ of
functions with all their derivatives up to the order $k$ belonging
to $L^p(\Omega)$, denoting by $\|\cdot\|_{W^{k,p}(\Omega)}$ the associated norm.
The same notation will be used for the norm of vector fields
${\bf u}\in W^{k,p}(\Omega)^3$. As it is usual, we use boldface
fonts for vector fields.
With $\mathcal P_k(T)$ we denote the set of polynomials of degree
less than or equal to $k$ defined over $T\subset {\rm I}\!{\rm R}^3$. The
Raviart-Thomas space of order $k$ is defined as
$$
\mathcal{RT}_k=\mathcal P_k(T)^3 + (x_1,x_2,x_3)\mathcal P_k(T),
$$
and for ${\bf u}\in W^{1,p}(T)^3$ the Raviart-Thomas
interpolation of order $k$ is defined as $\Pi_k{\bf u} \in\mathcal{RT}_k$
such that
\begin{eqnarray}
\label{RTcaras} \int_F\Pi_k{\bf u}\cdot{\mathbf n} p_k & = & \int_F{\bf u}\cdot {\mathbf n}
p_k\qquad\forall
p_k\in\mathcal P_k(F),\ \ F\mbox{ face of }T,\\
\label{RTadentro} \int_{T}\Pi_k{\bf u}\cdot{\bf p}_{k-1} & = &
\int_T{\bf u}\cdot{\bf p}_{k-1} \qquad \forall{\bf p}_{k-1}\in\mathcal
P_{k-1}(T)^3.
\end{eqnarray}
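For $k=0$ this is particularly simple: the space
$$
\mathcal{RT}_0=\{(a+dx_1,\,b+dx_2,\,c+dx_3)\,:\,a,b,c,d\in{\rm I}\!{\rm R}\}
$$
is four dimensional, the conditions (\ref{RTadentro}) are vacuous, and $\Pi_0{\bf u}$ is determined by the four face integrals $\int_F{\bf u}\cdot{\mathbf n}$, one for each face of $T$.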
In the rest of the paper the letter $C$ will denote a generic constant that may
change from line to line.
Now we introduce the different conditions on the elements that we will use.
The first one, called {\it ``regular vertex property"} was introduced in \cite{AD1}.
\begin{definition}
A tetrahedron $T$ satisfies the ``regular vertex property" with a
constant $\overline c>0$ (or shortly, \textsl{RVP}$(\bar c)$) if $T$ has a
vertex ${\bf p}_0$, such that if $M$ is the matrix made up with
$\ell_i$, $1\le i\le 3$, as columns, then $|\det M|>\overline c$.
\end{definition}
One can easily check that a regular family of tetrahedra (with the usual
definition of regularity given for example in \cite{C1}) verifies
the regular vertex property. On the other hand, simple examples like that
at the left hand side of
Figure \ref{prismas} show that arbitrarily narrow elements are allowed in
the class given by \textsl{RVP}$(\bar c)$ for a fixed~$\bar c$.
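Indeed, for the element at the left hand side of Figure \ref{prismas} the directions of the edges sharing the origin are ${\bf e}_1$, ${\bf e}_2$ and ${\bf e}_3$, so that the matrix $M$ of the definition is the identity and $|\det M|=1$, whatever the lengths $h_1$, $h_2$ and $h_3$.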
Despite the
presence of anisotropic elements the regular vertex property
arises as a natural geometric condition if one looks for
Raviart-Thomas interpolation error bounds. Indeed, looking at the vertex
placed at ${\bf p}_0$, one can see that the family of elements satisfying
\textsl{RVP}$(\bar c)$ has three normal vectors (those normal to the
faces sharing ${\bf p}_0$) uniformly linearly independent (see
\cite{AD1} for more details). A reasonable condition, since
the moments of the normal components of vector fields are used as
degrees of freedom in the Raviart-Thomas interpolation.
Strikingly, as was shown in \cite{AD1}, the uniform independence
of the normal components can be somehow relaxed and error
estimates valid uniformly for a wider class of elements can still be
obtained for $\Pi_0$ (and for $\Pi_k$ as we
will show). More precisely, we will prove error estimates under
the maximum angle condition defined below, which was introduced
by K\v{r}\'{\i}\v{z}ek in \cite{K} and is weaker than the \textsl{RVP}.
\begin{definition}
A tetrahedron $T$ satisfies the ``maximum angle condition" with a
constant $\bar\psi<\pi$ (or shortly \textsl{MAC}$(\bar\psi)$) if the
maximum angle between faces and the maximum angle inside the faces
are less than or equal to $\bar\psi$.
\end{definition}
Let us mention that the estimates obtained under \textsl{RVP} are stronger
than those valid under \textsl{MAC}. Indeed, in the first case the estimates
are of anisotropic type
(roughly speaking, this means that the estimates are given in terms
of sizes in different directions and their corresponding derivatives).
On the other hand, we will show that this kind of estimate is not valid
for the more general class of elements verifying the \textsl{MAC} condition.
The definition of the maximum angle condition
is strongly geometric. In order to find an equivalent condition, more appropriate
for our further computations, we introduce the following definitions.
In what follows, ${\bf e}_i$ will denote the canonical vectors.
\begin{definition}
A tetrahedron $T$ belongs to the family ${\mathcal F}_1$
if its vertices are at ${\bf 0}$, $h_1{\bf e}_1$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$,
where $h_i>0$ are arbitrary lengths (see Figure 1a).
\end{definition}
\begin{definition}
A tetrahedron $T$ belongs to the family ${\mathcal F}_2$
if its vertices are at ${\bf 0}$, $h_1{\bf e}_1+ h_2 {\bf e}_2$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$,
where $h_i>0$ are arbitrary lengths (see Figure 1b).
\end{definition}
Note that elements in $\mathcal F_2$ satisfy \textsl{MAC}$(\frac\pi2)$ but they
do not fulfill \textsl{RVP}$(\bar c)$ for any $\bar c$.
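This last assertion can be quantified; a direct computation, which we only sketch, shows that the matrices built with the normalized edge directions at the four vertices ${\bf 0}$, $h_1{\bf e}_1+h_2{\bf e}_2$, $h_2{\bf e}_2$ and $h_3{\bf e}_3$ have determinants with absolute values
$$
\frac{h_1}{\sqrt{h_1^2+h_2^2}},\qquad
\frac{h_2h_3}{\sqrt{h_1^2+h_2^2}\sqrt{h_1^2+h_2^2+h_3^2}},\qquad
\frac{h_3}{\sqrt{h_2^2+h_3^2}},\qquad
\frac{h_1h_2}{\sqrt{h_2^2+h_3^2}\sqrt{h_1^2+h_2^2+h_3^2}},
$$
respectively, and all four tend to zero when, for instance, $h_1=h_3=\epsilon\to0$ and $h_2=1$, so no uniform lower bound for $|\det M|$ can hold at any vertex.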
\begin{lemma}
\label{lemmaK}
Let $T$ be a tetrahedron satisfying \textsl{MAC}$(\bar\psi)$. Then we have
\begin{enumerate}
\item If $\alpha\le\beta\le\gamma$ are the angles of an arbitrary
face of $T$, then $\gamma\ge\frac\pi3$ and
$\beta,\gamma\in[(\pi-\bar\psi)/2,\bar\psi]$. \item If ${\bf p}_0$ is an
arbitrary vertex of $T$ and $\chi\le \psi\le\phi$ are the angles
between faces passing through ${\bf p}_0$, then $\phi\ge\frac\pi3$ and
$\psi,\phi\in[(\pi-\bar\psi)/2,\bar\psi]$.
\end{enumerate}
\end{lemma}
\begin{proof} See \cite{K}.
\end{proof}
For a matrix $M\in{\rm I}\!{\rm R}^{3\times 3}$, $\|M\|$ will denote its infinity norm.
The arguments used in the following theorem are essentially contained in the
proof of Theorem 7 in \cite[page 516]{K}. We include some details here
for the sake of clarity.
\begin{theorem}
\label{mac} Let $T$ be a tetrahedron satisfying \textsl{MAC}$(\bar\psi)$.
Then there exists an element $\widetilde T\in\mathcal F_1\cup
\mathcal F_2$ that can be mapped onto $T$ through an affine
transformation $F(\widetilde{\bf x})=M\widetilde{\bf x} + {\bf c}$ with
$\|M\|,\|M^{-1}\|\le C$ where the constant $C$ depends only on
$\bar\psi$.
\end{theorem}
\begin{proof} Given a tetrahedron $T$ we denote with ${\bf p}_i$, $i=0,1,2,3$,
its vertices and use obvious notations for its faces and edges.
Let ${\bf p}_0{\bf p}_1{\bf p}_2$ be an arbitrary face of $T$ and
${\bf p}_3$ its opposite vertex. We can assume that the maximum angle
$\gamma$ of the face ${\bf p}_0{\bf p}_1{\bf p}_2$ is at the vertex ${\bf p}_0$. Then from
Lemma \ref{lemmaK} we have
\[
\sin\gamma\ge
m:=\min\left\{\sin\frac{\pi-\bar\psi}{2},\sin\bar\psi\right\}.
\]
Let ${\bf t_1}$ and ${\bf t_2}$ be unit vectors along the edges ${\bf p}_0{\bf p}_1$
and ${\bf p}_0{\bf p}_2$. We can also assume that the angle $\omega$ between the
faces ${\bf p}_0{\bf p}_1{\bf p}_2$ and ${\bf p}_0{\bf p}_1{\bf p}_3$ is not less than the angle
between ${\bf p}_0{\bf p}_1{\bf p}_2$ and ${\bf p}_0{\bf p}_2{\bf p}_3$ (otherwise we interchange the
notation between the vertices ${\bf p}_1$ and ${\bf p}_2$). Then, again from Lemma
\ref{lemmaK} we have
\[
\sin\omega\ge m.
\]
Now consider the triangle ${\bf p}_0{\bf p}_1{\bf p}_3$ and choose $k\in\{0,1\}$ so
that the angle $\xi$ at the vertex ${\bf p}_k$ is not less than that at the
vertex ${\bf p}_{1-k}$. Using again Lemma \ref{lemmaK} we obtain
\[
\sin\xi\ge m.
\]
We take now ${\bf t_3}$ as the unit vector along ${\bf p}_k{\bf p}_3$ and define
$M_0$ as the matrix made up with ${\bf t_1}, {\bf t_2}$ and ${\bf t_3}$
as its columns.
Since the columns of $M_0$ are unit vectors we have $\|M_0\|\le 3$.
Also, the adjugate matrix of $M_0$ has coefficients with absolute value
bounded by 2 and therefore, $\|M_0^{-1}\|\le 6/|\det M_0|$.
Then, to obtain the desired bound for $\|M_0^{-1}\|$ it is enough to
show that $|\det M_0|$ is bounded from below by a constant which depends only
on $\bar \psi$.
Consider the parallelepiped generated by the vectors ${\bf t_1}, {\bf t_2}$
and ${\bf t_3}$. Let $z$ be its height in the direction
perpendicular to ${\bf t_1}$ and ${\bf t_2}$ and
$y$ the height of the face generated by ${\bf t_1}$ and ${\bf t_3}$
in the direction perpendicular to ${\bf t_1}$.
Since
$\|{\bf t_i}\|=1$ we have
$$
|\det M_0|=z\sin\gamma=y\sin\omega\sin\gamma=\sin\xi\sin\omega\sin\gamma
\ge m^3,
$$
as we wanted to prove.
Obviously, the same properties are satisfied by the
matrix $M_1$ made up with ${\bf t_2}, -{\bf t_1}$ and ${\bf t_3}$,
as its columns.
Now, define $h_1=|{\bf p}_0{\bf p}_1|$, $h_2=|{\bf p}_0{\bf p}_2|$ and $h_3=|{\bf p}_k{\bf p}_3|$.
If $k=0$ take $\widetilde T \in \mathcal F_1$ with
vertices at ${\bf 0},h_1{\bf e}_1, h_2{\bf e}_2$ and $h_3{\bf e}_3$
and if $k=1$ take
$\widetilde T \in \mathcal F_2$ with
vertices at ${\bf 0},h_1{\bf e}_1+ h_2 {\bf e}_2,
h_2{\bf e}_2$ and $h_3{\bf e}_3$. Then it is easy to
check that $\widetilde{\bf x}\mapsto M_k\widetilde{\bf x}+ {\bf p}_k$ maps
$\widetilde T$ onto $T$.
\end{proof}
As mentioned above, the regular vertex property is stronger
than the maximum angle condition. Indeed, the following
theorem shows that, under \textsl{RVP}$(\bar c)$, the reference
family in the previous theorem can be restricted to $\mathcal F_1$.
\begin{theorem}
\label{sobrervp} Let $T$ be a tetrahedron satisfying
\textsl{RVP}$(\bar c)$. Then, there exists an element $\widetilde
T\in\mathcal F_1$ that can be mapped onto $T$ through an affine
transformation $F(\widetilde{ {\bf x}})=M \widetilde{\bf{x}}+{\bf
p}_0$ with $\|M\|,\|M^{-1}\|\le C$ where the constant $C$ depends
only on $\bar c$. Furthermore, if $h_i, i=1,2,3$ are the lengths
of the edges of $T$ sharing the vertex ${\bf p}_0$, we can take
$\widetilde{T}\in {\mathcal F}_1$ such that, for $i=1,2,3$, $h_i$
is the length in the direction ${\bf e}_i$.
\end{theorem}
\begin{proof} Let ${\bf p}_0$ and ${\ell_i}$ be as in the
definition of \textsl{RVP}$(\bar c)$ and $h_i$ be the length of the
edge of $T$ with direction $\ell_i$.
Take $M$ as the matrix made up with $\ell_i$
as its columns. Since $|\det(M)|>\bar c$ and the $\ell_i$ are unit vectors
then it is easy to check that
$\|M\|\le C$ and $\|M^{-1}\|\le C$ with a constant $C$ depending only
on a lower bound of $|\det(M)|$ and therefore on $\bar c$.
Then, if $\widetilde T$ is the tetrahedron of $\mathcal F_1$
with lengths $h_i$ in the directions ${\bf e}_i$, the affine transformation $F(\widetilde
{\bf x})=M\widetilde{\bf x}+\bf p_0$ maps $\widetilde T$ onto $T$.
\end{proof}
\begin{remark} It is not difficult to see that the converses of
Theorems \ref{mac} and \ref{sobrervp} hold true. Namely, the family
of elements obtained by transforming $\mathcal F_1\cup\mathcal F_2$
(resp. $\mathcal F_1$)
by affine maps $\widetilde{\bf x}\mapsto M\widetilde{\bf x}+ {\bf c}$, where
$\|M\|,\|M^{-1}\|\le C$, satisfies \textsl{MAC}$(\bar\psi)$ (resp. \textsl{RVP}$(\bar c)$)
for some $\bar\psi$ (resp. $\bar c$) which depends only on $C$.
\end{remark}
\setcounter{equation}{0}
\section{Stability under the regular vertex property}
The goal of this section is to prove the
stability in $L^p$ for the Raviart-Thomas interpolation of
arbitrary order of functions in $W^{1,p}(T)^3$, for families
of elements satisfying the regular vertex property.
Precisely, the main result of this section is the following theorem.
\begin{theorem}
\label{mainrvp}
Let $k\ge0$ and $T$ be a tetrahedron satisfying \textsl{RVP}$(\bar c)$. If
${\bf p}_0$ is the regular vertex, $\ell_i, i=1,2,3$, are unit
vectors with the directions of the edges sharing ${\bf p}_0$,
$h_i, i=1,2,3$, the lengths of these edges, and $h_T$ the diameter of $T$ then,
there exists a constant $C$ depending only on $k$ and $\bar c$ such that,
for all ${\bf u}\in W^{1,p}(T)^3$, $1\le p\le \infty$,
$$
\left\|\Pi_k{\bf u}\right\|_{L^p(T)}\le C\left( \|{\bf u}\|_{L^p(T)} +
\sum_{i,j} h_j\left\|\frac{\partial u_i}{\partial
\ell_j}\right\|_{L^p(T)} + h_T\|\mathrm{div\,} {\bf u}\|_{L^p(T)}\right).
$$
\end{theorem}
\bigskip
The theorem will follow from Theorem \ref{sobrervp} once we have proved
error estimates for elements in the family $\mathcal F_1$.
First we will prove appropriate estimates in the reference element
$\widehat T$ defined as the tetrahedron with vertices at
$(0,0,0), (1,0,0), (0,1,0)$ and $(0,0,1)$.
This is the object of the next two lemmas. Afterwards, estimates
for elements in $\mathcal F_1$ will be obtained by scaling arguments.
We denote with $\widehat F_i$ the face of
$\widehat T$ normal to ${\mathbf n}_i$, with ${\mathbf n}_1=(-1,0,0),
{\mathbf n}_2=(0,-1,0), {\mathbf n}_3=(0,0,-1)$ and ${\mathbf n}_4=\frac1{\sqrt{3}}(1,1,1)$.
We will use the same notation for a function
of two variables as for its extension to
$\widehat T$ as a function independent of the other variable,
for example, $f(x_2,x_3)$ will denote a function defined on
$\widehat F_1$ as well as one defined in $\widehat T$ (anyway,
the meaning in each case will be clear from the context).
In the same way, the same notation will be used to denote a polynomial
$p_k$ on a face and a polynomial in three variables such that
its restriction to that face agrees with $p_k$. For example,
for $p_k\in\mathcal{P}_k(\widehat F_4)$ we will write $p_{k}(1-x_2-x_3,x_2,x_3)$.
In what follows $\widehat\Pi_{k,i}{\bf u}$ denotes the $i$-th component
of $\widehat\Pi_k{\bf u}$.
\begin{lemma}
\label{lemma1}
Let $f\in L^p(\widehat F_1)$, $g\in L^p(\widehat F_2)$,
and $h\in L^p(\widehat F_3)$. If
$$
{\bf u}(x_1,x_2,x_3)=(f(x_2,x_3),0,0),
\quad{\bf v}(x_1,x_2,x_3)=(0,g(x_1,x_3),0),
$$
and
$$
{\bf w}(x_1,x_2,x_3)=(0,0,h(x_1,x_2))
$$
then, their Raviart-Thomas interpolations are of the same form,
namely, there exist $q_i\in\mathcal{P}_k(\widehat F_i)$, $i=1,2,3$,
such that
$$
\widehat\Pi_k{\bf u}=(q_1(x_2,x_3),0,0),\quad
\widehat\Pi_k{\bf v}=(0,q_2(x_1,x_3),0),
$$
and
$$
\widehat\Pi_k{\bf w}=(0,0,q_3(x_1,x_2)).
$$
\end{lemma}
\begin{proof} Let us prove for example the first equality, the other two are
obviously analogous.
Since $\mathrm{div\,}{\bf u}=0$, we have that $\mathrm{div\,}\widehat\Pi_k{\bf u}=0$ and therefore,
from a well known property of the Raviart-Thomas
interpolation (see for example \cite{BF,D3}), it follows that
$\widehat\Pi_k{\bf u}\in \mathcal{P}_k(\widehat T)^3$.
On the other hand, using now (\ref{RTcaras}) for $i=2,3$, and that
$u_2=u_3=0$, we have
$$
\int_{\widehat F_i}\widehat\Pi_{k,i}{\bf u}\, p_k = 0
\qquad\forall p_k\in\mathcal P_k(\widehat F_i),
\quad i=2,3,
$$
and then, taking $p_k=\widehat\Pi_{k,i}{\bf u}$, we conclude that
$\widehat\Pi_{k,i}{\bf u}|_{\widehat F_i}=0$ for $i=2,3$. Therefore
$\widehat\Pi_{k,i}{\bf u}=x_i r_i$ for some $r_i\in\mathcal{P}_{k-1}(\widehat T)$
and so, using now (\ref{RTadentro}) and again that $u_2=u_3=0$,
we obtain that, for $i=2,3$, $\widehat\Pi_{k,i}{\bf u}=0$ in $\widehat T$ as we wanted
to show.
Finally, since $\mathrm{div\,}\widehat\Pi_k{\bf u}=0$, it follows that
$\frac{\partial\widehat\Pi_{k,1}{\bf u}}{\partial x_1}=0$ and so,
$\widehat\Pi_{k,1}{\bf u}$ is independent of $x_1$.
\end{proof}
\begin{lemma}
There exists a constant $C$ depending only on $k$ such that, for all
${\bf u}=(u_1,u_2,u_3)\in W^{1,p}(\widehat T)^3$,
\begin{eqnarray}\label{ineq1}
\|\widehat\Pi_{k,i}{\bf u}\|_{L^p(\widehat T)}&\le& C\left(\|u_i\|_{W^{1,p}(\widehat T)}
+ \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right), \qquad i=1,2,3.
\end{eqnarray}
\end{lemma}
\begin{proof}
From the previous lemma we know that, if
$$
{\bf v}=(u_1,u_2-u_2(x_1,0,x_3), u_3-u_3(x_1,x_2,0))
$$
then,
$\widehat\Pi_{k,1}{\bf u} = \widehat\Pi_{k,1}{\bf v}$.
Let $\alpha,\beta\in\mathcal P_{k-1}(\widehat T)$ be
such that
\begin{equation}
\label{eq10}
\int_{\widehat T}(v_2-x_2\alpha)\,p_{k-1}=0\quad
\mbox{and}\quad
\int_{\widehat T}(v_3-x_3\beta)\,p_{k-1} =0\qquad
\forall p_{k-1}\in\mathcal{P}_{k-1}(\widehat T).
\end{equation}
Observe that such $\alpha$ and $\beta$ exist. Indeed, it is easy to
prove uniqueness (and therefore existence) of the solution of the square
linear systems of equations defining them.
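For instance, for $\alpha$ the homogeneous system reads $\int_{\widehat T}x_2\alpha\,p_{k-1}=0$ for all $p_{k-1}\in\mathcal P_{k-1}(\widehat T)$; taking $p_{k-1}=\alpha$ gives
$$
\int_{\widehat T}x_2\,\alpha^2=0,
$$
and, since $x_2>0$ in the interior of $\widehat T$, this forces $\alpha=0$. The same argument applies to $\beta$ with the weight $x_3$.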
Define now ${\bf w}=(v_1,v_2-x_2\alpha,v_3-x_3\beta)$.
Since $(0,x_2\alpha,x_3\beta)\in{\mathcal RT}_k(\widehat T)$ it follows that
$\widehat\Pi_{k,1}{\bf v}=\widehat\Pi_{k,1}{\bf w}$ and therefore
$\widehat\Pi_{k,1}{\bf u}=\widehat\Pi_{k,1}{\bf w}$.
Taking into account that $w_2|_{\widehat F_2}=0$ and
$w_3|_{\widehat F_3}=0$ and the equations (\ref{eq10}), it follows
that $\widehat\Pi_k{\bf w}$ is determined by the equations
\begin{eqnarray}\nonumber
\int_{\widehat T}\widehat\Pi_{k,1}{\bf w}\, p_{k-1} & = &
\int_{\widehat T}w_1 \,p_{k-1} \qquad \forall p_{k-1}\in\mathcal
P_{k-1}(\widehat
T)\\
\nonumber \int_{\widehat T}\widehat\Pi_{k,2}{\bf w}\, p_{k-1} & = & 0
\qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat
T)\\
\label{eq13} \int_{\widehat T}\widehat\Pi_{k,3}{\bf w}\, p_{k-1} & = &
0 \qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat T)\\\nonumber
\int_{\widehat F_1} \widehat\Pi_{k,1}{\bf w}\, p_k &=& \int_{\widehat
F_1}w_1\,p_k \qquad \forall p_{k}\in\mathcal P_{k}(\widehat
F_1)\\\nonumber \int_{\widehat F_2} \widehat\Pi_{k,2}{\bf w}\, p_k &=&0
\qquad \forall p_{k}\in\mathcal P_{k}(\widehat F_2)\\\nonumber
\int_{\widehat F_3} \widehat\Pi_{k,3}{\bf w}\, p_k &=& 0 \qquad \forall
p_{k}\in\mathcal P_{k}(\widehat F_3)\\\nonumber \int_{\widehat
F_4} (\widehat\Pi_{k,1}{\bf w} + \widehat \Pi_{k,2}{\bf w} + \widehat
\Pi_{k,3}{\bf w})\, p_k &=& \int_{\widehat F_4}(w_1+w_2+w_3)\,p_k
\qquad \forall p_{k}\in\mathcal P_{k}(\widehat F_4).
\end{eqnarray}
Now, for $p_k\in\mathcal P_{k}(\widehat T)$, we have
\begin{eqnarray*}
\int_{\widehat T}\mathrm{div\,} {\bf w} p_k &=&-\int_{\widehat T}{\bf w}\cdot \nabla
p_k + \int_{\partial\widehat T}{\bf w}\cdot {\mathbf n}\,p_k\\
&=&-\int_{\widehat T}{\bf w}\cdot \nabla p_k +
\frac1{\sqrt{3}}\int_{\widehat F_4}(w_1+w_2+w_3)p_k +
\int_{\partial\widehat T\setminus\widehat F_4}{\bf w}\cdot {\mathbf n}\,p_k
\end{eqnarray*}
but, from the definition of ${\bf w}$, we have
\[
-\int_{\widehat T}{\bf w}\cdot \nabla p_k=-\int_{\widehat
T}w_1\,\frac{\partial p_k}{\partial x_1}\qquad\mbox{and}\qquad
\int_{\partial\widehat T\setminus\widehat F_4}{\bf w}\cdot {\mathbf n}\,p_k =
-\int_{\widehat F_1}w_1\,p_k,
\]
therefore, for all $p_{k}\in\mathcal P_{k}(\widehat T)$,
\begin{equation}\label{eq12}
\frac1{\sqrt{3}}\int_{\widehat F_4}(w_1+w_2+w_3)p_k =
\int_{\widehat T}\mathrm{div\,} {\bf w}\,p_{k} + \int_{\widehat
T}w_1\,\frac{\partial p_k}{\partial x_1} + \int_{\widehat
F_1}w_1\,p_k.
\end{equation}
But,
\[
\mathrm{div\,} {\bf w}=\mathrm{div\,}{\bf v}-\mathrm{div\,} (0,x_2\alpha,x_3\beta)
= \mathrm{div\,}{\bf u}-\mathrm{div\,} (0,x_2\alpha,x_3\beta).
\]
So, using (\ref{eq12}), (\ref{eq13}), and standard arguments, we
obtain
\begin{eqnarray*}
\lefteqn{\|\widehat\Pi_{k,1}{\bf u}\|_{L^p(\widehat T)} =
\|\widehat\Pi_{k,1}{\bf w}\|_{L^p(\widehat T)}}\\&\le& C\left( \|u_1\|_{W^{1,p}(\widehat T)}
+ \|\mathrm{div\,} {\bf u}\|_{L^p(\widehat T)} + \|\mathrm{div\,} (0,x_2\alpha,x_3\beta)\|_{L^p(\widehat T)}\right).
\end{eqnarray*}
Then, to obtain (\ref{ineq1}) for $i=1$, it is enough to show that
\begin{equation}
\label{ultima cota}
\|\mathrm{div\,} (0,x_2\alpha,x_3\beta)\|_{L^p(\widehat T)} \le
C(\|u_1\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,} {\bf u}\|_{L^p(\widehat T)}).
\end{equation}
For $p_k\in\mathcal P_{k}(\widehat T)$ we have
\begin{eqnarray*}
0&=&\int_{\widehat T} (0,v_2-x_2\alpha,v_3-x_3\beta)\cdot\nabla
p_{k}\\
&=&-\int_{\widehat T} \mathrm{div\,} (0,v_2-x_2\alpha,v_3-x_3\beta)\,p_k +
\int_{\partial\widehat
T}\left[(v_2-x_2\alpha)n_2+(v_3-x_3\beta)n_3\right]\,p_k.
\end{eqnarray*}
Now we take $p_k(x_1,x_2,x_3)=(1-x_1-x_2-x_3)p_{k-1}$ with
$p_{k-1}\in\mathcal P_{k-1}(\widehat T)$. Then, since $p_k=0$ on
$\widehat F_4$, $(v_2-x_2\alpha)n_2=0$ on $\partial\widehat T\setminus \widehat F_4$
and $(v_3-x_3\beta)n_3=0$ on $\partial\widehat T\setminus \widehat F_4$, it follows
that, in the last equation, the boundary integral vanishes. Then,
\[
\int_{\widehat T} (1-x_1-x_2-x_3)\,\mathrm{div\,}
(0,v_2-x_2\alpha,v_3-x_3\beta)\,p_{k-1} =0.
\]
That is, for all $p_{k-1}\in\mathcal P_{k-1}(\widehat T)$,
$$
\int_{\widehat T} (1-x_1-x_2-x_3)\,\mathrm{div\,}(0,x_2\alpha,x_3\beta)\,p_{k-1}=\int_{\widehat T}
(1-x_1-x_2-x_3)\,\mathrm{div\,} (0,v_2,v_3)\,p_{k-1}.
$$
Therefore, taking $p_{k-1}=\mathrm{div\,}(0,x_2\alpha,x_3\beta)$ and applying the H\"older
inequality we obtain
$$
\int_{\widehat T} (1-x_1-x_2-x_3)\,|\mathrm{div\,}(0,x_2\alpha,x_3\beta)|^2
\le C\|\mathrm{div\,} (0,v_2,v_3)\|_{L^p(\widehat T)}\,\|\mathrm{div\,}(0,x_2\alpha,x_3\beta)\|_{L^{p'}(\widehat T)}.
$$
But, since all the norms on $\mathcal P_{k-1}(\widehat T)$ are equivalent
we conclude that,
\begin{equation}
\label{casi esta}
\|\mathrm{div\,}(0,x_2\alpha,x_3\beta)\|_{L^p(\widehat T)}
\le C\|\mathrm{div\,} (0,v_2,v_3)\|_{L^p(\widehat T)}.
\end{equation}
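Let us spell out this last step: writing $q=\mathrm{div\,}(0,x_2\alpha,x_3\beta)\in\mathcal P_{k-1}(\widehat T)$, the map $q\mapsto\big(\int_{\widehat T}(1-x_1-x_2-x_3)|q|^2\big)^{1/2}$ is a norm on the finite dimensional space $\mathcal P_{k-1}(\widehat T)$, and so, by the equivalence of norms,
$$
\|q\|_{L^p(\widehat T)}^2\le C\int_{\widehat T}(1-x_1-x_2-x_3)|q|^2
\le C\|\mathrm{div\,}(0,v_2,v_3)\|_{L^p(\widehat T)}\,\|q\|_{L^{p'}(\widehat T)}
\le C\|\mathrm{div\,}(0,v_2,v_3)\|_{L^p(\widehat T)}\,\|q\|_{L^{p}(\widehat T)},
$$
and (\ref{casi esta}) follows after dividing by $\|q\|_{L^p(\widehat T)}$ (the estimate being trivial when $q=0$).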
Now, observe that $\mathrm{div\,} (0,v_2,v_3)=\mathrm{div\,} (0,u_2,u_3)$ and
$$
\|\mathrm{div\,} (0,u_2,u_3)\|_{L^p(\widehat T)}
\le C\left(\|\mathrm{div\,} {\bf u}\|_{L^p(\widehat T)} +
\|u_1\|_{W^{1,p}(\widehat T)}\right)
$$
then, (\ref{ultima cota}) follows from (\ref{casi esta}).
Clearly, the estimates for $\widehat\Pi_{k,2}{\bf u}$ and
$\widehat\Pi_{k,3}{\bf u}$ can be proved analogously.
\end{proof}
From the previous lemma and a change of variables we obtain
estimates for elements in $\mathcal F_1$.
The Raviart-Thomas operators on the elements $\widehat T$ and $\widetilde T$ will
be denoted by $\widehat\Pi_k$ and $\widetilde\Pi_k$ respectively.
Analogous notations will be used for variables and derivatives
or differential operators on $\widehat T$ and $\widetilde T$ whenever
needed for clarity.
\begin{proposition}
\label{proprvp} Let $\widetilde T\in\mathcal F_1$ be the element with vertices
at ${\bf 0}$, $h_1{\bf e}_1$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$, where $h_i>0$.
There exists a constant $C$ depending only on $k$
such that, for $\widetilde{\bf u}=(\tilde u_1,\tilde u_2,\tilde u_3)\in W^{1,p}(\widetilde T)^3$
and $i=1,2,3$,
\begin{eqnarray*}
\label{ineq0}
\|\widetilde\Pi_{k,i}\widetilde{\bf u}\|_{L^p(\widetilde T)}&\le&
C\left(\|\tilde u_i\|_{L^p(\widetilde T)} + \sum_{j=1}^3 h_j
\left\| \frac{\partial \tilde u_i}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)} +
h_i\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}\right)
\end{eqnarray*}
\end{proposition}
\begin{proof} Let $\widehat{\bf u}\in W^{1,p}(\widehat T)^3$ be defined via the
Piola transform by
$$
\widetilde{\bf u}(\widetilde{\bf x})=
\frac1{\det B}B\widehat{\bf u}(\widehat{\bf x}),\quad\widetilde{\bf x}=B\widehat{\bf x},
\quad \mbox{with}\ \
B=\left(%
\begin{array}{ccc}
h_1 & 0 & 0 \\
0 & h_2 & 0 \\
0 & 0 & h_3 \\
\end{array}%
\right)
$$
It is known that (see for example \cite{D3,RT}),
\begin{equation}
\label{Piola1}
\widetilde\Pi_k\widetilde{\bf u}(\widetilde{\bf x})=
\frac1{\det B}\,B\,\widehat\Pi_k\widehat{\bf u}(\widehat{\bf x}),
\end{equation}
and
\begin{equation}
\label{Piola2}
\widetilde\mathrm{div\,}\widetilde{\bf u}(\widetilde{\bf x}) =
\frac1{\det B}\widehat\mathrm{div\,}\widehat{\bf u}(\widehat{\bf x}).
\end{equation}
Consider for example $i=1$ (the other cases are of course analogous).
Using (\ref{ineq1}) we have
\begin{eqnarray*}
\left\|\widetilde\Pi_{k,1}\widetilde{\bf u}\right\|_{L^p(\widetilde T)}^p &=&
\frac{h_1h_2h_3}{h_2^ph_3^p}\left\|\widehat\Pi_{k,1}\widehat{\bf u}\right\|_{L^p(\widehat T)}^p\\
&\le& C\frac{h_1h_2h_3}{h_2^ph_3^p} \left(\|\hat
u_1\|_{W^{1,p}(\widehat T)}^p +
\|\widehat\mathrm{div\,}\widehat{\bf u}\|_{L^p(\widehat T)}^p\right)\\
&\le& C\left(\|\tilde u_1\|_{L^p(\widetilde T)}^p + \sum_{j=1}^3
h_j^p \left\| \frac{\partial \tilde u_1}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)}^p +
h_1^p\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}^p\right)
\end{eqnarray*}
as we wanted to show.
\end{proof}
We are finally ready to prove the main theorem of
this section.
\begin{proof}[Proof of Theorem \ref{mainrvp}]
To simplify notation we assume ${\bf p}_0={\bf 0}$. From Theorem
\ref{sobrervp} we know that, if $\widetilde T\in\mathcal F_1$ is the
element with vertices at ${\bf 0}$, $h_1{\bf e}_1$, $h_2{\bf e}_2$
and $h_3{\bf e}_3$, there exists a matrix $M$ such that the
associated linear transformation maps $\widetilde T$ onto $T$. Moreover,
$M{\bf e_i}=\ell_i, i=1,2,3$.
Now, given ${\bf u}\in W^{1,p}(T)^3$ we define $\widetilde{\bf u}\in W^{1,p}(\widetilde T)^3$
via the Piola transform, namely,
\[
{\bf u}({\bf x})=\frac1{\det M}M\widetilde{\bf u}(\widetilde{\bf x}),\qquad
{\bf x}=M\widetilde{\bf x}\, .
\]
Using Proposition \ref{proprvp} after the change of variables
${\bf x}\mapsto\widetilde{\bf x}$ we have
\[
\|\Pi_k{\bf u}\|_{L^p(T)}^p \le C \frac{\|M\|^p}{(\det M)^{p-1}}
\left(
\|\widetilde{\bf u}\|_{L^p(\widetilde T)}^p + \sum_{j=1}^3 h_j^p \left\|
\frac{\partial\widetilde{\bf u}}{\partial\tilde x_j}\right\|_{L^p(\widetilde T)}^p +
h_{\widetilde T}^p\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}^p\right)
\]
where $h_{\widetilde T}$ is the diameter of $\widetilde T$ and
$\frac{\partial\widetilde{\bf u}}{\partial\tilde x_j}$ denotes the
vector $\left(\frac{\partial\tilde u_1}{\partial\tilde x_j},
\frac{\partial\tilde u_2}{\partial\tilde x_j},
\frac{\partial\tilde u_3}{\partial\tilde x_j}\right)^t$. But,
\begin{equation}
\label{propiedadpiola}
\frac{\partial\widetilde{\bf u}}{\partial\tilde x_j} = \det M\, M^{-1}
\frac{\partial{\bf u}}{\partial \ell_j},\qquad
\mathrm{div\,}{\bf u}({\bf x})=\frac1{\det M}\widetilde\mathrm{div\,}\widetilde{\bf u}(\widetilde{\bf x}),
\end{equation}
and $h_{\widetilde T}\le \|M^{-1}\| h_T$. Therefore we arrive at
\[
\|\Pi_k{\bf u}\|_{L^p(T)}^p \le C \|M\|^p\|M^{-1}\|^p \left(
\|{\bf u}\|_{L^p(T)}^p + \sum_{j=1}^3 h_j^p \left\|
\frac{\partial{\bf u}}{\partial \ell_j}\right\|_{L^p(T)}^p + h_T^p\|\mathrm{div\,}{\bf u}\|_{L^p(T)}^p\right)
\]
and recalling that $\|M\|,\|M^{-1}\|\le C$ with $C$ depending
only on $\bar c$, we conclude the proof. \end{proof}
\setcounter{equation}{0}
\section{Stability under the maximum angle condition}
\label{section4}
In this section we prove a stability result weaker
than that obtained in the previous section but which is valid
for families of elements satisfying the maximum angle condition.
The estimate obtained here, although uniform in the class of
elements satisfying \textsl{MAC}$(\bar\psi)$, is weaker than the estimate
obtained in Theorem \ref{mainrvp} under the stronger \textsl{RVP}$(\bar c)$ hypothesis.
Indeed, the diameter $h_T$ appears in front of each derivative
instead of the length of the edge in the direction
of the derivative. However, our result is optimal. In fact,
we will show in the next section that estimates like those in
Theorem \ref{mainrvp} are not valid in general under the maximum
angle condition.
The main result of this section is the following theorem.
\begin{theorem}
\label{mainmac} Let $k\ge0$ and $T$ be a tetrahedron with diameter $h_T$
satisfying \textsl{MAC}$(\bar\psi)$. There exists a constant $C$
depending only on $k$ and $\bar\psi$ such that, for all ${\bf u}\in W^{1,p}(T)^3$,
$1\le p\le \infty$,
\begin{equation}
\label{mainmac1}
\|\Pi_k{\bf u}\|_{L^p(T)}\le C\left(\|{\bf u}\|_{L^p(T)}
+ h_T \|\nabla{\bf u}\|_{L^p(T)}\right).
\end{equation}
\end{theorem}
\bigskip
The steps to prove this theorem are similar to those followed in
Section 3. Now our reference element $\widehat T$ is the
tetrahedron with vertices at ${\bf 0}$, ${\bf e}_1+ {\bf e}_2$,
${\bf e}_2$ and ${\bf e}_3$. For ${\mathbf n}_1=(1,0,0),
{\mathbf n}_2=\frac1{\sqrt{2}}(1,-1,0), {\mathbf n}_3=(0,0,1)$ and
${\mathbf n}_4=\frac1{\sqrt{2}}(0,1,1)$ we denote with $\widehat F_i$ the
face of $\widehat T$ normal to ${\mathbf n}_i$ and with $\overline{F}_2$
the projection of $\widehat F_2$ onto the plane given by $x_2=0$.
\begin{lemma}
\label{lemma31} Let $f\in L^p(\widehat F_1)$, $g\in L^p(\overline
F_2)$, and $h\in L^p(\widehat F_3)$. If
$$
{\bf u}(x_1,x_2,x_3)=(f(x_2,x_3),0,0),
\quad{\bf v}(x_1,x_2,x_3)=(0,g(x_1,x_3),0),
$$
and
$$
{\bf w}(x_1,x_2,x_3)=(0,0,h(x_1,x_2))
$$
then, their Raviart-Thomas interpolations are of the same form,
namely, there exist $q_i\in\mathcal{P}_k(\widehat F_i)$, $i=1,3$,
and $q_2\in\mathcal{P}_k(\overline F_2)$ such that
$$
\widehat\Pi_k{\bf u}=(q_1(x_2,x_3),0,0),\quad
\widehat\Pi_k{\bf v}=(0,q_2(x_1,x_3),0),
$$
and
$$
\widehat\Pi_k{\bf w}=(0,0,q_3(x_1,x_2)).
$$
\end{lemma}
\begin{proof} The proof is similar to that of Lemma \ref{lemma1}.
We will prove the first equality, the other two follow in
an analogous way.
First, we have that $\mathrm{div\,}\widehat\Pi_k{\bf u}=0$ and therefore
$\widehat\Pi_k{\bf u}\in \mathcal{P}_k(\widehat T)^3$.
Then, proceeding exactly as in the proof of Lemma \ref{lemma1},
we obtain that $\widehat\Pi_{k,3}{\bf u}=0$ in $\widehat T$.
Analogously, using now (\ref{RTcaras}) for $i=4$ we have
$(\widehat\Pi_{k,2}{\bf u} + \widehat\Pi_{k,3}{\bf u})|_{\widehat F_4}=0$,
and so
$$
\widehat\Pi_{k,2}{\bf u} + \widehat\Pi_{k,3}{\bf u}=(1-x_2-x_3)r
$$
for some $r\in\mathcal{P}_{k-1}(\widehat T)$. Consequently, using
now (\ref{RTadentro}) and that $u_2=u_3=0$,
we obtain $\widehat\Pi_{k,2}{\bf u} + \widehat\Pi_{k,3}{\bf u}=0$ in $\widehat T$.
Then, since we already know that $\widehat\Pi_{k,3}{\bf u}=0$,
we conclude that $\widehat\Pi_{k,2}{\bf u}=0$ in $\widehat T$.
Therefore, $\widehat\Pi_k{\bf u}=(q,0,0)$ for some $q\in\mathcal{P}_k(\widehat T)$
but, since $\mathrm{div\,}\widehat\Pi_k{\bf u}=0$, it follows that $\widehat\Pi_{k,1}{\bf u}$ is independent
of $x_1$.
\end{proof}
\begin{lemma}
There exists a constant $C_1$ depending only on $k$ such that, for all
${\bf u}=(u_1,u_2,u_3)\in W^{1,p}(\widehat T)^3$,
\begin{eqnarray}\label{ineq1.5}
\|\widehat\Pi_{k,1}{\bf u}\|_{L^p(\widehat T)}&\le&
C_1\left(\|u_1\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right)\\
\label{ineq2} \|\widehat\Pi_{k,2}{\bf u}\|_{L^p(\widehat T)}&\le&
C_1\left(\|u_2\|_{W^{1,p}(\widehat T)} + \left\|\frac{\partial u_1}{\partial x_1}\right\|_{L^p(\widehat T)}
+ \left\|\frac{\partial u_3}{\partial
x_3}\right\|_{L^p(\widehat T)}\right)\\
\label{ineq3}\|\widehat\Pi_{k,3}{\bf u}\|_{L^p(\widehat T)}&\le&
C_1\left(\|u_3\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right).
\end{eqnarray}
In particular, for $i=1,2,3$,
\begin{equation}
\label{ineq3.5}
\|\widehat\Pi_{k,i}{\bf u}\|_{L^p(\widehat T)}
\le C_2\left(\|u_i\|_{W^{1,p}(\widehat T)}
+ \sum_{j=1\atop j\neq i}^3\left\|\frac{\partial u_j}{\partial x_j}\right\|_{L^p(\widehat T)}\right)
\end{equation}
for another constant $C_2$ which depends only on $k$.
\end{lemma}
\begin{proof}
Let
${\bf v}=(u_1,u_2-u_2(x_1,x_1,x_3),u_3-u_3(x_1,x_2,0))$ and
$\alpha,\beta\in \mathcal P_{k-1}(\widehat T)$ such that
\begin{equation}
\label{int0}
\int_{\widehat T}(v_2-(x_1-x_2)\alpha)p_{k-1} =0\quad
\mbox{and}\quad
\int_{\widehat T}(v_3-x_3\beta)p_{k-1} =0\qquad \forall p_{k-1}\in
\mathcal P_{k-1}(\widehat T),
\end{equation}
and define
\[{\bf w}=(u_1,u_2-u_2(x_1,x_1,x_3)-(x_1-x_2)\alpha,u_3-u_3(x_1,x_2,0)-x_3\beta).\]
Then, since
$(0,(x_1-x_2)\alpha,x_3\beta)\in{\mathcal RT}_k$, it follows from Lemma \ref{lemma31}
that
$$
\widehat\Pi_{k,1}{\bf u}=\widehat\Pi_{k,1}{\bf w}.
$$
Now, taking into account the definition of ${\bf w}$ and (\ref{int0}), we have that
$\widehat\Pi_k{\bf w}$ is defined by
\begin{eqnarray}\nonumber
\int_{\widehat T}\widehat\Pi_{k,1}{\bf w}\, p_{k-1} & = & \int_{\widehat T}w_1
\,p_{k-1} \qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat
T)\\
\nonumber \int_{\widehat T}\widehat\Pi_{k,2}{\bf w}\, p_{k-1} & = & 0
\qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat
T)\\
\label{eq15} \int_{\widehat T}\widehat\Pi_{k,3}{\bf w}\, p_{k-1} & = &
0 \qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat T)\\\nonumber
\int_{\widehat F_1} \widehat\Pi_{k,1}{\bf w}\, p_k &=& \int_{\widehat
F_1}w_1\,p_k \qquad \forall p_{k}\in\mathcal P_{k}(\widehat
F_1)\\\nonumber \int_{\widehat F_2}
(\widehat\Pi_{k,1}{\bf w}-\widehat\Pi_{k,2}{\bf w})\, p_k &=&\int_{\widehat
F_2}w_1\,p_k \qquad \forall p_{k}\in\mathcal P_{k}(\widehat
F_2)\\\nonumber \int_{\widehat F_3} \widehat\Pi_{k,3}{\bf w}\, p_k &=&
0 \qquad \forall p_{k}\in\mathcal P_{k}(\widehat F_3)\\\nonumber
\int_{\widehat F_4} (\widehat\Pi_{k,2}{\bf w} + \widehat\Pi_{k,3}{\bf w})\,
p_k &=& \int_{\widehat F_4}(w_2+w_3)\,p_k \qquad \forall
p_{k}\in\mathcal P_{k}(\widehat F_4).
\end{eqnarray}
But, using again (\ref{int0}) we have, for $p_k\in\mathcal P_{k}(\widehat T)$,
\begin{eqnarray}
\int_{\widehat T}\mathrm{div\,}(0,w_2,w_3)p_k & = & -\int_{\widehat T}
(0,w_2,w_3)\cdot\nabla p_k \nonumber\\
& + & \int_{\partial\widehat T\setminus\widehat F_4}
(w_2n_2+w_3n_3)p_k + \frac1{\sqrt{2}}\int_{\widehat F_4}(w_2+w_3)p_k\nonumber\\
&=&\frac1{\sqrt{2}}\int_{\widehat F_4}(w_2+w_3)p_k.\label{eq16}
\end{eqnarray}
Then, it follows from (\ref{eq15}) and (\ref{eq16}) that
\begin{eqnarray*}
\|\widehat\Pi_{k,1}{\bf w}\|_{L^p(\widehat T)} &\le& C\left(
\|w_1\|_{W^{1,p}(\widehat T)} +
\|\mathrm{div\,}(0,w_2,w_3)\|_{L^p(\widehat T)}\right)\\
&\le & C \left( \|u_1\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}
+ \|\mathrm{div\,}(0,(x_1-x_2)\alpha,x_3\beta)\|_{L^p(\widehat T)}\right).
\end{eqnarray*}
Therefore, to conclude the proof of (\ref{ineq1.5}) it is enough to show that
\begin{equation}
\label{eq17}
\|\mathrm{div\,}(0,(x_1-x_2)\alpha,x_3\beta)\|_{L^p(\widehat T)} \le C\,(
\|u_1\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}).
\end{equation}
But, for all $p_k\in\mathcal P_{k}(\widehat T)$, we have
$$
0=\int_{\widehat T} (0,w_2,w_3)\cdot \nabla p_k
= -\int_{\widehat T}\mathrm{div\,}(0,w_2,w_3)\,p_k + \int_{\partial \widehat T}
(w_2n_2+w_3n_3)p_k.
$$
Now, taking $p_k=(1-x_2-x_3)p_{k-1}$ with $p_{k-1}\in\mathcal P_{k-1}(\widehat T)$
the boundary integral in the last
equation vanishes, and therefore we obtain
$$
\int_{\widehat T} (1-x_2-x_3)\mathrm{div\,}(0,(x_1-x_2)\alpha,x_3\beta)p_{k-1}
=\int_{\widehat T} (1-x_2-x_3)\mathrm{div\,}(0,u_2,u_3)p_{k-1}.
$$
Then, (\ref{eq17}) can be obtained with an argument like that used for
(\ref{ultima cota}).
Clearly, the proof of inequality (\ref{ineq3})
is analogous to that of (\ref{ineq1.5}).
Now, to prove (\ref{ineq2}), take
${\bf v}=(u_1-u_1(0,x_2,x_3),u_2,u_3-u_3(x_1,x_2,0))$,
$\alpha,\beta\in \mathcal P_{k-1}(\widehat T)$ such that
\[
\int_{\widehat T}(v_1-x_1\alpha)p_{k-1} =0 \quad
\mbox{and}\quad
\int_{\widehat T}(v_3-x_3\beta)p_{k-1} =0\quad \forall p_{k-1}\in
\mathcal P_{k-1}(\widehat T),
\]
and define
$$
{\bf w}=(v_1-x_1\alpha,v_2,v_3-x_3\beta).
$$
Using again Lemma \ref{lemma31} and that
$(x_1\alpha,0,x_3\beta)\in{\mathcal RT}_k$ we obtain
\[
\widehat\Pi_{k,2}{\bf u}=\widehat\Pi_{k,2}{\bf w}.
\]
In this case, it follows from the definition of ${\bf w}$ that
$\widehat\Pi_k{\bf w}$ is defined by
\begin{eqnarray}\nonumber
\int_{\widehat T}\widehat\Pi_{k,1}{\bf w}\, p_{k-1} & = & 0 \qquad
\forall p_{k-1}\in\mathcal P_{k-1}(\widehat
T)\\
\nonumber \int_{\widehat T}\widehat\Pi_{k,2}{\bf w}\, p_{k-1} & = &
\int_{\widehat T} w_2\,p_{k-1} \qquad \forall p_{k-1}\in\mathcal
P_{k-1}(\widehat
T)\\
\label{eq20} \int_{\widehat T}\widehat\Pi_{k,3}{\bf w}\, p_{k-1} & = &
0 \qquad \forall p_{k-1}\in\mathcal P_{k-1}(\widehat T)\\\nonumber
\int_{\widehat F_1} \widehat\Pi_{k,1}{\bf w}\, p_k &=& 0 \qquad \forall
p_{k}\in\mathcal P_{k}(\widehat F_1)\\\nonumber \int_{\widehat
F_2} (\widehat\Pi_{k,1}{\bf w}-\widehat\Pi_{k,2}{\bf w})\, p_k
&=&\int_{\widehat F_2}(w_1-w_2)\,p_k \qquad \forall p_{k}\in\mathcal
P_{k}(\widehat F_2)\\\nonumber \int_{\widehat F_3} \widehat\Pi_{k,3}{\bf w}\,
p_k &=& 0 \qquad \forall p_{k}\in\mathcal P_{k}(\widehat
F_3)\\\nonumber \int_{\widehat F_4} (\widehat\Pi_{k,2}{\bf w} +
\widehat\Pi_{k,3}{\bf w})\, p_k &=& \int_{\widehat F_4}(w_2+w_3)\,p_k
\qquad \forall p_{k}\in\mathcal P_{k}(\widehat F_4).
\end{eqnarray}
But, it is easy to check by integration by parts that,
for all $p_k\in\mathcal P_{k}(\widehat T)$,
\begin{equation}
\int_{\widehat T}\mathrm{div\,}(0,w_2,w_3)p_k
= - \int_{\widehat T} w_2\frac{\partial p_k}{\partial x_2}-
\frac1{\sqrt{2}}\int_{\widehat F_2}w_2\,p_k +
\frac1{\sqrt{2}}\int_{\widehat F_4}(w_2+w_3)p_k
\label{eq18}
\end{equation}
and
\begin{equation}
\int_{\widehat T}\mathrm{div\,}(w_1,w_2,0)p_k
= - \int_{\widehat T} w_2\frac{\partial p_k}{\partial x_2}+
\frac1{\sqrt{2}}\int_{\widehat F_4}w_2\,p_k +
\frac1{\sqrt{2}}\int_{\widehat F_2}(w_1-w_2)p_k\label{eq19}
\end{equation}
Now, it follows from (\ref{eq20}), (\ref{eq18}) and (\ref{eq19}) that
$$
\|\widehat\Pi_{k,2}{\bf w}\|_{L^p(\widehat T)}
\le C\left( \|w_2\|_{W^{1,p}(\widehat T)}
+ \|\mathrm{div\,}(0,w_2,w_3)\|_{L^p(\widehat T)} + \|\mathrm{div\,}(w_1,w_2,0)\|_{L^p(\widehat T)}\right)
$$
and therefore, using the definition of ${\bf w}$, we obtain
\begin{eqnarray*}
\|\widehat\Pi_{k,2}{\bf w}\|_{L^p(\widehat T)}
&\le& C \left( \|u_2\|_{W^{1,p}(\widehat T)}
+ \left\|\frac{\partial u_1}{\partial x_1}\right\|_{L^p(\widehat T)}
+\right.\\&&\left.
\left\|\frac{\partial u_3}{\partial x_3}\right\|_{L^p(\widehat T)}
+ \left\|\frac{\partial (x_1\alpha)}{\partial x_1}\right\|_{L^p(\widehat T)}
+ \left\|\frac{\partial(x_3\beta)}{\partial x_3}\right\|_{L^p(\widehat T)}\right).
\end{eqnarray*}
Then, to conclude the proof of (\ref{ineq2}) we have to estimate
the last two terms in the above inequality. From the definition of
$w_3$ we have, for all $p_k\in\mathcal P_k(\widehat T)$,
$$
0=\int_{\widehat T} w_3\frac{\partial p_k}{\partial x_3}
= - \int_{\widehat T} \frac{\partial w_3}{\partial x_3}p_k
+ \int_{\partial\widehat T} w_3n_3p_k,
$$
but, if we take $p_k=(1-x_2-x_3)p_{k-1}$ with $p_{k-1}\in\mathcal P_{k-1}(\widehat T)$
the boundary integral in the
last equation vanishes, and therefore
$$
\int_{\widehat T} (1-x_2-x_3)\frac{\partial (x_3\beta)}{\partial
x_3}p_{k-1} = \int_{\widehat T} (1-x_2-x_3)\frac{\partial
u_3}{\partial x_3}p_{k-1} \qquad \forall p_{k-1}\in\mathcal
P_{k-1}(\widehat T),
$$
from which we obtain
\[
\left\|\frac{\partial (x_3\beta)}{\partial x_3}\right\|_{L^p(\widehat T)}
\le C\,\left\|\frac{\partial u_3}{\partial
x_3}\right\|_{L^p(\widehat T)}.
\]
In a similar way we can prove
\[
\left\|\frac{\partial (x_1\alpha)}{\partial
x_1}\right\|_{L^p(\widehat T)} \le C\,\left\|\frac{\partial u_1}{\partial
x_1}\right\|_{L^p(\widehat T)}
\]
and so (\ref{ineq2}) is proved.\end{proof}
Proceeding as in the previous section we obtain now
estimates for elements in~$\mathcal F_2$.
\begin{proposition}
\label{propmac}
Let $\widetilde T\in\mathcal F_2$ be the element with vertices
at ${\bf 0}$, $h_1{\bf e}_1+ h_2 {\bf e}_2$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$, where $h_i>0$.
There exists a constant $C$ depending only on $k$
such that, for $\widetilde{\bf u}=(\tilde u_1,\tilde u_2,\tilde u_3)\in W^{1,p}(\widetilde T)^3$
and $i=1,2,3$,
\begin{equation}
\label{ineq4}
\|\widetilde\Pi_{k,i}\widetilde{\bf u}\|_{L^p(\widetilde T)}
\le C\left(\|\tilde u_i\|_{L^p(\widetilde T)}
+ \sum_{j=1}^3 h_j\left\| \frac{\partial \tilde u_i}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)}
+ h_i \sum_{j=1\atop j\neq i}^3 \left\|\frac{\partial \tilde u_j}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)}\right)
\end{equation}
\end{proposition}
\begin{proof} We proceed as in the proof of Proposition \ref{proprvp}.
Recall that now our reference element $\widehat T$ is the tetrahedron with vertices
at ${\bf 0}$, ${\bf e}_1+ {\bf e}_2$,
${\bf e}_2$ and ${\bf e}_3$. Therefore, the same linear map given
by $B$ in Proposition \ref{proprvp} maps $\widehat T$ onto $\widetilde T$.
Then, if $\widehat{\bf u}\in W^{1,p}(\widehat T)^3$ is defined via the Piola transform
we have
\begin{equation}
\label{Piola3}
\widetilde\Pi_k\widetilde{\bf u}(\widetilde{\bf x})=
\frac1{\det B}\,B\,\widehat\Pi_k\widehat{\bf u}(\widehat{\bf x}),
\end{equation}
and
\begin{equation}
\label{Piola4}
\widetilde\mathrm{div\,}\widetilde{\bf u}(\widetilde{\bf x}) =
\frac1{\det B}\widehat\mathrm{div\,}\widehat{\bf u}(\widehat{\bf x}).
\end{equation}
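Observe that for this pair of elements $B$ is simply the diagonal matrix
$\mathrm{diag}(h_1,h_2,h_3)$, since it sends ${\bf e}_1+{\bf e}_2$,
${\bf e}_2$ and ${\bf e}_3$ to the corresponding vertices of $\widetilde T$.
Hence (\ref{Piola3}) reads componentwise
$$
\widetilde\Pi_{k,i}\widetilde{\bf u}(\widetilde{\bf x})
=\frac{h_i}{h_1h_2h_3}\,\widehat\Pi_{k,i}\widehat{\bf u}(\widehat{\bf x}),
$$
which is the source of the powers of the $h_i$ in the computation below.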
Using (\ref{ineq3.5}) and changing variables we have
\begin{eqnarray*}
\left\|\widetilde\Pi_{k,i}\widetilde{\bf u}\right\|_{L^p(\widetilde T)}^p
&=&\frac{h_1h_2h_3}{\prod_{j\ne i}h_j^p}\left\|\widehat\Pi_{k,i}\widehat{\bf u}\right\|_{L^p(\widehat T)}^p\\
&\le& C \frac{h_1h_2h_3}{\prod_{j\ne i}h_j^p}\left(\|\hat
u_i\|^p_{L^p(\widehat T)} + \sum_{j=1}^3 \left\| \frac{\partial \hat
u_i}{\partial \hat x_j}\right\|^p_{L^p(\widehat T)} + \sum_{j=1\atop
j\neq i}^3 \left\|\frac{\partial \hat u_j}{\partial \hat x_j}
\right\|^p_{L^p(\widehat T)}\right)\\
&=&C \left(\|\tilde u_i\|^p_{L^p(\widetilde T)} + \sum_{j=1}^3
h_j^p\left\| \frac{\partial \tilde u_i}{\partial \tilde
x_j}\right\|^p_{L^p(\widetilde T)} + h_i^p \sum_{j=1\atop j\neq
i}^3 \left\|\frac{\partial \tilde u_j}{\partial \tilde x_j}
\right\|^p_{L^p(\widetilde T)}\right)
\end{eqnarray*}
and therefore (\ref{ineq4}) is proved.\end{proof}
\begin{remark} For $i=1$ and $i=3$ a better result can be obtained. Indeed,
by the same arguments used in the previous proposition, but using now
(\ref{ineq1.5}) and (\ref{ineq3}), we can prove the following estimates,
$$
\left\|\widetilde\Pi_{k,i}\widetilde{\bf u}\right\|_{L^p(\widetilde T)}
\le C\left(\|\tilde u_i\|_{L^p(\widetilde T)} + \sum_{j=1}^3
h_j \left\| \frac{\partial \tilde u_i}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)} +
h_2\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}\right).
$$
However, this estimate clearly depends on the particular orientation
of the element, and so it does not seem to be useful for
general tetrahedra.
\end{remark}
We can now prove the main result of this section.
\begin{proof}[Proof of Theorem \ref{mainmac}] From Theorem \ref{mac}
we know that there exists $\widetilde T\in \mathcal F_1\cup\mathcal F_2$
that can be mapped onto $T$ through an affine transformation
$\widetilde{\bf x}\mapsto M\widetilde{\bf x}+{\bf c}$, with $\|M\|,\|M^{-1}\|\le C$
for a constant $C$ depending only on $\bar\psi$. To simplify notation assume
that ${\bf c}={\bf 0}$.
If $\widetilde T\in\mathcal F_1$, then $T$ satisfies the regular vertex property
with a constant which depends only on $\bar\psi$ and so (\ref{mainmac1})
follows immediately from Theorem \ref{mainrvp}. Therefore, we
may assume that $\widetilde T\in\mathcal F_2$ and has vertices at
${\bf 0}$, $h_1{\bf e}_1+ h_2 {\bf e}_2$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$, where $h_i>0$.
Given ${\bf u}\in W^{1,p}(T)^3$ we use again the Piola transform and define
$\widetilde{\bf u}\in W^{1,p}(\widetilde T)^3$ by
\[
{\bf u}({\bf x})=\frac1{\det M}M\widetilde{\bf u}(\widetilde{\bf x}),\qquad
{\bf x}=M\widetilde{\bf x}\, .
\]
Then, using that
\[
\Pi_k{\bf u}({\bf x})=\frac1{\det M}M\,\widetilde\Pi_k\widetilde{\bf u}(\widetilde{\bf x}),
\]
changing variables and using (\ref{ineq4}), which yields (\ref{mainmac1}) in $\widetilde T$,
we obtain
$$
\|\Pi_k{\bf u}\|_{L^p(T)}^p
\le C\|M\|^p\|M^{-1}\|^p \left( \|{\bf u}\|_{L^p(T)}^p + h_T^p
\|M\|^p\|D{\bf u}\|_{L^p(T)}^p\right)
$$
concluding the proof.\end{proof}
\setcounter{equation}{0}
\section{Sharpness of the results}
In view of the results of the previous sections, it is natural to ask whether
the estimate obtained under the maximum angle condition could be improved.
The goal of this section is to show that this is not possible.
Consider the element $\widetilde T\in\mathcal F_2$ with vertices at
${\bf 0}$, $h_1{\bf e}_1+ h_2 {\bf e}_2$,
$h_2{\bf e}_2$ and $h_3{\bf e}_3$ and with diameter $h_T$.
We are going to show that the inequality
\begin{equation}
\label{ineq9}
\|\widetilde\Pi_{k,2}\widetilde{\bf u}\|_{L^p(\widetilde T)}\le
C\left(\|\widetilde {\bf u}\|_{L^p(\widetilde T)} + \sum_{i,j=1}^3 h_j
\left\| \frac{\partial \tilde u_i}{\partial \tilde x_j}
\right\|_{L^p(\widetilde T)} +
h_T\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}\right),
\end{equation}
with a constant $C$ independent of
$h_1, h_2$ and $h_3$,
does not hold for some
$\widetilde{\bf u}=(\tilde u_1,\tilde u_2,\tilde u_3)\in W^{1,p}(\widetilde T)^3$.
Suppose that (\ref{ineq9}), with $C$ independent of $h_1, h_2$ and
$h_3$, holds true for all
$\widetilde{\bf u}\in W^{1,p}(\widetilde T)^3$. Let $\widehat T$ be the
reference element used in Section \ref{section4}, i.e.,
$\widehat T$ has vertices
at ${\bf 0}$, ${\bf e}_1+ {\bf e}_2$,
${\bf e}_2$ and ${\bf e}_3$. Then, with $\widehat{\bf u}\in W^{1,p}(\widehat T)^3$
we associate $\widetilde {\bf u} \in W^{1,p}(\widetilde T)^3$ defined via the
Piola transform with the linear transformation used in the proof of
Theorem \ref{mainmac}.
To simplify notation we drop the hat from now on and
write ${\bf u}$ instead of $\widehat{\bf u}$ and $x_i$ for the variables
in $\widehat T$.
A simple computation shows that from inequality (\ref{ineq9})
we obtain
\begin{eqnarray*}
\|\Pi_{k,2}{\bf u}\|_{L^p(\widehat T)} & \le &
C\frac1{h_2}\left(\sum_{i=1}^3 h_i \|u_i\|_{W^{1,p}(\widehat T)} +
h_T\|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right).
\end{eqnarray*}
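Indeed, since (as noted above) the matrix of the map taking $\widehat T$ to
$\widetilde T$ is $\mathrm{diag}(h_1,h_2,h_3)$, the Piola transform scales
each term of (\ref{ineq9}) by a common factor $(h_1h_2h_3)^{\frac1p-1}$
times a power of the $h_i$:
$$
\|\widetilde\Pi_{k,2}\widetilde{\bf u}\|_{L^p(\widetilde T)}
= h_2\,(h_1h_2h_3)^{\frac1p-1}\,\|\Pi_{k,2}{\bf u}\|_{L^p(\widehat T)},
\qquad
h_j\left\|\frac{\partial \tilde u_i}{\partial \tilde x_j}\right\|_{L^p(\widetilde T)}
= h_i\,(h_1h_2h_3)^{\frac1p-1}\,\left\|\frac{\partial u_i}{\partial x_j}\right\|_{L^p(\widehat T)},
$$
together with
$\|\tilde u_i\|_{L^p(\widetilde T)}=h_i(h_1h_2h_3)^{\frac1p-1}\|u_i\|_{L^p(\widehat T)}$
and
$\|\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}
=(h_1h_2h_3)^{\frac1p-1}\|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}$;
dividing (\ref{ineq9}) by $h_2(h_1h_2h_3)^{\frac1p-1}$ gives the bound above.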
Then, taking $h_1=h_3=h_2^2$ (with $h_2<1$), we would have
\begin{eqnarray*}
\|\Pi_{k,2}{\bf u}\|_{L^p(\widehat T)} & \le & C\left(h_2
\|u_1\|_{W^{1,p}(\widehat T)}+ \|u_2\|_{W^{1,p}(\widehat T)} + h_2 \|u_3\|_{W^{1,p}(\widehat T)}
+ \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right),
\end{eqnarray*}
and letting $h_2\to 0$ we would arrive at
\begin{equation}\label{ineq8}
\|\Pi_{k,2}{\bf u}\|_{L^p(\widehat T)}
\le C\left(\|u_2\|_{W^{1,p}(\widehat T)} + \|\mathrm{div\,}{\bf u}\|_{L^p(\widehat T)}\right).
\end{equation}
However, we are going to show that there exists ${\bf u}\in W^{1,p}(\widehat T)^3$
for which inequality (\ref{ineq8}) is not valid.
In fact, in the next proposition we will give, for each
$k\ge 0$, a function ${\bf u}\in W^{1,p}(\widehat T)^3$ such that
the right hand side of (\ref{ineq8}) vanishes while the left hand
side does not. We will use the notation of Section \ref{section4}
for the faces of $\widehat T$.
\begin{proposition} For $k\ge 0$,
the function ${\bf u}(x_1,x_2,x_3)=(x_1^{k+1},0,-(k+1)x_1^kx_3)$
satisfies $\mathrm{div\,}{\bf u}=0$, $u_2=0$, and $\Pi_{k,2}{\bf u}\ne0$.
\end{proposition}
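Indeed, the divergence vanishes by direct computation,
$$
\mathrm{div\,}{\bf u}
=\frac{\partial}{\partial x_1}x_1^{k+1}
+\frac{\partial}{\partial x_3}\bigl(-(k+1)x_1^kx_3\bigr)
=(k+1)x_1^k-(k+1)x_1^k=0,
$$
so the content of the proposition is the last assertion, $\Pi_{k,2}{\bf u}\ne0$.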
\begin{proof} We consider the case $k\ge1$ (the case $k=0$ follows analogously).
Since $\mathrm{div\,}{\bf u}=0$ we have $\Pi_{k,1}{\bf u}, \Pi_{k,3}{\bf u} \in\mathcal P_k(\widehat T)$.
Now, using that $u_1=0$ on $\widehat F_1$ and $u_3=0$ on $\widehat F_3$ it
follows from the definition of $\Pi_k{\bf u}$ that
$$
\int_{\widehat F_1}\Pi_{k,1}{\bf u}\,p_k =0
\qquad \forall p_k\in\mathcal P_k(\widehat F_1)
$$
and
$$
\int_{\widehat F_3}\Pi_{k,3}{\bf u}\,p_k =0
\qquad \forall p_k\in\mathcal P_k(\widehat F_3).
$$
Then $\Pi_{k,1}{\bf u}=x_1\alpha$ and $\Pi_{k,3}{\bf u}=x_3\beta$
with $\alpha,\beta\in\mathcal P_{k-1}(\widehat T)$.
Also from the definition of $\Pi_k{\bf u}$ we have
\[
\int_{\widehat F_2} (\Pi_{k,1}{\bf u} -
\Pi_{k,2}{\bf u})\,p_k = \int_{\widehat F_2}
(u_1-u_2)\,p_k\qquad \forall p_k\in\mathcal P_k(\widehat F_2),
\]
and then, if $\Pi_{k,2}{\bf u}=0$, we would obtain
\[
\int_{\widehat F_2}x_1(\alpha-x_1^k)\,p_k=0 \qquad\forall
p_k\in\mathcal P_k(\widehat F_2).
\]
Taking $p_k=\alpha(x_1,x_1,x_3)-x_1^k$ in the last equation, we
arrive at $\alpha(x_1,x_1,x_3)=x_1^k$, which contradicts the
fact that $\alpha\in\mathcal P_{k-1}(\widehat T)$. Therefore
$\Pi_{k,2}{\bf u}\ne0$.\end{proof}
\setcounter{equation}{0}
\section{Error estimates for RT interpolation}
We end the paper by giving optimal error estimates for Raviart-Thomas
interpolation of any order. These estimates are derived from the stability
results obtained in the previous sections combined with
polynomial approximation results.
Let us recall some well-known properties of the averaged Taylor polynomial.
For a convex domain $D$ and any non-negative integer
$m$, given $f\in W^{m+1,p}(D)$ the averaged Taylor polynomial
is given by
$$
Q_mf({\bf x})=\frac1{|D|}\int_D T_mf({\bf y},{\bf x})\,d{\bf y} \, ,
$$
where
$$
T_mf({\bf y},{\bf x})=\sum_{|\alpha|\le m} D^\alpha
f({\bf y})\frac{({\bf x}-{\bf y})^\alpha}{\alpha!}\, .
$$
Then, there exists a constant $C$, depending only on $m$
and $D$ (see for example \cite{BS,D3}), such that
\begin{equation}
\label{ineqinterp}
\|D^\beta (f-Q_mf)\|_{L^p(D)}
\le C \sum_{i_1+i_2+i_3=m+1}
\left\|\frac{\partial^{m+1}f}{\partial x_1^{i_1}\partial x_2^{i_2}
\partial x_3^{i_3}} \right\|_{L^p(D)}
\end{equation}
whenever $0\le |\beta|\le m+1$.
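For instance, for $m=0$ the averaged Taylor polynomial is just the mean
value of $f$ over $D$,
$$
Q_0f=\frac1{|D|}\int_D f({\bf y})\,d{\bf y},
$$
and (\ref{ineqinterp}) with $\beta=0$ reduces to a Poincar\'e-type bound
for $f-Q_0f$ in terms of its first derivatives.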
As a consequence of these results we have the following approximation
result for elements satisfying the regular vertex property. Given a
function $f$, $D^m f$ denotes the sum of the absolute values of all the
derivatives of order $m$ of $f$.
\begin{lemma}
\label{averaged taylor}
Let $T$ be a tetrahedron satisfying \textsl{RVP}$(\bar c)$ such that
${\bf p}_0$ is the regular vertex, $\ell_i, i=1,2,3$ are unit
vectors in the directions of the edges sharing ${\bf p}_0$,
$h_i, i=1,2,3$, the lengths of these edges, and $h_T$ the diameter of $T$.
Then, given ${\bf u}\in W^{m+1,p}(T)^3$, $m\ge0$, there exists ${\bf q}\in{\mathcal P}_m(T)^3$
such that,
\begin{equation}
\label{at1}
\left\|\frac{\partial({\bf u} - {\bf q})}{\partial \ell_1}\right\|_{L^p(T)}
\le C \sum_{i_1+i_2+i_3=m} h_1^{i_1} h_2^{i_2} h_3^{i_3}
\left\|\frac{\partial^{m+1}{\bf u}}{\partial\ell_1^{i_1+1}\partial\ell_2^{i_2}
\partial\ell_3^{i_3}} \right\|_{L^p(T)}
\end{equation}
and analogously for $\frac{\partial({\bf u} - {\bf q})}{\partial \ell_j}$ with $j=2,3$.
Also,
\begin{equation}
\label{at2}
\|\mathrm{div\,}({\bf u}-{\bf q})\|_{L^p(T)}\le C h_T^m \|D^m\mathrm{div\,}{\bf u}\|_{L^p(T)}
\end{equation}
where the constant $C$ depends only on $m$ and $\bar c$.
\end{lemma}
\begin{proof} To simplify notation we assume again that ${\bf p}_0={\bf 0}$.
From Theorem \ref{sobrervp} we know that there exists a matrix $M$
such that its associated linear transformation maps $\widetilde T$ onto
$T$, where $\widetilde T$ is the element with vertices at ${\bf 0}$,
$h_1{\bf e}_1$, $h_2{\bf e}_2$ and $h_3{\bf e}_3$. Moreover, the
norms of $M$ and of its inverse matrix are bounded by a constant
which depends only on $\bar c$.
Now, as in the proof of Theorem \ref{mainrvp}, we define $\widetilde{\bf u}\in W^{m+1,p}(\widetilde T)^3$
via the Piola transform and
$$
\widetilde{\bf Q}_m\widetilde{\bf u}=(\widetilde Q_m \tilde u_1,\widetilde Q_m \tilde u_2,\widetilde Q_m \tilde
u_3)\in\mathcal P_m(\widetilde T)^3,
$$
where $\widetilde Q_m\tilde u_j$ denotes the averaged Taylor polynomial of
$\tilde u_j$.
Using the estimate (\ref{ineqinterp}) on the reference element $\widehat T$ which
has vertices at ${\bf 0}$, ${\bf e}_1$, ${\bf e}_2$ and ${\bf e}_3$, and a standard
scaling argument we obtain
$$
\left\|\frac{\partial(\widetilde{\bf u}-\widetilde{\bf Q}_m\widetilde{\bf u})}{\partial\tilde
x_1}\right\|_{L^p(\widetilde T)} \le C \sum_{i_1+i_2+i_3=m} h_1^{i_1}
h_2^{i_2} h_3^{i_3} \left\|\frac{\partial^{m+1}\widetilde{\bf u}}{\partial
\tilde x_1^{i_1+1}\partial \tilde x_2^{i_2}
\partial \tilde x_3^{i_3}} \right\|_{L^p(\widetilde T)}.
$$
Then, defining ${\bf q}\in\mathcal P_m(T)^3$ via the Piola transform,
that is,
$$
{\bf q}({\bf x})=\frac1{\det M}M\widetilde{\bf Q}_m\widetilde{\bf u}(\widetilde{\bf x}),
\qquad{\bf x}=M\widetilde{\bf x}\, ,
$$
(\ref{at1}) follows by changing variables as in the proof
of Theorem \ref{mainrvp}.
On the other hand, since
$$
\widetilde\mathrm{div\,}\widetilde{\bf Q}_m\widetilde{\bf u}= \widetilde Q_{m-1}\widetilde\mathrm{div\,}\widetilde{\bf u} ,
$$
using again (\ref{ineqinterp}) in $\widehat T$ and a scaling argument we obtain,
$$
\|\widetilde\mathrm{div\,}(\widetilde{\bf u}-\widetilde{\bf Q}_m\widetilde{\bf u})\|_{L^p(\widetilde T)}
\le C h_{\widetilde T}^m \|\widetilde D^m\widetilde\mathrm{div\,}\widetilde{\bf u}\|_{L^p(\widetilde T)}
$$
and therefore, (\ref{at2}) follows by using the properties
of the Piola transform stated in (\ref{propiedadpiola}).
\end{proof}
We can now state and prove optimal error estimates for
elements satisfying the regular vertex property. Our theorem
generalizes the results of \cite{AD1}, where the same error estimate
was proved in the case $k=0$, as well as those of \cite{DL3},
where the estimate was obtained for any $k\ge 0$ but only in the case $m=k$.
\begin{theorem}
Let $k\ge0$ and $T$ be a tetrahedron satisfying \textsl{RVP}$(\bar c)$. If
${\bf p}_0$ is the regular vertex, $\ell_i, i=1,2,3$ are unit
vectors in the directions of the edges sharing ${\bf p}_0$,
$h_i, i=1,2,3$, the lengths of these edges, and $h_T$ the diameter of $T$, then
there exists a constant $C$ depending only on $k$ and $\bar c$ such that,
for $0\le m\le k$, $1\le p\le \infty$, and ${\bf u}\in W^{m+1,p}(T)^3$,
\begin{eqnarray*}
\lefteqn{\|{\bf u} - \Pi_k{\bf u}\|_{L^p(T)}}\\&\le&
C\left\{\sum_{i_1+i_2+i_3=m+1} h_1^{i_1} h_2^{i_2} h_3^{i_3}
\left\|\frac{\partial^{m+1}{\bf u}}{\partial\ell_1^{i_1}\partial\ell_2^{i_2}
\partial\ell_3^{i_3}} \right\|_{L^p(T)}
+ h_T^{m+1} \|D^m\mathrm{div\,}{\bf u}\|_{L^p(T)}\right\}
\end{eqnarray*}
\end{theorem}
\begin{proof} Since $m\le k$, for any ${\bf q}\in {\mathcal P}_m(T)^3$ we have
$$
{\bf u} - \Pi_k{\bf u}={\bf u}-{\bf q} - \Pi_k({\bf u}-{\bf q})
$$
and therefore, applying Theorem \ref{mainrvp}, we obtain
\begin{eqnarray*}
\lefteqn{\left\|{\bf u}-\Pi_k{\bf u}\right\|_{L^p(T)}} \\
&\le& C\left\{ \|{\bf u}-{\bf q}\|_{L^p(T)} +
\sum_{i,j} h_j\left\|\frac{\partial (u_i-q_i)}{\partial
\ell_j}\right\|_{L^p(T)} + h_T\|\mathrm{div\,}({\bf u}-{\bf q})\|_{L^p(T)}\right\}.
\end{eqnarray*}
Then, taking ${\bf q}\in {\mathcal P}_m(T)^3$ satisfying (\ref{at1}) and (\ref{at2})
we conclude the proof. \end{proof}
Optimal error estimates can also be proved under the maximum angle
condition. We state the results in the following theorem.
\begin{theorem}
Let $k\ge0$ and $T$ be a tetrahedron with diameter $h_T$
satisfying \textsl{MAC}$(\bar\psi)$. There exists a constant $C$
depending only on $k$ and $\bar\psi$ such that,
for $0\le m\le k$, $1\le p\le \infty$, and ${\bf u}\in W^{m+1,p}(T)^3$,
$$
\|{\bf u}-\Pi_k{\bf u}\|_{L^p(T)}\le C h_T^{m+1}\|D^{m+1}{\bf u}\|_{L^p(T)}.
$$
\end{theorem}
The proof is analogous to that of the previous theorem, using now
the stability estimates obtained in Theorem \ref{mainmac}, and so
we omit the details.
\section{Introduction}
\label{Sec-intro}
\vskip -10pt
Wolf-Rayet (WR) stars are the evolved descendants of massive O-type stars. Mass loss during the main-sequence phase, possibly aided by episodic mass ejection during the Luminous Blue Variable (LBV) stage and/or Roche-lobe overflow in close binary systems, strips off the star's H-rich outer layers. This mass loss, plus mixing from the interior, helps to reveal enhanced He and N (the products of CNO H-burning) at the surface.
Such a star is identified spectroscopically as a WN-type WR. If the star has sufficiently high mass, then additional evolution, mass loss, and mixing will lead to a WC-type WR, with enhanced C and O (the products of He-burning). Further evolution may lead to one of the very rare WO-type WRs. The spectra of WRs are characterized by broad, strong emission lines as these lines are formed in an extended, expanding atmosphere/stellar wind; if absorption is present in the spectrum, it is usually
(but not always) due to a close OB companion. Reviews are provided by Maeder \& Conti (1994), Crowther (2008) and Massey (2013), among others.
The relative number of WN- and WC-type WRs as a function of metallicity has long been used as a key diagnostic of massive star evolutionary models. Main-sequence mass-loss rates are larger at
higher metallicities, as they are driven by radiation pressure acting on
highly ionized metal ions. (The metallicity-dependence of wind-driven mass loss was first offered as an explanation for the changing WC to WN number ratio by Vanbeveren \& Conti 1980.) The conventional wisdom has long been that while single-star evolutionary models do a good job of matching the WC/WN ratio at lower metallicities (such as those found in the Magellanic Clouds), they fail at the higher metallicities characteristic of the center of M33, which has a metallicity that is approximately solar, and M31, which has a metallicity that is approximately $2\times$ solar. Examples of this are shown by Massey \& Johnson (1998), Meynet \& Maeder (2005), and most recently by Neugent et al.\ (2012a).
The linchpins for such comparisons at lower metallicities are the Magellanic Clouds. They are the nearest star-forming galaxies to our own, and studies over the years have identified 139 WRs in the LMC (134 stars listed in the Breysacher et al.\ 1999 catalog [BAT99] plus 7 WRs subsequently discovered by various studies, minus 2 that have been demoted to Of-type; see Table 3 of Neugent et al.\ 2012b and references therein\footnote{Note that the reference for the discovery of [M2002] LMC 15666 as a WR star is incorrectly given in that table. Instead,
the discovery should be credited to Gvaramadze et al.\ (2012), who reported the discovery of a WR star in the LMC, but did not provide any coordinates or cross-IDs in that brief conference proceeding. Brian Skiff identified the object from their images, and this is the source of the information in Neugent et al.\ (2012b) and in SIMBAD. It is the NE component of a 2\arcsec\ pair, with the companion a B0~V star.}) and 12 in the SMC (8 listed by Azzopardi \& Breysacher 1979a, plus 4 WRs subsequently discovered; see Table 1 of Massey et al.\ 2003 and references therein). For years it has been commonly accepted that these numbers are {\it essentially} complete. For instance, in their report of discovering two WRs in the LMC, Howarth \& Walborn (2012) suggested that perhaps as many as a dozen or so weak-lined WNEs (10\% of the LMC's total WR population) remained to be found, but no more.
However, even before the Howarth \& Walborn (2012) paper appeared in print, Neugent et al.\ (2012b) announced the discovery of a very strong-lined WO-type WR in the LMC. Similarly, Massey \& Duffy (2001) concluded that the completeness of their survey for WRs in the SMC
could not ``preclude a WR star (or two) [from] having
been overlooked," a statement that proved prescient, as another SMC WN star was chanced upon within a year (Massey et al.\ 2003).
These discoveries were unsettling, and forced us to examine how we came to know the WR content of the Magellanic Clouds, and what would be involved in conducting a more thorough survey, particularly in the LMC where the number of WR stars is large enough to provide robust statistics. Some of the discoveries of WRs in the Clouds came about as part of general spectroscopy, while others resulted from directed objective prism searches. Of the 158 ``brightest stars" in the Magellanic Clouds, 15 were classified
as WR type (Feast et al.\ 1960) through various spectroscopic surveys. An objective prism survey of the LMC aimed at finding WRs by Westerlund \& Rodgers (1959) resulted in the identification of 50 WRs, 30 of which were in common with those known from the HD catalog. (See also Westerlund \& Smith 1964.) Deeper and more complete surveys for WRs in the Magellanic Clouds were carried out by Azzopardi \& Breysacher (1979a, 1979b, 1980). Their surveys employed an objective prism in combination with an interference filter that isolated the region around C~III $\lambda 4650$ and He~II $\lambda 4686$ (the two strongest optical emission lines in the spectra of WC- and WN-type WRs, respectively) in order to reduce problems with crowding and sky background that would have occurred with the use of the objective prism by itself.
These studies added 4 additional WRs to the 4 that were previously known
in the SMC (Breysacher \& Westerlund 1978),
and 17 additional WRs to the 80 known in the LMC (Fehrenbach et al.\ 1976).
It is worth noting that {\it all} of the new WRs found by Azzopardi \& Breysacher (1979a, 1979b, 1980) were of WN type.
Indeed, the difficulty in identifying unbiased samples of WRs has been described by Massey \& Johnson (1998): the strongest optical line in WC stars (typically C III $\lambda4650$) is about 4$\times$ stronger (on average) than that found in WN stars (He~II $\lambda$4686). The weakest-lined WN stars have He~II $\lambda 4686$ equivalent widths of just $-$10~\AA, in contrast to the $-$50~\AA\ equivalent widths found in the weakest-lined WCs (see, e.g., Fig.~1 of Massey \& Johnson 1998). Thus, a survey for WRs has to be sufficiently sensitive to detect weak-lined WNs if it is going to be useful for comparing with the predictions of the evolutionary models. Armandroff \& Massey (1985) described a set of interference filters that has proven very effective at this task: three 50~\AA\ wide filters centered on C III $\lambda 4650$, He~II $\lambda 4686$, and neighboring continuum at $\lambda 4750$ are used to image a region with CCDs, and the brightnesses of objects
compared. This was done by Armandroff \& Massey (1985), Massey et al.\ (1986, 1992), and Massey \& Johnson (1998) to survey small regions of Local Group galaxies beyond the Magellanic Clouds (e.g., parts of M31, M33, IC~1613, IC 10, and NGC~6822) using the relatively tiny CCDs that were then available. Crowded-field photometry algorithms (i.e., PSF-fitting with {\it DAOPHOT}, Stetson 1987) were then used to find WR candidates that were significantly brighter in one of the on-band filters compared to the
expected photometric errors. More
recently, we have been able to take advantage of CCD cameras with much larger fields of view to survey all of M33
(Neugent \& Massey 2011) and M31 (Neugent et al.\ 2012a). These two surveys used image-subtraction techniques to search for candidates in order to avoid the many false positives that plagued the photometry method. Spectroscopic confirmation of these candidates demonstrated that we were finding WRs as weak-lined as any known, and indeed finding new Of-type stars with even smaller emission-line fluxes, lending some confidence that these surveys were sufficiently sensitive and deep to be detecting the vast majority of the WNs.
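In essence, the photometric selection used in the earlier surveys amounts to a simple significance cut on the on-band minus continuum magnitude differences. The following Python fragment is a minimal sketch of that criterion; the function name and the $3\sigma$ threshold are illustrative only, not the parameters of the original {\it DAOPHOT}-based pipelines.
\begin{verbatim}
import numpy as np

def wr_candidates(m_ct, m_wn, m_wc, sig_ct, sig_wn, sig_wc, nsig=3.0):
    # Flag stars significantly brighter in an on-band filter (WN or
    # WC) than in the continuum (CT), relative to the combined
    # photometric errors; nsig = 3 is an illustrative threshold.
    d_wn = m_ct - m_wn                 # > 0 for He II 4686 emission
    d_wc = m_ct - m_wc                 # > 0 for C III 4650 emission
    s_wn = np.hypot(sig_ct, sig_wn)
    s_wc = np.hypot(sig_ct, sig_wc)
    return (d_wn > nsig * s_wn) | (d_wc > nsig * s_wc)
\end{verbatim}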
For the SMC, Massey \& Duffy (2001) undertook such a survey using a wide-area CCD on the CTIO Curtis Schmidt. It
covered 9.6 deg$^2$ and spectroscopy confirmed two new WR stars (both WNs), bringing the total number of known WRs in
the SMC to 11, the result of photometry of over 1.6 million stars. Still, the survey had some deficiencies: the pixel size of the instrument was 2\farcs3, reducing the precision in crowded regions. The areal coverage, while large, did not include all of the star-forming regions of the SMC. As mentioned above, the next year saw the discovery of a 12th SMC WR (another WN) which had been overlooked in the Massey \& Duffy (2001) survey because of crowding.
For the LMC, in the 20 years between the Azzopardi \& Breysacher (1979b, 1980) survey and the compilation of the BAT99 ``Fourth Catalogue," the number of WRs known grew from 97 to 134 (roughly 40\%), mostly as a result of accidental discovery through spectroscopy of stars in selected regions of the Clouds, with only 6 found as
the result of new objective prism surveys for WRs (Morgan \& Good 1985, 1990). Since BAT99, the number of known LMC WRs has grown to
139 (an increase of 4\%), including two demotions of WRs to Of-type.
To us, our discovery (Neugent et al.\ 2012b) of a very rare (and very strong-lined) WO-type WR in Lucke-Hodge 41, a well-studied LMC OB association (harboring, among other things, another WR star and two LBVs, including the archetype S Doradus itself), seemed a wakeup call. We are in the somewhat embarrassing position of knowing more about the WR content of
M33 and M31 (at distances of $\sim$ 800 kpc) than we do about our next-door neighbors, the Magellanic Clouds (at distances
of 50-60 kpc, i.e., $\sim 15 \times$ closer).
The question, then, was what to do about it. We decided to survey the Magellanic Clouds for WRs using the same method that we had so successfully employed in M31 and M33, using interference-filter imaging and image-subtraction
to identify WR candidates and then using spectroscopy to confirm and classify them. The task, however, is quite daunting, as
one of the things that makes the Magellanic Clouds so attractive---their closeness---also results in their very large angular sizes.
The need, however, is pressing: improved evolutionary models have become
available from the Geneva group that now predict a significantly smaller WC/WN ratio ($\sim 0.1$) for the LMC than the ``observed" ratio (0.23). Is the problem with the models, or is the problem that too many WNs have been missed in
past studies? At the same time, the Cambridge STARS evolutionary models continue to improve and become increasingly available (see, e.g., Eldridge et al.\ 2008), allowing comparison with models that include the effects of close binary evolution as well. And, of course, one never knows what surprises await in such new surveys, as we shall see.
Here we report the results of our first observing season. Although we have only surveyed $\sim15$\% of the SMC and LMC, we have already discovered nine new WRs (all in the LMC), two interesting ``Of?p" stars, four Of-type supergiants and a new O4~V star. Of the nine new
WRs, the majority show strong early-type absorption. Are these extremely massive binaries, or are they members of a newly discovered class of massive stars? We describe the details of the new survey and follow-up spectroscopy
in Section~\ref{Sec-survey}, provide our new spectral types in Section~\ref{Sec-results}, demonstrate the sensitivity of our survey in Section~\ref{Sec-completeness}, and discuss the nature of our new discoveries in Section~\ref{Sec-sum}.
\section{The New Survey Begins}
\label{Sec-survey}
In designing our survey we were guided by wide-field (``parking-lot camera") images of the SMC and LMC described by Bothun \& Thompson (1988) and kindly made available by G. Bothun.
We decided to survey a region extending 3\fdg0 in radius (28.3 deg$^2$) centered on
$\alpha_{\rm 2000}=1^h08^m00^s, \delta_{\rm 2000}=-73\degr10\arcmin00\arcsec$ for the SMC, and a region extending 3\fdg5 in radius (38.5 deg$^2$) centered on $\alpha_{\rm 2000}=5^h18^m00^s, \delta_{\rm 2000}=-68\degr45\arcmin00\arcsec$ for the LMC. These areas are shown in Figs.~\ref{fig:SMC} and
~\ref{fig:LMC}, respectively. These regions encompass all of the HII regions shown in the parking-lot camera
H$\alpha$ images, as well as cataloged OB associations shown by Hodge (1985) and Lucke \& Hodge (1970).
Note that the area to be surveyed in the SMC will be $3\times$ larger than that covered by Massey \& Duffy (2001).
The choice of telescopes and instruments was straightforward. As far as the imaging was concerned, one thing we had learned from the Massey \& Duffy (2001) survey was that good image scale is crucial in dealing with the crowding that often characterizes the location of massive stars in the Magellanic Clouds. Thus, we would rather deal with more fields (and a longer project) than accept a pixel scale of 1-2\arcsec. At the same time, the larger the field of view (FOV), the better for keeping the project finite. We realized that the Las Campanas 1~m Swope would provide a modestly large FOV (0.094 deg$^2$) with good image scale (0\farcs435 pixel$^{-1}$). Despite this, it would require $\sim 450$ fields to cover the LMC, and $\sim 330$ for the SMC, allowing for some overlap between fields. We estimated that we would achieve
adequate signal-to-noise in a few minutes per filter, and thus if we planned on 6 fields per hour (optimistic as it turned out) the survey would require ``only" 130 hours, or about 20 nights of observing, a large but not impossible amount. (The camera has since been replaced with an even better system,
with a 0.21 deg$^2$ FOV, a factor of 2.2$\times$ larger, more than offsetting our optimism about the number of fields per hour; see below.) As for the spectroscopy, the faintness ($V\sim 16$) and need for good signal-to-noise at reasonable exposure times led us to the Magellan telescopes, and in particular the Magellan Echellette Spectrograph (MagE) implemented on the 6.5~m Clay telescope (Marshall et al.\ 2008).
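For reference, the field and time budget quoted above can be reproduced with a few lines of Python; the 10\% allowance for overlap between adjacent fields is our assumption, not a figure taken from the observing plan.
\begin{verbatim}
import math

fov      = 0.094                    # Swope field of view, deg^2
area_lmc = math.pi * 3.5**2         # 38.5 deg^2 (3.5 deg radius)
area_smc = math.pi * 3.0**2         # 28.3 deg^2 (3.0 deg radius)
n_lmc = round(1.1 * area_lmc / fov)      # ~450 fields
n_smc = round(1.1 * area_smc / fov)      # ~330 fields
hours = (n_lmc + n_smc) / 6.0            # at 6 fields per hour
print(n_lmc, n_smc, round(hours))        # 450 331 130
\end{verbatim}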
\subsection{Imaging}
Our recent WR surveys of M33 (Neugent \& Massey 2011) and M31 (Neugent et al.\ 2012a) used the ``standard" {\it WC}, {\it WN}, and {\it CT} WR filters developed and first used by Armandroff \& Massey (1985). These filters have bandpasses that are $\sim$50~\AA\ wide, and are centered on C III $\lambda 4650$ ({\it WC} filter), He~II $\lambda 4686$ ({\it WN} filter), and continuum at $\lambda 4750$ ({\it CT} filter).
We needed new versions of these that would fit the 3-inch $\times$ 3-inch filter holder of the Swope, and we had 2-cavity interference filters made to similar specifications by the Andover Corporation. The filters
have very good transmission ($\sim$80\% at their peaks) and full-widths at half-maximum (FWHM) bandpasses of 51-55~\AA, with 10\% band widths of $\sim$90~\AA. We note that one problem with this filter set is that foreground M dwarfs can be picked up as WR candidates as they have a strong TiO absorption band at $\lambda 4760$, which falls in the continuum band. We were confident, however, that 2MASS colors would allow us to eliminate the majority of such spurious candidates in the Magellanic Clouds, an option we did not have in the case of M31 and M33, as those candidates were much fainter than sources in the 2MASS catalog.
In Fig.~\ref{fig:test} we superimpose the transmission curves of these filters on two fluxed spectra of newly discovered WR stars we will discuss later in this paper.
Our inaugural imaging run took place on 9 nights 2013 Sept 21-29 (UT) using the SITe\#3 CCD camera. The CCD was a $2048\times4096$ device (formatted to $2048\times3150$ to avoid vignetting) with 15$\mu$m (0\farcs435) pixels; each exposure thus covered 14\farcm8$\times$22\farcm8. On the first night we observed 18 fields with
120 s exposures through each of our filters. The seeing was poor, 1\farcs6-3\farcs0. Examination of the data in real time cast doubt on whether we were going sufficiently deep: we found that we were detecting the faintest and weakest-lined of the known WNs but not with extremely high confidence. After the first night we increased our exposure times by a factor of 2.5 (i.e., by 1 mag) to 300 s per filter, and repeated all of the first-night fields with these longer exposure times, finding that we were now extremely sure of our detections for all of the previously known WRs. We were closed due to high winds and poor seeing for most of the second night, and had some unusually poor seeing and clouds on the sixth night, but for the remainder of the time we had good conditions. Our typical seeing was 1\farcs2-1\farcs9, with a median value of 1\farcs5. Occasionally the seeing was worse than 2\arcsec, in which case we repeated the field on another occasion.
The observing run substantiated that our goal of covering all of the Magellanic Clouds is practical, and we can now provide a more accurate estimate of what will be needed. In addition to 15 minutes of exposure per field, our overhead was significant, and dominated by the relatively long read-out times, 127 s per image. Unfortunately we discovered that we could not slew the telescope to the next target during the final readout without introducing streaks into our data, and so the total time spent on reading out the chip was 6.4 minutes per field. Slewing and setting up the guider did not take much time, usually just 2-3 minutes per field, and so in practice we managed about 2.5 fields per hour.
We also stopped to focus and check the telescope
pointing 2-3$\times$ a night, each requiring about 10 minutes. In the end we obtained satisfactory data on 51 fields in the SMC, and 76 fields in the LMC (127 in total), about 15-17\% of our desired coverage for each galaxy, in essentially 6 good nights (eliminating the times of poor seeing, high winds and clouds) observing about 8.5 hours on the Clouds each night. With the new camera, we will have a $2.2\times$ improvement in areal coverage and much shorter readout times, and we foresee another $\sim$15 clear nights will be needed to complete the survey. We have been fortunate in already being assigned 10 nights for the 2014 Magellanic Cloud observing season.
We took $\sim$10 bias frames and 3 dome flats per filter daily,
and tried to obtain good sky flats through at least one of the filters
each evening. We reduced the data nightly using IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.}. We found that the biases were indeed useful, as
there was a 12 ADU turn-up on the left side of the images that the bias frames removed well.
The dome flats did an excellent job of removing the pixel-to-pixel variations and large-scale donuts due to dust specks on the dewar window, but the sky flats were necessary to remove a 4-5\% gradient in the y (NS) direction due to the dome flats not illuminating the chip quite the same way as the sky.
Prior to running the image subtraction software, we needed to carefully align the images. In order to determine accurate shifts we used IRAF's {\it daofind} to find sources that were 10$\sigma$ or higher above the noise of the background. We then matched the sources between the three filters with the assumption that the shifts were small, finding the median
pixel shifts. Next, we shifted the {\it WN} and {\it WC} images to align with the {\it CT} images using a cubic spline interpolation. We also added an accurate world coordinate system to the headers of each shifted image using the ``astrometry.net" software (Lang et al.\ 2010). At this point the images were ready for image-subtraction.
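A minimal sketch of the shift determination is given below; the actual matching was done with IRAF tasks, and the function here only illustrates the nearest-neighbor pairing and median-offset idea under the small-shift assumption stated above (names are illustrative).
\begin{verbatim}
import numpy as np

def median_shift(xy_ref, xy_img, tol=3.0):
    # xy_ref, xy_img: (N,2) arrays of source centroids in pixels.
    # Pair each image source with its nearest reference source and
    # return the median (dx, dy); tol rejects spurious pairings.
    dx = xy_img[:, None, 0] - xy_ref[None, :, 0]
    dy = xy_img[:, None, 1] - xy_ref[None, :, 1]
    d2 = dx**2 + dy**2
    j = np.argmin(d2, axis=1)
    i = np.arange(len(xy_img))
    ok = d2[i, j] < tol**2
    return np.median(dx[i, j][ok]), np.median(dy[i, j][ok])
\end{verbatim}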
For the image subtraction, we used the High Order Transform of PSF And Template Subtraction ({\it HOTPANTS}) code written by Andrew Becker, described briefly in Becker et al.\ (2004) and in more detail on his webpage\footnote{http://www.astro.washington.edu/users/becker/v2.0/hotpants.html}, as Carlos Contreras (2013, private communication) reported excellent results with this code on images taken with the Swope. Fig.~\ref{fig:wow} shows the results of the image subtraction on one of our SMC fields. There are two known WRs in the image, SMC-WR6 (AzV 332 = Sk 108) on the left, and SMC-WR12 on the right. The former is the brightest ($V=12.3$) WR in the SMC (other than HD 5980, an unusual LBV/WR object), while the latter is the faintest ($V=15.5$) and among the weakest-lined WRs; see Table 1 in Massey et al.\ (2003). Both WRs show up unambiguously in the difference frame at the bottom. The bright WR is not close to saturation (on our {\it WN} frame it is 1.5 mag fainter than where non-linearity would become a significant issue), while the faint WR has a nominal signal-to-noise on the resultant image of 200.
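Schematically, each on-band frame was then differenced against its continuum frame with a call along the following lines; the file names are placeholders, and we show only the standard {\it HOTPANTS} input/template/output arguments, not the parameter tuning actually adopted for these data.
\begin{verbatim}
import subprocess

# Difference the on-band (WN) frame against the continuum (CT)
# template; all other HOTPANTS parameters are left at their
# defaults in this illustrative call.
subprocess.run(["hotpants",
                "-inim",   "field_WN.fits",
                "-tmplim", "field_CT.fits",
                "-outim",  "field_WN-CT.fits"],
               check=True)
\end{verbatim}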
After the subtracted images were produced, each of the resultant images was examined for WR candidates. All previously known WRs in our frames were readily detected except for the most crowded members of R136. Candidates were classified into three classes depending upon how certain we felt about the detection and the significance of the magnitude differences we computed from photometry of the frames. The list was then checked against known
objects in the Clouds and 2MASS photometry; this allowed us to eliminate many planetary nebulae (which will
show He~II $\lambda 4686$ nebular emission if the radiation field is hard enough), known Of-type stars, and very red stars which we expect to show up because of the TiO absorption feature in the {\it CT} filter.
In practice the process of identifying candidates from the subtracted images was somewhat iterative. We had an opportunity to observe a few candidates spectroscopically within a few weeks of our imaging run, and selected 8 candidates that spanned a range in our confidence level to help us evaluate our search. We were greatly encouraged that most but not all of these proved to be new WR stars (as described below),
substantiating that our project was not in vain.
\subsection{Spectroscopy}
For our spectroscopic followup, we used MagE (Marshall et al.\ 2008) on the Clay 6.5-m Magellan telescope.
We had previously been assigned two nights, UT 2013 Oct 16 and Dec 14,
as part of our spectroscopic survey of massive stars
in the Magellanic Clouds (see, e.g., Neugent et al.\ 2012b).
Francesco Di Mille kindly provided a few additional spectra
obtained on UT Oct 18 and Dec 16, 2013, during engineering time.
Exposure times were typically 600 s, and were followed by a 3 s Th-Ar exposure to provide wavelength calibration.
MagE provides wavelength coverage from the atmospheric cut-off ($\sim$3200~\AA) through 1$\mu$m at a resolving power of 4100 with a 1" slit.
Because of the large wavelength coverage, obtaining flat fields with sufficient
signal-to-noise so as to not degrade the actual spectra is very tricky. We have found that we can achieve high signal-to-noise (100-200) by {\it not} flat fielding, owing to the intrinsic flatness (uniformity) of the chip (see discussion in Massey \& Hanson 2013). The spectra were extracted using Jack Baldwin's ``mtools" IRAF routines, and wavelength calibrated
and fluxed with the usual IRAF echelle reduction tasks.
Spectrophotometric standards were observed in order to remove the blaze function of each order and to provide flux calibration. Further details of our reduction procedure can be found in Massey et al.\ (2012).
During the October observing we confirmed 5 new WRs in the LMC; the other 3 candidates we observed were planetary nebulae (PNe) or red stars.
By the December observing date we had completed our inspection of the remaining fields, and were prepared with a list of 7 priority 1 candidates, 25 priority 2 candidates, and 13 priority 3 candidates. All of the priority 1 and 3 candidates were in the LMC, while 9 of the 25 priority 2s were in the SMC. Other candidates had been eliminated by literature searches as either being Of-type stars, PNe, or red stars. Of the 7 priority 1s, 3 proved to be Of-type stars, and the other 4 newly found WRs, bringing the total number of newly found WRs to 9. None of the priority 2s or 3s proved to be WRs, although some proved to be Of-type stars. We describe these and the other interesting stars in the following section. During the December observing time we also re-observed the five new WRs we had found in October, as all had shown absorption lines in addition to emission, and we wanted to further investigate the nature of these stars.
\section{Spectral Classifications}
\label{Sec-results}
Our study has identified 9 new WRs in the LMC (one WO and eight WNs), along with 5 previously unknown Of-type stars in the LMC plus one in the SMC. Two of these turn out to be members of the ``Of?p" class of magnetic O stars. In addition, we found one early-type O star (not of Of-type) accidentally. We re-observed one known B[e] star, and were struck by the presence of broad He~II $\lambda 4686$ emission; we argue below that this may be a B[e] + WN binary. We had no difficulty recovering most of the previously known WRs in our LMC fields (119 WRs) and SMC fields (12 WRs); the only exceptions were the very crowded WRs in the R136 region. We also ``rediscovered" 9 known Of stars along with several PNe.
We list the newly confirmed WRs
in Table~\ref{tab:WRs} and the other new discoveries in Table~\ref{tab:Ofs}. We provide cross references to
Massey (2002), who gives {\it UBVR} photometry for many of the massive stars in the Magellanic Clouds,
along with cross references to the near-IR catalog of Kato et al.\ (2007). The Kato et al.\ (2007) survey goes a few magnitudes deeper than 2MASS, but more to the point has considerably greater spatial resolution. We refer to these stars by our field designation; i.e., LMC172-1 is the first star identified in field 172 in the LMC. When our survey is complete we plan to provide a more rational designation, but for now these serve as useful short identifiers.
The coordinates listed in the table are on the ICRS system and are good to a fraction of an arcsecond. We are indebted to Brian Skiff for helping us refine these. We have provided finding charts in Fig.~\ref{fig:FCs} for the five stars that were crowded; the rest should be easily
identified from their coordinates.
We include our {\it CT} magnitude, the zero-point of which has been set from the $V$-band magnitudes of the known WRs. We do not expect this to be particularly accurate, but in nearly all cases we also have actual $V$-band magnitudes from either Massey (2002) or Zaritsky et al.\ (2004), and these usually agree to within 0.1~mag with the {\it CT} mag. We have computed the absolute magnitudes assuming a distance of
50~kpc for the LMC and 59~kpc for the SMC (van den Bergh 2000), corresponding to true distance moduli of 18.5~mag and 18.9~mag, respectively. Except for LMC 174-1, the faintest star in our sample (with $V=17.2$), we adopt an average extinction $A_V$ of 0.4~mag and 0.3~mag, respectively, for the LMC and SMC stars, based upon Massey et al.\ (1995). These values are consistent with the reddenings we infer from our fluxed spectra, although we note that modeling would be useful for improved estimates of the reddenings. For LMC 174-1, the fluxed spectrum indicates an additional 1.2~mag of extinction, as noted in Table~\ref{tab:WRs}.
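As a worked example of these adopted values, for LMC 174-1 the correction gives
$$
M_V = V - 5\log_{10}\left(\frac{d}{10\,{\rm pc}}\right) - A_V
    = 17.2 - 18.5 - (0.4+1.2) \simeq -2.9,
$$
in line with the $M_V\sim-3$ of the other weak-lined stars discussed below.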
\subsection{Spectral Classifications: Newly Found WRs}
We classified the WRs following the same premises as in our M33 and M31 studies, i.e., using the criteria originally introduced by Smith (1968a, 1968b) and extended to earlier and later types by van der Hucht et al.\ (1981).
Before discussing the stars individually we need to comment on
one subtlety of the classification of early WNs. Many of the previously known WNs in the LMC are of WN3 subtype (BAT99), in which N~IV $\lambda 4058$ is ``much weaker" than N~V $\lambda \lambda 4603, 19$. The only earlier type defined at the time of BAT99 was WN2, in which N~V (along with N~IV) is absent. Van der Hucht (2001) introduced the intermediate WN2.5 type in which N~V is present but N~IV is absent, at some unspecified signal-to-noise.
If we were to classify our 8 WNs in that way
they would all be of WN2.5 type. However, examination of our old spectra (Conti \& Massey 1989) of
LMC WRs classified as WN3 confirms that these too would be classified as WN2.5.
So, for consistency with older works, we eschew the WN2.5 class and refer to these stars as WN3.
\subsubsection{WO! Another One!}
\paragraph{LMC195-1: WO2.} This star is a rare find, a WO-type Wolf-Rayet, only the third known in the LMC. Its spectrum is shown in the upper panel of Fig.~\ref{fig:LMC1951}. WO subtypes are based primarily on the ratio of
the O~VI $\lambda \lambda 3811,34$ to O~V $\lambda 5590$ lines (Crowther et al.\ 1998). We measure an equivalent width ratio of 4.5, making this a WO2.
The star is located within the LH41 association, home to S Dor and R85 (two LBVs), Br 21 (B1 Ia+WN3 star), and numerous O stars and B supergiants and even a rare F-type supergiant. (See Table 1 of Neugent et al.\ 2012b.)
As can be seen from the finding chart in Fig.~\ref{fig:FCs}, our new WO2 star is just 9\arcsec\ north of LH41-1042, the WO4 star we discovered two years ago (Neugent et al.\ 2012b). We were concerned for a moment that we had possibly observed the wrong star and reobserved LH41-1042, but we can reject this for three reasons. First, we made careful use of a finding chart at the telescope. Second, the telescope coordinates of the spectra of the objects observed both before and afterwards show small consistent offsets with the intended coordinates, and the observation of LMC195-1 shows the same offset to better than 1". And third, although both stars are classified as WO, their spectra are not identical by any means (hence the difference in the WO subtype). For comparison, we show the spectrum of LH41-1042 in the lower panel of Fig.~\ref{fig:LMC1951}.
\subsubsection{New WNs}
Our eight other newly found WRs are all WN3 stars. All show absorption lines in addition, but in only two cases are we convinced that these are likely binaries.
\paragraph{LMC079-1, LMC170-2, LMC172-1, LMC199-1, and LMC277-2: ``WN3+O3~V."} The spectra of these 5 stars are essentially indistinguishable, and are shown together in Fig.~\ref{fig:O3s}. We have selected a star (LMC 277-2) with high signal-to-noise data to illustrate the blue part of the spectrum in Fig.~\ref{fig:O3}. The stars have strong N~V $\lambda\lambda 4603,19$ and He~II $\lambda 4686$ emission, and exhibit absorption lines characteristic of an early O-type star. We classify the WR components as WN3, as no N~IV $\lambda 4058$ is present, although N~V $\lambda 4945$ emission is present. He~II $\lambda 6560$/H$\alpha$ is also strongly in emission. The absorption spectra consist of Balmer lines and He~II. Despite our good signal-to-noise (60-120 per 1~\AA\ resolution element), there is no trace of He~I $\lambda 4471$ or weaker He~I lines, and so we classify the absorption as O3~V\footnote{Walborn et al.\ (2002) extended the O-type classification to the O2 type based upon the relative strengths of N~IV $\lambda 4058$ and N~III $\lambda 4634,42$. Since none of the stars here have either line, we do not attempt to distinguish O2s from O3s; we instead use ``O3" inclusively.}. The lack of emission from
Si IV $\lambda \lambda 4089, 4116$ or N~IV $\lambda 4058$ argues for the dwarf luminosity class, while the spectral subtype is due to the lack of He~I and strong He~II $\lambda 4200$ and $\lambda 4542$ absorption.
All five stars have absolute visual magnitudes that are fainter than either WN3s or O3~Vs as we argue further in Section~\ref{Sec-sum}, where we more fully discuss the nature of these objects. Here we simply note that we obtained a second observation for LMC079-1, LMC170-2, and LMC172-1 in order to see if the radial velocities of the emission and absorption lines varied in anti-phase. Of course, without three or more observations it is hard to evaluate our measuring errors, particularly with very broad emission lines and relatively weak absorption. We measured the radial velocities both by using the line centroids and by using cross-correlation. In the end, our results
were ambiguous, as the velocity shifts derived from the two methods did not agree well. Although we found small velocity shifts for these three stars, there was no indication that the absorption and emission moved in opposite senses. Additional radial velocity monitoring is planned for the next observing season.
\paragraph{LMC143-1: WN3+O8-9 III.} The spectrum is shown in Fig.~\ref{fig:LMC1431}. We find an emission-line spectrum characteristic of an early WN-type WR plus the absorption spectrum of a mid-to-late O-type star.
Strong N~V $\lambda\lambda 4603,19$ and He~II $\lambda 4686$ are in emission, along with He~II $\lambda 4542$, N~V $\lambda 4945$, He~II $\lambda 5412$, and He~II $\lambda 6560$/H$\alpha$ emission. The latter is split into a double peak by an absorption component. There is no N~IV $\lambda 4058$ nor N~III $\lambda \lambda 4634,42$ emission, and so again we classify the WR component as WN3. The absorption spectrum is dominated by the Balmer lines and He~I, with strong He~I $\lambda 4387$ and $\lambda 4471$. He~II $\lambda 4200$ may be barely present in absorption. Si~IV~$\lambda 4089$ is modestly in absorption but there is no sign of Si III $\lambda 4553$ despite our S/N of 150 per 1~\AA\ spectral resolution element. The presence of He~II emission makes exact classification of the O star uncertain, but given the strength of Si IV and lack of Si III we conclude the O star is roughly O8-O9 III. The giant classification follows from Si IV $\lambda 4089$ being half as strong as He~I $\lambda 4026$.
We also obtained two observations of this star. The emission and the absorption line velocity shifts are anti-correlated, as one expects for a double-lined binary, although the change is somewhat marginal compared to our estimated errors.
\paragraph{LMC173-1: WN3+O7.5~V.} We illustrate the spectrum of this star in Fig.~\ref{fig:LMC1731}. We again see an emission spectrum typical of an early-type WN plus the absorption component of a mid-O-type star. There is emission at N~V $\lambda \lambda 4603, 19$ and He~II $\lambda 4686$, as well as N~V $\lambda 4945$, He~II $\lambda 4860$/H$\beta$, He~II $\lambda 5412$, and He~II $\lambda 6560$/H$\alpha$. The absorption spectrum is that of an intermediate O-type, with
He~I $\lambda 4471$ being just a bit stronger than He~II $\lambda 4542$ and modest He~I $\lambda 4387$ being
present. We classify the O star as O7.5. N~III $\lambda 4511-17$ absorption is present, as we would expect for
an O7.5~V (e.g., see the spectrum of HDE~319703A illustrated in Sota et al.\ 2011), as well as modest N~IV $\lambda 4058$ absorption. The apparent lack of any N~III $\lambda \lambda 4634, 42$ emission argues that the star is a dwarf, which is consistent with its $M_V$.
We obtained two observations of this star, and in this case the absorption and emission radial velocities were clearly anti-correlated,
with a shift of $\sim$ -100 km s$^{-1}$ for the absorption and +250 km s$^{-1}$ for the emission.
\paragraph{LMC174-1: WN3+ early O.} The spectrum of this star is shown in Fig.~\ref{fig:LMC1741}. Its spectrum is very similar to that of the ``WN3+O3~V" stars discussed above, but is described separately here as the WR emission-line spectrum dominates, with only weak absorption present at H$\delta$ and He~II $\lambda 4200$ and $\lambda 4542$.
The WR component is again a WN3, with N~V $\lambda \lambda 4603, 19$, He~II $\lambda 4686$, N~V $\lambda 4945$, He~II $\lambda 5412$, and He~II $\lambda 6560$/H$\alpha$ all present. Nebular emission that was not completely subtracted is apparent at H$\beta$ as well as
[OIII] $\lambda \lambda 4959, 5007$, H$\alpha$, [NII] $\lambda 6584$ and [SII] $\lambda \lambda 6717, 31$.
If the O star is late enough to have He~I $\lambda 4471$ it is filled in with emission. We classify the system as WN3 + early O. Of the stars in the sample, it is the only one whose fluxed spectrum indicates significant reddening, with 1.2~mag more extinction than that of the other stars. Applying this correction leads to $M_V\sim -3.0$, very similar to that of the ``WN3+O3~V" objects discussed above.
\subsection{Other Interesting Stars}
As we have emphasized in our previous surveys for WRs, a critical test of completeness is whether or not the sensitivity is sufficient to detect even Of-type stars. We discuss this further in Section~\ref{Sec-completeness}, but here we are heartened by the fact that we not only recovered known Of-type stars but discovered new ones. Two of these are newly found members of the ``Of?p" class
(see, e.g., Walborn et al.\ 2010), while four others are normal Of-type supergiants. Our spectroscopy also accidentally found an early-type O4~V star. Finally, we comment upon a previously known B[e] star suggesting it might have a WN-type companion. All the stars discussed in
this section are listed in Table~\ref{tab:Ofs}.
\subsubsection{Two O8f?p Stars}
\paragraph{SMC159-2 and LMC164-2: O8f?p.}
The spectra of these two stars are shown in Fig.~\ref{fig:Ofs} in comparison with the Of-type supergiants discussed below.
Both of these stars have very strong He~II $\lambda 4686$ emission, along with
much weaker N~III $\lambda \lambda 4634,42$ and C~III $\lambda 4650$ emission. These emission line signatures
are characteristic of Of?p objects, a class introduced by Walborn (1972) to describe the peculiar spectra of
the Galactic O stars HD~108 and HD~148937, which show very strong He~II $\lambda 4686$ emission relative to that
of N~III $\lambda \lambda~4634, 42$, and C~III $\lambda~4650$ emission that is comparable to N~III.
Subsequent studies have shown that this class likely consists of magnetically-braked oblique rotators
(see discussion in Walborn et al.\ 2010). Both SMC159-2 and LMC164-2 are likely members of this class\footnote{We are indebted to our referee, Nolan Walborn, for suggesting that the Of?p classification should apply to these two stars.}.
Both stars show He~I $\lambda 4471$ absorption just a bit stronger than that of He~II $\lambda 4542$, making these both O8f?p.
SMC159-2 has a He I profile that is considerably broader than that of He II, while the reverse appears to be true for LMC164-2; this is
understandable given that He I may have a circumstellar component in Of?p stars. Finally, both
stars show narrow emission superposed on the lower Balmer absorption lines. These do not appear to be nebular as [OIII] is not present.
There is no nebulosity visible around SMC159-2 in the Digitized Sky Survey, and only a little around LMC164-2.
Inspection of the two-dimensional spectra confirms that the emission is not spatially extended around SMC159-2. In the case of LMC164-2, it is a little harder to tell as there are faint nebular lines present, including [O~III], but the fact that [O~III] subtracted out well in the reductions again suggests a local origin for the Balmer emission. We conclude that the Balmer emission is circumstellar. An alternative classification
of these stars as supergiants is ruled out by the lack of Si~IV $\lambda 4089$ absorption.
\subsubsection{Of-type Supergiants}
\paragraph{LMC104-2: O3.5~If + O6-7.} Strong N~IV $\lambda 4058$ and N~III $\lambda \lambda 4634,41$ emission are apparent, with
He~II $\lambda 4686$ showing a P Cygni profile. Strong He~II absorption is present along with He~I $\lambda 4471$ with He~I $\lambda 4387$ only very weakly present. We would
classify this as an O3.5 If + O6-7 pair. The O3.5 If classification comes about due to the similarity in strengths of the N~IV and N~III emission (Walborn et al.\ 2002). The strong Si IV $\lambda \lambda 4089, 4116$ emission combined with N~IV $\lambda 4058$ emission argues that the star is an early O supergiant. The
presence of He~I $\lambda 4471$ combined with the weak presence of $\lambda 4387$ would suggest an O6-O7
star is also present. The blue absorption component of He II $\lambda 4686$ is likely part of a P Cyg profile (typical of O3.5 If stars), rather
than signifying that the companion is a dwarf.
\paragraph{LMC156-1: O6 If.} This star has modest N~III $\lambda \lambda 4634,42$ and He~II $\lambda 4686$ emission. In combination with the absorption spectrum, we classify it as O6 If.
\paragraph{LMC173-2: O7.5 Iaf.} The star is clearly an Of-type star, and not a WR. Weak N~III $\lambda \lambda 4634, 41$ and He~II $\lambda 4686$ emission is present, but no other emission lines are seen. The spectrum is readily classified as O7.5 Iaf.
\paragraph{LMC174-4: O4 Ifc.} This is another Of-type star, with N~IV $\lambda 4058$, Si IV $\lambda \lambda 4089, 4116$, N~III $\lambda \lambda 4634,42$, C III $\lambda 4650$, He~II $\lambda 4686$, and H$\alpha$ emission. He~I $\lambda 4471$ is readily discernible, despite having an equivalent width of only 85~m\AA, thanks to our high S/N (200) spectrum. We classify the star as O4 Ifc, with the ``c"
due to the strong C III component. For comparison, see, e.g., the spectrum of CPD -47 2963 illustrated in Figure 3 of Sota et al.\ (2011).
\subsubsection{An Accidental Find of an O4~V Star}
\paragraph{LMC174-3E: O4 V.} We ``rediscovered" the Of-type star [ST92] 5-31 as part of our survey, calling it LMC174-3. This star was first classified by Testor \& Niemela (1998) as one of the very rare O3 If* stars, and more recently reclassified with the advanced notation ``O2-3(n)f*p" by Walborn et al.\ (2010). Owing to some confusion with the cross-identification and with the finding chart, we wound up not only re-observing this star, but also the fainter star located 3" to the east, which we designate LMC174-3E. The fainter star is not marked on the finding chart of Testor \& Niemela (1998), but the isophotes do show [ST92] 5-31 as elongated east and west. The NIR survey of Kato et al.\ (2008) identifies multiple sources at this position.
We classify it as O4~V.
\subsubsection{A B[e]+WN Binary?}
\paragraph{LMC174-5 = HD 38489 = LHA 120-S134 = Hen S 134: B[e]+WN?} This star is often referred to as a B[e] star (e.g., Zickgraf et al.\ 1986), and was likened to $\eta$ Car, S Dor and other LBVs by Shore \& Sanduleak (1982) and van Genderen (2001). The star is discussed extensively by Shore \& Sanduleak (1983), and a high dispersion photographic spectrum is shown and briefly discussed by Stahl et al.\ (1985)---see their Fig.~29. Technically B[e] stars should show absorption features
typical of a B star but in fact absorption has not been seen in this star, although as Conti (1997) remarks such absorption is often weak and difficult to detect. We ``rediscovered" this star, and thought it would be useful to take a modern spectrum of it, as the previous work has mostly been photographic. Our spectrum is similar to the one shown by Stahl et al.\ (1985), with strong Balmer emission and [Fe~II] and Fe~II lines, similar to what we observe in AE And and other ``hot" LBV candidates in M31 and M33; see Figs.~10-12 of Massey et al.\ (2007). However, what we found most intriguing was the broad He~II $\lambda 4686$ feature. The relevant section of our spectrum is shown in Fig.~\ref{fig:HD38489}. Although B[e] stars often display broad stellar wind features in addition to sharp emission lines (Zickgraf et al.\ 1985, 1986), a broad He~II $\lambda 4686$ feature is unique amongst such objects as far as we know. This feature was mentioned as a curiosity by others, e.g., Shore \& Sanduleak (1983), Stahl et al.\ (1985), and Zickgraf et al.\ (1986). We propose an alternative explanation, namely that this star is a B[e]+WN binary. We note that the star is an X-ray source (appearing in both the ROSAT All-Sky Bright Source Catalogue and XMM-Newton Serendipitous Source Catalog), unlike other bright Magellanic Cloud B[e] stars (e.g., R126 = HD 37974 = Hen S127). At $V\sim$12.0, this star serves as an example of how superficially we know the massive star content of our nearby extragalactic neighbors. Clearly a modern study of this star is warranted.
\section{Completeness}
\label{Sec-completeness}
The critical issue surrounding all searches for WRs is that of completeness: if the goal is to compare the number ratios of WCs and WNs, then being complete for the weaker-lined WNs is necessary. Such surveys are mostly {\it flux-limited}, but not entirely: a bright star with a small equivalent width will still have a substantial line flux but might be hard to detect because of the low contrast between the on-line and off-line exposures. A very faint star with a large equivalent width will have a small line flux but might be easily detectable from the high contrast, as long as the survey is sufficiently deep. In our case,
6 of our 9 newly discovered WRs are faint ($M_V\sim-3$) and have weak emission-line strengths,
sadly calling into question all previous ``complete" surveys. Of course, to keep this in perspective, we are talking about the addition of 6 unusually faint WRs compared to 131 previously known in the same fields, a 4\% issue, and within the 5\% completeness limit that we have estimated for our M33 and M31 surveys (Neugent \& Massey 2011, Neugent et al.\ 2012a).
Still, it will be interesting to see how this plays out as additional area is surveyed and how much the WC/WN ratio changes in these galaxies.
We can address the completeness issue somewhat more quantitatively by using photometry from our images to compare the ``detectability" of the newly found WRs with previously known WRs and the Of-type stars. In the upper two panels of Fig.~\ref{fig:completeness}
we plot the {\it CT} magnitude vs.\ the {\it WN}$-${\it CT} or {\it WC}$-${\it CT} magnitude differences for WN and WC stars, respectively. We have separated the plots for the LMC and SMC as the SMC WNs are often described as the ``weakest-lined" WRs known (see, e.g., Massey \& Johnson 1998). We are intrigued to find that the distinction still holds. Although the newly found WNs in the LMC have weak emission (in the sense that their {\it WN}$-${\it CT} values are not largely negative) and are fainter on average, the SMC WNs are in fact closer
to the {\it WN}$-${\it CT}=0 line. There are also other WRs already known in the LMC which are equally weak-lined and faint.
Given this, why have our new WRs not been discovered before now? In Section~\ref{Sec-intro} we emphasized the fact that many of the known WRs in the LMC were found accidentally--the result of spectroscopy rather than as part of systematic searches. We believe this underscores the necessity of the present survey.
We can examine these data another way. We have made the point above that the detectability will depend not only on
the emission line fluxes but also on the continuum magnitude. One way of combining these two is by considering the ``significance level" of the magnitude differences. This was first used by Armandroff \& Massey (1985), and discussed further by Massey \& Johnson (1998). If we detect a certain magnitude difference between the on-line WR filter and the off-line continuum filter, how significant is this difference compared to the photometric error associated with the difference? In other words, if the magnitude difference is $-0.5$~mag and the uncertainty in the magnitude is 0.05~mag, we would consider this a 10$\sigma$ detection. This is probably what limits the detectability when we visually examine the difference frames after our image subtraction. If the star is faint and the magnitude difference is small, we are less likely to detect the object, particularly if the residual image is comparable to the noise on the subtracted frame. Such a situation would result in a low significance level. At the same time if a brighter star has the same magnitude difference, the residual image will rise above the noise. Of course, the significance level will depend upon the details of the survey and the observing conditions: a 3$\sigma$ detection with one set of exposure times could be a 30$\sigma$ detection if the exposure time were increased a hundred fold, although the magnitude difference would remain the same. One should recall that we adjusted our exposure times to make the faintest and weakest-lined WNs known in the SMC easy detections.
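In code, the significance level is simply the on-line minus off-line magnitude difference divided by its propagated photometric error. A minimal sketch (with made-up numbers for illustration, not actual survey photometry) might read:
\begin{verbatim}
import numpy as np

# WN - CT magnitude differences and the photometric errors of each
# filter; the values here are illustrative only.
wn_minus_ct = np.array([-0.50, -0.10, -1.20])
err_wn = np.array([0.04, 0.04, 0.03])
err_ct = np.array([0.03, 0.05, 0.04])

# Errors add in quadrature; |-0.50|/0.05 recovers the 10-sigma example.
sigma = np.abs(wn_minus_ct) / np.hypot(err_wn, err_ct)
\end{verbatim}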
We show the ``significance" plots in the lower half of Fig.~\ref{fig:completeness}. We have had to use a log scale, as there are known WRs which have significance levels of over 100$\sigma$! We note that all of the newly found WRs have significance levels greater than 10$\sigma$. Yet, Of-type stars with much lower significance levels were readily detected. We believe this strongly argues that our survey is finding what we set out to find.
In both kinds of plots, the Of-type stars represent an extreme: their emission-line equivalent widths are closer to zero than those of most WRs, but they are also considerably brighter on average. We see that they occur at lower significance levels than do the vast majority of our WRs.
We do note that we failed to find several of the known WRs under the extremely crowded conditions of the R136 cluster. This region is unique within the Magellanic Clouds (and indeed within the nearby universe) and we are satisfied that under ``typical" crowding conditions we are complete, as shown, for instance, by the discovery of the new WO2 star in a crowded knot in LH-41.
\section{Discussion: the Nature of our Discoveries}
\label{Sec-sum}
We have described the first exciting results of our survey. Despite having covered only 15\% of the LMC and SMC, we have confirmed 9 new WR stars in the LMC (an increase of 6\%), and suggested that a well-known B[e] star, HD 38489, may be a tenth.
We have also identified 2 of the rare Of?p objects, 4 previously unknown Of supergiants, and an O4 dwarf.
We detected all of the known WRs in these fields (except the most crowded members of R136 in 30 Dor), as well as many previously known Of-type stars. We have argued both that our survey is going faint enough and that our detection method is sensitive to even the weakest-lined WRs.
The most remarkable aspect of our find is not the quantity of new WRs, but their characteristics. First, one of the newly found WRs is a WO star, only the third to be found in the LMC. It is of WO2 type, and is located just 9\arcsec\ (2.2 pc in projected distance) from the WO4 we found two years ago (Neugent et al.\ 2012b). The other 8 newly found WRs are WN3s that also show absorption lines. Two of these WN3s appear to have normal mid-to-late O-type companions and show radial velocity variations consistent with a binary nature. However, our most remarkable find has been that of the five stars we would naively classify as ``WN3+O3~V."
The presence of absorption in the spectrum of a WR star is nearly always indicative of binarity. If absorption is otherwise present, it is usually combined with P Cygni emission, as is the case for the very luminous and massive hydrogen-rich late-type WNs seen in the R136
cluster (Massey \& Hunter 1998, Crowther et al.\ 2010) and in NGC 3603 (Melena et al.\ 2008). Those are unevolved stars, but ones
so luminous that their winds mimic the emission found in evolved WRs. Possibly a closer analogy to our ``WN3+O3~V" objects are some of the SMC WN3 stars which show the absorption signature of an
early-type O star, although none as early as O3~V (Table 1 of Massey et al.\ 2003). None of these have been shown to be binaries.
However, they are all significantly more luminous than ours ($M_V\sim -3.6$ to $-5.5$) and therefore could simply be multiples viewed at unfavorable inclinations, or whose components are too widely separated to have detectable radial velocity variations.
What, then, is the nature of our ``WN3+O3~V" objects?
There are several reasons why these stars are unlikely to be actual WN3+O3~V pairs. First, O3~V stars are the ``rarest of the rare," as only the most luminous and massive stars start their lives in an O3~V phase. (Only stars of 50$M_\odot$ and larger obtain sufficiently high effective temperatures to be spectroscopically identified as O3 stars; see, e.g., Ekstr\"{o}m et al.\ 2012.) Thus, outside of the concentration of O3 stars in the very young and massive R136 cluster (Massey \& Hunter 1998), only about a dozen O3~V stars are known in the entire LMC (Skiff 2014). So, to have come across five O3~V stars that just all happen to be members of a binary system with WN3 stars seems rather far-fetched. A second, and perhaps stronger, argument is that the absolute magnitudes of these ``WN3 + O3~V" systems are all quite faint, with $M_V=-2.3$ to $-3.0$. But,
this is much fainter than an O3~V star ($M_V\sim -5.4$, Conti 1988), and in fact is even faint for a WN3
($M_V\sim -3.8$, Hainich et al.\ 2014). Thus, there would seem to be no way that these objects can truly consist of a WN3+O3~V pair. Third, we have two observations for three of these systems, and none show the radial velocity variations we might expect to find in binaries.
Finally, such a WN3+O3~V system would be very hard to understand from an evolution point of view: the O3~V component must be quite young ($<$1-2~Myr), while it would have taken several million years to have formed the WR component.
With five such objects (and likely a sixth) we are forced to conclude that we have discovered a hitherto unrecognized class of WRs,
stars that are under-luminous visually and whose winds are thin enough to show underlying absorption. For absorption lines to be present from the WR itself requires a different set of physical conditions in the stellar wind than is found in other WRs. Are these ``WN3+O3~V" stars even evolved objects?
A preliminary effort at modeling the optical data of LMC170-2 using CMFGEN (Hillier \& Miller 1998) shows that a good
match to the observed spectrum (emission {\it and} absorption) can be achieved with a model using a high effective temperature ($\sim$80,000-100,000 K) along with strongly enhanced helium (He/H$\sim$1.0 by number) and nitrogen ($\sim$10$\times$ solar) abundances,
indicative of advanced CNO processing. The models require mass-loss rates of $0.8-1.2\times 10^{-6} M_\odot$ yr$^{-1}$, corrected for clumping using a volume filling factor of 0.1. Fig.~\ref{fig:john} shows how successfully the best-fitting model matches the spectrum. The high effective temperatures would imply a bolometric luminosity of $\log L/L_\odot\sim$ 5.3-5.6. These physical parameters are all in accord with what we expect for LMC WN3 stars (Hainich et al.\ 2014), except for the mass-loss rate, which is lower than what we expect for a WN3 star by a factor of 3 (see Fig.~6 of Hainich et al.\ 2014), and more similar to what we would expect from O2-3~V stars (see, e.g., Massey et al.\ 2005).
However, none of these values are well determined by the optical data alone, as we lack lines arising from multiple ionization stages of the same species, severely hindering our ability to constrain the effective temperature. For instance, we detect He~II but not He~I, and we see N~V but not N~IV. We have applied for
{\it HST} time to obtain the UV data needed to better determine these values, as these will provide additional
ionization states (for instance, N~IV $\lambda 1718$), and key diagnostics of the stellar wind (e.g.,
C~IV $\lambda 1550$). The full modeling will be discussed once those data are obtained, or, if we are not
successful in securing UV data, once we have additional optical data. But the preliminary modeling does show that the observed spectra {\it can} be produced by a single object, and (if our effective temperatures are correct) that the bolometric luminosities, and hence the progenitor masses, would be
normal rather than small. Why the mass-loss rates are low, and how these stars evolved, remain unanswered questions for the present. Are they the hitherto unrecognized products of single star evolution, or are binary models needed to produce such objects? If the latter, then where is the spectroscopic signature of the companion?
The results from our first observing season have certainly justified in our minds the effort involved in our survey. As Figs.~\ref{fig:SMC} and \ref{fig:LMC} show, we have so far concentrated on where many WRs were already known. So it is possible that we will have a lower success rate next year in terms of finding new ones. On the other hand, we will not know until we look. Will any new ones be as intriguing as the ones we found this year? We look forward to more surprises.
\acknowledgements
We thank Nolan Walborn for his useful, constructive referee report, and in particular in suggesting several refinements to our classification of the O-type stars. Deidre Hunter and an earlier, anonymous referee also helped us
improve the presentation of the material. We also thank Carlos Contreras for his suggestion to try the image subtraction code {\it HOTPANTS} as well as his help in getting it to execute successfully. {\it HOTPANTS} was written by Andrew
Becker (University of Washington), who kindly makes the code freely available. We also thank
Dustin Lang (Carnegie Mellon University) for advice and support for the ``astrometry.net" software. We are grateful to Francesco Di Mille for obtaining several spectra for us during engineering time on the Clay.
This paper makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
We would additionally like to thank Brian Skiff for his literature search on potential WR candidates and, as always, the excellent support staff at Las Campanas for all of their help during our observing runs. This work was supported by the National Science Foundation under AST-1008020 and by Lowell Observatory's research support fund, thanks to generous donations by Mr.\ Michael Beckage and Mr.\ Donald Trantow. D.J.H. acknowledges support from STScI theory grant HST-AR-12640.01.
{\it Facilities:} \facility{Magellan:Clay(MagE spectrograph)}, \facility{Swope (SITe No. 3 imaging CCD)}
\section{Methods}
\linebreak\indent This research was supported in part by the Division of Materials Science and Engineering, U.S. Department of Energy and the Center for Nanophase Materials Sciences (CNMS), sponsored by the Division of Scientific User Facilities, U.S. Department of Energy. AS-S is grateful for a CNPq fellowship. AGSF acknowledges the FUNCAP and CNPq agencies. AGSF and JDN acknowledge the Rede Nanotubos de Carbono/CNPq and the FAPESPA agency.
\section{Brief survey of population models}
Evolution equations, describing population dynamics, are widely
employed in various branches of biology, ecology, and sociology.
The main forms of such equations are given by the variants of the
predator-prey Lotka-Volterra models. In this paper, we introduce
a novel class of models whose principal feature, making them different
from other models, is the functional dependence of the population
carrying capacities on the population species. This general class
of models allows for different particular realizations
characterizing specific correlations between coexisting species.
The functional dependence of the carrying capacities describes the
mutual influence of species on the carrying capacities of each
other, including the self-influence of each kind of species on
its own capacity. Such a dependence is, both mathematically and
biologically, principally different from the direct interactions
typical of the predator-prey models. Before formulating the general
approach, we give in this section a brief survey of the main known
models of population dynamics. This will allow us to stress the
basic difference of our approach from other models used for
describing the population dynamics in biology, ecology, and
sociology.
\vskip 2mm
(i) {\bf Predator-prey Lotka-Volterra model}
\vskip 2mm
The first model describing interacting species, one of which is
a predator with population $N_1$ and the other a prey with population $N_2$,
has been the Lotka-Volterra \cite{1,2} model
\be
\label{1}
\frac{dN_1}{dt} = -\gm_1 N_1 + A_{12} N_2 N_1 \; , \qquad
\frac{dN_2}{dt} = \gm_2 N_2 - A_{21} N_1 N_2 \; ,
\ee
where all coefficients are positive numbers. It is easy to show
that the solutions to these equations are bounded oscillating
functions of time.
\vskip 2mm
(ii) {\bf Predator-prey Kolmogorov model}
\vskip 2mm
The Lotka-Volterra model is a particular case of the predator-prey
Kolmogorov model \cite{3,4} that has the general form
\be
\label{2}
\frac{dN_1}{dt} = f_1(N_1,N_2) N_1 \; , \qquad
\frac{dN_2}{dt} = f_2(N_1,N_2) N_2 \; ,
\ee
under the conditions
$$
\frac{\prt f_1}{\prt N_2} \; > \; 0 \; , \qquad
\frac{\prt f_2}{\prt N_1} \; < \; 0 \; .
$$
This model is too general and requires specifications
for describing concrete cases.
\vskip 2mm
(iii) {\bf Generalized predator-prey Lotka-Volterra model}
\vskip 2mm
Generalizing the Lotka-Volterra model (\ref{1}) to multiple
species yields the equations
\be
\label{3}
\frac{dN_i}{dt} = \left ( \gm_i +
\sum_j A_{ij} N_j \right ) N_i \; ,
\ee
where all coefficients are real numbers \cite{5}. The signs of
the parameters can be different. When all $\gamma_i$'s are
positive, while all $A_{ij}$'s are negative, one gets the
competitive Lotka-Volterra equations whose behavior has been
analyzed in detail in Refs. \cite{6,7,8,9}.
\vskip 2mm
(iv) {\bf Replicator equations}
\vskip 2mm
These equations have the form
\be
\label{4}
\frac{dN_i}{dt} = \left ( f_i - \overline f \right ) N_i \; ,
\ee
where $f_i$ is the fitness of species $i$ and
$$
\overline f \equiv \sum_i f_i N_i
$$
is the average fitness characterizing the whole society \cite{5}.
The species populations are usually assumed to be defined on
a simplex, being normalized to a constant representing the
total fixed population
$$
\sum_i N_i = N = const \; .
$$
The $n$-dimensional replicator model is equivalent to the
$(n-1)$-dimensional Lotka-Volterra model (\ref{3}), to which
it can be reduced by a change of variables \cite{5,10}.
\vskip 2mm
(v) {\bf Jacob-Monod equations}
\vskip 2mm
These equations describe not coexisting species but rather
a single type of species of population $N_1$, like bacteria,
which feed on a nutrient of amount $N_2$. The nutrient
plays the role of the prey that is getting depleted being
consumed by the feeders and, at the same time, being supplied
into the system from outside according to the supply function
$f(N_2) = \alpha N_2 / (\beta + N_2)$. The equations read as
\be
\label{5}
\frac{dN_1}{dt} = f(N_2) N_1 \; , \qquad
\frac{dN_2}{dt} = -\gm f(N_2) N_1 ,
\ee
with all parameters being positive \cite{11}. As time increases,
$t \ra \infty$, the nutrient becomes depleted, $N_2 \ra 0$,
and the bacteria population reaches the stationary value
$N_1 = N_1(0) + N_2(0)/ \gamma$.
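This stationary value follows from the conservation law $d[N_1 + N_2/\gm]/dt = 0$ implied by Eqs. (\ref{5}), which is easy to confirm numerically. A minimal sketch (the parameter values and initial populations are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Jacob-Monod model, Eq. (5); alpha, beta, gamma are illustrative.
alpha, beta, gamma = 1.0, 0.5, 2.0
f = lambda N2: alpha * N2 / (beta + N2)      # nutrient supply function
rhs = lambda t, s: [f(s[1]) * s[0], -gamma * f(s[1]) * s[0]]
sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 4.0], rtol=1e-8)

# N1 + N2/gamma is conserved, so N1 -> N1(0) + N2(0)/gamma = 2.1
# as the nutrient N2 -> 0.
print(sol.y[0, -1], sol.y[1, -1])
\end{verbatim}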
The Holling type II equation \cite{12} takes into account that
predators, in order to consume prey, need to search for it, chase,
kill, eat, and digest. This is why predators attack not all prey
but a limited number of them, which saturates to a constant when
the prey density increases. Mathematically, the Holling equation
is analogous to the Jacob-Monod model.
\vskip 2mm
(vi) {\bf Verhulst logistic equation}
\vskip 2mm
The well known logistic equation
\be
\label{6}
\frac{dN}{dt} = \gm N \left ( 1 \; - \; \frac{N}{K} \right ) \; ,
\ee
where all parameters are positive, was suggested by Verhulst \cite{13}.
The constant $K$ is the carrying capacity. The solution to this
equation is the sigmoid function
$$
N(t) =
\frac{N_0 K e^{\gm t} }{K+N_0\left (e^{\gm t} - 1\right )} \; ,
$$
in which $N_0 \equiv N(0)$.
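As a quick consistency check, the sigmoid can be compared with a direct numerical integration of Eq. (\ref{6}); the parameter values in the sketch below are arbitrary illustrations:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

gamma, K, N0 = 0.8, 100.0, 5.0
t = np.linspace(0.0, 15.0, 300)
sol = solve_ivp(lambda t, N: gamma * N * (1.0 - N / K),
                (0.0, 15.0), [N0], t_eval=t, rtol=1e-8, atol=1e-10)

# Closed-form sigmoid solution of the Verhulst equation.
exact = N0 * K * np.exp(gamma * t) / (K + N0 * (np.exp(gamma * t) - 1.0))
assert np.allclose(sol.y[0], exact, rtol=1e-5)
\end{verbatim}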
\vskip 2mm
(vii) {\bf Hutchinson delayed logistic equation}
\vskip 2mm
If one interprets the term inside the brackets in Eq. (\ref{6}) as
an effective reproductive rate, then, as Hutchinson argued \cite{14},
it could be delayed in time, which results in the equation
\be
\label{7}
\frac{dN(t)}{dt} = \gm N(t) \left [ 1 \; - \;
\frac{N(t-\tau)}{K} \right ] \; ,
\ee
in which $K$ is a fixed carrying capacity. The solution to this
equation gives additional oscillations superimposed on the
logistic curve.
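Numerically, the delay is handled by keeping a history buffer. The sketch below (a plain Euler scheme, for illustration only) reproduces the classical behavior of Eq. (\ref{7}): the equilibrium $N = K$ is stable for $\gm\tau < \pi/2$ and gives way to sustained oscillations for longer delays:
\begin{verbatim}
import numpy as np

# Euler integration of the Hutchinson equation with constant history.
gamma, K, tau, dt = 1.0, 1.0, 2.0, 1e-3  # gamma*tau > pi/2: oscillations
lag, steps = int(tau / dt), 20000
N = np.full(steps + lag, 0.1 * K)        # first `lag` entries: history
for i in range(lag, steps + lag - 1):
    N[i + 1] = N[i] + dt * gamma * N[i] * (1.0 - N[i - lag] / K)
\end{verbatim}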
\vskip 2mm
(viii) {\bf Generalized delayed logistic equations}
\vskip 2mm
There are many variants generalizing the delayed logistic
equation (\ref{7}), which can be found in Refs. \cite{15,16,17}. For
example, the multiple-delayed equation
\be
\label{8}
\frac{dN(t)}{dt} = \gm N(t) \left [ 1 \; - \sum_j \;
\frac{N(t-\tau_j)}{K_j} \right ] \; ,
\ee
where the carrying capacities $K_j$ are positive constants.
All such equations are usually applied to single-species
systems of population $N$. The multiple carrying capacities
$K_j$ in Eq. (\ref{8}) correspond to different processes of the
same single species. The logistic equations of the above type,
whether with delays or without them, do not describe the
possible coexistence of several species. When such equations
are generalized to the case of several species, one comes back
to the generalized Lotka-Volterra predator-prey model (\ref{3}).
\vskip 2mm
(ix) {\bf Peschel-Mende hyperlogistic equation}
\vskip 2mm
In order to take into account the accelerated growth of population,
occurring, for instance, for the human world population, Peschel and
Mende \cite{18} extended the standard logistic equation by introducing
two additional positive powers $m$ and $n$, getting the equation
\be
\label{8a}
\frac{dN}{dt} = \gm N^m
\left ( 1 \; - \; \frac{N}{K} \right )^n \; .
\ee
The solution to this equation could be reasonably well fitted to the world
population dynamics. The form of the solution is a slightly modified sigmoid
curve, with the population never surmounting the carrying capacity $K$.
For $m=1$ and $n=1$, the Verhulst logistic equation (\ref{6}) is recovered.
\vskip 2mm
(x) {\bf Hyperlogistic time-delay equations}
\vskip 2mm
The straightforward extension of the Peschel-Mende hyperlogistic equation
is the hyperlogistic time-delay equation
\be
\label{8b}
\frac{dN(t)}{dt} = \gm N^m(t)
\left [ 1 \; - \; \frac{N(t-\tau)}{K} \right ]^n \; ,
\ee
which can also be treated as a generalization of the Hutchinson delayed
logistic equation (\ref{7}). This time-delay equation is capable of
simulating a population that can essentially surmount the carrying
capacity $K$ for a limited period of time, then dropping below it
subsequently \cite{19}.
\vskip 2mm
(xi) {\bf Singular Malthus equations}
\vskip 2mm
All equations enumerated above produce bounded solutions. In some cases,
the population dynamics seems to follow a law ending with divergent
solutions. The well known Malthus equation \cite{20} gives the exponential
population growth. But sometimes, the population dynamics develops a
super-exponential behavior, diverging at a finite time $t_c$ according to a power law
of the time remaining until the singular time $t_c$.
The simplest way of modeling such a behavior is by the equation
\be
\label{8c}
\frac{dN}{dt} = \gm N^m \; ,
\ee
with the power $m \geq 1$. This is a direct generalization of the Malthus
equation, capturing the positive feedback of the population on the growth rate:
the larger the population, the higher the growth rate. For $m=1$, one recovers
the usual Malthus equation with the exponential solution. When $m > 1$, the
solution to the equation (\ref{8c}) is the power law
$$
N(t) = \frac{C}{(t_c-t)^{1/\ep}} \; ,
$$
where
$$
C \; \equiv \; \frac{1}{(\ep\gm)^{1/\ep}} \; , \qquad
\ep \equiv m - 1 \; ,
$$
$$
t_c \; \equiv \; \frac{1}{N^\ep_0\ep\gm} \; , \qquad
N_0 \equiv N(0) \; .
$$
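These relations are straightforward to verify numerically. With the illustrative values $m = 2$, $\gm = 1$, $N_0 = 1$, one has $\ep = 1$, $C = 1$, and $t_c = 1$, so that $N(t) = 1/(1-t)$:
\begin{verbatim}
from scipy.integrate import solve_ivp

m, gamma, N0 = 2.0, 1.0, 1.0
eps = m - 1.0
t_c = 1.0 / (N0**eps * eps * gamma)              # here t_c = 1
sol = solve_ivp(lambda t, N: gamma * N**m,
                (0.0, 0.99 * t_c), [N0], rtol=1e-10)

# Exact value at t = 0.99*t_c is 1/(t_c - t) = 100; the solution
# grows hyperbolically as the singularity is approached.
print(sol.y[0, -1])
\end{verbatim}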
The solution diverges hyperbolically at the critical time $t_c$. Such
strongly singular solutions were first discussed by von Foerster
et al. \cite{21} and applied to rationalize the super-exponential growth
of the human world population \cite{22,23,24,25}, population dynamics and
financial markets \cite{26,27,28}, material failures and
earthquakes \cite{29,30}, climate and environmental changes \cite{31,32,33,34},
and dynamics of other systems \cite{35,36,37}. In ecology, the correlation
between population density and the per capita growth rate is known as the
Allee effect \cite{38}. This feedback between the population density and
the per capita growth rate can lead to an increase of the effective
growth rate for sufficiently large populations, or to a rate
decrease and species extinction for small-density populations \cite{38,39,40,41}.
In addition to the differential equations describing population
dynamics, there exist as well difference equations \cite{42,43} and
integro-differential equations \cite{44,45}. There are more complicated
equations characterizing several factors, such as the population density,
mass or weight dependence of individual members of different species,
the dynamics of available food, and so on \cite{44,46,47}. Some works study the
influence of available information on population dynamics \cite{48}. It is also possible
to investigate the spatial dependence of populations \cite{49}. More details
on these and other types of equations can be found in the review articles
\cite{50,51,52,53}.
We do not consider here the complications caused by the desire to take into
account many various features of the studied populations.
Our aim here is different: we concentrate our attention on the influence of the
functional dependence of carrying capacities on population densities. Since
this idea is rather new, it is necessary, first of all, to study the related
effects for simpler equations, without overloading them by secondary
specifications. Once the main influence of the functional carrying capacities
is understood, it will be possible to complicate the equations by taking into
account more and more mechanisms and specificities. For the same reason,
the parameters characterizing the interactive species are treated as fixed.
In reality, the characteristics of each given biological species vary in general with
the age of the individuals composing the group. However, it is always admissible to
divide the populations into different age ranges characterized by similar
birth and death rates. Another possibility is to consider each species being
characterized by effective averaged parameters, which corresponds to what is
called the mean-field approximation \cite{36,37,54}.
In the dynamics of any population, one can distinguish several time scales.
The shortest time scale is the {\it interaction time} $t_{int}$, describing
interactions between individuals. This time is much shorter than the
{\it observation time} $t_{obs}$, during which the population is investigated.
In order that the studied species could be characterized by fixed parameters,
such as birth-death rates, the observation time should be shorter than the
{\it variation time} $t_{var}$ during which individuals experience noticeable
changes related, e.g., to their age. In this way, we keep in mind the situation,
when
$$
t_{int} \ll t_{obs} \ll t_{var} \; .
$$
Then all population parameters, except the carrying capacity, can be kept fixed.
Our main point is that the carrying capacity can vary due to mutual interactions
of individuals, hence it varies during the interaction time and this variation
needs to be taken into account.
The standard situation in treating the population evolution by
means of the equations of the above types is that the carrying
capacities are kept as {\it fixed constant parameters}. In the
following Section 2, we propose an approach where the carrying
capacities are functionals of the species populations. This
makes it possible to drastically extend the applicability of
the population-evolution equations to various situations
describing the regimes that were unavailable with other models.
The basic idea of the approach has been formulated in our
previous papers \cite{39,40}, where some particular models were
considered. Now, in Section 2, we propose a general framework
for generating a large class of such models. In Sections 3 and 4,
we consider particular variants of the suggested equations,
corresponding to those of Refs. \cite{39,40}. The difference from
the previous works is three-fold. First, we simplify the
consideration by a convenient choice of the scaling for the
terms describing the carrying capacities, which allows us to make
a more straightforward classification of different dynamical regimes.
Second, we emphasize the conditions under which solutions arise that
are characterized by extreme events, such as the finite-time death
and finite-time singularity of the species. Third, we suggest a new
interpretation for an extreme event such as the finite-time
singularity. The novel interpretation is based on the leverage effect
and considers the singularity as a manifestation of an evolutional
boom followed by a crash. Our considerations are phrased for
applications to the development, growth and possible collapse of
biological as well as human societies, as they both follow similar
dynamics with analogous underlying mechanisms \cite{37}.
\section{Population evolution with functional carrying capacity}
The idea that the carrying capacity may be not a constant but a function
of population fractions has repeatedly appeared in the literature in the
form of general discussions. In Sec. 2.1, we give a historical overview
of these ideas that provide a firm justification for their mathematical
representation in Sec. 2.2 and in the following sections.
\subsection{General meaning of functional carrying capacity}
The carrying capacity of a biological species in an environment is generally
understood as the maximum population size of the species that the environment
can sustain indefinitely, given the food, habitat, water and other necessities
available in the environment. In population biology, carrying capacity is
defined as the environment's maximal load \cite{Hui2006}, which is different
from the concept of population equilibrium. Historically, carrying capacity
has been treated as a given fixed value \cite{Zimmerer1994, Sayre2008}. But
then, it has been understood that the carrying capacity of an environment may
vary for different amounts of species and may change over time due to a variety
of factors, including food availability, water supply, environmental conditions,
living space, and population activity.
Thus, the carrying capacity of a human society is influenced by the intensity
of human activity, which depends on the level of technological development.
When prehistoric humans first discovered that crude tools and weapons allowed
greater effectiveness in gathering wild foods and hunting animals, they
effectively increased the carrying capacity of the environment for their
species. The subsequent development and improvement of agricultural systems
has had a similar effect, as have discoveries in medicine and industrial
technology. Clearly, the cultural and technological evolution of human
socio-technological systems has allowed enormous increases to be achieved
in carrying capacity for our species. This increased effectiveness of
environmental exploitation has allowed a tremendous multiplication of the
human population to occur \cite{Ricklefs1990, Freedman1995}.
Technology is an important factor in the dynamics of carrying capacity. For
example, the Neolithic revolution increased the carrying capacity of the world
relative to humans through the invention of agriculture. Currently, the use of
fossil fuels has artificially increased the carrying capacity of the world by
the use of stored sunlight, albeit with increasingly negative externalities,
such as global warming, ocean acidification and the indirect reduction of
diversity. Other technological
advances that have increased the carrying capacity of the world relative to
humans are: polders, fertilizer, composting, greenhouses, land reclamation,
and fish farming. Agricultural capability on Earth expanded in the last quarter
of the 20th century. Whether this is sustainable is debatable.
There are signs that human-induced soil erosion as well as destabilization
of sensitive ecosystems may lead, at the same time, to a reduction of
agricultural capability over the coming decades, such as in Africa
where the population is expected to double before 2050. The
change in the carrying capacity of the habitat and environment
supporting a human society can be described by the
consumption impact \cite{Ehrlich1971}, which is proportional to the
population size, with a coefficient characterizing the technology level.
One way to estimate human influence on the carrying capacity of the ecosystem is
to use the so-called {\it ecological footprint accounting} method that provides empirical, non-speculative
assessments of human activities with regard to the preservation or destruction of the
Earth carrying capacity. It compares regeneration rates (biocapacity) against
human demand (ecological footprint) in the same year. The results show that, in recent years,
humanity's demand has exceeded the planet's biocapacity by more than 20
percent \cite{Wackernagel2002}.
The present situation of rapid population growth in some regions, massive
overexploitation of resources and steady accumulation of pollution and wastes
diminishes the Earth carrying capacity. To a first approximation, with all
the caveats associated with the heterogeneity of technological developments
in different parts of the World, one can consider
an average footprint per person, which leads to an
estimation of the decrease of the
Earth carrying capacity as roughly proportional to the size
of the total population. The question is how and by what means this
change of the Earth carrying capacity for humanity will pay back \cite{Dahl1996}.
Mutual coexistence and symbiosis of several species also strongly influences
the carrying capacities of the species, with the changes being, to a first
approximation, proportional to
the species numbers. For example, humans have increased the carrying capacity
of the environment for a few other species, including those with which we live
in a mutually beneficial symbiosis. Those companion species include more than
about 20 billion domestic animals such as cows, horses, pigs, sheep, goats,
dogs, cats, and chickens, as well as certain plants such as wheat, rice, barley,
maize, tomato, and cabbage. Clearly, humans and their selected companions have
benefited greatly through active management of mutual carrying capacities
\cite{Begon1990}.
Interactions between two or more biological species are known to essentially
influence the carrying capacity of each other, by either increasing it,
when species derive a mutual benefit, or decreasing it, when their interactions
are antagonistic \cite{Boucher1982, Callaway1995, Stachowicz2001}. The same
applies to economic and financial interactions between firms, which also form a
kind of symbiosis, where the interacting firms develop the carrying capacity
of each other also roughly proportionally to their sizes \cite{Press2006}.
Many authors (e.g., Del Monte-Luna et al. \cite{Luna2004}) stress that, due
to the influence on the carrying capacity resulting from the existing populations, its
original definition implying a constant value has lost its meaning. As a
consequence of the feedback loops of the population sizes,
the notion of carrying capacities has taken a broader sense.
Carrying capacity should be understood as a
non-equilibrium relationship or function that depends on the population size and
on the symbiotic
relations between the interacting populations. It characterizes the growth
or development of available resources at all hierarchical levels of biological
integration, beginning with the populations, and shaped by processes and
interdependent relationships between finite resources and the consumers of
those resources \cite{Luna2004}.
The above discussion shows that, in general, the carrying capacity is not a fixed
quantity, but it should be considered a function of population sizes. In the case
of a single species, the carrying capacity can be created or destroyed by the
species activity. The simplest assumption is to take the species impact
as proportional to the species population size.
When species coexist, their carrying capacities are influenced by the species
mutual interactions, either facilitating the capacity development or damaging
it. Being functions of the species populations, such nonequilibrium carrying
capacities can be naturally represented as polynomials over the population
numbers \cite{Richerson1998}.
\subsection{Mathematical formulation of basic equations}
Let the considered society consist of several species enumerated
by an index $i = 1,2, \ldots$. The number of members of the $i$-th
species at time $t$ is denoted as $N_i = N_i(t)$.
The main idea of our approach is that the carrying capacity of
each species is not a fixed constant, but it is a functional
\be
\label{9}
K_i = K_i( \{ N_i \} )
\ee
of the set $\{N_i\}$ of the species populations. This
assumption takes into account that the species may interact
not merely directly but also by influencing the carrying
capacities of each other as well as their own carrying
capacity.
A general form of the evolution equations, that takes into
account both direct interactions and mutual influence on the
carrying capacities, can be written as
\be
\label{10}
\frac{dN_i}{dt} = \left ( \gm_i + \sum_{j}
\frac{C_{ij}}{K_i}\; N_j \right ) N_i \; ,
\ee
where $K_i$ is the functional carrying capacity (\ref{9}).
If the latter were a fixed parameter, one would return
to the generalized Lotka-Volterra predator-prey model (\ref{3}).
But in our approach, the carrying capacities are nontrivial
functionals of populations.
In a particular case, when the species do not display
strong direct interactions, but their mutual correlations
are mainly through influencing the carrying capacities of
each other, then Eq. (\ref{10}) reduces to the evolution
equation
\be
\label{11}
\frac{dN_i}{dt} = \gm_i N_i \; - \;
\frac{C_i N_i^2}{K_i} \; .
\ee
Here the effective rate
\be
\label{12}
\gm_i \equiv \gm_i^{birth} - \gm_i^{death}
\ee
is the difference between the birth and death rates of the
corresponding species. When birth prevails over death, then
$\gamma_i > 0$, while if death is prevailing over birth, then
$\gamma_i < 0$. In economic applications, birth translates
into gain and death into loss. The parameter
\be
\label{13}
C_i \equiv C_i^{comp} - C_i^{coop}
\ee
characterizes the difference between the competition and
cooperation of the members inside a given type of species.
When competition is stronger than cooperation, then $C_i > 0$,
while if cooperation is stronger, then $C_i < 0$.
It is important to stress that Eq. (\ref{10}) is principally
different, both mathematically as well as by its meaning,
from the predator-prey model (\ref{3}). Similarly,
Eq. (\ref{11}) is principally different from the logistic
equation (\ref{6}) or its variants (\ref{7}) and (\ref{8}).
As a result, the equations with functional carrying capacities
can display novel types of solutions allowing for the
consideration of effects that are absent in other evolution
equations. In the following sections, we consider concrete
examples of evolution equations with functional carrying
capacities.
\section{Action of society on its own carrying capacity}
\subsection{Justification for equation form}
In the previously studied variants of the logistic equation,
the carrying capacity is treated as a given quantity. However,
it is often the case that the society activity does influence
its own carrying capacity that can be either enhanced by producing
new goods, materials, knowledge, and so on, or can be destroyed by
unreasonable exploitation of resources, e.g., by deforestation,
polluting water, and spoiling climate. Therefore, the carrying
capacity, taking into account such feedback effects, must be
a functional $K = K(N)$ of the population $N$.
Thus, the population evolution is characterized by the equation
\be
\label{14}
\frac{dN}{dt} = \gm N \; - \; \frac{CN^2}{K(N)} \; ,
\ee
with the carrying capacity depending on $N$ itself.
Moreover, the creation or destruction of the carrying capacity
by the society members does not occur immediately, but is delayed
since any creation or destruction requires time for its realization.
Hence, the variable $N$, entering the carrying capacity, should be
delayed in time by a lag $\tau$, so that $K(N) = K(N(t-\tau))$.
Different types of carrying capacity can be introduced that depend
on a delayed population variable. In the present paper, we consider
a simple linear form
\be
\label{15}
K(N) = A + B N( t - \tau) \; .
\ee
Here the first term $A>0$ is a {\it natural carrying capacity},
provided by Nature. The second term is the created or
destroyed capacity, depending on whether the society activity
is constructive or destructive. The parameter $B$ is the
{\it production factor}, if it is positive, and it is a
{\it destruction factor}, when it is negative. Form (\ref{15})
of the carrying capacity agrees with the assumption of
{\it additivity}, when its different parts sum to produce the
total carrying capacity.
\subsection{Reduced quantities and choice of scaling}
As the population numbers can be very large, it is therefore more
convenient to deal with the reduced quantities
\be
\label{16}
x \equiv \frac{N(t)}{N_{eff} }
\ee
measured in units of some typical population size $N_{eff}$. It is
also convenient to introduce dimensionless parameters for the
natural carrying capacity
\be
\label{17}
a \equiv \frac{A}{N_{eff} } \left |\;
\frac{\gm}{C} \; \right |
\ee
and for the production-destruction factor
\be
\label{18}
b \equiv B \left |\; \frac{\gm}{C} \; \right | \; .
\ee
The total dimensionless carrying capacity
\be
\label{19}
y \equiv \frac{K(N)}{N_{eff} } \left |\;
\frac{\gm}{C} \; \right |
\ee
takes the form
\be
\label{20}
y = a + bx(t-\tau) \; .
\ee
Up to now, the effective value $N_{eff}$ has been arbitrary.
By a special choice of the scaling, it is possible to simplify
the equations and to make a more transparent classification of
arising dynamical regimes. It is convenient to choose
\be
\label{21}
N_{eff} \equiv A \left |\; \frac{\gm}{C} \; \right | \; .
\ee
Then parameters (\ref{17}) and (\ref{18}) reduce to
\be
\label{22}
a = 1 \; , \qquad b = \frac{B}{A}\; N_{eff} \; .
\ee
The dimensionless carrying capacity (\ref{20}) becomes
\be
\label{23}
y = 1 + bx(t-\tau) \; .
\ee
Let us also define the signs of the birth-death rate and of
the competition-cooperation parameter as
\be
\label{24}
\sgm_1 \equiv {\rm sgn} \gm = \frac{\gm}{|\gm|} \; , \qquad
\sgm_2 \equiv {\rm sgn} C = \frac{C}{|C|} \; .
\ee
Depending on these signs, the following situations can occur.
\begin{eqnarray}
\label{25}
\begin{array}{lll}
\sgm_1 = +1 \; , & ~\sgm_2 = +1 & ~(gain \; + \; competition) \; , \\
\sgm_1 = +1 \; , & ~\sgm_2 = -1 & ~(gain\; + \;cooperation) \; , \\
\sgm_1 = -1 \; , & ~\sgm_2 = +1 & ~(loss\; + \;competition) \; , \\
\sgm_1 = -1 \; , & ~\sgm_2 = -1 & ~(loss\; + \;cooperation) \; .
\end{array}
\end{eqnarray}
Using the above notations and measuring time $t>0$ in units
of $1/|\gm|$, we reduce Eq. (\ref{14}) to
\be
\label{26}
\frac{dx}{dt} = \sgm_1 x - \sgm_2 \; \frac{x^2}{y} \; ,
\ee
with the carrying capacity (\ref{23}). This equation is to be
complemented by the initial conditions
$$
x(t) = x_0 \qquad (t\leq 0) \; ,
$$
\be
\label{27}
y(t) = y_0 = 1 + bx_0 \qquad (t\leq 0) \; .
\ee
The solution for $x$, by its meaning, is to be positive.
The production-destruction factor $b$ can take any real values, being
positive for the constructive society activity, while negative, for
its destructive activity.
\subsection{Evolutionary stable states}
One of the most important problems in studying any
evolutionary model is the determination of evolutionary
stable states. These are given by the stable stationary
solutions to the considered equation. In order to analyze the stability of the solutions
to the differential delay equations, we employ the Lyapunov stability theory
following the work by Pontryagin \cite{55} and the books \cite{11,15,16}.
The stationary states of Eq. (\ref{26}) are defined as the solutions to
the fixed-point equation
\be
\label{28}
\sgm_1 x^* \; - \; \frac{\sgm_2(x^*)^2}{1+ bx^*} = 0 \; .
\ee
This yields two fixed points
\be
\label{29}
x_1^* = 0 \; , \qquad x_2^* = \frac{\sgm_1}{\sgm_2-\sgm_1 b} \; .
\ee
Resorting to the Lyapunov stability analysis, we need
to consider a small deviation
\be
\label{30}
\dlt x_j(t) = x_j(t) - x_j^* \qquad
(j=1,2)
\ee
from the related fixed point. This deviation satisfies the
equation
\be
\label{31}
\frac{d}{dt}\; \dlt x_j(t) = C_j \dlt x_j(t)
+ D_j\dlt x_j(t-\tau) \; ,
\ee
in which
$$
C_j \equiv \sgm_1 \; - \;
\frac{2\sgm_2 x_j^*}{1+bx_j^*} \; , \qquad
D_j \equiv b\sgm_2 \frac{x_j^*}{1+bx_j^*} \; .
$$
For the corresponding fixed points, these parameters are
$$
C_1 =\sgm_1 \; , \qquad D_1 = 0\; ,
$$
$$
C_2 = -\sgm_1 \; , \qquad
D_2 = b\sgm_2 \; ,
$$
where we used the identity $x_2^*/(1+bx_2^*) = \sgm_1/\sgm_2$ that holds
at the nontrivial fixed point.
Looking for the deviation in the exponential form
$$
\dlt x_j(t) \; \propto \; e^{\lbd_j t} \; ,
$$
we obtain the equation
\be
\label{32}
\lbd_j = C_j + D_j e^{-\lbd_j\tau}
\ee
for the characteristic exponents $\lambda_j$. By using
the notation
$$
W_j \equiv (\lbd_j-C_j) \tau \; , \qquad
z_j \equiv \tau D_j e^{-C_j\tau} \; ,
$$
equation (\ref{32}) becomes
$$
W_j e^{W_j} = z_j \; ,
$$
which is nothing but the equation defining the Lambert function $W_j$.
Therefore, the characteristic-exponent equation (\ref{32})
acquires the form
\be
\label{33}
\lbd_j = C_j + {1 \over \tau} W_j(\tau D_j e^{-C_j\tau}) \; .
\ee
The stationary solution is stable when the real part of the
characteristic exponent is negative, $\Re \lambda_j < 0$.
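For a quick numerical check, the leading characteristic exponent can be evaluated with the principal branch of the Lambert function, which yields the root of Eq. (\ref{32}) with the largest real part. A minimal sketch using SciPy (the helper name and the parameter values are our own illustrations):
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def leading_exponent(C, D, tau):
    # Rightmost root of lambda = C + D*exp(-lambda*tau), via the
    # principal branch W_0 in Eq. (33).
    if tau == 0.0:
        return complex(C + D)
    return C + lambertw(tau * D * np.exp(-C * tau), 0) / tau

# Gain and competition (Section 4): C_2 = -1, D_2 = b.
b, tau = -2.0, 0.5
tau0 = np.arccos(1.0 / b) / np.sqrt(b**2 - 1.0)  # Eq. (39), ~1.21
print(leading_exponent(-1.0, b, tau).real < 0)   # True: tau < tau0
\end{verbatim}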
To proceed further, we shall analyze separately the cases
listed in Eq. (\ref{25}).
\section{Society with gain and competition}
Under the prevailing gain (birth) and competition, when
\be
\label{34}
\sgm_1 = 1 \; , \qquad \sgm_2 = 1 \; ,
\ee
the evolution equation (\ref{26}) reads as
\be
\label{35}
\frac{dx(t)}{dt} = x(t) \; - \;
\frac{x^2(t)}{1+bx(t-\tau)} \; .
\ee
At the initial stage, when $0 \leq t < \tau$, we have the
exact solution
$$
x(t) = \frac{x_0(1+bx_0)e^t}{1+ x_0(b-1+e^t)} \qquad
(0\leq t< \tau ) \; .
$$
This can be used for constructing by iteration an approximate
solution at the second stage, when $\tau < t < 2\tau$. Then
we could find an approximate solution at the third step, and
so on. However, the accuracy of such iterative constructions
quickly deteriorates and is admissible only over a couple of
initial steps. More accurate solutions are to be found by numerically
solving the evolution equation (\ref{35}).
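A minimal numerical scheme is a fixed-step Euler method with a history buffer. The sketch below (our own illustration, not the solver used for the figures) integrates the general reduced equation (\ref{26}), of which Eq. (\ref{35}) is the $\sgm_1 = \sgm_2 = 1$ case, and stops at a finite-time death or singularity, where the delayed denominator $1+bx(t-\tau)$ changes sign:
\begin{verbatim}
import numpy as np

def solve_dde(b, tau, x0, sigma1=1.0, sigma2=1.0, T=40.0, dt=1e-3):
    # Euler scheme for dx/dt = s1*x - s2*x^2/(1 + b*x(t-tau)),
    # with constant history x(t) = x0 for t <= 0.
    n_lag = max(int(round(tau / dt)), 1)
    n_tot = int(round(T / dt)) + n_lag
    x = np.full(n_tot, float(x0))       # first n_lag entries: history
    y0_sign = np.sign(1.0 + b * x0)
    for i in range(n_lag, n_tot - 1):
        y = 1.0 + b * x[i - n_lag]      # delayed capacity, Eq. (23)
        if np.sign(y) != y0_sign or abs(x[i]) > 1e8:
            return x[:i + 1]            # finite-time death or singularity
        x[i + 1] = x[i] + dt * (sigma1 * x[i] - sigma2 * x[i]**2 / y)
    return x

# Punctuated growth to the stationary state x* = 1/(1-b), cf. Eq. (41).
trajectory = solve_dde(b=0.5, tau=5.0, x0=0.1)
\end{verbatim}
The same routine, with the signs $\sgm_1$, $\sgm_2$ chosen according to Eq. (\ref{25}), covers the loss and cooperation cases of the following sections.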
The stability analysis of the previous section shows that the
fixed point $x_1^* = 0$ is unstable for all $b$ and $\tau$.
The second fixed point
\be
\label{36}
x_2^* =\frac{1}{1-b} \equiv x^*
\ee
is stable when either
\be
\label{37}
-1 < b < 1 \; , \qquad \tau \geq 0 \; ,
\ee
or when
\be
\label{38}
b < -1 \; , \qquad \tau\leq\tau_0 \; ,
\ee
where
\be
\label{39}
\tau_0 \equiv \frac{1}{\sqrt{b^2-1} }\;
\arccos\left ( \frac{1}{b} \right ) \; .
\ee
The stability region is shown in Fig. 1.
Varying the system parameters and initial conditions, we
can meet the following dynamic regimes.
\subsection{Punctuated unlimited growth}
For the parameters
\be
\label{40}
b \geq 1 \; , \qquad \tau \geq 0 \qquad (x_0 > 0 )\; ,
\ee
the population grows by steps, as shown in Fig. 2. The
growth continues to infinite times. This is a typical
example of the punctuated evolution caused by the fact
that the production factor $b$ is positive and sufficiently
large. Hence, the carrying capacity is produced by the
population, with a delay $\tau$.
\subsection{Punctuated growth to stationary state}
The punctuated growth is not always unbounded, but it can
be bounded by the fixed point, provided the initial condition
$x_0$ is smaller than $x^*$ and the parameters are
\be
\label{41}
0 \leq b < 1 \; , \qquad \tau\geq 0 \qquad
\left ( x_0 <x^* \right ) .
\ee
This regime is presented in Fig. 3.
\subsection{Punctuated decay to stationary state}
When the parameters are the same as in Eq. (\ref{41}), but
the initial condition $x_0$ is larger than the fixed
point $x^*$,
\be
\label{42}
x_0 > x^* = \frac{1}{1-b} \; ,
\ee
then there appears the punctuated decay, as illustrated
in Fig. 4.
\subsection{Punctuated alternation to stationary state}
If the carrying capacity is destroyed by the population,
then there can occur a punctuated alternation to a stationary
state, when the parameters and the initial condition are
\be
\label{43}
-1 \leq b < 0 \; , \qquad \tau\geq 0 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; .
\ee
This is depicted in Fig. 5.
\subsection{Oscillatory approach to stationary state}
For a sufficiently large destruction factor, there arises a
regime of oscillatory approach to a stationary state, as
is presented in Fig. 6. This happens for the parameters
and the initial condition defined by the inequalities
\be
\label{44}
b < -1 \; , \qquad \tau <\tau_0 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; ,
\ee
where the lag $\tau_0$ is defined in Eq. (\ref{39}).
\subsection{Sustained oscillations}
There exists a lag $\tau_1=\tau_1(b)$, such that, when
\be
\label{45}
b < -1 \; , \qquad \tau_0\leq \tau \leq \tau_1 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; ,
\ee
then oscillations do not decay, but continue without attenuation,
as in Fig. 7. The lag $\tau_1(b)$ can be found only numerically.
\subsection{Punctuated alternation to finite-time death}
If the time lag surpasses the value $\tau_1=\tau_1(b)$, the alternating
solution exists only for a limited time. At the death time $t_d$,
given by the equation
\be
\label{46}
1 + b x(t_d-\tau) = 0 \; ,
\ee
all population becomes extinct, as in Fig. 8. This happens for
the parameters
\be
\label{47}
b < -1 \; , \qquad \tau > \tau_1 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; .
\ee
\subsection{Growth to finite-time singularity}
In the case, where the activity of the population is destructive,
time lags are large, and the initial condition is also large, so that
\be
\label{48}
b < 0 \; , \qquad \tau > \tau_c \qquad
\left ( x_0 > \frac{1}{|b|} \right ) \; ,
\ee
the population dynamics becomes dramatic, diverging at a finite
time, called the {\it critical time}. The divergence is hyperbolic,
according to the law
\be
\label{49}
x(t) \; \propto \; \frac{1}{t_c-t} \qquad
(t\ra t_c-0 ) \; ,
\ee
as is demonstrated in Fig. 9. The values of the critical lag
$\tau_c$ and the critical divergence time $t_c$ can be found
numerically.
\subsection{Unlimited exponential growth}
For shorter time lags, when
\be
\label{50}
b < 0 \; , \qquad \tau \leq \tau_c \qquad
\left ( x_0 > \frac{1}{|b|} \right ) \; ,
\ee
the divergence moves to infinity, the solution being a simple
growing exponential, as shown in Fig. 10.
\vskip 2mm
In this system with gain and competition, there may happen two
extreme events, the finite-time death at a death time $t_d$ and
the finite-time singularity at a critical time $t_c$. These two
extreme events occur under the condition of a destructive activity
of the population. The finite-time death is caused by the
destruction of all resources. The finite time singularity implies
that close to this critical point, the dynamic regime has to be changed,
according to the accepted interpretation of such singularities
\cite{27,39,40}. Such a change of the dynamic regime is analogous to
the occurrence of critical phenomena in statistical systems
\cite{36,37,54,56}. An interpretation of the finite-time singularity, based
on the leverage effect, will be given below.
\section{Society with gain and cooperation}
When gain prevails over loss, and cooperation over competition,
that is, when
\be
\label{51}
\sgm_1 = 1 \; , \qquad \sgm_2 = -1 \; ,
\ee
the evolution equation takes the form
\be
\label{52}
\frac{dx(t)}{dt} = x(t) + \frac{x^2(t)}{1+bx(t-\tau)} \; .
\ee
There are no stable stationary solutions in that case.
Depending on the system parameters and initial conditions,
there can arise the following dynamic regimes.
\subsection{Growth to finite-time singularity}
When the population activity is productive, but the time lag
is long, so that
\be
\label{53}
b > 0 \; , \qquad \tau> \tau_c \qquad (x_0 > 0 ) \; ,
\ee
or if the activity is destructive, when
\be
\label{54}
b < 0 \; , \qquad \tau > 0 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; ,
\ee
then the solution diverges at a finite critical time $t_c$.
The behavior is the same as in Fig. 9.
\subsection{Unlimited exponential growth}
Productive activity, under cooperation and not too long
time lags, such that
\be
\label{55}
b > 0 \; , \qquad 0 < \tau \leq \tau_c \qquad (x_0>0) \; ,
\ee
results in exponential growth, as in Fig. 10.
\subsection{Punctuated unlimited growth}
For the parameters
\be
\label{56}
b < -1 \; , \qquad \tau\geq 0 \qquad
\left ( x_0 > \frac{1}{|b|-1} \right ) \; ,
\ee
the solution displays unlimited punctuated growth, as in Fig. 2.
\subsection{Punctuated decay to finite-time death}
Destructive activity, under one of the conditions, when either
\be
\label{57}
-1 < b < 0 \; , \qquad \tau \geq 0 \qquad
\left ( x_0 > \frac{1}{1-|b|} \right ) \; ,
\ee
or when
\be
\label{58}
b < -1 \; , \qquad \tau \geq 0 \qquad
\left ( x_0 < \frac{1}{|b|-1} \right ) \; ,
\ee
leads to population extinction, at the death time given
by Eq. (\ref{46}). But the dynamics for this case, as is shown
in Fig 11, is different from that of Fig. 8. In the present case,
there are no alternations, but the decay to zero is monotonic,
exhibiting a finite number of quasi-plateaus.
\vskip 2mm
Under the conditions of gain and cooperation, there are two types
of extreme events, the finite-time singularity at a critical
time $t_c$ and the finite-time death at a death time $t_d$. The
finite-time death is caused by the destructive population activity.
And the finite-time singularity means that, close to the singularity
point, the system experiences a change of dynamic regime.
\section{Society with loss and competition}
Under prevailing loss and competition, when
\be
\label{59}
\sgm_1 = -1 \; , \qquad \sgm_2 = 1 \; ,
\ee
the evolution equation becomes
\be
\label{60}
\frac{dx(t)}{dt} = - x(t) \; - \;
\frac{x^2(t)}{1+bx(t-\tau)} \; .
\ee
There are two stable fixed points. One is the trivial point
\be
\label{61}
x_1^* = 0 \; ,
\ee
which is stable for all parameters
\be
\label{62}
-\infty < b < \infty \; , \qquad \tau \geq 0 \; .
\ee
Another stationary point
\be
\label{63}
x_2^* = - \frac{1}{1+b} \equiv x^*
\ee
is stable for the parameters
\be
\label{64}
b < -1 \; , \qquad \tau < \tau_0 \; ,
\ee
where
\be
\label{65}
\tau_0 \equiv \frac{1}{\sqrt{b^2-1} }\; \arccos
\left ( \frac{1}{|b|} \right ) \; .
\ee
Thus, there is the bistability region shown in Fig. 12.
Solutions tend to one of the two stationary states, when
the initial conditions are in the basin of attraction of the
corresponding fixed point. The following regimes can arise.
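The boundary (\ref{65}) is easy to evaluate numerically. A small helper
(ours, purely illustrative):
\begin{verbatim}
# Stability boundary tau_0(b) of Eq. (65) and a bistability test.
import numpy as np

def tau0(b):
    assert b < -1, "Eq. (65) applies only for b < -1"
    return np.arccos(1.0 / abs(b)) / np.sqrt(b * b - 1.0)

b, tau = -2.0, 0.5
print(tau0(b))        # about 0.6046
print(tau < tau0(b))  # True: the state x* = -1/(1+b) is then stable
\end{verbatim}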
\subsection{Monotonic decay to zero}
In a society with prevailing loss and competition, the
decay to zero, as in Fig. 13, seems to be a natural type of
behavior. This happens when either
\be
\label{66}
b > 0 \; , \qquad \tau \geq 0 \qquad
(x_0 > 0 ) \; ,
\ee
or when
\be
\label{67}
b < 0 \; , \qquad \tau \geq 0 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; .
\ee
\subsection{Oscillatory convergence to stationary state}
When the parameters are such that
\be
\label{67a}
b < -1 \; , \qquad 0 < \tau < \tau_0 \qquad
\left ( x_0 > \frac{1}{|b|-1} \right ) \; ,
\ee
the population fraction $x$ oscillates in time, converging
to the stationary state (\ref{63}), as is shown in Fig. 14.
Oscillations are caused by the presence of the time delay.
\subsection{Everlasting nondecaying oscillations}
For the parameters
\be
\label{68}
b < -1 \; , \qquad \tau_0 \leq \tau < \tau_1 \qquad
\left ( x_0 > \frac{1}{|b|-1} \right ) \; ,
\ee
the solution oscillates without decay, similarly to the
behavior in Fig. 7. The time lag $\tau_0$ is given by Eq.
(\ref{65}) and $\tau_1$ is defined numerically.
\subsection{Punctuated growth to finite-time singularity}
A rather interesting behavior of the population dynamics
happens for the parameters
\be
\label{69}
b < -1 \; , \qquad \tau_1 \leq \tau < \tau_2 \qquad
\left ( x_0 > \frac{1}{|b|-1} \right ) \; .
\ee
Then the solution experiences several punctuations, after
which it diverges, as is illustrated in Fig. 15, at the
critical time defined by the equation
\be
\label{70}
1 + bx(t_c-\tau) = 0 \; .
\ee
When the final rise is preceded by a fall, this
behavior is reminiscent of the Parrondo effect \cite{57}.
\subsection{Up-down convergence to stationary state}
A highly non-monotonic behavior exists for the parameters
\be
\label{71}
b < -1 \; , \qquad 0 \leq \tau < \tau_c \qquad
\left ( \frac{1}{|b|} < x_0 < \frac{1}{|b|-1} \right ) \;,
\ee
where the time lag $\tau_c$ can be found only numerically.
In this case, the solution, first, bursts out upwards, after which
it decays to the stationary value $x^*$, as in Fig. 16.
\subsection{Growth to finite-time singularity}
Under the parameters
\be
\label{72}
b < -1 \; , \qquad \tau > \tau_c \qquad
\left ( \frac{1}{|b|} < x_0 < \frac{1}{|b|-1} \right ) \; ,
\ee
the solution diverges at a finite critical time, without
any punctuation, in the same way as in Fig. 9.
\subsection{Unlimited exponential growth}
In the region of the parameters
\be
\label{73}
-1 < b < 0 \; , \qquad 0 < \tau \leq \tau_c \qquad
\left ( x_0 > \frac{1}{|b|} \right ) \; ,
\ee
the solution grows exponentially, as in Fig. 10.
\vskip 2mm
For a society with prevailing loss and competition, there
are two extreme events, both characterized by a finite-time
singularity at a critical time $t_c$. These regimes occur
under a strong destructive activity of the population and
a rather long time lag. The divergence can be understood
as a critical point where the society dynamics qualitatively
changes.
\section{Society with loss and cooperation}
When loss and cooperation prevail, so that
\be
\label{74}
\sgm_1 = -1 \; , \qquad \sgm_2 = -1 \; ,
\ee
the population evolution equation is
\be
\label{75}
\frac{dx(t)}{dt} = - x(t) + \frac{x^2(t)}{1+bx(t-\tau)} \; .
\ee
There exists the sole evolutionary stable state
\be
\label{76}
x^* = 0
\ee
that is stable for all parameters
\be
\label{77}
-\infty < b < \infty \; , \qquad \tau \geq 0 \; .
\ee
The following dynamic regimes are possible.
\subsection{Monotonic decay to zero}
For the initial conditions in the attraction basin of the
stable fixed point, the solutions decay to zero with time,
as in Fig. 13. This happens when either
\be
\label{78}
b < 0 \; , \qquad \tau \geq 0 \qquad
\left ( x_0 < \frac{1}{1-b} \right ) \; ,
\ee
or when
\be
\label{79}
b > 1 \; , \qquad \tau \geq 0 \qquad
\left ( x_0 > \frac{1}{b-1} \right ) \; .
\ee
\subsection{Growth to finite-time singularity}
If the initial conditions are outside of the attraction
basin of the fixed point (\ref{76}), the solution can diverge
at a finite critical time, similarly to the behavior
in Fig. 9. This happens when either
\be
\label{80}
b \leq 0 \; , \qquad \tau > 0 \qquad
\left ( x_0 < \frac{1}{|b|} \right ) \; ,
\ee
or when
\be
\label{81}
0 < b < 1 \; , \qquad \tau > \tau_c \qquad
\left ( x_0 > \frac{1}{1-b} \right ) \; .
\ee
\subsection{Unlimited exponential growth}
For the parameters
\be
\label{82}
0 < b < 1 \; , \qquad \tau < \tau_c \qquad
\left ( x_0 > \frac{1}{1-b} \right ) \; ,
\ee
the solution exhibits exponential growth, as in Fig. 10.
\subsection{Monotonic decay to finite-time death}
Finally, for the parameters
\be
\label{83}
b < 0 \; , \qquad 0 \leq \tau < \tau_d \qquad
\left ( x_0 > \frac{1}{|b|} \right ) \; ,
\ee
the population becomes extinct at a finite death time, as
in Fig. 17. The death time is defined by an equation having
the same form as Eq. (\ref{46}). However, the decay to death
now is monotonic, which distinguishes it from the punctuated
behavior before death, shown in Fig. 8 and Fig. 11.
\vskip 2mm
The society with prevailing loss and cooperation can exhibit
the finite-time singularity as well as the finite-time death.
These two types of extreme events happen under the destructive
activity of population.
Summarizing, all extreme events, except one, occur when the
population destroys its carrying capacity. The sole
exception is the case of a society with gain and cooperation,
when there can arise a finite-time singularity under $b>0$,
i.e., when the activity of the population is productive. This
latter type of finite-time singularity is analogous to
that studied in Ref. \cite{27}. Its appearance means that, near
the critical time, the society becomes unstable and needs
to change its parameters, for instance by replacing cooperation
with competition. It seems rather clear that, when the
population grows too large, competition between individuals must
come into play, eventually prevailing over their cooperation.
The finite-time singularities, occurring under the destructive
society activity, imply the existence of some critical events,
whose detailed interpretation will be given in Sec. 11.
\section{Mutual influence of symbiotic species on their
carrying capacities}
\subsection{Classification of symbiosis types}
When the society under consideration is structured into
several species, it is necessary to characterize their
interactions. The standard way of doing this is by assuming
the equations of the predator-prey type (\ref{3}), with
direct interactions of species that eat each other. Such
equations, however, cannot describe indirect interactions,
when the species do not kill each other, but influence the
carrying capacities of each other. Therefore, the predator-prey
equations are suitable for describing the predator-prey
relations, but are not suitable for characterizing symbiotic
relations \cite{40}.
Examples of symbiosis are ubiquitous in biology and ecology
\cite{58,59,60,61}. Symbiosis is also widespread in human societies.
For example, one can treat as symbiotic the interrelations between
firms and banks, between population and government, between culture
and language, between economics and arts, and between basic
science and applied research.
Considering purely symbiotic relations, we need equation (\ref{11}),
with the carrying capacities being functionals of the species
populations. The natural form of such carrying capacities for
symbiotic species is
\be
\label{84}
K_i = A_i + B_i S_i ( \{ N_i \} ) \; .
\ee
Here $A_i > 0$ is the natural carrying capacity, provided by
nature, for the $i$-th species. The coefficient $B_i$
characterizes the strength of influence of other species on
the carrying capacity of the $i$-th species. When $B_i$ is
positive, it can be called the production factor, while,
if it is negative, it is the destruction factor. The function
$S_i(\{N_i\})$ is a symbiotic function specifying the mutual
relations between symbiotic species. Since the sign has
already been attributed to the factor $B_i$, the symbiotic
function can be treated as non-negative.
Depending on the kinds of symbiotic relations, that is, on
the signs of the factors $B_i$, there can occur different
variants of symbiosis. To illustrate this, let us analyze the
case of two symbiotic species for which there can exist the
following types of symbiosis.
\vskip 2mm
(i) {\bf Mutualism}, when both species are useful for each
other, developing their mutual carrying capacities:
\be
\label{85}
B_1 > 0 \; , \qquad B_2 > 0 \quad (mutualism) \; .
\ee
\vskip 2mm
(ii) {\bf Parasitism}, when one of the species is harmful
for another, or both species are harmful for each other,
destroying the carrying capacities, which happens under
one of the pairs of inequalities below:
\begin{eqnarray}
\label{86}
\begin{array}{lll}
B_1 > 0 \; , & \qquad B_2 < 0 \; , & \\
B_1 < 0 \; , & \qquad B_2 > 0 \; , & \quad (parasitism) \\
B_1 < 0 \; , & \qquad B_2 < 0 ~. &
\end{array}
\end{eqnarray}
\vskip 2mm
(iii) {\bf Commensalism}, when one of the species is useful
for another, while the latter is indifferent to the existence
of the first species, which corresponds to the validity of one
of the pairs of equations:
\begin{eqnarray}
\label{87}
\begin{array}{lll}
B_1 > 0 \; , & \qquad B_2 = 0 \; , & \\
B_1 = 0 \; , & \qquad B_2 > 0 & \quad (commensalism) \; .
\end{array}
\end{eqnarray}
\subsection{Normalized species fractions}
We continue analyzing the symbiotic coexistence of two kinds
of species. As always, it is more convenient to work with
reduced quantities. So, we introduce the reduced fractions
\be
\label{88}
x \equiv \frac{N_1}{N_{eff} } \; , \qquad
z \equiv \frac{N_2}{Z_{eff} } \; ,
\ee
whose normalization values $N_{eff}$ and $Z_{eff}$ will be
chosen later. We define the dimensionless carrying capacities
\be
\label{89}
y_1 \equiv \frac{\gm_1 K_1}{C_1 N_{eff} } \; , \qquad
y_2 \equiv \frac{\gm_1 K_2}{C_2 Z_{eff} }
\ee
and the relative birth rate
\be
\label{90}
\al \equiv \frac{\gm_2}{\gm_1} \; .
\ee
With these notations, the symbiotic equations (\ref{11}), in
the case of two types of species, reduce to
\be
\label{91}
\frac{dx}{dt} = x \; - \; \frac{x^2}{y_1} \; , \qquad
\frac{dz}{dt} =\al z \; - \; \frac{z^2}{y_2} \; ,
\ee
where time is measured in units of $1/\gamma_1$. By their
definition, the solutions $x$ and $z$ are non-negative.
The equations are complemented by the initial conditions
\be
\label{92}
x(0) = x_0 \; , \qquad z(0) = z_0 \; .
\ee
For the following analysis, it is necessary to specify
the explicit forms of the carrying capacities (\ref{84}).
\section{Symbiosis with mutual interactions}
\subsection{Derivation of normalized equations}
The action of the species on each other's carrying
capacities can take different forms, depending on whether
or not the species interact while influencing the carrying
capacities. If the species, in the process of influencing their
carrying capacities, interact with each other, then the
carrying capacities (\ref{84}) can be represented in the
form
\be
\label{93}
K_1 = A_1 + B_1 N_1 N_2 \; , \qquad
K_2 = A_2 + B_2 N_2 N_1 \; .
\ee
Generally, the populations $N_1$ and $N_2$ in these
carrying capacities could be taken at shifted times, in which
case one would have
$$
K_i = A_i + B_i N_i(t-\tau_i) N_j(t-\tau_j) \; ,
$$
where $i \neq j$. However, we first need to understand the
influence of symbiosis without the time lag. Therefore, we
consider below the interactions without time delay.
Introducing the dimensionless natural carrying capacities
\be
\label{94}
a_1 \equiv \frac{\gm_1 A_1}{C_1 N_{eff} } \; , \qquad
a_2 \equiv \frac{\gm_1 A_2}{C_2 Z_{eff} } \;
\ee
and dimensionless symbiotic factors
\be
\label{95}
b \equiv \frac{\gm_1 B_1 Z_{eff}}{C_1} \; , \qquad
g \equiv \frac{\gm_1 B_2N_{eff} }{C_2}
\ee
translates Eqs. (\ref{93}) into the dimensionless expressions
\be
\label{96}
y_1 = a_1 + bxz \; , \qquad y_2 = a_2 + gxz \; .
\ee
Since the scaling values $N_{eff}$ and $Z_{eff}$ are arbitrary,
it is reasonable to choose them so as to simplify the equations.
For this purpose, we set
\be
\label{97}
N_{eff} \equiv \frac{\gm_1 A_1}{C_1} \; , \qquad
Z_{eff} \equiv \frac{\gm_1 A_2}{C_2} \; .
\ee
Then the natural carrying capacities (\ref{94}) become
\be
\label{98}
a_1 = a_2 = 1 \; .
\ee
And the total carrying capacities (\ref{96}) read as
\be
\label{99}
y_1 = 1 + bxz \; , \qquad y_2 = 1 + gxz \; .
\ee
As usual, we measure time in units of $1/\gamma_1$. The most
interesting case in symbiosis is when the species influence
each other throughout their lifetimes, and when these lifetimes
are of comparable durations. If this were not the case, i.e., with
very different lifetimes, the symbiotic relations could not be
supported for a duration longer than the shortest lifespan, making
symbiosis inefficient for the longer-lived species. Therefore, we
assume that the symbiotic species have comparable growth rates,
because the inverse of the growth rate sets the time scale of
lifetime, and the latter is often found proportional to the growth
period, at least for mammals \cite{62}. We thus set $\alpha = 1$ and
obtain the equations
\be
\label{100}
\frac{dx}{dt} = x \; - \; \frac{x^2}{1+bxz} \; , \qquad
\frac{dz}{dt} = z \; - \; \frac{z^2}{1+gxz} \; .
\ee
We can note that these equations are symmetric with respect to
the simultaneous interchange of $x$ with $z$ and of $b$
with $g$. This symmetry will result in the corresponding symmetry
of the following solutions.
\subsection{Evolutionary stable states}
Again, we use Lyapunov stability analysis \cite{11,16,55}.
Equations (\ref{100}) possess the non-zero stationary state
$$
x^* = \frac{1}{2g} \left [ 1 - b + g -
\sqrt{(1+b-g)^2 - 4b} \right ] \; ,
$$
\be
\label{101}
z^* = \frac{1}{2b} \left [ 1 + b - g -
\sqrt{(1+b-g)^2 - 4b} \right ] \; .
\ee
It is stable when either
\be
\label{102}
b< 0 \; , \qquad -\infty < g < +\infty \; ,
\ee
or when
\be
\label{103}
0 \leq b < 1 \; , \qquad g\leq g_c \; ,
\ee
or when
\be
\label{104}
b \geq 1 \; , \qquad g\leq 0 \; ,
\ee
where the critical value $g_c$ is
\be
\label{105}
g_c \equiv \left ( 1 - \sqrt{b}\right )^2 \leq 1 \; .
\ee
The stability region is depicted in Fig. 18. The basin of attraction
of this stationary state, depending on the signs of
the symbiotic factors, is defined by the following equations:
$$
x_0z_0 < \frac{1}{|b|} \qquad ( b < 0 , \; g > 0 ) \; ,
$$
$$
x_0z_0 < \frac{1}{|g|} \qquad ( b > 0 , \; g < 0 ) \; ,
$$
\be
\label{106}
x_0z_0 < \min \left \{ \frac{1}{|b|}\; , \frac{1}{|g|}
\right \} \qquad ( b < 0 , \; g<0) \; .
\ee
The solutions to the symbiotic equations (\ref{100}) should
be compared to those of the uncoupled equations
\be
\label{107}
\frac{dx}{dt} = x - x^2 \; , \qquad
\frac{dz}{dt} = z - z^2 \qquad ( b = g = 0 ) \; ,
\ee
corresponding to the case of no symbiosis, when the stationary
states are $x^* = z^* = 1$.
Solving numerically the system of equations (\ref{100}) for
different symbiotic factors and initial conditions yields
the following possible dynamic regimes.
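A minimal sketch (ours; the Euler step, horizon and parameter values are
illustrative choices) of such an integration:
\begin{verbatim}
# Euler integration of the symbiotic system (100); stops if a
# finite-time singularity (numerical overflow) is reached.
import numpy as np

def evolve(b, g, x0, z0, T=15.0, h=1e-3):
    x, z = x0, z0
    for _ in range(int(T / h)):
        dx = x - x ** 2 / (1.0 + b * x * z)
        dz = z - z ** 2 / (1.0 + g * x * z)
        x, z = x + h * dx, z + h * dz
        if not (np.isfinite(x) and np.isfinite(z)):
            break
    return x, z

# Mutual parasitism inside the attraction basin (106):
# x0*z0 = 0.25 < min(1/|b|, 1/|g|) = 2, so both populations converge
# to the stationary state (101), here x* = z* = 0.732 approximately.
print(evolve(b=-0.5, g=-0.5, x0=0.5, z0=0.5))
\end{verbatim}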
\subsection{Convergence to stationary states}
For the system parameters in the region of stability and
for the initial conditions in the basin of attraction of the
non-zero fixed point (\ref{101}), both species develop and converge
to the stationary state. This is illustrated in Fig. 19 for
different types of symbiosis, where, for comparison, the solutions
for the case of no symbiosis are also presented. The four possible
cases are illustrated in the four panels of Fig. 19, depending on
the relative positions of $x(t)$ and $z(t)$ compared with the solution
of the uncoupled equations (\ref{107}).
\subsection{Unlimited exponential growth}
When stationary solutions do not exist, so that either
\be
\label{108}
0 < b < 1 \; , \qquad g > g_c \; ,
\ee
or when
\be
\label{109}
b > 1 \; , \qquad g > 0 \; ,
\ee
or when they exist, but the initial conditions are taken
outside of the attraction basin, then the populations of
both species grow exponentially with time.
\subsection{Finite-time death and singularity}
In the case of mutual parasitism, an extreme solution can
occur, in which one of the species becomes extinct at a
finite critical time, while the other species displays a
finite-time singularity, as is shown in Fig. 20. This happens
when the initial conditions are outside of the attraction
basin so that either
\be
\label{110}
\frac{1}{|b| } < x_0 z_0 < \frac{1}{|g| } \qquad
( b < g < 0 ) \; ,
\ee
or if
\be
\label{111}
\frac{1}{|g| } < x_0 z_0 < \frac{1}{|b| } \qquad
( g < b < 0 ) \; .
\ee
The critical time is defined by one of the corresponding
equations:
$$
x(t_c) z(t_c) = \frac{1}{|b|} \qquad ( b < g < 0 ) \; ,
$$
\be
\label{112}
x(t_c) z(t_c) = \frac{1}{|g|} \qquad ( g < b < 0 ) \; .
\ee
The appearance of such an extreme solution is caused by the
mutual parasitism of the species, destroying the carrying
capacities of each other.
\section{Symbiosis without direct interactions}
\subsection{Derivation of symbiotic equations}
In many cases, symbiotic species influence each other by
increasing (improving) the carrying capacities of each other,
which does not involve direct interactions between the species.
The best-known example of this type is the symbiosis between
tree roots and fungi. In that case, the carrying capacities
(\ref{84}) can be written in the form
\be
\label{113}
K_1 = A_1 + B_1 N_2 \; , \qquad K_2 = A_2 + B_2 N_1 \; .
\ee
The dimensionless carrying capacities (\ref{89}) now read as
\be
\label{114}
y_1 = a_1 + bz \; , \qquad y_2 = a_2 + gx \; .
\ee
Employing the scaling of Eqs. (\ref{97}) gives normalization
(\ref{98}) and the carrying capacities (\ref{114}) become
\be
\label{115}
y_1 = 1 + bz \; , \qquad y_2 = 1 + gx \; .
\ee
Thus, we come to the symbiotic equations in dimensionless form
\be
\label{116}
\frac{dx}{dt} = x \; -\; \frac{x^2}{1+bz} \; , \qquad
\frac{dz}{dt} = z \; -\; \frac{z^2}{1+gx} \; .
\ee
There exists again the symmetry with respect to the simultaneous
interchange between $x$ and $z$ and between $b$ and $g$.
\subsection{Evolutionary stable states}
Equations (\ref{116}) possess a non-zero stationary state
\be
\label{117}
x^* = \frac{1+b}{1-bg} \; , \qquad
z^* = \frac{1+g}{1-bg} \; ,
\ee
which is stable when either
\be
\label{118}
-1 \leq b < 0 \; , \qquad g \geq -1 \; ,
\ee
or when
\be
\label{119}
b \geq 0 \; , \qquad 0 \leq g \leq g_c \; ,
\ee
where
\be
\label{120}
g_c \equiv \frac{1}{b} \; .
\ee
The stability region is presented in Fig. 21.
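These conditions are straightforward to check numerically. A small sketch
(ours) evaluating the stationary state (\ref{117}) and the stability
conditions (\ref{118})--(\ref{120}):
\begin{verbatim}
# Stationary state (117) of Eqs. (116) and the stability test (118)-(120).
def fixed_point(b, g):
    return (1.0 + b) / (1.0 - b * g), (1.0 + g) / (1.0 - b * g)

def is_stable(b, g):
    if -1.0 <= b < 0.0:                     # condition (118)
        return g >= -1.0
    if b >= 0.0:                            # conditions (119)-(120)
        return g >= 0.0 and (b == 0.0 or g <= 1.0 / b)
    return False                            # b < -1: no stable state listed

print(fixed_point(0.5, 0.5))  # (2.0, 2.0)
print(is_stable(0.5, 0.5))    # True, since g = 0.5 <= g_c = 1/b = 2
\end{verbatim}
Compared with the no-symbiosis value $x^*=z^*=1$, mutualism with
$b=g=0.5$ doubles both stationary populations.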
If the symbiotic relations correspond to mutualism or
commensalism, then the attraction basin of the stationary
solution (\ref{117}) is the whole region of positive
initial conditions:
\be
\label{121}
x_0 > 0 \; , \qquad z_0 > 0 \qquad
(b \geq 0, \; 0 \leq g < g_c ) \; .
\ee
But if at least one of the species is parasitic, then the
attraction basins are defined by one of the conditions,
depending on the signs of the symbiotic factors:
$$
x_0 < \frac{1}{|g|} \; , \qquad z_0 > 0 \qquad
( b>0 , \; g < 0) \; ,
$$
$$
x_0 > 0 \; , \qquad z_0 < \frac{1}{|b|} \qquad
( b < 0 , \; g > 0) \; ,
$$
\be
\label{122}
x_0 < \frac{1}{|g|} \; , \qquad z_0 < \frac{1}{|b|} \qquad
( b < 0 , \; g < 0) \; .
\ee
The following dynamic regimes are possible.
\subsection{Convergence to stationary states}
If initial conditions are in the attraction basin, then
both species converge to their stationary populations.
The convergence can be monotonic or not, depending on the
system parameters and initial conditions, as is demonstrated
in Fig. 22.
\subsection{Unlimited exponential growth}
For the parameters outside the stability region, such that
\be
\label{123}
b > 0 \; , \qquad g > g_c \; ,
\ee
there exists a solution with exponential growth in time for both
species.
\subsection{Finite-time divergence}
Extreme solutions appear when at least one of the species is
parasitic and initial conditions are outside of the attraction
basin. Thus, when either
\be
\label{124}
x_0 > \frac{1}{|g|} \; , \qquad z_0 > 0 \qquad
( b > 0 , \; g < 0 ) \; ,
\ee
or when
\be
\label{125}
x_0 > 0 \; , \qquad z_0 > \frac{1}{|b|} \qquad
( b < 0 , \; g > 0 ) \; ,
\ee
then one of the species experiences a finite-time singularity
at a critical time $t_c$ that is defined numerically. In this
case, when approaching $t_c$, one of the following behaviors
arises:
$$
x(t) \ra x(t_c) < \infty \; , \qquad z(t) \ra \infty \; ,
$$
\be
\label{126}
x(t) \ra \infty \; , \qquad z(t) \ra z(t_c) < \infty \; ,
\ee
where the first line corresponds to conditions (\ref{124}),
while the second line, to conditions (\ref{125}). The typical
behavior of populations is shown in Fig. 23.
\subsection{Finite-time extinction}
Parasitic symbiotic relations may end with one of the species
going extinct and the other continuing its life without
symbiosis. When either
\be
\label{127}
x_0 < \frac{1}{|g|} \; , \qquad z_0 > 0 \qquad
( b > 0 , \; g\leq -1) \; ,
\ee
or when
\be
\label{128}
x_0 < \frac{1}{|g|} \; , \qquad z_0 > \frac{1}{|b|} \qquad
( b < 0 , \; g < 0) \; ,
\ee
then the species $z$ dies at a finite time $t_d$, defined by
the relation
\be
\label{129}
x(t_d) = \frac{1}{|g|} \; .
\ee
That is, the species $x$ kills the species $z$:
\be
\label{130}
x(t) \ra x(t_d) \; , \qquad z(t) \ra 0 \qquad
(t\ra t_d) \; .
\ee
The opposite situation, when the species $z$ kills the
species $x$, occurs if either
\be
\label{131}
x_0 > 0 \; , \qquad z_0 < \frac{1}{|b|} \qquad
( b \leq -1 , \; g > 0 ) \; ,
\ee
or if
\be
\label{132}
x_0 > \frac{1}{|g|} \; , \qquad z_0 < \frac{1}{|b|} \qquad
( b < 0 , \; g < 0 ) \; .
\ee
Then the species $x$ dies at a finite time given by the relation
\be
\label{133}
z(t_d) = \frac{1}{|b|} \; ,
\ee
so that
\be
\label{134}
x(t) \ra 0 \; , \qquad z(t) \ra z(t_d) \qquad
(t\ra t_d) \; .
\ee
The corresponding behavior is illustrated in Fig. 24.
\section{Interpretation of extreme events in population evolution}
\subsection{Types of extreme events}
In population evolution, two types of extreme events may occur,
finite-time death and finite-time singularity. The origin of
finite-time death is rather clear. It occurs when
the carrying capacity of the species is destroyed, either by the species
themselves or by the parasitic symbiosis of other species. The destroyed
carrying capacity makes the long-term existence of the species
impossible, so that they go towards extinction.
Finite-time singularity can be due to two causes. One reason is the
existence of cooperation between the members of species, as in Secs. 5.1
and 7.2. This type of finite-time singularity means that the society,
in which cooperation persists under rapidly increasing numbers of its
members, becomes unstable and, to be stabilized, requires that cooperation
be changed into competition. The necessity for such a change looks rather
evident and is easily understandable. Indeed, in the presence of a strongly
increasing population, the competition for the means of survival will become
unavoidable.
A more elaborate mechanism operates in the case of the finite-time
singularities occurring under competition, as in Secs. 4.8, 6.4, 6.6, 9.5,
and 10.5. In all these cases, the singularities appear under the destruction
of the carrying capacities either by the society itself or by a parasitic
symbiotic species. It may seem quite strange that, while the carrying
capacity is being destroyed, the population continues growing. To understand
the origin of such a paradoxical effect and of these finite-time
singularities, let us consider in turn the different types of finite-time
singularities found in Secs. 4.8, 6.4, 6.6, 9.5, and 10.5.
\subsection{Boom and crash in society with gain and competition}
This corresponds to the case studied in Sec. 4.8, of a finite-time
singularity occurring at a critical time $t_c$. The divergence is of the
hyperbolic type (\ref{49}). This extreme event happens under the destructive
action of the society on its own carrying capacity, when $b<0$. The parameters
are such that, at the initial moment of time, the effective carrying capacity
is negative,
\be
\label{135}
y(0) = 1 - |b| x_0 < 0 \; .
\ee
How can one understand the existence of a negative carrying
capacity? For some simple biological species, such as ants or bees, a negative
capacity would probably be impossible. Such species would not be able to
live at all. However, for more complex societies such as human societies,
a negative carrying capacity may make sense. For instance, humans do extract
non-renewable resources that become progressively exhausted forever, they
destroy their habitat, poison rivers, pollute air, cut forests, and so on.
At the same time, humans possess the ability of regenerating the habitat by
cleaning rivers, or even oceans, and planting trees. Thus, humans may spoil
their habitat to such an extent that its recovery would require hard
work. In that sense the effective carrying capacity can become negative
for a while, implying that it must be brought back into the positive
domain in order to ensure the long-term survival of the human society
\cite{63}.
Even more transparent is the explanation for the existence of negative
carrying capacity for financial and economic societies, when the variable
$x$ represents not population, but capitalization. In these cases, negative
capacity is nothing but the borrowed resources that have to be returned
to the lender. Due to this leverage effect resulting from borrowing, a firm
can exhibit a fast development. But borrowing cannot last forever. If the
firm, society, or country does not produce enough and is not able to pay
debts, its creditors will lose trust and may require early reimbursement, or
will simply refuse to roll over the debts, as occurred for Greece in May 2010
and Ireland in November 2010. This situation can be captured by assuming the
existence of a maximum level of debt, beyond which the society or country
becomes highly unstable due to feedbacks resulting from market forces.
Actually, Reinhart and Rogoff \cite{64} have recently documented
the existence of a strong link between levels of debt and countries' economic
growth over the last two centuries: Countries with a gross public debt
exceeding about 90\% of annual economic output tended to grow a lot more slowly
and to exhibit larger default risks.
Assuming the existence of a maximum debt level beyond which instabilities
appear leads to the existence of a time $t_{crash}$ beyond which a crash or
at least strong turbulence can occur. This is highly reminiscent of the scenario
leading to the ``great recession'' that started in 2007 worldwide
\cite{65}. The minimum {\it crash time} $t_{crash}$ is thus given
by the condition that the debt, represented by the negative carrying capacity,
reaches the value
\be
\label{136}
y(t_{crash}) = 1 - |b| x(t_{crash}-\tau) < 0 \; .
\ee
The crash happens before the critical divergence time $t_c$,
\be
\label{137}
t_{crash} < t_c \; ,
\ee
where the firm or country capitalization is still finite. In such a regime,
the accelerated growth, fueled by borrowing, leads to a boom that is not
supported by increasing productivity. This can therefore be called a bubble
\cite{28}. As the bubble develops, it eventually reaches a threshold
level beyond which it becomes unstable, and can therefore be followed by a
crash at times between $t_{crash}$ and $t_c$.
\subsection{Boom and crash in society with loss and competition}
The same interpretation as above is applicable for the society with loss
and competition, as in Secs. 6.4 and 6.6. There, the finite-time singularity
arises under a high level of destruction, when the destruction coefficient
$b<-1$ and the initial carrying capacity is negative. The hyperbolic
divergence occurs at a critical time $t_c$. In Sec. 6.6, the situation is
similar to that discussed above. The difference between Sec. 6.4 and Sec. 6.6
is that in Sec. 6.4 the divergence is defined by the equation
\be
\label{138}
y(t_c) = 1 - |b| x(t_c-\tau) = 0 \; .
\ee
Again, a society or a firm with loss and competition does not actually
reach the point of divergence, but becomes bankrupt before that. The fast
growth is due to exploiting and destroying the carrying capacity. But,
after destruction has taken place and reached an unbearable level,
the boom is followed by a crash.
\subsection{Species extinction under mutual parasitic symbiosis}
In the parasitic symbiosis of two species considered in Sec. 9.5,
there occurs a finite-time singularity. Thus, for the symbiotic
parameters $b<g<0$, the initial carrying capacities are such that
\be
\label{139}
y_1(0) = 1 - |b| x_0 z_0 < 0 \; , \qquad
y_2(0) = 1 - |g| x_0 z_0 > 0 \; .
\ee
For the opposite case, when $g<b<0$, the situation is symmetric. Hence,
below we shall treat the case of Eq. (\ref{139}) without loss of
generality. The divergence appears at the critical time $t_c$ given by
the equation
\be
\label{140}
y_1(t_c) = 1 - |b| x(t_c) z(t_c) = 0 \; .
\ee
At this time, the population of the species $x$ tends to infinity, while
that of the species $z$ goes to zero.
Of course, no realistic population can rise to infinite values. Such a
divergence happens because of the mutual parasitic symbiotic relations,
resulting in the formal appearance of a negative effective capacity.
As in the cases above, the divergence can be avoided by limiting the
carrying capacity by a fixed level. This implies that the rise of a
parasitic species continues only up to some limiting carrying capacity
threshold
\be
\label{141}
y_1(t_{crash}) = 1 - |b| x(t_{crash}) z(t_{crash}) \; ,
\ee
after which the species $x$ dies out by a fast process of extinction at
the crash time $t_{crash} < t_c$.
\subsection{Species extinction under parasitic symbiosis without direct
interactions}
A finite-time singularity also appears in the case of symbiosis without
direct interactions, as in Sec. 10.5. This happens when at least one of the
species is parasitic, for example in the case $b<0$ and $b<g$. Below, we
shall consider this case, since the situation with $g<0$ and $g<b$ is
symmetric.
When $b<0$, the species $z$ is parasitic and destroys the
carrying capacity of the species $x$. The finite-time singularity occurs
if, at the initial moment of time, the effective carrying capacity of species
$x$ is negative,
\be
\label{142}
y_1(0) = 1 - |b| z_0 < 0 \; ,
\ee
while the carrying capacity of species $z$,
\be
\label{143}
y_2(0) = 1 + gx_0 \; ,
\ee
can be positive or negative, depending on the values of $g$ and $x_0$. The
divergence of $x$ occurs at the critical time $t_c$, where the effective
capacity of species $x$ is negative,
\be
\label{144}
y_1(t_c) = 1 - |b|z(t_c) < 0 \; .
\ee
The population of species $z$ at the moment $t_c$ is finite.
In the same way as in the previous cases, we understand that this
divergence cannot be real and there should exist a limiting carrying
capacity
\be
\label{145}
y_1(t_{crash}) = 1 - |b| z(t_{crash}) \; ,
\ee
at which the population $x$ is to be set to zero, implying its
extinction caused by the parasitic species $z$. This extinction happens
at the crash time $t_{crash} < t_c$.
\vskip 2mm
In all these cases for which there arises a finite-time singularity, it
is possible to exclude the formal divergence by limiting the carrying
capacity to a minimal value $y(t_{crash})$, that is, a maximal absolute
value $|y(t_{crash})|$. This limiting value can be interpreted as a
threshold for a change of regime. The overall dynamics thus starts
with the fast growth of the population (or capitalization), followed
by its drop to zero at the crash time $t_{crash}$, before the critical
time $t_c$.
\section{Conclusion}
In this paper, we have suggested a general approach for
describing the evolution of populations, whose activities
influence their carrying capacities. In order to take into
account this influence, the carrying capacities are to be
defined as functions of the society populations. This includes
the action of a population on its own carrying capacity.
In general, the actions of populations on the carrying
capacities can be delayed, since such actions, generally,
require time for their realization.
The approach is illustrated by analyzing the time evolution
of a society that acts on its own carrying capacity, either by
producing the increase of the capacity or by destroying it.
Different types of societies have been studied, depending on
the balance between gain and loss and between competition and
cooperation. A detailed classification of admissible dynamic
regimes has been given.
Two kinds of extreme events have been found to arise, when the
society destroys its carrying capacity. One is a finite-time death
at a death time $t_d$ and another is a finite-time singularity
at a critical time $t_c$. The finite-time death describes
the extinction of the population because of the destruction
of the carrying capacity. The finite-time singularity signals
that the society becomes unstable and its stabilization
requires changing the society parameters and a transfer to
another dynamic regime. The divergence can be avoided by
limiting the carrying capacity and interpreting the effect
as a fast rise of the population (or capitalization), followed
by its sharp drop. For economic and financial societies, the
fast growth is understood as a boom or bubble, due to the
leverage effect induced by over-indebtedness, after which a crash occurs.
The suggested approach is also illustrated by considering the
symbiosis of several species. This approach allows us to give
a general classification of different symbiosis types. The
case of two species is analyzed in detail. Extreme events arise
when at least one of the species is parasitic, destroying the
carrying capacity of the other species. Again, there can exist
two kinds of such extreme events, finite-time death and
finite-time singularity. Their interpretation is analogous to
that given for the case of the self-destructing population
activity.
As a general conclusion valid for the different considered
situations, we have to say that any destructive action of
populations, whether on their own carrying capacity or on the
carrying capacities of co-existing species, can lead to the
instability of the society, revealed through the appearance of
extreme events, finite-time extinctions or booms
followed by crashes.
\vskip 1cm
{\bf Acknowledgement}: We acknowledge financial support from the
ETH Competence Center ``Coping with Crises in Complex Socio-Economic
Systems" (CCSS) through ETH Research Grant CH1-01-08-2 and ETH Zurich
Foundation.
\newpage
\section{Introduction}
When we observe an object in real life, we often collect information from multiple perspectives. For example, we measure differences and similarities of animal species by color, sex, height, weight, etc., which we call features (or explanatory, independent variables). This inevitably produces a vector collecting those (say $p$) features $(X_1, X_2, \cdots, X_p)$. As we collect sample instances with those $p$ features, we find that each feature $X_i$ follows some probability distribution; for example, the biological sex of an animal species follows a Bernoulli distribution with parameter approximately $0.5$.
It is fundamental to understand the relations between the $p$ features. In principle, we know everything if we know the joint distribution of the $p$ features, for example the joint cumulative distribution function \[
F_{X_1, X_2, \cdots, X_p}(t_1, t_2, \cdots, t_p) = \field{P}(X_1 \le t_1, X_2\le t_2, \cdots, X_p\le t_p)
\]
However, this is unlikely to happen in the real world, and estimation of the joint distribution becomes impossible as $p$ increases due to the curse of dimensionality. Simply put, the number of samples required to estimate the joint distribution grows like $k^p$, where $k$ is the number of samples required to estimate any marginal $X_i$ within the requested accuracy.
Since moments (mean, variance, correlation) capture much of the useful information about random variables, the most natural simplification of the problem is to estimate the moments of those features, namely
\[
\field{E} X_1^{a_1} X_2^{a_2} \cdots X_p^{a_p}, \quad a_1, \cdots a_p \ge 0
\]
In particular, collecting the first-order moments yields the mean vector $\mu$, while the second-order centered moments form the covariance matrix $\Sigma$.
\[
\vec{\mu} = \mat[c]{\field{E} X_1 \\ \vdots \\ \field{E} X_p}, \quad \Sigma = \mat[ccc]{\cov X_1 & \cdots & \cov(X_1, X_p) \\ & \ddots & \\ \cov(X_p, X_1) &\cdots &\cov X_p}
\]
Estimating each element in $\mu$ and $\Sigma$ is not difficult, since moments can be estimated by averaging over samples. Let the $k$-th sample of $X_i$ be $X_i^{\omega_k}$.
\[
\mu_i:=\field{E} X_i \approx \hat{\mu}_i:=\frac{1}{n} \sum_{k=1}^n X_{i}^{\omega_k}, \quad \Sigma_{i,j}:= \cov(X_i,X_j) \approx \hat{\Sigma}_{i,j}:= \frac{1}{n}\sum_{k=1}^n X_{i}^{\omega_k} X_{j}^{\omega_k} - \hat{\mu}_i \hat{\mu}_j
\]
Then, treating each entry as a random variable, the law of large numbers shows that the error goes to zero in the limit, and the central limit theorem shows that the fluctuation of the error is of order $O(\frac{1}{\sqrt{n}})$, where $n$ is the sample size.
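A quick numerical sanity check of this rate (a sketch; the dimensions and
the seed are illustrative):
\begin{verbatim}
# Entry-wise errors of the sample mean and covariance shrink at the
# O(1/sqrt(n)) rate predicted by the central limit theorem.
import numpy as np

rng = np.random.default_rng(0)
p = 5
for n in (100, 10_000):
    X = rng.standard_normal((n, p))   # true mean 0, true covariance I
    mu_hat = X.mean(axis=0)
    Sigma_hat = X.T @ X / n - np.outer(mu_hat, mu_hat)
    print(n, np.abs(Sigma_hat - np.eye(p)).max())  # drops roughly 10x
\end{verbatim}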
Let us first fix some notation. Let $X$ be the data matrix with $n$ samples (rows) and $p$ columns (features). Then the sample covariance matrix is
\[
\frac{1}{n} X^T X - \frac{1}{n^2} X^T \mathbbm{1} (X^T \mathbbm{1})^T, \quad \mathbbm{1}= \mat[ccc]{1 & \cdots & 1}^T
\]
To simplify the analysis, we will assume all random variables have mean 0, so that $\vec{\mu}=0$. Then the sample covariance matrix simplifies to
\[
\hat{\Sigma} = \frac{1}{n} X^T X
\]
However, for a high dimensional vector or matrix, entry-wise behavior is usually misleading. The matrix $\hat{\Sigma}$ often exhibits fundamentally different global behavior even when its entries behave as expected. Specifically, the spectrum (eigenvalues) of $\hat{\Sigma}$ has a consistent bias relative to that of $\Sigma$. For example, if we fix $p/n \to c$, the spectrum of the sample covariance matrix for $X$ with i.i.d. entries of mean 0 and variance 1
converges to the Marchenko-Pastur law (see \cite{pastur1967}) as $n\to \infty$.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figures/sample100-eps-converted-to.pdf}
\caption{We take $n=p=100$. The sample covariance matrix ($\hat{\Sigma}$) is formed from a standard Gaussian sample of dimension 100. The true spectrum is $\lambda=1$ since $\Sigma=I$. On the left we see that the sample spectrum is concentrated around a biased curve. On the right the histogram of the sample spectrum is closely fitted by the Marchenko-Pastur distribution.}\label{fig: sample spectrum Wishart 100}
\end{figure}
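The experiment of Figure \ref{fig: sample spectrum Wishart 100} can be
reproduced in a few lines (a sketch; the seed is arbitrary, and the
closed-form density is the Marchenko-Pastur law for $\Sigma=I$ with aspect
ratio $c=p/n$):
\begin{verbatim}
# Sample eigenvalues with Sigma = I versus the Marchenko-Pastur density.
import numpy as np

rng = np.random.default_rng(1)
n = p = 100
X = rng.standard_normal((n, p))
evals = np.linalg.eigvalsh(X.T @ X / n)
c = p / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2

def mp_density(x):
    return np.sqrt(np.maximum((hi - x) * (x - lo), 0)) / (2 * np.pi * c * x)

print(evals.min(), evals.max())  # close to the support edges lo = 0, hi = 4
print(mp_density(1.0))           # about 0.276 at x = 1
\end{verbatim}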
In the example shown in Figure \ref{fig: sample spectrum Wishart 100}, the true covariance matrix is the identity, $\Sigma = I$. Since the largest eigenvalue of $\hat{\Sigma}$ is very close to $4$, the largest eigenvalue of the error matrix is $\lambda_{max}(\hat{\Sigma} -\Sigma) \approx 4-1=3 $. Therefore $\|\hat{\Sigma} -\Sigma \|\approx 3 $. Similarly, for an arbitrary true covariance $\Sigma$, the spectral norm of the error matrix $\|\hat{\Sigma} -\Sigma \|$ behaves like a constant (depending on $\frac{p}{n}$) multiple of $\|\Sigma\|$ (see \cite{Kolt2}). Any method that uses the sample covariance as a matrix will therefore incur a significant error, for example principal component analysis, MANOVA, factor analysis, linear discriminant analysis, and other tools of multivariate analysis.
In many practical applications, the spectrum of the true covariance matrix contains essential information about the structure of the data at hand. Therefore, recovering the true spectrum is critical for understanding the behavior of the various models we use for the data. In the case $\Sigma =I$ shown in Figure \ref{fig: sample spectrum Wishart 100}, we can expect a reverting process that recovers the true spectrum from the Marchenko-Pastur distribution. In the case of a general covariance matrix $\Sigma$, a series of results (see Silverstein \cite{silverstein1995}, Bai and Yin \cite{bai1988}, Yin, Bai and Krishnaiah \cite{yin1983}) has shown that the spectrum of the sample covariance $\hat{\Sigma}$ converges to the free product of the spectrum of $\Sigma$ with a Marchenko-Pastur distribution, provided the spectrum of the true covariance $\Sigma$ converges. The main result is summarized as follows.
\begin{theorem}
Assume the following.
\begin{enumerate}
\item{} The entries of $X_p=(X_{i,j})_{n\times p}$ are
i.i.d. real random variables for all $p$.
\item{} $E[X_{1,1}]=0$, $E[|X_{1,1}|^2]=1$.
\item{} Let $p/n \to c >0$ as $p\to \infty$.
\item{} Let $\Sigma_p$ $(p\times p)$ be a non-negative definite symmetric random matrix with spectrum distribution $F^{\Sigma_p}$ (if $\{\lambda_i\}_{1\le i \le p}$ are the eigenvalues of $\Sigma_p$, then $F^{\Sigma_p}=\sum_{i=1}^{p} \frac{1}{p} \delta_{\lambda_i}(x)$) such that $F^{\Sigma_p}$ almost surely converges weakly to $F^{\Sigma}$ on $[0, \infty)$.
\item{} $X_p$ and ${\Sigma}_p$ are independent.
\end{enumerate}
Then the spectrum distribution of $W_p= \frac{1}{n}{\Sigma}_p^{1/2}X_p^T X_p {\Sigma}_p^{1/2}$, denoted as $F^{W_p}$ almost surely converges weakly to $F^W$. $F^W$ is the unique probability measure whose Stieltjes transform $m(z)= \int \frac{d F^W(x)}{x-z}$, $z\in \mathbb{C}^+$ satisfies the equation
\begin{equation} \label{eqn:Marchenko-Pastur equation}
-\frac{1}{m}=z- c\int \frac{t }{1+tm} d F^{\Sigma}(t) \quad \forall z \in \mathbb{C}^+
\end{equation}
\end{theorem}
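For a discrete population spectrum, equation
(\ref{eqn:Marchenko-Pastur equation}) can be solved pointwise in $z$. The
following is a naive damped fixed-point iteration (ours; convergence is not
guaranteed for every $z$, and the damping factor is an ad hoc stabilizer),
with the density of $F^W$ at $x$ recovered from
$\mathrm{Im}\, m(x+i\eta)/\pi$ for small $\eta>0$:
\begin{verbatim}
# Damped fixed-point solver for -1/m = z - c * mean(t / (1 + t*m)).
import numpy as np

def stieltjes(z, t, c, iters=2000, damp=0.5):
    m = -1.0 / z                  # initial guess in the upper half plane
    for _ in range(iters):
        m_new = -1.0 / (z - c * np.mean(t / (1.0 + t * m)))
        m = damp * m + (1 - damp) * m_new
    return m

t = np.ones(100)                  # Sigma = I, so F^Sigma = delta_1
m = stieltjes(2.0 + 1e-2j, t, c=1.0)
print(m.imag / np.pi)             # approx. the MP density 1/(2*pi) at x = 2
\end{verbatim}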
In theory, from the limiting distribution of the estimated covariance matrix, one could retrieve $\Sigma$ using the Stieltjes transform from free probability. This idea, pioneered by El Karoui \cite{elkaroui2008}, is to discretize the Marchenko-Pastur equation (\ref{eqn:Marchenko-Pastur equation}) and then estimate the spectrum by minimizing the residuals. Recently, a series of follow-up works has attempted to improve the discretization and the optimization, for example using different discretization and quantization strategies in \cite{ledoit2015, ledoit2017numerical} and moving from the complex plane to the real line in \cite{li2013}.
Another type of approach is based on an explicit formula relating the moments of the true limiting spectrum to the moments of the sample limiting spectrum distribution; see for example \cite{bai2010estimation, Valiant}. This formula approximates the finite dimensional relation with an asymptotically normal error. However, this approach is computationally rather restrictive, since it has to solve polynomial equations and invert moments back into a distribution. In practice, it can only deal with the case where the true spectrum has a small number of unique values.
However, using limiting results from random matrix theory does not guarantee good performance in the finite dimensional case. Even though the estimators are proven consistent in the limit, the rate of convergence is not well understood and is in fact often slow. As seen in Figure \ref{fig: compare eigen and Quest 50, 100}, the random matrix approaches `Quest' \cite{ledoit2015} and `Moment' from \cite{Valiant} usually fall short. Instead, we introduce an iterative algorithm, the `Concent' method (black in Figure \ref{fig: compare eigen and Quest 50, 100}), which solves a random optimization problem based on the concentration of the sample spectrum. Moreover, it exploits the sample covariance eigenvectors to correct the sample spectrum. This method exhibits remarkable reconstruction quality due to the sub-Gaussian concentration of the sample spectrum, which becomes effective even for very small $n$ and $p$.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figures/linear_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/linear_spectrum_100-eps-converted-to.pdf}
\end{subfigure}
\caption{We take $n=p=50$ on the left, and $n=p=100$ on the right. The true covariance matrix $\Sigma$ has diagonalization $Q\Lambda Q^T$, where the diagonal matrix $\Lambda$ has eigenvalues spaced uniformly from 0 to 10 on the left and from 0 to 1 on the right. The sample spectrum (in green) has a convex shape which is significantly different from the true spectrum. The `Quest' estimator from Ledoit \cite{ledoit2015} performs poorly because of the discontinuity introduced by its discretization. The `Moment' estimator from \cite{Valiant} gains no useful information on the left because the true spectrum has values $>1$, which makes the higher moments overflow numerically. On the right, `Moment' still performs poorly even when we restrict the spectrum to $\le 1$.}\label{fig: compare eigen and Quest 50, 100}
\end{figure}
There are also many other works that seek better estimators than the sample covariance matrix without necessarily estimating the true spectrum. For example, under sparsity or low-rank conditions on the true covariance, shrinkage methods applied to the sample covariance exhibit appealing performance; see Stein \cite{Stein}, Bickel and Levina \cite{Bickel} and Donoho \cite{Donoho}. See \cite{fan2016overview} for a detailed review.
The remainder of this paper is structured as follows. First, we show by various simulations the concentration of the sample spectrum in section \ref{sec:concentration}. This shows that the bias in the sample spectrum is consistent, so any sample spectrum can be used as a biased baseline. Then, in section \ref{sec:random optimization}, we propose an optimization problem and its approximations based on this concentration property. In section \ref{sec:iterative eigenvector correction} we outline an iterative eigenvector correction algorithm which further improves the approximate optimization solution. Finally, in section \ref{sec:simulation} we show simulations demonstrating that the method works well in various settings.
\section{From concentration of sample spectrum to recovery by optimization} \label{sec:concentration}
\subsection{Concentration of sample spectrum}
Let $\hat{\Lambda}$ (green in Figure \ref{fig: sample spectrum Wishart 100}) be the spectrum of the sample covariance matrix $\hat{\Sigma}$. The mean of the sample spectrum, $\field{E} \hat{\Lambda}$, is very far from the true spectrum $\Lambda$ (red in Figure \ref{fig: sample spectrum Wishart 100}). However, $\hat{\Lambda}$ is tightly concentrated around $\field{E} \hat{\Lambda}$. This is easily observed but not necessarily easy to prove. We formulate the simplest case (assuming Gaussian entries) below.
\begin{theorem} \label{thm:concentration of sample spectrum}
Let $\Lambda$ be the diagonal matrix containing the true spectrum, i.e. the eigenvalues of the $p\times p$ true covariance matrix $\Sigma$ ($\Sigma$ and $\Lambda$ are unknown in practice). Assume all spectra are sorted in decreasing order. Let ${\mathcal{N}}$ be a random $n\times p$ matrix with i.i.d. Gaussian entries of mean 0 and variance 1, which is unknown in practice as well. Suppose we observe the data matrix $X= {\mathcal{N}} {\Lambda}^{1/2}$, and denote by $\hat{\Lambda}$ the spectrum of the sample covariance $W=\frac{1}{n}X^TX = \frac{1}{n}{\Lambda}^{1/2} {{\mathcal{N}}}^{T}{\mathcal{N}} {\Lambda}^{1/2}$. Then the sample spectrum is concentrated around its mean,
\begin{equation} \label{eqn: concentration of sample}
\field{P}(\|\hat{\Lambda}- \field{E}\hat{\Lambda}\|_{\infty} >\|\Lambda\|_2 t )< Ce^{-cnt^2}
\end{equation}
\end{theorem}
The proof is based on the Lipschitz continuity of the eigenvalue function and a Gaussian concentration inequality, and is presented below. Essentially, this concerns the local statistics of the eigenvalues of a finite dimensional sample covariance matrix. The general case, where ${\mathcal{N}}$ does not have Gaussian entries, is much more complicated, and in general we would not expect a sub-Gaussian tail bound.
For the simplest case $\Sigma= I$ (with general random variables), universality \cite{tao2012random} implies that the behavior is close to that of a Wishart matrix as long as the first four moments of the entries match the Gaussian ones. For the case $\Sigma \ne I$ (with general random variables), no concentration or universality result is available. We suspect there is a sub-Gaussian tail if the first four moments are matched, and a sub-exponential tail if not.
\begin{conjecture}
Fixing all assumptions in Theorem \ref{thm:concentration of sample spectrum}, except the random matrix ${\mathcal{N}}$ have different entries.
\begin{itemize}
\item If the entries of ${\mathcal{N}}$ are not Gaussian but their first four moments match the Gaussian ones, then we still have a sub-Gaussian tail $e^{-nt^2}$.\\
\item If the entries of ${\mathcal{N}}$ are not Gaussian, only their first two moments match the Gaussian ones, and their first four moments are finite, then we have a sub-exponential tail $e^{-nt}$.
\end{itemize}
\end{conjecture}
Figure \ref{fig: concentration 2 linear} clearly shows that the sample spectra are concentrated around a convex curve which is significantly different from the true spectrum ($\Sigma \ne I$).
\begin{figure}[H]
\centering
\includegraphics[width=1.05\linewidth]{figures/concen2-eps-converted-to.pdf}
\includegraphics[width=1.05\linewidth]{figures/concen3-eps-converted-to.pdf}
\caption{The pictures in the first row have true covariance $\Sigma$ with eigenvalues ranging from 0 to 10 evenly with step size $1/p$. The pictures in the second row have true covariance $\Sigma$ with eigenvalues ranging from 0 to 10 evenly with step size $1/p$.}\label{fig: concentration 2 linear}
\end{figure}
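The concentration displayed in Figure \ref{fig: concentration 2 linear} is
easy to replicate (a sketch; the linear spectrum, the 20 replicas and the
seed are illustrative choices):
\begin{verbatim}
# Sorted sample eigenvalues barely move across independent replicas,
# even though they are far from the true spectrum.
import numpy as np

rng = np.random.default_rng(2)
n = p = 100
lam = np.linspace(10, 10 / p, p)          # a linear true spectrum (example)
L_half = np.diag(np.sqrt(lam))
reps = np.array([
    np.sort(np.linalg.eigvalsh(L_half @ Nk.T @ Nk @ L_half / n))[::-1]
    for Nk in rng.standard_normal((20, n, p))
])
print(np.abs(reps - reps.mean(axis=0)).max())  # small: concentration
print(np.abs(reps.mean(axis=0) - lam).max())   # large: systematic bias
\end{verbatim}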
\begin{proof}
The difficulty in proving such a concentration result is mainly due to the complexity of the eigenvalue function. We will show that the eigenvalue function is Lipschitz. Let $f: \field{R}^{n\times p} \to \field{R}^p$ be the map sending ${\mathcal{N}}$ to the sorted eigenvalue vector $diag(\hat{\Lambda}) = (\hat{\lambda}_1, \cdots, \hat{\lambda}_p)$, defined by $Q^{T}\left( \frac{1}{n}{\Lambda}^{1/2} {{\mathcal{N}}}^{T}{\mathcal{N}} {\Lambda}^{1/2} \right) Q = \hat{\Lambda}$, where $Q$ is an orthogonal matrix.
Then we compute a perturbation. Writing $\hat{\Lambda}$, $\hat{\Lambda}'$ for the sorted eigenvalues obtained from ${\mathcal{N}}$, ${\mathcal{N}}'$, Weyl's inequality gives
\begin{align*}
\| \hat{\Lambda} - \hat{\Lambda}'\|_{\infty}
& \le \left\| \frac{1}{n}{\Lambda}^{1/2} ({{\mathcal{N}}}^{T}{\mathcal{N}} - {{\mathcal{N}}'}^{T}{\mathcal{N}}') {\Lambda}^{1/2} \right\|_2 \\
& \le \frac{1}{n} \|{\Lambda}\|_2 \| {{\mathcal{N}}}^{T}{\mathcal{N}} - {{\mathcal{N}}'}^{T}{\mathcal{N}}'\|_2 \\
& \le \frac{1}{n} \|{\Lambda}\|_2 ( \| {{\mathcal{N}}}^{T}({\mathcal{N}} - {\mathcal{N}}')\|_2+ \| {({\mathcal{N}}-{\mathcal{N}}')}^{T}{\mathcal{N}}'\|_2)
\end{align*}
Notice that for rectangular matrices $A,B$, $\|AB\|_2\le \|A\|_2\|B\|_F$. Then we conclude
\[
\| {{\mathcal{N}}}^{T}({\mathcal{N}} - {\mathcal{N}}')\|_2 \le \| {{\mathcal{N}}}^{T}\|_2 \|{\mathcal{N}} - {\mathcal{N}}'\|_F = \| {{\mathcal{N}}}\|_2 \|{\mathcal{N}} - {\mathcal{N}}'\|_F
\]
and similarly
\[
\| ({\mathcal{N}} - {\mathcal{N}}')^{T}{{\mathcal{N}}'}\|_2 =\| {{\mathcal{N}}'}^{T}({\mathcal{N}} - {\mathcal{N}}')\|_2 \le \| {{\mathcal{N}}'}\|_2 \|{\mathcal{N}} - {\mathcal{N}}'\|_F
\]
This leads to the bound on the variation of eigenvalues
\[
\| \hat{\Lambda} - \hat{\Lambda}'\|_{\infty} \le \frac{1}{n} \|{\Lambda}\|_2 (\|{\mathcal{N}}\|_2 +\|{\mathcal{N}}'\|_2) \|{\mathcal{N}} - {\mathcal{N}}'\|_F
\]
Thus $f$ has local Lipschitz constant $L \le \frac{1}{n} \|{\Lambda}\|_2 (\|{\mathcal{N}}\|_2 +\|{\mathcal{N}}'\|_2)$, which is of course random. However, for a Gaussian random matrix ${\mathcal{N}}$ we have concentration of the norm (i.e. the largest singular value) \cite{rudelson2010non}
\[
\field{P}( \|{\mathcal{N}}\|_2 > C(\sqrt{n}+\sqrt{p}) +t ) \le 2e^{-ct^2}
\]
With overwhelming probability, the function $g =\|\cdot \| \circ f$ has Lipschitz constant $L\le \frac{1}{n}\|{\Lambda}\|_2 C(\sqrt{n}+\sqrt{p})$. Recall that the Gaussian concentration inequality (found in many textbooks, for example \cite{boucheron2013concentration}) states that if $g: \field{R}^N \to \field{R}$ has Lipschitz constant $L$, then for a standard Gaussian random vector $X$ we have
\[
\field{P} \left(\|g(X) - \field{E} g(X)\|>t\right)<2 e^{-\frac{t^2}{2L^2}}
\]
Therefore, applying this with $L=\frac{1}{n}\|{\Lambda}\|_2 C(\sqrt{n}+\sqrt{p}) =\|{\Lambda}\|_2\, O(1/\sqrt{n})$ (we assumed $p\le c n$), we obtain the bound
\[
\field{P} \left(\|\hat{\Lambda} - \field{E} \hat{\Lambda}\|_{\infty} >\|\Lambda\|_2 t \right)<C e^{-cnt^2}
\]
\end{proof}
\subsection{Random optimization} \label{sec:random optimization}
Let $\Lambda$ be the true spectrum, and $\hat{\Lambda}$ be the sample covariance spectrum. Motivated by the concentration established in the previous section, we propose the following optimization,
\begin{equation}\label{eqn: optimization}
\min_{{D}\geq 0} \sum_{k=1}^K \left\| \hat{\Lambda}- \eig({D}^{1/2} {\mathcal{N}}_k^T {\mathcal{N}}_k {D}^{1/2}/n ) \right\|
\end{equation}
where each ${\mathcal{N}}_k$ is an $n\times p$ random matrix with i.i.d. standard normal entries, and $\eig(\cdot)$ computes the eigenvalues and sorts them in descending order. Note that the norm could be $\ell_1$, $\ell_2$ or any vector norm. However, analyzing the performance in simulations, we did not observe much difference between the various norms; for computational reasons, $\ell_2$ is used in the following discussion.
One caveat of this formulation is that the optimization problem has no convex structure, so it cannot be solved by the fast algorithms available in the convex optimization literature. The complexity of the objective function is due to $\eig(\cdot)$, which needs to compute eigenvalues and sort them afterwards. One could use a generic global optimizer (e.g., a genetic algorithm), but it would be extremely slow and essentially inapplicable in large dimensions. So we replace the problem with an approximation. First, we translate the problem into an exact relation that would hold if we knew the true spectrum $\Lambda$.
\begin{equation} \label{eqn: optimization_true_ratio}
\Lambda_{concent} := \argmin_{{D}\geq 0} \sum_k \left\| \hat{\Lambda}- R_k{D} \right\|
\end{equation}
where $R_k$ is a vector obtained by element-wise division below
\[
R_k= \frac{\eig({\Lambda}^{1/2} {\mathcal{N}}_k^T {\mathcal{N}}_k {\Lambda}^{1/2}/n )}{\Lambda}
\]
Of course, $R_k$ is not obtainable, so we replace it with an estimator
\[
\hat{R_k}= \frac{\eig({\tilde{\Lambda}}^{1/2} {\mathcal{N}}_k^T {\mathcal{N}}_k \tilde{\Lambda}^{1/2}/n )}{\tilde{\Lambda}}
\]
where $\tilde{\Lambda}$ is any reasonable estimator of the covariance spectrum. In principle, one can iteratively find a sequence of such estimators $\tilde{\Lambda}_k$. We use the sample spectrum to start the iteration, $\tilde{\Lambda}_0= \hat{\Lambda}$. Thus we arrive at
\begin{equation}\label{eqn: optimization_ratio}
\Lambda_{concent}= \argmin_{{D}\geq 0} \sum_k \left\| \hat{\Lambda}- \hat{R_k}{D} \right\|
\end{equation}
Now let us derive an explicit formula for the $\ell_2$ minimization. The objective function can be rewritten as
\[
f=\sum_k \left\| \hat{\Lambda}- \hat{R_k}{D} \right\|^2= \sum_{k=1}^{K}\sum_{j=1}^{p} (\hat{\lambda}_j-\hat{R}_{kj}d_j)^2
\]
Setting the partial derivatives $\partial f/ \partial{d_j}$ to zero, we find
\[
d_j=\frac{\hat{\lambda}_j \sum_{k=1}^K \hat{R}_{kj}}{\sum_{k=1}^K \hat{R}_{kj}^2 }
\]
Here $d_j$ serves as an estimator of the true eigenvalue $\lambda_j$, and the $\hat{R}_{kj}$'s are approximations of ${\hat{\lambda}_j}/{\lambda_j}$. In principle, the simplest approach would be to average these ratios and take the naive estimator $\hat{\lambda}_j \big/ \big( \frac{1}{K}\sum_{k=1}^K \hat{R}_{kj} \big)$.
Our $\ell_2$ minimization instead gives a second order correction of this naive approach. However, this approach has many parts replaced by estimators instead of the true quantities; thus we propose a follow-up eigenvector correction procedure.
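In code, one pass of this approximate optimization may look as follows (a
sketch; $K=10$ and the seed are illustrative, and the seed spectrum is
assumed strictly positive and sorted in descending order):
\begin{verbatim}
# One step of the random optimization: simulate K replicas from the
# current estimate lam_seed, form the ratios R-hat, and rescale the
# sample spectrum lam_hat by the d_j formula derived above.
import numpy as np

def concent_step(lam_seed, lam_hat, n, K=10, rng=None):
    rng = np.random.default_rng(3) if rng is None else rng
    p = lam_hat.size
    L_half = np.diag(np.sqrt(np.maximum(lam_seed, 0.0)))
    R = np.empty((K, p))
    for k in range(K):
        Nk = rng.standard_normal((n, p))
        ev = np.sort(np.linalg.eigvalsh(L_half @ Nk.T @ Nk @ L_half / n))[::-1]
        R[k] = ev / lam_seed               # element-wise ratios R-hat_k
    return lam_hat * R.sum(axis=0) / (R ** 2).sum(axis=0)
\end{verbatim}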
\subsection{An eigenvector correction}
We start with any estimator, say $\Lambda_0=\Lambda_{concent}$. Then, in the $(k+1)$-th step, we simulate a sample covariance matrix and diagonalize it
\[
W_{k}=\Lambda_k^{1/2} {\mathcal{N}}^T {\mathcal{N}} \Lambda_k^{1/2} /n \quad \to\quad W_k= V_k D_k V_k^T
\]
Then we obtain the next estimator from the diagonal elements of the matrix $V_k \hat{\Lambda} V_k^T$.
\[
\Lambda_{k+1}= diag(V_k \hat{\Lambda} V_k^T)
\]
We give a heuristic argument here to explain its effectiveness. Let $\hat{W}=Q\Lambda^{1/2} {\mathcal{N}}^T {\mathcal{N}} \Lambda^{1/2} Q^T/n$ be the given sample covariance matrix, so that the true covariance matrix is $W= Q \Lambda Q^T$. Then the diagonal elements of $Q^T \hat{W} Q$ form a good estimator of the true spectrum, since
\[
\left( Q^T \hat{W} Q \right)_{kk} = q_k^T \hat{W} q_k= \lambda_k \frac{1}{n}\sum_{i=1}^{n} {\mathcal{N}}_{i,k}^2 \to \lambda_k
\]
where ${\mathcal{N}}_{i,k}$ is the $(i,k)$-th entry of ${\mathcal{N}}$ and $q_k$ is the $k$-th column of $Q$. In our procedure, $V_k$ plays a role similar to that of $Q$. One limitation of this procedure is that it does not apply to very high dimensional settings, for example $p\ge 10^4$. The computation would be too demanding due to the eigen-decomposition used in each iteration. However, for small or moderate dimensions (for example $p= 10^3$), the iteration converges fast, and usually fewer than $10$ iterations are sufficient.
Combining the two approaches, we propose the following `Concent' algorithm for spectrum recovery.
\begin{algorithm}[H]
\caption{`Concent': Eigenvector corrected random optimization }\label{alg:cap}
\begin{algorithmic}
\Require $n, p \geq 0$, data matrix $X$.
\Require Set $loops \ge 10$, averaging $K \ge 10$.
\State $\hat{\Lambda} \gets \eig (X^T X/n)$
\State Initialize: $\Lambda_1 \gets \hat{\Lambda}$
\For{$i=1:loops$}
\begin{itemize}
\item \; \text{Approximated random optimization}
\end{itemize}
\State Generate random $n\times p$ standard normal matrix ${\mathcal{N}}_1,\cdots, {\mathcal{N}}_K$,
\For{$k=1:K$}
\State Create ratio vector $\hat{R}_k \gets \eig({\Lambda_i}^{1/2} {\mathcal{N}}_k^T {\mathcal{N}}_k {\Lambda_i}^{1/2}/n ) /{\Lambda_i}$ (element-wise division)
\EndFor
\State Create diagonal matrix $S = diag(s_1,\ldots,s_p)$ with $s_j = \sum_{k=1}^K \hat{R}_{kj} \big/ \sum_{k=1}^K \hat{R}_{kj}^2$
\State $\Lambda_i \gets \hat{\Lambda} S$
\begin{itemize}
\item \; \text{Eigenvector correction}
\end{itemize}
\State Generate normal matrix ${\mathcal{N}}$, and compute $W = \Lambda_i^{1/2} {\mathcal{N}}^T {\mathcal{N}} \Lambda_i^{1/2} /n $
\State Diagonalize $W= V D V^T$
\State $\Lambda_{i+1} \gets diag(V \hat{\Lambda} V^T)$
\EndFor
\end{algorithmic}
\end{algorithm}
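For reference, a compact driver corresponding to this algorithm can be assembled from the sketches above (all names are ours, and details such as eigenvalue ordering and stopping rules may differ from a careful implementation):
\begin{verbatim}
def concent(lam_hat, n, loops=10, K=10, seed=0):
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam_hat, dtype=float)
    for _ in range(loops):
        R = ratio_estimates(lam, n, K, rng)   # approximated random optimization
        lam = l2_spectrum_estimate(lam_hat, R)
        lam = eigenvector_correction(lam_hat, lam, n, rng)  # correction
    return lam

# toy run: linear true spectrum, p = 50, n = 100
rng = np.random.default_rng(1)
p, n = 50, 100
lam_true = np.linspace(5.0, 0.1, p)
X = rng.standard_normal((n, p)) * np.sqrt(lam_true)  # covariance diag(lam_true)
lam_hat = np.sort(np.linalg.eigvalsh(X.T @ X / n))[::-1]
print(concent(lam_hat, n)[:5], lam_true[:5])  # estimated vs true top eigenvalues
\end{verbatim}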
\section{Simulation}\label{sec:simulation}
Here we show simulations of our method in various settings. We also compare with the `Quest' estimator from Ledoit \cite{ledoit2015} and the `Moment' estimator from \cite{Valiant}.
\subsection{Simulated spectrum}
We have shown that our `Concent' method performs well for a linear spectrum in Figure \ref{fig: compare eigen and Quest 50, 100}. We next examine the case where the true spectrum has a convex or concave shape.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/convex_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/concave_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\caption{ The true spectrum on the left is $x^2$ with $x$ evenly spaced in $(0,5]$. On the right the true spectrum is $x^{0.3}$.}\label{fig:convex and concave spectrum 50}
\end{figure}
When we deal with a spectrum of special, unknown structure, it is still possible to recover a smoothed approximation of the true spectrum using our `Concent' algorithm. Below is a simulation with a step-shaped and a sparse spectrum.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/step_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/sparse_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\caption{On the left, the true spectrum consists of half 2's and half 1's. On the right the true spectrum is obtained by zeroing out the last half of the linear spectrum.}\label{fig:structured spectrum}
\end{figure}
\subsection{Real world data}
We compare the result with the `true' spectrum generated from real stock data with a large sample size. The `true' spectrum is taken from 50 stocks over 1000 days, so $n/p = 20$, which means this `true' spectrum is relatively close to the spectrum of the actual stock covariance matrix.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/stock_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/stock_spectrum_50_1-eps-converted-to.pdf}
\end{subfigure}
\caption{On the right, we removed the largest eigenvalue to make it easy to see the difference.}
\end{figure}
Another example we study is the Amazon reviews dataset. We take 50 products with 4082 reviews each, so $n/p \approx 80$ and we are confident that the sample spectrum from this data is close to the true spectrum.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.95\linewidth]{figures/amazon_spectrum_50-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figures/amazon_spectrum_50_1-eps-converted-to.pdf}
\end{subfigure}
\caption{On the right, we removed the largest eigenvalue to show the difference in bulk eigenvalues.}
\end{figure}
In the majority of these cases, `Concent' outperforms the others. There is another significant advantage of `Concent': its robustness with respect to the randomly generated samples. In other words, given any sample spectrum, `Concent' is able to use it to recover the true spectrum, owing to the powerful finite dimensional concentration of the sample spectrum. By contrast, in Figure \ref{fig:Quest weak}, `Quest' (in red) varies quite significantly even when the sample spectrum (in green) changes very little.
\begin{figure}[H]
\centering
\includegraphics[width=.45\linewidth]{figures/Quest_weak-eps-converted-to.pdf}
\caption{We overlay 3 simulated recoveries. Even though the sample spectrum is very concentrated, the `Quest' estimate varies significantly.}\label{fig:Quest weak}
\end{figure}
The `Moment' method generally does not produce a relevant result. The number of moments that can be computed reliably is too small (around 10). When eigenvalues are larger than 1, high moments blow up; when eigenvalues are less than 1, higher order moments are close to zero and carry no information.
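A quick numeric illustration of this point, with our own toy spectrum: the raw moments $\frac{1}{p}\sum_j \lambda_j^k$ grow geometrically in $k$ once eigenvalues exceed $1$, so their empirical estimates quickly become useless, while rescaling the spectrum below $1$ drives the same moments toward zero:
\begin{verbatim}
import numpy as np

lam = np.linspace(5.0, 0.1, 50)
for k in (2, 6, 10):
    print(k, np.mean(lam ** k))      # grows roughly like 5^k / (k + 1)
print(np.mean((lam / 10.0) ** 10))   # ~ 1e-4: essentially no information
\end{verbatim}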
\section{Conclusion}
We derived a concentration result for the sample spectrum and proposed a random optimization to recover the true spectrum. Our method is based on the finite dimensional concentration of measure behavior, so it provides competitive performance for small and moderate dimensional covariance matrices. It is much more stable than the `Quest' and `Moment' methods, which are based on properties of the limiting random matrix behavior. Our simulations show that our algorithm overcomes several weaknesses of the `Quest' and `Moment' methods: `Quest' is very sensitive to small changes in the sample spectrum and usually produces a discontinuous estimator, while `Moment' does not work properly for small or moderate dimensions and blows up for eigenvalues larger than 1.
There are several limitations of our method. First, it relies on an expensive diagonalization procedure which is hard to carry out in large dimensions. Second, for a discontinuous true spectrum, exact recovery is only possible if the structure is known; otherwise the method produces a smoothed approximation of the true spectrum.
|
2,877,628,091,233 | arxiv | \section{Introduction}
The use of lattice techniques has proved very fruitful in gauge
theories. This is mainly due to two reasons: lattices provide a
regularization procedure compatible with gauge invariance and they allow
to put the theory on a computer and to calculate observable quantities.
Based on these successes one may be tempted to apply these techniques to
quantum gravity. There the situation is more problematic. Lattices
introduce preferred directions in spacetime and therefore break the
gauge symmetry of the theory, diffeomorphism invariance. If one studies
a canonical formulation of quantum gravity on the lattice this last fact
manifests itself in the non-closure of the algebra of constraints.
The introduction of the Ashtekar new variables, in terms of which
canonical general relativity resembles a Yang-Mills theory revived the
interest in studying a lattice formulation of the theory
\cite{ReSm,Re,Lo}.
The new variables allow for a much cleaner
formulation, quite reminiscent of that of Kogut and Susskind for gauge
theories. However, it does little to attack the fundamental problem
we mentioned in the first paragraph. The lattice formulation still has
the problem of the closure of the constraints. The new variable
formulation has also made it possible to obtain several formal results in the
continuum connecting knot theory with the space of solutions of the
Wheeler-DeWitt equation in loop space. This adds an extra motivation
for a lattice formulation, since in the lattice context these results
could be rigorously checked and many of the regularization ambiguities
present in the continuum could be settled.
It is not surprising that the lattice counterparts of diffeomorphisms do
not close an algebra. A lattice diffeomorphism is a {\em finite}
transformation. Finite transformations do not structure themselves
naturally into algebras but into groups. One cannot introduce an
arbitrary parameter that measures ``how much'' the diffeomorphism shifts
quantities along a vector field. One is only allowed to move things in
the fixed amounts permitted by the lattice spacing. It is only in the
limit of zero lattice spacing, where the constraints represent
infinitesimal generators, that one can recover an algebra structure.
The question is: does that algebra structure correspond to the classical
constraint algebra of general relativity? In general it will not.
In this paper we present a lattice formulation of quantum gravity in the
loop representation constructed in such a way that from the beginning we
have good hopes that the diffeomorphism algebra will close in the limit
when the lattice spacing goes to zero. The strategy will be the
following: in the continuum theory the solutions to the diffeomorphism
constraint in the loop representation are given by knot invariants. We
will introduce a set of constraints in the lattice such that their
solution space is a lattice generalization of the notion of knot
invariants. This means that the solutions, in the limit when the
lattice spacing goes to zero, are usual knot invariants. As a
consequence the constraints are forced to become the continuum
diffeomorphism constraints and close the appropriate algebra. Roughly
speaking, if this were not the case they could not have the same
solution space as the usual continuum constraints. We will check
explicitly the closure in the continuum limit. Although this has already
been achieved for other
proposals of diffeomorphism constraints on the lattice \cite{Re}, the main
advantage of our proposal is that our constraints not only close
the appropriate algebra in the continuum limit but also
admit as solutions objects that reduce
in that limit to usual knot invariants, including
the invariants from Chern--Simons theory.
We will also introduce a Hamiltonian constraint in the space of
lattice loops and show that it has the correct continuum limit. It
remains to be shown if it satisfies the correct algebra relations with
the diffeomorphism constraint in the continuum limit.
The main purpose of this paper, however, is to explore how to implement
the diffeomorphism symmetry and the idea of knot invariants in the
lattice and its relationship to quantum gravity, both at a kinematical
and dynamical level. We will consider explicit definitions for knot
invariants and polynomials, that are the lattice counterpart of formal
states of quantum gravity in the continuum formulation. In particular
we will introduce the idea of linking number, self-linking number and
invariants associated with the Alexander-Conway polynomial. We will
also discuss the construction in the lattice of a polynomial related to
the Kauffman bracket of the continuum and analyze the issue of the
transform into the loop representation of the Chern-Simons state and its
relation with the framing ambiguity. We will notice that regular
isotopic invariants (invariants of framed loops) are not annihilated by
the diffeomorphism constraint in the lattice. These results allow to
put on a rigorous setting many results that were only formally available
in the continuum.
Moreover, we will introduce a Hamiltonian constraint in the lattice
whose action on knot invariants can be characterized as a set of skein
relations in knot space. This allows us to show rigorously that lattice
knot invariants are annihilated by the Hamiltonian constraint, again
confirming formal results in the continuum. We also point out
differences in the details with the continuum results.
Our approach departs in an important way from previous lattice
attempts \cite{ReSm,Re,Lo}: we will work directly in the loop
representation. There is a good reason for doing this. When one
discretizes a theory there are many ambiguities that have to be
faced. There are many discretized versions of a given continuum
theory. If one's objective is to construct a loop representation it is
better to discretize at the level of loops than to try to discretize a
theory in terms of connections and then build a loop
representation. We will also start from a non-diffeomorphism-invariant
formulation at a kinematical level, since it is only in this context
that one can study the action of the Hamiltonian constraint of quantum
gravity, which is not diffeomorphism invariant. Moreover it is the
only framework in which one can address the issue of regular isotopy
invariants, which, strictly speaking, are not diffeomorphism
invariant. We will then go to a representation that is based on
diffeomorphism invariant functions and capture the action of the
Hamiltonian constraint and interpret it as skein relations in that
space. We end with a discussion of further possibilities of this
approach.
\section{Lattice loop representation: the diffeomorphism constraint}
\subsection{The group of loops on the lattice}
Consider a three dimensional manifold with a given topology, say $S^3$
and a coordinate patch covering a local section of it. We set up a
cubic lattice in this coordinate system (this is for simplicity only,
none of the arguments we will give depends on the lattice being
square, only on having the same topology as a square lattice).
The position of the sites are labeled with latin
letters $(m,\,n,...)$ where $m=(m_1,m_2,m_3)$ with $m_i$ integers, and
the links emerging from a given site are labeled by $u_{\mu}$ with
$\mu=\pm 1,\pm 2,\pm 3$, and $u_{\pm 1}=(\pm 1,0,0)$, etc. The typical
coordinate lattice spacing will be denoted by $a$.
Let us now introduce loops with origin $n_0$. A closed curve is
given by a finite chain of vectors
\begin{equation}
(u^1,....u^N) \;\;\hbox{with}\;\; \sum_{i=1}^N u^i =0
\end{equation}
starting at $n_0$. In this paper we will use the word loop in a precise
way, analogous to the one that some authors use in the continuum. This
notion refers to the fact that there is more information in a closed
curve than that needed to compute the holonomy of a connection around
the curve. Therefore there are several closed curves that yield the same
holonomy. The equivalence class of such curves is what we will refer
to as a loop. In order to define loops on the lattice,
we now introduce a reduction
process. We define $R(u^1,....u^N)$ as the chain obtained
by elimination of opposite successive vectors
\begin{equation}
(...u^i_\alpha,u^{i+1}_\mu,u^{i+2}_{-\mu},u^{i+3}_\beta...) \rightarrow
(...u^i_\alpha,u^{i+1}_\beta...).
\end{equation}
Once a couple of vectors is removed, new collinear opposite vectors may
appear and must be eliminated. The process is repeated until one gets an
irreducible chain. One can show that the reduction process is
independent of the order in which the vectors are removed. A loop
$\gamma$ is an irreducible closed chain of vectors starting at $n_0$.
There is a natural product law in loop space. The product of $\gamma_1$
and $\gamma_2$ is the reduced composition of their irreducible chains
and one can show that loops form a group\cite{GaTr81}. The inverse of
$\gamma_1$ is $\bar \gamma_1=(-u^N,....-u^1)$. We will consider a quantum
representation of gravity in which wavefunctions are functions of the
group of loops on the lattice $\Psi(\gamma)$.
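These operations are straightforward to realize concretely. The following minimal Python sketch (ours, with links encoded as integers $\mu\in\{\pm 1,\pm 2,\pm 3\}$) implements the reduction, the product and the inverse; cancellations across the basepoint of a closed chain are not treated here:
\begin{verbatim}
def reduce_chain(chain):
    # eliminate successive opposite links until the chain is irreducible;
    # the stack automatically handles newly adjacent opposite pairs
    out = []
    for u in chain:
        if out and out[-1] == -u:
            out.pop()
        else:
            out.append(u)
    return out

def product(c1, c2):
    # group product: reduced composition of the two chains
    return reduce_chain(c1 + c2)

def inverse(c):
    return [-u for u in reversed(c)]

plaq = [1, 2, -1, -2]                 # a plaquette in the 1-2 plane
print(product(plaq, inverse(plaq)))   # -> [] : the identity loop
\end{verbatim}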
Loop representations require considering loops with intersections.
Therefore one is interested in intersecting loops on the lattice. We
define the {\em intersection class} of a loop by assigning a number to
each intersection and making a list obtained by traversing the loop
and listing the numbers of the successive intersections traversed. Two
loops will belong to the same intersection class if the lists of
numbers are the same. The concept of intersection class will help
define later the transformations that behave like diffeomorphisms on
the lattice. Intersecting loops behave as a ``boundary'' between
different knot classes of non-intersecting loops. We will therefore
require that the transformations do not ``cross the boundary'' by
requesting that they do not change the intersection class of the
loops. Any deformation of the loop that does not make it cut itself
preserves the intersection class. Wavefunctions in the loop
representation are cyclic functions of loops and therefore we will
identify loops belonging to the same intersection class through cyclic
rearrangements.
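For concreteness, here is a sketch (the encoding is ours) of how the intersection class can be read off a chain: walk the loop, record the lattice sites traversed, and list, in traversal order, labels for the sites that are visited more than once.
\begin{verbatim}
from collections import Counter

UNIT = {1: (1, 0, 0), -1: (-1, 0, 0), 2: (0, 1, 0),
        -2: (0, -1, 0), 3: (0, 0, 1), -3: (0, 0, -1)}

def intersection_class(chain, origin=(0, 0, 0)):
    # sites visited along the loop (the final return to the origin
    # coincides with the first entry and is not repeated)
    sites, p = [], origin
    for u in chain:
        sites.append(p)
        p = tuple(a + b for a, b in zip(p, UNIT[u]))
    counts = Counter(sites)
    label, out = {}, []
    for s in sites:
        if counts[s] > 1:                       # an intersection site
            out.append(label.setdefault(s, len(label) + 1))
    return out

# a figure-eight loop: one intersection, traversed twice
print(intersection_class([1, 2, -1, -2, -1, -2, 1, 2]))   # -> [1, 1]
\end{verbatim}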
\subsection{The diffeomorphism constraint: continuum limit}
We now proceed to write a generator of deformations on the lattice,
which in the continuum limit will yield the diffeomorphism constraint.
We define an operator $d_\mu(n)$ that deforms the loop at the point
$n$ of the lattice as shown in figure 1. For example, the action of
the operator on a chain,
\begin{equation}
\gamma=(...u^i(n),u^{i+1}(n)...u^j(n),u^{j+1}(n)...u^k(n),u^{k+1}(n)...)
\end{equation}
is,
\begin{eqnarray}
\gamma_D \equiv d_\mu(n) \gamma \equiv
R(...u_\mu,u^i(n),u^{i+1}(n),u_{-\mu}...u_\mu,u^j(n),u^{j+1}(n),u_{-\mu} \\
...u_\mu,u^k(n),u^{k+1}(n),u_{-\mu}...). \nonumber
\end{eqnarray}
\begin{figure}
\hskip 4cm \psfig{figure=fig1.eps,height=3.5cm}
\caption{The action of the deformation operator on a lattice loop at a
regular point and at an intersection.}
\end{figure}
In the above example we have a loop that goes three times through the
same point. If the point $n$ does not lie on the loop considered the
deformation operator leaves the loop invariant. The action of the
deformation operator is to add two plaquettes one before and one after
the point at which it acts along the line of the loop. If the action
takes place at an intersection, as in the example shown, the
deformation adds two plaquettes along each line going through the
intersection.
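A sketch of this deformation, reusing \texttt{reduce\_chain} and \texttt{UNIT} from the sketches above (the admissibility check, i.e. setting the action to unity whenever the intersection class would change, is omitted here):
\begin{verbatim}
def deform(chain, site, mu, origin=(0, 0, 0)):
    # insert u_mu before every link ending at the site and u_{-mu}
    # after every link starting at it, then reduce; this adds a pair
    # of plaquettes around each passage of the loop through the site
    out, p = [], origin
    for u in chain:
        q = tuple(a + b for a, b in zip(p, UNIT[u]))   # link runs p -> q
        if q == site:
            out.append(mu)
        out.append(u)
        if p == site:
            out.append(-mu)
        p = q
    return reduce_chain(out)

# push the corner (1,1,0) of a plaquette in the +3 direction
print(deform([1, 2, -1, -2], (1, 1, 0), 3))  # -> [1, 3, 2, -1, -3, -2]
\end{verbatim}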
We now define an {\em admissible deformation} as a deformation that
does not change the intersection type of the loop. Admissible
deformations are constructed with the operator $d_\mu(n)$, but defining
its action to be unity in the case in which the loop would change its
intersection class as a consequence of the deformation. Typical
examples of deformations that change the intersection class arise when
one has parallel lines separated by one lattice spacing in the loop
and one deforms in a transverse direction. These deformations are set
to unity.
We now consider a quantum representation formed by wavefunctions
$\Psi(\gamma)$ of loops on the lattice. One can therefore introduce an
operator which implements the
admissible deformations of the loops ${\cal D}_\mu(n) \Psi(\gamma)=
\Psi(d_\mu(n) \gamma)$ and an associated operator,
\begin{equation}
{\cal C}_\mu(n) \Psi(\gamma)
\equiv {(1+P_\mu(n)) \over 4} \left({\cal D}_\mu(n) \Psi( \gamma)
-{\cal D}_{-\mu}(n) \Psi(\gamma)\right).
\end{equation}
where the weight factor $P_\mu(n)$ counts the number of non-admissible
deformations that the displacements would create. If both displacements
are admissible the overall factor is $1/4$, if one is non-admissible, it
is $1/2$. This factor is needed to ensure consistency of the algebra of
constraints.
We will show that this operator, when acting on a holonomy on the
lattice, produces the usual diffeomorphism constraint of the Ashtekar
formulation in the continuum limit. We will take the continuum limit
in a precise way: we will assume that loops are kept at a fixed length
and the lattice is refined. As a consequence of this, loops will never
have two parallel sections separated by only one plaquette. Due to
this, for any given calculation we only need to consider three
different kinds of points in the loop: regular points, corners and
intersections. The intersections can have corners or go ``straight
through''.
Before going into the explicit computations it is worthwhile analyzing
to what extent one can recover diffeomorphisms from a lattice
construction. After all, all deformations on the lattice are discrete
and are therefore {\em homeomorphisms}. It is clear that at regular
points of the loop there is no problem, the addition of plaquettes
becomes in the limit the infinitesimal generator of deformations, as
we shall see. The situation is more complicated at
intersections. Deformations that do not occur at the intersection, but
at points adjacent to it can change the nature of the
intersection. For instance, an intersection that goes ``straight
through'' can be made to have a kink by deforming at an adjacent
point. In the continuum, a diffeomorphism {\em cannot} change straight
lines into lines with kinks, therefore the above kind of deformations
must be forbidden if one wants the lattice transformations to
correspond to diffeomorphisms in the continuum. Because of the way we
are taking the continuum limit this situation does not occur, since
all points of the loop are either {\em at} an intersection or {\em far
away} from it. We will see, however, that this problem resurfaces when
one wants to compute the diffeomorphism algebra, and we will discuss
it there.
In order to see that the above operator corresponds in the continuum
to the generator of diffeomorphism we consider a holonomy along a
lattice loop $T^0(\gamma)\equiv {\rm Tr}[\prod_{l \in \gamma}U(l)]$ where
$U(l)=\exp\left(a\, u_l^b A_{b}(n)\right)$, where $a$ is the lattice spacing, $u_l^b$ is the unit vector along the link $l$ and $A_b(n)$ is
the Ashtekar connection at the site $n$. Then,
\begin{eqnarray}
{\cal C}_{\mu}(n) T^0(\gamma) = {(1+P_\mu(n))\over 4}
[T^0(d_\mu \gamma)-T^0(d_{-\mu}\gamma)].
\end{eqnarray}
In the limit $a \rightarrow 0$ while the loop remains finite, this
action is always a local deformation (in the sense that there are no
lines of the loop in a neighborhood of each other). Assuming the
connections are smooth, we get,
\begin{eqnarray}
{\cal C}_{\mu}(n) T^0(\gamma) ={\textstyle {1\over 2}}\left(
a^2 N_n(\gamma)Tr[F_{ab}(n)U(\gamma)]u_{\mu}^a u^b_{\nu}(n)\right.\\
\left.
+a^2N_n(\gamma)Tr[F_{ab}(n)U(\gamma)]
u_{\mu}^a u^b_{\nu'}(n)\right) \label{difinf} \nonumber
\end{eqnarray}
where $\nu$ and $\nu'$ represent the links adjacent to the site $n$ on
the loop and $N_n(\gamma)$ is a function that is $1$ if $n$ is on the loop
$\gamma$ and zero otherwise, so $N_n(\gamma)=\sum_{n'\in\gamma}
\delta_{n,n'}$ where $\delta_{n,n'}$ is a Kronecker delta.
But
\begin{equation}
\lim_{a \rightarrow 0} {{N_n(\gamma) a (u^a_\nu+u^a_{\nu'})}\over{2 a^3}}=
\int_\gamma dy^a\delta(x-y) \equiv X^a(x,\gamma) \label{tan}
\end{equation}
where $x=\lim_{a\rightarrow 0} na$, $y=\lim_{a\rightarrow 0} n'a$
and we have used that
$\lim_{a\rightarrow 0} \delta_{na,n'a}/a^3 =\delta(x-y)$. Therefore
\begin{eqnarray}
\lim_{a \rightarrow 0} \textstyle{1/a^4}{\cal C}_{\mu}(n)T^0(\gamma)=
u^a_{\mu}X^b(x,\gamma)\Delta_{ab}(\gamma^x)T^0(\gamma)=\\
u^a_{\mu} \int_\gamma dy^b \delta(x-y) \Delta_{ab}(\gamma^y)T^0(\gamma)=
u^a_{\mu} F_{ab}^i(x) {\delta \over \delta A_b^i} T^0(\gamma) =
u^a_\mu \hat{\cal C}_a T^0(\gamma)
\end{eqnarray}
which is the explicit form of the vector constraint
$\hat{\cal C}_b = \hat{\tilde{E}}^a_i \hat{F}_{ab}^i$
in the Ashtekar formulation.
This procedure is valid for any regular point of the loop or corner.
A similar procedure may be followed for a site including intersections
or corners. It can be straightforwardly verified that the vector
constraint is also recovered in those cases.
\subsection{The diffeomorphism constraint: constraint algebra}
In order to have a consistent quantum theory, one has to show that the
quantum constraint algebra reproduces to leading order in $\hbar$ the
classical one. In a lattice theory, the objective is to show that the
quantum constraint algebra reproduces the classical one {\em in the
continuum limit}. Specifically, for the case of diffeomorphisms,
\begin{equation}
[\lim_{a\rightarrow 0} {{\cal C}_\mu(n)\over a^4},\lim_{a\rightarrow 0}
{{\cal C}_\nu(n')\over a^4}] = \lim_{a\rightarrow 0} {1\over a^8}
[{\cal C}_\mu(n),{\cal C}_\nu(n')].
\end{equation}
We will here show that the correct algebra is reproduced in the limit
in which the lattice spacing goes to zero. This is an important calculation,
since it is not obvious that diffeomorphism symmetry can be implemented in
a square lattice framework as we propose here. We will see that there are
subtle points in the calculation. We will present the calculation
explicitly only for a regular point of the loop; even in that
case the calculation is quite involved.
\begin{figure}[t]
\hskip 1cm \psfig{figure=fig2.eps,height=10cm}
\caption{The action of the first portion of terms in the commutator of
two diffeomorphism constraints acting at a regular lattice point.}
\label{deformaprimero}
\end{figure}
Let us now consider the algebra of the operators ${\cal C}_\mu(n)$ and
${\cal C}_\nu(n')$ and its continuum limit. As before, we assume that
a refinement of the lattice has taken place with the loop length
remaining finite in such a way that any point of the loop is either in
a corner, intersection or regular point. We will only discuss
explicitly the case in which the commutator is evaluated at a regular
point. To simplify the notation, we will consider $\mu$ and $\nu$ in
directions perpendicular to the loop and we take three coordinate axes
$1,2,3$ with $3$ parallel to the loop and the other two perpendicular.
We will compute $[\hat{C}_2(n),\hat{C}_1(n')] \psi(\gamma)$. Let us
consider first the term $\hat{C}_2(n)\hat{C}_1(n')$. In order for it
to be nonvanishing, the point $n'$ has to be on the loop $\gamma$.
The first diffeomorphism generates two
terms, corresponding to the addition of two plaquettes in the forward
$1$ direction and backward. The
second diffeomorphism will
lead to non-zero commutator
if the point $n$ lies in
one of the points marked in figure \ref{deformaprimero} on the
deformed loop. There are five possible such points along the
deformation. At each the action of the second diffeomorphism
generates two terms. The resulting ten deformed loops are shown
explicitly in the figure \ref{deformaprimero}. There will be ten similar
terms resulting from the ``backwards'' action of the first
diffeomorphism. To clarify the calculation, let us concentrate on the
first pair of terms displayed in figure \ref{deformaprimero}. The
infinitesimal deformation of the original loop $\gamma$ generated
by the loop derivative operator \cite{Gaplb91}
is represented through the introduction in the holonomy of
field strengths $F_{ab}(P,p)$
, depending on a path $P$ and its end point $p$,
contracted with the element of area of the plaquette.
To unclutter the notation,
when a plaquette in the direction $1-2$ is added, we will
denote this as $F_{12}(p)$ dropping the dependence in the path $P$
(we assume the path goes from the basepoint of the loop to the point
of interest).
With this notation, the contribution of the first two
terms of figure \ref{deformaprimero} is (we are neglecting
all the contributions of powers greater than $a^2$),
\begin{eqnarray}
N_\gamma(n') &\{& {\rm Tr}[\, (\, F_{23}(n'-1)+F_{23}(n'-1)
\,) U(\gamma) ] \nonumber\\&&-
{\rm Tr}[ \, (-F_{23}(n'-1)-F_{23}(n'-1) \, ) U(\gamma) ]\}
\end{eqnarray}
where by $-1$, $-3$, etc.\ we denote the point displaced one lattice
unit in the corresponding direction, the sign indicating forward
or backward with respect to the orientation chosen in the trihedron shown in
the figure.
We see that the contribution of the plaquettes added by the first
diffeomorphism, namely the $F_{13}$'s, cancel each other at
this order. Taking into
account these cancellations, the result of all the terms considered
in figure \ref{deformaprimero} is,
\begin{eqnarray}
N_\gamma(n') &\{&\delta(n-n'+1)
{\rm Tr}[ (F_{23}(n'-1)+F_{23}(n'-1)) U(\gamma)]\nonumber\\
&&-\delta(n-n'+1)
{\rm Tr}[ (-F_{23}(n'-1)-F_{23}(n'-1)) U(\gamma)]\nonumber\\
&&+\delta(n-n'+1-3)
{\rm Tr}[ (F_{23}(n'-1+3)-F_{12}(n'-1+3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'+1-3)
{\rm Tr}[ (-F_{23}(n'-1+3)+F_{12}(n'-1+3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'+1+3)
{\rm Tr}[ (F_{12}(n'-1-3)+F_{23}(n'-1-3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'+1+3)
{\rm Tr}[(-F_{12}(n'-1-3)-F_{23}(n'-1-3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'-3)
{\rm Tr}[(-F_{12}(n'+3)+F_{23}(n'+3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'-3)
{\rm Tr}[(F_{12}(n'+3)-F_{23}(n'+3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'+3)
{\rm Tr}[(F_{23}(n'-3)+F_{12}(n'-3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'+3)
{\rm Tr}[(-F_{23}(n'-3)-F_{12}(n'-3)) U(\gamma)]\}
\label{eq:D_-1}
\end{eqnarray}
where, to make the continuum analysis more direct, we have used the
notation of Dirac deltas for what, strictly speaking, are Kronecker deltas
at this stage of the calculation.
We now need to consider the terms from the ``backwards'' action of the
first diffeomorphism constraint. This gives rise to the contributions
depicted in figure \ref{deforsegundo}, which are explicitly given by,
\begin{eqnarray}
N_\gamma(n') &\{&\delta(n-n'-1)
{\rm Tr}[ (F_{23}(n'+1)+F_{23}(n'+1)) U(\gamma)]\nonumber\\
&&-\delta(n-n'-1)
{\rm Tr}[ (-F_{23}(n'+1)-F_{23}(n'+1)) U(\gamma)]\nonumber\\
&&+\delta(n-n'-1-3)
{\rm Tr}[ (F_{23}(n'+1+3)+F_{12}(n'+1+3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'-1-3)
{\rm Tr}[ (-F_{23}(n'+1+3)-F_{12}(n'+1+3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'-1+3)
{\rm Tr}[ (-F_{12}(n'+1-3)+F_{23}(n'+1-3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'-1+3)
{\rm Tr}[(F_{12}(n'+1-3)-F_{23}(n'+1-3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'-3)
{\rm Tr}[(F_{12}(n'+3)+F_{23}(n'+3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'-3)
{\rm Tr}[(-F_{12}(n'+3)-F_{23}(n'+3)) U(\gamma)]\nonumber\\
&&+\delta(n-n'+3)
{\rm Tr}[(F_{23}(n'-3)-F_{12}(n'-3)) U(\gamma)]\nonumber\\
&&-\delta(n-n'+3)
{\rm Tr}[(-F_{23}(n'-3)+F_{12}(n'-3)) U(\gamma)]\}
\label{eq:D_+1}
\end{eqnarray}
\begin{figure}[t]
\hskip 1cm \psfig{figure=fig3.eps,height=7cm}
\caption{The second portion of the terms of the commutator of two
diffeomorphisms at a regular point of the lattice.}
\label{deforsegundo}
\end{figure}
The terms in the above two expressions can be combined to form
a series of differences of delta's times $F$'s,
\begin{eqnarray}
4N_\gamma(n') &\{&
\Delta_1 [ \;\; 2\delta(n'-n) {\rm Tr}[F_{23}(n'))U(\gamma)]\nonumber\\
&&+\delta(n'-n-3) {\rm Tr}[(F_{23}(n'+3)U(\gamma)]\nonumber\\
&&+\delta(n'-n+3) {\rm Tr}[F_{23}(n'-3)U(\gamma)] \;\;]\nonumber\\
&&+\Delta_3 [2\delta(n'-n) {\rm Tr}[F_{12}(n')U(\gamma)]\nonumber\\
&&+\delta(n'-n-1) {\rm Tr}[F_{12}(n'+1)U(\gamma)]\nonumber\\
&&+\delta(n'-n+1) {\rm Tr}[F_{12}(n'-1)U(\gamma)] \;\; ] \},
\label{eq:half-conmutator}
\end{eqnarray}
where the differences $\Delta_\mu$ mean evaluating the quantity at
$n'$ and at $n'+\mu$ and taking the difference.
The derivatives of $F_{23}$ along the direction 1 are formed by the
first six terms of (\ref{eq:D_+1}) and (\ref{eq:D_-1})
(the term multiplied by 2 comes from the first two lines
and the remaining from the next four lines).
The last four terms give vanishing
contributions for $F_{23}$. The derivatives along $3$ of $F_{12}$
are produced in two identical copies
by (\ref{eq:D_+1}) and (\ref{eq:D_-1}), combining the
contribution of each line with the one two lines below in $F_{12}$.
To understand this result it is useful to pay attention to the shading
in figure \ref{deforsegundo}, since we see that the contributions to
the action of the successive operators is mirrored by the shaded areas
of the figure. All the unshaded plaquettes cancel each other in the
various terms. Only the shaded ones survive. For instance, the $2-3$
shaded plaquettes to the front, of which we have two in the first
drawing of figure \ref{deforsegundo}, two in the second and one in
the third, fourth, fifth and sixth, minus the corresponding terms of
figure \ref{deformaprimero} give rise to the first, second and third
terms of equation (\ref{eq:half-conmutator}).
Equation (\ref{eq:half-conmutator})
can be rewritten as
\begin{eqnarray}
4N_\gamma (n') &\{& \Delta_1 [ \;
2\delta(n-n'){\rm Tr}[F_{23}(n')U(\gamma)] \nonumber \\
&&+ \delta(n-n'-3){\rm Tr}[F_{23}(n'+3)U(\gamma)] \nonumber \\
&&+\delta(n-n'+3){\rm Tr}[F_{23}(n'-3)U(\gamma)] \; ] \nonumber \\
&&+ \Delta_3 [ \; 2\delta(n-n'){\rm Tr}[F_{12}(n')U(\gamma)] \nonumber \\
&&+ \delta(n-n'-1){\rm Tr}[F_{12}(n'+1)U(\gamma)] \nonumber \\
&&+\delta(n-n'+1){\rm Tr}[F_{12}(n'-1)U(\gamma)] \; ] \, \}.
\label{eq:half-conm2}
\end{eqnarray}
In order to write the commutator
$[\hat{C}_2(n),\hat{C}_1(n')] \psi(\gamma)$
we have to subtract from (\ref{eq:half-conm2})
the same expression but interchanging
$1\leftrightarrow 2$ and
$n \leftrightarrow n'$. If we now express the $N_\gamma(p)$
as $\sum_{m\in \gamma} \delta(p-m)$ then we get
\begin{eqnarray}
[\hat{C}_2(n),\hat{C}_1(n')] =
{1\over 4}\sum_{m' \in \gamma} \delta(n'-m')
\{ \Delta_1 [ \; 2\delta(n-m'){\rm Tr}[F_{23}(m')U(\gamma)] \nonumber \\
+ \delta(n-m'-3){\rm Tr}[F_{23}(m'+3)U(\gamma)] \nonumber \\
+\delta(n-m'+3){\rm Tr}[F_{23}(m'-3)U(\gamma)] \; ] \nonumber \\
+ \Delta_3 [ \; 2\delta(n-m'){\rm Tr}[F_{12}(m')U(\gamma)] \nonumber \\
+ \delta(n-m'-1){\rm Tr}[F_{12}(m'+1)U(\gamma)] \nonumber \\
+\delta(n-m'+1){\rm Tr}[F_{12}(m'-1)U(\gamma)] \; ] \, \} \nonumber \\
-{1\over 4}\sum_{m \in \gamma} \delta(n-m) \{ \Delta_1 [ \;
2\delta(n'-m){\rm Tr}[F_{13}(m)U(\gamma)] \nonumber \\
+ \delta(n'-m-3){\rm Tr}[F_{13}(m+3)U(\gamma)] \nonumber \\
+\delta(n'-m+3){\rm Tr}[F_{13}(m-3)U(\gamma)] \; ] \nonumber \\
+ \Delta_3 [ \; 2\delta(n'-m){\rm Tr}[F_{21}(m)U(\gamma)] \nonumber \\
+ \delta(n'-m-2){\rm Tr}[F_{21}(m+2)U(\gamma)] \nonumber \\
+\delta(n'-m+2){\rm Tr}[F_{21}(m-2)U(\gamma)] \; ] \, \}
\label{eq:conmutator}
\end{eqnarray}
If we now neglect terms of higher order in the lattice spacing, we can
move all the delta's and $F$'s to the same locations and, taking into
account the explicit dependence on the lattice spacing $a$,
the equation (\ref{eq:conmutator}) becomes
\begin{eqnarray}
[\hat{C}_2(x),\hat{C}_1(y)] =
\lim_{a\rightarrow 0} [{\hat{C}_2(na)\over a^4},{\hat{C}_1(n'a) \over a^4}]
= \lim_{a\rightarrow 0} {1\over a^6} \sum_{m \in \gamma} \nonumber \\
&&\{ \; \; \delta(n'-m)
\Delta_1 \delta(n-m) {\rm Tr}[F_{23}(m)U(\gamma)]
\nonumber \\
&&- \Delta_2 \delta(n'-m) \delta(n-m) {\rm Tr}[F_{13}(m)U(\gamma)]
\nonumber \\
&&+ \delta(n'-m) \delta(n-m)
\Delta_1 {\rm Tr}[F_{23}(m)U(\gamma)] \nonumber \\ &&
+ \delta(n'-m) \delta(n-m)
\Delta_2 {\rm Tr}[F_{31}(m)U(\gamma)] \nonumber \\ &&
+ \delta(n'-m) \delta(n-m)
\Delta_3 {\rm Tr}[F_{12}(m)U(\gamma)] \nonumber \\ &&
+\Delta_3 \delta(n'-m) \delta(n-m)
{\rm Tr}[F_{12}(m)U(\gamma)] \nonumber \\ &&
+ \delta(n-m) \Delta_3 \delta(n'-m)
{\rm Tr}[F_{12}(m)U(\gamma)] \nonumber \\ &&
+ \delta(n-m) \delta(n'-m) \Delta_3
{\rm Tr}[F_{12}(m)U(\gamma)] \; \}
\label{eq:conmutator2}
\end{eqnarray}
where the power $1/a^6$ arises from the fact that implicit in all
the previous calculations was an $a^2$ power, as we mentioned at the
beginning.
The third, fourth and fifth terms of (\ref{eq:conmutator2})
cancel out by virtue of the Bianchi identity. The
sixth, seventh and eighth form a total derivative
along the 3-direction.
Recalling that $\delta(na-ma)/a^3 \rightarrow \delta(x-z)$ and
$\Delta_1 \delta(na-ma)/a^4 \rightarrow \partial_1 \delta(x-z)$
we arrive at
\begin{eqnarray}
[\hat{C}_2(x),\hat{C}_1(y)] = \nonumber \\
&&\{ \; \partial_1 \delta(y-x)
\int_\gamma dz \delta(y-z) {\rm Tr} [F_{23}(y)U(\gamma)] \nonumber \\
&&-\partial_2 (\, \delta(x-y) \,) \int_\gamma dz \delta (x-z)
{\rm Tr} [F_{13}(x)U(\gamma)] \; \}.
\end{eqnarray}
The calculation displayed above is true at a regular point of the
loop. A similar calculation can be performed at corners with the same
result. A problem arises, however, at intersections. This is related
to the issue we discussed before of homeomorphisms vs
diffeomorphisms. As we pointed out, the operator we introduced
generates diffeomorphisms at regular points of the loop or at
intersections or corners. A problem develops if one acts in points
that are immediately adjacent to intersections, since the deformation
can change the character of intersections (introducing kinks). One
cannot ignore this fact in the commutator algebra. When one acts with
two successive diffeomorphisms at an intersection, it is inevitable to
consider one of the operators as acting at a point adjacent to the
intersection {\em before taking the continuum limit}. This leads to
changes in the character of the intersection that we do not want to
allow. If we did so, we would not recover diffeomorphisms in the
continuum but homeomorphisms, which can introduce kinks in the
intersection. If one allows these kind of deformations in the lattice,
the calculation of the algebra at intersections goes through with no
problem. In the continuum, the only difference between the
homeomorphism and diffeomorphism algebra is given by the nature of the
smearing functions of the constraints, so it is not surprising that we
recover the same algebra. If one restricts the
action of the deformations in order to avoid changing the character of
intersections, by defining the action of the operator to be unity if
it changes the character of the intersection, the commutator algebra
fails at intersections. More precisely, the constraints {\em commute}
at intersections. Because this failure of the constraint algebra
occurs only at a zero measure set of points, one can still consider
the quantum theory to be satisfactory, since one is smearing the
constraints with smooth functions and therefore cannot distinguish
this algebra from one with different values of the commutators at a
zero measure set of points. We will adopt this latter point of view in
this paper, since our primary motivation is to implement
diffeomorphism symmetry in the continuum. This will also have a
practical consequence: because homeomorphism invariance is more
restrictive than diffeomorphism invariance we will see that it is
considerably easier to find invariants under diffeomorphisms than under
homeomorphisms.
\section{Knots on the lattice}
In this section we will discuss certain ideas of knot theory, in
particular those which pertain to knot invariants derived from
Chern--Simons theory, in the context of the lattice regularization. We
define a knot invariant on the lattice as a quantity dependent on a loop
that is invariant under admissible deformations of the loops.
The motivation for all this is that several of the knot invariants that
arise from Chern--Simons theory have difficulties with their definition.
These difficulties are generically known as framing ambiguities. They
refer to the fact that certain invariants are not well defined for a
single loop or their definition might have difficulties when extended to
loops with intersections. The usual solution to these difficulties is
to ``frame" the loops, ie, convert them to ribbons. It is the
mathematical language version of the problem of regularization in
quantum field theory. We will see that the lattice treatment provides a
natural framing for knot invariants. We will show that the results
obtained are in line with results obtained in the continuum and we will
see that the framing provided excludes regular isotopic invariants from
being candidates for states of quantum gravity. The framed invariants
will be well defined, but will not be invariant under diffeomorphisms on
the lattice.
\subsection{The framing problem in the continuum}
In order to consider concrete knot invariants on the lattice we will
give lattice analogues of constructions that lead to knot invariants in
the continuum. Let us therefore start by briefly recalling some of the
ideas that lead to knot invariants in the continuum. An important
development in the last years has been the realization that topological
field theories can be powerful tools to construct explicit expressions
for knot invariants. In particular, it was shown by Witten that in a
gauge theory given by the Chern-Simons action,
\begin{equation}
S_{CS} = {\rm Tr}\left[ \int d^3x \epsilon^{abc} (A_a \partial_b A_c
+{\textstyle {2 \over 3}} A_a A_b A_c)\right]
\end{equation}
the expectation value of the Wilson loop $W_\gamma[A]= {\rm Tr} P
\exp\left(i \oint_\gamma dy^a A_a\right)$ ,
\begin{equation}
<W(\gamma)> = \int DA W_\gamma[A] \exp\left(i {k\over 8\pi} S_{CS}\right)
\end{equation}
is an expression that satisfies the skein relation of the regular
isotopic knot invariant known in the mathematical literature as the
Kauffman bracket, evaluated for a particular combination of its
variables related to $k$.
A simpler version of the above statement is the observation that for
an Abelian Chern-Simons theory $S_{CS}^{\rm Abel} = {\textstyle {k\over
8\pi}}
\int d^3x
\epsilon^{abc} A_a \partial_b A_c$ the expectation value of
a Wilson loop is the exponential of the
Gauss (self) linking number of the loop,
\begin{equation}
\exp\left({\textstyle {i \over 2 k}} \oint_{\gamma} dx^a
\oint_{\gamma}dy^b \epsilon_{abc} {(x-y)^c\over |x-y|^3}\right).
\end{equation}
This expression is reminiscent of the linking number of {\em two}
loops,
\begin{equation}
{\textstyle {1 \over 4\pi}} \oint_{\gamma_1} dx^a
\oint_{\gamma_2} dy^b \epsilon_{abc} {(x-y)^c\over |x-y|^3},
\end{equation}
a quantity that measures the number of times one loop ``threads
through'' the other. There is however, an important difference: in the
latter expression the two integrals are along different loops and the
points $x$ and $y$ never coincide. In appearance, therefore, the
self-linking number is ill-defined.
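Before discussing the self-linking number further, it is instructive to check the two-loop formula numerically. The following sketch (our own naive Riemann-sum discretization of the double line integral) reproduces the linking number of two linked unit circles:
\begin{verbatim}
import numpy as np

m = 600
t = np.linspace(0, 2 * np.pi, m, endpoint=False)
# two unit circles forming a Hopf link (they do not touch)
x = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)      # in the z=0 plane
y = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)  # in the y=0 plane
dx = np.roll(x, -1, axis=0) - x
dy = np.roll(y, -1, axis=0) - y
r = x[:, None, :] - y[None, :, :]                        # x_i - y_j
num = (np.cross(dx[:, None, :], dy[None, :, :]) * r).sum(-1)
integrand = num / np.linalg.norm(r, axis=-1) ** 3
print(integrand.sum() / (4 * np.pi))   # -> approximately 1 in absolute value
\end{verbatim}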
In spite of its appearance, the integral is finite, but is ambiguously
defined: its definition requires the introduction of a normal to the
loop, which is not a diffeomorphism invariant concept. This problem is
known as the framing ambiguity and it is clearly related to
regularization. Suppose one attempts to define the self-linking number
by considering the linking number of a loop as the linking number of
the loop with a ``copy'' of itself, obtained by displacing the loop an
infinitesimal amount. This clearly regularizes the integral. However,
the result depends on how one creates the ``copy'' of the loop and how
it winds around the original loop. One of the main purposes of this
paper will be to show how to address the framing ambiguity in the
lattice. We will start by discussing the Abelian case.
The above problem is more acute if the loops have intersections. In
that case the integral that appears in the self-linking number is not
even finite. Again, one can in principle solve the problem through a
mechanism of framing. An important aspect of the lattice construction
will therefore be to provide a precise mechanism for framing knot
invariants with intersections.
Why is one interested in these particular invariants? There are two
reasons. On one hand, because they are constructed as loop transforms
of quantities in terms of connections these invariants automatically
satisfy the ``Mandelstam constraints'' that must be satisfied by
wavefunctions of quantum gravity in the loop representation. This
makes them candidates for quantum states of gravity. Moreover, it has
been concretely shown that some of the invariants that arise due to
Chern-Simons theory formally solve the Hamiltonian constraint of
quantum gravity in the continuum. The lattice framework is an
appropriate environment to discuss these solutions in a rigorous
setting.
\subsection{Abelian Chern-Simons theory on the lattice}
Let us start with the simple case of a $U(1)$ Chern-Simons theory and
study its lattice version. Although this is not directly connected
with quantum gravity we will see that it already contains several
ingredients we are interested in and in particular it allows the
discussion of the self-linking number, which plays a central role in
quantum gravity in the continuum.
The Chern-Simons invariant can be written naturally in the lattice in
terms of a wedge product, as discussed in references
\cite{FrMa,Po}. In order to give more details we need to briefly introduce
the differential form calculus on the lattice \cite{BeJo}.
A $k$-form
on the lattice is a function associated with a $k$-dimensional cell
$C_k$ in
the lattice that is skew-symmetric with respect to the orientation of
the cell. For instance, a zero-form is a scalar field (associated with
lattice sites), a one-form is a variable associated with lattice links
that changes sign when one changes the orientation of the link.
A two-form is a variable associated with a plaquette, with sign
determined by the orientation of the plaquette. The exterior
derivative is a map between $k$ and $k+1$ forms defined by,
\begin{equation}
d\phi(C_{k+1}) = \sum_{C'_{k} \in \partial C_{k+1}} \phi(C'_k)
\end{equation}
where $\phi(C_n)$ is an $n$-form and the sum is along the cells that
form the boundary of $C_{k+1}$. For instance, given a one form
$A(C_1)$, we can define a two-form $F(C_2)$ obtained by summing $A$
along all the links that encircle the plaquette $C_2$ and which we
denote as $F=d A$.
One can also introduce a co-derivative operator $\delta$ which associates a
$k-1$ form with a $k$ form, through
\begin{equation}
\delta \phi(C_{k-1}) = \sum_{\forall C_k / C_{k-1}\in \partial C_k}
\phi(C_{k}).
\end{equation}
{}From the above definitions one can see that both operators are
nilpotent,
\begin{equation}
\delta^2=0,\qquad\qquad d^2=0,
\end{equation}
and in terms of these operators one can define a Laplacian, which
maps $k$-forms to themselves,
\begin{equation}
\nabla^2 \equiv \delta d+d \delta.\label{defdelta}
\end{equation}
To conclude these mathematical preliminaries we introduce a notion of
inner product on the lattice. The inner product of two $k$-forms is
defined by,
\begin{equation}
<\phi,\varphi> = \sum_{\forall C_k} \phi(C_k) \varphi(C_k),
\end{equation}
and this product has the property of ``integration by parts'',
\begin{equation}
<d \phi,\varphi> = <\phi,\delta \varphi>.
\end{equation}
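These definitions are easy to realize numerically. The sketch below (ours; it assumes periodic boundary conditions and includes the orientation signs in $\delta$ needed for the inner-product identity above) builds $d$ on $0$- and $1$-forms on a small cubic lattice and verifies both $d^2=0$ and the integration-by-parts property:
\begin{verbatim}
import numpy as np

L = 4
rng = np.random.default_rng(0)
phi = rng.standard_normal((L, L, L))      # a random 0-form on the sites

def d0(phi):
    # 1-form on links: (d phi)(s, mu) = phi(s + e_mu) - phi(s)
    return np.stack([np.roll(phi, -1, axis=mu) - phi for mu in range(3)])

def d1(A):
    # 2-form on plaquettes (mu < nu):
    # (dA)(s) = A_mu(s) + A_nu(s+e_mu) - A_mu(s+e_nu) - A_nu(s)
    return np.stack([A[mu] + np.roll(A[nu], -1, axis=mu)
                     - np.roll(A[mu], -1, axis=nu) - A[nu]
                     for mu in range(3) for nu in range(mu + 1, 3)])

def delta1(A):
    # co-derivative on 1-forms, signed so that <d phi, A> = <phi, delta A>
    return sum(np.roll(A[mu], 1, axis=mu) - A[mu] for mu in range(3))

A = rng.standard_normal((3, L, L, L))     # a random 1-form
print(np.abs(d1(d0(phi))).max())          # ~ 1e-16: d^2 = 0 up to rounding
print(np.isclose((d0(phi) * A).sum(), (phi * delta1(A)).sum()))   # True
\end{verbatim}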
With these elements we are able to introduce the Chern-Simons form
through the definition of a wedge product that associates (in three
dimensions) a one-form to each two-form and vice-versa. Given a
two-form associated with a plaquette $C_2$, the wedge product defines
a one-form associated with a link $C_1$ orthogonal to $C_2$, whose
value is given by the value of the one-form evaluated on
$C_2$. Evidently there is an ambiguity in how to define the wedge
product since there are eight links orthogonal to each plaquette. We
will see that different definitions of the wedge product will
correspond to different framings in the context of knot
invariants. There exist several definitions which all imply the
following identities,
\begin{eqnarray}
\# d A &=& \delta \# A,\label{hash1}\\
\# \delta A &=& d \# A.\label{hash2}
\end{eqnarray}
One possible consistent way to define the $\#$ operation is to assign
to a two-form associated with an oriented plaquette an average of
the one-forms associated with each of the eight links perpendicular to
the plaquette along its perimeter. This differs from the definition
taken in \cite{Po} which assigns only one link, but we will adopt it
in this paper because it yields a more symmetric framing of the knot
invariants.
With the above notation, the Chern-Simons form in the lattice can be
written as,
\begin{equation}
S^{Abel}_{CS} = i k <F, \# A>.
\end{equation}
This action is gauge invariant under transformations $A \rightarrow
A+d \xi$, since the transformation implies
\begin{equation}
S_{CS}\rightarrow S_{CS} +
i k <F,\#d\xi> = S_{CS} +
i k <F,\delta \#\xi> = S_{CS} +
i k <d F,\#\xi> = S_{CS} +i k <d^2 A,\#\xi> = S_{CS}.
\end{equation}
One can now proceed to compute the expectation value of a Wilson loop
in a Chern-Simons theory on the lattice,
\begin{equation}
<W(\gamma)> = \int DA \exp\left(S^{\rm Abel}_{CS}\right) T^0(\gamma),
\label{cslat}
\end{equation}
which, after a straightforward Gaussian integration yields,
\begin{equation}
\exp\left(-{i\over 2 k} <\# l_\gamma,d \nabla^{-2} l_\gamma>\right)
\end{equation}
where $l_\gamma(C_1)$ is a one-form such that it is one if $C_1$
belongs to the loop $\gamma$ and zero otherwise. This expression takes
integer values and has a simple interpretation. In order to see this,
recall that in $S^3$ (or any simply connected region of an arbitrary
manifold), the one form $l$ satisfies $\delta l=0$ and therefore
can be written as a co-derivative of a two form $m$,
\begin{equation}
l=\delta m\label{gradiente}
\end{equation}
and substituting in the expression (\ref{cslat}) and using
(\ref{hash1}) and (\ref{defdelta}) expression (\ref{cslat}) reduces
to,
\begin{equation}
<\# m_\gamma,l_\gamma>.
\end{equation}
This expression is the particularization to one loop of the linking
number on the lattice, which we discuss in detail in the next section
and is usually called the self-linking number. We will also return, at
the end of it, to the self-linking number.
\subsection{The Gauss linking number in the lattice and self-linkings}
The linking number of two loops $\eta$ and $\gamma$ on the lattice is
given by
\begin{equation}
N(\eta,\gamma) = <\# m_\eta,l_\gamma>.\label{latlink}
\end{equation}
To understand this expression better notice that the simplest solution to
equation (\ref{gradiente}) consists of a set of plaquettes (it is easy to
see that the result is independent of the set chosen) with boundary
coincident with $\eta$ and consider an $m$ which is equal to $1$ on
each plaquette and $0$ otherwise. It is immediate from the definition
of $\delta$ that $\delta m$ only has a contribution on the boundary of
the set of plaquettes, that is, on the loop $\eta$. As a consequence, the one form $\# m$ takes value on all
links orthogonal to the plaquettes considered. Therefore the inner
product of these one-forms with $l_\gamma$ will be non-vanishing
only if $\gamma$ threads through $\eta$, which is the
traditional definition of the linking number of two loops. The result
of the calculation is $1$ if the loops thread once each other and
counts the number of threadings in the case of multiple threadings. If
the loops are not linked the result is zero.
Notice that the above definition also includes as a particular case
that of two intersecting loops. To simplify the calculation, notice
that in this case the contribution is equal to counting the number of
links perpendicular to one of the loops that belong to the other
loop and dividing by eight. For instance, in the case of two planar
loops intersecting in such a way that one is perpendicular to the
plane of the other, the contribution is $1/2$. If the same
intersection occurs at the corner of one of the loops, the
contribution is $1/4$. We therefore see that the expression for the
linking number on the lattice is automatically ``framed''. Recall that
the definition of the linking number in the continuum was ill-defined
for intersecting loops. The lattice provides a prescription to assign
values to the linking number in the case of intersections. Evidently,
the particular ``framing'' obtained depends on the definition for the
$\#$ product considered. If instead of summing over the eight links
associated with each plaquette and dividing by eight we had had chosen
just one link (as is done in reference \cite{Po}) the result would be
different. Such a choice implies a preferred orientation in the
lattice. The linking number of two intersecting loops would then depend
on which side of the loop the crossing occurs with respect to that preferred orientation.
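As an illustration of definition (\ref{latlink}), the following sketch (ours) evaluates $<\# m_\eta,l_\gamma>$ for a planar loop $\eta$ spanned by $xy$-plaquettes, with the eight-link averaging convention for $\#$ adopted above; for such a surface only the $z$-links of $\gamma$ pick up a contribution:
\begin{verbatim}
from collections import defaultdict

def hash_of_surface(plaquettes):
    # #m for unit xy-plaquettes given by their lower corners (i, j, k):
    # each plaquette spreads weight 1/8 over the eight z-links touching
    # its four corners (the links just above and just below each corner),
    # all taken with +z orientation
    w = defaultdict(float)
    for (i, j, k) in plaquettes:
        for c in [(i, j, k), (i+1, j, k), (i, j+1, k), (i+1, j+1, k)]:
            w[(c, 3)] += 1 / 8                       # up-link from c
            w[((c[0], c[1], c[2] - 1), 3)] += 1 / 8  # up-link into c
    return w

def linking(plaquettes, gamma):
    # <#m, l_gamma> for gamma given as a list of (start site, mu) links
    w = hash_of_surface(plaquettes)
    total = 0.0
    for (s, mu) in gamma:
        if mu == 3:
            total += w[(s, 3)]
        elif mu == -3:   # a downward link is an up-link with opposite sign
            total -= w[((s[0], s[1], s[2] - 1), 3)]
    return total

# eta = boundary of the 2x2 square [0,2]^2 x {0}; m = its four plaquettes
m = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
# gamma threads vertically through the interior site (1,1,0) and
# returns well outside the square
gamma = [((1, 1, -1), 3), ((1, 1, 0), 3), ((1, 1, 1), 1), ((2, 1, 1), 1),
         ((3, 1, 1), -3), ((3, 1, 0), -3), ((3, 1, -1), -1), ((2, 1, -1), -1)]
print(linking(m, gamma))   # -> 1.0
\end{verbatim}
Routing the returning segment instead through the site $(2,1,0)$, which lies on $\eta$ itself so that the two loops intersect, gives $1/2$, in line with the fractional values discussed above.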
The invariant discussed above (with the particular framing given for
the intersections) is invariant under the type of deformations that we
considered as analogues of diffeomorphisms in the lattice in section
2. Those deformations did not change the type of intersections of
loops. Therefore the value of the linking number for intersecting
loops is invariant. Had we admitted the homeomorphisms as the symmetry
of the theory instead, the linking number we defined above would not
have been invariant for intersecting loops, since its value depends on
the particular type of intersection considered.
Contrary to what happens in the continuum, in the lattice the
self-linking number defined as the linking number of a loop with itself
is a well defined quantity without the introduction of any external
framing. If the loop is planar, the self-linking number vanishes. If
not, it will be determined by the kinks and self-intersections of the
loop. The evaluation in practice of the self-linking number can be done
by just applying the definition of the linking number twice with the same
loop. However the resulting quantity is {\em not} invariant under the
diffeomorphisms on the lattice we have considered. To see this,
consider a planar loop and apply a deformation perpendicular to the
plane of the loop. Due to definition (\ref{latlink}) the only
nonvanishing contributions come from the two links emerging
perpendicular to the loop in the deformation. At regular points, each
link has a contribution equal and opposite in sign and cancel each
other. However, at corners, the link at the corner contributes $1/2$ of
what the other does (in the definition of the $\#$ product introduced
corners of a loop only have one associated plaquette instead of two as a
regular point) and the self-linking number is therefore not invariant.
There are several possible attitudes in the face of this non-invariance of
the self-linking number. One could try to limit the action of
diffeomorphisms such that they do not deform corners. This is
unacceptable. Corners are generic points in the lattice and
diffeomorphisms must act at them. Consider a large square loop in the
continuum. If one considers its lattice representation and
aligns it with the lattice directions, it has only
four corners. However, if one rotates it a bit, many new corners are
introduced. Therefore diffeomorphisms must act at corners. Another
possibility would be to alter the definition of the $\#$ operation
in such a way as to treat corners in the same footing as regular
points. Although we cannot rule out such a re-definition, we have been
unable to find a suitable one.
We therefore see that the lattice is useful to solve the problem of
framing of intersections for invariants of intersecting loops. It is
however, unable to solve the framing problem of regular isotopic
invariants. The corresponding quantities in the lattice are simply not
invariant.
One last point deserving attention is that we have not proven that the
Chern-Simons form that we introduced in the lattice is invariant under
the diffeomorphisms we considered. In order to do that we would need a
definition of the constraint in the connection representation, which
we do not have. It could be possible that the Chern-Simons form
considered is not invariant under the discrete diffeomorphisms
and there lies the root of the non-invariance of the self-linking number.
Finally, it is worthwhile pointing out that since the linking number
is obtained from an Abelian theory it displays a certain set of
``Abelian-like'' properties in terms of loops. For instance,
with respect to composition of curves and retracings,
\begin{eqnarray}
N(\gamma_1\circ\gamma_2,\gamma_3)&=&N(\gamma_1,\gamma_3)+
N(\gamma_2,\gamma_3) \label{abel1}\\
N(\gamma,\eta^{-1})&=&-N(\gamma,\eta),\label{abel2}
\end{eqnarray}
these properties will be crucial for the results we will derive in the
next section.
In the continuum, through the use of variational techniques, skein
relations have been provided for the Kauffman bracket knot polynomial
\cite{GaPu96}. These results contain as a particular case skein
relations for the linking number with intersections. It is worthwhile
pointing out that these results agree with the results we presented in
the previous paragraphs. The skein relation found states that,
\begin{equation}
N(\raisebox{-3pt}{\psfig{figure=li.eps,height=4mm}}) = {1\over 2}(
N(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})+
N(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}))
\end{equation}
which implies that the value of the linking number of two loops with an
intersection is equal to the average of the values of the linking number
when that intersection is replaced by an upper or an under crossing. In
one case the invariant will be $1$ and in the other zero and therefore
we recover the lattice result that the linking number of two loops that
are only linked through an intersection is one half. If the intersection
had kinks in it, the continuum results contain a free parameter. This
parameter can be adjusted to agree with the lattice results.
\section{The Hamiltonian constraint}
\subsection{Definition}
Let us now define the action of the Hamiltonian constraint. We will
propose an operator largely based on the experience of the continuum
\cite{Gaplb91}. We know the operator is only nonvanishing at points
where the loops have intersections. For pedagogical reasons we first
write the definition for a concrete simple loop and then we give the
general definition. Consider a figure eight loop $\eta$ as shown in figure
\ref{fig8}. The action of the Hamiltonian on the state is given by,
\begin{figure}[t]
\hskip 1cm \psfig{figure=figeight.eps,height=10cm}
\caption{The loops that arise in the action of the Hamiltonian on
a figure eight loop.}
\label{fig8}
\end{figure}
\begin{equation}
H(n) \psi(\eta) = {1 \over
4}[\psi(\eta_1)-\psi(\eta_2)+\psi(\eta_3)-\psi(\eta_4)].
\end{equation}
The action of the operator at intersections can be described as
deforming the loop along one of the tangents at the intersection and
rerouting one of the resulting lobes, minus the same rerouted
deformation taken in the opposite direction. This operation is carried
out for each possible pair of tangents at the intersection. The
``lobes'' need not be single lobes as depicted here; the situation is
the same at multiple intersections, where each pair of tangents
determines univocally two lobes in the loop.
Let us now describe the general definition of the Hamiltonian acting
on a loop with possible multiple intersections at a point with
possible kinks. We start from a generic loop on the lattice described
by a chain of links,
\begin{equation}
\gamma= (\ldots,u_-(n),u_+(n),\ldots,v_-(n),v_+(n),\ldots)
\end{equation}
where $u_-(n),u_+(n),\ldots$ are the incoming and outgoing links at the
intersection, which we locate at the site $n$. The intersection can
be multiple; in that case one has to choose pairs of lines that
go through it and add a similar contribution for each of them.
The general action of
the Hamiltonian is defined as,
\begin{equation}
{\cal H}(n) \psi(\gamma) = {1 \over 8}I(n,\gamma)
\sum_{l(n),l'(n)\in \gamma}
[\psi(\gamma'_{l,o,l'} \circ {\bar\gamma'_{l,l'}})-
\psi(\gamma''_{l,o,l'} \circ {\bar\gamma''_{l,l'}})] \label{hamilt}
\end{equation}
where the loop $\gamma'$ is defined by,
\begin{equation}
\gamma' \equiv (....u_+(n),u_-(n),u_+(n),{\bar
u}_+(n)...u_+(n),v_-(n),v_+(n),{\bar u}_+(n)...)
\end{equation}
and
\begin{equation}
\gamma'' \equiv (....\bar{u}_+(n),u_-(n),u_+(n),
u_+(n)...{\bar u}_+(n),v_-(n),v_+(n), u_+(n)...)
\end{equation}
where the function $I(n,\gamma)$ is one if $n$ is at an intersection
of $\gamma$ (and zero otherwise), and the summation runs over all pairs
of links $l,l'$ that start or
end at $n$. The notation $\gamma'_{l,o,l'}$ indicates the portion
of the loop $\gamma'$ that goes from the link $l$ to $l'$ through an
arbitrary origin $o$ fixed along the loop, whereas $\gamma'_{l,l'}$
represents the rest of the loop. An overbar denotes reverse
orientation. Therefore $\gamma'_{l,o,l'} \circ {\bar\gamma'_{l,l'}}$
corresponds, in the case of a figure eight loop, to the deformed and
rerouted loop we discussed above as $\eta_1$.
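As a purely illustrative aside (not part of the construction above), the
rerouting $\gamma'_{l,o,l'} \circ {\bar\gamma'_{l,l'}}$ can be pictured as a
simple list operation on a closed chain of links. The following Python
sketch uses our own (hypothetical) names and index conventions: the loop is
stored as a list of unit-step links with the origin $o$ placed before index
0, and $i<j$ mark the two passes of the loop through the intersection site.
\begin{verbatim}
def reverse_path(path):
    # Retrace a chain of links backwards: reverse the order and flip
    # the orientation of each unit-step link.
    return [tuple(-c for c in e) for e in reversed(path)]

def reroute(loop, i, j):
    # loop: closed chain of unit-step links, e.g. [(1,0,0), (0,1,0), ...].
    # i < j: positions of the two passes through the intersection site n.
    lobe_through_origin = loop[j:] + loop[:i]   # gamma_{l,o,l'}
    other_lobe = loop[i:j]                      # gamma_{l,l'}
    # Compose the first lobe with the second lobe retraced backwards.
    return lobe_through_origin + reverse_path(other_lobe)
\end{verbatim}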
An important property of the Hamiltonian that needs to be pointed out
is that its action on double intersections trivializes in the space of
wavefunctions that are invariant under diffeomorphisms. If the
wavefunction is invariant under diffeomorphisms, it is a fact that
$\psi(\eta_1)=\psi(\eta_2)$ and $\psi(\eta_3)=\psi(\eta_4)$ where
the $\eta_1,\ldots,\eta_4$ refer to the loops in figure
\ref{fig8}. Therefore we need only concern ourselves, when analyzing
solutions in terms of knot invariants, with triple or higher
intersections. On the lattice, if one does not consider double lines
(as is the case in this paper, for simplicity), that means only up to
triple intersections (possibly with kinks).
\subsection{Continuum limit}
Let us analyze the continuum limit of the above expression. What we
would like to study is its action on a holonomy based on a smooth loop
of a smooth connection and show that it yields the same expression as
the action of the Hamiltonian constraint of quantum gravity on a
holonomy in the connection representation. In order to see this let us
evaluate the Hamiltonian we just introduced on a Wilson loop
$W(\gamma) ={\rm Tr}(U(\gamma))$ where $U(\gamma)$ is the holonomy,
\begin{equation}
{\cal H}(n) W(\gamma) = {1 \over 8}I(n,\gamma)
\sum_{l(n),l'(n)\in \gamma}
{\rm Tr}[U(\gamma'_{l,o,l'} \circ {\bar\gamma'_{l,l'}})-
U(\gamma''_{l,o,l'} \circ {\bar\gamma''_{l,l'}})]
\end{equation}
and as in the section where we discussed the diffeomorphism
constraint, we represent the deformation of the loop introduced by the
Hamiltonian through the insertion of $F_{ab}$'s in the holonomy
multiplied by the element of area of the plaquette; specifically, we need
to compute the deformations of the rerouted loop shown in figure \ref{fig8},
\begin{eqnarray}
{\cal H}(n) W(\gamma) = {1 \over 4}I(n,\gamma)
\sum_{l'(n)<l(n)\in
\gamma}
&\{&
Tr[(1+a^2F_{ab}(\gamma_o^{l'}))(1+a^2F_{ba}(\gamma_o^{l'}
\circ{\bar\gamma_{l}^{l'}}))
U(\gamma_{l,o,l'} \circ {\bar\gamma_{l,l'}})] \nonumber \\
&&- Tr[(1+a^2F_{ba}(\gamma_o^{l'}))(1+a^2F_{ab}(\gamma_o^{l'}
\circ{\bar\gamma_{l}^{l'}}))
U(\gamma_{l,o,l'} \circ {\bar\gamma_{l,l'}})] \nonumber\\
&&+ Tr[(1+a^2F_{ba}(\gamma_o^{l'}\circ{\bar\gamma_{l}^{l'}}))
(1+a^2F_{ab}(\gamma_o^{l'}))
U(\gamma_{l,o,l'} \circ {\bar\gamma_{l,l'}})] \nonumber\\
&&- Tr[(1+a^2F_{ba}(\gamma_o^{l'}))(1+a^2F_{ab}(\gamma_o^{l'}
\circ{\bar\gamma_{l}^{l'}}))
U(\gamma_{l,o,l'} \circ {\bar\gamma_{l,l'}})]\}
u^a(l) u^b(l')
\end{eqnarray}
so the action of the Hamiltonian, listing explicitly the lattice
spacing, is finally given by,
\begin{equation}
{{\cal H}(n) \over a^6} W(\gamma) = a^2 {I(n,\gamma)\over a^6}
\sum_{l'(n)<l(n)\in \gamma}
\left\{ {\rm Tr}[F_{ba}(n) U(\gamma_{l,o,l'} \circ {\bar\gamma_{l,l'}})]
+ {\rm Tr}[F_{ab}(n) U(\bar\gamma_{l,l'} \circ
{\gamma_{l,o,l'}})] \right\} u^a(l) u^b(l').
\end{equation}
This expression is immediately identified as the lattice version of the
continuum formula (see for instance equation 8.32 of \cite{GaPubook}),
\begin{equation}
{\cal H}(x) W(\gamma) = \oint_\gamma dy^a \oint_\gamma dz^b
\delta(x-y)\delta(x-z)
{\rm Tr}F_{ab}(y)[U(\gamma_{y,o,z} \circ {\bar\gamma_{y,z}})+
U({\bar\gamma_{y,z}} \circ\gamma_{y,o,z})]
\end{equation}
which has been the starting point for the formulation of the
Hamiltonian constraint in the loop representation \cite{Gaplb91}, and
which corresponds to the action of the Hamiltonian constraint in the
connection representation when acting on a holonomy,
\begin{eqnarray}
{\cal H}(x) W(\gamma) &=& {\rm Tr}[F_{ab}(x)
{\delta \over \delta A_a(x)} {\delta \over \delta A_b(x)}] U(\gamma)\\
&=& {\rm Tr}( F_{ab}(x)\hat{E}^a(x) \hat{E}^b(x)) W(\gamma).
\end{eqnarray}
There are some general comments that one can make about this particular
choice of Hamiltonian constraint. Of course, there is significant
freedom in defining the operator. The choice we have made here,
especially in what concerns how to implement the action of the curvature
that appears in the constraint, is in line with what we did for the
diffeomorphism constraint. One could suggest immediately other
possibilities. The first that comes to mind is, when implementing the
curvature, to shift the loop in only one direction instead of shifting
it to the right and left by elements of area. This less symmetrical ordering
yields a quite different action of the Hamiltonian. Whereas our present
choice converts triple straight through intersections into double ones,
the unsymmetrical choice maps triple intersections to double and triple
ones. As we will see in the next section this would imply a difference
in the kinds of solutions one would find. It might also impact the issue
of the constraint algebra. Although we will not discuss the constraint
algebra for the Hamiltonian we are proposing, one can see that problems
might arise in computing the commutator of two Hamiltonians
successively, since they tend to reduce the order of intersections. One
should take these comments cautiously. All these issues
require examining the action of the Hamiltonian on all possible kinds
of intersections (for instance, multiply traversed, or with kinks),
not just straight through ones.
\section{Solutions to the Hamiltonian constraint}
We will now analyze solutions to the Hamiltonian constraint in terms
of knot invariants on the lattice, which are therefore solutions of
the diffeomorphism constraint as well and, as a consequence, are quantum
states of gravity. It has been known for some time from
formal calculations in the continuum
\cite{BrGaPu} that the second coefficient of the Conway polynomial is
annihilated by the Hamiltonian constraint of quantum gravity. This was
concluded after lengthy formal calculations both in the loop
representation and in the extended loop representation
\cite{DiGaGr}. In the latter case a regularized calculation was
performed but it required the introduction of counterterms. We will
show here that the lattice version of the second coefficient of the
Conway polynomial is annihilated by the lattice Hamiltonian
constraint.
We will also show that the third coefficient in terms of $\Lambda$ in
the $q=e^\Lambda$ expansion of the Jones polynomial is annihilated by
the Hamiltonian constraint. Prima facie, this {\em does not} agree
with the results of the extended loop representation. Although the
terms generated by the action of the Hamiltonian are quite similar in
both cases, they are not exactly equal and a cancellation occurs in
the lattice that does not happen in terms of extended loops. This
could be ascribed simply to the fact that we are dealing with two
different regularizations, so the results do not necessarily
have to agree. Alternatively, since on the lattice we are exploring only
a few examples of possible intersections in the calculations,
it could simply be that for more
complex intersections the coefficient is not annihilated and the results {\em do}
agree with those of the extended loop representation. The extended
loops naturally incorporate all possible types of intersections from
the outset.
It is worthwhile briefly recapitulating the results of the extended loop
approach to give context to the solutions we will present on the
lattice, although the derivations are completely independent. It is a
known fact that the exponential of the Chern--Simons form is a solution
of the Hamiltonian constraint with a cosmological constant $\Lambda$ in
the connection representation \cite{Ko,BrGaPu},
\begin{equation}
\Psi^{CS}[A] = \exp\left(-{6\over \Lambda} S_{CS}\right)
\end{equation}
where $S_{CS}= \int d^3 x {\rm Tr}[A \wedge \partial A +{2\over
3} A\wedge A \wedge A]$. It is not difficult to see that this state
is annihilated by the Hamiltonian constraint with cosmological constant.
This state has the property that the action of the magnetic field
quantum operator on it is proportional to the electric field (in the
gravity case the triad) operator. That makes the two terms in the
Hamiltonian constraint proportional to each other. One adjusts the
proportionality factor so they cancel.
If one transforms this state into the loop representation one gets,
\begin{equation}
\Psi^{CS}(\gamma)= \int DA\, e^{{6\over \Lambda} S_{CS}} W_\gamma[A]
\end{equation}
so this is equivalent to the expectation value of the Wilson loop in a
Chern--Simons theory. This is a well studied quantity first considered by
Witten \cite{Wi89}, and it is known
that, for the case of $SU(2)$ connections,
it corresponds to a knot polynomial
known as the Kauffman bracket with the polynomial variable $q$ taking the
value $q=\exp \Lambda$. It is also well known that the Kauffman bracket
is related in the vertical framing to the Jones polynomial through the
relation,
\begin{equation}
{\rm Kauffman}\, {\rm Bracket}_{e^\Lambda}(\gamma)=\exp(\Lambda {\rm
Gauss}(\gamma))\, {\rm Jones}_{e^\Lambda}
\end{equation}
where Gauss$(\gamma)$ is the self-linking number of the loop $\gamma$.
Using first formal loop techniques in the continuum and later extended
loop techniques, it was established that in order for the Kauffman
bracket to be a solution of the Hamiltonian constraint with
cosmological constant, it had to happen that the second coefficient of
the Jones polynomial (not in its original variable, but in the
infinite expansion resulting from setting that variable equal to $\exp(\Lambda)$)
had to be annihilated by the Hamiltonian constraint of quantum gravity
(with no cosmological constant). The coefficients of the Jones
polynomial in this infinite expansion are known to be Vassiliev
invariants and the second one is known to coincide up to numerical
factors with the second coefficient of the Alexander--Conway \cite{Co}
knot polynomial. The third coefficient in the expansion was shown {\em
not} to be a solution of the Hamiltonian constraint (with or without
cosmological constant) \cite{Gr}.
We will now examine the counterpart of these results in the lattice
approach. Of course, a full examination of any solution would require
evaluating the action of the Hamiltonian on all possible types of
intersections, including multiple lines. We will not do this here. We
will first study triple straight-through intersections and we will end
the section with a discussion of kinks.
\subsection{The second coefficient}
We need to define the second coefficient of the Alexander--Conway \cite{Co}
polynomial in the lattice, including intersecting loops. In order to do
this we draw from our knowledge of the fact that the second coefficient
of the Alexander--Conway polynomial coincides with the second
coefficient of the expansion of the Jones polynomial in terms of
$\Lambda$ when the polynomial variable is $\exp(\Lambda)$. As we
discussed before, this is related to the expectation value of a Wilson
loop in a
Chern--Simons theory. Putting together the results of \cite{GaPu96} with
the discussion of Abelian Chern--Simons theory on the lattice we presented
here (which allows us to find the Gauss linking number with
intersections) one gets that the second coefficient $a_2(\gamma)$
satisfies,
\begin{eqnarray}
a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}}) -
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}) &=&
a_1(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})\\
a_2(\raisebox{-3pt}{\psfig{figure=lo.eps,height=4mm}}) &=& 0,
\end{eqnarray}
where $a_1$ is an invariant that coincides with the Gauss linking
number if the involved loop has two components
(up to a factor, actually $a_1(\gamma_1,\gamma_2)=3
lk(\gamma_1,\gamma_2)$ with the definitions of this paper)
and is zero otherwise.
The relationship to the
Chern--Simons state allows us to define an extension of the second
coefficient for intersections, using the techniques of
reference \cite{GaPu96}. The result is,
\begin{eqnarray}
a_2(\raisebox{-3pt}{\psfig{figure=li.eps,height=4mm}}) &=& {1\over 2}
(a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}}) +
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})) +{1\over 8}
(a_0(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}}) +
a_0(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}))\\
a_2(\raisebox{-3pt}{\psfig{figure=lv.eps,height=4mm}}) &=&
\label{pricecol} a_2(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
\end{eqnarray} where $a_0=2^{n_c}/2$, with $n_c$ the number of
connected components of the link ($a_0=1$ for a single-component
knot).
The above relations imply that for the second coefficient we can turn
an intersection into an upper or under-crossing ``at the price'' of a
term proportional to the linking number and another proportional to the
number of connected components,
\begin{eqnarray}
a_2(\raisebox{-3pt}{\psfig{figure=li.eps,height=4mm}}) &=&
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}) +
{1\over 2} a_1(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
+{1\over 4}a_0(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})
\label{priceup}\\
&=&a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}}) -
{1\over 2} a_1(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
+{1\over 4}a_0(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}}).
\label{pricedown}
\end{eqnarray}
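To make the origin of Eqs.~(\ref{priceup},\ref{pricedown}) explicit (a
short intermediate step we add here for clarity), note that switching a
crossing does not change the number of connected components, so
$a_0(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})=
a_0(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})$, and therefore
\begin{equation}
a_2(\raisebox{-3pt}{\psfig{figure=li.eps,height=4mm}}) =
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})+
{1\over 2}\left[a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})-
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})\right]
+{1\over 4}a_0(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}),
\end{equation}
which reduces to Eq.~(\ref{priceup}) upon using the first skein relation
above; Eq.~(\ref{pricedown}) follows in the same way.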
We now consider the action of the Hamiltonian on $a_2$. We start with
a straight-through intersection for simplicity; we will later discuss
the case of an intersection with a kink. The action of the
Hamiltonian constraint requires considering the six possible pairs of
tangents that are involved in a triple intersection and in each case
produces two terms, deforming back and forth along one tangent the
loop in the direction given by the other tangent. In the case of the
straight-through intersection, for symmetry reasons, it is only
necessary to consider three pairs of tangents; the other three yield
the same contributions. The action of the Hamiltonian is easily
understood pictorially, so we refer the reader to figure \ref{a2fig}.
As can be seen, the orientation and routing through the intersection of the
original loop are labelled by the numbers 1--6 at the intersection. The
only information needed from the loop is that it has
a triple intersection and the connectivity suggested
by the numbers; otherwise it can have any possible kind of knottings
or interlinkings between petals
away from the intersection, which we denote in the
figure with black boxes.
\begin{figure}[b]
\hspace{3cm}{\psfig{figure=fig4.eps,height=12cm}}
\vspace{0.1cm}
\caption{The action of the Hamiltonian on the generic loop considered
in the $a_2$ calculation. Given a generic loop with a triple
intersection (a), the Hamiltonian splits the triple intersection (b)
into two double ones, back (f) and forth (c). For the case of the
second coefficient of the Conway polynomial the resulting loop can be
rearranged into a loop without intersections using the skein
relations. First one uses the skein relation for an intersection with
a kink and obtains loop (d). Then one uses the skein relation for
regular intersections to convert the loop to a loop without
intersections. Similar operations are depicted in (f-h) for the other
deformation produced by the action of the Hamiltonian. From (f) to (g)
one again uses the skein relation for the intersection with a kink and
from (g) to (h) one uses the one for regular intersections. Since the
two resulting loops (h) and (d) are deformable to each other, the
contributions from the $a_2$ cancel. One is left with the
contributions of linking numbers and connected components introduced
when one uses the skein relation for regular intersections. When one
considers these contributions for all possible pairs of tangents, the
resulting expression vanishes, taking into account the Abelian nature
of $a_1$ and $a_0$. The black squares indicate that the different
lobes of the loop could have arbitrary knottings and
interlinkings. The result only depends on the local connectivity at
the intersection.}
\label{a2fig}
\end{figure}
As can be seen in the figure, the action of the Hamiltonian at the
first pair of tangents we choose (2 and 5) produces as a result the
difference of the values of the second coefficient evaluated on two
different loops. We will now study the value of this difference using
the skein relations for the second coefficient. Looking at figure
\ref{a2fig} we notice that in the loop obtained deforming to the
right, one of the intersections we are left with (the one at the
right) is a ``collision type'' intersection (marked $w$ in (c)) and
using the skein relation (\ref{pricecol}) we can remove the
intersection by simply separating the lines. Similar comments apply to
the intersection at the left in the loop obtained deforming the
original triple intersection to the left (f). We are now left with
the second coefficient evaluated on two different loops, each of them
with a single, straight through intersection (figures \ref{a2fig}d,g).
We can remove that intersection either using (\ref{priceup}) or
(\ref{pricedown}). The remarkable thing is that if we apply
(\ref{priceup}) in the diagram \ref{a2fig}d (``lifting the line'') and
(\ref{pricedown}) in \ref{a2fig}g (``lowering the line'') we produce
two loops with exactly the same topology. To be more precise,
formulas (\ref{priceup},\ref{pricedown}) have two types of
contributions, $a_2$ and $a_0$ evaluated on the loop with the line
raised or lowered, plus $a_1$ contributions. The point is that by
arranging things in the way we did, we end up with the difference of
the second coefficients evaluated on loops with exactly the same
topology (the difference comes because the ``forward'' and
``backward'' actions in the Hamiltonian come with opposite signs).
Therefore this contribution cancels (so does the contribution of the
$a_0$'s). Let us now concentrate on the terms involving $a_1$'s.
These terms imply replacing the intersection by a reconnection
of the original loop into two disjoint loops. For figure
\ref{a2fig}d this yields a link consisting of $\gamma_3^{-1}$ and
$\gamma_1\circ \gamma_2^{-1}$. Given that $a_1$ for links with two
components is (up to a factor of 3 that is irrelevant for what
follows) the linking number of the components, the result for this
contribution is,
\begin{equation}
lk(\gamma_3^{-1},\gamma_1\circ\gamma_2^{-1})=
-lk(\gamma_3,\gamma_1)+lk(\gamma_3,\gamma_2)
\end{equation}
due to the Abelian relations (\ref{abel1},\ref{abel2}). For the loop
\ref{a2fig}g the linking number contribution is
\begin{equation}
lk(\gamma_2^{-1},\gamma_3^{-1}\circ\gamma_1) =
lk(\gamma_2,\gamma_3)-lk(\gamma_2,\gamma_1)
\end{equation}
so the total contribution to the Hamiltonian due to the deformations
along the (2-5) tangents is equal to
\begin{equation}
-lk(\gamma_3,\gamma_1)+lk(\gamma_2,\gamma_1).\label{1cont}
\end{equation}
\begin{figure}[b]
\hspace{3cm}{\psfig{figure=fig5.eps,height=5cm}}
\caption{The contribution to the Hamiltonian of the
deformation along the (2-3) pair of tangents.} \label{a2fig2}
\end{figure}
We now study the deformations along the (2-3) and (5-4) pairs of
tangents (the labels refer to the original intersection, shown in
figure \ref{a2fig}a), as shown in figures \ref{a2fig2} and
\ref{a2fig3}. Using exactly the same ideas, i.e., using (\ref{pricecol})
to remove collision intersections and (\ref{priceup}) or
(\ref{pricedown}) in straight through intersections in such a way as
to cancel all $a_2$ and $a_0$ contributions we get for the deformation
in figure \ref{a2fig2},
\begin{equation}
lk(\gamma_3\circ\gamma_2^{-1},\gamma_1)-
lk(\gamma_3,\gamma_2^{-1}\circ\gamma_1)=
-lk(\gamma_1,\gamma_2)+lk(\gamma_3,\gamma_2)\label{2cont}
\end{equation}
and similarly for the contribution along (5-4),
\begin{equation}
lk(\gamma_2,\gamma_3^{-1}\circ\gamma_1)-
lk(\gamma_3^{-1}\circ\gamma_2,\gamma_1)=
-lk(\gamma_2,\gamma_3)+lk(\gamma_1,\gamma_3) \label{3cont}
\end{equation}
and we therefore see, by adding (\ref{1cont},\ref{2cont},\ref{3cont})
that the total contribution to the Hamiltonian vanishes.
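Explicitly (a one-line check we add, using the symmetry
$lk(\gamma_a,\gamma_b)=lk(\gamma_b,\gamma_a)$ of the linking number), the
sum of the three contributions reads
\begin{equation}
\left[-lk(\gamma_3,\gamma_1)+lk(\gamma_2,\gamma_1)\right]+
\left[-lk(\gamma_1,\gamma_2)+lk(\gamma_3,\gamma_2)\right]+
\left[-lk(\gamma_2,\gamma_3)+lk(\gamma_1,\gamma_3)\right]=0,
\end{equation}
with the six terms canceling pairwise.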
Rigorously, one should carry out this kind of computation
for {\em all} possible pairs of
tangents. It turns out that due to symmetry reasons, for a straight
through intersection the other three pairs produce exactly the same
contributions as the ones we listed here. We have presented enough
elements for the reader to check this fact if needed.
\begin{figure}[b]
\hspace{3cm}{\psfig{figure=fig6.eps,height=5cm}}
\caption{The contribution to the Hamiltonian of the deformation along
the (5-4) pair of tangents.}
\label{a2fig3}
\end{figure}
So we see that the root of the annihilation of the Hamiltonian
constraint on the $a_2$ can be traced back to the fact that the skein
relations for that coefficient relate it to $a_1$, which is a very
simple invariant given its ``Abelian'' nature, and that allows several
cancellations to occur. We will see that, rather unexpectedly,
this also seems to be
the case for higher coefficients.
\subsection{The third coefficient}
The third coefficient in the expansion in terms of $q=e^\Lambda$ of
the Jones polynomial is not annihilated by the Hamiltonian constraint
in the extended loop representation \cite{Gr}. We will here apply our
lattice Hamiltonian to the third coefficient. We will see that it is
annihilated by the lattice Hamiltonian, at least for straight-through
triple intersections. The calculation goes exactly along the same
lines as in the previous section, that is, the Hamiltonian deforms and
reroutes in the same way. What changes are the skein relations
involved in managing the resulting action. Prima facie it seems like
the calculation is not going to give zero. The skein relations for the
third coefficient do not allow us to relate everything to the linking
number and its convenient Abelian properties that helped produce the
cancellation in the $a_2$ case. In fact, this is exactly what prevents
the extended loop calculation from giving zero \cite{Gr}. We will see,
however, that unexpected cancellations take place and the result is
zero.
The skein relations for the third coefficient are, again, obtained by
drawing on the relationship to the Chern--Simons state and using the
techniques of \cite{GaPu96}. The results are,
\begin{eqnarray}
a_3(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})-
a_3(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}}) &=&
a_2(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})-
a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})-
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})-
{1\over 4}\\
a_3(\raisebox{-3pt}{\psfig{figure=li.eps,height=4mm}}) &=&
{3\over 8} a_1(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
+a_3(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})+
{a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})+
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})-
a_2(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})\over 2}
+{1 \over 8}\\
&=&
{3\over 8} a_1(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
+a_3(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})-
{a_2(\raisebox{-3pt}{\psfig{figure=lplus.eps,height=4mm}})+
a_2(\raisebox{-3pt}{\psfig{figure=lminus.eps,height=4mm}})-
a_2(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})\over 2}
-{1 \over 8}\\
a_3(\raisebox{-3pt}{\psfig{figure=lv.eps,height=4mm}}) &=&
a_3(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})
\end{eqnarray}
where the last three equations show that again one can eliminate
intersections ``at a price''. In this case the price is getting
contributions proportional to the linking number and the second
coefficient. Notice however, that what appears is the second
coefficient evaluated for
$(\raisebox{-3pt}{\psfig{figure=l0.eps,height=4mm}})$. That means that
if one started from a loop with a single component, the loop on which
the second coefficient is evaluated might have two components. Up to
now we had only worried about $a_2$ evaluated for single loops. In
order to know its value for double loops, one can again resort to
the fact that $a_2$ is related to the Kauffman bracket and the latter
has to satisfy the Mandelstam identities for multiloops. Using the
relation of $a_2$ with the Kauffman bracket, one gets,
\begin{equation}
a_2(\gamma_1,\gamma_2) = a_2(\gamma_1\circ\gamma_2) +
a_2(\gamma_1\circ\gamma_2^{-1}) +{9 \over 2} lk(\gamma_1,\gamma_2)^2,
\label{mandela2}
\end{equation}
where $a_2(\gamma_1,\gamma_2)$ means evaluating the second coefficient
on the link formed by loops $\gamma_1$ and $\gamma_2$.
Armed with these relations, we again go through the same calculation as
before, i.e., figures \ref{a2fig}-\ref{a2fig3}. Again eliminating the
$a_3$ contributions by deforming the straight through intersections up
and down as needed in the back and forth action of the constraint, one
is left with contributions involving $a_2$, $a_1$ and $a_0$. It is not
too lengthy to prove that the $a_1$ and $a_0$ contributions cancel, and one
is left only with contributions of $a_2$'s evaluated on two loops. The result
is,
\begin{eqnarray}
{\cal H} a_3(\gamma_1\circ\gamma_2\circ\gamma_3) &=&
{1\over 8} \left [
a_2(\gamma_2^{-1},\gamma_3^{-1}\circ\gamma_1)-
a_2(\gamma_3^{-1},\gamma_2^{-1}\circ\gamma_1)+
a_2(\gamma_3,\gamma_2^{-1}\circ\gamma_1)\right.\\
&&\left. -a_2(\gamma_3,\gamma_2^{-1}\circ\gamma_1)+
a_2(\gamma_1,\gamma_2\circ\gamma_3^{-1})-
a_2(\gamma_2,\gamma_3^{-1}\circ\gamma_1)\right]\nonumber
\end{eqnarray}
where we have denoted by $\gamma_1$, $\gamma_2$ and $\gamma_3$ the
three ``petals'' determined by the triple self-intersection of the
loop on which we are acting. We also denote by $\circ$ the composition
of loops as before.
If one now uses Eq. (\ref{mandela2}) and the Abelian properties of
the linking number one sees that all contributions involving linking
numbers cancel. Furthermore, using that the remaining Mandelstam
identities for $a_2$, namely
\begin{eqnarray}
a_2(\gamma)&=&a_2(\gamma^{-1})\\
a_2(\gamma_1\circ\gamma_2\circ\gamma_3)&=&
a_2(\gamma_2\circ\gamma_3\circ\gamma_1)=
a_2(\gamma_3\circ\gamma_2\circ\gamma_1)
\end{eqnarray}
one sees that all contributions finally cancel.
It should be remarked that $a_3$ could not be a state of quantum
gravity, even if it is annihilated by the Hamiltonian constraint, because
it does not satisfy the Mandelstam identities that loop states have to
satisfy. This can easily be seen by considering the fact that
the Kauffman bracket does satisfy them and that at third order in the
expansion the coefficient of the Kauffman bracket is given by a
combination involving $a_3$ plus $a_2$ times the linking number and
the cube of the linking number. It is easy to see that $a_2$ times the
linking number does not satisfy the Mandelstam identities, so $a_3$
does not either.
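Schematically (with convention-dependent numerical factors, so this should
be taken only as an illustration of the structure), expanding
${\rm Kauffman}_{e^\Lambda}=\exp(\Lambda G)\,{\rm Jones}_{e^\Lambda}$ order
by order in $\Lambda$ gives for the third-order coefficient
\begin{equation}
K_3 = a_3 + G\, a_2 + {G^2\over 2}\, a_1 + {G^3\over 6}\, a_0,
\end{equation}
where $G$ is the self-linking (Gauss) number; for a single-component loop
$a_1=0$, leaving precisely the combination of $a_3$, $a_2$ times the
linking number, and the cube of the linking number quoted above.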
Therefore we see that for this particular kind of loops the result
vanishes in a very nontrivial way. It is yet to be determined what
will happen for loops with kinks or multiple lines at intersections.
\subsection{Intersections with kinks}
The action of the Hamiltonian we have is well defined at intersections
with kinks, in the sense that it is immediate to see that it has the
correct continuum limit. Therefore one is in a position to attempt to
apply the Hamiltonian to the coefficients we studied in the previous
sections for the case with kinks. In the lattice there is only a
finite number of possible cases ---at least if one restricts oneself
to simply traversed lines--- so there is the chance of carrying out an
exhaustive analysis. Rigorously speaking, it is not true that a
wavefunction is a state of quantum gravity until one has completed
such an analysis. We will not, however, carry it out here.
The reason for this is that the Hamiltonian acting on a triple
intersection with kinks (it is easy to see that for double
intersections it again identically vanishes) has a more complex action
than on straight through intersections. In straight through triple
intersections the Hamiltonian produced a loop with double
intersections only. This is not the case if there are kinks, as
figure \ref{intkink} shows. This is actually quite reasonable. If
the action of the Hamiltonian always simplified the kind of
intersection, one would run the danger of producing an operator that
identically vanishes after successive applications. This is obviously
not reasonable for the Hamiltonian constraint. It would also pose
immediate problems for the issue of the constraint algebra. However,
this reasonable fact complicates significantly the application of the
Hamiltonian to the coefficients we discussed before in the case of
kinks, since we now need skein relations for the coefficients, not
only for triple intersections, but for triple intersections with kinks
and double lines. The techniques of \cite{GaPu96} actually allow one to
figure out such skein relations, but a detailed analysis has not yet
been performed. Such an analysis is more complex than that of double
intersections, since it was shown that there are regularization
ambiguities in the definition of skein relations for knot invariants
with kinks. The calculations involved, however, are well defined and
the analysis can be performed; it just exceeds the scope of this
paper.
\begin{figure}
\hspace{3cm}{\psfig{figure=fig7.eps,height=5cm}}
\caption{The action of the Hamiltonian on a triple intersection with
kinks is not simply given by double intersecting loops.}
\label{intkink}
\end{figure}
The lack of a confirmation that the Hamiltonian annihilates the
coefficients in the case of kinks prevents us from really claiming
that one has a rigorous proof that the coefficients are solutions. In
fact, it could well be that the third coefficient is not annihilated
in the case of intersections with kinks and therefore the results
presented are in fact compatible with the extended loop
results. Further investigations should confirm whether this is the case.
Even if we were able to perform the calculations for all the possible
cases with kinks, we should recall that we are dealing here only with
simply-traversed loops. For multiply-traversed loops, again, the
computations are well defined, but have not been performed. In fact,
the number of possible cases to be analyzed is infinite, since one has
to consider multiple lines. One might worry therefore that it will
never be possible to show that a state is a solution. It might be
useful to rethink all of this approach in terms of spin networks,
where a more unified treatment of all possible cases can be had from
the outset. It might be possible that generic behaviors are found and
one is not really required to compute all possible cases.
\subsection{Regular isotopic invariants}
In the extended loop representation, apart from the second and third
coefficient, there exist other states of quantum gravity if one adds a
cosmological constant to the theory. The addition of a cosmological
constant implies that the Hamiltonian constraint gains a term of the
form $\Lambda {\rm det} g$ proportional to the determinant of the
three dimensional metric. It is not difficult to implement such a
term on the lattice, basing it on its action in the continuum
\cite{GaPuBa}, which is nontrivial only at triple
intersections and introduces reroutings and kinks (no
deformations). One would therefore be tempted to apply the
Hamiltonian with cosmological constant in the lattice on these states
and check if it is satisfied. Unfortunately, all states with
cosmological constant that appear in the extended loop representation
(essentially the exponential of the self-linking number and the
Kauffman bracket polynomial itself) are framing-dependent
invariants. This is not a problem in the extended representation,
which in a sense is a framing procedure, but is an inescapable
obstacle in the lattice context. As we argued in the section on knot
theory on the lattice, although lattice Chern--Simons theory provides
an unambiguous definition for the framing dependent invariants, the
resulting objects are not annihilated by the diffeomorphism
constraint. Therefore the most reasonable attitude is to not consider
these states in this context, at least in the particular
implementation we are proposing.
\section{Conclusions}
This paper can only be taken as the first exploratory steps in the
construction of a possible theory of quantum gravity using lattice
regularizations. We have discussed several issues we would like to
summarize here.
a) It seems possible to implement a lattice notion of
``diffeomorphisms'' in the sense of having a set of symmetries that
become diffeomorphisms in the continuum. These operations are
represented quantum mechanically by an operator that has the correct
continuum limit and the correct algebra in the continuum limit.
Moreover, the solutions to the constraint become in the continuum knot
invariants and are the basis for establishing a notion of knot theory on
the lattice.
b) We proposed a lattice regularization of the Hamiltonian constraint
that has a simple and geometrical action on the lattice. In fact, its
action can be viewed as a set of skein relations in knot space, though
we leave the discussion of this viewpoint to a separate paper
\cite{GaPu96b}.
c) We tie together the results of knot theory on the lattice and the
Hamiltonian constraint by showing that certain knot invariants are
actually annihilated by the constraint (at least in certain examples of
loops). These results have a direct counterpart in the continuum. They
also exhibit the strengths and weaknesses of the whole approach: though
the computations are always well defined, proving results rigorously
(e.g., claiming that a state is annihilated by the constraint) implies
in principle performing an infinite number of computations.
d) By starting directly in the loop representation and building a theory
that has certain properties that are desirable, one avoids the many
ambiguities one is faced with if one tries to first implement
classical general relativity on a lattice, quantize that theory and
only then introduce loops.
Concerning possible future developments of this whole approach, there
are several evident directions that need further study:
a) The analysis of the constraint algebra, in particular the part
involving the Hamiltonian constraint, is yet to be completed.
b) The extension of the results to intersections with kinks is needed,
especially to try to settle the issue of the third coefficient and to
consolidate the status of the solutions in general.
c) One could consider extending the results of lattice Chern--Simons
theory to the non-Abelian case. This would open up the possibility of
applying numerical techniques to the evaluation of the path integrals
involved in the definition of knot invariants. One could use this method
to confirm the results from other approaches. This would require,
however, the development of techniques to handle the
non-positive-definite Chern--Simons action numerically, although some
results in this direction already exist \cite{Gock}.
As far as relations with other approaches that are currently being
pursued in quantum gravity, it is evident that the whole idea of a
lattice approach can benefit many efforts. The details, of course
would change from one particular situation to another. In general it
would be quite important to extend the results presented here to the
spin network context, since it has many attractive features for
dealing with wavefunctions in the loop representation. One problem is
that in spin networks one is dealing from the beginning with arbitrary
numbers of kinks, multiple lines and intersections, so the whole
approach may have to shift from the one we have taken in this paper,
where we treated all these issues one at a time in increasing
complexity. On the other hand, the payoff could be important: it may
happen that the spin network perspective actually allows one to deal with
all these issues simultaneously in an efficient way. For instance
Kauffman and Lins \cite{KaLi} have made important progress in dealing
with certain aspects of intersecting knot theory in terms of the spin
network language. Moreover, recently Thiemann \cite{Th} has introduced
a very promising version of the Hamiltonian constraint of Lorentzian
general relativity (in this paper, as is usual in the new variables
formulation, we were really dealing with either the Euclidean theory
or complex general relativity) at the classical level, which can be
promoted to a well defined quantum operator. This is most naturally
accomplished in the context of spin networks. It would be very
interesting if one could find a simple action for the quantum lattice
version of Thiemann's Hamiltonian that annihilates the knot polynomials
we discussed in this paper. The techniques we discussed here could be
used as a guideline to perform that calculation.
An interesting possibility would be to try to derive the current
lattice approach from a four dimensional lattice theory using
transfer matrix methods \cite{Sch64}. This could allow the use of statistical
methods to provide solutions to the Hamiltonian constraint. The
connection with the four dimensional theory should elucidate the
role of the Lagrange multipliers in the lattice theory. As was
noted in \cite{FrJa}, if one starts from a four dimensional lattice
approach one does not necessarily recover a three dimensional theory
with free Lagrange multipliers, as the one we considered here.
Summarizing, we have shown that a lattice approach to canonical quantum
gravity in the loop representation is feasible, that it allows one to pose
several questions of the continuum theory in a well defined setting, and
that several results of the continuum are recovered in a rather
unexpected way, bringing notions of knot theory into the lattice
framework. We expect that in the near future several of the developments
outlined above can be pushed forward towards a more complete picture of
the theory.
\acknowledgements
We wish to thank Jorge Griego for help with the $a_3$ calculations.
RG and JP wish to thank Peter Aichelburg and Abhay Ashtekar for
hospitality and financial support at the International Erwin
Schr\"odinger Institute (ESI) in Vienna, where part of this work was
completed. HF wishes to thank Abhay Ashtekar and the CGPG for
hospitality. This work was also supported in part by grants
NSF-INT-9406269, NSF-PHY-9423950, by funds of the Pennsylvania State
University and its Office for Minority Faculty Development, and the
Eberly Family Research Fund at Penn State. We also acknowledge
support of CONICYT and PEDECIBA (Uruguay). JP also acknowledges
support from the Alfred P. Sloan Foundation through an Alfred P.
Sloan fellowship.
|
2,877,628,091,234 | arxiv | \section{Introduction}
\label{sec:intro}
Recent developments in large interferometers (e.g.,\,ALMA and NOEMA) have made it possible to spatially resolve the brightness distribution of \add{many more} astronomical objects. These observations have enabled us to obtain a three-dimensional (R.A., Dec., and the line-of-sight velocity) structure of the gas emission and two-dimensional images of the continuum \add{within galaxies} with high spatial resolution and sensitivity. As a result, the data allow for detailed image analysis; for example, \add{investigating} spectral features of spatially resolved regions, \add{characterizing} faint and extended structures, \add{performing} Fourier analysis of the image, etc. However, the spatial correlation of the noise in interferometric images makes it difficult to evaluate the uncertainty of the results. There has been a lack of quantitative understanding of the spatial correlation of noise and methods to evaluate the statistical uncertainty of measured quantities and the significance of scientific results, such as signal detection and image analysis.
\add{To estimate the statistical uncertainty of integrated fluxes or spectra under correlated noise, both variance and covariance of the pixel pairs in the integrated region need to be taken into account for the uncertainty propagation. Sun et al.\cite{Sun2014-nz} proposed a method based on an approximation that the covariance of noise between pixels is proportional to the synthesized beam. However, the covariance is actually proportional to the autocorrelation of the synthesized beam\cite{Refregier1998-ui}. Also, more importantly, the method approximates the synthesized beam with a single Gaussian. In contrast, the true synthesized beam has a complex structure with a main lobe inducing short-range strong noise correlation and side lobes inducing long-range weak noise correlation.
Such an oversimplified assumption can lead to underestimation of the uncertainty. A more widely used method to estimate the statistical uncertainty of integrated fluxes is based on the intuitive interpretation that the noise can be regarded as independent across beam-sized regions, as described by Alatalo et al.\cite{Alatalo2013-ix}. The noise variance of the integrated values is evaluated by scaling the noise variance of individual pixels by the number of beam areas in the integrating aperture. This method also implicitly assumes a Gaussian beam to estimate the area of the synthesized beam. To evaluate the statistical uncertainty of the best-fitting parameters in a model fit to interferometric data with correlated noise, Davis et al.\cite{Davis2017-lu} proposed a method to construct a covariance matrix from the synthesized beam, which describes the covariance of the noise, and to use it to compute the $\chi^2$ value. Again, they also assumed a simple Gaussian function for the synthesized beam.}
\add{Several studies have used Monte Carlo workarounds to avoid the need to characterize the correlated noise. Harikane et al.\cite{Harikane2020-nr} measure the statistical uncertainty of the integrated flux by randomly placing identical apertures in the emission-free region and adopting the root mean square of the summed values. Boizelle et al.\cite{Boizelle2019-wg} estimate the statistical uncertainty of the fitting parameters by Monte Carlo resampling of the best-fitting parameters: adding noise extracted from the emission-free regions to the original data and refitting models. Another common technique is to fit the model in the visibility plane, where the noise in the visibility measurements is independent, to measure the source size and shape. This method is particularly beneficial if the source is small compared to the resolution of the interferometer, since the model needs to be simple and axisymmetric for computational efficiency. However, in many cases, analysis in the image plane is necessary (e.g., complex structures such as spiral arms, bars, and clumpy structures)\cite{2014A&A...563A.136M}.}
\add{There have yet to be any attempts to evaluate the statistical uncertainty of general measurements using interferometric images (e.g., integrated fluxes, spectra, model fitting) by fully characterizing the detailed noise correlation. Refregier and Brown\cite{Refregier1998-ui} proposed to use the noise ACF to characterize the correlated noise of the Very Large Array (VLA) FIRST radio survey data. They used the noise ACF to explore the effect of the spatially correlated noise on the signal of the ellipticity correlation function, which encodes the imprint of the weak lensing signal by the large-scale structure of the universe. The noise ACF fully characterizes the noise correlation properties of interferometric images and provides the covariance of the noise between different pixels, which allows us to measure statistical uncertainties under noise correlation.}
In this paper, we present \add{a} method and \add{associated} Python code to characterize the spatial correlation of noise \add{in interferometric images} by measuring the noise autocorrelation function (ACF) and evaluat\add{ing} its effect on the measured quantities and the analysis results.
This paper is organized as follows.
In Sec.~\ref{sec:method}, we present the noise correlation properties of ALMA data characterized by the autocorrelation function (ACF) and show that the noise correlation originates from the synthesized beam (dirty beam) structures, which remain even in the \add{\textsc{clean}} image and cannot be removed by any deconvolution algorithm. In Sec.~\ref{sec:result2}, we introduce methods \add{for} (1) estimating the statistical uncertainties associated with spatially integrated flux or spectra\add{;} (2) generating simulated noise maps from the measured noise ACF, which are useful to estimate the statistical significance of the result obtained by any image analysis\add{;} and (3) constructing the covariance matrix from the noise ACF which can be used in the \add{$\chi^2$} formalism of the model fitting to the observed image, with example applications to real scientific data from Ref.~\citenum{Tsukui2021-sg}.
Throughout the paper, we use the noise map from emission line cube and continuum image data taken by ALMA Band 7 (2017.1.00394.S\add{;} PI\add{:} González López, Jorge) as an example, but the method proposed by this paper can be applied to other interferometric images. A Python package for easy application of the methods described in this paper, Evaluating Statistical Significance undEr Noise CorrElation (ESSENCE), is publicly available at \url{https://github.com/takafumi291/ESSENCE}.
\section{NOISE CHARACTERIZATION OF INTERFEROMETRIC IMAGE}
\label{sec:method}
\subsection{The characterization of spatially correlated noise}
\label{subsec:acf}
First, we consider a two-dimensional noise map $N(\mathbf{x})$, where $\mathbf{x}$ denotes the position of the pixels\add{. P}ixel regions with signal from the object of interest are excluded. The statistical properties of the noise are assumed to be uniform in the noise image, which appears to be valid for interferometric images\footnote{We discuss this in Sec.~\ref{subsec:origin}}.
In most of the literature, the noise in the radio interferometric image is quantified and reported with the root mean square \add{(rms)} of the noise map $N(\mathbf{x})$,
\begin{equation}\label{eq:eq1}
\sqrt{\langle N(\mathbf{x})^2\rangle} \equiv \sigma_\mathrm{N},
\end{equation}
where the brackets denote the expected value for each pixel, which is practically estimated by averaging over the noise map. The mean of the noise in the image is $\mu \equiv \langle N(\mathbf{x}) \rangle
\approx 0$, since most of the noise, represented by the system temperature $T_{\mathrm{sys}}$, is not correlated between a pair of antennas, and the power of the noise does not appear in the correlator output of interferometers such as ALMA. \add{E}xtended background emission such as the cosmic microwave background (CMB) is resolved out without total power observation. However, these \add{sources of} noise contribute to the random noise associated with the visibility measurements, which propagates into the noise on the image through the Fourier transform. In the rest of the paper, we assume that the mean of the noise map is zero or has already been subtracted, and thus the root mean square and the standard deviation of the noise can be used interchangeably. Figure~\ref{fig:fig1} shows the example ALMA \add{B}and 7 noise map and its histogram from the observation targeting the hyper-luminous infrared galaxy BRI 1335-0417 at a redshift of 4.4, which will be used in the later sections. The noise map is created by eliminating \add{astronomical} source\add{s} \add{by} 4 sigma clipping \add{as well as removing} pixels \add{adjacent to these} clipped \add{regions out to} 3 \add{times the} full width at half maximum (FWHM) of the synthesized beam.
The histogram of the pixel values in the noise map is well fitted by \add{a} Gaussian \add{function}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Left: Example ALMA \add{B}and 7 noise map. The source emission region is eliminated with the 4 sigma clipping; see text. Right: The histogram of the pixel values of the noise map. The red dashed line indicates the best-fit Gaussian with the mean $\mu=0.000$ and the standard deviation $\sigma=0.036$ (mJy beam$^{-1}$). \label{fig:fig1}}
\end{figure}
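The masking and the histogram fit can be reproduced with a few lines of
Python. The following is a minimal sketch with our own (hypothetical)
function name; it applies iterative sigma clipping only, omitting the
additional removal of pixels within $3\times$ the beam FWHM of clipped
regions described above:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def noise_stats(image, clip=4.0, niter=5):
    # Build a rough noise sample by iteratively sigma-clipping the
    # image pixels, then fit a Gaussian to the clipped values.
    data = image[np.isfinite(image)].ravel()
    for _ in range(niter):
        mu, sigma = data.mean(), data.std()
        data = data[np.abs(data - mu) < clip * sigma]
    # Maximum-likelihood Gaussian fit (returns mean and std. dev.).
    mu, sigma = norm.fit(data)
    return mu, sigma
\end{verbatim}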
When noise can be assumed to be Gaussian, the statistical and correlation properties of noise are fully quantified by the noise autocorrelation function (ACF)\cite{Refregier1998-ui},
\begin{equation}\label{eq:eq2}
\xi(\mathbf{x}_{i,j})\equiv\langle N(\mathbf{x}+\mathbf{x}_{i,j})N(\mathbf{x})\rangle,
\end{equation}
where the expected value is estimated by averaging over all pairs of pixels with relative distance $\mathbf{x}_{i,j}=(i,j)$ in the noise image. The value of the noise ACF at zero lag, $\mathbf{x}_{i,j}=\mathbf{0}$, is equal to the variance of the noise, $\xi(\mathbf{0})= \langle N(\mathbf{x})^2\rangle =\sigma^2_N$. When the noise has no inter-pixel correlations, the noise ACF becomes
\begin{equation}\label{eq:eq3}
\xi(\mathbf{x}_{i,j})=
\begin{dcases}
\sigma^2_N & \text{if } \mathbf{x}_{i,j}=0\\
0 & \text{otherwise}
\end{dcases}
\end{equation}
To evaluate the statistical uncertainty of the derived noise ACF, we first consider the number of independent pixel pairs $N_{\mathrm{pair}}$ among the number of all available pairs $N'_{\mathrm{pair}}$ used to evaluate the bracket in Eq.~\ref{eq:eq2}, since the pixels within a beam \add{area} are expected to be strongly correlated and not independent. We estimate the number of independent pixel pairs $N_{\mathrm{pair}}$ as the ratio of the number of all pixel pairs $N'_{\mathrm{pair}}$ to the number of pixels in the beam (the beam area in pixels)\footnote{The beam area in pixels is typically estimated by $2\pi b_\mathrm{maj} b_\mathrm{min}/(8\ln 2)$, where $b_\mathrm{maj}$ and $b_\mathrm{min}$ are the major and minor FWHMs of the main lobe of the synthesized beam (\add{the ``}\add{\textsc{clean}}" beam).} $A_{\mathrm{beam}}$,
\begin{equation}\label{eq:eq4}
N_{\mathrm{pair}}=N'_{\mathrm{pair}}/A_{\mathrm{beam}}.
\end{equation}
Then, the associated statistical uncertainty of the noise ACF, $\Delta\xi(\mathbf{x}_{i,j})$, is calculated as the usual standard error of the mean but with the independent sample size $N_{\mathrm{pair}}$, that is, the standard deviation of the product of the pixel values over all pairs of pixels with separation $\mathbf{x}_{i,j}$, divided by the square root of the number of independent pixel pairs $N_{\mathrm{pair}}$,
\begin{equation}\label{eq:eq5}
\Delta\xi(\mathbf{x}_{i,j})=\sqrt{\langle N(\mathbf{x}+\mathbf{x}_{i,j})^2 N(\mathbf{x})^2 \rangle/N_{\mathrm{pair}}}.
\end{equation}
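The ESSENCE package implements these estimators; the following
self-contained sketch of Eqs.~(\ref{eq:eq2}), (\ref{eq:eq4}) and
(\ref{eq:eq5}) (with our own variable names, not the ESSENCE API)
evaluates the sums at all lags at once with FFT-based correlations:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def noise_acf(noise, beam_area_pix):
    # noise: 2D map with source pixels blanked (NaN).
    # beam_area_pix: synthesized-beam area in pixels, A_beam.
    mask = np.isfinite(noise)
    n = np.where(mask, noise, 0.0)
    w = mask.astype(float)
    # fftconvolve with one copy flipped is a cross-correlation:
    # num[lag] = sum_x N(x) N(x + lag); npairs_all counts valid pairs.
    num = fftconvolve(n, n[::-1, ::-1], mode='full')
    npairs_all = fftconvolve(w, w[::-1, ::-1], mode='full')
    good = npairs_all > 0.5
    acf = np.zeros_like(num)
    acf[good] = num[good] / npairs_all[good]             # Eq. (2)
    # Second moment <N(x+lag)^2 N(x)^2> for the standard error.
    num2 = fftconvolve(n**2, (n**2)[::-1, ::-1], mode='full')
    second = np.zeros_like(num2)
    second[good] = num2[good] / npairs_all[good]
    npairs_indep = npairs_all / beam_area_pix            # Eq. (4)
    err = np.zeros_like(acf)
    err[good] = np.sqrt(second[good] / npairs_indep[good])  # Eq. (5)
    return acf, err
\end{verbatim}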
Figure~\ref{fig:fig2} shows the noise ACF (Eq.~\ref{eq:eq2}) computed for the noise map shown in Fig.~\ref{fig:fig1}, together with the synthesized beam of the observation, both of which are normalized so that the central value is one. The noise ACF shows a pattern similar to that of the synthesized beam, with a correlation signal near the center and correlation signals away from the center, corresponding to the main lobe and side lobes of the synthesized beam, respectively. This suggests that most of the correlation of the noise originates from the discrete Fourier transform involved in interferometric imaging, which is illustrated in the following subsection.
Note that noise ACFs are measured for the noise maps of the \add{B}and 7 continuum image (shown in Fig.~\ref{fig:fig2}), the [C\textsc{ii}] line moment 0 map (velocity integrated over the range of $-400$ to $400$ km s$^{-1}$\add{, where the velocity is computed with respect to the redshifted [C\textsc{ii}] line frequency for the galaxy's redshift of 4.4074\cite{Guilloteau97}}), and the [C\textsc{ii}] line cubes (each velocity channel map). These maps are primary-beam-uncorrected, \textsc{clean}ed images produced by the \textsc{clean} algorithm in CASA (see details in Ref.~\citenum{Tsukui2021-sg}).
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{The noise ACF computed for the ALMA Band 7 noise map (left), showing a similar pattern in the synthesized beam of the observation (right). \label{fig:fig2}}
\end{figure}
\subsection{Origin of the noise correlation}
\label{subsec:origin}
In interferometric observations, the measurable quantities are visibilities \add{(}Fourier amplitudes and phases\add{)} of the astronomical image at given spatial frequencies $(u,v)=\textbf{D}/\lambda$, which are related to the antenna baseline vector \textbf{D}\footnote{The separation vector of a pair of antennas.} projected onto the plane of the sky and the observed wavelength $\lambda$. The image is then computed by the Fourier transform of the measured visibilities.
To explore the origin of the noise correlation in the image, seen in Fig.~\ref{fig:fig2}, we start with the ideal case in which the observation measures visibilit\add{ies} at all spatial frequencies ($u,v$). The visibility of the source of interest is $V(u, v)$, whose Fourier transform is the true flux distribution of the source in the image, $\hat{S}(x,y)=\mathrm{FT}[V(u, v)]$, where FT denotes the Fourier transform. A measurement of $V(u,v)$ usually involves uncorrelated random noise, which we describe with the zero-mean random variable $\hat{N}_{\mathrm{vis}}(u,v)$. We assume that the statistical properties of the random variable $\hat{N}_{\mathrm{vis}}(u,v)$ are uniform as functions of $u$ and $v$, that is, the system noise temperature is the same for all antennas.
The image obtained $I(x,y)$ is the Fourier transform of the measurement $V(u,v)+\hat{N}_{\mathrm{vis}}(u,v)$,
\begin{equation}
\begin{split}
I(x,y)&=\hat{S}(x,y)+\hat{N}(x, y)\\
&=\mathrm{FT}[V(u,v)+\hat{N}_{\mathrm{vis}}(u,v)]
\end{split}
\end{equation}
where $\hat{N}(x, y)=\mathrm{FT}[\hat{N}_{\mathrm{vis}}(u,v)]$ is the noise component of the image, which is a random variable with zero mean\footnote{\add{The} Fourier transform of a random variable with zero mean is also a random variable with zero mean.}.
The noise component of the image $\hat{N}(x,y)$ is due to the random noise associated with the visibility measurements $\hat{N}_{\mathrm{vis}}(u,v)$; the resulting noise map $N(x,y)=\hat{N}(x,y)$ is not spatially correlated in this ideal case where all spatial frequencies are measured.
In practice, visibilities are measured only at a limited set of spatial frequencies \{($u_1,v_1$), ($u_2,v_2$), \ldots, ($u_M,v_M$)\} (the uv coverage).
The spatial transfer function $W(u,v)$ describes the spatial frequencies at which the visibility is measured; it is non-zero only where the visibility is actually sampled, and can be expressed as
\begin{equation}
W(u, v)=\sum_{i=1}^M \left[ \delta(u-u_i,v-v_i)+\delta(u+u_i,v+v_i) \right],
\end{equation}
where $\delta$ is the Dirac delta function.
The synthesized beam $b(x,y)$ is the Fourier transform of the spatial transfer function $W(u, v)$, $b(x,y)=\mathrm{FT}[W(u,v)]$. The resulting image $I(x,y)$, decomposed as the signal $S(x,y)$ from the source and noise map $N(x,y)$, is
\begin{equation}
\begin{split}
I(x,y) & =S(x,y)+N(x,y)\\
& =\hat{S}(x,y)*b(x,y)+\hat{N}(x,y)*b(x,y)\\
& =\mathrm{FT}[(V(u,v)+\hat{N}_{\mathrm{vis}}(u,v))W(u,v)],
\end{split}
\label{eq:eq8}
\end{equation}
where $*$ represents convolution.
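The following self-contained sketch illustrates this mechanism with a toy uv mask (a random 5\% sampling, not the actual ALMA coverage): white visibility noise multiplied by $W(u,v)$ yields image-plane noise whose ACF follows the ACF of the synthesized beam.
\begin{verbatim}
# Minimal sketch (toy uv coverage, assumed parameters): image noise from
# masked white visibility noise is spatially correlated like the dirty beam.
import numpy as np

rng = np.random.default_rng(42)
n = 256
noise_vis = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (rng.random((n, n)) < 0.05).astype(float)  # toy sampling function W(u,v)

# Dirty beam and noise map: Fourier transforms of W and W * noise_vis.
# (We keep the real part of the complex images for simplicity.)
beam = np.fft.fftshift(np.fft.ifft2(W)).real
noise_map = np.fft.fftshift(np.fft.ifft2(W * noise_vis)).real

def acf(img):
    # Circular autocorrelation via the Wiener-Khinchin theorem, peak = 1.
    power = np.abs(np.fft.fft2(img)) ** 2
    a = np.fft.fftshift(np.fft.ifft2(power)).real
    return a / a.max()

# The two ACFs track each other closely, cf. the relation derived below.
print(np.corrcoef(acf(noise_map).ravel(), acf(beam).ravel())[0, 1])
\end{verbatim}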
According to Eq.~(\ref{eq:eq8}), the noise component of the image $N(x,y)$ is the convolution of the uncorrelated noise $\hat{N}(x,y)$ with the synthesized beam $b(x,y)$, which explains why the noise ACF and the synthesized beam show a similar pattern in Fig.~\ref{fig:fig2}. Because of this, the noise in the image is well behaved; in particular, its statistical properties are uniform over the image, as assumed in Sec.~\ref{subsec:acf} when measuring the noise ACF.
For convenience, replacing the sky position $(x,y)$ with the pixel position $\mathbf{x}$, the noise map of the image in Eq.~(\ref{eq:eq8}) is written as
\begin{equation}
N(\mathbf{x})=b(\mathbf{x})*\hat{N}(\mathbf{x})=\sum_{i,j}b(\mathbf{x}_{i,j})\hat{N}(\mathbf{x}+\mathbf{x}_{i,j}).
\end{equation}
The autocorrelation of the noise map \add{then becomes}\cite{Refregier1998-ui}
\begin{equation}
\begin{split}
\xi(\mathbf{x}_{i,j})&=\langle N(\mathbf{x}+\mathbf{x}_{i,j})N(\mathbf{x})\rangle \\
&=\langle \sum_{i',j'}b(\mathbf{x}_{i',j'})\hat{N}(\mathbf{x}+\mathbf{x}_{i,j}+\mathbf{x}_{i',j'})\sum_{i'',j''}b(\mathbf{x}_{i'',j''})\hat{N}(\mathbf{x}+\mathbf{x}_{i'',j''}) \rangle \\
&=\sum_{i',j'}\sum_{i'',j''}b(\mathbf{x}_{i',j'})b(\mathbf{x}_{i'',j''})\langle\hat{N}(\mathbf{x}+\mathbf{x}_{i,j}+\mathbf{x}_{i',j'})\hat{N}(\mathbf{x}+\mathbf{x}_{i'',j''}) \rangle\\
&=\sigma_N^2 \alpha(\mathbf{x}_{i,j}),
\end{split}\label{eq:eq10}
\end{equation}
where we used the ACF property of the uncorrelated noise $\hat{N}$ (Eq.~\ref{eq:eq3}) in the fourth equality, and we have defined the beam autocorrelation $\alpha(\mathbf{x}_{i,j})$ as
\begin{equation}
\alpha(\mathbf{x}_{i,j}) = \sum_{i',j'}b(\mathbf{x}_{i',j'})b(\mathbf{x}_{i,j}+\mathbf{x}_{i',j'}).
\end{equation}
Equation~\ref{eq:eq10} implies that the noise ACF is the ACF of the synthesized beam up to a constant multiplicative factor, the variance of the noise. In Fig.~\ref{fig:fig4} we compare the ACF of the noise and that of the synthesized beam, along with the residuals (noise ACF minus beam ACF). Although the noise ACF and the beam ACF show a common characteristic pattern, they do not completely coincide. The difference of the two ACFs shows an extended weak positive correlation and a relatively large negative correlation around the main lobe in the residual. This disagreement is likely due not only to (1) the remaining contamination by the emission from the sources, but also to (2) the processes involved in imaging the visibility measurements. These are discussed in detail in Appendix~\ref{sec:ap1}, comparing Fig.~\ref{fig:fig4} obtained from the actual data with the one (Fig.~\ref{fig:fig12}) obtained from simulated data with a similar observational setup and realistic noise in the visibilities, but without emission in the sky. Note that our interest is in the statistical properties of the noise in the image plane, which we characterize by the noise ACF including the effects of contamination from the source and the imaging process.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Fig3.pdf}
\caption{Top left: the same noise ACF shown in Fig.~\ref{fig:fig2}. Top right: the ACF of the synthesized beam. Bottom: the residual of the noise ACF minus the ACF of the synthesized beam.}
\label{fig:fig4}
\end{figure}
Due to the limited spatial frequency coverage, the synthesized beam $b(x,y)$ has a complex structure with sidelobes that extend from the center to a large radius. The flux from the source is spread out by the sidelobes to distant pixels in the image. The \textsc{clean} algorithm, most commonly used in radio imaging, deconvolves the beam pattern $b(x,y)$ for signals with high S/N ($S(x,y)/\sigma_\mathrm{N}>3$) and replaces it with a \textsc{clean} beam without sidelobes (a Gaussian that approximates the main lobe of the synthesized beam). The \textsc{clean} algorithm successfully suppresses the influence of the sidelobes and produces a high-fidelity image, but cannot remove the spatial correlations that exist in the stochastic noise $N(x,y)$. Therefore, it is important to evaluate their effects on image analysis and signal detection, which we describe in Sec.~\ref{sec:result2}.
\section{EXAMPLE APPLICATION TO SCIENTIFIC DATA}
\label{sec:result2}
\subsection{Contribution of the correlated noise to the statistical uncertainty in the measured flux}
The most fundamental measurement in astronomy is the total flux distributed over some sky region in an image, measured by summing the pixel values over the region of interest (i.e., aperture photometry in optical astronomy). In particular, in the submillimeter bands of ALMA, the flux of the continuum emission arising primarily from thermal dust, and of line emission and absorption by various atomic and molecular gases, is used to estimate the physical properties of the interstellar medium (e.g., dust mass, gas mass, the energy source of the ionization or excitation, etc.). It is therefore important to estimate the uncertainty of the measured quantities. As shown in Fig.~\ref{fig:fig2}, the noise in interferometric images correlates significantly between pixels, making the estimation of the noise in the integrated flux difficult. In the previous literature, statistical uncertainties of integrated fluxes were estimated by one of two methods: (1) randomly placing identical apertures in the noise region of the image, measuring the sum within each aperture, and then adopting the rms as the noise in the sum of pixels in the aperture\cite{Harikane2020-nr}; and (2) assuming that regions in the image separated by a beam size do not correlate and adopting $\sigma_{\mathrm{N}}A_{\mathrm{beam}}\sqrt{N_\mathrm{beam}}$, where $N_\mathrm{beam}$ is the number of beams (independent regions) in the aperture\cite{Alatalo2013-ix}. $N_\mathrm{beam}$ is estimated as $A_{\mathrm{aperture}}/A_{\mathrm{beam}}$, where $A_{\mathrm{aperture}}$ and $A_{\mathrm{beam}}$ are the aperture area and the \textsc{clean} beam area in pixels, respectively. For convenience, we call methods (1) and (2) the "random aperture method" and the "independent beam method," respectively, in this paper.
\add{In the "independent beam method" ($\sigma_{\mathrm{N}}A_{\mathrm{beam}}\sqrt{N_\mathrm{beam}}$), the factor $\sigma_{\mathrm{N}}A_{\mathrm{beam}}$ is the standard deviation of the sum of noise in individual pixels within a beam assuming that the noise perfectly correlates within a beam. Then the standard deviation of the sum of the noise of each independent beam area in the aperture is computed by scaling by the square root of the number of independent beams $N_\mathrm{beam}$ within the aperture. The terms $A_{\mathrm{beam}}$ and $N_\mathrm{beam}$ in the $\sigma_{\mathrm{N}}A_{\mathrm{beam}}\sqrt{N_\mathrm{beam}}$ denote just the number of data points to be summed. Therefore, we caution readers that $\sigma_{\mathrm{N}}A_{\mathrm{beam}}\sqrt{N_\mathrm{beam}}$ has the same unit with $\sigma_{\mathrm{N}}$. Most interferometric maps and measured $\sigma_{\mathrm{N}}$ are in brightness units e.g., Jy beam$^{-1}$ km s$^{-1}$ or Jy beam$^{-1}$. So we need to divide $A_{\mathrm{beam}}$ to compare with the integrated flux or spectral flux density, e.g., Jy km s$^{-1}$ or Jy. $\sigma_{\mathrm{N}}A_{\mathrm{beam}}\sqrt{N_\mathrm{beam}}$ is a factor of $A_{\mathrm{beam}}$ different from $\sigma_{\mathrm{N}}\sqrt{N_\mathrm{beam}}$ described in Alatalo et al. \cite{Alatalo2013-ix} due to the unit difference where they assume the quantity in the unit of flux.}
This section introduces how to derive the statistical uncertainty associated with the spatially integrated flux directly from the computed noise ACF. We consider adding all the pixel values at pixel positions $\mathbf{x}$ within the sky region of interest S. The random noise $N(\mathbf{x})$ in the map is characterized by the noise ACF, $\xi(\mathbf{x}_{i,j})$. The 1$\sigma$ statistical uncertainty associated with the summed value within the pixel region S, $\sigma_{\mathrm{int}}$, can be estimated as
\begin{equation}
\begin{split}
\sigma_{\mathrm{int}}^2 & = \mathrm{Var}(\sum_{\mathbf{x}\in S}N(\mathbf{x}))\\
& = \sum_{\mathbf{x}\in S}\mathrm{Var}(N(\mathbf{x}))+\sum_{\mathbf{x}\in S} \sum_{\substack{ \mathbf{x}'\neq\mathbf{x}\\\mathbf{x}'\in S} }\mathrm{Cov}(N(\mathbf{x}),N(\mathbf{x}'))\\
& = N_{\mathrm{pix}}\sigma_\mathrm{N}^2+\sum_{\substack{\mathbf{x}_{i,j}=\mathbf{x}-\mathbf{x}'\\ \mathbf{x}'\neq\mathbf{x}\\\mathbf{x}, \mathbf{x}'\in S} }\xi(\mathbf{x}_{i,j}),
\end{split}\label{eq:14}
\end{equation}
where $N_{\mathrm{pix}}$ is the number of pixels in the region S, and Var and Cov indicate variance and covariance, respectively. The second term of the last line is the sum of the noise ACF over all possible pixel separation vectors $\mathbf{x}_{i,j}$ between two pixels within the region S. If the noise has no inter-pixel correlation, the second term becomes zero, resulting in $\sigma_{\mathrm{int}}=\sigma_{\mathrm{N}}\sqrt{N_{\mathrm{pix}}}$. The method that Sun et al.\cite{Sun2014-nz} proposed to estimate $\sigma_\mathrm{int}$ is equivalent to Eq.~\ref{eq:14}, but approximately substitutes the covariance $\xi(\mathbf{x}_{i,j})$ with $\sigma_\mathrm{N}^2 b(\mathbf{x}_{i,j})$; however, $\xi(\mathbf{x}_{i,j})=\sigma_N^2\alpha(\mathbf{x}_{i,j})$, as shown in Eq.~\ref{eq:eq10}.
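A direct implementation of Eq.~(\ref{eq:14}) is straightforward; the sketch below (a hypothetical helper, not the ESSENCE implementation) sums the measured ACF over all pixel-pair separations within the aperture, assuming the ACF map is large enough to cover every separation occurring in the aperture.
\begin{verbatim}
# Minimal sketch of the sigma_int calculation above.
import numpy as np

def sigma_int(xi, mask):
    """xi: 2D noise ACF centered at the array center, xi[center] = sigma_N**2.
    mask: boolean 2D array selecting the aperture pixels."""
    cy, cx = np.array(xi.shape) // 2
    ys, xs = np.nonzero(mask)
    var = 0.0
    for y1, x1 in zip(ys, xs):       # O(N_pix^2); fine for modest apertures
        dy = ys - y1 + cy            # separation vectors to all other pixels
        dx = xs - x1 + cx
        var += xi[dy, dx].sum()      # includes the zero-lag variance term
    return np.sqrt(var)
\end{verbatim}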
As an illustration, we use the spatially resolved [C\textsc{ii}] moment 0 map of BRI 1335-0417 taken by ALMA, shown in Fig.~\ref{fig:fig5}, where the emission spreads over multiple pixels. We calculated the noise in the integrated flux measured using a variety of apertures S of different sizes; the largest aperture is shown as a dotted circle in Fig.~\ref{fig:fig5}.
Figure~\ref{fig:fig6} shows the noise computed from the measured noise ACF compared to the previously used "random aperture" and "independent beam" methods. The noise calculated from the ACF is in excellent agreement with the random aperture method, while the independent beam method tends to overestimate $\sigma_{\mathrm{int}}$ at smaller apertures\footnote{Considering the limiting case in which the aperture is a single pixel, the value obtained by the independent beam method, $\sigma_N\sqrt{A_{\mathrm{beam}}}$, is clearly different from the fiducial value $\sigma_N$.} and underestimate $\sigma_{\mathrm{int}}$ at large apertures, showing that the "independent beam" assumption is oversimplified.
When the field of view is small (the field of view becomes smaller at higher frequency bands of ALMA) or the aperture area becomes large, the random aperture method cannot place apertures randomly within the limited emission-free region, and thus the standard error of the estimate increases, as shown by the blue shaded region in Fig.~\ref{fig:fig6}. The proposed method of calculating $\sigma_{\mathrm{int}}$, however, can provide the best estimate by exploiting all available data to estimate the noise ACF. The total [C\textsc{ii}] flux measured with the aperture shown in Fig.~\ref{fig:fig5} is 29.51 $\pm$ 1.05 Jy km s$^{-1}$ (1$\sigma$ statistical uncertainty calculated from the noise ACF).
\begin{figure*}[ht]
\begin{minipage}[t]{0.55\linewidth}
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{[C\textsc{ii}] velocity integrated intensity map of BRI 1335-0417\cite{Tsukui2021-sg}. The white contours are shown every 4$\sigma$ from 3$\sigma$ to 27$\sigma$. The white ellipse in the bottom-left corner indicates the FWHM of the main lobe of the synthesized beam. The dotted circle shows the aperture corresponding to the point with the largest number of pixels in Fig.~\ref{fig:fig6}. The [C\textsc{ii}] spectrum within the aperture is shown in Fig.~\ref{fig:fig9}. \\\label{fig:fig5}}
\end{minipage}
\hspace{0.01\textwidth}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[width=\linewidth]{Fig5.pdf}
\caption{Top: the noise rms in the integrated flux for apertures of different sizes, estimated by various methods: computed from the noise ACF (black points), estimated from the random aperture method (blue dashed line) with its standard error (blue shade), and estimated from the independent beam method (black solid line). Bottom: the fractional difference between the value computed from the noise ACF and those from the random aperture method and the independent beam method. \\\label{fig:fig6}}
\end{minipage}
\end{figure*}
To demonstrate the significant effect of spatially correlated noise, Fig.~\ref{fig:fig7} shows the noise variance $\sigma_\mathrm{int}^2$ in the integrated flux calculated from the ACF, along with the contributions of the noise variance of individual pixels in the aperture (the first term in Eq.~\ref{eq:14}; this is the value we would obtain if we were unaware of the noise correlation) and of the interpixel correlation in the aperture (the second term in Eq.~\ref{eq:14}). For an aperture with two pixels, the second term cannot mathematically exceed the first term. However, as the number of pixels in the aperture, $N_{\mathrm{pix}}$, increases, the contribution of the second term dominates over the first term, because the first term contains $N_{\mathrm{pix}}$ summands while the second contains $N_{\mathrm{pix}}(N_{\mathrm{pix}}-1)$. Ignoring the noise correlation therefore leads to a significant underestimation of the integrated flux uncertainty.
Figure~\ref{fig:fig8} further divides the variance due to the noise correlation into the effects of the main lobe (short-range correlation due to the main lobe of the synthesized beam) and the sidelobes (long-range pixel correlation due to the sidelobes of the synthesized beam). To illustrate the significance of the long-range correlation conservatively, we define the short-range (long-range) correlation component of the noise ACF as the part inside (outside) the ellipse whose radius is the beam FWHM. Once $N_{\mathrm{pix}}$ exceeds 200, the effect of the sidelobes becomes significant. This explains the deviation of the independent beam method from the true value (computed by the noise ACF or the random aperture method; see Fig.~\ref{fig:fig6}): the number of independent beams $N_\mathrm{beam}$ is estimated from the area of the \textsc{clean} beam, so the long-range correlation due to the sidelobes is not properly taken into account.
Another important measurement in astronomy is the shape of a spectrum integrated over a region of interest. Similarly to deriving the noise in the integrated flux, we can derive the underlying noise in the spatially integrated spectrum using the noise ACFs computed for every velocity channel of the data cube. Figure~\ref{fig:fig9} shows the spatially integrated [C\textsc{ii}] spectrum enclosed by the aperture shown in Fig.~\ref{fig:fig5}, with 1$\sigma$ and 3$\sigma$ noise levels. The [C\textsc{ii}] spectrum of BRI 1335-0417 is well described by a single Gaussian, without deviations from the Gaussian above 3$\sigma$. Because emission lines from interesting astronomical phenomena such as outflows, tidal tails, etc., are faint, it is important to estimate the noise accurately; otherwise, it will lead to false detections.
\begin{figure*}[ht]
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{fig6.pdf}
\caption{The measured noise variance estimated from the noise ACF for apertures S of different sizes (blue points and solid line), with the contribution from the noise variance of the individual pixels in the aperture (the first term of the last line in Eq.~\ref{eq:14}; green points and dotted line) and the contribution from the covariance due to the interpixel correlation (the second term of the last line in Eq.~\ref{eq:14}; orange points and dashed line). \label{fig:fig7}}
\end{minipage}
\hspace{0.01\textwidth}
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{fig7.pdf}
\caption{The covariance term due to the interpixel correlation shown in Fig.~\ref{fig:fig7} (the second term of the last line in Eq.~\ref{eq:14}; orange points and solid line), which is further decomposed into the long-range correlation due to the sidelobes of the synthesized beam (purple points and dotted line) and the short-range correlation due to the main lobe of the beam (red points and dashed line). \label{fig:fig8}}
\end{minipage}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig8.pdf}
\caption{The top panel shows the spatially integrated flux within the aperture shown in Fig.~\ref{fig:fig5} (solid black line) and the best-fit Gaussian (solid blue line). The bottom panel shows the residual (measured spectrum minus the best-fit Gaussian). The gray shade and dashed line in both panels show the statistical uncertainty of 1$\sigma$ and 3$\sigma$, respectively, calculated from the noise ACF. The figure is reproduced from Ref.~\citenum{Tsukui2021-sg} where the underlying noise is recalculated by the noise ACF of each velocity channel. \label{fig:fig9}}
\end{figure}
\subsection{Simulating the noise maps}
\label{sec:simulatenoise}
In image analysis, generating random noise based on the statistical properties of the measured noise is useful for assessing the significance of results. In this section, we describe how to generate random noise based on the noise ACF, which fully characterizes correlated Gaussian noise. Using an example of simulated noise, we demonstrate how ignoring correlated noise leads to misinterpretation of results.
Once the noise ACF is measured, the noise at the positions $\mathbf{x}_{i,j}$ in images with a size of $M\times M$ pixels can be generated randomly from the joint probability distribution, i.e., the probability that $N(\mathbf{x}_{i,j})$ takes a value in the small interval $[N_{i,j}, N_{i,j}+\mathrm{d}N_{i,j}]$, given by\cite{Binney2008-fu}
\begin{equation}\label{eq:jointprob}
\mathrm{d}p=\frac{\mathrm{d}N_{1,1}\cdot\cdot\cdot\mathrm{d}N_{i,j}\cdot\cdot\cdot\mathrm{d}N_{M,M}}{(2\pi)^{M^2/2}|{B}|^{1/2}}\exp{\left(-\frac{1}{2}\Sigma^M_{a,b,c,d=1}N_{a,b}{B}^{-1}_{a-c,b-d}N_{c,d}\right)},
\end{equation}
where ${B}^{-1}$ is the inverse of the matrix ${B}$ defined by
\begin{equation}
B_{i,j}=\xi(\mathbf{x}_{i,j})
\end{equation}
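A compact way to carry this out in practice is sketched below (a hypothetical helper under the stationarity assumption above, and assuming the measured ACF yields a valid positive semidefinite covariance; array sizes are kept small because the covariance has $(M^2)^2$ entries):
\begin{verbatim}
# Minimal sketch: draw correlated noise maps from the measured noise ACF.
import numpy as np
from scipy.stats import multivariate_normal

def correlated_noise(xi, m, n_draws=1):
    """Generate n_draws noise maps of m x m pixels; xi is the 2D noise ACF
    centered at the array center and must cover all separations up to m."""
    cy, cx = np.array(xi.shape) // 2
    idx = np.indices((m, m)).reshape(2, -1)       # pixel coordinates
    dy = idx[0][:, None] - idx[0][None, :] + cy   # pairwise separations
    dx = idx[1][:, None] - idx[1][None, :] + cx
    cov = xi[dy, dx]                              # B[m, m'] = xi(separation)
    mvn = multivariate_normal(mean=np.zeros(m * m), cov=cov,
                              allow_singular=True)
    return mvn.rvs(size=n_draws).reshape(n_draws, m, m)
\end{verbatim}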
Figure~\ref{fig:fig10} compares the noise of the observed data, noise randomly generated from the measured noise ACF using the joint Gaussian probability distribution (Eq.~\ref{eq:jointprob}; we used the \textsc{multivariate}\_\textsc{normal} function from the scipy package\cite{2020SciPy-NMeth}), and spatially uncorrelated Gaussian noise, all with the same standard deviation $\sigma_\mathrm{N}$. The observed noise and the noise generated using the noise ACF are qualitatively
similar, while the spatially correlated and uncorrelated noise look completely different, illustrating how dangerous it is to naively assume uncorrelated noise in image analysis.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{Fig9.pdf}
\caption{Noise map of the observed data (top left), noise map generated from the measured noise ACF (top right), and uncorrelated noise map (bottom), all with the same standard deviation $\sigma_{N}$. \label{fig:fig10}}
\end{figure}
In the literature, emission-free regions have been extracted and used as realistic noise maps to estimate the statistical uncertainty associated with parameters derived by model fitting\cite{Boizelle2019-wg}. The field of view of an interferometric observation is generally small, preventing us from obtaining a sufficient number of independent noise maps or cubes to conduct Monte Carlo experiments.\footnote{As $T_\mathrm{sys}$ is not strongly variable across a spectral window, we may use the line-free channels, with caution regarding channels affected by atmospheric emission lines and the (usually negligible) variation of the spatial frequencies ($u, v$) with spectral frequency.} The proposed method can generate random noise repeatedly from the measured statistical properties of the noise (the noise ACF) in the interferometric image, with the best precision allowed by the available area of the emission-free region.
There are many potential applications of generating noise. We illustrate the significance of the correlated noise with one example: the image analysis in Ref.~\citenum{Tsukui2021-sg}, which expands the image in a series of logarithmic spirals and identifies the dominant spiral structure in the image. In Fig.~\ref{fig:fig11}, we show the Fourier spectrum of the logarithmic spirals in the [C\textsc{ii}] intensity map of BRI 1335-0417 shown in Fig.~\ref{fig:fig5} (black solid line), and the underlying noise spectra estimated from noise maps simulated from the noise ACF (blue shaded region) and from noise maps with no spatial correlation but the same standard deviation $\sigma_{\mathrm{N}}$ (orange shaded region). The spectrum shows the amplitude of the logarithmic spiral with $m$ arms ($m$-fold symmetry) as a function of a variable $p$, which is related to the pitch angle by $\alpha=\arctan{(-m/p)}$. To calculate the spectrum, the image (Fig.~\ref{fig:fig5}) was first deprojected to a face-on view with an inclination angle of 37.8$^{\circ}$ and a position angle of 4.5$^{\circ}$, where the package \textsc{scikit-image} performed the rotation and stretching for the deprojection (which induces an additional noise correlation in the image). Then the amplitude of each Fourier component of the logarithmic spiral $m$ and $\alpha$ was calculated (see Equation~S2 in Ref.~\citenum{Tsukui2021-sg}). The noise spectrum was measured by generating 300 noise maps, calculating the amplitude of each Fourier component in the same way as for the image, and taking the 84th percentile. The noise spectrum computed by assuming an uncorrelated Gaussian noise map significantly underestimates the true noise spectrum, which could lead to the false detection of statistically insignificant structures such as the second peak in $m=2$ and the multiple peaks in $m=3, 4$. The estimated noise spectra are higher than those reported in the original paper because the image stretching and rotation were not applied to the noise maps in the original paper, resulting in an underestimation of the noise level. However, the results of the original article do not change, except that the updated statistical significances of the peaks in $m=1,2,3,4$ are 2.1$\sigma$, 3.5$\sigma$, 1.7$\sigma$, and 1.6$\sigma$, respectively. The $m=1$ peak with 2.1$\sigma$ corresponds to the fact that the northern arm is longer than the southern arm.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig10.pdf}
\caption{Fourier spectra of the logarithmic spiral models (solid black lines) and the underlying 1$\sigma$ noise in the images computed from the noise maps simulated from the noise ACF (blue shaded region) and from uncorrelated Gaussian noise maps with the same standard deviation (orange shaded region). The peak in $m=2$ indicates that the dominant component of the image is a two-armed spiral structure with a pitch angle of $\alpha=26.7^{\circ}$. The figure is adapted from Ref.~\citenum{Tsukui2021-sg}, and the underlying noise spectrum is recalculated using the noise maps simulated from the noise ACFs. \label{fig:fig11}}
\end{figure}
\subsection{Fitting a model to an interferometric image under spatially correlated noise}\label{sec:fitting}
Fitting an analytic model of the intensity distribution to an observed 2D image is one of the most fundamental tasks in astronomy, allowing us to efficiently extract information characterizing astronomical sources from images. As a simple example, fitting a 2D Gaussian to an intensity image provides essential information on the source's position, size, and brightness. This section introduces the construction of a covariance matrix from the measured noise ACF, which is required to calculate $\chi^2$ and thus to properly estimate the statistical uncertainty of the parameters derived by fitting a model to an observed interferometric image.
Consider fitting a 2D model $I_{i,j}(\theta)$ to an observed image $I'_{i,j}$ with $M \times M$ pixels, where $\theta$ denotes a set of model parameters. We denote the flattened model image and the observed image by $I_{m}(\theta)$ and $I'_{m}$, respectively, with the flat indices $m=(1,2,3,...,M^2)$ corresponding to the coordinate indices $(i,j)=((1,1),...,(M, 1),(1,2),...(M,2),...,(M,M))$ in the images. The formal expression for $\chi^2$ is
\begin{equation}\label{eq:cochi}
\chi^2=\mathbf{r}(\theta)^{\mathrm{T}}C^{-1}\mathbf{r}(\theta),
\end{equation}
where $C^{-1}$ is the inverse of the covariance matrix $C$ with the size of $M^2 \times M^2$, and $\mathbf{r}(\theta)$ is the residual vector with the elements $r_{m}(\theta)=I'_m-I_m(\theta)$.
When noise has no correlation between pixels, the covariance matrix $C=\sigma_N^2E$ ($E$ is the identity matrix) and $\chi^2$ leads to
\begin{equation}\label{eq:nocochi}
\chi^2=\sum_{m=1}^{M^2}(I'_m-I_m(\theta))^2/\sigma_N^2
\end{equation}
In the literature, when fitting 2D models to observed interferometric images, the noise correlation has been ignored and the statistical uncertainties of the derived parameters have been estimated using Eq.~\ref{eq:nocochi} instead of Eq.~\ref{eq:cochi}; therefore, the reported statistical uncertainties are, in most cases, significantly underestimated.
We can construct the covariance matrix $C$ in Eq.~\ref{eq:cochi} from the measured noise ACF as
\begin{equation}\label{eq:covarianceacf}
C_{m,m'}=\begin{cases}
\xi(\mathbf{x}_{0,0})=\sigma_N^2, & \mathrm{if\ } m = m'\\
\xi(\mathbf{x}_{i,j}-\mathbf{x}_{i',j'}), & \mathrm{if\ } m \neq m'
\end{cases}
\end{equation}
Here, recall that $m$ (and $m'$) are flattened indices with a one-to-one relationship to the pixel coordinates, $m\rightarrow(i,j)$. As $m$ and $m'$ are exchangeable, the covariance matrix $C$ is a real-valued symmetric matrix, whose inverse can be efficiently calculated using the \textsc{linalg.pinvh} function in the Scipy package.
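Putting the pieces together, a minimal sketch (hypothetical helper names, not the ESSENCE API) for evaluating Eq.~\ref{eq:cochi} with the covariance of Eq.~\ref{eq:covarianceacf} reads:
\begin{verbatim}
# Minimal sketch: chi-squared of a 2D model under correlated noise,
# with the covariance built from the noise ACF as described above.
import numpy as np
from scipy.linalg import pinvh

def chi2_correlated(obs, model, xi):
    m = obs.shape[0]                 # assumes square m x m images
    cy, cx = np.array(xi.shape) // 2
    idx = np.indices((m, m)).reshape(2, -1)
    dy = idx[0][:, None] - idx[0][None, :] + cy
    dx = idx[1][:, None] - idx[1][None, :] + cx
    C_inv = pinvh(xi[dy, dx])        # pseudo-inverse of the symmetric C
    r = (obs - model).ravel()        # flattened residual vector
    return r @ C_inv @ r
\end{verbatim}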
Now, the question becomes: what effect does the noise correlation have on the estimated statistical uncertainties of the model parameters derived from the fitting? To explore this, we fit a 2D Gaussian model to a noiseless 2D Gaussian image and sampled the posterior distribution of the model parameters: the total flux of the Gaussian distribution $L$, the center of the Gaussian ($x, y$), the major and minor axes of the Gaussian ($\sigma_{\mathrm{maj}}, \sigma_{\mathrm{min}}$), and the position angle (P.A.) from north to the major axis (counterclockwise). The effect of the noise perturbing the best-fitting parameters is taken into account by the covariance matrix $C$ in the likelihood (Eq.~\ref{eq:cochi}). We compared the posterior distributions in two cases, where (1) the noise has no spatial correlation (using Eq.~\ref{eq:nocochi}), and (2) the noise has a spatial correlation characterized by the noise ACF shown in Fig.~\ref{fig:fig2} (using Eq.~\ref{eq:cochi} and the covariance matrix constructed from the noise ACF). Note that in both cases the noise variance in individual pixels is the same, but the latter case has spatial correlation. We used \textsc{emcee}\cite{Foreman-Mackey2013-yn} to sample the posterior distributions with log-likelihood $\ln \mathcal{L} = -0.5\chi^2$ (up to an additive constant) and a uniform prior. Figure~\ref{fig:figex1} shows the posterior distributions of the parameters of the Gaussian model in the two cases. Accounting for the spatial correlation of the noise results in a significantly larger confidence interval than that obtained assuming no spatial correlation. The model parameters would appear to be constrained too well if we ignored the noise correlation in the fitting process.
Figure~\ref{fig:figex2} compares the posterior distribution sampled with the $\chi^2$ covariance matrix (the same as in Fig.~\ref{fig:figex1}) with the distribution estimated by Monte Carlo resampling. The resampling is performed by repeatedly obtaining best-fitting parameter values after adding randomly generated correlated noise to the noiseless image (the method described in Sec.~\ref{sec:simulatenoise}). This confirms that the posterior distributions estimated by the two approaches agree well.
In addition to the 2D fitting problem of investigating emission distributions, many recent studies fit 3D model cubes to observed 3D data cubes (R.A., Dec., velocity), thanks to improvements in interferometric imaging in spatial resolution and sensitivity. Radio interferometers have sufficiently high frequency resolution ($0.01$ km s$^{-1}$ or $R=3\times10^7$ at 110 GHz for ALMA), which is usually further binned to increase the signal-to-noise ratio. Therefore, the noise can be regarded as independent across frequency (or velocity) channels, and the method described above for the 2D case can be applied to compute $\chi^2$ in the 3D case. One of the main examples of 3D cube fitting is the kinematic modelling of emission lines.
There are several publicly available modelling codes for this purpose, which assume a disk geometry for the emission line, parametrize the rotation curve, the velocity dispersion, and the 3D geometry of the disk (position angle, inclination), and produce 3D model cubes that can be directly compared to observations, taking into account the resolution and pixel binning (e.g., \textsc{TiRiFiC}\cite{TiRiFiC07}, \textsc{KinMS}\cite{Davis2013-wm}, \textsc{3D-Barolo}\cite{Teodoro15}, \textsc{GalPaK 3D}\cite{GalPaK3D}, and \textsc{Qubefit}\cite{mneeleman_2021_4534407}). Under dynamical equilibrium, rotation curves are powerful tracers of the mass distribution of galaxies, which can be further decomposed into black hole mass, stellar bulge, exponential disk, etc., depending on the data quality. These codes have been widely used to measure galaxy mass distributions and black hole masses from distant to nearby galaxies (e.g., Refs.~\citenum{Davis2013-wm, Tsukui2021-sg}). However, these codes do not consider the correlation of the noise; as the number of data points becomes large, the effect of the noise correlation becomes significant, and the estimated statistical uncertainties of the best-fitting parameters become too small.
Davis et al.\cite{Davis2017-lu} estimate a covariance matrix from the point spread function and use it to calculate the $\chi^2$ values. They approximate the point spread function by a single Gaussian, ignoring the sidelobes, which is an oversimplification, as seen in Fig.~\ref{fig:fig8}: the sidelobe effect can dominate as the image region becomes larger, so the statistical uncertainties of the parameters can be underestimated. In addition, as seen in Sec.~\ref{subsec:origin}, the point spread function does not coincide with the actual noise correlation pattern in the image. Therefore, we recommend constructing the covariance matrix from the noise ACF using Eq.~\ref{eq:covarianceacf}, which fully characterizes the actual noise correlation pattern, rather than from the point spread function.
Boizelle et al.\cite{Boizelle2019-wg} proposed to estimate the formal $\chi^2$ by block-averaging the data and model cubes into roughly beam-sized cells, avoiding the need to calculate the covariance matrix mentioned above. They stated that the block-averaging method does not fully mitigate the correlation between neighbouring pixels in the presence of the long-range correlation shown in Fig.~\ref{fig:fig8}; they therefore estimated the final statistical uncertainty of the model parameters by Monte Carlo realizations using line-free channels.
The example calculation for the 2D fitting confirms that the two approaches, Monte Carlo resampling (in 2D or 3D)\cite{Boizelle2019-wg} and the use of the covariance matrix during fitting, are equivalent and should provide the correct estimate under spatially correlated noise.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{Fig11.pdf}
\caption{The posterior distributions obtained when fitting the Gaussian model to the image under different assumptions about the noise. Black: the posterior distribution sampled using the covariance matrix constructed from the noise ACF shown in Fig.~\ref{fig:fig2}. Blue: the posterior distribution sampled assuming no spatial correlation in the noise, but with the same $\sigma_{N}$. The black distribution corresponds to the posterior we obtain if we properly take the noise correlation into account by measuring the noise ACF, while the blue one shows what we obtain if we ignore the noise correlation and naively use Eq.~\ref{eq:nocochi} with the sky noise level $\sigma_N$. True values are shown with solid cyan lines. The three images on the top right show (a) the noiseless image to be fitted, (b) an example image with correlated noise added, and (c) an example image with uncorrelated noise added. The comparison of (b) and (c) illustrates that image data with uncorrelated noise retain more information on the intrinsic Gaussian distribution than an image with correlated noise of the same $\sigma_{N}$. The white contours show emission levels of $1\sigma_N$, $3\sigma_N$, and $7\sigma_N$ in both images.}
\label{fig:figex1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{Fig12.pdf}
\caption{Black: same as in Fig.~\ref{fig:figex1}. Green: the posterior distribution estimated by Monte Carlo resampling, i.e., by repeatedly fitting the Gaussian model to the image with noise maps randomly generated from the noise ACF added (the method described in Sec.~\ref{sec:simulatenoise}); the black and green distributions agree. The two images on the top right, (a) and (b), are the same as (a) and (b) in Fig.~\ref{fig:figex1}.}
\label{fig:figex2}
\end{figure}
\section{CONCLUSIONS}
Understanding the spatial correlation of the noise in interferometric images is important to correctly evaluate the statistical significance of results. We have shown that the noise autocorrelation function (ACF) of an ALMA noise image has a pattern similar to that of the synthesized beam (dirty beam) and that the spatial correlation of the noise originates from the limited uv coverage. To correctly evaluate the statistical uncertainty of measured quantities, we propose first measuring the noise ACF in the interferometric image, which provides the best estimate of the full statistical (correlation) properties of the noise using all available emission-free regions. Once the noise ACF is measured, we can directly (1) evaluate the statistical uncertainty associated with a spatially integrated flux or spectrum, (2) randomly generate noise maps with the same correlation properties, and (3) construct the covariance matrix and determine a $\chi^2$ value when fitting a 2D model to an image. The method to deal with spatially correlated noise in interferometric images has not been documented in the astronomical literature, even for basic spatially integrated flux measurements. We demonstrated example applications of our methods to scientific data, showing that ignoring the noise correlation leads to a significant underestimation of the statistical uncertainty of the results and to false detections. A Python package for easy application of the methods described in this paper, Evaluating Statistical Significance undEr Noise CorrElation (ESSENCE), is publicly available at \url{https://github.com/takafumi291/ESSENCE}.
Drizzling has become a common technique for producing optical-IR observational images. The drizzling method resamples raw under-sampled images, corrects for geometric distortion by shifting, rotating, and interpolating, and coadds the corrected images onto a common Cartesian grid. This process induces significant pixel correlations (e.g., Refs.~\citenum{Labb2003AJ....125.1107L, Sharp2015MNRAS.446.1551S}).
Measuring the noise ACF does not require any assumption about the probability distribution of the noise (e.g., Gaussianity). Therefore, our method has potential applications to a range of astronomical images, not only from interferometers but also from optical-IR observations.
\section{Introduction}
We investigate graphs and families of graphs
which asymptotically behave like strongly regular graphs (SRGs).
In particular, we generalize existence conditions.
Our interest stems from the fact that for some extremal
problems such as the cap set problem or optimally pseudorandom
clique-free graphs (see \S\ref{sec:app}) it is natural
to look for constructions which behave
very similarly to strongly regular graphs.
{\medskip \footnotesize
All graphs in this document are finite and simple.
Let us repeat some basic facts about strongly
regular graphs and bounds on their parameters:
Our notation for strongly regular graphs is standard,
cf. \cite{BCN,BvM}.
A strongly regular graph $\Gamma$ with parameters $(v, k, \lambda, \mu)$
is a $k$-regular graph (not complete, not edgeless) of order $v$
such that two distinct adjacent vertices
have precisely $\lambda$ common neighbors, while two distinct nonadjacent
vertices have precisely $\mu$ common neighbors.
One of the parameters depends on the others: For a fixed vertex $a$,
counting the pairs $(b, c)$ with $a \sim b \sim c \not\sim a$ in two ways
shows that $(v-k-1)\mu = k(k-\lambda-1)$.
Call an eigenvalue of the adjacency matrix of a
regular graph {\it restricted} if
it has an eigenvector orthogonal to the all-ones vector.
Then, alternatively, a strongly regular graph can be defined as a $k$-regular
graph whose adjacency matrix $A$ has exactly two
restricted eigenvalues $r \geq 0$ and $s < 0$.
Denote the multiplicity of $r$ by $f$ and
the multiplicity of $s$ by $g$.
We have the identities
\begin{align*}
& \lambda - \mu = r+s, && k-\mu = -rs.
\end{align*}
Explicit formulas for $f$ and $g$ can be found using $1+f+g = v$
and $k+fr+gs = \mathrm{tr}(A) = 0$.
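For instance (a standard computation, recorded here for convenience),
\begin{align*}
& f = \frac{-k-(v-1)s}{r-s}, && g = \frac{k+(v-1)r}{r-s}.
\end{align*}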
\par\medskip}
The Krein bound\footnote{Named somewhat indirectly after
Mark Grigorievich Krein, cf. \cite[p. 26]{BvM}.} and
the absolute bound provide asymptotic conditions on
the parameters $(v, k, \lambda, \mu)$ of a strongly regular graph.
{\smallskip\footnotesize
As a toy example for this introduction,
we consider the parameter set
$v = (1+o(1))\lambda^{11}$, $k = (1+o(1))\lambda^{10}$, and $\mu = (1+o(1)) \lambda^9$
(as $\lambda \rightarrow \infty$).
See \S\ref{sec:bigO} for a discussion of big-$O$ (and similar) notation.
\par}
\begin{Theorem}[Krein Bound for SRGs, {\cite[p. 26]{BvM}}]\label{thm:krein_srg}
The eigenvalues $k \geq r \geq 0 > s$ of a strongly regular graph satisfy
\begin{align*}
&1 + \frac{s^3}{k^2} - \frac{(s+1)^3}{(v-k-1)^2} \geq 0, &&1 + \frac{r^3}{k^2} - \frac{(r+1)^3}{(v-k-1)^2} \geq 0.
\end{align*}
\end{Theorem}
{\footnotesize
In the toy example above, $s = (-1+o(1)) \lambda^9$, so Theorem \ref{thm:krein_srg}
implies $1 + (-1+o(1)) \lambda^7 + (1+o(1)) \lambda^5 \geq 0$, which is impossible.\par}
\smallskip
The absolute bound for strongly regular graphs is a corollary of the
well-known result by Delsarte, Goethals, and Seidel that
a family of $n$ unit vectors in ${\mathbb R}^d$ with at most three distinct inner products
satisfies $n \leq \frac12 d(d+3)$, see Theorem 4.8 and Theorem 4.11 in \cite{DGS1977}.
\begin{Theorem}[Absolute Bound for SRGs, {\cite[Prop. 1.3.14]{BvM}}]\label{thm:absolute_srg}
The multiplicities $f,g$ of a primitive strongly regular graph
satisfy $v \leq \frac12 f(f+3)$ and $v \leq \frac12 g(g+3)$.
\end{Theorem}
{\footnotesize In the toy example above, $g = (1+o(1)) \lambda^3$, so Theorem \ref{thm:absolute_srg}
implies $(1+o(1)) \lambda^{11} \leq \frac12 \lambda^6$ which is impossible.
\par}
\smallskip
In the first part of this document,
we generalize Theorem \ref{thm:krein_srg}.
\begin{Proposition}[Krein Bound, Variant for Regular Graphs] \label{prop:krein_regular}
Let $\Gamma$ be a $k$-regular graph of order $v$ with adjacency matrix $A$.
Let $r$ denote the second largest and $s$ the smallest eigenvalue of $A$. Then
\begin{align*}
(s+r^2)v + 2(k-r)(r-s) \geq 0, && (r+s^2)v + 2(k-s)(s-r) \geq 0.
\end{align*}
\end{Proposition}
There is a poor man's version of the absolute bound which only shows $v \leq f^2$
and $v \leq g^2$. We give a variant of this poor man's result.
\begin{Proposition}[Absolute Bound, Variant]\label{prop:abs_reg}
Consider a $k$-regular graph of order $v$ with adjacency matrix $A$.
Let $r,s$ be real numbers with $k > r \geq 0 > s$.
Suppose that $A$ has at least $f_1$ and
at most $f_2$ restricted eigenvalues in $[r, k]$,
all eigenvalues of $A$ are at least $s-\varepsilon$ for some $\varepsilon > 0$,
and at least $v-f_2$ eigenvalues of $A$ are in $[s-\varepsilon,s]$.
If $s^2+s > \varepsilon + \varepsilon^2$,
then $v \leq f_2(f_2+1)$; if moreover $2(k-s) \leq v$, then $v \leq f_2(f_2+1)-f_1$.
\end{Proposition}
In the second part of this document, we consider what
we call approximately strongly regular graphs.
For two adjacent vertices $a$ and $b$ of a graph $\Gamma$, let $\lambda_{ab}$ denote
the number of common neighbors of $a$ and $b$ in $\Gamma$.
Similarly, for two distinct nonadjacent vertices $a$ and $b$ of a graph $\Gamma$,
let $\mu_{ab}$ denote the number of common neighbors of $a$ and $b$ in $\Gamma$.
Let $\Lambda$ (respectively, $M$) denote the set of all pairs of adjacent
(respectively, distinct nonadjacent) vertices in $\Gamma$.
We call a $k$-regular graph (not complete, not edgeless)
$\Gamma$ of order $v$ an {\it approximately strongly regular graph}
with parameters $(v, k, \lambda, \mu; \sigma)$, where $\sigma \geq 0$,
if ${\mathbb E}(\lambda_{ab}) \coloneq \frac{1}{|\Lambda|} \sum_{(a, b) \in \Lambda} \lambda_{ab} = \lambda$
and $\mathrm{Var}(\lambda_{ab}) \coloneq \frac{1}{|\Lambda|} \sum_{(a, b) \in \Lambda} (\lambda_{ab}-\lambda)^2 \leq \sigma^2$,
and ${\mathbb E}(\mu_{ab}) \coloneq \frac{1}{|M|} \sum_{(a,b) \in M} \mu_{ab} = \mu$
and $\mathrm{Var}(\mu_{ab}) \coloneq \frac{1}{|M|} \sum_{(a,b) \in M} (\mu_{ab}-\mu)^2 \leq \sigma^2$.
Strongly regular graphs are precisely the approximately strongly
regular graphs with $\sigma=0$.
The complement of an approximately strongly regular graph
with parameters $(v, k, \lambda, \mu; \sigma)$ is an approximately
strongly regular graph with parameters $(v, v-k-1, v-2k+\mu-2, v-2k+\lambda; \sigma)$.
Counting triples $(a, b, c)$ with $a \sim b \sim c \not\sim a$
shows $\sum_{(a,c) \in M} \mu_{ac} = \sum_{(a,b) \in \Lambda} (k-\lambda_{ab}-1)$.
Hence, $(v-k-1)\mu = k(k-\lambda-1)$ also holds for
approximately strongly regular graphs.
{\smallskip\footnotesize
In our toy example with $v = (1+o(1))\lambda^{11}$, $k = (1+o(1))\lambda^{10}$, and $\mu = (1+o(1)) \lambda^9$,
we will see that
Proposition \ref{prop:krein_bnd_for_asrg} rules out the existence of approximately strongly regular graphs
with $\sigma = o(\lambda^{2.5})$.\par}
\medskip
In the third part of this document, we apply our results to the cap set problem
and to optimally pseudorandom clique-free graphs.
{\smallskip \footnotesize
For instance, if there exists a cap of size $(1+o(1))q^9$ in
the projective space $\mathrm{PG}(10, q)$, then
a standard construction yields an approximately strongly regular graph with
the parameters of our toy example (where $\lambda = q-2$).\par}
\section{Proofs of Bounds}
{\smallskip\footnotesize
Denote the all-ones vector by $j$, the all-ones matrix by $J$,
and the identity matrix by $I$.\par}
\subsection{Krein Bounds}
{\smallskip\footnotesize
The following proof is based on Remark (i) on page 50 in \cite{BCN}. \par}
\begin{proof}[Proof of Proposition \ref{prop:krein_regular}]
Consider the matrices $E_1$ and $E_2$ defined by
\begin{align*}
& E_1 = \frac{1}{r-s} \left( A - sI - \tfrac{k-s}{v} J\right),
&& E_2 = \frac{1}{s-r} \left( A - rI - \tfrac{k-r}{v} J\right).
\end{align*}
The spectrum of $E_2$ is contained in $[0, 1]$: the restricted eigenvalues of $A$ lie in $[s, r]$,
so the eigenvalues of $(s-r)E_2 = A - rI - \tfrac{k-r}{v}J$ lie in $[s-r, 0]$.
Write $E_2 \circ E_2$ as a linear combination of the matrices $A, I, J$.
Then the coefficients of $I$ and $A$ are
\begin{align*}
& \frac{1}{(s-r)^2} \left( r^2 + 2r \tfrac{k-r}{v} \right) \text{ for } I,
&& \frac{1}{(s-r)^2} \left( 1 - 2\tfrac{k-r}{v} \right) && \text{ for } A.
\end{align*}
Now we write $E_2 \circ E_2$ as a linear combination of the matrices $E_1, E_2, J$,
that is we replace $A$ and $I$ by $E_1$ and $E_2$. We obtain the coefficients
\begin{align*}
&\frac{r + r^2}{(s-r)^2} \text{ for $E_1$, }
&&
t \coloneq \frac{(s + r^2) v + 2(k-r)(r-s)}{v (s-r)^2} \text{ for $E_2$}.
\end{align*}
Let $\chi$ be an eigenvector of $A$ with $A\chi = s\chi$.
Then $E_1 \chi = 0$. Hence, $(E_2 \circ E_2) \chi = t E_2 \chi = t \chi$.
Hence, $\chi$ is an eigenvector of $E_2 \circ E_2$ with eigenvalue $t$.
Since $E_2$ is positive semidefinite, so is $E_2 \circ E_2$ by the Schur product theorem,
and therefore $t \geq 0$, which is the first inequality.
For the second inequality, consider $E_1 \circ E_1$ instead of $E_2 \circ E_2$
and an eigenvector of $A$ with eigenvalue $r$.
\end{proof}
\subsection{Absolute Bounds}
{\smallskip\footnotesize The following is based on the proof of Theorem 2.3.3 in \cite{BCN}.
We use eigenvalue interlacing (cf. \cite{Haemers1995}).
More precisely, if $A$ is a real symmetric matrix of order $v$ with eigenvalues
$u_1 \geq \ldots \geq u_v$ and $B$ is a principal submatrix of $A$
of order $w$ with eigenvalues $\nu_1 \geq \ldots \geq \nu_w$,
then $u_i \geq \nu_i \geq u_{i-w+v}$. \par}
\begin{proof}[Proof of Proposition \ref{prop:abs_reg}]
Consider $M \coloneq A - sI - \frac{k-s}{v} J$ and put $c \coloneq \frac{k-s}{v}$;
note $0 < c \leq 1$, since $s \geq k-v$ (consider the complement of the graph).
Then $M$ has at least $v - f_2 + 1$ eigenvalues in $[-\varepsilon, 0]$
(including the eigenvalue $0$ for $j$), and all its other eigenvalues are positive,
between $f_1$ and $f_2$ of them being at least $r-s$.
A product of two eigenvalues of $M$ exceeds $\varepsilon^2$ only if both factors are positive,
so $M \otimes M$ has at most $f_2^2$ eigenvalues greater than $\varepsilon^2$.
Furthermore,
\[
M \circ M = (1 - 2c) A +
\left( s^2 + 2sc \right) I + c^2 J.
\]
Hence, $M \circ M$ has the eigenvalue
$(1-2c)k + s^2 + 2sc + c^2 v = s^2 + s + (k-s)(1-c) \geq s^2+s$ with eigenvector $j$,
and each restricted eigenvalue $u$ of $A$ contributes the eigenvalue
$(1-2c)u + s^2 + 2sc = s^2 + s + (u-s)(1-2c)$ of $M \circ M$.
As $|1-2c| \leq 1$, the at least $v-f_2$ eigenvalues $u \in [s-\varepsilon, s]$
contribute eigenvalues at least $s^2+s-\varepsilon$, and if $2(k-s) \leq v$,
that is $1-2c \geq 0$, the at least $f_1$ eigenvalues $u \in [r, k]$
contribute eigenvalues at least $s^2+s$ as well.
The matrix $M \circ M$ is a principal submatrix of $M \otimes M$,
so the eigenvalues of $M \circ M$ interlace those of $M \otimes M$;
since $s^2+s-\varepsilon > \varepsilon^2$ by assumption, at most $f_2^2$ of them exceed $\varepsilon^2$.
We obtain $v + 1 - f_2 \leq f_2^2$, and $v - f_2 + f_1 \leq f_2^2$ if $2(k-s) \leq v$.
\end{proof}
\section{Approximately Strongly Regular Graphs}
Consider the adjacency matrix $A$ of an approximately strongly regular graph $\Gamma$
with parameters $(v, k, \lambda, \mu; \sigma)$.
We can write $A^2 = kI + \lambda A + \mu (J - I - A) + E$, where
$(E)_{ab} = \lambda_{ab} - \lambda$ when $a,b$ are adjacent,
$(E)_{ab} = \mu_{ab} - \mu$ when $a,b$ are distinct and nonadjacent,
and $(E)_{ab} = 0$ when $a=b$.
Let $\chi$ be an eigenvector of $A$ orthogonal
to the all-ones vector $j$ with eigenvalue $u$.
Then $u^2 \chi = A^2 \chi = (k-\mu) \chi + (\lambda-\mu) u \chi + E \chi$.
Hence, $E\chi = \left( u^2 - (\lambda-\mu)u - (k-\mu) \right)\chi$; that is, $\chi$ is an eigenvector of $E$ with some eigenvalue $\nu$.
Solving the quadratic $u^2 - (\lambda-\mu)u - (k-\mu+\nu) = 0$ for $u$, we find that
\[
u = \frac12 \left( (\lambda-\mu) \pm \sqrt{ (\lambda-\mu)^2 + 4(k-\mu+\nu) } \right).
\]
We say that $u$ has {\it positive form} if
\[
u = \frac12 \left( (\lambda-\mu) + \sqrt{ (\lambda-\mu)^2 + 4(k-\mu+\nu) } \right),
\]
and that $u$ has {\it negative form} if
\[
u = \frac12 \left( (\lambda-\mu) - \sqrt{ (\lambda-\mu)^2 + 4(k-\mu+\nu) } \right).
\]
Let $u_1, u_2, \ldots, u_v$ denote the eigenvalues of $A$.
For $u_i$ an eigenvalue of $A$, let $\nu_i$ denote the corresponding eigenvalue of $E$.
\bigskip
The next result shows that if $\sigma$ and $v$ are sufficiently small,
then there are few large $\nu_i$.
\begin{Lemma}\label{lem:evs_E}
The eigenvalues $\nu_1, \ldots, \nu_v$ of $E$ satisfy
$\sum \nu_i^2 \leq v(v-1) \sigma^2$.
\end{Lemma}
\begin{proof}
We have
\[
\sum \nu_i^2 = \mathrm{tr}(E^2) = \sum_{a \sim b} (\lambda_{ab}-\lambda)^2
+ \sum_{a \not\sim b} (\mu_{ab}-\mu)^2 \leq v(v-1) \sigma^2. \qedhere
\]
\end{proof}
\subsection{Big-\texorpdfstring{$O$}{O} Notation} \label{sec:bigO}
We use the symbols $O, \Omega, \Theta, o, \omega$ in the following way:
\begin{align*}
& f(x) = O(g(x)) \text{ (as $x \rightarrow a$)} && \text{ if and only if }
&&\limsup_{x \rightarrow a} \tfrac{|f(x)|}{g(x)} < \infty,\\
& f(x) = \Omega(g(x)) \text{ (as $x \rightarrow a$)} && \text{ if and only if }
&&\limsup_{x \rightarrow a} \tfrac{|f(x)|}{g(x)} > 0,\\
& f(x) = \Theta(g(x)) \text{ (as $x \rightarrow a$)} && \text{ if and only if }
&&0 < \limsup_{x \rightarrow a} \tfrac{|f(x)|}{g(x)} < \infty,\\
& f(x) = o(g(x)) \text{ (as $x \rightarrow a$)} && \text{ if and only if }
&&\lim_{x \rightarrow a} \tfrac{|f(x)|}{g(x)} = 0,\\
& f(x) = \omega(g(x)) \text{ (as $x \rightarrow a$)} && \text{ if and only if }
&&\lim_{x \rightarrow a} \tfrac{|f(x)|}{g(x)} = \infty.
\end{align*}
{\footnotesize
Usually, we have $a = \infty$.
If there are several variables involved, then
we specify the relevant one.
We also use the big-$O$ notation in minor order terms.
For instance, we can write $x^2+x+100 = x^2 + O(x)$ (as $x \rightarrow \infty$)
as $x+100 = O(x)$.
For us the sign of $f(x)$ is often important, so for convenience,
we aim to use big-$O$ notation with $f(x) > 0$.
For instance, we write $-s = O(\mu)$ even though $s = O(\mu)$ is equally correct.
\smallskip
If we talk about a $k$-regular graph $\Gamma$ of order $v$
with $k = O(g(v))$ for some function $g$, then we mean that we consider
an infinite family of graphs $(\Gamma_n)$, where $\Gamma_n$
is of order $v_n$ and $k_n$-regular with $k_n = O(g(v_n))$ as $n \rightarrow \infty$.
In particular, if we say that $\Gamma$ is an approximately strongly regular graph
or a family of approximately strongly regular graphs
with parameters $(v, k, \lambda, \mu; \sigma)$ and
$k = o(|\mu-\lambda|^{\frac32})$, then there is an
infinite family of approximately strongly regular graphs $(\Gamma_n)$
with parameters $(v_n, k_n, \lambda_n, \mu_n; \sigma_n)$
such that $k_n = o(|\mu_n-\lambda_n|^{\frac32})$ as $n \rightarrow \infty$.
We also use big-$O$ notation for the eigenvalues $u_i$ and $\nu_i$:
Assume that $|\nu_1| \geq |\nu_2| \geq \cdots \geq |\nu_v|$.
If we write $\nu_i = O(g(n))$, then there is
a function $h(n)$ such that $\nu_{h(n)} = O(g(n))$.
For instance, we might assume that $\nu_i = O(\mu_n)$
and show some property (P) for $\nu_i$.
By Lemma \ref{lem:evs_E}, $\sum \nu_i^2 \leq v^2 \sigma^2$,
so the number of $\nu_i$ with $\nu_i = \omega(\mu)$
is $o(\frac{v^2\sigma^2}{\mu^2})$.
Thus, (P) holds for $\nu_{h(n)}$
with $h(n) = \omega(\frac{v_n^2\sigma_n^2}{\mu_n^2})$.
\par}
\subsection{Asymptotic Bounds on Eigenvalues}
\begin{Lemma}\label{lem:approx}
Let $f: {\mathbb R} \rightarrow {\mathbb R}$ with $f(y) = o(y^2)$ (as $y \rightarrow \infty$).
Then
\begin{align*}
\sqrt{y^2 + f(y)} - y = (\tfrac12 + o(1)) \tfrac{f(y)}{y}.
\end{align*}
\end{Lemma}
\begin{proof}
Write $\sqrt{y^2 + f(y)} - y = y (\sqrt{1+\frac{f(y)}{y^2}} - 1)$.
The Taylor expansion of $\sqrt{\cdot}$ at $1$ shows that (as $x \rightarrow 0$)
\[
\sqrt{1+x} = 1 + \tfrac12 x + O(x^2). \qedhere
\]
\end{proof}
The following gives us approximate versions
of the equations $\lambda - \mu = r+s$ and $k-\mu = -rs$ for strongly regular graphs.
\begin{Lemma}\label{lem:asrg_ev2}
For a family of approximately strongly regular graphs
with parameters $(v, k, \lambda, \mu; \sigma)$,
consider an eigenvalue $u_i$ with associated eigenvalue $\nu_i$.
\smallskip
\noindent
If $\mu > \lambda$, $k = o(|\lambda - \mu|^2)$,
and $|\nu_i| = o(|\lambda - \mu|^2)$, then the following holds:
\begin{enumerate}[(i)]
\item If $u_i$ has positive form,
then $u_i = (1+o(1)) \frac{k-\mu+\nu_i}{\mu-\lambda}$.
\item If $u_i$ has negative form,
then $u_i = -(1+o(1)) (\mu-\lambda)$.
\end{enumerate}
If $\mu < \lambda$, $k = o(|\lambda - \mu|^2)$,
and $|\nu_i| = o(|\lambda - \mu|^2)$, then the following holds:
\begin{enumerate}[(i)]
\item[(iii)] If $u_i$ has positive form,
then $u_i = (1+o(1)) (\lambda - \mu)$.
\item[(iv)] If $u_i$ has negative form,
then $u_i = -(1+o(1)) \frac{k-\mu+\nu_i}{\lambda-\mu}$.
\end{enumerate}
If $k = \Omega(|\lambda-\mu|^2)$ and $|\nu_i| = O(k)$, then the following holds:
\begin{enumerate}[(i)]
\item[(v)] If $u_i$ has positive form,
then $u_i = \Theta(\sqrt{k})$.
\end{enumerate}
If $|\nu_i| = \Omega(|\lambda-\mu|^2)$ and $|\nu_i| = \Omega(k)$,
then the following holds:
\begin{enumerate}[(i)]
\item[(vi)] If $u_i$ has positive form,
then $u_i = \Theta(\sqrt{|\nu_i|})$.
\end{enumerate}
\end{Lemma}
\begin{proof}
We only show (i) and (ii) as the other cases are similar.
Note that $4(k-\mu+\nu_i) = o((\mu-\lambda)^2)$, as $\mu \leq k$.
Using Lemma \ref{lem:approx} with $y=\mu-\lambda$,
we find that if $u_i$ has positive form, then
\begin{align*}
u_i &= \tfrac12 \left( \lambda - \mu + \sqrt{(\mu-\lambda)^2 + 4(k-\mu+\nu_i)} \right)\\
&= \tfrac12 \left( \lambda - \mu + (\mu-\lambda) + (2+o(1)) \tfrac{k-\mu+\nu_i}{\mu-\lambda} \right)
= (1+o(1)) \tfrac{k-\mu+\nu_i}{\mu-\lambda}.
\end{align*}
If $u_i$ has negative form, then
\begin{align*}
u_i &= \tfrac12 \left( \lambda - \mu - \sqrt{(\mu-\lambda)^2 + 4(k-\mu+\nu_i)} \right)\\
&= -(1+o(1)) (\mu-\lambda). \qedhere
\end{align*}
\end{proof}
\subsection{Parameter Restrictions}
The expected number of common neighbors of two distinct
vertices is $(1+o(1)) \frac{k^2}{v}$. Hence, if $k = o(v)$ and $\lambda = o(k)$,
then $(v-k-1)\mu = k(k-\lambda-1)$ yields $\mu = (1+o(1)) \frac{k^2}{v}$.
\begin{Proposition}[Krein Bound, Approximately Strongly Regular Graphs]\label{prop:krein_bnd_for_asrg}
Consider a family of approximately strongly regular graphs with
$\mu > \lambda$, $k=o(v)$, and $k = o(|\mu-\lambda|^{\frac32})$.
Then $\sigma \geq (1+o(1)) \frac{|\mu-\lambda|^{\frac32}}{v}$.
\end{Proposition}
\begin{proof}
Suppose to the contrary that $\sigma \leq (D+o(1)) \frac{(\mu-\lambda)^{\frac32}}{v}$
for some constant $D < 1$.
By Lemma \ref{lem:evs_E}, $|\nu_i| \leq (D+o(1)) (\mu-\lambda)^{\frac32}$
for every eigenvalue $\nu_i$ of $E$.
Let $u_i$ be an eigenvalue of $A$.
By Lemma \ref{lem:asrg_ev2} (i), if $u_i$ has positive form, then
\begin{align*}
u_i &\leq (1+o(1)) \left( \tfrac{k-\mu}{\mu-\lambda} + D \sqrt{\mu-\lambda} \right)
\leq (D +o(1)) \sqrt{\mu - \lambda} \eqcolon r.
\end{align*}
By Lemma \ref{lem:asrg_ev2} (ii), if $u_i$ has negative form, then
\begin{align*}
u_i &= -(1+o(1)) (\mu-\lambda) \eqcolon s.
\end{align*}
Hence, $r^2 = -(D^2+o(1)) s$.
By Proposition \ref{prop:krein_regular},
\begin{align*}
0 &\leq (s+r^2)v + 2(k-r)(r-s) \\
&\leq (1-D^2+o(1)) sv - (2+o(1))sk = (1-D^2+o(1)) s v < 0.
\end{align*}
This is a contradiction, so $\sigma \geq (1+o(1)) \frac{|\mu-\lambda|^{\frac32}}{v}$.
\end{proof}
\begin{Proposition}[Absolute Bound, Approximately Strongly Regular Graphs]\label{prop:absolute_bnd_for_asrg}
Consider a family of approximately strongly regular graphs such that
$\lambda > \mu$ and $\sqrt{v} \cdot k = o((\lambda-\mu)^2)$.
Then $\sigma \geq (\tfrac13+o(1)) \frac{k}{v}$.
\end{Proposition}
\begin{proof}
Our plan for applying Proposition \ref{prop:abs_reg} is as follows;
we use the conclusion $v \leq f_2(f_2+1)$, in which $f_1$ plays no role.
We suppose that $\sigma \leq (D + o(1)) \frac{k}{v}$
for some constant $D<\frac13$.
Put $r = (1+o(1)) (\lambda-\mu)$,
$s = -(1-D+o(1)) \frac{k-\mu}{\lambda-\mu}$, and
$\varepsilon = (2D+o(1)) \frac{k-\mu}{\lambda-\mu}$.
As $D < \frac13$, we have $(1-D)^2 > 4D^2$ and hence $s^2+s > \varepsilon + \varepsilon^2$ for sufficiently large parameters.
\medskip
By Lemma \ref{lem:evs_E}, any $\nu_i$ satisfies
$|\nu_i| \leq v \sigma \leq (D+o(1)) k$.
If an eigenvalue $u_i$ has positive form, then,
by Lemma \ref{lem:asrg_ev2} (iii),
\begin{align*}
u_i = (1+o(1))(\lambda-\mu) = r.
\end{align*}
If an eigenvalue $u_i$ has negative form, then,
by Lemma \ref{lem:asrg_ev2} (iv),
\begin{align*}
s-\varepsilon = -(1+o(1)) \tfrac{k-\mu+Dk}{\lambda-\mu}
\leq u_i \leq -(1+o(1)) \tfrac{k-\mu-Dk}{\lambda-\mu} = s.
\end{align*}
To apply Proposition \ref{prop:abs_reg},
it remains to determine $f_2$, that is, we need to bound the number
of restricted eigenvalues in $[r, k]$.
We already saw that all such $u_i$ are of positive form.
Using $\sum u_i^2 = \mathrm{tr}(A^2) = vk$, we see that there
are at most $(1+o(1)) \frac{vk}{(\lambda-\mu)^2}$
eigenvalues $u_i$ of positive form in this interval.
Using $\sqrt{v} \cdot k = o((\lambda-\mu)^2)$,
we see that $f_2 \leq (1+o(1)) \frac{vk}{(\lambda-\mu)^2} = o(\sqrt{v})$,
so Proposition \ref{prop:abs_reg} yields a contradiction and the claim follows.
\end{proof}
{\smallskip \footnotesize
For instance, for $v = (1+o(1)) q^{11}$, $k = (1+o(1)) q^9$,
$\lambda = (1+o(1)) q^8$, and $\mu = (1+o(1)) q^7$,
Proposition \ref{prop:absolute_bnd_for_asrg} shows
$\sigma \geq (\tfrac13 + o(1)) q^{-2}$.\par}
\section{One Example: Orthogonality Graphs}\label{sec:orth_graphs}
Clearly, strongly regular graphs provide plenty of examples
for approximately strongly regular graphs with $\sigma = 0$.
For our application in \S\ref{sec:Ktfree}, we want to present
one naturally occurring example with small, but nonzero $\sigma$.
\medskip
For $q$ an odd prime power, let $V$ be the $n$-dimensional vector space over ${\mathbb F}_q$,
the finite field with $q$ elements.
As $q$ is odd, ${\mathbb F}_q$ contains $\frac{q-1}{2}$ (nonzero) squares and
$\frac{q-1}{2}$ nonsquares.
Put $\gamma = 1$ if $q \equiv 1\pmod{4}$
and $\gamma = -1$ if $q \equiv 3\pmod{4}$.
{\smallskip \footnotesize
A {\it quadratic form} over ${\mathbb F}_q$ is a map $Q: V \rightarrow {\mathbb F}_q$
such that $Q(\alpha x) = \alpha^2 Q(x)$ for all $\alpha \in {\mathbb F}_q$
and $x \in V$ and the function $B(x, y) \coloneq Q(x+y) - Q(x) - Q(y)$ is bilinear.
We can find an $(n \times n)$-matrix $M$ such that $Q(x) = x^T M x$.
We say that $Q$ is {\it nondegenerate} if $\det(Q) \coloneq \det(M) \neq 0$.
From now on we assume that $Q$ is nondegenerate.
For $x \in V$ nonzero, call $\<x\>$ a {\it point}.
Call a point $\<x\>$ {\it singular} when $Q(x) = 0$.
We refer to \S2 and \S3 in \cite{BvM} and \S11 in \cite{Taylor91}
for some background on quadratic forms.
\par\medskip }
If $n=2m+1$ is odd, then there is only one choice for $Q$ up to isomorphism.
The set of nonsingular points $\< x \>$ splits into two parts of
sizes $\frac12 q^m (q^m+\varepsilon)$ for $\varepsilon \in \{ -1, 1\}$.
Here $\varepsilon$ depends on $Q(x)$ being a (nonzero) square or a nonsquare.
Let $NO^{\varepsilon\perp}_{n,q}$ denote the graph
with one of the parts as vertices,
two vertices $x,y$ adjacent when they are orthogonal, that is $B(x, y) = 0$.
We identify $\varepsilon=1$ with $+$ and $\varepsilon=-1$ with $-$,
so we write $NO^{+\perp}_{n,q}$ and $NO^{-\perp}_{n,q}$.
In \cite{BvM} these graphs are called $NO^{\varepsilon\perp}_n(q)$
for $q \in \{ 3, 5 \}$.
The automorphism group of $NO^{\varepsilon\perp}_{n,q}$ acts
transitively on cliques of a given size
as the corresponding orthogonal group acts transitively
on tuples of pairwise orthogonal points of the same type.
A $k$-regular graph $\Gamma$ of order $v$
is {\it edge-regular} with parameters $(v, k, \lambda)$ if
each pair of adjacent vertices lies in exactly $\lambda$ triangles.
In particular, $NO^{\varepsilon\perp}_{n,q}$ is edge-regular with parameters
\begin{align*}
& v = \tfrac12 q^m (q^m+\varepsilon), && k = \tfrac12 q^{m-1} (q^m-\varepsilon),
&& \lambda = \tfrac12 q^{m-1}(q^{m-1}+\gamma\varepsilon).
\end{align*}
For $q=3,5$, the graph $NO^{\varepsilon\perp}_{n,q}$ is
strongly regular with $\mu = \frac12 q^{m-1} (q^{m-1}-\varepsilon)$.
Standard counting for quadratic forms shows that
\begin{align*}
\tfrac12 q^{m-1}(q^{m-1}-1) \leq \mu_{xy} \leq \tfrac12 q^{m-1}(q^{m-1}+1).
\end{align*}
Hence, for fixed $n$ and $q \rightarrow \infty$, the graph $NO^{\varepsilon\perp}_{n,q}$
is approximately strongly regular with $\sigma \leq (1+o(1))q^{m-1}$.
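{\smallskip \footnotesize
The stated parameters can be verified by brute force for small cases. The following sketch (illustrative only) builds $NO^{\varepsilon\perp}_{5,3}$ from the diagonal form $Q(x) = \sum_i x_i^2$, one valid choice of a nondegenerate form for $n$ odd, determines $\varepsilon$ from the number of vertices, and checks the degree, $\lambda$, and the range of $\mu_{xy}$.
\begin{verbatim}
from itertools import product

q, n = 3, 5
m = (n - 1) // 2

def Q(x):                  # diagonal quadratic form
    return sum(c * c for c in x) % q

def B(x, y):               # B(x,y) = Q(x+y) - Q(x) - Q(y) = 2 x.y
    return 2 * sum(a * b for a, b in zip(x, y)) % q

points = [x for x in product(range(q), repeat=n)
          if any(x) and x[next(i for i, c in enumerate(x) if c)] == 1]
squares = {a * a % q for a in range(1, q)}
verts = [x for x in points if Q(x) in squares]  # one class of
                                                # nonsingular points
v = len(verts)
eps = 2 * v // q ** m - q ** m      # from v = q^m (q^m + eps) / 2
degs = {sum(B(u, w) == 0 for w in verts if w != u) for u in verts}
lams = {sum(B(u, z) == 0 and B(w, z) == 0 for z in verts)
        for u in verts for w in verts if w != u and B(u, w) == 0}
mus = {sum(B(u, z) == 0 and B(w, z) == 0 for z in verts)
       for u in verts for w in verts if w != u and B(u, w) != 0}
print(v, eps, degs, lams, mus)      # compare with the display above
\end{verbatim}
\par\medskip}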
\bigskip
If $n=2m$ is even, then there are two choices for $Q$
up to isomorphism, depending on if $Q$ is of elliptic (put $\varepsilon =-1$)
or hyperbolic type (put $\varepsilon = 1$). We can distinguish them
by the number of singular points which is
$\frac{(q^{m-1}+\varepsilon) (q^{m}-\varepsilon)}{q-1}$.
We can also distinguish them by $\det(Q)$ being a (nonzero) square or a nonsquare.
The nonsingular points $\<x\>$ split into two orbits of equal size
$\frac12 q^{m-1}(q^m-\varepsilon)$ each,
depending on $Q(x)$ being a square or a nonsquare.
Let $NO^{\varepsilon\perp}_{n,q}$ denote the graph
with one of the parts as vertices,
two vertices adjacent when orthogonal, that is $B(x, y) = 0$.
In \cite{BvM} these graphs are called $NO^{\varepsilon}_n(q)$ for $q=3$.
As for $n$ odd, the automorphism group of $NO^{\varepsilon\perp}_{n,q}$
acts transitively on cliques.
In particular, it is edge-regular with parameters
\begin{align*}
& v = \tfrac12 q^{m-1} (q^m - \varepsilon), && k = \tfrac12 q^{m-1}(q^{m-1} - \gamma \varepsilon),
&& \lambda = \tfrac12 q^{m-2} (q^{m-1} + \gamma\varepsilon).
\end{align*}
For $q=3$, the graph $NO^{\varepsilon\perp}_{n,q}$ is strongly regular
with $\mu = \frac12 q^{m-1}(q^{m-2}+ \varepsilon)$.
Standard counting for quadratic forms shows that
\begin{align*}
\tfrac12 q^{m-1}(q^{m-2}-1) \leq \mu_{xy} \leq \tfrac12 q^{m-1}(q^{m-2}+1).
\end{align*}
Hence, for fixed $n$ and $q \rightarrow \infty$, the graph $NO^{\varepsilon\perp}_{n,q}$
is approximately strongly regular with $\sigma \leq (1+o(1))q^{m-1}$.
\bigskip
There is the following tower of graphs
(see \cite[p. 89]{BvM} for the case $q=3$):
Let $NO^{\varepsilon\perp}_{n,q}(x)$ denote the induced subgraph
on the neighborhood of $x$ in $NO^{\varepsilon\perp}_{n,q}$.
We find that $NO^{\varepsilon\perp}_{2m+1,q}(x)$ is
isomorphic to $NO^{\varepsilon\perp}_{2m,q}$,
and that $NO^{\varepsilon\perp}_{2m,q}(x)$ is
isomorphic to $NO^{\gamma\varepsilon\perp}_{2m-1,q}$.
The graph $NO^{+\perp}_{2,q}$
is edgeless if $\gamma = -1$, otherwise
it is the union of $\frac{q-1}{4}$ pairwise disjoint edges.
The graph $NO^{-\perp}_{2,q}$ is edgeless if $\gamma = 1$,
otherwise it is the union of $\frac{q+1}{4}$ pairwise disjoint edges.
By induction, we find that the clique number of $NO^{\varepsilon\perp}_{n,q}$
for $n=2m+1$ or $n=2m$ is $n-1$ if $\gamma \varepsilon = (-1)^m$
and $n$ if $\gamma \varepsilon = -(-1)^m$.
\section{Some Applications}\label{sec:app}
\subsection{Large Caps}\label{sec:caps}
Let $n \geq 2$ and let $q$ be a prime power.
Consider a set of points ${\mathcal C}$ in $\mathrm{PG}(n, q)$, the $n$-dimensional
projective space over ${\mathbb F}_q$.
We use that the number of points in
$\mathrm{PG}(n, q)$ is $\frac{q^{n+1}-1}{q-1}$.
If no three points in ${\mathcal C}$ are collinear,
then ${\mathcal C}$ is called a {\it cap}.
{\smallskip\footnotesize
For the regime of $q=3$ and $n \rightarrow \infty$,
the cap set problem recently gained much prominence due to the
breakthrough result by Ellenberg and Gijswijt, see \cite{EG2017}.
Here we consider the regimes where $n$ is fixed and $q \rightarrow \infty$
as well as where $q$ is fixed and $n \rightarrow \infty$.
Note that \cite{EG2017} considers caps in ${\mathbb F}_q^n$, not $\mathrm{PG}(n, q)$,
but this only changes the bounds by a constant factor.
We always assume that $q \geq 3$
as $q=2$ is trivial.
As the calculations for $n$ fixed require slightly more care
than those for $q$ fixed (but are essentially identical),
we will only include the former.
It is easy to see that a cap has size at most $(1+o(1)) q^{n-1}$
for $n$ fixed and $q \rightarrow \infty$.
The largest known constructions for caps have
size $\Theta(q^{\lfloor \frac23 n \rfloor})$.
This is tight for $n = 2,3$.
See \cite{Edel2004,EB1999} for constructions
of caps for large $n$ or large $q$. \par\medskip }
It is well-known that caps define graphs in various ways.
For a cap ${\mathcal C}$ of $\mathrm{PG}(n, q)$, define an {\it associated graph}
$\Gamma$ as follows:
Consider ${\mathbb F}_q^{n+1}$ with $\mathrm{PG}(n, q)$ as hyperplane at infinity.
Take the vectors of ${\mathbb F}_q^{n+1}$ as vertices,
two distinct $a,b \in {\mathbb F}_q^{n+1}$ adjacent
if $\< a, b \>$ meets $\mathrm{PG}(n, q)$ in a point of ${\mathcal C}$.
Put $t = |{\mathcal C}|$.
It is well-known and easy to verify that
this defines an edge-regular graph with
$(v, k, \lambda) = (q^{n+1}, t(q-1), q-2)$.
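{\smallskip\footnotesize
A quick computational illustration (a sketch; the conic is just one convenient cap): for the conic $\{ (1, t, t^2) : t \in {\mathbb F}_5 \} \cup \{ (0,0,1) \}$ in $\mathrm{PG}(2, 5)$ with $t = q+1 = 6$, the associated graph indeed has $(v, k, \lambda) = (125, 24, 3)$.
\begin{verbatim}
from itertools import product

q = 5
conic = [(1, t, t * t % q) for t in range(q)] + [(0, 0, 1)]

def norm(d):               # projective representative of a direction
    s = next(c for c in d if c)
    inv = pow(s, q - 2, q)
    return tuple(c * inv % q for c in d)

cap = {norm(p) for p in conic}
V = list(product(range(q), repeat=3))

def adj(a, b):
    d = tuple((x - y) % q for x, y in zip(a, b))
    return any(d) and norm(d) in cap

a = V[0]
nbrs = [b for b in V if adj(a, b)]
print(len(V), len(nbrs))             # v = q^3 = 125, k = t(q-1) = 24
b = nbrs[0]
print(sum(adj(b, c) for c in nbrs))  # lambda = q - 2 = 3
\end{verbatim}
\par\medskip}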
An {\it exterior point} of ${\mathcal C}$ is a point of $\mathrm{PG}(n, q)$
not in ${\mathcal C}$ and a {\it secant} of ${\mathcal C}$ is a line of $\mathrm{PG}(n, q)$
which meets ${\mathcal C}$ in precisely two points.
{\smallskip\footnotesize
If each exterior point lies on precisely the same number $h$
of secants, then we obtain a strongly regular graph
with $\mu = \frac{t(t-1)(q-1)^2}{q^{n+1}-1}$.
See \S8.7.1(vi) in \cite{BvM}.
We can say the following about this case.\par\medskip}
\begin{Lemma}
Let ${\mathcal C}$ be a cap of size $t$ in $\mathrm{PG}(n, q)$ such that each exterior point lies on
a constant number of secants. Then
\begin{align*}
&t \leq (1+o(1)) q^{\frac34 n - \frac14} && (\text{as } q \rightarrow \infty),\\
&t = O(q^{\frac34 n}) && (\text{as } n \rightarrow \infty).
\end{align*}
\end{Lemma}
{\smallskip\footnotesize
\begin{proof}
We only prove the first part.
Let us calculate the negative eigenvalue $s < 0$ of
the associated graph $\Gamma$.
We find $s = -(1+o(1)) \mu$.
One of the Krein conditions, Theorem \ref{thm:krein_srg},
requires
\begin{align*}
0 &\leq 1 + \frac{s^3}{k^2} - \frac{(s+1)^3}{(v-k-1)^2}\\
&= 1 - (1+o(1)) \frac{(t^2 q^{-n+1})^3}{(tq)^2}
= 1 - (1+o(1)) \frac{t^4}{q^{3n-1}}.
\end{align*}
Hence, $t \leq (1+o(1)) q^{\frac34 n - \frac14}$.
\end{proof}\par}
How much can we weaken the condition on the exterior points
and secants? From now on let $h$ be the expected
number of secants through an exterior point $p$ of $\mathrm{PG}(n, q)$ and let $h_p$
be the actual number of secants through $p$.
\begin{Lemma}\label{lem:var_h_p}
Let ${\mathcal C}$ be a cap of size $t$ in $\mathrm{PG}(n, q)$ with
an associated approximately strongly regular graph $\Gamma$
with parameters $(v, k, \lambda, \mu; \sigma)$.
Then
\begin{align*}
& \mathrm{Var}(h_p) = (\tfrac14+o(1)) \mathrm{Var}(\mu_{ab}) && (\text{as } q \rightarrow \infty),\\
& \mathrm{Var}(h_p) = \Theta(\mathrm{Var}(\mu_{ab})) && (\text{as } n \rightarrow \infty).
\end{align*}
\end{Lemma}
{\footnotesize
\begin{proof}
We only show the assertion for $q \rightarrow \infty$.
Let $M$ be as in the introduction and let ${\mathcal D}$
denote the set of exterior points of ${\mathcal C}$.
Note that $|M| = q^{2n+2}$ and
that $|{\mathcal C}| = O(q^{n-1})$ implies that $|{\mathcal D}| = (1+o(1)) q^n$.
If for two distinct nonadjacent
vertices $a,b$ the line $\< a, b\>$ meets ${\mathcal D}$ in $p$,
then $2h_p$ is the number of common neighbors of $a$ and $b$.
Hence,
\begin{align*}
\mathrm{Var}(\mu_{ab}) &= \frac{1}{M} \sum_{a \not\sim b} (\mu_{ab} - \mu)^2
= \frac{1}{M} \sum_{p \in {\mathcal D}}
\sum_{\substack{a \not\sim b,\\ \<a,b\> \cap \mathrm{PG}(n, q) = p}} (\mu_{ab} - \mu)^2\\
&= \frac{1}{M} \sum_{p \in {\mathcal D}} 4 \cdot q^{n+1} (q-2) \cdot (h_p - h)^2
= (4+o(1)) q^{-n} \sum_{p \in {\mathcal D}} (h_p - h)^2\\
&= (4+o(1)) \frac{1}{|{\mathcal D}|} \sum_{p \in {\mathcal D}} (h_p - h)^2
= (4+o(1)) \mathrm{Var}(h_p). \qedhere
\end{align*}
\end{proof}\par\medskip}
\begin{Proposition}\label{prop:cap_bound_delta}
For $n \geq 4$,
let ${\mathcal C}$ be a cap of size $t = \Omega(q^{\frac56 n - \frac16})$ in $\mathrm{PG}(n, q)$
and let ${\mathcal D}$ denote its exterior points.
\begin{enumerate}[(i)]
\item If $\mathrm{Var}(h_{p}) \leq (\frac14+o(1)) \sigma^2$, then
$\sigma \geq (1+o(1)) t^3 q^{-\frac52 n + \frac12}$ (as $q \rightarrow \infty$).
\item If $\mathrm{Var}(h_{p}) = \Theta(\sigma^2)$, then
$\sigma = \Omega(t^3 q^{-\frac52 n})$ (as $n \rightarrow \infty$).
\end{enumerate}
\end{Proposition}
\begin{proof}
We only prove the first part.
By Lemma \ref{lem:var_h_p}, $\sigma$
is as in Proposition \ref{prop:krein_bnd_for_asrg}
for the associated graph $\Gamma$.
From $t = \omega(q^{\frac34 n - \frac14})$ and $n \geq 4$,
we obtain $k = (q-1)t = o(\mu^{\frac32})$ and $\lambda = q-2 = o(\mu)$
(as $\mu = (1+o(1)) t^2 q^{1-n}$).
The assertion follows.
\end{proof}
Proposition \ref{prop:cap_bound_delta} implies the following.
\begin{Corollary}\label{cor:caps_bnds}
For $n \geq 4$, let ${\mathcal C}$ be a cap of size $t$ in $\mathrm{PG}(n, q)$.
If $|h-h_p|$ is bounded by a constant, then $t = O(q^{\frac56 n - \frac16})$.
The above holds for $q \rightarrow \infty$ as well as $n \rightarrow \infty$.
\end{Corollary}
{\footnotesize
For $q=3$, Edel constructed caps of size $\Omega(2.21^n)$ \cite{Edel2004}
and there is an upper bound of $o(2.76^n)$ by Ellenberg and Gijswijt \cite{EG2017}.
For the special case of Corollary \ref{cor:caps_bnds}, we find
an upper bound of $o(2.50^n)$.
In general it is known that
$t \leq (1 - O(q^{-\frac12})) q^{n-1}$ (as $q \rightarrow \infty$),
cf. Table 4.4(ii) in \cite{HS2001}.
\par}
\subsection{Optimally Pseudorandom Clique-free Graphs}\label{sec:Ktfree}
A $k$-regular graph $\Gamma$ of order $v$
is called {\it optimally pseudorandom}
if the second largest eigenvalue in absolute value
of its adjacency matrix is in $O(\sqrt{k})$, cf. \cite{KS2006}.
\begin{Proposition}[Alon and Krivelevich, \cite{AK1997}]\label{prop:AK}
Let $\Gamma$ be a $K_m$-free $k$-regular graph of order $v$
with smallest eigenvalue $s$ such that $-s = O(\sqrt{k})$.
Then
\[
k = O(v^{1-\frac{1}{2m-3}}).
\]
\end{Proposition}
{\footnotesize
This bound is tight for $m=3$ due to a construction by Alon \cite{Alon1994}.
Alon and Krivelevich gave an example with $k = \Theta(v^{1-\genfrac{}{}{}3{1}{m-2}})$ \cite{AK1997}.
The author noticed that there is
a well-known construction with $k = \Theta(v^{1-\genfrac{}{}{}3{1}{m-1}})$ \cite{BIP2020}.
These are the graphs $NO^{\varepsilon\perp}_{m,q}$ from \S\ref{sec:orth_graphs}
with clique number $m-1$.
The Ramsey number $R(m, n)$ is the largest number such that
there exists a graph on $R(m,n)-1$ vertices with neither a clique
of size $m$ nor an independent set of size $n$.
Ajtai, Koml\'{o}s, and Szemer\'{e}di \cite{AKS1980}
and Bohman and Keevash \cite{BK2010} proved
\begin{align*}
&\Omega\left( \tfrac{n^{\frac{m+1}{2}}}{(\log n)^{\genfrac{}{}{}3{m+1}{2} - \genfrac{}{}{}3{1}{m-2}}} \right)
= R(m,n) = O\left( \tfrac{n^{m-1}}{(\log n)^{m-2}} \right)
&&\text{(as $n \rightarrow \infty$)}.
\end{align*}
Recently, Mubayi and Verstra\"{e}te showed
in \cite{MV2019} that if the upper bound
in Proposition \ref{prop:AK} is tight for some $m$,
then $R(m,n) = \Omega(\frac{n^{m-1}}{(\log n)^{2m-4}})$,
nearly matching the upper bound.
Their result also implies that if
one finds a construction with
$k = \Omega(v^{1-\genfrac{}{}{}3{1}{m+\varepsilon}})$ for some $\varepsilon>0$,
then
\begin{align*}
&R(m,n) = \Omega\left(\tfrac{n^{\frac{m+\varepsilon+1}{2}}}{(\log n)^{m+\varepsilon+1}}\right) &&\text{(as $n \rightarrow \infty$)},
\end{align*}
which would improve the lower bound on $R(m, n)$.
Our technique here cannot show anything better
than $k = O(v^{1 - \genfrac{}{}{}3{1}{m+1}})$.
\par\medskip}
For the remainder of the section
consider an optimally pseudorandom
$K_m$-free $k$-regular graph $\Gamma$ of order $v$,
smallest eigenvalue $s$, and second largest eigenvalue $r$, where $m \geq 3$.
Let $Y$ be a clique of size $i$ of $\Gamma$.
Let $\Gamma(Y)$ be the induced subgraph on the common neighborhood
of $Y$. Let us define the following properties
for $\Gamma_i \coloneq \Gamma(Y)$.
\begin{enumerate}
\item[(P1)] The graph $\Gamma_i$ has $v_i$ vertices and
$\frac12 v_i k_i$ edges, where
\begin{align*}
&k_i \geq (1+o(1)) k \left( \tfrac{k}{v} \right)^i, &&
(1+o(1)) \tfrac{k}{v} \leq \tfrac{k_i}{v_i} = o(1).
\end{align*}
\item[(P2)] The graph $\Gamma_i$ is approximately strongly
regular with parameters $(v_i, k_i, \lambda_i, \allowbreak \mu_i; \sigma_i)$ and
its smallest eigenvalue $s_i$ satisfies
\[
-s_i = O(\mu_i-\lambda_i).
\]
\end{enumerate}
{\smallskip\footnotesize
Clearly, the graphs $NO^{\varepsilon\perp}_{m,q}$ with clique number $m-1$
satisfy (P1). Furthermore, as mentioned in \S\ref{sec:orth_graphs},
the automorphism group of $NO^{\varepsilon\perp}_{m,q}$ acts transitively
on cliques of a given size.
Hence, $\Gamma_i$ is regular.
The graphs $NO^{\varepsilon\perp}_{m,q}$ with clique number $m-1$ often have property (P2).
We will see that property (P2) follows from (P1) for $i=m-3$
when $\Gamma_{m-3}$ is regular as $\Gamma_{m-3}$
is triangle-free, so $\lambda_{m-3} = 0$. More generally,
some $\Gamma_{i}$ must have property (P2) as $\lambda_i \leq (D+o(1)) \genfrac{}{}{}3{k_i^2}{v_i}$
has to occur for some $D<1$ for some $i$.
\par\medskip}
Let us state the expander-mixing lemma for the special case of only one set,
see the proof of Proposition 1.1.6 in \cite{BvM}.
\begin{Lemma}[Expander-Mixing Lemma, Variant]\label{lem:eml}
Let $Y$ be a set of vertices of size $y$ of
a $k$-regular graph $\Gamma$ of order $v$
with second largest eigenvalue $r$ and smallest eigenvalue $s$.
Then the number $e$ of edges in the induced subgraph on $Y$ satisfies
\[
\tfrac12 y \left( \tfrac{y(k-s)}{v} + s\right) \leq e \leq \tfrac12 y \left( \tfrac{y(k-r)}{v} + r\right).
\]
\end{Lemma}
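{\smallskip\footnotesize
The two bounds are easy to test; a minimal numerical sketch (on an arbitrary small circulant graph, unrelated to the graphs studied here):
\begin{verbatim}
import numpy as np

v, S = 31, [1, 2, 4, 8, 16]          # circulant connection set, k = 10
A = np.zeros((v, v))
for i in range(v):
    for s in S:
        A[i][(i + s) % v] = A[i][(i - s) % v] = 1
k = int(A[0].sum())
ev = np.sort(np.linalg.eigvalsh(A))
s_min, r = ev[0], ev[-2]             # smallest, second largest eigenvalue
Y = list(range(12))
y = len(Y)
e = A[np.ix_(Y, Y)].sum() / 2        # edges inside Y
lo = 0.5 * y * (y * (k - s_min) / v + s_min)
hi = 0.5 * y * (y * (k - r) / v + r)
print(lo <= e <= hi)                 # True
\end{verbatim}
\par\medskip}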
\begin{Lemma}\label{lem:clique-free_paras}
Let $0 \leq i \leq m-3$.
\begin{enumerate}[(i)]
\item If $k = \omega(v^{1-\frac{1}{2i{+}1}})$, then
there exists a $\Gamma_i$ with property (P1).
\item If $\Gamma_i$ is regular and has property (P1), then
\[
-s_i = \Omega\left(k_i \left( \tfrac{k_i}{v_i} \right)^{m-i-2}\right).
\]
\end{enumerate}
\end{Lemma}
\begin{proof}
First we show (i).
For this, let $k_i$
denote the average degree of $\Gamma_i$.
Clearly, the claim is true for $i=0$.
The condition $k = \omega(v^{1 - \frac{1}{2i{+}1}})$
is equivalent to
\begin{align}
(1+o(1)) k \left( \tfrac{k}{v} \right)^{i} = \omega(\sqrt{k}).\label{eq:ki_bnd}
\end{align}
Suppose that the claim is true for $\Gamma_{i-1} = \Gamma(Y)$
for some clique $Y$ of size $i-1$.
Let $a$ be a vertex of $\Gamma_{i-1}$ of degree at least $k_{i-1}$.
Take $\Gamma_i = \Gamma(Y \cup \{ a \})$.
By Lemma \ref{lem:eml} applied to $\Gamma$, using Equation \eqref{eq:ki_bnd},
the average degree $k_i$ of a vertex in $\Gamma_{i}$ satisfies
\begin{align*}
(1+o(1)) k \left(\tfrac{k}{v} \right)^{i} \leq \tfrac{k_{i-1}(k-s)}{v} + s \leq
k_i \leq \tfrac{k_{i-1}(k-r)}{v}
+ r = (1+o(1)) k_{i-1} \cdot \tfrac{k}{v}.
\end{align*}
This shows property (P1) for $\Gamma_{i}$.
\medskip
Next we show (ii).
As $\Gamma$ is $K_m$-free, there has to be some $j$ with $i \leq j \leq m-3$
such that $\lambda_j < (D+o(1)) \frac{k_j^2}{v_j}$ for some constant $D < 1$.
Similarly to the above, for the first $j$
for which this occurs,
we find a $\Gamma_j$ with
\[
k_j = (1+o(1)) k_i \left(\tfrac{k_i}{v_i} \right)^{j-i}.
\]
By Lemma \ref{lem:eml}, applied to the regular graph $\Gamma_i$,
\[
(D+o(1)) \tfrac{k_j^2}{v_j} \geq \lambda_{j} \geq
(1+o(1)) k_i \left(\tfrac{k_i}{v_i} \right)^{j-i+1} + s_i.
\]
The worst case is $j=m-3$, which yields the claim.
\end{proof}
\begin{Proposition}\label{prop:opt_pseudo1}
Suppose that $\Gamma$ has
$k = \allowbreak \omega(v^{1-\frac{1}{3m{-}2i{-}5}})$.
Furthermore, suppose that $\Gamma_i$
has property (P1), and
that $\Gamma_i$ has property (P2) or $i=m-3$.
If $\Gamma_i$ is regular, then $\sigma_i = \Omega(\sqrt{k} \left( \tfrac{k}{v} \right)^{\frac32 m - 2 - i})$.
\end{Proposition}
\begin{proof}
If $\Gamma_i$ has property (P1) and is regular,
then it is an approximately strongly regular graph
with parameters $(v_i, k_i, \lambda_i, \mu_i; \sigma_i)$
with $\mu_i = (1+o(1)) \frac{k_i^2}{v_i}$ (as $k_i = o(v_i)$).
If $\Gamma_{m-3}$ has property (P1), then $\lambda_{m-3} = 0$ and
$\mu_{m-3} = (1+o(1)) \frac{k_{m-3}^2}{v_{m-3}}$,
so $\Gamma_{m-3}$ has property (P2).
\medskip
By Lemma \ref{lem:clique-free_paras}(ii) and property (P2),
\[
\mu_i - \lambda_i = \Omega\left(k_i \left( \tfrac{k}{v} \right)^{m-i-2}\right).
\]
From $k = \omega(v^{1-\frac{1}{3m{-}2i{-}5}})$
and $k_i \geq (1+o(1)) k \left(\frac{k}{v} \right)^i$, we obtain that
$k_i = o(|\mu_i - \lambda_i|^{\frac{3}{2}})$.
By Proposition \ref{prop:krein_bnd_for_asrg},
\begin{align*}
\sigma_i &\geq (1+o(1)) \frac{|\mu_i-\lambda_i|^{\frac32}}{v_i}
= \Omega\left(\sqrt{k} \left( \tfrac{k}{v} \right)^{\frac32 m - 2 - i}\right). \qedhere
\end{align*}
\end{proof}
For $i \leq \frac34 m - \frac32$, we have $k = \omega(v^{1-\frac{1}{2i{+}1}})$
in Proposition \ref{prop:opt_pseudo1}, so
we can apply Lemma \ref{lem:clique-free_paras}(i)
and see that there exists a $\Gamma_i$ with property (P1).
Hence, the case $i = \lfloor \frac34 m - \frac32 \rfloor$
is special.
\begin{Corollary}\label{cor:opt_Kt_bnds}
Let $m \geq 5$.
Let $\Gamma_i$ be as in Proposition \ref{prop:opt_pseudo1}.
\begin{enumerate}[(i)]
\item
If $i=\frac34 m - \frac32$,
and $\sigma_{i} = o(\sqrt{k} \left( \tfrac{k}{v} \right)^{\frac34 m - \frac12})$,
then $k = O(v^{1 - \frac{2}{3 m - 4}})$.
\item If $i=m-3$ and $\sigma_{m-3} = o(\sqrt{k} \left( \tfrac{k}{v} \right)^{\frac12 m +1})$,
then $k = O(v^{1 - \frac{1}{m+1}})$.
\end{enumerate}
\end{Corollary}
\section{Future Work}
There are countless results specific to
strongly regular graphs. Generalizing them
to approximately strongly regular graphs seems to
be a worthwhile endeavor.
Maybe one can improve the bounds given here:
our variant of the absolute bound,
Proposition \ref{prop:absolute_bnd_for_asrg}, is not very
satisfying compared to our Krein bounds.
An earlier version of this document contained
a much better version of Proposition \ref{prop:absolute_bnd_for_asrg}
under the assumption that the graph is also $1$-walk-regular
(cf. \cite{DFG2009}). While our proof was incorrect,
we believe that much stronger versions of
Proposition \ref{prop:absolute_bnd_for_asrg} should hold for
slightly stronger regularity conditions.
There is also the question whether our results --
using the usual connections between caps,
strongly regular graphs, and linear codes --
have any interesting implications for coding theory.
Our primary motivation for this document is to
restrict the search space when looking for constructions
for specific extremal problems.
Maybe the techniques in this paper can be expanded
to obtain more general bounds on caps and
optimally pseudorandom clique-free graphs.
At the time of writing, the author holds the weak belief
that Corollary \ref{cor:caps_bnds} and
Corollary \ref{cor:opt_Kt_bnds} state true upper
bounds for the respective general cases.
\bigskip
\paragraph*{Acknowledgment}
The author is supported by a
postdoctoral fellowship of the Research Foundation -- Flanders (FWO).
\smallskip
The author thanks
Andries E. Brouwer for several suggestions, particularly,
for the term {\it approximately strongly regular graphs},
Dion Gijswijt for several comments on an earlier version
of this manuscript,
and Jacques Verstra\"ete
for hosting him for the last two months in 2021
and discussing caps and clique-free graphs.\footnote{
The author does not know whether either of them approves of
the present version of this document. Thus, if you disagree with this text,
solely its author is to blame.}
\section{Movie}
\section{Equations of motion for two incoming photons}
In this Section, we give the equations of motion that were used to
do the numerics for two incoming photons and to obtain Fig.~3(a,b) in the main text.
In the case of two incoming photons, the full
density matrix
\begin{eqnarray}
\rho(t) &=& \epsilon(t) |0\rangle \langle 0| + \rho_1(t) + |\psi_2(t)\rangle \langle \psi_2(t)| \label{XXeq:rhoP}
\end{eqnarray}
consists of the unnormalized two-excitation wavefunction
\begin{eqnarray}
\!\!\!\! |\psi_2(t)\rangle & = & \frac{1}{2} \int d x \int d y EE(x,y,t) \hat \mathcal{E}^\dagger(x) \hat \mathcal{E}^\dagger(y) |0\rangle \nonumber \\
&&+ \int d x \int' d y EP(x,y,t) \hat \mathcal{E}^\dagger(x) \hat P^\dagger(y) |0\rangle \nonumber \\
&&+ \int d x \int' d y ES(x,y,t) \hat \mathcal{E}^\dagger(x) \hat S^\dagger(y) |0\rangle \nonumber \\
&& + \frac{1}{2} \int' d x \int' d y PP(x,y,t) \hat P^\dagger(x) \hat P^\dagger(y) |0\rangle \nonumber \\
&&+ \int' d x \int' d y PS(x,y,t) \hat P^\dagger(x) \hat S^\dagger(y) |0\rangle \nonumber \\
&& + \frac{1}{2} \int' d x \int' d y SS(x,y,t) \hat S^\dagger(x) \hat S^\dagger(y) |0\rangle,
\end{eqnarray}
the unnormalized single-excitation density matrix
\begin{eqnarray}
\rho_1(t) &=& \int d x \int d y \, ee(x,y,t) \hat \mathcal{E}^\dagger(y) |0\rangle \langle 0| \hat \mathcal{E}(x) \nonumber \\
&& + \int d x \int' d y \, ep(x,y,t) \hat P^\dagger(y) |0\rangle \langle 0| \hat \mathcal{E}(x) \nonumber \\
&& + \int' d x \int d y \, pe(x,y,t) \hat \mathcal{E}^\dagger(y) |0\rangle \langle 0| \hat P(x) \nonumber \\
&& + \int d x \int' d y \, es(x,y,t) \hat S^\dagger(y) |0\rangle \langle 0| \hat \mathcal{E}(x) \nonumber \\
&& + \int' d x \int d y \, se(x,y,t) \hat \mathcal{E}^\dagger(y) |0\rangle \langle 0| \hat S(x) \nonumber \\
&&+ \int' d x \int' d y \, pp(x,y,t) \hat P^\dagger(y) |0\rangle \langle 0| \hat P(x) \nonumber \\
&& + \int' d x \int' d y \, ps(x,y,t) \hat S^\dagger(y) |0\rangle \langle 0| \hat P(x) \nonumber \\
&& + \int' d x \int' d y \, sp(x,y,t) \hat P^\dagger(y) |0\rangle \langle 0| \hat S(x) \nonumber \\
&&+ \int' d x \int' d y \, ss(x,y,t) \hat S^\dagger(y) |0\rangle \langle 0| \hat S(x), \label{Xrho1}
\end{eqnarray}
and the vacuum component $\epsilon(t) |0\rangle \langle 0|$. Here $\int$ integrates over $(-\infty,\infty)$, while $\int'$ integrates over $[0,L]$. Without loss of generality, we take $EE$, $PP$, and $SS$ to be symmetric [e.g.~$EE(x,y) = EE(y,x)$].
If the input state had correlations between different Fock states, one would need to include coherences between manifolds of different photon number; the method we discuss can be naturally generalized to these situations.
All terms in $\rho(t=0)$ vanish except for $EE(x,y,0) = \sqrt{2} h(-x) h(-y)$, where we assume
$h(t < 0) = 0$.
The equations of motion for $EE$, $EP$, $ES$, $PP$, $PS$, and $SS$ can be obtained by expressing them in terms of $|\psi_2\rangle$ [e.g.\ $ES(x,y) = \langle 0|\hat \mathcal{E}(x) \hat S(y)|\psi_2(t)\rangle$] and using Eqs.\ (1-3) in the main text.
For $x \notin [0,L]$, $y \in [0,L]$, they are
\begin{eqnarray}
(\partial_t + \partial_x + \partial_y) EE &=& i g EP, \label{Xeq:EE1}\\
(\partial_t + \partial_x + 1) EP &=& i g EE + i \Omega ES,\\
(\partial_t + \partial_x) ES &=& i \Omega EP,\label{Xeq:ES1}
\end{eqnarray}
and describe
the EIT propagation of photon $y$, while photon $x$ propagates outside the medium with the speed of light ($c=1$ in our units). Using $EE(x,y,t) = \sqrt{2} h(t-x) h(t-y)$ to set the boundary conditions at $y = 0$, these equations are solved for $x \leq 0$, $y \in [0,L]$ to give the boundary conditions for the equations in the region $x,y \in [0,L]$:
\begin{eqnarray}
(\partial_t+ \partial_x + \partial_y) EE &=& i g EP_+,\\
(\partial_t + \partial_x + 1) EP &=& i g (EE + PP) + i \Omega ES,\\
(\partial_t + \partial_x) ES &=& i g PS + i \Omega EP, \label{Xeq:ES2}\\
(\partial_t + 2) PP &=& i g EP_+ + i \Omega ES_+,\\
(\partial_t + 1) PS &=& i g ES + i \Omega (PP + SS), \label{Xeq:PS2}\\
(\partial_t + i V(r)) SS &=& i \Omega PS_+,
\end{eqnarray}
where $EP_\pm(x,y) = EP(x,y) \pm EP(y,x)$, $ES_\pm(x,y) = ES(x,y) \pm ES(y,x)$, $PS_\pm(x,y) = PS(x,y) \pm PS(y,x)$, and $r = x-y$. The solution to these equations can then be used to set the boundary conditions at $x = L$ for Eqs.\ (\ref{Xeq:EE1}-\ref{Xeq:ES1}) in the region $x \geq L$, $y \in [0,L]$, which can in turn be used to calculate the outgoing two-photon pulse.
Now we turn to the evolution equations for
the single-excitation density matrix $\rho_1$. We first note that
\begin{eqnarray}
&&es(x,y) = \langle \hat \mathcal{E}^\dagger(x) \hat S(y)\rangle - \int d z EE^*(x,z) ES(z,y) \label{Xeq:esbar} \\
&& - \int' d z EP^*(x,z) PS(z,y) - \int' d z ES^*(x,z) SS(z,y). \nonumber
\end{eqnarray}
The equation of motion for $\langle \hat \mathcal{E}^\dagger(x) \hat S(y)\rangle$ follows from Eqs.\ (1-3) in the main text. Together with the equations of motion for the two-photon amplitudes, this yields the equation of motion for $es(x,y)$, and, similarly, for all matrix elements of $\rho_1$. The following source terms will describe the transfer of population from $|\psi_2\rangle$ to $\rho_1$:
\begin{eqnarray}
f_{ee}(x,y) &=& 2 \int' dz EP^*(x,z) EP(y,z),\\
f_{ep}(x,y) &=& 2 \int' dz EP^*(x,z) PP(y,z),\\
f_{es}(x,y) &=& 2 \int' dz EP^*(x,z) PS(z,y),\\
f_{pp}(x,y) &=& 2 \int' dz PP^*(x,z) PP(y,z),\\
f_{ps}(x,y) &=& 2 \int' dz PP^*(x,z) PS(z,y),\\
f_{ss}(x,y) &=& 2 \int' dz PS^*(z,x) PS(z,y).\label{Xeq:fss}
\end{eqnarray}
As expected, in the interaction-free case ($V = 0$) and assuming perfect EIT, the source terms vanish because $|e\rangle$ is never populated, so all components of $|\psi_2\rangle$ involving $P$ vanish. With these definitions,
for $x,y \in [0,L]$,
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!(\partial_t + \partial_x + \partial_y) ee &=& i g (ep-pe) + f_{ee},\label{Xeq:ee3}\\
\!\!\!\!\!\!\!\!\!\!\!\!(\partial_t + \partial_x + 1) ep &=& i g (ee - pp) +i \Omega es + f_{ep}, \\
\!\!\!\!\!\!\!\!\!\!\!\!(\partial_t + \partial_x) es &=& i \Omega ep - i g \, ps + f_{es},\label{Xeq:es3}\\
\!\!\!\!\!\!\!\!\!\!\!\!(\partial_t + 2) pp &=& i g (pe-ep) + i \Omega (ps-sp) + f_{pp},\\
\!\!\!\!\!\!\!\!\!\!\!\!(\partial_t + 1) ps &=& - i g \, es + i \Omega(pp-ss) + f_{ps},\label{Xeq:ps3}\\
\!\!\!\!\!\!\!\!\!\!\!\!\partial_t ss &=& i \Omega (sp-ps)+ f_{ss},\label{Xeq:ss3}
\end{eqnarray}
while $pe(x,y) = ep^*(y,x)$, $se(x,y) = es^*(y,x)$, and $sp(x,y) = ps^*(y,x)$. Equations of motion outside of $x,y \in [0,L]$ can be obtained in the same way.
We do numerical calculations in the regime of good EIT ($\lesssim 1\%$ single-photon loss). Thus, to a good approximation, photon scattering occurs only when both photons are inside the medium.
We therefore
solve Eqs.\ (\ref{Xeq:ee3}-\ref{Xeq:ss3}) with vanishing initial and boundary conditions.
We note that the equations can easily be extended \cite{Xpeyronel12} to include longitudinally varying density, finite decoherence rate of $S$, as well as cases where the blockade radius is smaller than the transverse extent of the probe beam.
\section{Ideal single-photon generation from 2 photons \label{Xsec:2}}
In this Section, in the case of an input Fock state with $N = 2$, we show how Eqs.\ (9,11) in the main text arise from the full equations of motion presented above.
Let us assume that EIT is perfect and that $L_p < L < z_b$.
Then, for $x \leq 0$ and $y \in [0,L]$, Eqs.\ (\ref{Xeq:EE1}-\ref{Xeq:ES1}) give
\begin{eqnarray}
ES(x,y,t) = - \sqrt{2/v_g} h(t-x) h(t- y/v_g),
\end{eqnarray}
where $v_g = (\Omega/g)^2$ in our units. From Eqs.\ (\ref{Xeq:ES2},\ref{Xeq:PS2}), we obtain $\partial_t ES \approx - \partial_x ES - g^2 ES$, so that,
for $x,y \in [0,L]$,
\begin{eqnarray}
PS(x,y,t) & \approx & i g ES(x,y,t) \approx i g ES(x=0,y,t-x) e^{-g^2 x} \nonumber \\
&\approx & - i g \sqrt{2/v_g} h(t) h(t - y/v_g) e^{- g^2 x}, \label{XESq1}
\end{eqnarray}
which describes the absorption of the two-excitation amplitude over the absorption length $1/g^2 \ll L_p$.
Inserting this expression into Eq.\ (\ref{Xeq:fss}), we obtain
\begin{eqnarray}
f_{ss}(x,y,t) = 2 h^2(t) h(t - x/v_g) h(t- y/v_g)/v_g. \label{Xeq:fss2}
\end{eqnarray}
Then from Eqs.\ (\ref{Xeq:es3},\ref{Xeq:ps3},\ref{Xeq:ss3}),
to a good approximation,
\begin{eqnarray}
es &=& - (\Omega/g) ss + (\Omega/g^3) \partial_x ss,\label{Xeq:esapprox}\\
\partial_t ss &=& - 2 \Omega^2 ss - \Omega g (se + es) + f_{ss}.\label{Xeq:sstemp}
\end{eqnarray}
Inserting Eq.\ (\ref{Xeq:esapprox}) into Eq.\ (\ref{Xeq:sstemp}), we obtain
\begin{eqnarray}
\partial_t ss = - v_g \partial_R ss + f_{ss}(x,y,t),\label{Xssvg}
\end{eqnarray}
which describes how the source $f_{ss}$ puts excitations into $ss$; and, as soon as the excitations are put in, they start moving at $v_g$ along $R = (x+y)/2$. This equation assumes the wavepacket's frequency components all fit inside the EIT transparency window. Solving this equation gives
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\! ss(x,y,t) =
\frac{2}{v_g} h(t \!-\! \frac{x}{v_g}) h(t \!-\! \frac{y}{v_g}) \int_{t - \frac{\textrm{\scriptsize{min}}(x,y)}{v_g}}^t \!\!\!\! d t' h^2(t'), \label{Xssfinal}
\end{eqnarray}
which, for $N = 2$, generalizes Eqs.\ (9,11) in the main text to cases when the pulse has only partially entered the medium. Eq.\ (\ref{Xssfinal}) yields $\textrm{Tr}[\rho_1^2]/\textrm{Tr}[\rho_1]^2 = 2/3$ for all $t$ and $\textrm{Tr}[\rho_1] = \left[\int^t_{-\infty} d \tau h^2(\tau)\right]^2$, which give the dashed lines in Fig.\ 3(b) in the main text.
To a good approximation, $\rho_1$ satisfies the dark-state-polariton condition $ee = - \sqrt{v_g} es = v_g ss$. This derivation can easily be extended to include the effect of a finite EIT transparency window width, which partially explains the slight discrepancy between analytics and numerics in Fig.\ 3(b) in the main text.
\section{Ideal single-photon generation from $N$ photons}
In this Section, we generalize Sec.~\ref{Xsec:2} to arbitrary $N$.
Let $\mathbf{x}_m \equiv x_1, \dots, x_m$, $E_m \equiv E \dots E$ (where $E$ is repeated $m$ times to denote the $m$-photon wavefunction), and $h(t-\mathbf{x}_m) \equiv \prod_{i=1}^m h(t-x_i)$. Then, for $\mathbf{x}_N < 0$, the incoming $N$-photon state is given by
\begin{eqnarray}
E_N(\mathbf{x}_N) = \sqrt{N!} h(t-\mathbf{x}_N).
\end{eqnarray}
Once the first two photons enter the medium ($\mathbf{x}_{N-2} < 0$ and $x_{N-1},x_N > 0$), we have, in analogy with Eq.~(\ref{XESq1}),
\begin{eqnarray}
&&E_{N-2}PS(\mathbf{x}_N) \\
&& = - i g \sqrt{N!/v_g} h(t-\mathbf{x}_{N-2}) h(t) h(t - x_N/v_g) e^{- g^2 x_{N-1}}. \nonumber
\end{eqnarray}
So, by analogy with Eqs.~(\ref{Xeq:fss},\ref{Xeq:fss2}),
\begin{eqnarray}
&&f_{e_{N-2}se_{N-2}s}(\mathbf{x}_{N-1},\mathbf{x}_{N-1}') \\
&& = 2 \!\! \int'\!\!\!\! d z E_{N-2} PS^*(\mathbf{x}_{N-2}, z, x_{N-1}) E_{N-2} PS(\mathbf{x}'_{N-2}, z, x'_{N-1}) \nonumber \\
&&= \frac{N!}{v_g} h(t \!-\! \mathbf{x}_{N-2}) h(t\!-\!\mathbf{x}'_{N-2}) h^2(t) h(t \!-\! \frac{x_{N-1}}{v_g}) h(t\! -\! \frac{x'_{N-1}}{v_g}). \nonumber
\end{eqnarray}
Applying group velocity propagation along $(x_{N-1}+x'_{N-1})/2$ [as in Eq.~(\ref{Xssvg})], we have [as in Eq.~(\ref{Xssfinal})]
\begin{eqnarray}
&&e_{N-2}se_{N-2}s(\mathbf{x}_{N-1},\mathbf{x}_{N-1}') = \frac{N!}{v_g} h(t - \mathbf{x}_{N-2}) h(t-\mathbf{x}'_{N-2}) \nonumber \\
&& \times h(t \!-\! \frac{x_{N-1}}{v_g}) h(t \!-\! \frac{x'_{N-1}}{v_g}) \int_{t - \frac{\textrm{\scriptsize{min}}(x_{N-1},x'_{N-1})}{v_g}}^t \!\! d t' h^2(t').
\end{eqnarray}
Allowing now the third photon to enter the medium ($x_{N-2}, x'_{N-2} > 0$), we have
\begin{eqnarray}
&&f_{e_{N-3}se_{N-3}s}(\mathbf{x}_{N-2},\mathbf{x}_{N-2}') \\
&&= e_{N-2}se_{N-2}s(\mathbf{x}_{N-3},0,x_{N-2},\mathbf{x}_{N-3}',0,x_{N-2}') \nonumber \\
&& = \frac{N!}{v_g} h(t - \mathbf{x}_{N-3}) h(t-\mathbf{x}'_{N-3}) h^2(t) \nonumber \\
&& \times h(t - \frac{x_{N-2}}{v_g}) h(t - \frac{x'_{N-2}}{v_g}) \int_{t - \frac{\textrm{\scriptsize{min}}(x_{N-2},x'_{N-2})}{v_g}}^t d t' h^2(t'). \nonumber
\end{eqnarray}
Applying group velocity propagation along $(x_{N-2} + x'_{N-2})/2$, we have
\begin{eqnarray}
&&e_{N-3}se_{N-3}s(\mathbf{x}_{N-2},\mathbf{x}_{N-2}') \nonumber \\
&& = \frac{N!}{v_g} h(t - \textbf{x}_{N-3}) h(t-\textbf{x}'_{N-3}) h(t - \frac{x_{N-2}}{v_g}) h(t - \frac{x'_{N-2}}{v_g}) \nonumber \\
&&\times \frac{1}{2} \left[\int_{t - \frac{\textrm{\scriptsize{min}}(x_{N-2},x'_{N-2})}{v_g}}^t d t' h^2(t')\right]^2.
\end{eqnarray}
Allowing the fourth photon to enter the medium, we have
\begin{eqnarray}
&&f_{e_{N-4}se_{N-4}s}(\mathbf{x}_{N-3},\mathbf{x}_{N-3}') \\
&& = e_{N-3}se_{N-3}s(\mathbf{x}_{N-4},0,x_{N-3},\mathbf{x}_{N-4}',0,x_{N-3}') \nonumber \\
&& = \frac{N!}{v_g} h(t - \mathbf{x}_{N-4})h(t-\mathbf{x}'_{N-4}) h^2(t) \nonumber \\
&& \times h(t\! -\! \frac{x_{N-3}}{v_g}) h(t \!-\! \frac{x'_{N-3}}{v_g}) \frac{1}{2}\!\! \left[\int_{t - \frac{\textrm{\scriptsize{min}}(x_{N-3},x'_{N-3})}{v_g}}^t \!\! d t' h^2(t')\right]^2. \nonumber
\end{eqnarray}
Applying group velocity propagation along $(x_{N-3} + x'_{N-3})/2$, we have
\begin{eqnarray}
&&e_{N-4}se_{N-4}s(\mathbf{x}_{N-3},\mathbf{x}_{N-3}') \nonumber \\
&& = \frac{N!}{v_g} h(t - x_1) \dots h(t-x'_{N-4}) h(t - \frac{x_{N-3}}{v_g}) h(t - \frac{x'_{N-3}}{v_g}) \nonumber \\
&& \times \frac{1}{3!} \left[\int_{t - \frac{\textrm{\scriptsize{min}}(x_{N-3},x'_{N-3})}{v_g}}^t d t' h^2(t')\right]^3.
\end{eqnarray}
We continue in this way until we reach
\begin{eqnarray}
&&ss(x_1,x_1')
\nonumber \\
&&= \frac{N}{v_g} h(t - \frac{x_1}{v_g}) h(t - \frac{x'_1}{v_g}) \left[\int_{t - \frac{\textrm{\scriptsize{min}}(x_1,x'_1)}{v_g}}^t d t' h^2(t')\right]^{N-1},
\end{eqnarray}
which generalizes Eqs.\ (9,11) in the main text to cases when the pulse has only partially entered the medium.
\section{Eigenvectors of the single-photon density matrix}
In this Section, we study the eigenvectors and eigenvalues of the single-photon density matrix, Eqs.~(11) and (16) in the main text, obtained via single-photon filtering from Fock-state and coherent-state inputs, respectively.
We first study the eigenvectors $\phi_i$ and eigenvalues $p_i$ of the single-photon density matrix
$\phi(x,y)$ given in Eq.~(11) in the main text.
Defining $\tilde x = \int_{-\infty}^x d z h^2(-z)$,
we obtain $\rho = \int_0^1 d \tilde x d \tilde y \tilde \phi(\tilde x, \tilde y) \hat{\tilde \mathcal{E}}^\dagger(\tilde y) |0\rangle \langle0| \hat{\tilde \mathcal{E}}(\tilde x)$, where $\tilde \phi(\tilde x, \tilde y) = N \left[\textrm{min}(\tilde x, \tilde y)\right]^{N-1}$, $ \hat {\tilde \mathcal{E}}(\tilde x) = \hat \mathcal{E}(x)/h(-x)$, $[\hat{\tilde \mathcal{E}} (\tilde x), \hat{\tilde \mathcal{E}}^\dagger(\tilde y)] = \delta(\tilde x - \tilde y)$. The eigenvalues $p_i$ are then the solutions of the characteristic equation $J_{-1/N}\left[2 \sqrt{(N - 1)/(N p)}\right] = 0$. In particular, in the limit $N \rightarrow \infty$, $p_i$ are the roots of $J_0[2/\sqrt{p}] = 0$. The eigenvectors of $\tilde \phi(\tilde x, \tilde y)$ are
$\tilde \phi_i(\tilde x) \propto \tilde x^{(N-1)/2} J_{1-1/N}\left[2 \sqrt{(N - 1)/(N p_i)} \tilde x^{N/2}\right]$.
In particular, for $N = 2$, $p_i = 2 \pi^{-2} \left(n-\frac{1}{2}\right)^{-2}$, $\tilde \phi_i(\tilde x) = \sqrt{2} \sin\left[\pi \left(n - \frac{1}{2}\right) \tilde x \right]$. While $\tilde \phi(\tilde x,\tilde x)$ and $\tilde \phi_i(\tilde x)$ shorten as $1/N$ with increasing $N$, $\phi(x,x)$ and
$\phi_i(x) = h(-x) \tilde \phi_i\left(\tilde x \right)$
shorten much slower as $1/\sqrt{\log N}$ for a Gaussian $h(x)$.
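These spectra are straightforward to confirm numerically. The following sketch (ours, not part of the derivation) diagonalizes the discretized kernel $\tilde \phi(\tilde x, \tilde y) = N [\textrm{min}(\tilde x, \tilde y)]^{N-1}$ on $[0,1]$, compares the leading eigenvalues with the roots of $J_{-1/N}[2 \sqrt{(N-1)/(Np)}] = 0$, and reproduces $\textrm{Tr}[\rho_1^2]/\textrm{Tr}[\rho_1]^2 = 2/3$ for $N = 2$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

N, M = 2, 1000
x = (np.arange(M) + 0.5) / M                   # midpoint grid on [0, 1]
K = N * np.minimum.outer(x, x) ** (N - 1) / M  # kernel times grid weight
p_num = np.sort(np.linalg.eigvalsh(K))[::-1]

f = lambda z: jv(-1.0 / N, z)                  # characteristic function
roots = [brentq(f, a, a + 1.5) for a in np.arange(0.5, 15, 1.5)
         if f(a) * f(a + 1.5) < 0]
p_bessel = 4 * (N - 1) / (N * np.array(roots) ** 2)

print(p_num[:4])                               # numerical eigenvalues
print(np.sort(p_bessel)[::-1][:4])             # from the Bessel zeros
print((p_num ** 2).sum() / p_num.sum() ** 2)   # purity, 2/3 for N = 2
\end{verbatim}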
We now study the eigenvectors of the single-photon density matrix $\phi(x,y)$ given in Eq.~(16) in the main text.
Following the same change of variables, we obtain $\tilde \phi(\tilde x, \tilde y) = \langle n \rangle \exp\left[- \langle n \rangle (1- \textrm{min}(\tilde x, \tilde y))\right]$, which, for $\langle n \rangle \gg 1$, agrees with the above $\phi(\tilde x, \tilde y)$ provided one identifies $N$ with $\langle n\rangle$. For general $\langle n \rangle$, the eigenstates $\tilde \phi_i$ of $\tilde \phi(\tilde x, \tilde y)$ are linear combinations of $e^{-\langle n \rangle (1-\tilde x)/2} J_1[2 e^{-\langle n \rangle (1- \tilde x)/2}/\sqrt{p_i}]$ and $e^{-\langle n \rangle (1-\tilde x)/2} Y_1[2 e^{-\langle n \rangle (1-\tilde x)/2}/\sqrt{p_i}]$, where $p_i$ are the eigenvalues.
\section{Single-photon subtraction}
In this Section, we present a formal derivation of Eq.\ (17) in the main text, which describes the output of a single-photon subtractor \cite{Xhoner11}.
In addition to verifying Eq.\ (17), this method allows one to treat deviations from the ideal result.
Following Ref.\ \cite{Xhoner11}, the atoms can be in one of two collective states $|G\rangle$ and $|E\rangle$. The density matrix then evolves according to the following master equation:
\begin{eqnarray}
\dot \rho &=& - i [\hat H_0, \rho] + \Gamma \int_0^\infty d x \Big[2 \hat \mathcal{E}(x) |E\rangle \langle G| \rho |G\rangle \langle E| \hat \mathcal{E}^\dagger(x) \nonumber \\
&& - \hat \mathcal{E}^\dagger(x) \hat \mathcal{E}(x) |G\rangle \langle G| \rho - \rho |G\rangle \langle G| \hat \mathcal{E}^\dagger(x) \hat \mathcal{E}(x)\Big].
\end{eqnarray}
$\hat H_0$ here describes simple propagation of light in vacuum. The photon is subtracted within a few absorption lengths $\Gamma^{-1}$ of $x = 0$, so the remainder of the medium plays no role provided $z_b > L$; hence we assumed
$L \rightarrow \infty$.
Here, for simplicity, we only present the derivation for two incoming photons $|2\rangle$. Generalization to an arbitrary incoming state $|\psi\rangle = \sum_n c_n |n\rangle$ is straightforward.
Therefore, the full density matrix
\begin{eqnarray}
\rho = \rho_1 + |\psi_2\rangle \langle \psi_2|
\end{eqnarray}
consists of the two-photon wavefunction
\begin{eqnarray}
|\psi_2\rangle = \frac{1}{2} \int d x d y EE(x,y) \hat \mathcal{E}^\dagger(x) \hat \mathcal{E}^\dagger(y) |0\rangle |G\rangle
\end{eqnarray}
and of the single-photon density matrix
\begin{eqnarray}
\rho_1 = \int d x d y ee(x,y) \hat \mathcal{E}^\dagger(y) |0\rangle |E\rangle \langle E| \langle 0| \hat \mathcal{E}(x).
\end{eqnarray}
One then finds the following equations of motion:
\begin{eqnarray}
&& \partial_t EE(x,y) = - \partial_x EE - \partial_y EE - \Gamma [H(x) + H(y)] EE,\nonumber \\
&& \partial_t ee(x,y) = - \partial_x ee - \partial_y ee + 2 \Gamma \!\! \int_0^\infty \!\!\!\!\! d z EE(y,z) EE^*(x,z), \nonumber
\end{eqnarray}
where $H(x)$ is the Heaviside step function.
Starting with the boundary conditions $EE(x,y,t) = \sqrt{2} h(t-x) h(t-y)$ for $x, y \leq 0$, we solve for $EE$:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\! EE(x,y,t) = \sqrt{2} h(t-x) h(t-y) e^{-\Gamma [H(x) x + H(y) y]}.
\end{eqnarray}
Inserting this into the equation of motion for $ee$ and using the fact that the absorption length is much shorter than the (now uncompressed) pulse duration, we obtain
\begin{eqnarray}
&& \partial_t ee(x,y,t) = - \partial_x ee - \partial_y ee \nonumber \\
&&+ 2 h^2(t) h(t-x) h(t-y) [1-H(x)] [1-H(y)].
\end{eqnarray}
For $x, y < 0$, this can be integrated to give
\begin{eqnarray}
ee(x,y,t) = 2 h(t-x) h(t-y) \int_{-\infty}^t d t' h^2(t'),
\end{eqnarray}
so that, in the remaining three quadrants of the $xy$ plane,
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\! ee(x,y,t) = 2 h(t-x) h(t-y) \int_{-\infty}^{t-\textrm{max}(x,y)} d t' h^2(t'),
\end{eqnarray}
which is a special case of Eq.\ (17) in the main text.
\section{Introduction}
\label{introduction}
Cataclysmic variables (CVs) are close binary systems composed of a white dwarf (WD) primary and a secondary that transfers mass to the primary.
The secondary fills its Roche lobe and its overflowing matter pours onto the primary through the inner Lagrangian point $L_1$.
Dwarf novae (DNe) are a subclass of CVs and have a property of recurrent outbursts with typically 2--5 mag brightening.
The outburst lasts for days or weeks.
It is considered that the outburst results from a sudden release of gravitational energy which is caused by a rapid increase of the mass accretion rate on the primary by the thermal instability in the disk \citep{osa74DNmodel}.
SU UMa-type DNe are a subclass of DNe characterized by occasional superoutbursts, which have longer durations than normal outbursts with superhumps.
Superhumps are variations of small amplitudes typically of 0.1--0.5 mag and are considered to be a result of the tidal instability that is triggered when the outer disk reaches the 3:1 resonance radius during the outburst \citep{whi88tidal, osa89suuma, lub91SHa, lub91SHb, hir90SHexcess}.
\citet{Pdot} proposed that the superoutburst is divided into three distinct stages by a variation of the superhump period ($P_{\rm SH}$): stage A has a longer superhump period, in stage B the superhump period systematically varies, and stage C is a final stage of superoutburst and has a shorter superhump period.
The amplitude of the superhumps grows during stage A and then decreases during stage B.
Intervals between the superoutbursts (supercycles) are typically several hundred days.
WZ Sge-type DNe are a subclass of the SU UMa-type and show mainly superoutbursts and rarely normal outbursts.
WZ Sge-type DNe are characterized by the large amplitude, long duration superoutbursts and in some cases the existence of post-superoutburst brightenings, which are called rebrightenings \citep[for more detail]{kat15wzsge}.
WZ Sge-type DNe are also characterized by double-wave small variations of magnitude, which are called early superhumps and have almost the same period as the orbital period, before a growth of the stage A superhumps \citep{kat02wzsgeESH, Pdot6, ish01rzleo, ish02wzsgeletter}.
Although the historical classifications of WZ Sge-type DNe were mainly based on the amplitude of superoutbursts (e.g., tremendous outburst amplitude dwarf novae or TOADs \citep{how95TOAD}), the presence of early superhumps is now considered to be a key criterion for classification of WZ Sge-type DNe \citep{kat15wzsge}\footnote{The definitions of WZ Sge-type DNe by the American Association of Variable Star Observers (AAVSO) International Variable Star Index (VSX) are (i) unusually long supercycles, (ii) the existence of early superhumps, (iii) the existence of rebrightenings, (iv) orbital periods in the range of 0.05--0.08 d and (v) outburst amplitudes larger than $\sim$7 mag.}.
It is considered that early superhumps are caused when the outer edge of the disk reaches the 2:1 resonance radius during the outburst.
The 2:1 resonance is considered to suppress the deformation of the disk caused by the 3:1 resonance, and the eccentricity change due to the 3:1 resonance grows when the outer edge of the disk falls below the 2:1 resonance radius \citep{osa02wzsgehump,lub91SHa}.
As a consequence, early superhumps are observed in an early stage of the superoutburst and then ordinary superhumps appear subsequently instead of the early superhumps.
In order to reach the 2:1 resonance radius, it is considered that a mass ratio $q$ = $M_2/M_1$ ($M_1$ and $M_2$ represent the mass of the primary and secondary, respectively) should be extremely low.
\citet{osa02wzsgehump} proposed that the outer edge of the disk can reach the 2:1 resonance radius in the low mass-ratio systems with $q<$ 0.08.
Indeed, WZ Sge-type DNe have extremely small mass ratios, which are typically 0.06--0.08, and also have short orbital periods, $P_{\rm orb}$, which are around 0.054--0.056 d \citep{kat15wzsge}.
The mass-transfer rates from the secondary in WZ Sge-type DNe are very small in comparison with typical SU UMa-type DNe and the supercycles are extremely long.
The supercycles are typically a few years or decades \citep{kat15wzsge}.
There are, however, some unusual objects which show superoutbursts with WZ Sge-type features, i.e., large amplitudes of superoutbursts, the existence of rebrightenings or, in some systems, the existence of double-wave modulations similar to early superhumps, although they have longer orbital periods than those of other WZ Sge-type DNe.
These long-period objects are classified as WZ Sge-type DNe in \citet{kat15wzsge} based on observational features.
According to the standard evolutionary theory of CVs, a binary separation becomes shorter because of a loss of the total angular momentum due to the magnetic braking and/or gravitational radiation.
If the mass transfer continues, the secondary becomes degenerate at a certain orbital period and then the binary evolves as its separation becomes wider.
There is, therefore, a minimum orbital period of CVs, and the binaries passing the period minimum are called period bouncers \citep[and references therein]{kol99CVperiodminimum,kni11CVdonor}.
As CVs evolve, the mass-transfer rate becomes lower and the supercycles become longer.
Therefore, the mass-transfer rates in WZ Sge-type DNe are considered to be smaller than those in SU UMa-type DNe \citep{osa02wzsgehump}.
In this paper, we present observations of ASASSN-16eg.
The superoutburst of ASASSN-16eg was detected on 2016 April 9 by All-Sky Automated Survey for Supernovae (ASAS-SN; \citet{ASASSN}) and the magnitude was $V$ = 14.4 at the time of detection.
The coordinates of this object are RA: 17$^{\rm h}$26$^{\rm m}$10$^{\rm s}$.3213 and Dec: +42$^\circ$20$^\prime$02$^{\prime \prime}$.660 (J2000.0) in {\it Gaia} Data Release 1 \citep[for more detail about {\it Gaia} DR1]{GaiaDR1}.
There is a quiescent counterpart of G=19.394 in {\it Gaia} DR1.
ASASSN-16eg was classified as a WZ Sge-type DN since this object showed clear double-wave modulations with the properties of early superhumps and a rebrightening, although its orbital period is particularly long.
We found that ASASSN-16eg has a considerably large mass ratio, far beyond the upper limit of the mass ratio for which the outer edge of the disk is supposed to be able to reach the 2:1 resonance radius \citep{osa02wzsgehump}.
We discuss why this object showed a superoutburst whose features stem from the 2:1 resonance despite its large mass ratio, by comparing it with other long-period objects.
In section \ref{observation}, we describe the details of our observations and the methods of analyses. In section \ref{result}, we present the results of our observations. In section \ref{discussion}, we discuss the results.
\begin{figure*}
\begin{center}
\FigureFile(170mm,100mm){fig1.eps}
\end{center}
\caption{The overall light curve of the 2016 superoutburst of ASASSN-16eg. The filled square and V-shaped sign represent an observational point and upper limit by ASAS-SN, respectively.}
\label{lightcurve}
\end{figure*}
\section{Observation and Analysis}
\label{observation}
Our time-resolved CCD photometry of the superoutburst of ASASSN-16eg was carried out by the Variable Star Network (VSNET) collaborations \citep{VSNET}.
Logs of our photometric observations are in table S1.
All of the observation times were described in barycentric Julian date (BJD).
We added a constant to each observer's magnitude data to adjust the difference in the zero-point.
We used the phase dispersion minimization (PDM) method \citep{PDM} for period analyses.
The 1$\sigma$ error of the best estimated period by the PDM method was determined by the methods in \citet{fer89error} and \citet{Pdot2}.
We subtracted the global trend of the light curve by subtracting a smoothed light curve obtained by locally weighted polynomial regression (LOWESS: \cite{LOWESS}) before making the period analyses.
We used $O-C$ diagrams, from which we can derive the slight variation of the superhump period (see, e.g., \cite{ste05OCdiagram}).
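For illustration, the following is a minimal sketch of the PDM statistic on synthetic data (ours; the actual analysis followed \citet{PDM} together with the error estimates of \citet{fer89error} and \citet{Pdot2} and the LOWESS detrending described above):
\begin{verbatim}
import numpy as np

def pdm_theta(t, mag, period, n_bins=32):
    """Stellingwerf's theta: pooled in-bin variance / total variance."""
    phase = (t / period) % 1.0
    s2, dof = 0.0, 0
    for j in range(n_bins):
        sel = (phase >= j / n_bins) & (phase < (j + 1) / n_bins)
        if sel.sum() > 1:
            s2 += (sel.sum() - 1) * np.var(mag[sel], ddof=1)
            dof += sel.sum() - 1
    return (s2 / dof) / np.var(mag, ddof=1)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 8, 2000))       # 8 d of synthetic photometry
mag = 0.1 * np.sin(2 * np.pi * t / 0.0779) + 0.02 * rng.normal(size=2000)
periods = np.linspace(0.070, 0.085, 3000)
thetas = np.array([pdm_theta(t, mag, p) for p in periods])
print(periods[thetas.argmin()])            # recovers ~0.0779 d
\end{verbatim}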
\section{Result}
\label{result}
\begin{figure}
\begin{center}
\FigureFile(80mm,100mm){fig2.eps}
\end{center}
\caption{Upper panel: $\theta$-diagram of our PDM analysis of early superhumps of ASASSN-16eg (BJD 2457489.4--2457494.0). The gray area represents the 1$\sigma$ error of the best estimated period by the PDM method. Lower panel: Phase-averaged profile of early superhumps.}
\label{a16egpdmearly}
\end{figure}
\subsection{Overall light curve}
Figure \ref{lightcurve} shows the overall light curve of the superoutburst of ASASSN-16eg.
The observation was started on BJD 2457488.
The superoutburst lasted about 20 d during BJD 2457489--2457508 with a slow decline of the brightness, and then rapidly faded.
There was a single rebrightening during BJD 2457512--2457516.
After the rebrightening, the magnitude declined to around $V$ = 19.5 and ASASSN-16eg seemed to be in a quiescent state.
\subsection{Early superhumps}
We regarded the variations recorded in BJD 2457489.4--2457494.0 as early superhumps based on the double-wave variation and the variation of the superhump period.
Figure \ref{a16egpdmearly} shows the result of PDM analysis of early superhumps (upper panel) and the mean profile (lower panel) of ASASSN-16eg.
Double-wave variations characterized as early superhumps are clearly seen.
We found the period of early superhumps to be 0.075478(8) d.
\begin{figure}
\begin{center}
\FigureFile(85mm,120mm){fig3.eps}
\end{center}
\caption{Upper panel: The $O-C$ curve of ASASSN-16eg during BJD 2457494--2457508. We used an ephemeris of BJD 2457496.4415+0.0779132$E$ for drawing this figure. Middle panel: The amplitude of superhumps. Lower panel: The light curve. The horizontal axis in units of BJD and cycle number is common to all of these panels.}
\label{a16egOCcurve}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,100mm){fig4.eps}
\end{center}
\caption{Upper panel: $\theta$-diagram of our PDM analysis of stage A superhumps of ASASSN-16eg (BJD 2457494.1-2457495.1). The gray area represents the 1$\sigma$ error of the best estimated period by the PDM method. Lower panel: Phase-averaged profile of stage A superhumps.}
\label{a16egpdmA}
\end{figure}
\subsection{Ordinary superhumps}
Figure \ref{a16egOCcurve} shows the $O-C$ curve (upper panel), the amplitude of the superhumps (middle panel) and the light curve (lower panel) of ASASSN-16eg during BJD 2457494--2457508.
We determined the times of maxima of ordinary superhumps in the same way as in \citet{Pdot}.
The resultant times are listed in table S2.
We regarded BJD 2457494.1--2457495.1 (0 $\leq E \leq$ 10) as stage A, BJD 2457495.2--2457502.9 (15 $\leq E \leq$ 106) as stage B, and BJD 2457503.4--2457508.4 (120 $\leq E \leq$ 181) as stage C superhumps from the variation of the superhump period and the amplitude of superhumps.
Figure \ref{a16egpdmA} shows the result of PDM analysis of stage A superhumps (upper panel) and the mean profile (lower panel).
We found the stage A superhump period to be $P_{\rm stA}$ = 0.07989(4) d.
We also found the stage B and stage C superhump period to be 0.077880(3) d and 0.077589(7) d, respectively.
$P_{\rm dot} (\equiv \dot P_{\rm sh}/P_{\rm sh})$, which is a derivative of the superhump period during stage B, was 10.4(0.8) $\times$ 10$^{-5}$.
\section{Discussion}
\label{discussion}
\subsection{Particularly long orbital period and large mass ratio}
\label{massratio_and_period}
As the period of early superhumps is considered to be almost equal to the orbital period \citep{Pdot6}, we estimated the orbital period of ASASSN-16eg to be $P_{\rm orb}$ = 0.075478(8) d.
This value is particularly large compared with that of other WZ Sge-type DNe, which are concentrated around 0.054--0.056 d \citep{kat15wzsge}.
Such a long orbital period suggests two possibilities: either ASASSN-16eg is in an earlier stage of CV evolution than other WZ Sge-type DNe, or it is in a final stage of CV evolution as a period bouncer.
We excluded, however, the latter possibility because of its large mass ratio as discussed below.
There are some objects suspected as WZ Sge-type DNe having long orbital periods like ASASSN-16eg.
ASASSN-16eg may be a new candidate of these long-period objects (see subsection \ref{longporb}).
We estimated the mass ratio of ASASSN-16eg from the fractional superhump-period excess for the 3:1 resonance, $\varepsilon^*$ = 1 $-P_{\rm orb}/P_{\rm stA}$, in the same way as proposed in \citet{kat13qfromstageA}.
We estimated $\varepsilon^*$ = 0.0552(6) and then found the mass ratio to be $q$ = 0.166(2).
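The arithmetic is immediate (a minimal check; the conversion from $\varepsilon^*$ to $q$ follows the stage A method of \citet{kat13qfromstageA} and is not reimplemented here):
\begin{verbatim}
P_orb, P_stA = 0.075478, 0.07989      # periods in days
eps_star = 1 - P_orb / P_stA          # fractional superhump excess
print(eps_star)                       # ~0.0552; q = 0.166 then follows
                                      # from Kato & Osaki (2013)
\end{verbatim}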
This value is considerably larger than those of other WZ Sge-type DNe, which are around 0.06--0.08 \citep{kat15wzsge}; the mass ratio of ASASSN-16eg is thus two or three times as large as these typical values.
Both the orbital period and the mass ratio of ASASSN-16eg are similar to those of SU UMa-type DNe rather than WZ Sge-type ones (see figure \ref{evol-track}).
However, we note that the period of early superhumps is almost, but not exactly, equal to the orbital period.
\citet{ish02wzsgeletter} showed that the period of early superhumps of WZ Sge is 0.05\% shorter than the orbital period.
\citet{Pdot6} showed that the differences between the periods of early superhumps and the orbital periods in well-studied WZ Sge-type DNe are very small and periods of early superhumps can be used as approximate orbital periods with an accuracy of 0.1\%, and they also proposed that we can derive the orbital period from the period of early superhumps by assuming fractional excess of early superhumps, $\varepsilon$, of $-$0.05\% if more accuracy is needed.
Considering this difference in ASASSN-16eg, we obtained an improved orbital period of 0.07544026(8) d and then estimated the mass ratio to be 0.167(2) by using the method proposed in \citet{kat13qfromstageA}.
This value is very close to that we estimated from the period of early superhumps, and thus the difference between the period of early superhumps and the orbital period is considered to be not important for the estimation of the mass ratio.
\begin{table*}
\caption{Candidates of long-period objects showing WZ Sge-type superoutbursts}
\begin{tabular}{p{2.6cm}p{1.6cm}p{1.6cm}p{1.2cm}p{3.4cm}p{2cm}p{1.6cm}}
\hline
Object & $P_{\rm orb}$ (d) & $P_{\rm stA}$ (d) & $q$ & Superoutbursts & Supercycle (yr) & References\\
\hline
V1251 Cyg & -- & 0.07616(3) & -- & 1963, 1991, 1994, 1997, 2008\commenta & 3--28 & 1, 2, 3, 4\\
RZ Leo & 0.07626(7) & 0.08072(5) & 0.165(6) & 1918, 1935, 1952, 1976, 1984, 2000\commenta, 2006, 2016 & 6--24 & 5, 6, 7, 8, 9\\
BC UMa & 0.06251(5) & 0.06476(7) & 0.096(6) & 1960, 1962, 1982, 1990, 1992, 1994, 2000, 2003\commenta, 2009 & 2--20 & 1, 6, 10, 11, 12, 13, 14\\
MASTER J004527 & -- & 0.08136(7) & -- & 2013\commenta & -- & 15\\
QY Per & -- & -- & -- & 1999\commenta, 2005, 2015 & 6--10 & 1, 5\\
ASASSN-16eg & 0.075478(8) & 0.07989(4) & 0.166(2) & 2016\commenta & -- & this paper\\
\hline
\end{tabular}
\begin{tabular}{p{16.6cm}}
References: 1. \citet{Pdot}; 2. \citet{web66v1251cyg}; 3. \citet{wen91v1251cyg}; 4. \citet{kat95v1251cyg}; 5. \citet{Pdot8}; 6. \citet{kat15wzsge}; 7. \citet{wol19rzleo}; 8. \citet{ric85rzleo}; 9. \citet{ish01rzleo}; 10. \citet{Pdot2}; 11. \citet{rom64bcuma}; 12. \citet{kun98bcuma}; 13. \citet{boy03bcuma}; 14. \citet{mae07bcuma}; 15. \citet{Pdot6}
\end{tabular}
\begin{tabular}{p{16.6cm}}
\commenta Superoutbursts that we reanalyzed in this paper.
\end{tabular}
\label{tab:longperiod}
\end{table*}
\subsection{Conditions of the 2:1 resonance}
\label{2:1resonance}
\citet{osa02wzsgehump} proposed that the outer edge of the disk cannot reach the 2:1 resonance radius in the systems with $q>$ 0.08.
This upper limit of $q$ for the 2:1 resonance may not be a rigid value, since the extension of the disk radius was derived under the assumption of angular momentum conservation in a steady hot disk.
If accretion onto the primary proceeds during the outburst, the outer edge of the disk may expand beyond this radius until the resonance radius or the tidal truncation radius stops its expansion.
Accretion of a large amount of matter onto the primary will lead to a wide extension of the disk, and the disk may even exceed the tidal truncation radius.
To achieve such a condition, an extremely low mass-transfer rate would be key.
A low mass-transfer rate leads to a very low viscosity in quiescence because of the poor conductivity of the cold disk and the resulting decay of magneto-hydrodynamic turbulence \citep{gam98,osa01egcnc}.
If the viscosity is extremely low, mass transferred from the secondary is stored in a torus at the outer edge of the disk, and thus a large amount of matter has accumulated in the disk by the onset of an outburst.
In the case of $q$ = 0.166(2), the 2:1 resonance radius for a circular orbit, $R_{2:1}/a$ = $(1/2)^{2/3}(1+q)^{-1/3}$, where $a$ is the binary separation, is 0.599.
The Roche-lobe radius of the primary, computed with the approximation of \citet{egg83rochelobe}, is 0.537 for $q$ = 0.166(2).
These values indicate that the 2:1 resonance radius is larger than the Roche-lobe radius.
The distance of the first Lagrangian point $L_1$ from the primary, given by $R_1/a$ = 0.500$-$0.227$\log{q}$ in \citet{war95book}, is 0.677, so the 2:1 resonance radius is smaller than the distance to $L_1$; this may enable the disk to reach the 2:1 resonance radius without colliding with the secondary.
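For reference, the three characteristic radii evaluate as follows for $q = 0.166$ (a sketch of the arithmetic; the Roche-lobe value assumes the \citet{egg83rochelobe} approximation applied to the primary's lobe with mass ratio $M_1/M_2 = 1/q$):
\begin{eqnarray*}
R_{2:1}/a &=& (1/2)^{2/3}\,(1+q)^{-1/3} \simeq 0.630 \times 0.950 \simeq 0.599,\\
R_{\rm L}/a &=& \frac{0.49\,(1/q)^{2/3}}{0.6\,(1/q)^{2/3} + \ln [1+(1/q)^{1/3}]} \simeq \frac{1.622}{3.023} \simeq 0.537,\\
R_{1}/a &=& 0.500 - 0.227\log q \simeq 0.500 + 0.177 = 0.677 .
\end{eqnarray*}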
Therefore, the outer edge of the disk can perhaps reach the 2:1 resonance radius, depending on conditions.
Another possibility is that the 2:1 resonance can work even if the outer disk does not strictly reach the resonance radius, since the 2:1 resonance is very strong.
\subsection{The properties of long-period objects}
\label{longporb}
In \citet{kat15wzsge}, five objects, i.e. V1251 Cyg, RZ Leo, BC UMa, MASTER OT J004527.52+503213.8 (hereafter MASTER J004527) and QY Per, were proposed as candidates for a borderline class between the SU UMa-type DNe and the WZ Sge-type ones, or long-period objects\footnote{Although some period bouncers may have long $P_{\rm SH}$, they are not considered here.}.
These objects showed superoutbursts with features similar to those of WZ Sge-type DNe, although their orbital periods are too long for typical WZ Sge-type ones.
MASTER J004527 and QY Per may be, however, SU UMa-type DNe because no early superhumps were detected in these two objects \citep{Pdot6, Pdot8}.
Except for QY Per, all of these long-period objects showed a single rebrightening as in ASASSN-16eg.
We closely re-examined the past superoutbursts of these long-period objects.
We also reanalyzed the mass ratios of V1251 Cyg \citep{Pdot}, RZ Leo \citep{ish01rzleo} and BC UMa \citep{mae07bcuma} by adding new observations from the AAVSO database and by using a new method proposed in \citet{kat13qfromstageA}.
In these three objects, modulations similar to early superhumps are detected\footnote{In RZ Leo, although \citet{Pdot} suggested that variations of magnitude before the onset of ordinary superhumps are probably early superhumps rather than an extension of ordinary superhumps, \citet{Pdot8} indicated that these modulations may be different from those of typical WZ Sge-type DNe.
However, the existence of ASASSN-16eg supports the existence of long-period WZ Sge-type DNe, and the variations of magnitude similar to early superhumps in RZ Leo may indeed be early superhumps.}.
We summarized our analyses in table \ref{tab:longperiod}.
We list our estimated values for $P_{\rm orb}$ and $P_{\rm stA}$.
The details of the analyses are summarized in the supplementary discussion, figure S1--S10 and table S3--S6.
We should note that these estimations of mass ratios based on the periods of early superhumps and the stage A superhump periods involve large uncertainties mainly because of the short baseline of the data of stage A superhumps.
A reliable value of the period of early superhumps of V1251 Cyg could not be derived from the 2008 superoutburst and thus we could not estimate the mass ratio.
Although the orbital period of BC UMa is longer than those of other WZ Sge-type DNe, it is rather short in comparison with that of RZ Leo or ASASSN-16eg.
The mass ratio of BC UMa is also quite small in comparison with that of RZ Leo or ASASSN-16eg.
Thus, this object may belong to an intermediate class between the SU UMa-type DNe and the WZ Sge-type ones, as mentioned in \citet{mae07bcuma}.
The orbital period and mass ratio of RZ Leo are similar to those of ASASSN-16eg.
ASASSN-16eg also may have the same properties as RZ Leo, such as the long recurrence time of superoutbursts.
\subsection{Supercycles of long-period objects}
\label{supercycle}
As we mentioned in subsection \ref{2:1resonance}, the mass-transfer rate in ASASSN-16eg may be low enough to cause a WZ Sge-type superoutburst.
Other long-period objects showing WZ Sge-type superoutbursts may likewise have low mass-transfer rates.
The recurrence time of the superoutburst is considered to be proportional to the inverse of the mass-transfer rate \citep{osa95wzsge}.
Therefore, if no superoutburst has been overlooked, the length of the supercycle reflects the mass-transfer rate, and an exceptionally long supercycle suggests an exceptionally low mass-transfer rate.
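Schematically, if a superoutburst is triggered once the disk has accumulated a roughly constant critical mass $M_{\rm crit}$ (an idealization of the picture in \citet{osa95wzsge}), the supercycle length obeys
\begin{equation*}
T_{\rm s} \simeq M_{\rm crit}/\dot{M},
\end{equation*}
so that, at a fixed $M_{\rm crit}$, a supercycle ten times longer corresponds to a mass-transfer rate ten times lower.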
We excluded MASTER J004527 from this discussion since this object showed only one superoutburst.
The shortest supercycle of BC UMa is 2 yr, as seen in table \ref{tab:longperiod}; this value is rather short for typical WZ Sge-type DNe, whose supercycles are about decades long, although it is long for typical SU UMa-type ones, whose supercycles are several hundred days.
As \citet{mae07bcuma} mentioned, however, this object may belong to an intermediate class and seems to be different from the other long-period objects (see also figure \ref{evol-track}).
Thus we also excluded this object from this discussion.
The other three objects, V1251 Cyg, RZ Leo and QY Per, have long supercycles comparable to those of other WZ Sge-type DNe.
Although the shortest supercycle of V1251 Cyg is 3 yr and seems fairly short, the 1994 superoutburst was not observed well \citep{Pdot}, and it might have occurred before sufficient mass had accumulated in the disk.
These long supercycles indicate that in these long-period objects the mass-transfer rates are low.
Comparison with these long-period objects suggests that ASASSN-16eg also has a long supercycle and thus a low mass-transfer rate.
We also searched for past outbursts of ASASSN-16eg using the Harvard astronomical plates digitized by the Digital Access to a Sky Century @ Harvard (DASCH; \citet{gri09DASCH,Lay10DASCH}) project.
These plates record objects brighter than $B \sim$ 14--17 and could have captured superoutbursts of ASASSN-16eg, if any occurred while the field was being recorded on plates.
There is, however, no record of a brightening of ASASSN-16eg, so we could not constrain its supercycle.
\begin{figure}
\begin{center}
\FigureFile(85mm,95mm){fig5.eps}
\end{center}
\caption{Mass ratio, $q$, versus orbital period, $P_{\rm orb}$. The dashed and solid curves represent the standard and optimal evolutionary tracks in \citet{kni11CVdonor}, respectively. The triangles and filled circles represent SU UMa-type and WZ Sge-type DNe, respectively, which are listed in \citet{kat13qfromstageA} and \citet{kat15wzsge}. The star represents ASASSN-16eg. The diamonds represent RZ Leo and BC UMa. There are two candidates for period bouncers at the lower right of this panel. They have long orbital periods but extremely small mass ratios in comparison with ASASSN-16eg or RZ Leo.}
\label{evol-track}
\end{figure}
\subsection{CV evolution}
\label{CVevolution}
We show the $P_{\rm orb}$ -- $q$ relation in figure \ref{evol-track}, in which we showed SU UMa-type and WZ Sge-type DNe with $q$ values estimated by \citet{kat13qfromstageA} and \citet{kat15wzsge}.
We also included ASASSN-16eg and other long-period objects, RZ Leo and BC UMa.
As described in section \ref{introduction}, the binary separation of CVs becomes shorter because of a loss of the total angular momentum.
Because of Roche-lobe overflow, the mass ratio of the binary also decreases.
Thus CVs generally evolve toward smaller mass ratios and shorter separations.
Therefore, it is considered that DNe evolve into WZ Sge-type DNe through SU UMa-type DNe.
This may indicate that the long-period objects including ASASSN-16eg are at an earlier stage of CV evolution than other WZ Sge-type DNe.
However, as mentioned in section \ref{introduction}, the mass-transfer rate decreases as CVs evolve.
In this picture, the mass-transfer rate is considered a function of orbital period.
For long-period objects with long supercycles, however, the mass-transfer rate appears to be lower than that expected from this picture.
Such systems may be in a state with a temporarily decreased mass-transfer rate.
However, the mechanism to realize a low mass-transferring state is still unclear.
As one possibility for realizing such a temporarily decreased mass-transfer state, we propose the hibernation scenario \citep{Hibernation, liv92hibernation}, which assumes that after a nova eruption the mass transfer from the secondary decreases because of the increase in binary separation and the weakening of irradiation from the primary, leaving the binary in a temporarily low mass-transfer state.
Although nova-like stars below the period gap are very rare \citep{pat13bklyn}, this hypothesis could explain the reason why the mass-transfer rate can be low even in long-period objects.
Another possibility to explain the low mass-transfer rate in these long-period WZ Sge-type objects is that these objects trace a different evolutionary track from the standard one.
\citet{gol15nuclearevolution} showed that the evolutionary track of CVs depends on the initial conditions of the primary or the secondary.
The objects discussed here, however, appear to be on the standard evolutionary track as judged from the $P_{\rm orb}$ -- $q$ relation (figure \ref{evol-track}), and this possibility appears to be less likely.
\section{Summary}
We report on our photometric observations of the 2016 superoutburst of ASASSN-16eg.
This object showed a WZ Sge-type superoutburst with clear early superhumps and a post-superoutburst rebrightening.
We derived the period of early superhumps to be 0.075478(8) d.
The orbital period, which is almost identical to the period of early superhumps, is exceptionally long for a WZ Sge-type DN.
The mass ratio estimated from the period of stage A superhumps is 0.166(2), which is also very large for a WZ Sge-type DN.
\citet{osa02wzsgehump} proposed that in systems with $q>$ 0.08 the outer edge of the disk cannot reach the 2:1 resonance radius.
However, if accretion onto the primary proceeds during the outburst, the disk can continue to expand until the resonance radius or the tidal truncation radius stops its expansion.
If the mass-transfer rate is low and a large amount of matter therefore accumulates in the disk before the onset of an outburst, the outer edge of the disk may reach, or come close to, the 2:1 resonance radius beyond the tidal truncation radius during the resulting violent outburst.
For candidates of long-period objects showing WZ Sge-type superoutbursts, we examined their supercycles, which are considered to reflect the mass-transfer rates.
We found that V1251 Cyg and RZ Leo have long supercycles in comparison to other WZ Sge-type DNe.
This suggests that these long-period objects have low mass-transfer rates.
Comparison with these long-period objects indicates that ASASSN-16eg also has a long supercycle and thus a low mass-transfer rate.
The long orbital period suggests that ASASSN-16eg is in an earlier stage of CV evolution than other WZ Sge-type DNe.
Although the mass-transfer rate of CVs decreases as they evolve, these long-period objects appear to have mass-transfer rates as low as those of other WZ Sge-type DNe.
As mechanisms to realize such a low mass-transfer rate, we propose the hibernation scenario or the possibility that long-period objects trace a different evolutionary track from the standard one.
\section*{Acknowledgement}
This work was supported by a Grant-in-Aid “Initiative for High-Dimensional Data-Driven Science through Deepening of Sparse Modeling” from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
We are grateful to the All-Sky Automated Survey for Supernovae (ASAS-SN) for detecting a large number of DNe and the superoutburst of ASASSN-16eg.
We are thankful to AAVSO and the many amateur observers for providing much of the data used in this research.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (http://www.cosmos.esa.int/gaia), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, http://www.cosmos.esa.int/web/gaia/dpac/consortium).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
This work has made use of the VizieR catalogue access tool provided by CDS, Strasbourg, France, and of Astrophysics Data System (ADS) provided by NASA, USA.
This work also has made use of Digital Access to a Sky Century @ Harvard (DASCH) project and we thank Denis Denisenko for his help with use of DASCH.
\section*{Supporting information}
Supplementary discussion, figure S1--S10 and table S1--S6 are reported in the online version.
\section{Introduction}
The recent article \cite{Das1} investigates two important measures of risk contagion for a given bivariate random vector $(Z_1,Z_2)$, namely the marginal mean excess (MME) and the marginal expected shortfall (MES).
Specifically, under the assumption that $\E{\abs{Z_1}}< \infty$ the MME is defined for any $p\in (0,1)$ by
\bqn{ E(p) = \E{ (Z_1 - VaR_{Z_2}(p))_+ \lvert Z_2> VaR_{Z_2}(p)},
}
whereas MES is given as
\bqn{ S(p) = \E{ Z_1 \lvert Z_2> VaR_{Z_2}(p)},
}
with $VaR_{Z_i}(p)$ the Value-at-Risk at level $p$ of $Z_i$, which is simply the quantile function of $Z_i$ at $p$. In general, neither
$E(p)$ nor $S(p)$ can be calculated explicitly. Moreover, in risk management practice the main interest is in these quantities for $p$ close to 1. \\
In this paper we shall consider first the approximations of MME and MES for $(Z_1,Z_2)$ being jointly Gaussian with correlation $\rho \in (-1,1)$. Gaussian random vectors are asymptotically independent, i.e., large values occur independently which in our context means that
$$ \lim_{p \uparrow 1} \pk{ Z_1 > VaR_{Z_1}(p) \lvert Z_2 > VaR_{Z_2}(p) } =0.$$
Moreover, Gaussian risks exhibit the dimension reduction phenomenon, i.e., the joint survival probability can be proportional to the marginal survival probability for large values of the threshold, see e.g., \cite{ENJH02,Hashorva05,MR2397662} and the discussion below. Indeed that phenomenon renders the approximations of both MME and MES interesting and challenging.
Under a hidden regular variation assumption on $(Z_1,Z_2)$, the recent publications \cite{Das1,Das2} consider approximations of MME and MES under some additional asymptotic conditions. However, the Gaussian setup is not covered therein, since in our setup the marginal distributions are light-tailed. As discussed recently in \cite{AsmussenE}, see also \cite{Nolde}, the light-tailed case is very challenging (even in the one-dimensional setup) and surprisingly little investigated in the literature. \\
Given the central role of multivariate Gaussian distributions, and the interesting behaviour of light-tailed risks, our principal goal in this contribution is to derive approximations of MME and MES in the Gaussian setup. We state next the result for the bivariate case.
Throughout in the following $\Phi$ denotes the distribution function (df) of an $N(0,1)$ random variable with inverse $\Phi^{-1}$ and $\varphi $ the probability density function (pdf) of a standard Gaussian random vector $(X_1,X_2)$ with correlation $\rho \in (-1,1)$.
\begin{theo} Let $\vk Z=(Z_1, Z_2)$ be jointly Gaussian with $Z_i$ having $N(\mu_i, \sigma_i^2),i=1,2$ df and correlation $\rho \in (-1,1)$ and set $u_p= \Phi^{-1}(p), \beta= (\mu_2- \mu_1)/\sigma_1, \eta = \beta /\sqrt{1- \rho^2}$.\\
i) If $ \sigma_2> \rho \sigma_1 $ and $ \sigma_1> \rho \sigma_2 $, then
\bqn{ \label{stda}
E(p) &\sim & \frac{\sigma_1}{ h_1^2h_2} \sqrt{ 2 \pi} u_p^{-2} e^{ \frac{ u_p^2}{2}} \varphi ( \sigma_2 u_p/\sigma_1+ \beta , u_p) \to 0, \quad p\uparrow 1, }
where
\begin{eqnarray*}
h_1= \frac{\sigma_2 - \rho \sigma_1 }{\sigma_1(1- \rho^2)} >0, \quad h_2=\frac{\sigma_1- \rho \sigma_2 }{\sigma_1(1- \rho^2)}>0.
\end{eqnarray*}
ii) If $\sigma_2 = \rho \sigma_1 $, then
\bqn{ \lim_{p\uparrow 1} E(p) &= & \sigma_1
\sqrt{1- \rho^2} \Bigl( \Phi'( \eta ) - \eta [1- \Phi( \eta) ] \Bigl) \in (0, \infty).
\label{stda2} }
iii) If $\sigma_2 < \rho \sigma_1$, then
\bqn{
E(p) &\sim &(\rho \sigma_1 - \sigma _2) u_p\to \infty , \quad p\uparrow 1.
\label{e15}}
iv) If $\sigma_1 \le \rho \sigma_2 $, then
\bqn{\label{i6} E(p) &\sim& \frac{\sigma_1^3}{\sigma_2^2} e^{ -\frac{\beta^2 }{2 }} \Phi( \eta \rho^* )
u_p^{-1} e^{- \beta \frac{\sigma_2}{\sigma_1} u_p} e^{ -\frac{\sigma_2^2 - \sigma_1^2}{2 \sigma_1^2}u_p^2}\to 0, \quad p\uparrow 1,
}
where $\rho^*=\rho$ if $\sigma_1= \rho \sigma_2$ and $\rho^*=\infty$ otherwise, with the convention $\Phi( \eta \rho^*)=1$ when $\rho^*=\infty$. \\
v) As $p\uparrow 1$ we have
\bqn{
\label{eqe1} S(p) - \mu_1- \sigma_1 \rho u_p &\to& 0.}
\label{th1}
\end{theo}
The above findings show that $E(p)$ and $S(p)$ behave quite differently as $p$ approaches 1. Both \eqref{stda} and \eqref{i6} show that $E(p)$ tends super-exponentially fast to 0 as $p\uparrow 1$, whereas \eqref{stda2} and \eqref{e15} display a completely different behaviour.
For the approximation of MES there is only one case, as shown in \eqref{eqe1}, since the conditioning event is invariant with respect to $\sigma_2$. \\
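As a quick illustration of \eqref{stda} (a direct specialization, assuming the standardized case $\mu_1=\mu_2=0$, $\sigma_1=\sigma_2=1$): then $\beta=0$, both conditions of statement $i)$ hold for any $\rho\in (-1,1)$, $h_1=h_2= \frac{1}{1+\rho}$, and \eqref{stda} reduces to
\bqny{ E(p) \sim \frac{(1+\rho)^3}{\sqrt{2 \pi (1- \rho^2)}}\, u_p^{-2}\, e^{- \frac{1- \rho}{2(1+ \rho)} u_p^2}, \quad p\uparrow 1,
}
which makes the super-exponential rate of decay in $u_p$ explicit.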
The bivariate setup is, however, restrictive; in higher dimensions a non-zero limit in \eqref{eqe1} is possible, see Remark \ref{remkoka}. Indeed, the two-dimensional setup is easier to deal with and no additional notation is needed, but it does not show how to derive corresponding results in the multivariate setup.
It is worth mentioning that extensions of our results to elliptical random vectors
are also possible, but those require more technical efforts and additional assumptions similar to \cite{Jaworski}[Assumption 4]. Moreover, extensions to the larger class of Gaussian like random vectors treated in \cite{farkas2017asymptotic}
can also be obtained, but again further technical treatments are needed and will therefore not be addressed here. Besides, our findings are of certain importance for considering approximations of other risk measures such as multivariate expectiles considered in \cite{maume2017multivariate}.
Brief outline of the rest of the paper: In the next section we focus on the multivariate setup deriving the approximations of MME, MES and the multivariate conditional tail expectation (MCTE). Section 3 contains all the proofs followed by an Appendix.
\section{Main Results}
In this section we shall be concerned with the multivariate setup deriving first an extension of \netheo{th1} and then discussing further some related conditional limit results. Given its importance in application we shall consider also the approximation of MCTE. In the last subsection the three dimensional case will be briefly explored.\\
In our notation below bold lower case symbols are column vectors in $\mathbb{R}^d$.
The Hadamard product $r \vk{x}$ stands for the vector $(r x_1 , \ldots, r x_d)$ where $r\in \R, \vk{x}=(x_1 , \ldots, x_d)^\top \in \R^d$. All other operations with vectors are defined as usual, component-wise. For instance $\vk a \vk x$ is the vector $(a_1 x_1 , \ldots, a_d x_d)^\top $ for any $\vk a, \vk x\in \R^d$ and $\vk x \ge \vk a$ means that $x_i \ge a_i, i\le d$.
\subsection{Approximation of MME and MES}
Let in the following $\vk Z=(Z_1 , \ldots, Z_d)$ be a $d$-dimensional Gaussian random vector with mean $\vk \mu$. As in the bivariate case we define MME for a given level $p\in (0,1)$ by
$$ E(p)= \E{(Z_1- A_p)_+ \lvert Z_2> VaR_{Z_2}(p) , \ldots, Z_d > VaR_{Z_d}(p) },$$
with $A_p= \sum_{i=1}^{d-1} a_i VaR_{Z_{i+1}}(p)$ where $a_i$'s are given constants.
Writing $\sigma_i^2$ for the variance of $Z_i$ we have thus
\bqny{
E(p)&=& \sigma_1 \E{(X_1 - ( A_p -\mu_1)/\sigma_1 )_+ \lvert X_2> VaR_{X_2}(p) , \ldots, X_d > VaR_{X_d}(p) }\\
&=& \sigma_1 \E{(X_1 - (\sum_{i=1}^{d-1} a_i ( \sigma_{i+1} u_p + \mu_{i+1}) -\mu_1)/\sigma_1 )_+ \lvert X_2> u_p , \ldots, X_d > u_p },
}
with $\vk{X}=(X_1 , \ldots, X_d) $ a centered Gaussian random vector with covariance matrix $\Sigma$ equal to the correlation matrix of $\vk Z$ and
$u_p= \Phi^{-1}(p)$. For notational simplicity, throughout this paper random vectors are row vectors and therefore we do not use the transpose sign.\\
Consequently, without loss of generality we shall determine next the asymptotics of
$$ E(\vk c,u)= \E{(X_1 - c_1u - \mu)_+ \lvert X_2> c_2 u , \ldots, X_d > c_d u }$$
as $u\to \infty$ for given $\vk c=(c_1 , \ldots, c_d)^\top,\mu$ assuming that $\Sigma$ is a non-singular correlation matrix.\\
In the two-dimensional setup the aimed approximation can be obtained without discussing a closely related and crucial quadratic optimisation problem. However, in the higher dimensional settings we need to solve the following quadratic programming problem
$ \Pi_\Sigma(\vk c)$: determine the minimum of $\vk x^\top \Sigma^{-1}\vk{x}$ subject to $\vk{x} \ge \vk c$ for given $\vk c\in \R^d\setminus (-\infty, 0]^d$ with solution $\tilde{\vk c}$. The reason for discussing $\Pi_\Sigma(\vk c)$ is that our investigation is closely related to the asymptotic tail behaviour as $u\to \infty$ of
$\pk{\vk{X}> \vk c u}$. In view of \cite{ENJH02} (see below \nelem{thmL}) the aforementioned asymptotic tail behaviour is solely determined by $\Pi_\Sigma(\vk c)$.
In view of \nelem{prop1} in the Appendix, $\vk{\tilde c}$ exists, is unique, and there exists a unique index set $I \subset \{1 , \ldots, d \}$ with $m\ge 1$ elements such that
\bqn{\label{cc} \tilde{\vk c}_I= \vk c_I, \quad \tilde{\vk c}_{I^c}= \Sigma_{I^cI}(\Sigma_{II})^{-1} \vk c_I \ge \vk c_{I^c}, \quad
\tilde{\vk c}^\top \Sigma^{-1} \tilde{\vk c} = \vk c_I^\top
(\Sigma_{II})^{-1} \vk c_I>0,
}
where $I^c= \{1 , \ldots, d \} \setminus I$; note in passing that $I^c$
can be empty.\\
Throughout this paper $\Sigma_{IJ}$ is the matrix obtained by $\Sigma$ keeping the rows and columns with indices in $I$ and $J$, respectively and similar notation applies for vectors.
Denote next by $L \subset \{1 , \ldots, d \}$ the maximal index set that contains $I$ such that
$\tilde{\vk c}_L=\vk c_L$. We have by \nelem{prop1} that
$$\vk c^\top \Sigma^{-1} \vk c = \vk c_L^\top
(\Sigma_{LL})^{-1} \vk c_L= \vk c_I^\top
(\Sigma_{II})^{-1} \vk c_I$$
and moreover
\bqny{ \label{raki}
h_i= \vk c_I^\top (\Sigma_{II})^{-1} \vk e_i > 0, \quad \forall i\in I ,
}
where $\vk e_i$ is the unit vector in $\mathbb{R}^m$ with all components equal to 0 apart from the $i$th component equal to 1.
Denote by $L^c$ the complement of index set $L$ with respect to $\{1 , \ldots, d\}$. \\
For illustration purposes, we discuss briefly the case $d=2$.
Consider therefore $\Sigma$ to be a correlation matrix with off-diagonal elements equal to $\rho\in (-1,1)$ and let $\vk c= (1,c)^\top$. If $c\in ( \rho,1) $, then $\tilde{\vk c}= \vk c$ and hence $I=L=\{1,2\}$, implying that $I^c,L^c$ are empty. The assumption $c=\rho$ yields
$$I=\{1\}, \quad L=\{1 , 2\},$$
whereas supposing that $c< \rho$ implies $\tilde{\vk c}= (1, \rho)^\top$ and $I=L=\{1\}$.
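These identifications can be verified directly with the converse part of \nelem{prop1} (a short sketch): for $c< \rho$ take $I=\{1\}$, so that $(\Sigma_{II})^{-1} \vk c_I=1>0$ and $\Sigma_{I^cI}(\Sigma_{II})^{-1} \vk c_I= \rho\ge c$, confirming $\tilde{\vk c}= (1, \rho)^\top$ with minimal value $\vk c_I^\top (\Sigma_{II})^{-1} \vk c_I=1$, whereas for $c\in (\rho,1)$, where $\tilde{\vk c}= \vk c$,
\bqny{ \vk c^\top \Sigma^{-1} \vk c= \frac{1- 2\rho c+ c^2}{1- \rho^2}= 1+ \frac{(c- \rho)^2}{1- \rho^2},
}
which tends to 1 as $c \downarrow \rho$.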
Below we write $\vk z_{-1}$ instead of $\vk z_I$ with $I=\{2 , \ldots, d \}$ for any $\vk z\in \R^d$. We present next the approximation of $E(\vk c,u)$.
\begin{theo} Let $\vk c\in \R^d$, with at least one positive component, and $\mu\in \R$ be given, and let $\Sigma$ be the non-singular covariance matrix of the centered Gaussian random vector $\vk{X}$. Let $I,L$ be the index sets identified by $\Pi_\Sigma(\vk c)$. \\
i) If $1\in I$, then we have
\bqn{ E(\vk c,u) \sim \frac 1 {\vk c_I^\top (\Sigma_{II})^{-1} \vk e_1} \frac{\pk{ X_1> c_1 u+ \mu, \vk X_{-1} > \vk c_{-1} u}}{u \pk{
\vk X_{-1} > \vk c_{-1} u } } \to 0, \quad u\to \infty. \label{27}
}
ii) If $1 \in L \setminus I$, then $c_1 = ( \Sigma_{I^cI} (\Sigma_{II})^{-1} \vk c _I)_1$ and further
\bqn{ \label{meinart}
\limit{u} E(\vk c,u) &=& \E{(Y- \mu)_+}\in (0,\infty),
\label{23}}
where $Y$ has survival function $\overline G(x)= \pk{X_1> x\lvert \vk{X}_I= \vk 0_I}$ if $L=I \cup \{1\}$ and if $N^*=L \setminus (I \cup \{1\})$ is non-empty
\bqn{\label{barG}
\overline G(x)=\frac{ \pk{ X_1 > x, \vk X_{N^*}> \vk 0_{N^*}\lvert \vk{X} _I = \vk 0_I} }{\pk{ \vk X_{N^*}> \vk 0_{N^*}\lvert \vk{X} _I = \vk 0_I}}, \quad x\in \R.
}
iii) If $1\in L^c$, then as $u\to \infty$
\bqn{ E(\vk c,u) \sim u(( \Sigma_{I^cI} (\Sigma_{II})^{-1} \vk c _I)_1 - c_1) \to \infty
}
\label{thm2}
\end{theo}
\begin{remark} i) The tail asymptotics of Gaussian random vectors is well-known, see below \nelem{thmL} for a minor refinement. Hence the exact asymptotic behaviour of
$E(\vk c,u)$ in \eqref{27} can be explicitly calculated by approximating both $\pk{ X_1> c_1 u+ \mu, \vk X_{-1} > \vk c_{-1} u}$ and
$\pk{ \vk X_{-1} > \vk c_{-1} u}$ as $u\to \infty$. \\
ii) As we demonstrate in the Appendix, $E(\vk c,u)$ in \eqref{27} equals $o(e^{- \varepsilon u^2})$ for some small $\varepsilon>0$.\\
\end{remark}
In order to discuss the approximation of MES in this $d$-dimensional setting we define
\bqny{
S(\vk c,u):=\E{ X_1 \lvert \vk X_{-1} > \vk c_{-1} u} &=& c_1 u + \E{ X_1- c_1u \lvert \vk X_{-1} > \vk c_{-1} u} \\
&= :& c_1 u + A(\vk c, u) , \quad \vk c= (c_1 , \ldots, c_d)^\top ,
}
where $\vk c\in \R^d\setminus (- \infty, 0]^d$. Since we are interested in the approximation of
$\E{ X_1 \lvert \vk X_{-1} > \vk c_{-1} u } $ as $u\to \infty$, the natural question here is whether we can determine $c_1$ such that $A(\vk c, u)$ is bounded for all large $u$.\\ In view of \cite{MR2397662}[Thm 5.1], we know that for particular choices of $\vk c$ the following convergence in distribution
\bqn{
(X_1 - c_1 u) \lvert ( \vk X_{-1} > \vk c_{-1} u) \overset{d}\rightarrow Y, \quad u\to \infty
\label{lum}
}
holds with $Y$ being a Gaussian or some truncated Gaussian random variable. The aforementioned result suggests that $\limit{u} A(\vk c, u)=\E{Y}$ could be valid, which then for the specific choice of $c_1$ implies
\bqn{ \label{zez}
S(\vk c,u)- c_1u \to \E{Y} , \quad u\to \infty.
}
Our next result shows that indeed \eqref{zez} holds.
\begin{theo} \label{thm3}
Let $\vk b= \vk c_{-1} $ have at least one positive component and let $\mathcal I, \mathcal L $ be the index sets corresponding to $\Pi_B(\vk b )$ with unique solution $\tilde{\vk b}$, where $B$ is the covariance matrix of $\vk X _{-1} $. Suppose for simplicity that $\mathcal I=\{k , \ldots, d-1 \}$. Then \eqref{zez} holds with
$$
c_1= \Sigma_{1, I} (\Sigma_{II})^{-1} \vk c_I, \quad I= \{ k+1 , \ldots, d\}.
$$
Moreover, for the above choice of $c_1$ \eqref{lum} is satisfied with $Y$ having survival function
$\overline G(x)= \pk{ X_1> x \lvert \vk X_{I} = \vk 0}$
if $\mathcal L= \mathcal I$. In case that $\mathcal N= \mathcal L \setminus \mathcal I$ is non-empty, then $\overline G$ is given from \eqref{barG} with $N^*= \mathcal N+1$.
\end{theo}
\begin{remark} \label{remkoka} In the two dimensional setup $\vk b$ has only one element and thus $\mathcal I= \mathcal L$. Hence the limiting random variable
$Y$ has $N(0, 1- \rho^2)$ distribution and therefore $\E{Y}=0$ confirming \eqref{eqe1}.
If $\mathcal I\not= \mathcal L$, then in general $\E{Y}$ does not equal 0.
\end{remark}
\subsection{Approximation of MCTE}
Another interesting risk measure is the multivariate conditional tail expectation (abbreviated here as MCTE), which for elliptically symmetric random vectors can be calculated explicitly, see \cite{MR3543047,MR3754572}. For a given random vector $\vk{X}=(X_1 , \ldots, X_d)$ with integrable components and given $\vk c\in \R^d$ it is defined by
$$ M(\vk c, u)=\E{ X_1 \lvert \vk{X} > \vk c u }$$ for $u>0$ and $\vk c$ with at least one positive component. \\
Note in passing that for any $\vk c,u$, taking for simplicity $\mu=0$, we have (hereafter $\mathbb{I}(\cdot)$ denotes the indicator function)
\bqny{
E(\vk c,u)&=&
\frac{ \E{ (X_1 - c_1u )_+ \mathbb{I}(\vk{X}_{-1}> \vk c_{-1} u) }}{\pk{\vk{X}_{-1}> \vk c_{-1} u}}\\
&=& \frac{ \E{ (X_1 - c_1u )\mathbb{I}(X_1> c_1u) \mathbb{I}(\vk{X}_{-1}> \vk c_{-1} u) }}{\pk{\vk{X}_{-1}> \vk c_{-1} u} }\\
&=& \frac{\pk{\vk X> \vk c u} } {\pk{ \vk X_{-1}> \vk c_{-1} u}}
\frac{ \E{ (X_1 - c_1u ) \mathbb{I}( \vk{X} > \vk c u ) }}{\pk{\vk{X} > \vk c u} }
\\
&=& \frac{\pk{\vk X> \vk c u} } {\pk{ \vk X_{-1}> \vk c_{-1} u}}
\E{ (X_1 - c_1u ) \lvert \vk{X} > \vk c u } \\
&=:& r(u ) [M(\vk c, u) - c_1 u],
}
where we assumed that $ \pk{\vk{X}> \vk cu} >0$. In view of \nelem{thmL}, under the assumption $iii)$ in \netheo{thm2} it follows
that $\limit{u} r(u)=1$. Consequently, \netheo{thm2} implies
\bqn{\label{ms}
M(\vk c, u) \sim \Sigma_{1,I} (\Sigma_{II})^{-1} \vk c_I u, \quad u\to \infty.
}
Under the assumption $ii)$ in \netheo{thm2}, since by \nelem{thmL} we have $\limit{u} r(u)= C\in (0,\infty)$, \netheo{thm2} again yields that for some $C_1>0$ that can be calculated explicitly
\bqn{ \limit{u} [M(\vk c,u) - c_1u] =C_1, \quad u\to \infty.
}
Finally, under the assumptions of \netheo{thm2}, $i)$ we have that
\bqn{
\limit{u} u[M(\vk c,u) - c_1u] = \frac{1}{\vk c_I^\top (\Sigma_{II})^{-1} \vk e_1}>0, \quad u\to \infty.
}
An intuition for the above approximations comes from the conditional limit theorem derived in \cite{MR2397662}[Thm 5.1]. For instance, if $1 \in I$, with $I$ the index set related to $\Pi_\Sigma(\vk c)$ for some general $\vk c$ with at least one positive component, we have the convergence in distribution
$$ u(X_1 - c_1 u) \bigl \lvert ( \vk{X} > \vk c u ) \overset{d}\rightarrow \mathcal{E}, \quad u\to \infty, $$
where $\mathcal{E}$ is an exponential random variable with mean $1/\vk c_I^\top (\Sigma_{II})^{-1} \vk e_1$.
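For instance, in the bivariate case $\vk c=(1,1)^\top$ with correlation $\rho\in (-1,1)$ we have $\Sigma^{-1}\vk c= \frac{1}{1+\rho}\vk c> \vk 0$, hence $I=\{1,2\}$ and $\vk c_I^\top (\Sigma_{II})^{-1} \vk e_1= \frac{1}{1+\rho}$; the approximation of $M(\vk c, u)$ above then specializes to (a sketch under these assumptions)
\bqny{ M(\vk c, u)= u+ (1+ \rho)u^{-1}+ o(u^{-1}), \quad u\to \infty.
}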
The following result is new and gives a minor refinement of \eqref{ms}.
\begin{theo} Under the assumptions of \netheo{thm2} iii) we have $ \tilde c_1= \Sigma_{1,I} (\Sigma_{II})^{-1} \vk c_I>c_1$ and
\bqn{ (X_1 - \tilde c_1 u) \lvert \vk{X} > \vk c u \overset{d}\rightarrow Y, \quad u\to \infty,
}
where $Y$ has survival function $\overline G$ given in \netheo{thm2} with $N^*=L \setminus I.$ Moreover as $u\to \infty$
\bqn{ \label{besserM}
M(\vk c,u)- \tilde c_1 u \to \E{Y}.}
\label{thm4}
\end{theo}
\begin{remark}
If $L=I$, then $\E{Y}=0$ since $Y$ with survival function $\overline G$ defined above is a centered Gaussian random variable.\\
\end{remark}
\subsection{Trivariate Case} In order to apply our results we need to determine the index sets $I$ and $L$ related to the quadratic programming problem $\Pi_\Sigma(\vk c)$. The index set $I$ has $m\le d$ elements and it is possible that $m=1$ for given $\vk c$ with at least one positive component. If $X_1$ is independent of $\vk X_{-1}$, then it follows easily that $m\ge 2$ and $1 \in I$, whereas for the case $d=2$ and $c_1=c_2$ we have $m=2$ and $I=L$. In general, $m=d$ if and only if the so-called Savage condition (see \cite{Savage,Ruben})
$$\Sigma^{-1} \vk c> \vk 0=(0 , \ldots, 0)^\top \in \R^d$$
holds, which can be easily checked for given $\vk c$ and $\Sigma$. If the Savage condition does not hold, then $m<d$, but the exact value of $m$ cannot be known without knowledge of $\Sigma$ and $\vk c$. In the following we discuss in detail the trivariate case $\vk c= (1,1,1)^\top $, where $\Sigma$ is a non-singular correlation matrix with entries $\sigma_{ij},i,j\le 3$.\\
First note that the Savage condition is equivalent to
\bqn{\label{arge}
1+ 2 \sigma -\sigma_{12}- \sigma_{13}- \sigma_{23}> 0, \quad \sigma= \min(\sigma_{12}, \sigma_{13}, \sigma_{23}),
}
which is equivalent to $m=3$ as mentioned above. Consequently, assuming \eqref{arge}, by statement $i)$ in \netheo{thm2}
\bqny{
E(\vk c, u) \sim
\frac{\sqrt{1- \sigma_{23}} } {(1+ \sigma_{23})^{3/2} } \frac{1}{ \sqrt{2 \pi det(\Sigma)} (\vk c^\top \Sigma^{-1} \vk e_1)^2 \prod_{i=2}^3 \vk c^\top \Sigma^{-1} \vk e_i } \frac{1}{u^2} e^{ - \frac{u^2}{2} [ ( \vk c + \mu \vk e_1)^\top \Sigma^{-1} (\vk c + \mu \vk e_1)/2 - 1/(1+ \sigma_{23})]}
}
as $u\to \infty$, where $\vk e_i$'s are unit vectors in $\mathbb{R}^ d$ with $1$ in the $i$th coordinate and all other coordinates equal 0.\\
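As a simple sanity check of \eqref{arge}, consider the equicorrelated case $\sigma_{12}= \sigma_{13}= \sigma_{23}= r$ with $r\in (-1/2,1)$, which ensures positive definiteness: then $\sigma=r$ and
\bqny{ 1+ 2 \sigma- \sigma_{12}- \sigma_{13}- \sigma_{23}= 1+ 2r- 3r= 1- r>0,
}
so the Savage condition always holds and $m=3$ for every admissible $r$.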
Suppose next that
\eqref{arge} does not hold, i.e.,
\bqny{ \label{wild}
1+ 2\sigma - \sigma_{12}- \sigma_{13}- \sigma_{23}\le 0
}
and hence $m=2$, since by \nelem{prop1} $m=1$ is impossible when the coordinates of $\vk c$ are equal and positive.
If \eqref{wild} is satisfied with equality, then $L=\{1,2 ,3\}$. Assuming that $\sigma_{12}\le \min( \sigma_{13},\sigma_{23})$ implies $I=\{1,2\}$ and thus $1\in I$ and the asymptotics of $E(\vk c, u)$ follows again from statement $i)$ in \netheo{thm2}.
The case $\sigma= \sigma_{13}$ is similar and therefore we assume next that $\sigma=\sigma_{23}$, which implies that $I=\{2,3\}$ and thus $1\not \in I$ and $1\in L$, provided that $1+ \sigma_{23}= \sigma_{12}+ \sigma_{13}$. For this case, by \eqref{meinart}
\bqn{ \limit{u} E(\vk c,u)= \E{(X_1- \mu)_+ \lvert X_2=0, X_3=0}.
}
Finally, if $\sigma_{12}+ \sigma_{13}- \sigma_{23}-1>0$, then $I=L$ and $1 \in L^c$. Hence by statement $iii)$ in \netheo{thm2}
\bqny{ E(\vk c,u) \sim \frac{\sigma_{12} + \sigma_{13}- \sigma_{23}-1}{1+ \sigma_{23}}\, u
}
and from \eqref{besserM}
$$ M(\vk c,u) - \frac{\sigma_{12} + \sigma_{13}}{1+ \sigma_{23}} u \to 0$$
as $u\to \infty$.
\section{Proofs}
\prooftheo{th1} Let $(X_1,X_2)$ be jointly Gaussian with mean vector zero, correlation $\rho\in (-1,1)$ and set
$$u:=u_p= VaR_{X_2}(p), \quad \beta= \frac{\mu_2- \mu_1}{\sigma_1}, \quad c= \frac{\sigma_2}{\sigma_1}.$$
For any $u>0$ we have
\bqny{ E(p) &=& \E{ (\sigma_1 X_1 + \mu_1 - \sigma_2 u - \mu_2 )_+ \lvert X_2 > u }\\
&=&
\sigma_1\E*{ \Bigl( X_1 - \frac{ \sigma_2}{\sigma_1} u - \frac{\mu_2- \mu_1}{\sigma_1}\Bigr )_+ \Bigl \lvert X_2 > u }\\
&=& \frac{ \sigma_1}{\pk{X_1> u}}\E*{ ( X_1 - c u - \beta ) \mathbb{I}( X_1 > cu+ \beta, X_2 > u) } \\
&=:&\frac{ \sigma_1}{\pk{X_1> u}}\theta_u \in (0,\infty).
}
Let below $\varphi$ denote the pdf
of $(X_1,X_2)$.\\
$i)$ First note that in this case $c> \rho$ and $c \rho< 1$. Let $h_1^*,h_2^*$ be defined by
\bqn{\label{frank}
h_1^*= \frac{c - \rho}{1- \rho^2}>0, \quad h_2^*= \frac{1- c \rho}{1- \rho^2}>0.
}
Using the transformation
$$ s= c u+\beta + x/ u,\quad t= u + y/u$$
for any $u>0$, we have further
\bqny{
\lefteqn{\theta_u =\int_{cu+ \beta}^\infty \int_u^\infty (s- cu- \beta) \varphi(s,t) ds dt}\\
&=&
{u^{-3}}\int_{0}^\infty \int_0^\infty x \varphi(cu+ \beta + x/u,u+ y/u) dx dy\\
&=:& {u^{-3}}\varphi(cu+ \beta , u)
\int_{0}^\infty \int_0^\infty x \expon{- h_1^* x- h_2^* y } \psi_u(x,y) dx dy.
}
After some calculations for any $x,y$ positive we obtain
\bqn{ \label{psi} \limit{u} \psi_u(x,y)=1}
and further for all $\varepsilon >0$ sufficiently small and all $u$ large $\psi_u(x,y) \le e^{ \varepsilon (x+ y)}$. Consequently, since $h_1^*,h_2^*$ are positive, applying the dominated convergence theorem
we obtain
\bqny{ \theta_u &\sim & {u^{-3}} \varphi(cu+ \beta , u)
\int_{0}^\infty \int_0^\infty x \expon{-h_1^*x - h_2^* y } dx dy \\
&=& \frac{1}{ (h_1^*)^2 h_2^*} {u^{-3}} \varphi(cu+ \beta, u), \quad u\to \infty,
}
hence the claim follows. \\
$ii)$ If $c= \rho$ the above transformation cannot be used since then $h_1^*= 0$ and the limiting integral is not finite. We use another transformation, namely
$$ s= \rho u+\beta + x , \quad t= u + y/u$$
for any $u>0$. Consequently, we have
\bqny{
\lefteqn{\theta_u = u^{-1} \int_{0}^\infty \int_0^\infty x \varphi (\rho u+ \beta + x,u+ y/u) dx dy} \\
&=:& u^{-1} \varphi(\rho u, u)
\int_{0}^\infty \int_0^\infty x e^{ - \frac{(x+ \beta)^2}{2 (1- \rho^2)} - y } \psi_u(x,y) dx dy.
}
By the definition of $\varphi$
\bqn{\label{hgoja}
u^{-1} \varphi(\rho u, u) \sim \frac{1}{2 \pi \sqrt{1- \rho^2}} u^{-1} e^{-u^2/2}, \quad
\pk{X_1> u} \sim u^{-1} e^{-u^2/2}/\sqrt{2 \pi}, \quad u\to \infty,
}
where the second approximation is a direct consequence of the well-known Mill's ratio asymptotics. Clearly \eqref{psi} holds and the domination of the integrand follows easily. Hence by the dominated convergence theorem as $u\to \infty$
\bqny{
\theta_u & \sim & u^{-1} \varphi(\rho u, u)
\int_{0}^\infty \int_0^\infty x e^{ - \frac{(x+\beta)^2}{2 (1- \rho^2)} - y } dx dy \\
&\sim & \frac{1}{\sqrt{2 \pi (1- \rho^2)}}
\int_{\beta}^\infty (x-\beta) e^{ - \frac{x^2}{2 (1- \rho^2)} } dx \pk{X_1> u} \\
& = &\E{( \sqrt{1- \rho^2}X_1 - \beta)_+}\pk{X_1> u}.
}
Since for any $a>0, b\in \R$
\bqn{\label{ab}
\E{( a X_1 - b)_+} = a\Phi'( b/a) -
b[1- \Phi( b/a)]
}
the claim follows. \\
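For completeness, \eqref{ab} is a one-line computation based on the identity $\int_t^\infty x \Phi'(x)\, dx= \Phi'(t)$ (a short sketch):
\bqny{ \E{( a X_1 - b)_+}= \int_{b/a}^\infty (a x- b) \Phi'(x)\, dx= a \Phi'(b/a)- b[1- \Phi( b/a)].
}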
$iii)$ If $c< \rho$, then
\bqny{
\pk{X_1> cu + \beta , X_2> u} &=& u^{-1} \int_{0}^\infty \pk{ \sqrt{1- \rho^2} X_1> (c- \rho)u - \rho x/u+\beta } \frac{1}{\sqrt{2 \pi}} e^{- (u+ x/u )^2/2} \, dx\\
&\sim & \pk{X_1> u}, \quad u \to \infty.
}
Next, using the same transformation as for the case $c=\rho$ gives letting $u\to \infty$
\bqny{
\theta_u &=& \int_{cu+\beta}^\infty \int_u^\infty (x+ (\rho -c)u - \rho u - \beta ) \varphi(x,y) dx dy \\
&=& (\rho -c)u \pk{X_1> cu+ \beta , X_2> u}+ \int_{cu+ \beta}^\infty \int_u^\infty (x- \rho u - \beta ) \varphi(x,y) dx dy \\
&\sim& (\rho -c) u\pk{X_1> u} +
u^{-1} \int_{(c - \rho)u }^\infty \int_0^\infty x \varphi(\rho u+ \beta +x,u+ y/u) dx dy \\
&= &
(\rho -c) u\pk{X_1> u} +
u^{-1} \varphi(\rho u, u) \int_{(c - \rho)u }^\infty \int_0^\infty x e^{ - \frac{(x+\beta)^2}{2 (1- \rho^2)} - y } \psi_u(x,y) dx dy.
}
As above, by \eqref{psi} and $ \limit{u} (c-\rho)u= - \infty$
$$ \limit{u} \int_{(c - \rho)u }^\infty \int_0^\infty x e^{ - \frac{(x+ \beta)^2}{2 (1- \rho^2)} - y } \psi_u(x,y) dx dy =
\int_{\mathbb{R} } \int_0^\infty x e^{ - \frac{(x+ \beta)^2}{2 (1- \rho^2)} - y } dx dy \in \R.
$$
Utilising further \eqref{hgoja}, the second term above is therefore of order $\pk{X_1> u}$ and thus negligible compared to the first one, so that
$$ \theta_u \sim (\rho-c) u\pk{X_1> u}, \quad u\to \infty$$
establishing the claim.\\
$iv)$ Since $c\ge 1/\rho$, the constant $h_2^*$ defined in \eqref{frank} is non-positive. Hence we need to use another transform, namely
$$ s= c u +\beta + x/u , \quad t= c \rho u + y$$
for any $u>0$. Consequently, for any $u>0$
\bqny{
\theta_u &=& u^{-2} \int_{0}^\infty \int_{ (1- c \rho) u} ^\infty x \varphi (c u+ \beta + x/u, c \rho u+ y) dx dy \\
&=:& {(cu)}^{-2} \varphi(c u, c \rho u) e^{-\beta cu}
\int_{0}^\infty\int_{ (1- c \rho) u}^\infty x e^{-x} e^{ - \frac{ y^2 - 2 \rho \beta y + \beta^2}{2 (1- \rho^2)}} \psi_u(x,y) dx dy,
}
where $\psi_u(x,y) \to 1$ as $u\to \infty$. The domination of the integrand follows easily; hence, applying the dominated convergence theorem and \eqref{hgoja}, for $c=1/\rho$
\bqny{
\theta_u & \sim & {(cu)}^{-2} \varphi(c u, c \rho u) e^{ -\frac{ \beta^2}{2 } -\beta cu} \int_0^\infty \int_{0}^\infty x e^{-x} e^{ - \frac{(y- \rho \beta )^2}{2 (1- \rho^2)} } dx dy \\
&=& {(cu)}^{-2} \varphi(c u, c \rho u) e^{ -\frac{ \beta^2}{2 } -\beta cu} \sqrt{ 2 \pi (1- \rho^2)} [1- \Phi( - \rho \beta/\sqrt{1- \rho^2}) ]\\
&\sim & {(cu)}^{-1} e^{ -\frac{ \beta^2}{2 } -\beta cu} {\Phi( \rho \beta/\sqrt{1- \rho^2})} [1- \Phi(cu)]
}
as $u\to \infty$. If $c> 1/\rho$, then
\bqny{
\theta_u & \sim & {(cu)}^{-2}
\varphi(c u, c \rho u) e^{ -\frac{ \beta^2}{2 } -\beta cu}
\int_{\mathbb{R} } e^{ - \frac{(y- \rho \beta )^2}{2 (1- \rho^2)} } dy \\
&\sim & {(cu)}^{-1} e^{ -\frac{ \beta^2}{2 } -\beta cu}[1- \Phi(cu)], \quad u\to \infty,
}
hence the claim follows.\\
$v)$ First note that for any $p \in (0,1)$ and $u:=u_p= VaR_{Z_2}(p)$
$$ S(p)=\mu_1+ \E{( Z_1- \mu_1) \lvert Z_2> VaR_{Z_2}(p) } = \mu_1+ \sigma_1 \rho u + \sigma_1 \E{ X_1- \rho u \lvert X_2> u}.$$
As above we have
\bqny{
\E{ X_1- \rho u \lvert X_2> u} &=& \frac{1}{\pk{X_1> u}} \int_{x \in \R, y> u} (x- \rho u) \varphi(x,y) dx dy\\
&=& \frac{\varphi(\rho u, u)}{u\pk{X_1> u}} \int_{x \in \R, y> 0} x \varphi(\rho u+ x, u+y/u)/\varphi(\rho u, u) dx dy\\
&\sim & \frac{1}{\sqrt{2 \pi (1- \rho^2)}} \int_{x \in \R, y> 0} x \varphi(\rho u+ x, u+y/u)/\varphi(\rho u, u) dx dy\\
&\to & \frac{1}{\sqrt{2 \pi (1- \rho^2)}} \int_{x \in \R, y> 0} x e^{-\frac{x^2}{2 (1- \rho^2)} - y} dx dy\\
&=&0
}
as $u\to \infty$, establishing thus the claim.
\hfill $\Box$
\prooftheo{thm2}
The proof is driven by the tail asymptotics of Gaussian random vectors derived in \cite{ENJH02}.
As therein the index set $I$ is also crucial for the derivation of the asymptotics of $E(\vk c, u)$, since the tail asymptotics of $\pk{\vk{X} > \vk c u}$ is up to a pre-factor the same as that of $\pk{\vk{X}_I> \vk c_I u}$ as $u\to \infty$. The components with indices in the set $L \setminus I$ influence the asymptotics only by the pre-factor, whereas the components with indices in the set $K:=L^c$ are not important. For these reasons we have three different cases which shall be dealt with separately.\\
Set next for any $u>0$
$$E^*(u)=\pk{\vk X_{-1} > \vk c_{-1} u }E(\vk c,u)$$
and write $\varphi$ for the pdf of $\vk X$.\\
$i)$ When $1 \in I$, then $\tilde{c}_1=c_1$. Hence for any $u$ positive
\bqny{ E^*(u)
&=& \int_{ \vk s > \vk c u+ \mu \vk e^*_1}( s_1- c_1u- \mu)_+ \varphi(\vk s) d \vk s\\
&=& \frac{1}{u^{m+1}} \int_{ \vk x> \overline{\vk u} (\vk c u- \tilde{\vk c}u) } x_1 \varphi(\tilde {\vk c} u+ \vk x/\overline {\vk u} + \mu \vk e^*_1 ) d \vk x,
}
where $\overline {\vk u}$ has all components with indices in $I$ equal to $u$ and otherwise equal to 1 and $\vk e^*_1$ has all components equal to 0 apart from the first component equal to 1. Recall that $m$ stands for the number of the elements of the index set $I$ which cannot be empty. Using further
\eqref{eq:new} (set next $J=I^c=\{ 1 , \ldots, d \} \setminus I$ and assume for simplicity that $J$ is not empty) we have
\bqn{ \label{domA}
\lefteqn{ (\tilde{\vk c}u + \vk x/\overline{\vk u} + \mu \vk e^*_1 ) ^\top \Sigma^{-1} (\tilde{\vk c}u + \vk x/\overline{\vk u} + \mu \vk e^*_1 ) }\notag \\
&=&
(\tilde{\vk c}u + \mu \vk e^*_1 ) ^\top \Sigma^{-1} (\tilde{\vk c} u + \mu \vk e^*_1 )
+ 2 u\tilde{\vk c}^\top \Sigma^{-1} \vk x/ \overline{\vk u} + 2\mu ( {\vk e}^*_1) ^\top \Sigma^{-1}\vk x/\overline {\vk u}+ (\vk x/\overline{\vk u})^\top \Sigma^{-1} \vk x/\overline{\vk u}.
}
By the properties of $\tilde {\vk c}$ (see equation \eqref{eq:new} in \nelem{prop1}) for any $u\not=0,\vk{x}\in \R^d$
$$ u\tilde{\vk c}^\top \Sigma^{-1} \vk x/ \overline{\vk u} = \tilde{\vk c}_I^\top (\Sigma_{II})^{-1} \vk x_I .$$
Hence, since $1 \in I$ implies $ (\vk e^*_1) ^\top \Sigma^{-1}\vk x/\overline{\vk u} = O(1/u)$ as $u\to \infty$, by \eqref{domA}
$$
\varphi(\tilde {\vk c} u+ \vk x/\overline{\vk u} + \mu \vk e^*_1 ) = \varphi(\tilde {\vk c} u + \mu \vk e^*_1 )\psi_u(\vk x) e^{ - {\vk c}_I^\top (\Sigma_{II})^{-1}\vk x_I - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 },
$$
where $\limit{u}\psi_u(\vk x) = 1$ for any $\vk{x} \in \mathbb{R}^d$. Using the fact that $\Sigma^{-1}$ is positive definite and ${\vk c}_I^\top (\Sigma_{II})^{-1} > \vk 0_I$ for any $\vk{x}\in \R^d$ with $\vk{x}_I> \vk 0_I$ we obtain that
\bqn{\label{domC} 2 {\vk c}_I^\top (\Sigma_{II})^{-1} \vk x_I+ 2\mu ( {\vk e}^*_1) ^\top \Sigma^{-1}\vk x/\overline {\vk u}+ (\vk x/\overline{\vk u})^\top \Sigma^{-1} \vk x/\overline{\vk u} \le C ( \vk 1 _I^\top \vk x_I + \vk{x}^\top_J \vk{x}_J)
}
holds for all large $u$ and some positive constant $C$. Using thus the dominated convergence theorem (recall $\tilde c_i > c_i $ for any $i\in K=L^c$) we obtain
\bqny{ \lefteqn{ E^*(u)}\\
&=& \frac{1}{u^{m+1}} \varphi (\tilde {\vk c} u + \mu \vk e^*_1 ) \int_{ \vk x_L > \vk 0_L, x_i >u (c_i- \tilde c_i), i \in K } x_1 \psi_u(\vk x)
e^{ - {\vk c}_I (\Sigma_{II})^{-1}\vk x_I - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 } d \vk x \\
&\sim & \frac{1}{u^{m+1}} \varphi(\tilde {\vk c} u + \mu \vk e^*_1 ) \int_{ \vk x_L > \vk 0_L, x_i \in \mathbb{R}, i\in K}
x_1 e^{ - {\vk c}_I (\Sigma_{II})^{-1}\vk x_I - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 } d \vk x\\
&= & \frac{1}{h_1 u} \frac{1}{u^{m}} \varphi(\tilde {\vk c} u + \mu \vk e^*_1 ) \frac{1}{\prod_{i\in I}h_i}
\int_{ x_{i}> 0, i\in L\setminus I , x_i \in \mathbb{R}, i\in K}
e^{ - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 } d \vk x_J,
}
where $h_i= \vk c_I^\top (\Sigma_{II})^{-1} \vk e_i>0$ with $\vk e_i$ the $i$th unit vector in $\mathbb{R}^{m}$ with $m$ the number of elements of the index set $I$.
Since $1 \in I$, applying \eqref{sh} in \nelem{thmL} yields
$$ E^*(u) \sim (u h_1)^{-1} \pk{ \vk{X}> \vk c u + \mu \vk e_1^*}, \quad u\to \infty , $$
hence the claim follows by the definition of $E^*(u)$.\\
$ii)$ In view of \nelem{thmL}, the asymptotics of $\pk{ X_1 > c_1 u + x, \vk X_{-1}> \vk c_{-1} u }$ and that of
$\pk{\vk X_{-1}> \vk c_{-1} u }$ as $u\to \infty$ are up to the pre-factor the same. It follows easily that
$ Y_u:= (X_1 - c_1 u) \lvert \vk{X}_{-1}> \vk c_{-1}u $ converges in distribution as $u\to \infty$
to a random variable $Y$ which has survival function $\pk{X_1> x \lvert \vk{X}_I = \vk 0_I}$ if $L\setminus I= \{1\}$ and
when $N^*=L \setminus (I \cup \{1\})$ is non-empty, then $Y$ has survival function
$$ \frac{\pk{X_1> x, \vk{X}_{N^*}> \vk 0_{N^*} \lvert \vk{X}_I = \vk 0_I} }{\pk{\vk{X}_{N^*}> \vk 0_{N^*} \lvert \vk{X}_I = \vk 0_I}}, \quad x\in \R.$$
If $(Y_u- \mu)_+, u>0,$ is uniformly integrable, then
$$ \limit{u} E(\vk c,u) = \E{(Y- \mu)_+}.$$
We show next the above convergence directly, which in turn implies the uniform integrability mentioned above. Since $1\in L \setminus I$ we still have that $\tilde c_1=c_1$ and as above
\bqny{ E^*(u)&=& \int_{ \vk s > \vk c u+ \mu \vk e^*_1 }( s_1- c_1u- \mu)_+ \varphi(\vk s) d \vk s\\
&=& \frac{1}{u^{m}} \int_{ \vk{x} > \overline{\vk u} ( \vk c u - \tilde{\vk c}u ) } x_1 \varphi(\tilde {\vk c} u+ \vk x/\overline {\vk u} + \mu \vk e^*_1 ) d \vk x.
}
Next, since $1 \not \in I$ i.e., $1 \in J:={I^c}$ by \eqref{domA}
\bqny{
\lefteqn{ (\tilde{\vk c}u + \vk x/\overline{\vk u} + \mu \vk e^*_1 ) ^\top \Sigma^{-1} (\tilde{\vk c}u + \vk x/\overline{\vk u} + \mu \vk e^*_1 ) }\\
&=& (\tilde{\vk c} u + \mu \vk e^*_1 ) ^\top \Sigma^{-1} (\tilde{\vk c} u + \mu \vk e^*_1 )
+ 2 {\vk c}_I^\top (\Sigma_{II})^{-1} \vk x_I + 2 \mu (\Sigma^{-1})_{1,J} \vk{x}_{J}+ \vk x_{J}^\top (\Sigma^{-1})_{JJ } \vk x_{J} + O(u^{-1})
}
as $u\to \infty$.
Consequently, in view of \eqref{domC}, we can apply the dominated convergence theorem to obtain (set $N= L \setminus I$, write $k$ for the number of elements of the index set $K=L^c=\{1 , \ldots, d \} \setminus L$ and recall that $\tilde c_i> c_i, i\in K$)
\bqny{ E^*( u)
&\sim & \frac{1}{u^{m}} \varphi(\tilde {\vk c} u + \mu \vk e^*_1 ) \int_{ \vk x_L > \vk 0_L, x_i >u (c_i- \tilde c_i), i \in K } x_1 e^{ - {\vk c}_I (\Sigma_{II})^{-1}\vk x_I - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 - \mu (\Sigma^{-1})_{1,J} \vk{x}_J} d \vk x \\
&= & \frac{1}{u^{m}} \varphi(\tilde {\vk c} u+\mu \vk e^*_1 ) \frac{1}{\prod_{i\in I}h_i}
\int_{ \vk{x}_{N}> \vk 0_N, \vk x_K \in \mathbb{R}^{k}}
x_1 e^{ - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 - \mu (\Sigma^{-1})_{1,J} \vk{x}_J} d \vk x_J
}
as $u\to \infty$. With similar calculations
\bqny{ \pk{ \vk{X} > \vk c u + \mu \vk e^*_1}
&\sim & \frac{1}{u^{m}} \varphi(\tilde {\vk c} u+\mu \vk e^*_1 ) \frac{1}{\prod_{i\in I}h_i}
\int_{ \vk{x}_{N}> \vk 0_N, \vk x_K \in \mathbb{R}^{k}}
e^{ - \vk x_J^\top (\Sigma^{-1})_{JJ } \vk x_J/2 - \mu (\Sigma^{-1})_{1,J} \vk{x}_J} d \vk x_J
}
as $u\to \infty$. Since $1 \in J$, by \nelem{thmL}
$$ \limit{u} \frac{ \pk{ \vk{X} > \vk c u + \mu \vk e^*_1}}{ \pk{\vk X_{-1} > \vk c_{-1} u }} = C_1$$
for some $C_1>0$ which can be calculated explicitly, hence the claim follows. \\
$iii)$ When $1 \in L^c$, then $\tilde c_1> c_1$ implying
\bqny{ E^*(u)&=& \int_{\vk s > \vk c u+ \mu \vk e^*_1}( s_1- \tilde c_1 u + (\tilde c_1 - c_1)u- \mu) \varphi(\vk s) d \vk s\\
&=&(\tilde c_1 - c_1)u \pk{ \vk X > \vk c u + \mu \vk e^*_1 } + \int_{ \vk s > \vk c u + \mu \vk e^*_1 }( s_1- \tilde c_1 u - \mu) \varphi(\vk s) d \vk s.
}
It follows easily that
\bqny{ E^*(u)&\sim & (\tilde c_1 - c_1)u \pk{ \vk X > \vk c u+ \mu \vk e^*_1 }, \quad u \to \infty
}
and further by \nelem{thmL}
$$ \pk{ \vk X > \vk c u+ \mu \vk e^*_1 } \sim \pk{ \vk X_{-1}> \vk c_{-1} u }, \quad u \to \infty,$$
hence the proof is complete.
\hfill $\Box$
\prooftheo{thm3} We show first the conditional convergence in \eqref{lum}. Let $\tilde{\vk b}$ be the solution of the quadratic programming problem $\Pi_{B}(\vk b)$ with corresponding index set $\mathcal I=\{k , \ldots, d-1 \}$ and let $\mathcal L$ be the set of indices such that $\tilde{b}_i=b_i$. Recall that $\vk b= \vk c_{-1}$ is a $(d-1)$-dimensional vector.
Let $I=\{k+1 , \ldots, d\}$ and set $c_1= \Sigma_{1, I} (\Sigma_{II})^{-1} \vk c_I$. By the definition of $\mathcal I$ and $I$ we have that
$(\Sigma_{II})^{-1} \vk c_{I
}> \vk 0_{ I}$ and $ \tilde{\vk c}_{J^*}=\Sigma_{J^*I}(\Sigma_{II})^{-1}\vk c_I$, where $J^*=\{2 , \ldots, k\}$ is empty if $k=1$. We adopt the convention that relations involving empty index sets are ignored. Let $\tilde{\vk c}$ be such that $\tilde c_1=c_1= \Sigma_{1, I} (\Sigma_{II})^{-1} \vk c_I$ and $\tilde{\vk c}_{-1}=\tilde{\vk b}$. Setting $J= \{1\} \cup J^*$ we have that
$\tilde{\vk c}_{J}=\Sigma_{JI}(\Sigma_{II})^{-1}\vk c_I$. Consequently, since $(\Sigma_{II})^{-1} \vk c_{I
}> \vk 0_{ I}$ and $I\cup J=\{1 , \ldots, d\}$, by the converse statement in \nelem{prop1} we have that $\tilde{\vk c}$
is the unique solution of $\Pi_{\Sigma} (\vk c) $. From the aforementioned proposition $I$ is the index set that determines the unique solution $\tilde{\vk c}$. \\
In order to show \eqref{lum} we need to determine the asymptotics as $u\to \infty$ of
$$ \pk{\vk X > \vk c u+ x \vk e^*_1 }/\pk{\vk X_{-1} >\vk c_{-1} u }$$ for any $x\in \R$. If $\mathcal L = \mathcal I$, then
$L= \{1 \} \cup I$ (since $\tilde c_1=c_1$) and thus by \nelem{thmL} we have that \eqref{lum} holds with $Y$ having the same distribution as $X_1 \lvert \vk X_I = \vk 0_I $.
The case that $\mathcal N=\mathcal L \setminus \mathcal I$ is non-empty follows from \cite{MR2397662}[Corr 5.2]. Indeed, the tail asymptotics of the denominator and the numerator are the same up to some positive constant, since the $I$ index sets of the corresponding quadratic programming problems are the same. The ratio of those constants is (set $N^*= \mathcal N+1$)
$$ \frac{ \pk{ X_1 > x, \vk X_{ N^*} > \vk 0_{ N^*} \lvert \vk X_{I} = \vk 0_I }}
{\pk{ \vk X_{ N^*}> \vk 0_{ N^*} \lvert \vk X_{I}= \vk 0 _I } }
$$
and thus \eqref{lum} holds. The proof of \eqref{zez} follows by calculating the asymptotics of
$$ \E{ (X_1 - c_1 u) \mathbb{I}( \vk X_{-1}> \vk c_{-1}u )},
$$
which is established similarly to the proof of statement $ii)$ in \netheo{thm2} and therefore we omit the details.
\hfill $\Box$
\prooftheo{thm4} Let $I,L$ denote the unique index sets defined from the solution of the quadratic programming problem $\Pi_\Sigma(\vk c)$. Suppose first that $N= L\setminus I$ is not empty. By the assumptions $1 \not \in N \cup I$. Let $\tilde{\vk {a}}$ be the unique solution of
$\Pi_{\Sigma}( \vk a), \vk a= (\tilde c_1, c_2 , \ldots, c_d)^\top$. The corresponding index set $I$ (write this as $I_{\vk a}$) includes $I$ since $1\not \in I$. But we cannot have $1 \in I_{\vk a}$, i.e., $(\Sigma_{I_{\vk a}I_{\vk a}})^{-1} \vk a_{I_{\vk a}}> \vk 0_{I_{\vk a}}$, since this contradicts the definition of $a_1= \tilde c_1> c_1$. Consequently, $1$ belongs to the index set $L_{\vk a}$ of all indices $i\le d$ such that $\tilde a_i= a_i$. Next, for any $x\in \R$, using \nelem{thmL} and \nelem{prop1} we have
\bqny{ \limit{u} \frac{ \pk{ X_1> \tilde c_1 u+ x, \vk X_{-1} > \vk c_{-1} u }}{ \pk{\vk{X} > \vk c u}} =
\frac{ \pk{ X_1> x, \vk{X}_N > \vk 0_N \lvert \vk X_I= \vk 0_I} }{\pk{ \vk{X}_N > \vk 0_N \lvert \vk X_I= \vk 0_I}} =: \overline G(x), \quad x\in \R,}
where for the asymptotics of the denominator we used the fact that $1\in L^c$, i.e., $\tilde c_1> c_1$.
If $I=L$, then $\overline G(x)= \pk{X_1> x \lvert \vk X_I= \vk 0_I}$. Consequently, $Y$ has the claimed survival function $\overline G$.
The second claim follows easily and therefore we omit the proof.
\hfill $\Box$
\section{Appendix}
\def\vk b{\vk b}
\def\tilde{\vk b}{\tilde{\vk b}}
\begin{lem} \label{prop1} Let $\Sigma$ be a $d\times d$ positive definite matrix and let $\vk b \in \R^d \setminus (-\infty, 0]^d $.
The quadratic programming problem
$\Pi_\Sigma(\vk b)$: minimise $\vk{x}^\top \Sigma^{-1} \vk{x}$ under $\vk x \ge \vk b$ has a unique solution $\tilde{\vk b}$ and there exists a unique non-empty
index set $I\subseteq \{1, \ldots, d\}$ with $m\le d$ elements such that
\begin{eqnarray} \label{eq:IJ}
\tilde{\vk b}_{I}=\vk b_{I} , \quad (\Sigma_{II})^{-1} \vk b_{I}>\vk{0}_I
\end{eqnarray}
and if $I^c :=\{ 1 , \ldots, d\}\setminus I \not=\emptyset$, then
\bqn{ \label{eq:IJ2}
\tilde{\vk b}_{I^c} &=& \Sigma_{{I^c}I} (\Sigma_{II})^{-1} \vk b_{I}\ge \vk b_{I^c},
}
\bqn{
\label{eq:IJ3}
\min_{\vk{x} \ge \vk b}\vk{x}^\top \Sigma^{-1} \vk{x}= \tilde{\vk b} ^\top \Sigma^{-1} \tilde{\vk b}
&=& \vk b_{I}^\top (\Sigma_{II})^{-1}\vk b_{I}>0,
}
and
\begin{eqnarray} \label{eq:new}
\vk{x}^\top \Sigma^{-1} \tilde{\vk b}= \vk{x}_F^\top (\Sigma_{FF})^{-1} \vk b_F, \quad \forall \vk{x}\in \R^d
\end{eqnarray}
for any index set $F\subseteq \{1, \ldots, d \}$ containing $I$. If, moreover, $\vk b= (b , \ldots, b)^\top$ with $b\in (0,\infty)$,
then $ 2 \le \abs{I} \le d$. Conversely, if for some non-empty index set
$I \subset \{ 1 , \ldots, d \}$ we have
$$(\Sigma_{II})^{-1}\vk b_I> \vk 0_I, \quad \Sigma_{{I^c}I} (\Sigma_{II})^{-1} \vk b_I \ge \vk b_{I^c},$$
then $\tilde{\vk b}$ with $\tilde{\vk b}_{I^c}= \Sigma_{{I^c}I} (\Sigma_{II})^{-1} \vk b_I, \tilde{\vk b}_I= \vk b_I$ is the solution of $\Pi_{\Sigma}(\vk b)$.
\end{lem}
\prooflem{prop1} The claims in \eqref{eq:IJ}-\eqref{eq:IJ3} are formulated in \cite{Rolski17}.
Since by \eqref{eq:IJ2} we have $(\Sigma^{-1} \tilde{\vk b})_M= \vk{0}_M$ for any $M\subset {I^c}$ (assuming ${I^c}$ is not empty), exactly as in the proof of
\cite{BischoffCMP}[Lem 4.1] we have for any $\vk x\in \R^d$ and $F= \{1 , \ldots, d \} \setminus M$
\bqn{ (\vk x+ \tilde{\vk b})^\top \Sigma^{-1} (\vk x+ \tilde{\vk b}) =
\vk x^\top \Sigma^{-1} \vk x + 2\vk x_{F}^\top (\Sigma_{FF })^{-1} \tilde{\vk b}_{F} +\tilde{\vk b}_{F}^\top (\Sigma_{FF } )^{-1} \tilde{\vk b}_{F} , \label{coni}
}
which implies that $\vk x^\top \Sigma^{-1} \tilde{\vk b} =\vk x_{F}^\top (\Sigma_{FF })^{-1} \tilde{\vk b}_{F}$ and thus \eqref{eq:new} holds.
If for some non-empty index set $I$ we have $(\Sigma_{II})^{-1} \vk b_I> \vk 0_I$, then $\vk b_I= \mathrm{argmin}_{\vk{x}_I \ge \vk b_I} \vk{x}_I^\top (\Sigma_{II})^{-1} \vk{x}_I$. Since for any two non-overlapping index sets $A,B$ with $A\cup B= \{ 1 , \ldots, d\}$ (using Schur complements)
\bqny{
\vk{x}^\top \Sigma^{-1} \vk{x} =\vk{x}_A^\top (\Sigma_{AA})^{-1} \vk{x}_A +
(\vk{x}_B - \Sigma_{BA}(\Sigma_{AA})^{-1} \vk{x}_A) ^\top (\Sigma^{-1})_{BB} (\vk{x}_B - \Sigma_{BA}(\Sigma_{AA})^{-1} \vk{x}_A),\quad \vk{x} \in \R^d
}
and $(\Sigma^{-1})_{BB} $ is positive definite, it follows easily that $\tilde{\vk b}$ with $\tilde{\vk b}_I=\vk b_I$ and $\tilde{\vk b}_{I^c}=
\Sigma_{{I^c}I} (\Sigma_{II})^{-1} \vk b_I$ is the unique solution of $\Pi_\Sigma(\vk b)$, which completes the proof.
\hfill $\Box$
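For readers who wish to experiment with \nelem{prop1}, the following is a minimal numerical sketch (assuming NumPy and SciPy are available; the matrix $\Sigma$, the vector $\vk b$, and the tolerance-based detection of the active set are illustrative choices, not part of the lemma). It solves $\Pi_\Sigma(\vk b)$, reads off the index set $I$ from the active constraints, and checks \eqref{eq:IJ}--\eqref{eq:IJ3}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)            # a positive definite matrix
Sigma_inv = np.linalg.inv(Sigma)
b = np.array([1.0, 0.5, -0.2, 0.3])        # b outside (-inf, 0]^d

res = minimize(lambda x: x @ Sigma_inv @ x, x0=np.maximum(b, 1.0),
               jac=lambda x: 2 * Sigma_inv @ x,
               constraints=[{'type': 'ineq', 'fun': lambda x: x - b}])
b_tilde = res.x
I = np.where(np.isclose(b_tilde, b, atol=1e-6))[0]   # active set: tilde b_I = b_I
Ic = np.setdiff1d(np.arange(d), I)

w = np.linalg.solve(Sigma[np.ix_(I, I)], b[I])       # (Sigma_II)^{-1} b_I
print(np.all(w > 1e-8))                              # first condition in (eq:IJ)
if Ic.size:
    print(np.all(Sigma[np.ix_(Ic, I)] @ w >= b[Ic] - 1e-6))   # (eq:IJ2)
print(res.fun, b[I] @ w)                             # both sides of (eq:IJ3)
\end{verbatim}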
The next result follows from \cite{MR2397662}[Thm 3.3] since Gaussian random vectors are particular instances of elliptically symmetric ones where the radius has distribution function in the Gumbel max-domain of attraction with scaling function $w(u)=u$. We present however a short proof.
\begin{lem} \label{thmL} Let $\vk c\in \R^d$ have at least one positive component and let $\vk X$ be a centered $d$-dimensional Gaussian random vector with non-singular covariance matrix $\Sigma$. Denote by $I,L$ the index sets related to $\Pi_\Sigma( {\vk c})$ and let
further $ \vk x (u), u> 0$ be a $d$-dimensional vector such that $\limit{u} u^{-1} \vk x(u)= \vk 0.$\\
As $u\to \infty$ we have
\bqn{ \label{sh}
\pk{
\vk{X}_I > (\vk c u+ \vk x (u))_I} \sim
\frac{1}{\prod_{i\in I} \vk c_I^\top (\Sigma_{II})^{-1} \vk e_i } u^{- m}
\varphi_{\vk X_I} ( (\vk c u + \vk x(u) )_I ) , \quad u\to \infty,
}
where $m$ is the number of elements of $I$ and $\varphi_{\vk X_I}$ is the pdf of $\vk X_I$. Moreover, with
$N= L \setminus I$
\bqn{
\limit{u} \frac{ \pk{ \vk{X} > \vk c u + \vk x(u)} }{ \pk{ \vk{X}_I> (\vk c u+ \vk x(u))_I }} &=& \limit{u} \frac{ \pk{ \vk{X}_L > (\vk c u + \vk x(u))_L} } { \pk{ \vk{X}_I> (\vk c u+ \vk x(u))_I } } \notag \\
&=&
\pk{ \vk X_N> \vk x_N\ \lvert \vk X_I= \vk x_I} ,
}
provided that $\limit{u} (\vk x(u))_{I\cup N}=\vk x_{I \cup N}$ (set $\pk{ \vk X_N> \vk x_N\ \lvert \vk X_I= \vk x_I} $ to 1 if $N$ is empty).
\end{lem}
\begin{remark} In the particular case $\vk x(u)= \vk x/u, \vk{x} \in \R^d$ from \eqref{sh} we obtain
\bqny{ \pk{\vk{X}_I > (\vk c u+ \vk x /u) _I} \sim \pk{\vk{X}_I > \vk c_Iu } e^{- \vk x_I^\top (\Sigma_{II})^{-1}\vk c_I}, \quad u\to \infty.
}
\end{remark}
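The remark is easy to probe numerically at moderate $u$; in the sketch below (NumPy/SciPy; the bivariate $\Sigma$, $\vk c$ and $\vk x$ are illustrative, and much larger $u$ would require tighter integration tolerances in the Gaussian cdf) the printed ratios should approach $e^{-\vk x^\top \Sigma^{-1}\vk c}$, here with $I=\{1,2\}$.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal as mvn

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, 1.0])                 # Sigma^{-1} c > 0, so I = {1, 2}
x = np.array([0.3, -0.2])
F = mvn(mean=np.zeros(2), cov=Sigma).cdf # P(X <= a); by symmetry P(X > a) = F(-a)

target = np.exp(-x @ np.linalg.solve(Sigma, c))
for u in (1.0, 2.0, 3.0):
    print(u, F(-(c * u + x / u)) / F(-c * u), target)
\end{verbatim}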
\prooflem{thmL} Assume for simplicity that $I=\{1 , \ldots, d\}$. In view of \nelem{prop1} we have $\Sigma^{-1} \vk c> \vk 0$, which is the crucial condition for the proof. Note further that $\Pi_{\Sigma}(\vk c)$ has the unique solution $\vk c$. Hence for any $u\in \R$ we have (set $\vk a(u)= \vk c u + \vk x(u)$)
$$ (\vk a(u) + \vk x/u) ^\top \Sigma^{-1} (\vk a(u)+ \vk x/u) =
( \vk a(u)) ^\top \Sigma^{-1} \vk a(u) + 2 \vk{x} ^\top \Sigma^{-1} \vk a (u)/u + \vk x^\top \Sigma^{-1} \vk x /u^2 .$$
The term $\vk x^\top \Sigma^{-1} \vk x /u^2 $ is important for showing an integrable upper bound for the integrand below, and the finiteness of the integral follows from $\Sigma^{-1} \vk c > \vk 0$. More precisely, with $\varphi$ the pdf of $\vk X$ we have
\bqny{ \pk{ \vk X > \vk a(u)} &=& \int_{ \vk{x} > \vk a (u)} \varphi(\vk{x}) d\vk{x} \\
&=& \frac{1}{u^d} \varphi(\vk a(u)) \int_{ \vk y > \vk 0} e^{- \vk y ^\top \Sigma^{-1} \vk a (u)/u + \vk y^\top \Sigma^{-1} \vk y /u^2}d\vk y
\sim \frac{1}{u^d} \varphi (\vk a(u)) \int_{ \vk y > \vk 0} e^{- \vk y ^\top \Sigma^{-1} \vk c} d\vk y
}
since we assume that $\vk x(u)/u \to \vk 0$ as $u\to \infty$. \\
Next suppose that $I$ has $m< d$ elements and let $J=I^c= \{1 , \ldots, d \} \setminus I$. We have
\bqny{ \pk{ \vk X > \vk a(u)} &=& \frac{1}{u^m}\int_{ \vk y_I > \vk 0_I, \vk y_{J} > (\vk c u - \tilde{\vk c} u)_{J} } \varphi ( \tilde{\vk c} u + \vk x(u) + \vk y / \overline{\vk u}) d\vk y ,
}
where $\overline{\vk u}_I=u \vk 1_I$ and $\overline{\vk u}_{J}= \vk 1_{J}$, hence the proof follows easily using further \eqref{coni}.
It follows easily that the components of $\vk X$ with indices not in $L$ do not contribute, so we assume without loss of generality that $L$ has $d$ elements. In that case $ (\vk c u - \tilde{\vk c} u)_{J} = \vk{0}_{J}$ and the proof follows after some straightforward calculations.\hfill $\Box$
It remains to prove that $E(\vk c,u)$ in \eqref{27} is $o(e^{- \varepsilon u^2})$ for some small $\varepsilon>0$.
We have that
\bqny{ E(\vk c,u) =o( R(u)), \quad R(u)= \pk{ X_1> c_1 u+ \mu, \vk X_{-1} > \vk c_{-1} u} /\pk{
\vk X_{-1} > \vk c_{-1} u }
}
as $u\to \infty$ and $1 \in I$ where the index set $I$ determines the solution of $\Pi_{\Sigma}(\vk c)$. The claim now follows if we show that
$\limit{u}R(u)=0$. Indeed this is the case: in view of \nelem{thmL}, the only other possibility is that $\limit{u} R(u)= C>0$.
This would mean that the attained minimum of the quadratic programming problem $\Pi_{\Sigma}(\vk c) $ is $\vk c_I ^\top (\Sigma_{II})^{-1} \vk c_I$,
equal to the attained minimum of $\Pi_{B}(\vk b)$, where $B$ is obtained from $\Sigma$ by deleting the first row and column and $\vk b= \vk c_{-1}$. Since $1 \in I$, there would then be two different index sets determining the minimum of the quadratic programming problem $\Pi_\Sigma(\vk c)$, which is a contradiction.
\section*{Acknowledgments}
I am grateful to both referees for their detailed reports, which improved the manuscript.
Support from SNSF Grant no. 200021-175752/1 is kindly acknowledged.
\bibliographystyle{ieeetr}
\section{Introduction}
The skin effect is a familiar phenomenon in electrodynamics: the distribution of an alternating electric current density inside a conductor tends to concentrate close to the surface of the conductor \cite{Lamb}. Recently the existence of a skin effect has been revealed in non-Hermitian Hamiltonian systems \cite{Yao1,Yao2}, where, with open boundary conditions, the wave functions of all eigenstates of such systems concentrate at a boundary. This phenomenon is called the non-Hermitian skin effect. The non-Hermitian skin effect requires a novel non-Bloch bulk-boundary correspondence in classifying topological properties of non-Hermitian Hamiltonians \cite{Lee1,Lee2,Li,Xiao,Yang,Yao2,Yokomizo,Bergholtz}.
The topological origin of the non-Hermitian skin effect has been investigated theoretically \cite{Okuma,Zhang}, and experimental observations of non-Hermitian topology have been achieved in mechanical meta-materials \cite{Ghatak}, RLC circuits \cite{Hofmann} and quantum walks \cite{Rudner,Zhan,Xiao1,Xiao2}.
The skin effect was also found in open quantum systems \cite{Song}. The evolution of such open quantum systems is usually described by the Lindblad equation of the form \cite{Lindblad,Gorini}
\begin{align}
\frac{d\rho}{dt}={\mathcal L} \rho \equiv-i[H,\rho]+\sum_\mu(2L_\mu\rho L_\mu^\dagger-\{L_\mu^\dagger L_\mu,\rho\}),\label{Lindblad}
\end{align}
where $\rho$ is the density matrix of the system, the linear Liouvillian superoperator ${\mathcal L}$ consists of the Hamiltonian $H$ and the jump operators $L_\mu$, which specify the couplings between the system and its environment. The paradigm system considered in Ref.~\cite{Song} is the one dimensional Su-Schrieffer-Heeger (SSH) model, whose Hamiltonian is given by
\begin{align}
H=\sum_{i=1}^{\mathcal N/2} \left(t_1d_{i,B}^\dagger d_{i,A} +t_2d_{i+1,A}^\dagger d_{i,B}\right)+h.c.,\label{HforSSH}
\end{align}
where $A$ and $B$ label different sites within a cell, $t_1$, $t_2$ are intra-cell and inter-cell hopping strengths, and $d_{i,A/B}$ ($d^\dagger_{i,A/B}$) is the Dirac fermion annihilation (creation) operator acting on the $i$th cell. In addition, each cell is coupled to the environment through the loss and gain jump operators
\begin{align}
&L_{l,i}=\sqrt{\frac{\gamma_l}{2}}(d_{i,A}-id_{i,B}),\label{Lforsigmayl}\\
&L_{g,i}=\sqrt{\frac{\gamma_g}{2}}(d_{i,A}^\dagger+id_{i,B}^\dagger),\label{Lforsigmayg}
\end{align}
where $\gamma_l$ and $\gamma_g$ are loss and gain rates.
The skin effect is manifested in the time evolution of the single-particle correlation function $\mathbf G(t)$, whose matrix elements are $G_{mn}(t)\equiv{\rm{Tr}}\left[d_{m}^\dagger d_{n}\rho(t)\right]$ with $m$ and $n$ labeling both cells and sites within a cell. The equation of motion for $\mathbf G(t)$ derived from the Lindblad equation (\ref{Lindblad}) shows that the dynamics of $\mathbf G(t)$ is determined by the damping matrix $\mathbf X$, which is equivalent to a non-Hermitian SSH Hamiltonian $H_{\rm eff}$ and thus is known to exhibit the non-Hermitian skin effect \cite{Song}. (See also Appendix B.) Consequently, ``chiral damping'' emerges in the occupation numbers $G_{mm}(t)$ with the unit-filling initial condition, i.e., $\mathbf G(t=0)=\mathbf I$, where $\mathbf I$ is the identity matrix; the $G_{mm}(t)$ transit from an algebraic to an exponential relaxation towards their steady values one by one from one end of the chain to the other, and a clear wave front shows up [see Fig.~(\ref{skineffect})].
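This chiral damping is easy to reproduce numerically from the damping-matrix route. The sketch below (Python with NumPy/SciPy; the chain length and times are illustrative, and the matrix elements used here, on-site $-\gamma_+/2$, intra-cell hoppings $i(t_1\mp\gamma_+/2)$ and inter-cell hopping $it_2$, follow from specialising Eq.~(\ref{x}) of Appendix B to the jump operators (\ref{Lforsigmayl}) and (\ref{Lforsigmayg}) with $\gamma_-=0$) evolves $\Delta_{\mathbf G}(t)=e^{\mathbf X t}\Delta_{\mathbf G}(0)e^{\mathbf X^\dagger t}$ from unit filling.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Ns, t1, t2, gp = 40, 0.8, 1.0, 0.4        # Ns sites = Ns/2 cells
X = -0.5 * gp * np.eye(Ns, dtype=complex)
for a in range(0, Ns - 1, 2):             # intra-cell: A -> B and B -> A
    X[a, a + 1] = 1j * (t1 - gp / 2)
    X[a + 1, a] = 1j * (t1 + gp / 2)
for b in range(1, Ns - 1, 2):             # inter-cell hopping
    X[b, b + 1] = X[b + 1, b] = 1j * t2

dG0 = 0.5 * np.eye(Ns)                    # unit filling minus the half-filled NESS
for t in (5.0, 15.0, 25.0):
    E = expm(X * t)
    dG = np.real(np.diag(E @ dG0 @ E.conj().T))
    print(t, dG[:3], dG[-3:])             # the left end relaxes first: chiral damping
\end{verbatim}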
However, the puzzle here is that, on the face of it, the couplings of the one-dimensional SSH chain to its environment are the same for each cell. As time goes to infinity, the system should relax to its non-equilibrium steady state. It is natural to expect that the non-equilibrium steady state of each cell shall be the same (as it should be; see below). On top of the non-equilibrium steady state, one shall be able to create normal modes, such as particle and hole excitations. The density variation due to the creation of the normal modes is also expected
to be even with respect to the center of the one-dimensional chain. These expectations need to be reconciled with the skin effect and the consequent chiral damping.
In this paper, we apply an adjoint fermion approach to investigate the Lindblad equation for the paradigm SSH chain subject to the jump operators (\ref{Lforsigmayl}) and (\ref{Lforsigmayg}). The formalism of adjoint fermions is a specific way to vectorize the density matrix $\rho$. This approach has been employed to study open spin chains \cite{Prosen1,Prosen2,Prosen3,Prosen4,Prosen5} and topological classification \cite{Cooper}. By this approach, we are able to express the linear Liouvillian superoperator ${\mathcal L}$ in terms of bilinear forms of the adjoint fermions, and to work out the non-equilibrium steady state and the normal modes on top of it. With either the periodic or the open boundary condition, the non-equilibrium steady state and the states with normal modes excited on top of it all exhibit an even density distribution with respect to the center of the chain, as naturally expected. We show that the skin effect and the consequent chiral damping in this Lindbladian system can be understood as due to the interference between the normal mode excitations.
\section{Quadratic Lindbladian Systems}
In this section, we give a brief review of the adjoint fermion approach used to solve the Lindblad equation (\ref{Lindblad}) when the Liouvillian superoperator ${\mathcal L}$ is quadratic \cite{Prosen1}.
To be general, we consider an open system composed of free fermions with $\mathcal N$ modes, whose Hamiltonian is given by
\begin{align}
H=\sum_{m,n=1}^{\mathcal N}d_m^\dagger h_{mn}d_n,\label{H}
\end{align}
and the coupling between the environment and the open system is such that
\begin{align}
L_\mu=\sum_{m=1}^{\mathcal N} c^{(-)}_{\mu,m}d_m+c^{(+)}_{\mu,m}d_m^\dagger.\label{L}
\end{align}
Here $\{d_m\}$ are the Dirac fermion field operators, which satisfy the anticommutators $\{d_m, d_n\}=0$ and $\{d_m, d_n^\dagger\}=\delta_{mn}$, while $\{c^{(\pm)}_{\mu,m}\}$ are superposition coefficients.
\subsection{Adjoint Fermions}
The Lindblad equation (\ref{Lindblad}) is linear for the density matrix $\rho(t)$. To work out the non-equilibrium steady state (NESS) in the long time limit, i.e., $\rho_{\rm NESS}=\lim_{t\to\infty}\rho(t)$, and the normal mode excitations on top of $\rho_{\rm NESS}$, it is convenient to use the adjoint fermion representation \cite{Prosen1, Cooper}; the idea is to treat $\rho$ as the state vector and find the explicit form
of the linear operator $\mathcal {\mathcal L}$ acting on the vector space in terms of the adjoint fermions.
We begin by expressing Eqs. (\ref{H}) and (\ref{L}) in terms of the Majorana operators $w_{2m-1}=d_m+d_m^\dagger,w_{2m}=i(d_m-d_m^\dagger)$ which satisfy $\{w_m,w_n\}=2\delta_{mn}$, and have
\begin{align}
H=&\sum_{m,n=1}^{2\mathcal N}w_mH_{mn}w_n+\sum_{m=1}^{\mathcal N}h_{mm},\label{fermitomajoranaH}\\
L_\mu=&\sum_{m=1}^{2\mathcal N}l_{\mu,m}w_m.\label{fermitomajoranaL}
\end{align}
The matrix elements $H_{mn}$ and $l_{\mu,m}$ are linear superpositions of $h_{mn}$, and of $c^{(-)}_{\mu,m}$ and $c^{(+)}_{\mu,m}$, respectively. To manifest the hermiticity of the Hamiltonian $H$, we choose $H^*_{mn}=-H_{mn}$ and $H_{mn}=-H_{nm}$. We have the expressions of $H_{mn}$ and $l_{\mu,m}$ in terms of $h_{mn}$ and $c_{\mu,m}^{(\pm)}$ as
\begin{align}
H_{2m-1,2n-1} &= \frac{1}{8}(h_{mn}-h_{nm}),\\
H_{2m-1,2n} &= -\frac{i}{8}(h_{mn}+h_{nm}),\\
H_{2m,2n-1} &= \frac{i}{8}(h_{mn}+h_{nm}),\\
H_{2m,2n} &=\frac{1}{8}(h_{mn}-h_{nm}),
\end{align}
as well as
\begin{align}
l_{\mu,2m-1} &= \frac{1}{2}\left(c_{\mu,m}^{(-)}+c_{\mu,m}^{(+)}\right),\\
l_{\mu,2m} &= -\frac{i}{2}\left(c_{\mu,m}^{(-)}-c_{\mu,m}^{(+)}\right).
\end{align}
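The maps above are straightforward to transcribe into code; the sketch below (NumPy; the Hermitian $h$ is an arbitrary two-mode example, and zero-based indices replace the one-based ones of the text) builds $H_{mn}$ and $l_{\mu,m}$ and verifies that $\mathbf H$ is antisymmetric and purely imaginary, as chosen above.
\begin{verbatim}
import numpy as np

def majorana_H(h):
    """Transcribe h_{mn} into the Majorana matrix H_{mn} above (0-based)."""
    N = h.shape[0]
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for m in range(N):
        for n in range(N):
            s, a = h[m, n] + h[n, m], h[m, n] - h[n, m]
            H[2 * m, 2 * n] = a / 8
            H[2 * m, 2 * n + 1] = -1j * s / 8
            H[2 * m + 1, 2 * n] = 1j * s / 8
            H[2 * m + 1, 2 * n + 1] = a / 8
    return H

def majorana_l(cm, cp):
    """Transcribe c^(-), c^(+) into the coefficients l_{mu,m} (0-based)."""
    l = np.empty(2 * len(cm), dtype=complex)
    l[0::2] = (np.asarray(cm) + np.asarray(cp)) / 2
    l[1::2] = -1j * (np.asarray(cm) - np.asarray(cp)) / 2
    return l

h = np.array([[0.0, 0.3 + 0.1j], [0.3 - 0.1j, 0.2]])    # any Hermitian example
H = majorana_H(h)
print(np.allclose(H, -H.T), np.allclose(H.conj(), -H))  # antisymmetric, imaginary
\end{verbatim}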
The advantage of using the Majorana fermion operators is that they are self-adjoint, so one need not keep track of Hermitian conjugates.
On the other hand, the density matrix $\rho$ can be expressed as a linear combination of the polynomials
\begin{align}
\mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}}=w_1^{\alpha_1}w_2^{\alpha_2}\dots w_{2\mathcal N}^{\alpha_{2\mathcal N}},
\end{align}
with $\alpha_m=0,1$. For example, if there is only a single Dirac fermion mode, i.e., $\mathcal N=1$, according to the action on the basis $|0\rangle$ and $|1\rangle\equiv d_1^\dagger|0\rangle$, we find $|1\rangle\langle 1|=d_1^\dagger d_1=(1-iw_1w_2)/2$, $|0\rangle\langle 0|=d_1 d_1^\dagger=(1+iw_1w_2)/2$, $|1\rangle\langle 0|=d_1^\dagger=(w_1+iw_2)/2$ and $|0\rangle\langle 1|=d_1=(w_1-iw_2)/2$. Thus, one can use the polynomials
\begin{align}
|\mathcal P_\alpha)\equiv \mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}}
\end{align}
as the basis to define a $2^{2\mathcal N}$ dimensional vector space for $\rho$.
To realize linear transformation between the basis $|\mathcal P_\alpha)$, one can
define a set of $2\mathcal N$ adjoint fermion annihilation operators $\mathcal C_m$ as
\begin{align}
\mathcal C_m |\mathcal P_\alpha) = \delta_{\alpha_m,1}w_m \mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}},\label{cidefine}
\end{align}
and the adjoint fermion creation operators $\mathcal C_m^\dagger$ as
\begin{align}
\mathcal C_m^\dagger |\mathcal P_\alpha) = \delta_{\alpha_m,0}w_m \mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}}.\label{cidaggerdefine}
\end{align}
Due to $w_m^2=1$, the action of $\mathcal C_m$ is to annihilate a Majorana fermion $w_m$ and that of $\mathcal C_m^\dagger$ is to create one. From Eqs.~(\ref{cidefine}) and (\ref{cidaggerdefine}), one can show $\{\mathcal C_m,\mathcal C_n^\dagger\}=\delta_{mn}$. By adding Eqs.~(\ref{cidefine}) and (\ref{cidaggerdefine}) together, one obtains
\begin{align}
(\mathcal C_m+\mathcal C_m^\dagger) |\mathcal P_\alpha) = w_m \mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}};\label{handy1}
\end{align}
by subtracting the two equations, one has
\begin{align}
(\mathcal C_m^\dagger-\mathcal C_m) |\mathcal P_\alpha) = w_m (-1)^{\alpha_m}\mathcal P_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}}.\label{handy2}
\end{align}
Likewise, we define the left basis vectors as
\begin{align}
(\mathcal P_\alpha|\equiv \mathcal P^\dagger_{\alpha_1,\alpha_2,...,\alpha_{2\mathcal N}},\label{left}
\end{align}
and further the inner product as $(x|y)=2^{-2\mathcal N}{\rm Tr_A}(x^\dagger y)$, where ${\rm Tr_A}$ stands for the trace in the representation of the adjoint fermions; such a definition gives $(\mathcal P_\alpha|\mathcal P_\alpha')=\delta_{\alpha\alpha'}$. The next task is to find the expression of the linear Liouvillian superoperator ${\mathcal L}$ acting on the density matrix $\rho$ in terms of the adjoint fermion operators.
Let us decompose the operator ${\mathcal L}$ in Eq.~(\ref{Lindblad}) into parts as
\begin{align}
{\mathcal L}_0\rho\equiv& -i[H,\rho],\\
{\mathcal L}_\mu\rho\equiv& 2L_\mu\rho L_\mu^\dagger-\{L_\mu^\dagger L_\mu,\rho\}.
\end{align}
It is straightforward to show
\begin{align}
{\mathcal L}_0= -4i \mathcal C^\dagger \textbf H \mathcal C\label{l0}
\end{align}
where $\mathcal C=(\mathcal C_1, \mathcal C_2,\dots,\mathcal C_{2\mathcal N})^T$, $\mathcal C^\dagger=(\mathcal C_1^\dagger, \mathcal C_2^\dagger,\dots,\mathcal C_{2\mathcal N}^\dagger)$ and the elements of the matrix $\textbf H$ are $H_{mn}$.
On the other hand,
\begin{align}
{\mathcal L}_\mu |\mathcal P_\alpha)=&\sum_{m,n=1}^{2\mathcal N}l_{\mu,m}l_{\mu,n}^*
\{2w_mw_n(-1)^{\sum_{\ell=1}^{2\mathcal N}\alpha_\ell+\alpha_n}|\mathcal P_\alpha)\nonumber\\
&-w_nw_m[1+(-1)^{\alpha_m+\alpha_n}]|\mathcal P_\alpha)\}.
\end{align}
Let us define the operator $\tilde {\mathcal N}$ such that $\tilde {\mathcal N} |\mathcal P_\alpha)=\sum_{\ell=1}^{2\mathcal N}\alpha_\ell |\mathcal P_\alpha)$; $\tilde {\mathcal N}$ counts the number of the adjoint fermions.
By using Eqs.~(\ref{handy1}) and (\ref{handy2}), we have
\begin{align}
{\mathcal L}_\mu =&\sum_{m,n=1}^{2\mathcal N}l_{\mu,m}l_{\mu,n}^*
[(1+e^{i\pi \tilde {\mathcal N}})(2\mathcal C_m^\dagger \mathcal C_n^\dagger-\mathcal C_m^\dagger\mathcal C_n-\mathcal C_n^\dagger \mathcal C_m)\nonumber\\
& \quad +(1-e^{i\pi \tilde {\mathcal N}})(2\mathcal C_m \mathcal C_n-\mathcal C_m\mathcal C_n^\dagger-\mathcal C_n \mathcal C_m^\dagger)].\label{Lsolution2}
\end{align}
Note that while ${\mathcal L}_0^\dagger=-{\mathcal L}_0$, ${\mathcal L}_\mu^\dagger\neq {\mathcal L}_\mu$.
Since $[e^{i\pi \tilde {\mathcal N}},{\mathcal L}]=0$, one can consider the evolution of the density matrix $\rho$ in the subspace of $e^{i\pi \tilde {\mathcal N}}=1$ governed by ${\mathcal L}^{(+)}\equiv(1+e^{i\pi \tilde {\mathcal N}}){\mathcal L}(1+e^{i\pi \tilde {\mathcal N}})/4$, or in the subspace of $e^{i\pi \tilde {\mathcal N}}=-1$ governed by ${\mathcal L}^{(-)}\equiv(1-e^{i\pi \tilde {\mathcal N}}){\mathcal L}(1-e^{i\pi \tilde {\mathcal N}})/4$ separately. In the following discussion on the skin effect, we will consider starting with $\rho(t=0)$ in the subspace of $e^{i\pi \tilde {\mathcal N}}=1$ and focus on
\begin{align}
{\mathcal L}^{(+)}=&-2{\mathcal C}^\dagger(2i\textbf H+\textbf M+\textbf M^T){\mathcal C}+2 {\mathcal C}^\dagger(\textbf M-\textbf M^T)\left({\mathcal C}^\dagger\right)^T,\label{L+}
\end{align}
where $\textbf M$ is a complex Hermitian matrix whose elements are given by $M_{mn}=\sum_\mu l_{\mu,m}l_{\mu,n}^*$.
\subsection{Normal Modes}
To work out the normal modes of ${\mathcal L}^{(+)}$, it is convenient to introduce the adjoint Majorana fermion operators as
\begin{align}
\mathcal A_{2m-1}=&\frac {1}{\sqrt 2}(\mathcal C_m+\mathcal C_m^\dagger),\label{Aodd}\\
\mathcal A_{2m}=&\frac {i}{\sqrt 2}(\mathcal C_m-\mathcal C_m^\dagger)\label{Aeven}.
\end{align}
The normalization has been chosen such that $\{\mathcal A_m,\mathcal A_n\}=\delta_{mn}$. The Liouvillian takes the form
\begin{align}
{\mathcal L}^{(+)}={\mathcal A}^\dagger \textbf T {\mathcal A}-T_0\mathbf I.
\end{align}
Here $\mathcal A=(\mathcal A_1, \mathcal A_2,\dots,\mathcal A_{4\mathcal N})^T$, $T_0=2\sum_{m=1}^{2\mathcal N}M_{mm}=2\,{\rm Tr}\,\textbf M$ and the elements of the anti-symmetric complex matrix $\textbf T$ are
\begin{align}
T_{2m-1,2n-1} & =-2iH_{mn}+M_{mn}-M_{nm}, \\
T_{2m-1,2n}& = 2iM_{mn}, \\
T_{2m,2n-1}& = -2iM_{nm},\\
T_{2m,2n} & = -2iH_{mn}-M_{mn}+M_{nm}.
\end{align}
Since $\textbf T$ is anti-symmetric, if $\lambda$ is an eigenvalue of $\textbf T$, so is $-\lambda$. Suppose $\mathbf v_m=(v_{m,1},v_{m,2},\dots,v_{m,4\mathcal N})^T$ is the $m$th right eigenvector of $\textbf T$ corresponding to eigenvalue $\lambda_m$. We label the eigenvectors in such a way that ${\rm Re}\lambda_{2m-1}\ge0$ and $\lambda_{2m-1}=-\lambda_{2m}\equiv\beta_m$. The magnitude of the real part of $\beta_m$ indicates how fast the normal mode decays. Since
\begin{align}
(\lambda_m+\lambda_{m'})\sum_{n=1}^{4\mathcal N} v_{m,n}v_{m',n}=0,
\end{align}
we choose the normalization to be
\begin{align}
\sum_{n=1}^{4\mathcal N} v_{2m-1,n}v_{2m,n}=1.
\end{align}
Note that because the normalization is invariant under the transformation $\mathbf v_{2m-1}\to s\mathbf v_{2m-1}$ and $\mathbf v_{2m}\to \mathbf v_{2m}/s$, physical observables shall depend on the bilinear forms of $\mathbf v_{2m-1}$ and $\mathbf v_{2m}$.
Thus, the matrix $\mathbf V$ of element $V_{mn}=v_{m,n}$ can transform $\mathbf T$ into the form
\begin{align}
\textbf T=\textbf V^T \Lambda \textbf V,
\end{align}
where
\begin{align}
\Lambda=
\left[\begin{matrix}
0&\beta_1&0&0&\cdots \\
-\beta_1&0&0&0&\cdots \\
0&0&0&\beta_2&\cdots \\
0&0&-\beta_2&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right].
\end{align}
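Numerically, this decomposition is obtained by pairing the $\pm\lambda$ eigenvectors of $\textbf T$ and imposing the bilinear normalisation; the sketch below (NumPy; the random antisymmetric test matrix and the greedy pairing, which assumes non-degenerate eigenvalues, are illustrative helpers, not part of the formalism) verifies $\textbf T=\textbf V^T \Lambda \textbf V$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4                                      # number of (+beta, -beta) pairs
A = rng.standard_normal((2*n, 2*n)) + 1j*rng.standard_normal((2*n, 2*n))
T = A - A.T                                # a complex antisymmetric test matrix

lam, vecs = np.linalg.eig(T)
free, rows, betas = list(range(2*n)), [], []
while free:
    i = max(free, key=lambda q: lam[q].real)           # take Re(lambda) >= 0 first
    free.remove(i)
    j = min(free, key=lambda q: abs(lam[q] + lam[i]))  # its partner, -lambda
    free.remove(j)
    vi, vj = vecs[:, i], vecs[:, j]
    rows += [vi, vj / (vi @ vj)]                       # bilinear normalisation
    betas.append(lam[i])

V = np.array(rows)
Lam = np.zeros((2*n, 2*n), dtype=complex)
for m, b in enumerate(betas):
    Lam[2*m, 2*m + 1], Lam[2*m + 1, 2*m] = b, -b
print(np.max(np.abs(T - V.T @ Lam @ V)))               # small residual expected
\end{verbatim}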
Finally, we define the adjoint Majorana fermion operators for the normal modes
$\mathcal B=(\mathcal B_1, \tilde{\mathcal B}_1,\dots,\mathcal B_{2\mathcal N}, \tilde{\mathcal B}_{2\mathcal N})^T$
as
\begin{align}
\mathcal B=\mathbf V\mathcal A\label{BVA},
\end{align}
which satisfy $\{\mathcal B_m, \tilde{\mathcal B}_n\}=\delta_{mn}$, $\{\mathcal B_m, {\mathcal B}_n\}=0$ and $\{\tilde{\mathcal B}_m, \tilde{\mathcal B}_n\}=0$. Explicitly
\begin{align}
{\mathcal B}_m &=\frac{1}{\sqrt 2}\sum_j^{2\mathcal N} (\textbf V_{2m-1,2j-1}+i\textbf V_{2m-1,2j})\mathcal C_j\nonumber\\
&\quad + (\textbf V_{2m-1,2j-1}-i\textbf V_{2m-1,2j})\mathcal C_j^\dagger,\label{bm}\\
\tilde {\mathcal B}_m &=\frac{1}{\sqrt 2}\sum_j^{2\mathcal N} (\textbf V_{2m,2j-1}+i\textbf V_{2m,2j})\mathcal C_j\nonumber\\
&\quad + (\textbf V_{2m,2j-1}-i\textbf V_{2m,2j})\mathcal C_j^\dagger.\label{tbm}
\end{align}
and consequently
\begin{align}
\mathcal L^{(+)}=-2\sum_{m=1}^{2\mathcal N}\beta_m\tilde{\mathcal B}_m \mathcal B_m.
\end{align}
Note that generally speaking, $\tilde{\mathcal B}_m \neq\mathcal B_m^\dagger$. The Liouvillian superoperator becomes
\begin{align}
\mathcal {\mathcal L}^{(+)}=-2\sum_{m=1}^{2\mathcal N}\beta_m \mathcal O_m,\label{Ldiag}
\end{align}
with $\mathcal O_m=\tilde{\mathcal B}_m \mathcal B_m$. Let us emphasize that ${\rm Re}\beta_m\ge0$. Since $\mathcal O_m^2=\mathcal O_m$, the eigenvalue $\nu_m$ of $\mathcal O_m$ is either $0$ or $1$; since $\mathcal L^{(+)}$ commutes with every $\mathcal O_m$, we can use the eigenvalues $\nu_m$ of the $\mathcal O_m$ to label the right eigenvectors of $\mathcal L^{(+)}$ as $|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})$, i.e., $\mathcal O_m|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})=\nu_m|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})$ and $\mathcal L^{(+)}|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})=(-2\sum_{m=1}^{2\mathcal N}\beta_m\nu_m)|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})$.
Due to the coupling to the environment, usually all ${\rm Re}\beta_m>0$. In this case,
since the formal solution of the Lindblad equation is $\rho(t)=e^{\mathcal L^{(+)}t}\rho(0)$, when $t\to\infty$ the open system relaxes to its non-equilibrium steady state, i.e., $\rho(t=\infty)=\rho_{\rm NESS}$, which is uniquely determined by ${\mathcal L}^{(+)}$ \cite{Zoller}. Let us define $|{\rm NESS})\equiv 2^{2\mathcal N}\rho_{\rm NESS}$. The nontrivial state $|{\rm NESS})$ must be a right eigenvector of $\mathcal L^{(+)}$ with eigenvalue zero, which means that $|{\rm NESS})$ must be of the form $|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})$ with all $\nu_m=0$. Moreover, $|{\rm NESS})$ must be the vacuum vector for all ${\mathcal B}_m$, i.e., ${\mathcal B}_m |{\rm NESS})=0$; if not, the evolution of the state $e^{\mathcal L^{(+)}t}{\mathcal B}_m |{\rm NESS})=e^{2\beta_m t}{\mathcal B}_m |{\rm NESS})$ would exhibit an unbounded exponential growth, which is physically impossible in an open system governed by the Lindblad equation. On the other hand, the states $\tilde{\mathcal B}_m |{\rm NESS})$ evolve with time as $e^{\mathcal L^{(+)}t}\tilde{\mathcal B}_m |{\rm NESS})=[e^{\mathcal L^{(+)}t}\tilde{\mathcal B}_m e^{-\mathcal L^{(+)}t} ] |{\rm NESS})=e^{-2\beta_m t}\tilde{\mathcal B}_m |{\rm NESS})$, which means that the states $\tilde{\mathcal B}_m |{\rm NESS})$ are right eigenvectors of $\mathcal L^{(+)}$ with eigenvalues $-2\beta_m$, respectively; these right eigenvectors represent the normal modes of the open system, corresponding to creating particle or hole excitations on top of the non-equilibrium steady state $|{\rm NESS})$. All these normal modes die out during the evolution from $\rho(0)$ to $\rho_{\rm NESS}$. The $2^{2\mathcal N}$ basis vectors $|\nu_1,\nu_2,\dots,\nu_{2\mathcal N})$ used to expand a general $|\rho(t))$ can now be taken to be $\prod_{m=1}^{2\mathcal N}\tilde{\mathcal B}_m^{\nu_m}|{\rm NESS})$.
Regarding the left eigenvectors, since the Lindblad equation conserves the trace of the density matrix, i.e., $\partial_t{\rm Tr}\rho(t)=0$, we have $(1|{\mathcal L}^{(+)}|\rho(t))=0$ for arbitrary $\rho(t)$; this identity indicates that $(1|$ is a left eigenvector of ${\mathcal L}^{(+)}$ with eigenvalue zero. This point is also obvious from Eqs.~(\ref{cidefine}), (\ref{left}) and (\ref{L+}). Similarly, one can show
\begin{align}
(1|{\mathcal B}_m {\mathcal L}^{(+)}=-2\beta_m (1|{\mathcal B}_m.\label{1bm}
\end{align}
Combined with Eqs.~(\ref{L+}) and (\ref{bm}), Eq.~(\ref{1bm}) implies that the eigenvalues of the matrix $2i\textbf H+\textbf M+\textbf M^T$ are $\{\beta_m\}$; the eigenvalues of ${\mathcal L}^{(+)}$ can thus be obtained by diagonalizing $2i\textbf H+\textbf M+\textbf M^T$ \cite{Zoller,Cooper}. Likewise, $(1|\tilde{\mathcal B}_m=0$; otherwise $(1|\tilde{\mathcal B}_m e^{\mathcal L^{(+)}t}$ would grow exponentially. From Eq.~(\ref{tbm}) we then conclude $\textbf V_{2m,2j-1}+i\textbf V_{2m,2j}=0$, which we also confirmed by numerical calculations.
It can happen that there exist normal modes of ${\rm Re}\beta_m=0$. These modes decouple from the environment, and would not die out as time evolves. In such cases, the non-equilibrium steady state $ |{\rm NESS})$ also depends on the initial density matrix $\rho(t=0)$, and ${\mathcal B}_m |{\rm NESS})\sim {\mathcal B}_m |\rho(t=0))$ \cite{Zoller}.
\section{The skin effect}
We now apply the adjoint fermion approach to the SSH model, Eq.~(\ref{HforSSH}), which is coupled to the environment via Eqs.~(\ref{Lforsigmayl}) and (\ref{Lforsigmayg}). We are going to work out the normal modes of the system with both the periodic boundary condition and the open boundary condition, and show how the skin effect, in particular the chiral damping phenomenon, can be understood in terms of the normal modes.
\subsection{The Periodic Boundary Condition}
We first consider the periodic boundary condition. We Fourier transform as $d_{k,A(B)}=\sum_{m=1}^{\mathcal N/2}d_{m,A(B)}e^{-ikm}/\sqrt{\mathcal N/2}$, and have the Hamiltonian expressed in the momentum $k$ space as
\begin{align}
H=\sum_k
\begin{bmatrix}
d_{k,A}^\dagger&d_{k,B}^\dagger
\end{bmatrix}
\begin{bmatrix}
0&t_1+t_2e^{-ik}\\
t_1+t_2e^{ik}&0
\end{bmatrix}
\begin{bmatrix}
d_{k,A}\\
d_{k,B}
\end{bmatrix}.
\end{align}
From now on, we work within each momentum-$k$ subspace. In such a subspace,
the matrix $\mathbf H$ in Eq.~(\ref{l0}) takes the form
\begin{align}
\textbf H=\frac{i}{4}
\begin{bmatrix}
0&0&I_k&-R_k\\
0&0&R_k&I_k\\
-I_k&-R_k&0&0\\
R_k&-I_k&0&0
\end{bmatrix},\label{pbch}
\end{align}
with $R_k=t_1+t_2\cos(k)$, $I_k=-t_2\sin(k)$. Similarly, in the same subspace, we derive the matrix $\mathbf M$ in Eq.~(\ref{L+}) from Eqs.~(\ref{Lforsigmayl}) and (\ref{Lforsigmayg}) as
\begin{align}
\textbf M=\frac{\gamma_+}{8}
\left[\begin{matrix}
1&0&0&-1\\
0&1&1&0\\
0&1&1&0\\
-1&0&0&1
\end{matrix}\right]+
\frac{i\gamma_-}{8}
\left[\begin{matrix}
0&1&1&0\\
-1&0&0&1\\
-1&0&0&1\\
0&-1&-1&0
\end{matrix}\right],\label{pbcm}
\end{align}
with $\gamma_{\pm}=\gamma_l\pm\gamma_g$. By collecting Eqs.~(\ref{pbch}) and (\ref{pbcm}) together, we find
$\mathcal L^{(+)}=\sum_k \mathcal L^{(+)}_k$ and explicitly
\begin{align}
&\mathcal L^{(+)}_k=\nonumber\\
&\mathcal C^\dagger_k
\left[\begin{matrix}
-\frac{\gamma_+}{2}&0&I_k&-R_k+\frac{\gamma_+}{2}\\
0&-\frac{\gamma_+}{2}&R_k-\frac{\gamma_+}{2}&I_k\\
-I_k&-R_k-\frac{\gamma_+}{2}&-\frac{\gamma_+}{2}&0\\
R_k+\frac{\gamma_+}{2}&-I_k&0&-\frac{\gamma_+}{2}
\end{matrix}\right]\mathcal C_k\nonumber\\
&+\frac{\gamma_-}{2}\mathcal C^\dagger_k
\left[\begin{matrix}
0&1&1&0\\
-1&0&0&1\\
-1&0&0&1\\
0&-1&-1&0
\end{matrix}\right](\mathcal C^\dagger_k)^T.\label{L+forsigmayinPBC}
\end{align}
The eigenvalues of $\mathcal L^{(+)}_k$ are found to be
\begin{align}
\lambda_{k,1}=&\frac{1}{4}\left(\gamma_+ - i\sqrt{-\gamma_+^2+4I_k^2+4R_k^2+4i\gamma_+ I_k}\right), \label{lambdak1}\\
\lambda_{k,2}=&-\lambda_{k,1}, \label{lambdak2}\\
\lambda_{k,3}=&\frac{1}{4}\left(\gamma_+ + i\sqrt{-\gamma_+^2+4I_k^2+4R_k^2-4i\gamma_+ I_k}\right), \label{lambdak3}\\
\lambda_{k,4}=&-\lambda_{k,3}.\label{lambdak4}
\end{align}
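These closed forms can be checked against the observation below Eq.~(\ref{1bm}) that the decay rates $\beta_m$ are the eigenvalues of $2i\textbf H+\textbf M+\textbf M^T$. In the sketch below (NumPy; note that the antisymmetric $\gamma_-$ part of Eq.~(\ref{pbcm}) drops out of $\textbf M+\textbf M^T$, so only $\gamma_+$ enters) each of $\lambda_{k,1}$ and $\lambda_{k,3}$ should appear twice in the printed spectrum.
\begin{verbatim}
import numpy as np

t1, t2, gp = 1.0, 1.0, 0.4
for k in (0.0, 6*np.pi/13, 11*np.pi/10, 10*np.pi/7):
    Rk, Ik = t1 + t2*np.cos(k), -t2*np.sin(k)
    H = 0.25j*np.array([[0, 0, Ik, -Rk], [0, 0, Rk, Ik],
                        [-Ik, -Rk, 0, 0], [Rk, -Ik, 0, 0]])
    M = (gp/8)*np.array([[1, 0, 0, -1], [0, 1, 1, 0],
                         [0, 1, 1, 0], [-1, 0, 0, 1]])
    betas = np.sort_complex(np.linalg.eigvals(2j*H + M + M.T))
    lam1 = (gp - 1j*np.sqrt(-gp**2 + 4*Ik**2 + 4*Rk**2 + 4j*gp*Ik))/4
    lam3 = (gp + 1j*np.sqrt(-gp**2 + 4*Ik**2 + 4*Rk**2 - 4j*gp*Ik))/4
    print(k, betas, lam1, lam3)   # lam1 and lam3 should each occur twice
\end{verbatim}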
Note that when $t_1=t_2$, $I_{k=\pi}=R_{k=\pi}=0$, and the matrix $\mathbf H$ in Eq.~(\ref{pbch}) is identically zero as well; two eigenvalues of $\mathcal L^{(+)}_{k=\pi}$ are zero. In the regime $t_1< t_2$, there always exists one momentum $k_0$ such that $R_{k_0}=0$, and consequently either ${\rm Re}\lambda_{k_0,1}$ or ${\rm Re}\lambda_{k_0,3}$ is zero \cite{Song}. As discussed above, if the real part of some eigenvalues of $\mathcal L^{(+)}$ is zero, the existence of the corresponding normal modes in
the non-equilibrium steady state $\rho_{\rm NESS}$ depends on the initial state $\rho(t=0)$.
The occupation number for each momentum $k$ can be calculated as
\begin{align}
G_{k,A}(t)&=\braket{d^\dagger_{k,A}(t) d_{k,A}(t)}\nonumber\\
&=\frac{1}{4}(1|[\mathcal C_{k,1}+\mathcal C_{k,1}^\dagger+i(\mathcal C_{k,2}+\mathcal C_{k,2}^\dagger)]\nonumber\\
&\quad \times[\mathcal C_{k,1}+\mathcal C_{k,1}^\dagger-i(\mathcal C_{k,2}+\mathcal C_{k,2}^\dagger)] e^{\mathcal L^{(+)}t}|\rho(t=0)),\label{gka}\\
G_{k,B}(t)&=\braket{d^\dagger_{k,B}(t) d_{k,B}(t)}\nonumber\\
&=\frac{1}{4}(1|[\mathcal C_{k,3}+\mathcal C_{k,3}^\dagger+i(\mathcal C_{k,4}+\mathcal C_{k,4}^\dagger)]\nonumber\\
&\quad \times[\mathcal C_{k,3}+\mathcal C_{k,3}^\dagger-i(\mathcal C_{k,4}+\mathcal C_{k,4}^\dagger)] e^{\mathcal L^{(+)}t}|\rho(t=0)).\label{gkb}
\end{align}
According to Eqs. (\ref{Aodd}), (\ref{Aeven}) and (\ref{BVA}) we have
\begin{align}
\mathcal C_m=&\frac{1}{\sqrt 2}\sum_j^{2\mathcal N} (\textbf V_{2j,2m-1}-i\textbf V_{2j,2m})\mathcal B_j\nonumber\\
&+(\textbf V_{2j-1,2m-1}-i\textbf V_{2j-1,2m})\tilde{\mathcal B}_j,\\
\mathcal C_m^\dagger=&\frac{1}{\sqrt 2}\sum_j^{2\mathcal N} (\textbf V_{2j,2m-1}+i\textbf V_{2j,2m})\mathcal B_j\nonumber\\
&+(\textbf V_{2j-1,2m-1}+i\textbf V_{2j-1,2m})\tilde{\mathcal B}_j.
\end{align}
\begin{figure}
\includegraphics[width=3 in]{adjointfermionperiodic}
\caption{Time evolution of $G_{k,A}(t)$ for $k=0,\frac{6}{13}\pi,\frac{11}{10}\pi,\frac{10}{7}\pi$ with $t_1=t_2=1$ and $\gamma_+=0.4,\gamma_-=0$ calculated numerically by the adjoint fermion approach. Equations (\ref{lambdak1}), (\ref{lambdak2}), (\ref{lambdak3}) and (\ref{lambdak4}) show that zero eigenvalues emerge at $k=\pi$; at $k=11\pi/10$, the real parts of the eigenvalues are $\pm8.8\times 10^{-4}$ and $\pm0.20$, resulting in a smaller decay rate than the other cases. }\label{adjointfermionperiodic}
\end{figure}
Figure (\ref{adjointfermionperiodic}) shows the time evolution of the occupation numbers $G_{k,A}(t)$ calculated numerically for $k=0,\frac{6}{13}\pi,\frac{11}{10}\pi,\frac{10}{7}\pi$ with $t_1=t_2=1$ and $\gamma_+=0.4,\gamma_-=0$. Since zero eigenvalues emerge at $k=\pi$, the steady state for the periodic boundary condition yields $G_{k,A}(t\rightarrow \infty)=G_{k,B}(t\rightarrow \infty)=1/2$ except for $k=\pi$. Moreover, the decay rate of $G_{11\pi/10,A}(t)$ is much smaller than the others, which implies small eigenvalues of $\mathcal L_{11\pi/10}^{(+)}$. Figure (\ref{adjointfermionperiodic}) agrees with the results obtained from the equation of motion obeyed by $G_{k,A(B)}(t)$ in Appendix C.
The occupation numbers in the real space can be obtained through an inverse Fourier transform $d_{j,A(B)}=\sum_{k=-\pi}^{\pi}d_{k,A(B)}e^{ikj}/\sqrt{\mathcal N/2}$:
\begin{align}
G_{j,A(B)}(t) &= \braket{d^\dagger_{j,A(B)}(t) d_{j,A(B)}(t)}\nonumber\\
& =\frac2{\mathcal N} \sum_{k,k^\prime}\braket{d_{k^\prime,A(B)}^\dagger(t) d_{k,A(B)}(t)}e^{i(k-k^\prime)j}\nonumber\\
& = \frac2{\mathcal N} \sum_k G_{k,A(B)}(t);
\end{align}
the third equality in the above equation holds because the momentum $k$ is conserved. Obviously $G_{j,A(B)}(t)$ is independent of $j$; no skin effect shows up, as there is no skin at all.
\subsection{The Open Boundary Condition}
In the presence of the open boundary condition, we have to work in real space, and find
\begin{widetext}
\begin{align}
\mathcal L^{(+)}=\mathcal C^\dagger\left[\begin{matrix}
0&0&0&-a_L&0&0&0&0&\cdots \\
0&0&a_L&0&0&0&0&0&\cdots \\
0&-a_R&0&0&0&-t_2&0&0&\cdots \\
a_R&0&0&0&t_2&0&0&0&\cdots \\
0&0&0&-t_2&0&0&0&-a_L&\cdots\\
0&0&t_2&0&0&0&a_L&0&\cdots\\
0&0&0&0&0&-a_R&0&0&\cdots\\
0&0&0&0&a_R&0&0&0&\cdots\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]\mathcal C
-\frac{\gamma_+}{2} \mathcal C^\dagger\textbf I\mathcal C+\frac{i\gamma_-}{2}
\mathcal C^\dagger
\left[\begin{matrix}
0&1&1&0&0&0&0&0&\cdots \\
-1&0&0&1&0&0&0&0&\cdots \\
-1&0&0&1&0&0&0&0&\cdots \\
0&-1&-1&0&0&0&0&0&\cdots \\
0&0&0&0&0&1&1&0&\cdots\\
0&0&0&0&-1&0&0&1&\cdots\\
0&0&0&0&-1&0&0&1&\cdots\\
0&0&0&0&0&-1&-1&0&\cdots\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right](\mathcal C^\dagger)^T,\label{L+forsigmayinOBC}
\end{align}
\end{widetext}
with $a_L=t_1-\gamma_+/2$, $a_R=t_1+\gamma_+/2$.
We numerically calculate the normal modes of $\mathcal L^{(+)}$ given in Eq.~(\ref{L+forsigmayinOBC}).
We find that, different from the periodic boundary condition, the real parts of the eigenvalues of $\mathcal L^{(+)}$ with the open boundary condition are all nonzero and negative, which guarantees a unique steady state regardless of the initial state.
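A sketch of this numerical check is the following (NumPy; we assemble only the coefficient matrix of the $\mathcal C^\dagger\ldots\mathcal C$ part read off from Eq.~(\ref{L+forsigmayinOBC}), whose eigenvalues are $-2\beta_m$; the chain length is illustrative).
\begin{verbatim}
import numpy as np

Nc, t1, t2, gp = 20, 0.8, 1.0, 0.4        # Nc cells, 4 Majorana indices per cell
aL, aR = t1 - gp/2, t1 + gp/2
n = 4 * Nc
K = np.zeros((n, n))
for c in range(0, n, 4):                   # intra-cell entries of the first matrix
    K[c, c+3], K[c+1, c+2] = -aL, aL
    K[c+2, c+1], K[c+3, c] = -aR, aR
for c in range(0, n - 4, 4):               # inter-cell t2 entries
    K[c+2, c+5], K[c+3, c+4] = -t2, t2
    K[c+4, c+3], K[c+5, c+2] = -t2, t2
ev = np.linalg.eigvals(K - (gp/2)*np.eye(n))   # eigenvalues -2 beta_m
print(ev.real.max())                           # strictly negative for the open chain
\end{verbatim}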
The form of Eq.~(\ref{L+forsigmayinOBC}) implies that the eigenvalues of $\mathcal L^{(+)}$ are independent of $\gamma_-$. For the sake of simplicity, in the following discussion we choose $\gamma_g=\gamma_l$, i.e., $\gamma_-=0$; in this case, from Eqs.~(\ref{cidefine}) and (\ref{L+forsigmayinOBC}), we know that $\rho_{\rm{NESS}}$ must be proportional to the identity, i.e., $|\rm{NESS})=1$. The occupation number on the $m$th site in the state $|\rm{NESS})$ is
\begin{align}
G_{m,{\rm{NESS}}}&={\rm{Tr}}d_m^\dagger d_m \rho_{\rm{NESS}}\nonumber\\
&=\frac{1}{4}({\rm1}|[\mathcal C_{2m-1}+\mathcal C_{2m-1}^\dagger+i(\mathcal C_{2m}+\mathcal C_{2m}^\dagger)]\nonumber\\
&\quad \times [\mathcal C_{2m-1}+\mathcal C_{2m-1}^\dagger-i(\mathcal C_{2m}+\mathcal C_{2m}^\dagger)]|{\rm{NESS}})\nonumber\\
&=\frac{1}{2};\label{GNESS}
\end{align}
equal $\gamma_l$ and $\gamma_g$ result in half occupation of every site in the non-equilibrium steady state.
On top of the non-equilibrium steady state $|\rm{NESS})=1$, if a single normal mode corresponding to $\tilde{\mathcal B}_n$ is excited, the state should be $\tilde{\mathcal B}_n|\rm{NESS})$, and the occupation number on the $m$th site instead becomes
\begin{align}
G_{m,n}=&\frac{1}{4}({\rm1}|\mathcal B_n [\mathcal C_{2m-1}+\mathcal C_{2m-1}^\dagger+i(\mathcal C_{2m}+\mathcal C_{2m}^\dagger)]\nonumber\\
&\quad [\mathcal C_{2m-1}+\mathcal C_{2m-1}^\dagger-i(\mathcal C_{2m}+\mathcal C_{2m}^\dagger)]\tilde{\mathcal B}_n|{\rm{NESS}}).
\end{align}
Figure (\ref{occupationnumberforfirstexcitation}) shows the numerical results of the occupation number difference $\Delta_{G_{m,n}}=G_{m,n}-G_{m,{\rm{NESS}}}$ for representative excitations on top of the non-equilibrium steady state. We have numerically checked that the occupation number differences satisfy the sum rule, i.e., $\sum_m{\Delta_{G_{m,n}}}=\pm 1$, which matches the expectation that a single excitation of the normal modes of $\mathcal L^{(+)}$ corresponds to the creation of a particle or a hole in terms of the Dirac fermions.
The density variation $\Delta_{G_{m,n}}$ due to the excitation of the normal modes does not show the skin effect, as expected; this feature of the normal modes provides an alternative way to understand the independence of local observables from the boundary conditions in the thermodynamic limit \cite{Mao}.
In the following we are going to show how such normal modes give rise to the skin effect.
\begin{figure}[t]
\includegraphics[width=3 in]{occupationnumberforfirstexcitation}
\caption{The occupation number difference $\Delta_{G_{m,n}}$ due to particle or hole excitations on top of the non-equilibrium steady state. Representative excitations are numerically calculated with $\mathcal N=200$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_-=0$. No skin effect shows up in the occupation numbers with single excitations.}
\label{occupationnumberforfirstexcitation}
\end{figure}
\begin{figure}[t]
\includegraphics[width=3 in]{skineffect}
\caption{Time evolution of $\Delta_{G_m}(t)=G_m(t)-G_{m,\rm{NESS}}$ for $m=1,3,5,7,9$ with the open boundary condition. The calculation is done with $\mathcal N=40$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_-=0$. The skin effect is manifested in the ``chiral damping'': $\Delta_{G_m}(t)$ enters an exponential decay regime one by one from left to right along the chain. The result with the periodic boundary condition is plotted for comparison.}
\label{skineffect}
\end{figure}
\begin{figure}[t]
\includegraphics[width=3 in]{frequency}
\caption{Frequencies $\omega_{j,k,{\rm Im}}$ calculated with $\mathcal N=40$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_-=0$. For $\mathcal N=40$ there are 3160 coefficients $F_{x_1,x_2}^{(2)}$ in Eq.~(\ref{rhot0}); the corresponding $\omega_{j,k,{\rm Im}}$ have been sorted in decreasing order.}\label{frequency}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{amplitude}
\caption{
Amplitudes $D_{j,k,m}$ corresponding to the $\omega_{j,k,{\rm Im}}$ indexed in Fig.~(\ref{frequency}) via Eq.~(\ref{Gmdecom}). The plots are for sites $m=1,3,5,7$ with $\mathcal N=40$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_-=0$. The amplitudes gradually grow out of phase as the site moves from left to right.}\label{amplitude}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{amplitude3}
\caption{
Amplitude $D_{j,k,m}$ corresponding to $\omega_{j,k,\rm{Im}}$ via Eq.~(\ref{Gmdecom}) indexed in Fig.~(\ref{frequency}). The plots are for sites $m=34,36,38,40$ with $\mathcal N=40$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_l=0$. The amplitudes are generally completely out of phase.}\label{amplitude2}
\end{figure*}
Now, we turn to the occupation number
\begin{align}
G_m(t)={\rm {Tr}}\left[d_m^\dagger d_m e^{\mathcal L^{(+)}t}\rho(t=0)\right]\label{Gm}
\end{align}
and calculate it in terms of the normal modes of
$\mathcal L^{(+)}$ with the initial state $\rho(t=0)$ such that $G_m(t=0)=1$ for all $m$.
To work out the time evolution of $G_m(t)$ in Eq.~(\ref{Gm}), we first map the observable from the Dirac fermion representation to the adjoint fermion representation, which gives
\begin{align}
&G_m(t)=\nonumber\\
&\frac{1}{2}\sum_{j,l}^{2\mathcal N}(1|[K_{2j,m}^{(+)}\mathcal B_j+K_{2j-1,m}^{(+)}\tilde {\mathcal B}_j][K_{2l,m}^{(-)}\mathcal B_l+K_{2l-1,m}^{(-)}\tilde {\mathcal B}_l]\nonumber\\
&e^{\mathcal L^{(+)}t}|\rho(t=0)),\label{Gmsim}
\end{align}
with $K_{j,m}^{(\pm)}=\textbf V_{j,4m-3}\pm i\textbf V_{j,4m-1}$.
The initial state $|\rho(t=0))$ can be expanded in terms of the eigenvectors of $\mathcal L^{(+)}$ as
\begin{align}
&|\rho(t=0)) \nonumber\\
&= F^{(0)}{|\rm{NESS})}+\sum_{x_1=1}^{2\mathcal N}F_{x_1}^{(1)}\tilde {\mathcal B}_{x_1}|\rm{NESS})+\nonumber\\
&\quad \sum_{x_1=1}^{2\mathcal N}\sum_{x_2=x_1+1}^{2\mathcal N}F_{x_1,x_2}^{(2)}\tilde {\mathcal B}_{x_2}\tilde {\mathcal B}_{x_1}|\rm{NESS})+\cdots+\nonumber\\
&\quad \sum_{x_1=1}^{2\mathcal N}\cdots \sum_{x_n=x_{n-1}+1}^{2\mathcal N}F_{x_1,\cdots,x_n}^{(n)}\tilde {\mathcal B}_{x_n}\cdots\tilde {\mathcal B}_{x_1}|\rm{NESS})+\cdots,\label{rhot0}
\end{align}
where the $F$'s are the expansion coefficients; conversely,
\begin{align}
F_{x_1,\cdots,x_n}^{(n)} = (1|\mathcal B_{x_1}\cdots\mathcal B_{x_n}|\rho(t=0)).
\end{align}
Note that $F^{(0)}=\rm{Tr}\rho(t=0)\equiv 1$.
Due to the anti-commutation $\{\mathcal B_m, \tilde{\mathcal B}_n\}=\delta_{mn}$, the contribution to Eq.~(\ref{Gmsim}) comes only from the terms of $n=0,2$ in Eq.~(\ref{rhot0}).
Finally, we have the time-dependent density deviation as
\begin{align}
\Delta_{G_m}(t)=G_m(t)-G_{m,{\rm{NESS}}}=\sum_{j=1}^{2\mathcal N} \sum_{k=j+1}^{2\mathcal N} D_{j,k,m}e^{-2(\beta_j+\beta_k) t},
\end{align}
where
\begin{align}
D_{j,k,m} = i(\textbf V_{2j,4m-3} \textbf V_{2k,4m-1}-\textbf V_{2j,4m-1} \textbf V_{2k,4m-3})F_{j,k}^{(2)}.
\end{align}
The study of the equation of motion for $\Delta_{G_m}(t)$ has shown that all $\omega_{j,k}=-2(\beta_j+\beta_k)$ have the same real part, equal to $-2\gamma_+$, i.e., $\omega_{\rm Re}\equiv{\rm Re}\,\omega_{j,k}=-2\gamma_+$ for all $j,k$ \cite{Song}. This equality has been confirmed by our numerical calculation. In Fig.~(\ref{skineffect}), the numerical results of $\Delta_{G_m}(t)$ exhibit the ``chiral damping'', which is the hallmark of the skin effect.
To understand the skin effect in terms of the normal modes, we look at the following expression
\begin{align}
\Delta_{G_m}(t)=e^{\omega_{\rm Re}t} \sum_{j=1}^{2\mathcal N} \sum_{k=j+1}^{2\mathcal N} D_{j,k,m}e^{i\omega_{j,k,{\rm Im}}t}.\label{Gmdecom}
\end{align}
Here $\omega_{j,k,{\rm Im}}={\rm Im}\omega_{j,k}$.
Since $\Delta_{G_m}(t=0)=1/2$, Eq.~(\ref{Gmdecom}) yields the sum rule
\begin{align}
\sum_{j=1}^{2\mathcal N} \sum_{k=j+1}^{2\mathcal N} D_{j,k,m}=1/2.\label{sumrule}
\end{align}
For $\mathcal N=40$, $t_1=0.8$, $t_2=1$, $\gamma_+=0.4$, $\gamma_-=0$, Figure (\ref{frequency}) shows the numerically calculated $\omega_{j,k,{\rm Im}}$ sorted in decreasing order.
Figures (\ref{amplitude}) and (\ref{amplitude2}) show the amplitudes $D_{j,k,m}$ corresponding to $\omega_{j,k,{\rm Im}}$ on the different sites $m=1,3,5,7$ and $m=34,36,38,40$, respectively; $D_{j,k,m}$ are found to be symmetric with respect to $\omega_{j,k,{\rm Im}}$ for each $m$, and this property guarantees that $\Delta_{G_m}(t)$ is real. A noticeable feature is that for small $m$, the $D_{j,k,m}$ are mainly in phase. As $m$ increases, the $D_{j,k,m}$ start to show out-of-phase peaks as $\omega_{j,k,{\rm Im}}$ moves away from zero.
Thus we attribute the skin effect to the interference between the different frequencies $\omega_{j,k,{\rm Im}}$ with amplitudes $D_{j,k,m}$.
To demonstrate qualitatively why $\Delta_{G_m}(t)$ decays slower when $m$ increases, we assume a ``three mode'' approximation: we only consider three frequencies contributing to $\Delta_{G_m}(t)$ [cf.~Eq.~(\ref{Gmdecom})]. The ``first'' mode has frequency $\omega_{j,k,{\rm Im}}=0$ and a real amplitude denoted by $D_0(>0)$. The other two modes have frequencies $\omega_{j,k,{\rm Im}}=\pm \omega_0$ and the amplitudes $D_{\pm}=1/4-D_0/2$ to satisfy the sum rule Eq.~(\ref{sumrule}).
Thus within this approximation $e^{-\omega_{\rm Re}t}\Delta_{G_m}(t)\sim D_0+2D_{\pm}\cos(\omega_0t)$; for small $t$, if $D_0$ and $D_{\pm}$ are in phase, $e^{-\omega_{\rm Re}t}\Delta_{G_m}(t)$ decreases, while if $D_0$ and $D_{\pm}$ are out of phase, $e^{-\omega_{\rm Re}t}\Delta_{G_m}(t)$ increases. Though our simplified ``three mode'' approximation bears a certain pathology that can be eliminated by including more modes, the argument based on it does reveal the crucial role of the interference between the normal modes in the skin effect.
As shown in Figs.~(\ref{amplitude}) and (\ref{amplitude2}), $D_{j,k,m}$ change from in phase to out of phase as $m$ increases; correspondingly $\Delta_{G_m}(t)$ decays slower for larger $m$.
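A two-line numerical illustration of this argument (pure Python/NumPy; the numbers are arbitrary apart from the sum rule) reads:
\begin{verbatim}
import numpy as np

w0, t = 2.0, np.linspace(0.0, 1.5, 4)
for D0 in (0.3, 0.7):                   # D_pm > 0 (in phase) vs D_pm < 0 (out of phase)
    Dpm = 0.25 - D0/2                   # sum rule D0 + 2 Dpm = 1/2
    print(D0, D0 + 2*Dpm*np.cos(w0*t))  # decreases in phase, increases out of phase
\end{verbatim}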
It is worth emphasizing that this interference effect vitally depends on the projection of the initial state $\rho(t=0)$ onto the basis $\tilde {\mathcal B}_{x_2}\tilde {\mathcal B}_{x_1}|\rm{NESS})$, since $G_m(t)$ would not change with time at all if $\rho(t=0)=\rho_{\rm NESS}$.
\section{Discussion}
The uneven decay of $G_m(t)$ from $G_m(t=0)=1$ is also expected from the form of $\mathcal L^{(+)}$ in Eq.~(\ref{L+forsigmayinOBC}): for the adjoint fermions $\mathcal C$, within each single cell the left hopping amplitude is $a_L=t_1-\gamma_+/2$ while the right one is $a_R=t_1+\gamma_+/2$; the unequal hopping amplitudes result in a slower decay of $G_m(t)$ as $m$ gets closer to the right end of the chain. For a small time lapse $\Delta t\rightarrow 0$, one can calculate $G_m(\Delta t)$ perturbatively as
\begin{align}
G_m(\Delta t)&=({\rm 1}|d_m^\dagger(t=0)d_m(t=0)e^{\mathcal L^{(+)}\Delta t}|\rho(t=0))\nonumber\\
&=({\rm 1}|d_m^\dagger(t=0) d_m(t=0)\sum_n\frac{(\mathcal L^{(+)}\Delta t)^n}{n!}|\rho(t=0)),\label{GmDeltat}
\end{align}
and find, for example,
\begin{align}
G_1(\Delta t)&=1-\gamma_+\Delta t/2+(\gamma_+^2-t_1\gamma_+)(\Delta t)^2/2+O((\Delta t)^3),\nonumber\\
G_2(\Delta t)&=1-\gamma_+\Delta t/2+(\gamma_+^2+t_1\gamma_+)(\Delta t)^2/2+O((\Delta t)^3),\label{G1G2}
\end{align}
which shows the slower decrease of $G_2(\Delta t)$ compared with $G_1(\Delta t)$. Equivalently speaking, the skin effect shown by $G_m(t)$ originates from the asymmetric hopping amplitudes for the adjoint fermions.
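The expansion (\ref{G1G2}) is easy to check against the damping-matrix evolution; the sketch below (NumPy/SciPy, with the same assumed matrix elements for $\mathbf X$ as in the earlier sketch) compares the numerically evolved $G_1$, $G_2$ at a small $\Delta t$ with Eq.~(\ref{G1G2}).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Ns, t1, t2, gp, dt = 8, 0.8, 1.0, 0.4, 1e-2
X = -0.5*gp*np.eye(Ns, dtype=complex)
for a in range(0, Ns - 1, 2):
    X[a, a+1], X[a+1, a] = 1j*(t1 - gp/2), 1j*(t1 + gp/2)
for b in range(1, Ns - 1, 2):
    X[b, b+1] = X[b+1, b] = 1j*t2
E = expm(X*dt)
G = 0.5 + np.real(np.diag(E @ (0.5*np.eye(Ns)) @ E.conj().T))
print(G[0], 1 - gp*dt/2 + (gp**2 - t1*gp)*dt**2/2)   # site 1 vs Eq. (G1G2)
print(G[1], 1 - gp*dt/2 + (gp**2 + t1*gp)*dt**2/2)   # site 2 vs Eq. (G1G2)
\end{verbatim}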
Instead of Eqs.~(\ref{Lforsigmayl}) and (\ref{Lforsigmayg}), if the Hermitian SSH chain (\ref{HforSSH}) is subject to more general jump operators
\begin{align}
L_{m,l}&=\sqrt{\gamma_l}(\cos\theta d_{m,A}+e^{i\phi}\sin\theta d_{m,B}),\label{Llgeneral}\\
L_{m,g}&=\sqrt{\gamma_g}(\cos\theta^\prime d_{m,A}^\dagger+e^{i\phi^\prime}\sin\theta^\prime d_{m,B}^\dagger),\label{Lggeneral}
\end{align}
as shown in Appendix A, the modification to $t_1$ would be $t_1\to t_1+(\gamma_l\sin\phi\sin\theta\cos\theta-\gamma_g \sin\phi'\sin\theta'\cos\theta')$ for hopping in the left direction and $t_1\to t_1-(\gamma_l\sin\phi\sin\theta\cos\theta-\gamma_g \sin\phi'\sin\theta'\cos\theta')$ in the right direction in the expression of $\mathcal L^{(+)}$ in terms of the adjoint fermions. Therefore the skin effect is absent only when $t_1=0$ or $\gamma_l\sin\phi \sin\theta \cos\theta= \gamma_g \sin\phi^\prime \sin\theta^\prime \cos\theta^\prime$. This conclusion agrees with an inspection of the equation of motion obeyed by $\mathbf G(t)$ given in Appendix B.
The adjoint fermion approach can be employed to study the Dirac damping and related phenomena \cite{Chen1,Chen2}.
The explicit workout of the eigen-system of the Liouvillian superoperator $\mathcal L$ enables one to go beyond to calculate two-time Green's functions such as $G_{m,n}(t,t')\equiv\langle d^\dagger_m(t) d_n(t')\rangle={\rm Tr}\left[d^\dagger_m e^{\mathcal L (t-t')}d_n e^{\mathcal L t'}\rho(0)\right]$.
\section*{Acknowledgements}
We thank Pengfei Zhang for helpful discussions. We thank Guangcun Liu for critical reading of the manuscript. This work is supported by the Key Area Research and Development Program of Guangdong Province (Grant No.~2019B030330001), the National Natural Science Foundation of China (Grant Nos.~11474179, 11722438, 91736103, and 12074440), and Guangdong Project (Grant No.~2017GC010613).
\section*{Appendix A: General Linear Cellular Jump Operators}
For the SSH chain (\ref{HforSSH}) subject to the jump operators Eqs.~(\ref{Llgeneral}) and (\ref{Lggeneral}), we present the expression of $\mathcal L^{(+)}$ with the open boundary condition in terms of the adjoint fermions here.
According to Eq.~(\ref{fermitomajoranaL}) and $M_{mn}=\sum_\mu l_{\mu,m}l_{\mu,n}^*$ [cf.~Eq.~(\ref{L+})], we divide the matrix $\mathbf M$ into the one due to loss $\mathbf M_l$ and the one to gain $\mathbf M_g$, and have
\begin{widetext}
\begin{align}
\mathbf M_l&=\frac{\gamma_l}{4}
\left[\begin{matrix}
c^2\theta&0&c\phi s\theta c\theta&s\phi s\theta c\theta&\cdots \\
0&c^2\theta&-s\phi s\theta c\theta&c\phi s\theta c\theta&\cdots \\
c\phi s\theta c\theta&-s\phi s\theta c\theta&s^2\theta&0&\cdots \\
s\phi s\theta c\theta&c\phi s\theta c\theta&0&s^2\theta&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]
+\frac{\gamma_l}{4}
\left[\begin{matrix}
0&ic^2\theta&-is\phi s\theta c\theta&ic\phi s\theta c\theta&\cdots \\
-ic^2\theta&0&-ic\phi s\theta c\theta&-is\phi s\theta c\theta&\cdots \\
is\phi s\theta c\theta&ic\phi s\theta c\theta&0&is^2\theta&\cdots \\
-ic\phi s\theta c\theta&is\phi s\theta c\theta&-is^2\theta&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right],\label{Mlgeneral}\\
\mathbf M_g&=\frac{\gamma_g}{4}
\left[\begin{matrix}
c^2\theta^\prime&0&c\phi^\prime s\theta^\prime c\theta^\prime&-s\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
0&c^2\theta^\prime&s\phi^\prime s\theta^\prime c\theta^\prime&c\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
c\phi^\prime s\theta^\prime c\theta^\prime&s\phi^\prime s\theta^\prime c\theta^\prime&s^2\theta^\prime&0&\cdots \\
-s\phi^\prime s\theta^\prime c\theta^\prime&c\phi^\prime s\theta^\prime c\theta^\prime&0&s^2\theta^\prime&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]
+\frac{\gamma_g}{4}
\left[\begin{matrix}
0&-ic^2\theta^\prime&-is\phi^\prime s\theta^\prime c\theta^\prime&-ic\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
ic^2\theta^\prime&0&ic\phi^\prime s\theta^\prime c\theta^\prime&-is\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
is\phi^\prime s\theta^\prime c\theta^\prime&-ic\phi^\prime s\theta^\prime c\theta^\prime&0&-is^2\theta^\prime&\cdots \\
ic\phi^\prime s\theta^\prime c\theta^\prime&is\phi^\prime s\theta^\prime c\theta^\prime&is^2\theta^\prime&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right].\label{Mggeneral}
\end{align}
\end{widetext}
Both $\mathbf M_l$ and $\mathbf M_g$ are block diagonal, with the $4\times4$ blocks shown in Eqs.~(\ref{Mlgeneral}) and (\ref{Mggeneral}); note that the symmetric and antisymmetric parts of $\mathbf M_l$ and $\mathbf M_g$ have been deliberately separated out. Here, to save space, we have employed the short-hand notation $c\psi\equiv \cos\psi$, $s\psi\equiv \sin\psi$.
Based on Eq. (\ref{L+}), we find that the Liouvillian superoperator $\mathcal L^{(+)}$ becomes
\begin{widetext}
\begin{align}
\mathcal L^{(+)}&\nonumber\\
=&-\mathcal C^\dagger \
\left[\begin{matrix}
0&0&\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&\gamma_ls\phi s\theta c\theta-\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
0&0&-\gamma_ls\phi s\theta c\theta+\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&-\gamma_ls\phi s\theta c\theta+\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&0&0&\cdots \\
\gamma_ls\phi s\theta c\theta-\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&0&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]\mathcal C\nonumber\\
&+\mathcal C^\dagger
\left[\begin{matrix}
0&0&0&-t_1&\cdots \\
0&0&t_1&0&\cdots \\
0&-t_1&0&0&\cdots \\
t_1&0&0&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]\mathcal C
-\mathcal C^\dagger
\left[\begin{matrix}
\gamma_lc^2\theta+\gamma_gc^2\theta^\prime&0&0&0&\cdots \\
0&\gamma_lc^2\theta+\gamma_gc^2\theta^\prime&0&0&\cdots \\
0&0&\gamma_ls^2\theta+\gamma_gs^2\theta^\prime&0&\cdots \\
0&0&0&\gamma_ls^2\theta+\gamma_gs^2\theta^\prime&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]\mathcal C\nonumber\\
&+i\mathcal C^\dagger
\left[\begin{matrix}
0&\gamma_lc^2\theta-\gamma_gc^2\theta^\prime&-\gamma_ls\phi s\theta c\theta-\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\gamma_lc\phi s\theta c\theta-\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
-\gamma_lc^2\theta+\gamma_gc^2\theta^\prime&0&-\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&-\gamma_ls\phi s\theta c\theta-\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\cdots \\
\gamma_ls\phi s\theta c\theta+\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&\gamma_lc\phi s\theta c\theta-\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&0&\gamma_ls^2\theta-\gamma_gs^2\theta^\prime&\cdots \\
-\gamma_lc\phi s\theta c\theta+\gamma_gc\phi^\prime s\theta^\prime c\theta^\prime&\gamma_ls\phi s\theta c\theta+\gamma_gs\phi^\prime s\theta^\prime c\theta^\prime&-\gamma_ls^2\theta+\gamma_gs^2\theta^\prime&0&\cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right]
(\mathcal C^\dagger)^T\label{Lgeneral}.
\end{align}
\end{widetext}
The special case Eq.~(\ref{L+forsigmayinOBC}) is recovered by taking $\theta=\theta^\prime=\pi/4$, $\phi=-\pi/2$ and $\phi^\prime=\pi/2$ in Eq.~(\ref{Lgeneral}). Compared with Eq.~(\ref{L+forsigmayinOBC}), one notes that the modification to the intra-cell hopping is
$t_1\to t_1+(\gamma_l\sin\phi\sin\theta\cos\theta-\gamma_g \sin\phi'\sin\theta'\cos\theta')$ for hopping in the left direction and $t_1\to t_1-(\gamma_l\sin\phi\sin\theta\cos\theta-\gamma_g \sin\phi'\sin\theta'\cos\theta')$ in the right direction.
\section*{Appendix B: Damping Matrix}
For the SSH chain (\ref{HforSSH}) subject to the jump operators Eqs.~(\ref{Llgeneral}) and (\ref{Lggeneral}) with the open boundary condition, the equation of motion for $\mathbf G(t)$, whose matrix elements are $G_{mn}(t)\equiv{\rm{Tr}}\left[d_{m}^\dagger d_{n}\rho(t)\right]$ with $m$ and $n$ labeling both cells and sites within a cell, can be determined similarly as in Ref.~\cite{Song}. The deviation away from the non-equilibrium steady state value $\Delta_{\mathbf G}(t)\equiv \mathbf G(t)-\mathbf G(t\to\infty)$ is found to be
\begin{align}
\Delta_{\mathbf G}(t) = e^{\mathbf X t}\Delta_{\mathbf G}(0)e^{\mathbf X^\dagger t},
\end{align}
where the ``damping matrix" $\mathbf X$ has the form
\begin{widetext}
\begin{align}
\mathbf X=&\left[\begin{matrix}
-\gamma_lc^2\theta-\gamma_gc^2\theta^\prime&it_1-\gamma_le^{-i\phi}s\theta c\theta-\gamma_ge^{i\phi^\prime}s\theta^\prime c\theta^\prime&0&0&\cdots\\
it_1-\gamma_le^{i\phi}s\theta c\theta-\gamma_ge^{-i\phi^\prime}s\theta^\prime c\theta^\prime&-\gamma_ls^2\theta-\gamma_gs^2\theta^\prime&0&0&\cdots\\
0&0&\ddots\\
0&0&&\ddots\\
\vdots&\vdots&&&\ddots
\end{matrix}\right]
+\left[\begin{matrix}
0&0&0&0&0 &\cdots\\
0&0&it_2&0&0&\cdots\\
0&it_2&0&0&0&\cdots\\
0&0&0&0&it_2&\cdots\\
0&0&0&it_2&0&\cdots\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{matrix}\right].\label{x}
\end{align}
\end{widetext}
Now we define $H_{\rm eff}=-i\mathbf X$, which can be understood as a non-Hermitian SSH Hamiltonian; the first part of Eq.~(\ref{x}) contains the on-site energies and the intra-cell hopping, while the second part contains the inter-cell hopping.
It is the non-Hermiticity of $H_{\rm eff}$ that gives rise to the skin effect.
The skin effect emerges when the intra-cell hoppings of the matrix $\mathbf X$ have different strengths. Let us focus on the intra-cell hoppings in the first cell, and the corresponding matrix elements are
\begin{align}
X_{12}=&-\gamma_l\cos\phi \sin\theta \cos\theta -\gamma_g \cos\phi^\prime \sin\theta^\prime \cos\theta^\prime\nonumber\\
&+i(t_1+\gamma_l\sin\phi \sin\theta \cos\theta -\gamma_g \sin\phi^\prime \sin\theta^\prime \cos\theta^\prime),\\
X_{21}=&-\gamma_l\cos\phi \sin\theta \cos\theta -\gamma_g \cos\phi^\prime \sin\theta^\prime \cos\theta^\prime\nonumber\\
&+i(t_1-\gamma_l\sin\phi \sin\theta \cos\theta +\gamma_g \sin\phi^\prime \sin\theta^\prime \cos\theta^\prime).
\end{align}
We find $|X_{12}|=|X_{21}|$ if and only if $t_1=0$ or $\gamma_l\sin\phi \sin\theta \cos\theta =\gamma_g \sin\phi^\prime \sin\theta^\prime \cos\theta^\prime$. This condition for the absence of the skin effect is the same as the one obtained from Eq.~(\ref{Lgeneral}).
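As a quick numerical illustration of this criterion, the following sketch (ours, in Python; the chain length and the rates are illustrative assumptions, not values taken from the main text) builds $\mathbf X$ from Eq.~(\ref{x}), diagonalizes $H_{\rm eff}=-i\mathbf X$, and compares the mean position of the right eigenvectors with and without the skin effect:
\begin{verbatim}
import numpy as np

def damping_matrix(L, t1, t2, gl, gg, th, thp, ph, php):
    """Damping matrix X of Eq. (x) for L cells (2L sites), open boundaries."""
    c, s = np.cos(th), np.sin(th)
    cp, sp = np.cos(thp), np.sin(thp)
    X = np.zeros((2*L, 2*L), dtype=complex)
    for n in range(L):
        a, b = 2*n, 2*n + 1                      # A and B site of cell n
        X[a, a] = -gl*c**2 - gg*cp**2            # on-site terms
        X[b, b] = -gl*s**2 - gg*sp**2
        X[a, b] = 1j*t1 - gl*np.exp(-1j*ph)*s*c - gg*np.exp(1j*php)*sp*cp
        X[b, a] = 1j*t1 - gl*np.exp(1j*ph)*s*c - gg*np.exp(-1j*php)*sp*cp
        if n < L - 1:                            # inter-cell hopping
            X[b, b+1] = X[b+1, b] = 1j*t2
    return X

def mean_position(X):
    """Average center of mass of the right eigenvectors of H_eff = -iX."""
    _, V = np.linalg.eig(-1j*X)
    w = np.abs(V)**2
    sites = np.arange(X.shape[0])[:, None]
    return np.mean((w*sites).sum(axis=0)/w.sum(axis=0))

L, t2, gl, gg = 20, 1.0, 0.4, 0.2
angles = (np.pi/4, np.pi/4, -np.pi/2, np.pi/2)   # theta, theta', phi, phi'
print(mean_position(damping_matrix(L, 1.0, t2, gl, gg, *angles)))  # skin effect
print(mean_position(damping_matrix(L, 0.0, t2, gl, gg, *angles)))  # t1 = 0: none
\end{verbatim}
With generic parameters the eigenvectors pile up near one edge of the chain, while $t_1=0$, which satisfies the condition above, restores eigenvectors spread over the whole chain.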
\begin{figure}
\includegraphics[width=3 in]{correlationfunctionperiodic}
\caption{Time evolution of $G_{k,A}(t)$ for $k=0,\frac{6}{13}\pi,\frac{11}{10}\pi,\frac{10}{7}\pi$ with $t_1=t_2=1$ and $\gamma_+=0.4,\gamma_-=0$, obtained by numerically solving the equation of motion (\ref{correlation}). The results agree with those in Fig.~(\ref{adjointfermionperiodic}).}\label{correlationfunctionperiodic}
\end{figure}
\section*{Appendix C: Equation of Motion for the Occupation Numbers with the Periodic Boundary Condition}
For the SSH chain (\ref{HforSSH}) subject to the jump operators Eqs.~(\ref{Lforsigmayl}) and (\ref{Lforsigmayg}), in addition to the adjoint fermion approach, the time evolution of $G_{k,A}(t)$ and $G_{k,B}(t)$ in Eqs.~(\ref{gka}) and (\ref{gkb}) can be determined from their equation of motion. To simplify the jump terms in the Lindblad equation, we define $\tilde G_{\sigma\sigma'}(t)\equiv {\rm Tr} [\tilde d_{\sigma}^\dagger \tilde d_{\sigma'}\rho(t)]$ with
\begin{align}
\tilde d_{1} &= d_{k, A}-id_{k, B},\\
\tilde d_{2} &= d_{k, A}+id_{k, B}.
\end{align}
From the Lindblad equation (\ref{Lindblad}), we find
\begin{align}
&\left\{\frac d{dt}-
\left[\begin{matrix}
-4\gamma&-A_{\textbf{Re}}&-A_{\textbf{Re}}&0\\
A_{\textbf{Re}}&2(-iA_{\textbf{Im}}-\gamma)&0&-A_{\textbf{Re}}\\
A_{\textbf{Re}}&0&2(iA_{\textbf{Im}}-\gamma)&-A_{\textbf{Re}}\\
0&A_{\textbf{Re}}&A_{\textbf{Re}}&0
\end{matrix}\right]\right\}
\left[\begin{matrix}
\tilde G_{11}\\ \tilde G_{12}\\ \tilde G_{21}\\ \tilde G_{22}
\end{matrix}\right]\nonumber\\
&=\begin{bmatrix}
2\gamma\\ 0\\0\\0
\end{bmatrix},\label{correlation}
\end{align}
where $A=t_1+t_2e^{-ik}$ and $A_{\textbf{Re}}$ ($A_{\textbf{Im}}$) denotes the real (imaginary) part of $A$. We solve Eq.~(\ref{correlation}) numerically with the initial condition of unit filling. The numerical results of $G_{k,A}(t)$ for $k=0,\frac{6}{13}\pi,\frac{11}{10}\pi,\frac{10}{7}\pi$ with $t_1=t_2=1$ and $\gamma_+=0.4,\gamma_-=0$ are plotted in Fig.~(\ref{correlationfunctionperiodic}); they agree with those obtained by the adjoint fermion approach.
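A minimal integration sketch of Eq.~(\ref{correlation}) (our own, in Python): the unit-filling initial vector $\tilde G_{11}=\tilde G_{22}=2$, $\tilde G_{12}=\tilde G_{21}=0$ and the reconstruction $G_{k,A}=(\tilde G_{11}+\tilde G_{12}+\tilde G_{21}+\tilde G_{22})/4$ follow from the definitions of $\tilde d_{1,2}$ above, while identifying the rate $\gamma$ entering the matrix with the value $0.4$ used in the figure is our assumption.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def G_kA(k, t1=1.0, t2=1.0, gamma=0.4, t_max=10.0):
    A = t1 + t2*np.exp(-1j*k)
    Ar, Ai = A.real, A.imag
    M = np.array([[-4*gamma, -Ar, -Ar, 0],
                  [Ar, 2*(-1j*Ai - gamma), 0, -Ar],
                  [Ar, 0, 2*(1j*Ai - gamma), -Ar],
                  [0, Ar, Ar, 0]], dtype=complex)
    v = np.array([2*gamma, 0, 0, 0], dtype=complex)
    g0 = np.array([2, 0, 0, 2], dtype=complex)  # assumed unit-filling initial state
    sol = solve_ivp(lambda t, g: M @ g + v, (0, t_max), g0, rtol=1e-8)
    # d_A = (d~_1 + d~_2)/2, hence G_kA = (G~_11 + G~_12 + G~_21 + G~_22)/4
    return sol.t, (sol.y[0] + sol.y[1] + sol.y[2] + sol.y[3]).real/4

for k in (0.0, 6*np.pi/13, 11*np.pi/10, 10*np.pi/7):
    t, g = G_kA(k)
    print(f"k = {k:.3f}:  G_kA(0) = {g[0]:.3f},  G_kA(t_max) = {g[-1]:.4f}")
\end{verbatim}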
|
2,877,628,091,240 | arxiv | \section{Introduction}
Quantum mechanics allows distant parties to establish certain correlations that are impossible in the classical world. These correlations are nonlocal, in the sense that they violate a Bell inequality \cite{Bel64,CHSH69}. However, as first shown by Tsirelson \cite{Tsi80}, the quantum violation is still bounded. In a seminal paper, Popescu and Rohrlich \cite{PR94} proposed a hypothetical nonlocal correlation that attains the maximal value for the Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{CHSH69} but still cannot be used to signal from one party to another. Understanding why such stronger-than-quantum correlations have not been observed in nature has since become an active topic of research \cite{Dam05,WW05,BP05,BLM+05,BCU+06,BBL+06,MAG06,SGB+06,BB+07,LPS+07,Bar07,MR08,ABL+09,PPK+09,BS09,SBP09,ABP+09,NW09,Hsu09,GMC+10,
OW10,ABB+10,CSS10,XR11,Hsu11,GWA+11,SS11}. Recently, several information-theoretic principles were proposed as candidates that separate physically realizable correlations from nonphysical ones. In this paper, we will be concerned with two of them --- nontrivial communication complexity \cite{Dam05} and information causality \cite{PPK+09}. Both of them show that allowing certain postquantum correlations would lead to an implausible simplification of communication tasks.
Along one line, van Dam \cite{Dam05} showed that, equipped with many copies of Popescu-Rohrlich (PR) boxes \cite{PR94}, every communication complexity problem can be solved deterministically with a single bit of communication. Brassard \emph{et al.} \cite{BBL+06} extended this result by proving that if PR boxes can be implemented with probability greater than $(3+\sqrt{6})/6 \approx 90.8\%$, then probabilistic communication complexity also collapses. These results were further generalized in Refs.~\cite{BP05,MR08,BS09}.
Along another line, Paw{\l}owski \emph{et al.} \cite{PPK+09} considered a scenario in which Alice holds a database of $N$ independent and random bits, and a distant party, Bob, is asked to guess the $k$-th bit in Alice's database for a random $k \in \{0,1,\dots,N-1\}$. They suggested a principle stating that Bob can gain at most $m$ bits of information about Alice's database by using his local resources and receiving $m$ bits from Alice. This principle was named information causality. When $m=0$, it reduces to no-signalling. It was demonstrated that using many copies of PR boxes Bob can correctly guess any of Alice's bits with certainty by receiving only $1$ bit from her. Moreover, any correlation exceeding Tsirelson's bound for the CHSH inequality violates information causality. The implications of information causality for nonlocality were further explored in Refs.~\cite{ABP+09,Hsu09,CSS10,XR11,Hsu11,GWA+11,SS11}.
A common feature of these two proposals is that the PR box perfectly cracks the information processing task under consideration. It also exhibits extremely strong power for other tasks \cite{WW05,LPS+07}. The reason for the success of this box can be summarized as follows. Essentially, the aforementioned tasks can be viewed as follows: Alice and Bob want to compute some function $f(x,y)$ in a distributed way, where $x$ and $y$ are initially held by Alice and Bob respectively. Note that any boolean (or arithmetic) function can be computed by a circuit consisting of XOR (or addition) and AND (or multiplication) gates. These two types of gates generally do not commute. However, using a PR box, AND operations are ``transformed" into XOR operations,
in that the AND of the two inputs is encoded as the XOR of the two outputs. So Alice and Bob effectively convert the original circuit into a ``circuit" consisting only of XOR operations, which are commutative. Then Alice and Bob can reorder their operations, compact them, and minimize the interaction between them. This argument was made explicit in the discussion of communication complexity \cite{Dam05}. But it also explains the success of the PR box for the information causality task, since that task can be viewed as the communication complexity problem for a special function -- the index function $f(\vec{x},y)\equiv x_y$, with the extra condition that the communication is only from Alice to Bob.
In light of the above observation, we introduce a class of nonlocal boxes called \textit{functional boxes}. This class includes a generalization of the PR box as a special case. A functional box's inputs and outputs
have $p$ possible values, which are identified with the elements of $\mathbb{Z}_p$, where $p$ can be any prime number. This box encodes the value of a function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ of its inputs as the difference of its outputs. In particular, the PR box is just the functional box corresponding to $f(x,y)=xy$ and $p=2$. We find that, as long as $f$ is not \textit{additively separable} (see section 2 for the definition), the corresponding functional box is asymptotically equivalent to a generalized PR box. (More specifically, an additively inseparable function contains a component involving the product of its two variables; we show that this component can be isolated by a difference method, and hence the corresponding functional box can be transformed into a generalized PR box.) We also prove that a generalized PR box can enable perfect distributed computation and make communication complexity trivial. As a result, any functional box equivalent to it has the same power and is unlikely to exist in nature. The next question is then how well these functional boxes might be approximated. We derive several bounds on the proximity between a plausible $p$-nary-input, $p$-nary-output box and these functional boxes, from the principle of information causality. In order to do this, we extend the basic and nested protocols of Ref.~\cite{PPK+09} to the $p$-nary digit case. We also present a generalization of the \textit{depolarization} process that transforms any binary-input binary-output box into an \textit{isotropic} one \cite{MAG06,Sho08}, which might be of independent interest.
\section{Functional Boxes and Communication Complexity}
Let us first review several basic definitions about nonlocal boxes.
\begin{definition}
A bipartite correlation box (or simply a box) is a hypothetical device shared by two spatially separated parties Alice and Bob that receives an input $x \in \mathcal{X}$ from Alice and an input $y \in \mathcal{Y}$ from Bob, and outputs $a \in \mathcal{A}$ to Alice and $b \in \mathcal{B}$ to Bob, according to a joint probability distribution $P(a,b|x,y)$. Without causing ambiguity, we also call this box $P$.
If $P$ satisfies
\begin{equation}\begin{array}{lll}
\ssll{b}{}P(a,b|x,y)&=\ssll{b}{}P(a,b|x,y')&\equiv P(a|x),~~\forall a,x,y,y';\\
\ssll{a}{}P(a,b|x,y)&=\ssll{a}{}P(a,b|x',y)&\equiv P(b|y),~~\forall b,x,x',y,
\end{array}\end{equation}
then it is no-signalling. Namely, Alice cannot signal to Bob via her choice of input to this box and vice versa.
If Alice and Bob can simulate $P$ using shared randomness (without communication between them), then $P$ is local. Otherwise, it is nonlocal.
\end{definition}
For example, the standard PR box is given by
\begin{equation}\begin{array}{lll}
PR(a,b|x,y)=
\begin{cases}
\dfrac{1}{2},~~~~\textrm{if}~~a \oplus b=x \wedge y\\
0,~~~~\textrm{otherwise}
\end{cases},
\end{array}\end{equation}
where $a,b,x,y\in \{0,1\}$. Note that $PR(a|x,y)=PR(b|x,y)=\dfrac{1}{2}$,
$\forall a,b,x,y$. So this box is no-signalling.
In this paper, we focus on no-signalling boxes for which $|\mathcal{X}|=|\mathcal{Y}|=|\mathcal{A}|=|\mathcal{B}|=p$,
where $p$ can be any prime number. Without loss of generality, we assume $\mathcal{X}=\mathcal{Y}=\mathcal{A}=\mathcal{B}
=\{0,1,\dots,p-1\} \equiv \mathbb{Z}_p$, and \textit{all the following computation related to} $x,y,a,b$ \textit{is modulo} $p$.
\begin{definition}
A function $F:\mathbb{Z}^n_p \times \mathbb{Z}^m_p \to \mathbb{Z}_p$ is distributedly computed by Alice and Bob if, when Alice is given any $\vec{x}=(x_0,x_1,\dots,x_{n-1}) \in \mathbb{Z}^n_p$ and Bob is given any $\vec{y}=(y_0,y_1,\dots,y_{m-1}) \in \mathbb{Z}^m_p$, Alice can produce $a \in \mathbb{Z}_p$ and Bob can produce $b \in \mathbb{Z}_p$ such that $a-b=F(\vec{x},\vec{y})$.
\end{definition}
If $F$ can be distributedly computed with certainty (or with constant probability), then the deterministic (or probabilistic) communication complexity of $F$ becomes trivial, since Alice can simply send $a$ to Bob and then Bob can calculate $a-b=F(\vec{x},\vec{y})$.
Let us first consider the following box, which is a straightforward generalization of standard PR box to the $p$-nary input/output case:
\begin{definition}
\begin{equation}\begin{array}{lll}
PR_p(a,b|x,y)=
\begin{cases}
\dfrac{1}{p},~~~~\textrm{if}~~a-b=xy\\
0,~~~~\textrm{otherwise}
\end{cases},
\end{array}\end{equation}
where $a,b,x,y \in \mathbb{Z}_p$.
\end{definition}
In particular, the standard PR box is just $PR_2$. Note that $PR_p(a|x,y)=PR_p(b|x,y)=\dfrac{1}{p}$, $\forall a,b,x,y$. So $PR_p$ is no-signalling.
Given arbitrarily many copies of $PR_p$, Alice and Bob will be able to distributedly compute any function $F:\mathbb{Z}^n_p \times \mathbb{Z}^m_p \to \mathbb{Z}_p$ perfectly. To prove this, first note that $F(\vec{x},\vec{y})$ can always be written as a multivariate polynomial whose degree in each $x_i$ or $y_j$ is no larger than $p-1$:
\begin{equation}\begin{array}{lll}
F(\vec{x},\vec{y})
&=& \ssll{\alpha_0,\dots,\alpha_{n-1}=0}{p-1}
\ssll{\beta_0,\dots,\beta_{m-1}=0}{p-1}
\mu_{\vec{\alpha},\vec{\beta}} \prod\limits_{i=0}^{n-1}x_i^{\alpha_i}
\prod\limits_{j=0}^{m-1} y_j^{\beta_j}
\end{array}\end{equation}
for some $\mu_{\vec{\alpha},\vec{\beta}} \in \mathbb{Z}_p$,
where $\vec\alpha=(\alpha_0,\alpha_1,\dots,\alpha_{n-1})$
and $\vec\beta=(\beta_0,\beta_1,\dots,\beta_{m-1})$. So Alice and Bob can execute the following protocol: for each $(\vec\alpha, \vec\beta)$, they use a $PR_{p}$ as follows: Alice inputs $\prod\limits_{i=0}^{n-1}x_i^{\alpha_i}$ and Bob inputs $\prod\limits_{j=0}^{m-1} y_j^{\beta_j}$, and suppose they
get outputs $a_{\vec{\alpha},\vec{\beta}}$ and
$b_{\vec{\alpha},\vec{\beta}}$ respectively. In the end, Alice sets
$a=\ssll{\vec{\alpha}}{}\ssll{\vec{\beta}}{}\mu_{\vec{\alpha},\vec{\beta}}a_{\vec{\alpha},\vec{\beta}} $ as her final output, and Bob sets
$b=\ssll{\vec{\alpha}}{}\ssll{\vec{\beta}}{}\mu_{\vec{\alpha},\vec{\beta}}b_{\vec{\alpha},\vec{\beta}} $
as his final output. It is easy to verify that $a-b=F(\vec{x},\vec{y})$.
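This protocol is easy to check mechanically. The following self-contained sketch (ours, in Python, for the case $n=m=1$; the ideal $PR_p$ is simulated by classical sampling, which is only a bookkeeping device) verifies that the final outputs always satisfy $a-b=F(x,y) \bmod p$:
\begin{verbatim}
import itertools, random

def pr_p_box(p, x, y):
    """Ideal PR_p: uniformly random outputs with a - b = x*y (mod p)."""
    b = random.randrange(p)
    return (b + x*y) % p, b

def distributed_compute(p, mu, x, y):
    """One PR_p box per coefficient mu[i][j] of F = sum mu_ij x^i y^j."""
    a_tot = b_tot = 0
    for i, j in itertools.product(range(p), repeat=2):
        a_ij, b_ij = pr_p_box(p, pow(x, i, p), pow(y, j, p))
        a_tot = (a_tot + mu[i][j]*a_ij) % p
        b_tot = (b_tot + mu[i][j]*b_ij) % p
    return a_tot, b_tot

p = 5
mu = [[random.randrange(p) for _ in range(p)] for _ in range(p)]  # a random F
F = lambda x, y: sum(mu[i][j]*pow(x, i, p)*pow(y, j, p)
                     for i in range(p) for j in range(p)) % p
for x, y in itertools.product(range(p), repeat=2):
    a, b = distributed_compute(p, mu, x, y)
    assert (a - b) % p == F(x, y)
print("a - b = F(x, y) mod p holds for all inputs")
\end{verbatim}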
Now let us consider a wider class of boxes:
\begin{definition}
For any function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$, the functional box corresponding to $f$ is defined as
\begin{equation}
P^f(a,b|x,y)=
\begin{cases}
\dfrac{1}{p},~~~~\textrm{if}~~a-b=f(x,y)\\
0,~~~~\textrm{otherwise}
\end{cases},
\end{equation}
where $a,b,x,y \in \mathbb{Z}_p$.
\end{definition}
Namely, $P^f$ can be directly used to distributedly compute $f$. In particular, $PR_p$ can be viewed as the functional box corresponding to $f(x,y)=xy$. Note that $P^f(a|x,y)=P^f(b|x,y)=\dfrac{1}{p}$, $\forall a,b,x,y$. So $P^f$ is also no-signalling.
\begin{definition}
A bivariate function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ is additively separable if there exist univariate functions $g,h: \mathbb{Z}_p \to \mathbb{Z}_p$ such that $f(x,y)=g(x)+h(y)$, $\forall x,y \in \mathbb{Z}_p$. Otherwise, $f$ is additively inseparable.
\end{definition}
\begin{definition}
Suppose $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ can be written as
$f(x,y)=\ssll{i,j=0}{p-1} \lambda_{i,j}x^iy^j$ for some $\lambda_{i,j} \in \mathbb{Z}_p$. Define
\begin{equation}\begin{array}{lll}
\Delta(f)=\max\limits_{1 \le i,j \le p-1}\{\mathbb{I}_{\lambda_{i,j} \neq 0}(i+j)\},
\label{eq:deltaf}
\end{array}\end{equation}
where $\mathbb{I}$ is the indicator function (i.e. $\mathbb{I}_E=1$ if $E$ is true, and $0$ otherwise). Namely, $\Delta(f)$ is the maximum of the degrees of the terms in $f$ that are divisible by $xy$.
\end{definition}
Obviously, a function $f$ is additively inseparable if and only if $\Delta(f) \ge 2$. Only an additively inseparable $f$ contains a term related to the product of $x$ and $y$. So one may naturally wonder whether $P^f$ can be used to simulate $PR_p$. If so, then $P^f$ can also benefit distributed computation. We find that this is indeed the case, provided we are given sufficiently many copies of $P^f$.
\begin{definition}
We use the notation $P_1 \to P_2$ to denote the fact that we can use $N$ copies of $P_1$ to simulate a $P_2$ exactly, for some $N \ge 1$. We also use $P_1 \leftrightarrow P_2$ to denote that $P_1 \to P_2$ and $P_2 \to P_1$, i.e. $P_1$ and $P_2$ are asymptotically interconvertible.
\end{definition}
\begin{lemma}
If $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ is additively separable, then
$P^f$ is local. Otherwise, $P^f \leftrightarrow PR_p$.
\label{lem:pf}
\end{lemma}
\begin{proof}
(1) Suppose $f(x,y)=g(x)+h(y)$ for some $g,h:\mathbb{Z}_p \to \mathbb{Z}_p$,
then we can build a local model for $P^f$ as follows:
Alice and Bob first generate a uniformly random variable $z \in \mathbb{Z}_p$, then Alice outputs $a=g(x)+z$ and Bob outputs $b=-h(y)+z$.
(2) Suppose $f$ is additively inseparable. We first show $PR_p \to P^f$. Recall that we have already shown how to use many copies of $PR_p$ to perform distributed computation of any function, including $f$. So Alice and Bob can first use that protocol to obtain $a$ and $b$ such that $a-b=f(x,y)$. Then they generate a uniformly random $z \in \mathbb{Z}_p$ and modify their outputs by $a \to a+z$ and $b \to b+z$.
It remains to show $P^f \to PR_p$. Proof by induction on $\Delta(f)$:
\begin{itemize}
\item Base case: $\Delta(f)=2$. In this case, we have
\begin{equation}\begin{array}{lll}
f(x,y)=\lambda xy+g(x)+h(y)
\end{array}\end{equation}
for some $\lambda \neq 0$ and $g,h:\mathbb{Z}_p \to \mathbb{Z}_p$.
We can use a $P^f$ to simulate a $PR_p$ as follows:
Alice inputs $x$ and Bob inputs $y$ to $P^f$, and suppose they obtain outputs $a$ and $b$ respectively. Then Alice sets
\begin{equation}\begin{array}{lll}
a'=\lambda^{-1}(a-g(x))
\end{array}\end{equation}
as her final output, while Bob sets
\begin{equation}\begin{array}{lll}
b'=\lambda^{-1}(b+h(y))
\end{array}\end{equation}
as his final output. Then we have
\begin{equation}\begin{array}{lll}
a'-b'&=&\lambda^{-1}(a-b-g(x)-h(y))\\
&=&\lambda^{-1}(f(x,y)-g(x)-h(y))\\
&=&xy.
\end{array}\end{equation}
Furthermore, since $a$ (or $b$) is uniformly random, $a'$ (or $b'$) is also uniformly random conditioned on any $(x,y)$.
\item Inductive step: Suppose $P^f \to PR_p$ for any $f$ with $\Delta(f)=k$ for some $k \ge 2$.
Consider any $f$ with $\Delta(f)=k+1 \ge 3$. Such $f(x,y)$ (viewed as a bivariate polynomial) contains a term that is a multiple of $x^2y$ or $xy^2$. We deal with the two cases separately.
In the first case, consider
\begin{equation}\begin{array}{lll}
f^{(1)}(x,y) \equiv f(x+1,y)-f(x,y).
\end{array}\end{equation}
Compared to $f$, the degree of $f^{(1)}$ in $x$ is decreased by $1$, while its degree in $y$ is the same or smaller.
Moreover, it is easy to see
\begin{equation}\begin{array}{lll}
\Delta(f^{(1)})=\Delta(f)-1=k \ge 2.
\end{array}\end{equation}
Thus by induction $P^{f^{(1)}} \to PR_p$.
Furthermore, $P^{f^{(1)}}$ can be simulated with two copies of $P^f$ as follows: for the first $P^f$, Alice inputs $x+1$ and Bob inputs $y$, and assume they receive outputs $a_1$ and $b_1$; for the second $P^f$, Alice inputs $x$ and Bob inputs $y$, and assume
they receive outputs $a_2$ and $b_2$. Then Alice sets $a=a_1-a_2$
as her final output, and Bob sets $b=b_1-b_2$
as his final output. Then we have
\begin{equation}\begin{array}{lll}
a-b&=&(a_1-a_2)-(b_1-b_2)\\
&=&(a_1-b_1)-(a_2-b_2)\\
&=&f(x+1,y)-f(x,y)\\
&=&f^{(1)}(x,y).
\end{array}\end{equation}
So $P^f \to P^{f^{(1)}} \to PR_p$.
A similar argument holds for the second case. But in this case, we consider
\begin{equation}\begin{array}{lll}
f^{(2)}(x,y) \equiv f(x,y+1)-f(x,y),
\end{array}\end{equation}
which satisfies
\begin{equation}\begin{array}{lll}
\Delta(f^{(2)})=\Delta(f)-1=k \ge 2.
\end{array}\end{equation}
Then by induction $P^{f^{(2)}} \to PR_p$.
Furthermore, $P^{f^{(2)}}$ can also be simulated with two copies of $P^f$. Therefore we have $P^f \to P^{f^{(2)}} \to PR_p$.
\begin{remark}
This proof actually yields a recursive strategy to simulate $PR_p$ with $2^{\Delta(f)-2}$ copies of $P^f$. To be specific, we have shown
\begin{equation}\begin{array}{lll}
P^f \to P^{f_1} \to P^{f_2} \to \dots \to P^{f_L} \to PR_p,
\end{array}\end{equation}
where $f_{i}(x,y)=f_{i-1}(x+1,y)-f_{i-1}(x,y)$ or $f_{i}(x,y)=f_{i-1}(x,y+1)-f_{i-1}(x,y)$. The strategy is to use a $P^{f_L}$ to simulate a $PR_p$, where this $P^{f_{L}}$ is simulated with two copies of $P^{f_{L-1}}$, where each copy of $P^{f_{L-1}}$
is simulated with two copies of $P^{f_{L-2}}$,
and so on. Overall, $2^{\Delta(f)-2}$ copies of $P^f$ are used, since $\Delta(f_{i-1})-\Delta(f_{i})=1$ and $\Delta(f_L)=2$.
This strategy can be demonstrated by the following example. Suppose $p=3$ and $f:\mathbb{Z}_3 \times \mathbb{Z}_3 \to \mathbb{Z}_3$
is given by
\begin{equation}\begin{array}{lll}
f(x,y)=x^2y^2+2xy^2+xy+2x.
\end{array}\end{equation}
Using two copies of $P^f$, we can simulate
a $P^{f_1}$ where
\begin{equation}\begin{array}{lll}
f_1(x,y) & = & f(x+1,y)-f(x,y)\\
&=&2xy^2+y+2.
\end{array}\end{equation}
Then using two copies of $P^{f_1}$ (each of
which is simulated with two copies of $P^f$), we can simulate a $P^{f_2}$ where
\begin{equation}\begin{array}{lll}
f_2(x,y) & = & f_1(x,y+1)-f_1(x,y)\\
&=&xy+2x+1.
\end{array}\end{equation}
Finally, we can simulate a $PR_p$ with a $P^{f_2}$ by Alice subtracting $2x+1$ from her output. Overall, four copies of $P^f$ are used to simulate a $PR_p$.
\end{remark}
\end{itemize}
\end{proof}
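The worked example in the remark can also be checked mechanically. The following short sketch (ours) verifies the chain $f \to f_1 \to f_2 \to PR_3$ pointwise over $\mathbb{Z}_3$:
\begin{verbatim}
p = 3
f  = lambda x, y: (x*x*y*y + 2*x*y*y + x*y + 2*x) % p
f1 = lambda x, y: (f((x + 1) % p, y) - f(x, y)) % p    # difference in x
f2 = lambda x, y: (f1(x, (y + 1) % p) - f1(x, y)) % p  # difference in y

for x in range(p):
    for y in range(p):
        assert f1(x, y) == (2*x*y*y + y + 2) % p       # claimed f_1
        assert f2(x, y) == (x*y + 2*x + 1) % p         # claimed f_2
        # Alice's final correction a -> a - (2x + 1) turns P^{f_2} into PR_3:
        assert (f2(x, y) - (2*x + 1)) % p == (x*y) % p
print("f -> f_1 -> f_2 -> PR_3 verified on all 9 inputs")
\end{verbatim}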
Hence there are only two inequivalent classes of functional boxes with respect to asymptotic transformation: those corresponding to additively separable functions are local and cannot benefit distributed computation, while the others are all equivalent to $PR_p$ and can enable perfect distributed computation and make communication complexity trivial. Therefore, we have
\begin{theorem}
In any world where communication complexity is not trivial,
the functional box $P^f$ corresponding to any additively inseparable function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ cannot be implemented perfectly.
\end{theorem}
\section{Limits on Nonlocality from Information Causality}
In the previous section, we have shown that an exact implementation of $P^f$ for any additively inseparable function $f$ is impossible, unless communication complexity collapses. So now the question is how well these boxes might be implemented in nature. In this section, we partially answer this question by deriving, from the principle of information causality, several bounds on the proximity between any plausible $p$-nary-input, $p$-nary-output box and these functional boxes. Before doing that, we first present the following result, which is a key ingredient of our analysis.
\subsection{Generalized Depolarization Process}
It is well known that any binary-input, binary-output box can be transformed into an \textit{isotropic} one:
\begin{equation}\begin{array}{lll}
P_{iso}(\lambda) & \equiv & \lambda PR_2 + (1-\lambda) P_{N}
\\&=& \dfrac{1+\lambda}{2}PR_2 + \dfrac{1-\lambda}{2}\overline{PR_2},
\end{array}\end{equation}
where $P_N$ is the completely random noise
(i.e. $P_N(a,b|x,y)=\dfrac{1}{4}$, $\forall a,b,x,y \in \mathbb{Z}_2$), and $\overline{PR_2}$ is the anti-PR box
(i.e. $\overline{PR_2}(a,b|x,y)=\dfrac{1}{2}$ if
$a+b=xy+1$, and $0$ otherwise,
$\forall a,b,x,y \in \mathbb{Z}_2$), via the so-called \textit{depolarization} process\cite{MAG06,Sho08}:
Alice and Bob generate three independent and uniformly random bits $\alpha$, $\beta$ and $\gamma$, and modify their inputs and outputs by
\begin{equation}\begin{array}{lll}
x &\to x+\alpha, \\
y &\to y+\beta, \\
a &\to a+\beta x+ \alpha \beta+\gamma,\\
b &\to b+\alpha y+\gamma.
\end{array}\end{equation}
Here we prove an analogue of this result for $p$-nary-input, $p$-nary-output boxes.
\begin{definition}
\begin{equation}
PR_{p,j}(a,b|x,y)=
\begin{cases}
\dfrac{1}{p},~~~~\textrm{if}~~a-b=xy-j\\
0,~~~~\textrm{otherwise}
\end{cases},
\end{equation}
where $a,b,x,y,j \in \mathbb{Z}_p$.
\end{definition}
\begin{lemma}
Given any $p$-nary-input, $p$-nary-output box $P$, define
\begin{equation}\begin{array}{lll}
\mu_j=\dfrac{1}{p^2}\ssll{x,y=0}{p-1}P(a-b=xy-j|x,y),
\label{eq:muj}
\end{array}\end{equation}
where
\begin{equation}\begin{array}{lll}
P(a-b=xy-j|x,y)=\ssll{k=0}{p-1}P(a=k,b=k-xy+j|x,y).
\end{array}\end{equation}
Then
\begin{equation}
P \to \sum\limits_{j=0}^{p-1}\mu_j PR_{p,j}.
\end{equation}
\label{lem:dep}
\end{lemma}
\begin{proof}
Consider the following protocol: Alice and Bob generate three independent and uniformly random variables $\alpha,\beta,\gamma \in \mathbb{Z}_p$. They input
\begin{equation}\begin{array}{lll}
x'&=x+\alpha,\\
y'&=y+\beta
\end{array}\end{equation}
to box $P$. Suppose they obtain outputs $a'$ and $b'$. Then they set
\begin{equation}\begin{array}{lll}
a&=a'-\beta x-\alpha\beta+\gamma,\\
b&=b'+\alpha y+\gamma
\end{array}\end{equation}
as their final outputs.
Suppose this protocol realizes a box $\widetilde{P}$.
Then for any given $a,b,x,y$,
\begin{equation}\begin{array}{lll}
\widetilde{P}(a,b|x,y)=\dfrac{1}{p^3}\ssll{\alpha,\beta,\gamma=0}{p-1}P(a',b'|x',y').
\end{array}\end{equation}
Note that
\begin{equation}\begin{array}{lll}
a'-b'-x'y'&=&(a+\beta x+\alpha\beta-\gamma)-(b-\alpha y-\gamma)\\
&&-(x+\alpha)(y+\beta)\\
&=&a-b-xy.
\end{array}\end{equation}
Besides, $(x',y')$ are uniformly random in $\mathbb{Z}_p \times \mathbb{Z}_p$; $a'$ (or $b'$) is also uniformly random in $\mathbb{Z}_p$ conditioned on any $(x',y')$. So, if $a-b=xy-j$, then
\begin{equation}\begin{array}{lll}
\widetilde{P}(a,b|x,y)&=&\dfrac{1}{p^3}\ssll{x',y'=0}{p-1}P(a'-b'=x'y'-j|x',y')\\
&=&\dfrac{\mu_j}{p},
\end{array}\end{equation}
which implies
\begin{equation}\begin{array}{lll}
\widetilde{P}=\ssll{j=0}{p-1}\mu_jPR_{p,j}.
\end{array}\end{equation}
\end{proof}
Hence, any $p$-nary-input, $p$-nary-output box can be transformed into a probabilistic mixture of $PR_{p,j}$'s, each of which is essentially equivalent to $PR_{p}$ up to an additive shift of the outputs.
\begin{remark}
Lemma \ref{lem:dep} can be straightforwardly generalized to the case of any $q$-nary input/output box where $q$ does not have to be prime.
\end{remark}
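Lemma \ref{lem:dep} can be confirmed by direct computation. The sketch below (ours; the input box is a randomly generated conditional distribution) applies the depolarization protocol to $P$ and checks that the result coincides with $\sum_j \mu_j PR_{p,j}$ entrywise:
\begin{verbatim}
import itertools
import numpy as np

p = 3
rng = np.random.default_rng(0)
P = rng.random((p, p, p, p))                    # P[a, b, x, y], an arbitrary box
P /= P.sum(axis=(0, 1), keepdims=True)          # normalize each conditional

# mu_j of Eq. (muj): average probability of the event a - b = x y - j
mu = np.array([sum(P[k, (k - x*y + j) % p, x, y]
                   for k in range(p) for x in range(p) for y in range(p))/p**2
               for j in range(p)])

# depolarized box: average over alpha, beta, gamma of the relabelled P
Pt = np.zeros_like(P)
for a, b, x, y in itertools.product(range(p), repeat=4):
    s = 0.0
    for al, be, ga in itertools.product(range(p), repeat=3):
        ap = (a + be*x + al*be - ga) % p        # inverse of a = a' - beta x - ...
        bp = (b - al*y - ga) % p
        s += P[ap, bp, (x + al) % p, (y + be) % p]
    Pt[a, b, x, y] = s/p**3

# target: sum_j mu_j PR_{p,j}, i.e. mu_j / p on the event a - b = x y - j
target = np.empty_like(P)
for a, b, x, y in itertools.product(range(p), repeat=4):
    target[a, b, x, y] = mu[(x*y - (a - b)) % p]/p
print("max deviation:", np.abs(Pt - target).max())   # ~1e-16
\end{verbatim}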
\subsection{Limits on Nonlocality from Information Causality}
Let us briefly review the principle of information causality \cite{PPK+09}. It was introduced via the following communication task, which is similar to a random access code \cite{AN+02} or oblivious transfer \cite{Rab81,WW05}. Suppose Alice and Bob are two spatially separated parties. Alice receives a string of $N$ random and independent $p$-nary digits $\vec{x}=(x_0,x_1,\dots,x_{N-1}) \in \mathbb{Z}^N_p$. Bob receives a random variable $y \in \{0,1,\dots,N-1\}$ and is asked to give the $y$-th digit of Alice's string. To achieve this, they may share in advance some no-signalling resources such as shared randomness, entangled states or nonlocal boxes. Besides, Alice is allowed to send at most $m$ $p$-nary digits (or equivalently, $m\log p$ bits) to Bob. Let us denote Bob's output by $b$. The degree of their success is quantified by
\begin{equation}\begin{array}{lll}
I \equiv \ssll{i=0}{N-1}I(x_i:b|y=i)
\end{array}\end{equation}
where $I(x_i:b|y=i)$ is the Shannon mutual information between $x_i$ and $b$, under the condition that Bob receives $y=i$. Note that if $P(b=x_i|y=i)=p_i$, then by Fano's inequality,
\begin{equation}\begin{array}{lll}
I \ge N \log p -\ssll{i=0}{N-1} h(p_i)
-\ssll{i=0}{N-1}(1-p_i)\log (p-1)
\label{eq:fano}
\end{array}\end{equation}
where $h(x)=-x\log x-(1-x)\log (1-x)$ is the binary entropy function.
The principle of information causality states that for any physically allowed theories, we must have
\begin{equation}\begin{array}{lll}
I \le m\log p.
\label{eq:ic}
\end{array}\end{equation}
Both classical and quantum correlations satisfy this condition. However, it is unknown whether all postquantum correlations violate this condition.
Now suppose Alice and Bob share an unlimited number of copies of a $p$-nary-input, $p$-nary-output box $P$, and Alice is allowed to send only one $p$-nary digit to Bob, i.e. $m=1$. We are going to investigate how $P$ can help them in this task, and present several limits on $P$ derived from condition (\ref{eq:ic}).
We will consider the cases of $N \le p$ and $N \ge p$ separately.
\subsubsection*{Case 1: $N \le p$}
In this case, Alice receives $\vec{x}=(x_0,x_1,\dots,x_{N-1}) \in \mathbb{Z}^N_p$
and Bob receives $y \in \{0,1,\dots,N-1\}\subseteq \mathbb{Z}_p$.
Bob aims to obtain the value of
\begin{equation}\begin{array}{lll}
F(\vec{x},y) \equiv x_y.
\end{array}\end{equation}
Note that
\begin{equation}\begin{array}{lll}
F(\vec{x},y)=\ssll{i=0}{N-1}[\ppll{0 \le j \neq i \le N-1}{}(i-j)^{-1}(y-j)]x_i
\label{eq:Fvxy1}
\end{array}\end{equation}
for any $\vec{x} \in \mathbb{Z}^N_p$ and $y \in \{0,1,\dots,N-1\}\subseteq \mathbb{Z}_p$.
Moreover, the right-hand side of the above equation has degree $N-1$ in $y$. So $F(\vec{x},y)$ can be rewritten as
\begin{equation}\begin{array}{lll}
F(\vec{x},y)=\ssll{k=0}{N-1} y^k F_k(\vec{x})
\label{eq:Fvxy2}
\end{array}\end{equation}
for some $F_k:\mathbb{Z}^N_p \to \mathbb{Z}_p$. This fact suggests a protocol that generates each term $y^k F_k(\vec{x})$ independently and then sums them together. For $PR_p$, this can be achieved by Alice inputting $F_k(\vec{x})$ and Bob inputting $y^k$; the difference between their outputs is then $y^k F_k(\vec{x})$. For a general box $P$, by lemma \ref{lem:dep}, it can be converted into a mixture of $PR_{p,j}$ boxes, which can be viewed as an imperfect $PR_p$ box with random additive noise. As long as the random noise is not too strong, we can still treat it as a $PR_p$ and achieve a high efficiency. So consider the following protocol:
\begin{protocol}[htbp]
\caption{~~~basicRAC$(p,N,c,P,\vec{x},y)$}
\begin{tabular}{p{0.06 \textwidth}p{0.4 \textwidth}}
\textbf{Setup:} & $p$ is prime. $N \le p$. Alice has $\vec{x} \in \mathbb{Z}^N_p$ and Bob has $y \in \{0,1,\dots,N-1\}$. They share at least
$N-1$ copies of a $p$-nary-input, $p$-nary-output box $P$.
They also choose $c \in \mathbb{Z}_p$.\\
\textbf{Steps:} &
1. Alice and Bob convert each copy of $P$ into a copy of $\widetilde{P}= \ssll{j=0}{p-1}\mu_j PR_{p,j}$ (where $\mu_j$ is given by Eq.(\ref{eq:muj})), using the protocol given in the proof of lemma \ref{lem:dep}.\\
&2. Alice and Bob use $N-1$ copies of box $\widetilde{P}$ as follows: for the $k$-th box, Alice inputs $F_k(\vec{x})$ (which is given by Eqs.(\ref{eq:Fvxy1}) and (\ref{eq:Fvxy2})) and Bob inputs $y^k$, and suppose they get outputs
$a_k$ and $b_k$ respectively, $\forall k=1,2, \dots,N-1$.\\
&3. Alice sends $q=\ssll{k=1}{N-1} a_k + F_0(\vec{x})$ to Bob.\\
&4. After receiving $q$, Bob outputs $b=q-\ssll{k=1}{N-1} b_k-c$.\\
\end{tabular}
\end{protocol}
For an illustration of this protocol, see Fig. \ref{fig:basic}.
\begin{figure}[htbp]
\center
\includegraphics[width=0.9 \columnwidth, height=\columnwidth]{fig1.pdf}\\
\caption{Basic Protocol for $N\le p$. Each black box represents a copy of $\widetilde{P}$ that is obtained from a copy of $P$ using the protocol given in the proof of lemma \ref{lem:dep}. $F_0(\vec{x}),F_1(\vec{x}),\dots,F_{N-1}(\vec{x})$ are given by Eqs.(\ref{eq:Fvxy1}) and (\ref{eq:Fvxy2}). }
\label{fig:basic}
\end{figure}
\begin{remark}
When $p=2$ and $N=2$, we have $F(x_0,x_1,y)=x_0+(x_0+x_1)y$ and hence $F_1(x_0,x_1)=x_0+x_1$, $F_0(x_0,x_1)=x_0$. Then the above protocol with $c=0$ reduces to the basic protocol in Ref.\cite{PPK+09}.
\end{remark}
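Before analysing the efficiency, here is an end-to-end sanity check (our own sketch, in Python) of basicRAC with an ideal $PR_p$ box and $c=0$; it confirms the special case treated next, namely that Bob then recovers $x_y$ with certainty:
\begin{verbatim}
import itertools, random

p, N = 5, 4                                     # N <= p

def lagrange_coeffs(i, N, p):
    """Coefficients of l_i(y) = prod_{j != i} (i - j)^{-1} (y - j) mod p."""
    poly = [1]
    for j in range(N):
        if j == i:
            continue
        inv = pow((i - j) % p, p - 2, p)        # inverse by Fermat's little theorem
        new = [0]*(len(poly) + 1)               # multiply poly by inv*(y - j)
        for d, cf in enumerate(poly):
            new[d] = (new[d] - cf*inv*j) % p
            new[d + 1] = (new[d + 1] + cf*inv) % p
        poly = new
    return poly

C = [lagrange_coeffs(i, N, p) for i in range(N)]  # C[i][k]: coeff of y^k in l_i

def F_k(xs, k):                                 # F_k(x) = sum_i C[i][k] x_i
    return sum(C[i][k]*xs[i] for i in range(N)) % p

def pr_p(x, y):
    b = random.randrange(p)
    return (b + x*y) % p, b

def basic_rac(xs, y):                           # c = 0
    pairs = [pr_p(F_k(xs, k), pow(y, k, p)) for k in range(1, N)]
    q = (sum(a for a, _ in pairs) + F_k(xs, 0)) % p   # Alice's single message
    return (q - sum(b for _, b in pairs)) % p         # Bob's guess

for xs in itertools.product(range(p), repeat=N):
    for y in range(N):
        assert basic_rac(list(xs), y) == xs[y]
print("basicRAC with an ideal PR_p box always returns x_y")
\end{verbatim}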
Let us analyse the efficiency of this protocol.
First, consider the special case
of $P=PR_p$ and $c=0$. In this case, step 1 does not have any effect on $PR_p$, and we have $\widetilde{P}=PR_p$. So in step 2, we have
\begin{equation}\begin{array}{lll}
a_k-b_k=F_k(\vec{x})y^k, ~~~\forall k.
\end{array}\end{equation}
As a result, Bob's output
\begin{equation}\begin{array}{lll}
b=q-\ssll{k=1}{N-1} b_k
=\ssll{k=1}{N-1} (a_k-b_k) + F_0(\vec{x})
=F(\vec{x},y).
\end{array}\end{equation}
So Bob correctly guesses $x_y$ with certainty.
However, by lemma \ref{lem:dep}, $\widetilde{P}$ is generally not $PR_p$, but equals $PR_{p,j}$ with probability $\mu_j$, $\forall j\in \mathbb{Z}_p$. Then
\begin{equation}\begin{array}{lll}
a_k-b_k=F_k(\vec{x})y^k-j_k
\end{array}\end{equation}
where
\begin{equation}\begin{array}{lll}
P(j_k=j)=\mu_j,~~~\forall j\in \mathbb{Z}_p.
\end{array}\end{equation}
And Bob's output is
\begin{equation}\begin{array}{lll}
b=F(\vec{x},y)-\ssll{k=1}{N-1}j_k-c,
\end{array}\end{equation}
which is correct if and only if
\begin{equation}\begin{array}{lll}
\ssll{k=1}{N-1}j_k=-c,
\end{array}\end{equation}
which happens with probability
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\mu},N-1,c) \equiv
\ssll{j_1+\dots+j_{N-1}=-c}{}\ppll{k=1}{N-1}{\mu_{j_k}},
\end{array}\end{equation}
where $\vec{\mu}=(\mu_0,\mu_1,\dots,\mu_{p-1})$. Note that this probability is independent of $y$.
So by Eq.(\ref{eq:fano}), we must have
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\mu},N-1,c) \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
where
\begin{equation}
\tp{x}=h(x)+(1-x)\log (p-1)
\end{equation}
and $\tpi{\cdot}$ is its inverse function
\footnote{For any $p$, the function $\tp{x}$ first monotonically increases, reaching its maximum $\log p$ at $x=1/p$, and then monotonically decreases, as $x$ grows from $0$ to $1$. Only the decreasing part is relevant for an upper bound on a success probability, and $\tpi{\cdot}$ is the inverse function of that part.}, for condition (\ref{eq:ic}) to be satisfied.
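For evaluating such bounds concretely, $\tpi{\cdot}$ can be computed numerically. A minimal helper (ours; natural logarithms are used throughout, which leaves the bound unchanged since both sides of the inequality scale by the same constant):
\begin{verbatim}
import math

def t_p(x, p):
    h = 0.0 if x in (0.0, 1.0) else -x*math.log(x) - (1 - x)*math.log(1 - x)
    return h + (1 - x)*math.log(p - 1)

def t_p_inv(v, p, tol=1e-12):
    """Largest x in [1/p, 1] with t_p(x) = v (bisection, decreasing branch)."""
    lo, hi = 1.0/p, 1.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if t_p(mid, p) > v:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

p, N = 2, 2
print(t_p_inv((N - 1)/N*math.log(p), p))   # ~0.8899, the familiar binary bound
\end{verbatim}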
Note that $\forall c \in \mathbb{Z}_p$, $\forall M \ge 1$,
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\mu},M,c)
&=&\ssll{j_1+\dots+j_{M}=-c}{}\ppll{k=1}{M}{\mu_{j_k}}\\
&=&\dfrac{1}{p}\textrm{tr}(Z_p^{c}(\ssll{j=0}{p-1}\mu_j Z^j_p)^{M})\\
&=&\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\mu_j\omega_p^{jk})^{M}.
\end{array}\end{equation}
where $Z_p=\textrm{diag}(1,\omega_p,\dots,\omega_p^{p-1})$
is the $p$-dimensional generalization of Pauli $Z$ matrix,
$\omega_p=e^{i{2\pi}/{p}}$, and in the second step we use $\textrm{tr}(Z^i_p)=0$, $\forall i=1,2,\dots,p-1$. Therefore,
\begin{theorem}
In any world where information causality holds,
any $p$-nary-input, $p$-nary-output box $P$ satisfies:
$\forall N \le p$, $\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\mu_j\omega_p^{jk})^{N-1} \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
where $\mu_j$ is defined as Eq.(\ref{eq:muj}).
\label{thm:icpr}
\end{theorem}
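The character-sum identity underlying this bound is easy to confirm numerically. In the sketch below (ours), $\vec{\mu}$ is a randomly generated distribution, and the direct enumeration of $\chi_p(\vec\mu,M,c)$ is compared with the discrete-Fourier form:
\begin{verbatim}
import itertools
import numpy as np

p, M = 5, 3
rng = np.random.default_rng(1)
mu = rng.random(p); mu /= mu.sum()
w = np.exp(2j*np.pi/p)

for c in range(p):
    direct = sum(np.prod(mu[list(js)])
                 for js in itertools.product(range(p), repeat=M)
                 if sum(js) % p == (-c) % p)
    fourier = sum(w**(c*k)*(mu @ w**(k*np.arange(p)))**M
                  for k in range(p))/p
    assert abs(direct - fourier) < 1e-12
print("chi_p(mu, M, c) identity verified for all c")
\end{verbatim}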
\subsubsection*{Case 2: $N \ge p$}
Assume $N=p^n$ and $y=\ssll{i=0}{n-1}y_ip^i$ for $y_i \in \mathbb{Z}_p$. We present a recursive protocol that calls the basicRAC protocol as a subroutine. It is the $p$-dimensional generalization of the one given in Ref.\cite{PPK+09}.
\begin{protocol}[htb]
\caption{~~~recursiveRAC$(p,N,c,P,\vec{x},y)$}
\begin{tabular}{p{0.06 \textwidth}p{0.4 \textwidth}}
\textbf{Setup:} & $p$ is prime. $N=p^n$ for some $n \ge 1$. Alice has $\vec{x} \in \mathbb{Z}^N_p$ and Bob has $y=\ssll{i=0}{n-1}y_ip^i$ where $y_i\in \mathbb{Z}_p$.
They share at least $N-1$ copies of a $p$-nary-input, $p$-nary-output box $P$. They also choose $c \in \mathbb{Z}_p$.\\
\textbf{Steps:} & If $N=p$, then Alice and Bob execute basicRAC$(p,N,c,P,\vec{x},y)$. Otherwise:
\begin{itemize}
\item \textbf{Alice}: she divides her input $\vec{x}$ into $\frac{N}{p}$ substrings:
$\vec{z}_0\equiv(x_0,x_1,\dots,x_{p-1})$,
$\vec{z}_1\equiv(x_p,x_{p+1},\dots,x_{2p-1})$,
$\dots$, $\vec{z}_{\frac{N}{p}-1} \equiv(x_{N-p},x_{N-p+1},\dots,x_{N-1})$.
For $k=0,1,\dots,\frac{N}{p}-1$, she executes her part of basicRAC$(p,p,c,P,\vec{z}_k,y_0)$, except that she does not send her message (which is denoted by $q_k$). Then she executes her part of recursiveRAC$(p,\frac{N}{p},0,P,\vec{q},y')$,
where $\vec{q}=(q_0,q_1,\dots,q_{\frac{N}{p}-1})$ and $y'=\ssll{i=1}{n-1}y_ip^{i-1}$.
\item \textbf{Bob}: he executes his part of recursiveRAC$(p,\dfrac{N}{p},0,P,\vec{q},y')$. (Here still $y'=\ssll{i=1}{n-1}y_ip^{i-1}$.) Suppose the output from this protocol is $\hat{q}$. Then he executes his part of basicRAC$(p,p,c,P,\vec{z}_{y'},y_0)$, except that he uses $\hat{q}$ as the message received from Alice. The output from this protocol is set as his final output.
\end{itemize}
\end{tabular}
\end{protocol}
Fig. \ref{fig:recursive} illustrates an example of this protocol for $p=3$ and $N=9$.
\begin{figure}[htbp]
\center
\includegraphics[width=0.9 \columnwidth, height=1.2 \columnwidth]{fig2.pdf}\\
\caption{Recursive protocol for $N \ge p$. Here we show an example for $p=3$ and $N=9$.
Suppose Alice receives $\vec{x}=(x_0,x_1,\dots,x_8)\in \mathbb{Z}^9_3$
and Bob receives $y=(y_1y_0)_3 \in \{0,1,\dots,8\}$. They also share at least $N-1=8$
copies of some box $P$. The protocol consists of three level-$0$ and one level-$1$
executions of the basic protocol illustrated in Fig.\ref{fig:basic}. For Alice, she executes
the three level-$0$ basic protocols with inputs $(x_0,x_1,x_2)$, $(x_3,x_4,x_5)$ and $(x_6,x_7,x_8)$
respectively. Suppose her messages (which are not really sent) are $q_0,q_1,q_2$ respectively.
Then she executes the level-$1$ basic protocol with input $(q_0,q_1,q_2)$,
and actually sends her message for this one. For Bob, he executes the level-$1$ basic protocol
with input $y_1$, and suppose his output is $\widehat{q}$. Then he executes only the $y_1$-th level-$0$ basic protocol, with input $y_0$ (this figure shows the case $y_1=1$). During this protocol he
pretends that $\widehat{q}$ is the message received from Alice. Finally, this basic protocol's output
$b$ is set as Bob's final output. Note that each basic protocol costs $p-1=2$ copies of box $P$, so
Alice accesses a total of $8$ copies of $P$, while Bob accesses only $4$ copies of $P$.}
\label{fig:recursive}
\end{figure}
The recursiveRAC protocol can be viewed as a level-$n$ pyramid of basicRAC protocols. Its idea is that Alice uses level $k+1$ to transmit her messages generated at level $k$, while Bob uses level $k+1$ to reveal the message he needs
at level $k$, $\forall k=0,1,\dots,n-1$. Although Alice accesses a total of $(p-1)\ssll{k=0}{n-1}p^k=N-1$ copies of $\widetilde{P}$, Bob accesses only
$n(p-1)$ copies of $\widetilde{P}$, and only these boxes are truly relevant to his final output. Each of these boxes contributes to his final output an additive shift that equals $j$ with probability $\mu_j$,
$\forall j \in \mathbb{Z}_p$. So Bob's final guess is correct if and only if all these additive shifts sum to $-c$, which happens with probability
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\mu},n(p-1),c) &=& \ssll{j_1+\dots+j_{n(p-1)}=-c}{}\ppll{k=1}{n(p-1)}{\mu_{j_k}}\\
&=&\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\mu_j\omega_p^{jk})^{n(p-1)}.
\end{array}\end{equation}
So, by Eq.(\ref{eq:fano}), we must have
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\mu},n(p-1),c) \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
otherwise condition (\ref{eq:ic}) is violated. Thus,
\begin{theorem}
In any world where information causality holds,
any $p$-nary-input, $p$-nary-output box $P$ satisfies:
$\forall n\ge 1$, $N=p^n$, $\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\mu_j\omega_p^{jk})^{n(p-1)} \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
where $\mu_j$ is defined as Eq.(\ref{eq:muj}).
\label{thm:icpr2}
\end{theorem}
\subsubsection{Bounds with respect to general functional boxes}
Theorems \ref{thm:icpr} and \ref{thm:icpr2} can be viewed as giving bounds on the proximity between a plausible $p$-nary-input, $p$-nary-output box and $PR_p$. By lemma \ref{lem:pf}, $P^f$ and $PR_p$ are interconvertible for any additively inseparable function $f$. So there should also be bounds on the proximity between a plausible $p$-nary-input, $p$-nary-output box and the corresponding functional box. In what follows, we give several such bounds.
\begin{definition}
For any function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$, define
\begin{equation}
P^f_j(a,b|x,y)=
\begin{cases}
\dfrac{1}{p},~~~~\textrm{if}~~a-b=f(x,y)-j\\
0,~~~~\textrm{otherwise}
\end{cases},
\end{equation}
where $a,b,x,y,j \in \mathbb{Z}_p$.
\end{definition}
$P^f_j$ and $P^f$ are essentially equivalent, except for an additive shift of the outputs.
We will consider the cases $\Delta(f)=2$ and $\Delta(f)>2$ separately.\\
\textbf{Case 1}: $\Delta(f)=2$.
Suppose
\begin{equation}\begin{array}{lll}
f(x,y)=\lambda xy+g(x)+h(y)
\end{array}\end{equation}
for some $\lambda \neq 0$ and $g,h:\mathbb{Z}_p \to \mathbb{Z}_p$.
Given any $p$-nary-input, $p$-nary-output box $P$, define
\begin{equation}\begin{array}{lll}
\nu_j=\dfrac{1}{p^2}\ssll{x,y=0}{p-1}P(a-b=f(x,y)-j|x,y)
\label{eq:nuj}
\end{array}\end{equation}
where
\begin{equation}\begin{array}{lll}
&&P(a-b=f(x,y)-j|x,y)\\
&=&\ssll{k=0}{p-1}P(a=k,b=k-f(x,y)+j|x,y).
\end{array}\end{equation}
Then we have
\begin{equation}\begin{array}{lll}
P \to \ssll{j=0}{p-1}\nu_j P^f_j.
\label{eq:ppfj}
\end{array}\end{equation}
To prove this, consider the following protocol:
Alice and Bob generate three independent and uniformly random variables $\alpha$, $\beta$, $\gamma \in \mathbb{Z}_p$.
Alice inputs
\begin{equation}\begin{array}{lll}
x'=x+\alpha
\end{array}\end{equation}
to $P$, and Bob inputs
\begin{equation}\begin{array}{lll}
y'=y+\beta
\end{array}\end{equation}
to $P$. Suppose they receive outputs $a'$ and $b'$ respectively. Then Alice sets
\begin{equation}\begin{array}{lll}
a=a'-\lambda \beta x -\lambda\alpha\beta -g(x+\alpha)+g(x)
+\gamma
\end{array}\end{equation}
as her final output,
and Bob sets
\begin{equation}\begin{array}{lll}
b=b'+\lambda \alpha y+h(y+\beta)-h(y)+\gamma
\end{array}\end{equation}
as his final output. Suppose this protocol realizes a box $\widehat{P}$.
Then for any given $a,b,x,y$,
\begin{equation}\begin{array}{lll}
\widehat{P}(a,b|x,y)=\dfrac{1}{p^3}\ssll{\alpha,\beta,\gamma=0}{p-1}P(a',b'|x',y').
\end{array}\end{equation}
Note that
\begin{equation}\begin{array}{lll}
a'-b'-f(x',y')&=&(a+\lambda \beta x +\lambda\alpha\beta +g(x+\alpha)-g(x)\\
&&-\gamma)-(b-\lambda \alpha y-h(y+\beta)+h(y)\\
&&-\gamma)-(\lambda (x+\alpha)(y+\beta)+g(x+\alpha)\\
&&+h(y+\beta))\\
&=&a-b-f(x,y).
\end{array}\end{equation}
Besides, $(x',y')$ are uniformly random in $\mathbb{Z}_p \times \mathbb{Z}_p$; $a'$ (or $b'$) is also uniformly random in $\mathbb{Z}_p$ conditioned on any $(x',y')$. So, if $a-b=f(x,y)-j$, then
\begin{equation}\begin{array}{lll}
\widehat{P}(a,b|x,y)&=&\dfrac{1}{p^3}\ssll{x',y'=0}{p-1}P(a'-b'=f(x',y')-j|x',y')\\
&=&\dfrac{\nu_j}{p},
\end{array}\end{equation}
which implies
\begin{equation}\begin{array}{lll}
\widehat{P}=\ssll{j=0}{p-1}\nu_jP^f_j.
\end{array}\end{equation}
Now recall that in the proof of lemma {\ref{lem:pf}} we gave the following protocol that converts a $P^f$ to a $PR_p$: Alice inputs $x$ and Bob inputs $y$, and suppose they receive $a$ and $b$. Their final outputs are $a'=\lambda^{-1}(a-g(x))$ and $b'=\lambda^{-1}(b+h(y))$. Note
\begin{equation}\begin{array}{lll}
a'-b'-xy=\lambda^{-1}(a-b-f(x,y)).
\end{array}\end{equation}
So this protocol converts a $P^f_j$ to a $PR_{p,\lambda^{-1}j}$. Hence, we have
\begin{equation}\begin{array}{lll}
\ssll{j=0}{p-1}\nu_jP^f_j \to \ssll{j=0}{p-1}\nu_jPR_{p,\lambda^{-1}j}.
\label{eq:pfjprpj}
\end{array}\end{equation}
Combining Eq.(\ref{eq:ppfj}) and (\ref{eq:pfjprpj}),
we obtain
\begin{equation}\begin{array}{lll}
P \to \ssll{j=0}{p-1}\nu_j PR_{p,\lambda^{-1}j}.
\end{array}\end{equation}
Then by theorems \ref{thm:icpr} and
\ref{thm:icpr2}, we get
\begin{theorem}
In any world where information causality holds,
for any additively inseparable function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ with $\Delta(f)=2$ and any $p$-nary-input, $p$-nary-output box $P$, we have:
\begin{itemize}
\item $\forall N \le p$, $\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\nu_j\omega_p^{jk})^{N-1} \le \tpi{\dfrac{N-1}{N}\log p};
\end{array}\end{equation}
\item $\forall n \ge 1$, $N=p^n$, $\forall
c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p}\ssll{k=0}{p-1}\omega_p^{ck}(\ssll{j=0}{p-1}\nu_j\omega_p^{jk})^{n(p-1)} \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
where $\nu_j$ is defined as Eq.(\ref{eq:nuj}).
\end{itemize}
\label{thm:icpf1}
\end{theorem}
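The elementary conversion from $P^f_j$ to $PR_{p,\lambda^{-1}j}$ used in the derivation above can be verified pointwise. A short sketch (ours; the univariate parts $g$ and $h$ below are arbitrary illustrative choices):
\begin{verbatim}
p, lam = 5, 3
g = lambda x: (2*x*x + 1) % p                   # arbitrary univariate parts
h = lambda y: (4*y + 2) % p
f = lambda x, y: (lam*x*y + g(x) + h(y)) % p
lam_inv = pow(lam, p - 2, p)

for j in range(p):
    for x in range(p):
        for y in range(p):
            for b in range(p):
                a = (b + f(x, y) - j) % p       # an event of P^f_j
                ap = lam_inv*(a - g(x)) % p     # Alice's relabelling
                bp = lam_inv*(b + h(y)) % p     # Bob's relabelling
                assert (ap - bp) % p == (x*y - lam_inv*j) % p
print("P^f_j -> PR_{p, lam^{-1} j} verified pointwise")
\end{verbatim}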
\textbf{Case 2}: $\Delta(f)>2$.
Given an arbitrary $p$-nary-input, $p$-nary-output box $P$, we change it slightly as follows: Alice and Bob generate a uniformly random $\gamma \in \mathbb{Z}_p$, and modify their outputs by $a \to a+\gamma$ and $b \to b+\gamma$. Suppose this modified box is $P'$. Then we have
\begin{equation}\begin{array}{lll}
P'(a,b|x,y)=\dfrac{1}{p}E_{x,y,f(x,y)-a+b}
\end{array}\end{equation}
where
\begin{equation}\begin{array}{lll}
E_{x,y,j} & = & P(a-b=f(x,y)-j|x,y)\\
&=& \ssll{k=0}{p-1}P(a=k,b=k-f(x,y)+j|x,y).
\label{eq:exyj}
\end{array}\end{equation}
Define
\begin{equation}\begin{array}{lll}
\nu_j=\min\limits_{x,y \in \mathbb{Z}_p}{E_{x,y,j}}.
\label{eq:nuj2}
\end{array}\end{equation}
Then we have
\begin{equation}\begin{array}{lll}
P'=\ssll{j=0}{p-1} \nu_j P^f_j + (1-\ssll{j=0}{p-1} \nu_j)P''
\end{array}\end{equation}
for some box $P''$.
Now recall that in the proof of lemma \ref{lem:pf},
we have given a recursive protocol that transforms $M \equiv 2^{\Delta(f)-2}$ copies of $P^f$ into a copy of $PR_p$. Let us see what happens if we apply that protocol to $P'$. Suppose the resulting box is $\widetilde{P}$.
Note that $P'$ acts as $P^f_j$ with probability $\nu_j$ (or acts as $P''$ with probability $1-\ssll{j=0}{p-1}\nu_j$). So each time Alice and Bob access a $P'$, their outputs are basically the same as those of $P^f$, except for an additive shift which is $j$ with probability $\nu_j$ (or, with probability $1-\ssll{j=0}{p-1}\nu_j$, the outputs are useless). These additive shifts are multiplied by a factor $\lambda^{-1}$ for some $\lambda\neq 0$ (see Eq.(\ref{eq:pfjprpj})). And since we use a difference method, the overall shift in the final output is the sum of half of these multiplied shifts minus the sum of the other half. So,
\begin{equation}\begin{array}{lll}
\widetilde{P} = \ssll{j=0}{p-1}{\mu_j} PR_{p,j} +(1-\ssll{j=0}{p-1}\mu_j)P''',
\end{array}\end{equation}
where
\begin{equation}\begin{array}{lll}
\mu_j &=& \ssll{(j_1,\dots,j_M) \in \mathcal{S}_{M,j}} {}
\ppll{k=1}{M}\nu_{j_k}
\label{eq:mujnuj}
\end{array}\end{equation}
in which
\begin{equation}\begin{array}{lll}
\mathcal{S}_{M,j}\equiv\{(j_1,\dots,j_M)\in \mathbb{Z}^M_p:
\ssll{k=1}{\frac{M}{2}}j_k - \ssll{k=\frac{M}{2}+1}{M}j_k=\lambda j\}
\label{eq:smj}
\end{array}\end{equation}
and $P'''$ is some box.
Then, by lemma \ref{lem:dep}, we have
\begin{equation}\begin{array}{lll}
\widetilde{P} \to \widehat{P}=\ssll{j=0}{p-1}\widehat{\mu_j}PR_{p,j}
\end{array}\end{equation}
for some $\widehat{\mu_j}\ge \mu_j$.
Now assume Alice and Bob execute basicRAC$(p,N,c,\widehat{P},\vec{x},y)$ for $N \le p$. Then Bob's guess is correct with probability
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\widehat{\mu}},N-1,c)&=& \ssll{j_1+\dots+j_{N-1}=-c}{}\ppll{k=1}{N-1}{\widehat{\mu_{j_k}}}\\
&\ge &
\ssll{j_1+\dots+j_{N-1}=-c}{}\ppll{k=1}{N-1}{\mu_{j_k}}\\
&=&
\ssll{(j_1,\dots,j_{M(N-1)}) \in \mathcal{S}_{M(N-1),-c}}{}\ppll{k=1}{M(N-1)}{\nu_{j_k}}\\
& \equiv & \sigma_p (\vec{\nu}, M(N-1), c),
\end{array}\end{equation}
where in the second step we use $\widehat{\mu_j} \ge \mu_j \ge 0$, in the third step
we use Eqs.(\ref{eq:mujnuj}) and (\ref{eq:smj}), and in the last step we define
$\vec\nu=(\nu_0,\nu_1,\dots,\nu_{p-1})$.
So we must have
\begin{equation}\begin{array}{lll}
\sigma_p (\vec{\nu}, M(N-1), c) \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
otherwise condition (\ref{eq:ic}) is violated.
Similarly, if Alice and Bob execute recursiveRAC$(p,N,c,\widehat{P},\vec{x},y)$ for $N=p^n$, then Bob's guess is correct with probability
\begin{equation}\begin{array}{lll}
\chi_p(\vec{\widehat{\mu}},n(p-1),c)
\ge \sigma_p (\vec{\nu}, Mn(p-1), c).
\end{array}\end{equation}
So unless
\begin{equation}\begin{array}{lll}
\sigma_p (\vec{\nu}, Mn(p-1), c) \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
condition (\ref{eq:ic}) is violated.
Note that $\forall L \ge 1$, $\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\sigma_p (\vec{\nu}, L, c)
&=&
\ssll{(j_1,\dots,j_L) \in \mathcal{S}_{L,-c}}{}\ppll{k=1}{L}{\nu_{j_k}}\\
&=& \dfrac{1}{p}\mathrm{tr}(Z_p^{\lambda c} (\ssll{j=0}{p-1}{\nu_j Z^j_p})^{\frac{L}{2}}
(\ssll{j=0}{p-1}{\nu_j Z^{-j}_p})^{\frac{L}{2}})\\
&=& \dfrac{1}{p} \ssll{k=0}{p-1}
\omega_p^{\lambda c k}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{jk}})^{\frac{L}{2}}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{-jk}})^{\frac{L}{2}},
\end{array}\end{equation}
where in the second step we use $\textrm{tr}(Z^i_p)=0$, $\forall i=1,2,\dots,p-1$. So we have
\begin{theorem}
In any world where information causality holds,
for any additively inseparable function $f:\mathbb{Z}_p \times \mathbb{Z}_p \to \mathbb{Z}_p$ with $\Delta(f)>2$ and any
$p$-nary-input, $p$-nary-output box $P$, we have:
\begin{itemize}
\item $\forall N \le p$, $L=(N-1)2^{\Delta(f)-2}$,
$\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p} \ssll{k=0}{p-1}
\omega_p^{ c k}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{jk}})^{\frac{L}{2}}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{-jk}})^{\frac{L}{2}} \le \tpi{\dfrac{N-1}{N}\log p};
\end{array}\end{equation}
\item $\forall n \ge 1$, $N=p^n$, $L=n(p-1)2^{\Delta(f)-2}$,
$\forall c \in \mathbb{Z}_p$,
\begin{equation}\begin{array}{lll}
\dfrac{1}{p} \ssll{k=0}{p-1}
\omega_p^{ c k}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{jk}})^{\frac{L}{2}}
(\ssll{j=0}{p-1}{\nu_j \omega_p^{-jk}})^{\frac{L}{2}} \le \tpi{\dfrac{N-1}{N}\log p},
\end{array}\end{equation}
where $\nu_j$ is defined as Eqs.(\ref{eq:exyj}) and (\ref{eq:nuj2}).
\end{itemize}
\label{thm:icpf2}
\end{theorem}
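The $\sigma_p$ character-sum identity used above can be confirmed in the same way as the $\chi_p$ identity. In the sketch below (ours), the constrained enumeration over $\mathcal{S}_{L,-c}$ is compared with the Fourier form for a randomly generated $\vec\nu$:
\begin{verbatim}
import itertools
import numpy as np

p, L, lam = 3, 4, 2                             # L even, lam != 0 as in Eq. (smj)
rng = np.random.default_rng(2)
nu = rng.random(p); nu /= nu.sum()
w = np.exp(2j*np.pi/p)

for c in range(p):
    direct = sum(np.prod(nu[list(js)])
                 for js in itertools.product(range(p), repeat=L)
                 if (sum(js[:L//2]) - sum(js[L//2:])) % p == (-lam*c) % p)
    fourier = sum(w**(lam*c*k)
                  * (nu @ w**(k*np.arange(p)))**(L//2)
                  * (nu @ w**(-k*np.arange(p)))**(L//2)
                  for k in range(p))/p
    assert abs(direct - fourier) < 1e-12
print("sigma_p(nu, L, c) identity verified for all c")
\end{verbatim}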
\begin{remark}
Since quantum correlations satisfy the principle of
information causality, theorems \ref{thm:icpr}, \ref{thm:icpr2},
\ref{thm:icpf1} and \ref{thm:icpf2} all apply to them.
\end{remark}
\section{Conclusion}
In summary, we have proposed the class of functional boxes, which incorporates the generalized PR boxes as special cases. Every functional box corresponding to an additively inseparable function is asymptotically equivalent to a generalized PR box, which can enable perfect distributed computation and make communication complexity trivial. So all such functional boxes are unlikely to exist. Furthermore, we investigated how close a general box can be to these functional boxes without violating the principle of information causality.
Our work raises many new questions:
First, we have shown that if the $PR_p$ box can be implemented exactly, it would lead to the collapse of \emph{deterministic} communication complexity. But we do not know how much noise it can tolerate while still making \emph{probabilistic} communication complexity trivial. And what about a general $P^f$?
Second, in the proof of lemma \ref{lem:pf}, we gave a protocol that
transforms $2^{\Delta(f)-2}$ copies of $P^f$ into a $PR_p$. That protocol is universal, but might not be optimal for some $f$. If we can simulate a $PR_p$ with fewer copies of $P^f$, then the bounds in theorem \ref{thm:icpf2} can be improved accordingly. In fact,
can we directly use $P^f$ to perform distributed computation, instead of first converting it into $PR_p$?
Third, as pointed out in Ref.\cite{ABL+09}, the set of physically allowed boxes should form a closed set under local wirings. Namely, if a set of boxes $P_1,P_2,\dots,P_m$ are all allowed by a physical theory, then a new box $\widehat{P}$ obtained by locally connecting these boxes should also be allowed by this theory. Conversely, if $\widehat{P}$ is not allowed, then at least one of $P_1,P_2,\dots,P_m$ should be prohibited. Since we have already obtained a set of implausible postquantum correlations, can we use this approach to rule out more?
Finally, here we have only considered bipartite $p$-nary-input, $p$-nary-output boxes. It would be interesting to extend our results to more general boxes with arbitrary numbers of inputs and outputs, as well as to multipartite boxes.
\section*{Acknowledgments}
This research was supported by NSF Grant CCR-0905626
and ARO Grant W911NF-09-1-0440.
|
2,877,628,091,241 | arxiv | \partial{\partial}
\def\parallel{\parallel}
\def\rangle{\rangle}
\def\subset{\subset}
\def\subseteq{\subseteq}
\def\simeq{\simeq}
\def\supset{\supset}
\def\supseteq{\supseteq}
\def\mathop{\rm tr}\nolimits{\triangle}
\def\times{\times}
\def\underline{\underline}
\def\wedge{\wedge}
\def\widehat{\widehat}
\def\widetilde{\widetilde}
\def\rightarrow{\rightarrow}
\def\leftarrow{\leftarrow}
\def\longrightarrow{\longrightarrow}
\def\longleftarrow{\longleftarrow}
\def\leftrightarrow{\leftrightarrow}
\def\longleftrightarrow{\longleftrightarrow}
\def\hookrightarrow{\hookrightarrow}
\def\hookleftarrow{\hookleftarrow}
\def{\cal R}{\Rightarrow}
\def\Longrightarrow{\Longrightarrow}
\def\Leftarrow{\Leftarrow}
\def\Longleftarrow{\Longleftarrow}
\def\Leftrightarrow{\Leftrightarrow}
\def\Longleftrightarrow{\Longleftrightarrow}
\def\mathop{\rm Ann}\nolimits{\mathop{\rm Ann}\nolimits}
\def\mathop{\rm Ad}\nolimits{\mathop{\rm Ad}\nolimits}
\def\mathop{\rm alg}\nolimits{\mathop{\rm alg}\nolimits}
\def\mathop{\rm Alg}\nolimits{\mathop{\rm Alg}\nolimits}
\def\mathop{\rm and}\nolimits{\mathop{\rm and}\nolimits}
\def\mathop{\rm arc}\nolimits{\mathop{\rm arc}\nolimits}
\def\mathop{\rm Aut}\nolimits{\mathop{\rm Aut}\nolimits}
\def\mathop{\rm card}\nolimits{\mathop{\rm card}\nolimits}
\def\mathop{\rm Card}\nolimits{\mathop{\rm Card}\nolimits}
\centerline{\bf Predictions from Quantum Cosmology}
\vglue 1cm
\centerline{Alexander Vilenkin\footnote{*)}{\sevenrm On leave from Tufts
University. E-mail address: AVILENKI@PEARL.TUFTS.EDU}}
\bigskip
\centerline{\sevenrm Institut des Hautes Etudes Scientifiques}
\centerline{\sevenrm 35, route de Chartres}
\centerline{\sevenrm 91440 Bures-sur-Yvette}
\centerline{\sevenrm FRANCE}
\vglue 4cm
\baselineskip 2 \normalbaselineskip
The world view suggested by quantum cosmology is that inflating
universes with all possible values of the fundamental constants are
spontaneously created out of nothing. I explore the consequences of the
assumption that we are a ``typical'' civilization living in this metauniverse.
The conclusions include inflation with an extremely flat potential and low
thermalization temperature, structure formation by topological defects, and an
appreciable cosmological constant.
\vfill\eject
\baselineskip 2 \normalbaselineskip
Why do the constants of Nature take the particular values
that they are observed to have in our universe? It certainly appears that the
constants have not been selected at random. Assuming that the particle masses
are bounded by the Planck mass $m_p$ and the coupling constants are $\lesssim
1$, one expects that a random selection would give all masses $\sim m_p$ and
all couplings $\sim 1$. The cosmological constant would then be $\Lambda \sim
m_p^2$ and the corresponding vacuum energy $\rho_v \sim m_p^4$. In contrast,
some of the particle masses are more than 20 orders of magnitude below $m_p$,
and the actual value of $\rho_v$ is $\lesssim 10^{-120}\, m_p^4$. (I use the
system of units in which $\hbar = c = 1$.)
\medskip
It has been argued [1] that the values of the constants
are, to a large degree, determined by anthropic considerations: these values
should be consistent with the existence of conscious observers who can wonder
about them. If one assumes that the production of heavy elements in stars
and
their dispersal in supernova explosions are essential for the evolution of
life, then one finds that this Anthropic Principle imposes surprisingly
stringent constraints on the electron, proton and neutron masses ($m_e$,
$m_{pr}$ and $m_n$), the $W$-boson mass $m_W$, and the fine structure constant
$e^2$. An anthropic bound on the cosmological constant can be obtained by
requiring that gravitationally bound systems are formed before the universe is
dominated by the vacuum energy [2].
\medskip
I should also mention the popular view that there exists a unique logically
consistent Theory of Everything and that all constants can in principle be
determined from that theory. The problem, however, is that the constants we
observe depend not only on the fundamental Lagrangian, but also on the vacuum
state, which is likely not to be unique. For example, in higher-dimensional
theories, like superstring theory, the constants in the four-dimensional world
depend on the way in which the extra dimensions are compactified. Moreover,
Coleman has argued [3] that all constants appearing in sub-Planckian physics
become totally undetermined due to Planck-scale wormholes connecting distant
regions of spacetime.
\medskip
Finally, it has been suggested that the explanation for the values of some
constants can be found in quantum cosmology. The wave function of the universe
gives a probability distribution for the constants which can be peaked at some
particular values [4]. Wormhole effects can also contribute an important factor
to the probability [5]. Smolin [6] has argued that new expanding regions of the
universe may be created as a result of gravitational collapse due to quantum
gravity effects. Assuming that the constants in these ``daughter'' regions
deviate slightly from their values in the ``mother'' region, he conjectured
that the observed values of the constants are determined by ``natural
selection'' for the values that maximize the production of black holes. Some
problems with this conjecture have been pointed out in Ref. [7].
\medskip
In this paper I would like to suggest a different approach to determining the
constants of Nature. This approach is not entirely new and has elements of
both anthropic principle and quantum cosmology. However, to my knowledge, it
has not been clearly formulated and its implications have not been
systematically explored. My approach is based on the picture of the universe
suggested by quantum cosmology and by the inflationary scenario. In this
picture, small closed universes spontaneously nucleate out of nothing, where
``nothing'' refers to the absence of not only matter, but also of space and
time [8]. All universes in this metauniverse are disconnected from one another
and generally have different values for some of the constants. This variation
may be due to different compactification schemes, wormhole effects, etc. We
shall not adopt any particular hypothesis and keep an open mind as to which
constants can be varied and what is the allowed range of their variation.
\medskip
After nucleation, the universes enter a state of
inflationary expansion. It is driven by the potential energy of a scalar field
$\varphi$, while the field slowly ``rolls down'' its potential $V(\varphi)$. When $\varphi$
reaches the steep portion of the potential at some $\varphi \sim \varphi_*$, its energy
thermalizes, and inflation is
followed by the usual radiation-dominated expansion. The evolution of $\varphi$
is influenced by quantum fluctuations, and as a result thermalization does not
occur simultaneously in different parts of the universe. In many models it can
be shown that at any time there are parts of the universe that are still
inflating [9,10]. Such eternally inflating universes have a beginning,
but have no end.
\medskip
We are one of the infinite number of civilizations living in thermalized
regions of the metauniverse. Although it may be tempting to believe that our
civilization is very special, the history of cosmology demonstrates that the
assumption of being average is often a fruitful hypothesis. I call this
assumption the Principle of Mediocrity. We shall see that, compared to the
traditional point of view, this principle gives a rather different perspective
on what is natural and what is not.
\medskip
The Principle of Mediocrity suggests that we think of ourselves as a
civilization randomly picked in the metauniverse. Denoting by $\alpha_i$ the
constants of Nature that can vary from one universe to another, we can write
the corresponding probability distribution as
$$d{\cal P} (\alpha) = Z^{-1} \ w_{\rm nucl} (\alpha) \ {\cal N} (\alpha) \ \prod_{i}
\ d\alpha_i . \eqno (1)$$
\noindent Here, $w_{\rm nucl} (\alpha) \prod_i d\alpha_i$ is the probability of
nucleation for an inflating universe with a given set of $\alpha_i$ in the
intervals $d\alpha_i$, ${\cal N} (\alpha)$ is the average number of civilizations in
such a universe (in its entire history) [11], and $Z$ is a normalization
factor. We shall interpret (1) as an {\it a priori} probability distribution
for $\alpha_i$.
\medskip
The inflating part of the universe can be divided into a quantum region, $V
(\varphi) > V_q$, where
the dynamics of the inflaton field $\varphi$ is dominated by quantum fluctuations,
and a slow-roll region, $V_* < V(\varphi) < V_q$, where the evolution is essentially
deterministic. ($V_* = V(\varphi_*)$ corresponds to the end of inflation). The
values of $V_*$ and $V_q$ are model-dependent.
The inflationary expansion rate is given by
$H^2 =8\pi \ V(\varphi) /3m_p^2$
and can be arbitrarily high if $V(\varphi)$ is unbounded from above.
In order to extend the validity of the theory up to $V(\varphi) \sim m_p^4$, one
can include one-loop matter corrections to Einstein's action [12]. This may
be adequate if the number $N$ of matter fields is large, $N \gg 1$. Then it
can be
shown [13] that the resulting equations have no inflationary solutions for
$V(\varphi) > V_{\rm max} \sim m_p^4 /N$. The inflationary expansion rate is
therefore bounded from above [14], $H < H_{\rm max} \sim m_p /\sqrt {N}$.
Smaller values of $H_{\rm max}$ can be obtained in dilatonic and
higher-dimensional gravity models, or simply in models where $V(\varphi)$ is
bounded from above (e.g., when $\varphi$ is a cyclic variable and has a finite
range). Here, we shall
assume that, for one reason or another, $H$
is bounded by some $H_{\rm max}$. Eternal inflation is possible if $V_{\rm
max} > V_q$.
\medskip
Let us first assume that $V_{\rm max} < V_q$ in the whole range of variation of
$\alpha_i$, so that inflation is finite. Very roughly, we can write
$$ {\cal N} (\alpha) \sim {\cal V}_* (\alpha) \ \nu_{\rm civ} (\alpha), \eqno(2)$$
\noindent where ${\cal V}_*$ is the volume of the universe at the end of inflation
(that is, the 3-volume of the hypersurface $V(\varphi) = V_*$), and $\nu_{\rm civ}$
is the average number of civilizations originating per unit volume ${\cal V}_*$.
The maximum of ${\cal V}_*$ is achieved by maximizing the highest value of the
potential $V_{\rm max}$ at which inflation starts and minimizing the slope of
$V(\varphi)$ between $V_{\rm max}$ and $V_*$: the field $\varphi$ takes longer to roll
down for a flatter potential.
\medskip
The cosmological literature abounds with remarks on the ``unnaturally'' flat
potentials required by inflationary scenarios. The slope of the potential is
severely constrained by the observed isotropy of the cosmic microwave
background. With the Principle of Mediocrity, the situation is reversed:
flat is natural! Instead of asking why $V(\varphi)$ is so flat, one should now ask
why it is not flatter.
\medskip
Let us now consider the role of other factors in (1). The
calculation of $w_{\rm nucl} (\alpha)$ is a matter of some controversy. The
result depends on one's choice of boundary conditions for the wave function
of the universe (see, e.g., [8,15]). Here we shall adopt the tunneling
boundary condition. Then the semiclassical nucleation probability is
proportional to $\exp (-\vert S\vert)$, where $S$ is the Euclidean action of
the corresponding instanton. In Einstein's gravity, $\vert S\vert = \pi m_p^2
/H_{\rm max}^2$, and thus $w_{\rm nucl} (\alpha) \propto \exp(-\pi m_p^2/H_{\rm
max}^2)$ favors large values of $V_{\rm max}$ and is not sensitive to other
parameters of the model [16].
\medskip
An important role in constraining the values of $\alpha_i$ is played by the
``human factor'', $\nu_{\rm civ} (\alpha)$. We do not know what other forms of
intelligent life are
possible, but the Principle of Mediocrity favors the hypothesis that our form
is the most common in the metauniverse. The conditions required for life of our
type to exist (the low-energy physics based on the symmetry group $SU(3)\times
SU(2) \times U(1)$, the existence of stars and planets, supernova explosions) may
then fix, by order of magnitude, the values of $e^2$, $m_e$, $m_{\rm pr}$ and
$m_W$, as discussed in Ref. [1]. Anthropic considerations also impose a bound
on
the allowed flatness of the inflaton potential $V(\varphi)$. If the potential is
too
flat, then the thermalization temperature after inflation is too low for
baryogenesis. The lowest temperature at which baryogenesis can still occur is
set by the electroweak scale, $T_{\rm min} \sim m_W$. Hence, if other
constraints do not interfere, we expect the
universe to thermalize at $T \sim m_W$. Specific constraints on $V(\varphi)$
depend
on the couplings of $\varphi$ to other fields and can be easily obtained in
specific
models.
\medskip
Super-flat potentials required by the Principle of Mediocrity give rise to
density fluctuations which are many orders of magnitude below the strength
needed for structure formation. This means that the observed structures must
have been seeded by some other mechanism. The only alternative mechanism
suggested so far is based on topological defects: strings, global monopoles and
textures, which could be formed at a symmetry breaking phase transition [17].
The required symmetry breaking scale for the defects is $\eta \sim
10^{16}\,{\rm GeV}$. With ``natural'' (in the traditional sense) values of the
couplings,
the transition temperature is $T_c \sim \eta$, which is much higher than the
thermalization temperature $(T_{\rm th} \sim m_W)$, and no defects are formed
after inflation. It is possible for the phase transition to occur during
inflation, but the resulting defects are inflated away, unless the transition
is sufficiently close to the end of inflation. To arrange this requires some
fine-tuning of the constants. However, the alternative is to have
thermalization at a much
higher temperature and to cut down on the amount of inflation. Since the
dependence of the volume factor ${\cal V}_*$ on the duration of
inflation is exponential, we
expect that the gain in the volume will more than compensate for the decrease
in ``$\alpha} \def\beta{\beta$-space''
due to the fine-tuning. We note also that in some supersymmetric
models the critical temperature of superheavy string formation can
``naturally'' be as low as $m_W$ [18].
\medskip
The symmetry breaking scale $\eta \sim 10^{16}\,{\rm GeV}$ for the defects is
suggested by observations, but we have not explained why this particular scale
has been selected. The value of $\eta$ determines the amplitude of density
fluctuations, which in turn determines the time when galaxies form, the
galactic density, and the rate of star formation in the galaxies. Since these
parameters certainly affect the chances for civilizations to develop, it is
quite possible that $\eta$ is significantly constrained by the anthropic factor
$\nu_{\rm civ} (\alpha)$.
\medskip
If $\nu_{\rm civ}$ is indeed sharply peaked at some value of $\eta$ and thus
fixes the amplitude of density
fluctuations and the epoch of active galaxy formation, then an upper bound on
the cosmological constant can be obtained by requiring that it does not disrupt
galaxy formation until the end of that epoch. The growth of density
fluctuations in a flat universe with $\Lambda >0$ effectively stops at a redshift
[19] $1+z\sim (1-\Omega_{\Lambda} )^{-1/3}$, where $\Omega_{\Lambda} =\rho_v /\rho_c$ and
$\rho_c$ is the critical density. Requiring that this happens at $z \lesssim
1$ gives $\Omega_{\Lambda} \lesssim 0.9$. The actual value of $\Lambda$
is likely to be comparable to this upper bound. Negative values of $\Lambda$
are bounded by requiring that our part of the universe does not recollapse
while stars are still shining and new civilizations are being formed. This
gives a bound comparable to that for positive $\Lambda$ (by absolute value).
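\smallskip
For concreteness, the algebra behind the quoted bound: requiring $1+z
\lesssim 2$ in the relation above gives $(1-\Omega_\Lambda)^{-1/3} \lesssim 2$,
hence $1-\Omega_\Lambda \gtrsim 2^{-3}$ and $\Omega_\Lambda \lesssim 0.875
\approx 0.9$.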
\medskip
Let us now turn to the case of eternal inflation, $V_{\rm max} > V_q$. The
evolution of $\varphi$ is then a stochastic process and can be described by a
distribution function $\rho (\varphi , t)$ which satisfies a ``diffusion equation''
with appropriate boundary conditions at $V(\varphi) = V_*$ and $V(\varphi) = V_{\rm
max}$ [9,20-23]. In an eternally inflating universe, the volume ${\cal V}_*$ of the
hypersurfaces $V(\varphi) = V_*$ is infinite and has to be regulated. The simplest
way to do this is to cut it off at some time $t = \tau$ and consider the
asymptotic behavior as $\tau \to \infty$. The time variable $t$ can be defined
as the proper time on the congruence of geodesics orthogonal to the initial
hypersurface at the ``moment of nucleation''. Since geodesics tend to diverge
during inflation, this proper-time gauge should be well defined.
If $\rho$ is normalized to the total inflating volume, then
in the limit $t \to \infty$ we have [23] $\rho = F(\varphi) \exp (d H_{\rm max} t)$,
where $d(\alpha)$ can be interpreted [21]
as the fractal dimension of the region expanding at the highest rate
$H_{\rm max} (\alpha)$, $0 < d(\alpha) < 3$. The asymptotic
form of ${\cal V}_*$ at large $\tau$ is then
$$ {\cal V}_* (\alpha, \tau) = {\tilde {\cal V}} (\alpha) \exp [d (\alpha) H_{\rm max} (\alpha)
\tau], \eqno(3)$$
\noindent and it is clear that in the limit $\tau \to \infty$ the distribution
(1) selects the values of $\alpha_i$ that maximize the product $d(\alpha) H_{\rm max}(\alpha)$,
$$B(\alpha) \equiv d (\alpha) H_{\rm max} (\alpha) = {\rm max}. \eqno(4)$$
Generically, a function attains its absolute maximum at a single point. If
this is so for $B(\alpha)$, then Eq.~(4) is sufficient to determine all constants
of Nature. However, it is conceivable that the maximum of $B(\alpha)$ is
degenerate, so that (4) defines a surface in the space of $\alpha_i$. All values
not on this surface have a vanishing probability, and the probability
distribution on the surface is proportional to
$w_{\rm nucl}(\alpha) {\tilde {\cal V}}(\alpha) \nu_{\rm civ} (\alpha)$.
The functions $B(\alpha)$ and ${\tilde {\cal V}}(\alpha)$ depend on the choice of the
time variable $t$ which is used to implement the cutoff. For example, if
instead of the proper time we chose ${\tilde t} = {\cal V}_* (\alpha, t)$, then by
construction the factor ${\cal V}_*$ would be the same for all universes.
Here, we shall keep the proper time cutoff, which has a simple geometric and
physical meaning. It favors the universes producing the largest number of
civilizations per unit time by the clocks of the co-moving observers. The
cutoff-dependence of the results is nonetheless an important issue and
requires further study [24].
\medskip
The fractal dimension $d(\alpha)$ increases as the potential $V(\varphi)$ becomes
flatter [21, 23], and thus the condition (4) selects maximally flat potentials
with the highest value of $V_{\rm max}$. In some models, maximization of
$B(\alpha)$ may drive the slope of $V(\varphi)$ to zero; then no reasonable cosmology
is obtained. The approach presented here can be meaningful only if the maximum
of $B(\alpha)$ corresponds to a non-trivial potential $V(\varphi)$. If we assume in
addition that this maximum is degenerate and defines a surface rather than a
single point, then the probability maximum on that surface is determined by
the same considerations as in the case of finite inflation. In particular,
the electroweak scale should not exceed the thermalization temperature, since
otherwise no baryons would be formed. A more detailed discussion of
$d{\cal P} (\alpha)$ in the case of eternal inflation will be given elsewhere.
\medskip
Let us now summarize the ``predictions'' of the Principle of Mediocrity. The
preferred models have very flat inflaton potentials, thermalization and
baryogenesis at the electroweak scale, density fluctuations seeded by
topological defects and a non-negligible $\Omega_\Lambda$ (as long as these features are
consistent with one another and with the constraint (4)
in the case of eternal inflation).
\bigskip
After this work was completed, I learned about the preprints by A.Albrecht [25]
and by J.Garcia-Bellido and A.Linde [26] which have some overlap with the
ideas presented here.
I am grateful to Brandon Carter and Alan Guth for discussions and to Thibault
Damour for his hospitality at I.H.E.S. where this work was completed. This
research was supported in part by the National Science Foundation.
\vfill\eject
\centerline{\bf References}
\medskip
\ref{1.}{B.~Carter, in I.A.U. Symposium {\bf 63}, ed.~by M.S.~Longair (Reidel,
Dordrecht, 1974); Phil.~Trans.~R.~Soc.~Lond.~{\bf A310}, 347 (1983);
B.J.~Carr and M.J.~Rees, Nature {\bf 278}, 605 (1979); J.D.~Barrow and
F.J.~Tipler, ``The Anthropic Cosmological Principle'' (Clarendon, Oxford,
1986). It should be noted that the Anthropic Principle, as originally
formulated by Carter, is more than a trivial consistency condition. It is the
requirement that anthropic constraints should be taken into account when
evaluating the plausibility of various hypotheses about the physical world.}
\ref{2.}{S.~Weinberg, Phys.~Rev.~Lett.~{\bf 59}, 2607 (1987).}
\ref{3.}{S.~Coleman, Nucl.~Phys.~{\bf B307}, 867 (1988).}
\ref{4.}{E.~Baum, Phys.~Lett.~{\bf B133}, 185 (1984); S.W.~Hawking,
Phys.~Lett.~{\bf B134}, 403 (1984).}
\ref{5.}{S.~Coleman, Nucl.~Phys.~{\bf B310}, 643 (1988). In this paper Coleman
obtained a probability distribution for $\rho_v$ with an extremely sharp peak
at $\rho_v = 0$. However, his derivation was based on Euclidean quantum
gravity, which has serious problems. For a discussion of the problems, see
W.~Fischler et al., Nucl.~Phys.~{\bf B327}, 157 (1989).}
\ref{6.}{L.~Smolin, Class.~Quant.~Grav.~{\bf 9}, 173 (1992); Penn.~State
preprint, unpublished.}
\ref{7.}{T.~Rothman and G.F.R.~Ellis, Quart.~J.~Roy.~Astr.~Soc.~{\bf 34},
201 (1993).}
\ref{8.}{A.~Vilenkin, Phys.~Lett.~{\bf 117B}, 25 (1982); Phys.~Rev.~{\bf D30},
509 (1984); J.~Hartle and S.W.~Hawking, Phys.~Rev.~{\bf D28}, 2960 (1983);
A.D.~Linde, Lett.~Nuovo Cim.~{\bf 39}, 401 (1984).}
\ref{9.}{A.~Vilenkin, Phys.~Rev.~{\bf D27}, 2848 (1983).}
\ref{10.}{A.D.~Linde, Phys.~Lett.~{\bf B175}, 395 (1986).}
\ref{11.}{We may wish to assign a weight to each civilization, depending on
its lifetime and/or on the number of individuals. This would not change the
conclusions of the present paper.}
\ref{12.}{See, e.g., A.A.~Starobinsky, Phys.~Lett.~{\bf 91B}, 99 (1980).}
\ref{13.}{Including the vacuum contributions of matter fields to the
expectation value of $T_{\mu \nu}$ and assuming slow rollover conditions,
$\dot \varphi^2 \ll V(\varphi)$ and $\dot H \ll H^2$, the evolution equation for $H$
takes the form
$$H^2 = {8 \pi \over 3m^2_p} V(\varphi) + {H^4 \over H_0^2} - {6 \over M^2} H^2
\dot H.$$
Here, $H_0 \sim m_p/\sqrt{N}$, $N$ is the number of matter fields with masses
$m \ll H$, and $M$ can be adjusted to any value by a finite renormalization of
the term quadratic in curvature in the gravitational Lagrangian. Physically
reasonable models are obtained for $H^2_0 > 0$, $M^2 > 0$ (for details see
Ref.~[12]). Classical inflationary solutions must have $\dot H < 0$. This
gives a quadratic inequality for $H^2$, which can be satisfied only if $V(\varphi)
\le 3 H^2_0 m^2_p/32 \pi$. The expansion rate cannot exceed $H_0$. A detailed
discussion of this issue will be given elsewhere.}
\ref{14.}{Linde et al.~[23] have argued that the inflationary expansion rate
is bounded by $H_{\rm max} \sim m_p$, because at this rate quantum
fluctuations in the energy-momentum tensor $T_\varphi^{\mu \nu}$ of the inflaton
field $\varphi$ become comparable to $T_\varphi^{\mu \nu}$ itself, and the vacuum
form of $T_\varphi^{\mu \nu} \propto g^{\mu \nu}$ is destroyed. I disagree with
this argument. At $H \sim m_p$, quantum fluctuations in $T^{\mu \nu}$ for all
fields with $m \ll m_p$ have comparable magnitude. The average total
energy-momentum tensor is $\langle T^{\mu \nu} \rangle \propto g^{\mu \nu}$,
and its relative fluctuation is $\sim N^{-1/2}$, where $N$ is the number of
fields with $m \ll m_p$. Since $N \gtrsim 100$, the vacuum form of
$T^{\mu \nu}$ holds with good accuracy.}
\ref{15.}{A.~Vilenkin, Phys.~Rev.~{\bf D37}, 888 (1988).}
\ref{16.}{$w_{\rm nucl}$ depends also on the initial value of $\varphi$ and is
maximized at $V(\varphi) = V_{\rm max}$.}
\ref{17.}{For a review of topological defects, see A.~Vilenkin and
E.P.S.~Shellard, ``Cosmic Strings and Other Topological Defects'' (Cambridge
University Press, Cambridge, 1994).}
\ref{18.}{G.~Lazarides, C.~Panagiotakopoulos and Q.~Shafi,
Phys.~Rev.~Lett.~{\bf 56}, 432 (1987); Phys.~Lett.~{\bf 183B}, 289 (1987).}
\ref{19.}{S.M.~Carroll, W.H.~Press and E.L.~Turner,
Ann.~Rev.~Astron.~Astrophys.~{\bf 30}, 499 (1992).}
\ref{20.}{A.A.~Starobinsky, in ``Current Topics in Field Theory, Quantum
Gravity and Strings'', ed.~by H.J.~de Vega and N.~Sanchez (Springer,
Heidelberg, 1986).}
\ref{21.}{M.~Aryal and A.~Vilenkin, Phys.~Lett.~{\bf B199}, 351 (1987).}
\ref{22.}{Y.~Nambu and M.~Sasaki, Phys.~Lett.~{\bf B219}, 240 (1989).}
\ref{23.}{A.D.~Linde and A.~Mezhlumian, Phys.~Lett.~{\bf B307}, 25 (1993);
A.D.~Linde, D.A.~Linde and A.~Mezhlumian, Phys.~Rev.~{\bf D49}, 1783 (1994).}
\ref{24.}{The dependence of $\rho (\varphi, t)$ on the time parametrization has
been emphasized by Linde et al.~[23] and by Garcia-Bellido et al.~[27], who
studied the probability distribution for the inflaton and other fields in a
single eternally inflating universe.}
\ref{25.}{A.~Albrecht, Imperial College Report TP/93-94/56 (unpublished).}
\ref{26.}{J.~Garcia-Bellido and A.D.~Linde, Stanford University Report
SU-ITP-94-24 (unpublished).}
\ref{27.}{J.~Garcia-Bellido, A.D.~Linde and D.A.~Linde, Phys.~Rev.~{\bf D50},
730 (1994).}
\bye
|
2,877,628,091,242 | arxiv | \section{Introduction}
In this paper, we present a Dutch-based FAQ retrieval system trained using a limited amount of training data.
FAQ answering is the task of retrieving the right answer given a new user query. It is widely used in chatbots and has been studied for many years \cite{hammond1995faq,sneiders1999automated,jijkoun2005retrieving,riezler2007statistical,karan2016faqir,sakata2019faq}, although the attention has shifted towards extractive question answering more recently \cite{rogers2021qa}, probably because of a lack of dedicated datasets. FAQ answering systems typically use retrieval systems \cite{hammond1995faq,sneiders1999automated,jijkoun2005retrieving,riezler2007statistical,karan2016faqir,sakata2019faq} rather than generative models grounded on external knowledge \cite{komeili2021internet,de2020bart,lotfi2021teach}. The generative approach is more flexible as it is able to generate new answers. However, these models suffer from knowledge hallucinations \cite{shuster2021retrieval}, limiting their usefulness in a corporate environment.
Most previous research on FAQ retrieval and non-factoid question answering was conducted for English. ConveRT \cite{DBLP:journals/corr/abs-1911-03688}, a response selection module available within Rasa \cite{DBLP:journals/corr/abs-1712-05181}, caught our attention as it is effective and does not require a GPU at inference time. Unfortunately, it is only available in English. Despite having significantly less conversational training data (400K pairs of utterances) than the original ConveRT model (727M pairs), we successfully trained the same model for Dutch.
Our contributions are the following:
\begin{itemize}
\item We show it is possible to train a ConveRT model for a non-English language using a limited number of conversation pairs by adopting a two-phase pre-training approach (general and conversational).
\item We show that a Dutch ConveRT model performs better than the response selector module from Rasa, both in a low and high data regime.
\end{itemize}
\section{Related Work}
An FAQ dataset consists of pairs of questions and answers. The FAQ retrieval task involves ranking the available answers for a given user query. A new user query can be matched against the stored questions, the answers, or the concatenation of both. Approaches to FAQ retrieval can be broadly divided into four categories: lexical, supervised, unsupervised, and conversational.
\paragraph{Lexical}
To our knowledge, FAQ-Finder \cite{hammond1995faq} was the first system to explicitly study the task of FAQ retrieval; it matches user queries to FAQ questions of the Usenet dataset with TF-IDF. FAQ-Finder was later improved by including the similarity to the answer (on top of the similarity to the question) \cite{tomuro2004retrieval}. Another improvement comes from adding a rule-based layer on top of the TF-IDF module \cite{sneiders1999automated}.
\paragraph{Unsupervised}
Another approach is to use unsupervised techniques to retrieve the right FAQ pair given a new user query. One possible way is to use Latent Semantic Analysis (LSA) to overcome the lexical mismatch between related queries \cite{kim2008cluster}.
\paragraph{Supervised}
The first supervised methods were developed using tree kernels and SVMs \cite{moschitti2007exploiting}. BERT methods were later developed specifically for the task of FAQ retrieval \cite{sakata2019faq}.
\paragraph{Conversational}
In this paper, we propose a fourth type not yet explored in the literature: conversational. FAQ retrieval can be treated as a special case of conversational modeling: retrieving the answer is similar to retrieving the next utterance in a conversation.
Dual-encoder architectures, pre-trained on response selection, have become increasingly popular in the dialog community due to their simplicity and ease of control \cite{DBLP:journals/corr/abs-1906-01543,cer-etal-2018-universal}.
There are two options when it comes to retrieving the next utterance. One can either encode the two sentences separately (dual-encoder) \cite{DBLP:journals/corr/abs-1911-03688}, or simultaneously (cross-encoder) \cite{10.1007/978-3-030-47426-3_19}. Dual-encoders are faster than cross-encoders, as they can cache the answer representations. ConveRT \cite{DBLP:journals/corr/abs-1911-03688} is a dual-encoder pre-trained on a large-scale conversational dataset. Thanks to various design optimizations (such as single-headed self-attention), ConveRT vastly reduces the size of the model.
In this work, we choose to focus on ConveRT as it has a low computational cost and does not require a GPU for inference.
\section{ConveRT}
In this section, we give a brief overview of the ConveRT (Conversational Representations from Transformers) model \cite{DBLP:journals/corr/abs-1911-03688}. The objective of the model is to generate vector representations for utterances that are as similar as possible (in terms of dot-product) for a given pair. ConveRT takes as input the sequence of tokens of the two utterances. Both sequences are tokenized using the same byte pair encoding vocabulary.
\subsection{Architecture}
\begin{figure}[t]
\includegraphics[scale=0.45]{convert}
\centering
\caption{Illustration of the ConveRT model architecture. The model has three distinct parts. First, the subword and positional embeddings. Second, a shared Transformer block followed by a two-headed self-attention. Third, separate feed-forward networks (3 layers) for the input and responses.}
\label{figure:architecture}
\end{figure}
The ConveRT architecture (Fig. \ref{figure:architecture}) is composed of three distinct parts: the embedding layer, the Transformer block, and the feed-forward layers.
\subsubsection{Embedding}
The first element stores the embeddings for the subwords and position tokens. Embeddings are shared for the input and response representations. Unlike the original Transformer architecture \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, ConveRT uses two positional encoding matrices of different sizes to handle sequences larger than seen during training. We refer the reader to the original paper for a detailed description \cite{DBLP:journals/corr/abs-1911-03688}.
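As a rough illustration of this idea, the sketch below sums two cyclically indexed embedding matrices, so that positions beyond the training length still receive an encoding. The matrix sizes and the NumPy rendering are our own illustrative assumptions, not values taken from the original paper.
\begin{verbatim}
import numpy as np

# Hypothetical sizes; co-prime lengths give a long joint period.
DIM, LEN1, LEN2 = 512, 47, 11
rng = np.random.default_rng(0)
m1 = rng.normal(scale=0.02, size=(LEN1, DIM))  # learned in practice
m2 = rng.normal(scale=0.02, size=(LEN2, DIM))

def positional_embedding(position):
    # Index both matrices cyclically and sum, so any position is
    # defined, even past the lengths seen during training.
    return m1[position % LEN1] + m2[position % LEN2]

vec = positional_embedding(100)  # beyond either matrix length
\end{verbatim}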
\subsubsection{Transformer Block}
The next element is the Transformer block. It closely follows the original Transformer architecture \cite{DBLP:journals/corr/VaswaniSPUJGKP17} with some notable differences.
First, the model uses a single-headed self-attention using a 64-dimensional projection for computing the attention weights. Second, the model applies a two-headed self-attention after the six Transformer layers.
The parameters of the Transformer block are fully shared for the input and response sides. ConveRT uses the square-root-of-N reduction \cite{cer-etal-2018-universal} to convert the embedding sequences to fixed-dimensional vectors.
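A minimal sketch of this reduction, assuming it amounts to summing the token embeddings and dividing by the square root of the sequence length:
\begin{verbatim}
import numpy as np

def sqrt_n_reduce(token_embeddings):
    # (seq_len, dim) -> (dim,): the sqrt(n) scaling keeps the norm
    # of the pooled vector roughly independent of sequence length.
    n = token_embeddings.shape[0]
    return token_embeddings.sum(axis=0) / np.sqrt(n)
\end{verbatim}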
\subsubsection{Feed Forward}
The last element is a series of feed-forward hidden layers with skip connections. These parameters are not shared: there is a separate feed-forward network for the inputs and another for the responses.
\subsection{Training Objective}
The training objective of ConveRT is to select the right response given a question from a question-answer pair.
The relevance of each response to a given input is quantified with a dot-product between the input and response representation.
Training proceeds over batches of $K$ utterance pairs. The objective is to distinguish between the true relevant responses and irrelevant negative examples (the other responses in the batch serve as negative examples). ConveRT uses cross-entropy as the loss function. The model is optimized with Adam \cite{DBLP:journals/corr/KingmaB14} and L2 weight decay. The learning rate is warmed up over the first 10,000 steps to a peak value and then linearly decayed.
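The sketch below illustrates this in-batch objective for already-encoded inputs and responses of shape $(K, d)$; it is a minimal NumPy rendering of the loss, not the authors' implementation.
\begin{verbatim}
import numpy as np

def in_batch_loss(inputs, responses):
    # inputs, responses: (K, d) encodings of K true pairs.
    scores = inputs @ responses.T                  # (K, K) dot products
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    log_probs = scores - np.log(
        np.exp(scores).sum(axis=1, keepdims=True))
    # True pairs lie on the diagonal; the other responses in the
    # batch act as negatives for each input.
    return -np.mean(np.diag(log_probs))
\end{verbatim}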
\section{ConveRT for Dutch}
In this section, we explain our approach to training a ConveRT model for Dutch.
To overcome the limited supply of conversational data available in Dutch, we use a two-stage pre-training: general pre-training on a large open-domain corpus, and conversational pre-training using a smaller conversational dataset from Reddit.
\subsection{Data}
The original ConveRT model was developed for English using a large-scale conversational dataset from Reddit.
We did not have access to such a dataset for Dutch. Instead, we chose to split the problem in two. First, we pre-train the model on a general Dutch corpus. Second, we use a smaller Dutch conversational corpus from Reddit.
\subsubsection{General Dataset}
We consider the same Dutch-language corpora as Bertje \cite{devries2019bertje}, a successful Dutch BERT model:
\begin{itemize}
\item Books: a collection of contemporary and historical fiction novels
\item TwNC \cite{42e3c5016cab421281a9029a774fffae}: a Multifaceted Dutch News Corpus
\item SoNaR-500 \cite{Oostdijk2013}: a multi-genre reference corpus
\item Web news
\item Wikipedia
\end{itemize}
In total, this is about 12GB of uncompressed text.
To match the setup expected by ConveRT (the tokens of a pair of utterances), we first split each paragraph into sentences. Next, we save pairs of sentences and treat them as pairs of input and response. To avoid small inputs, we filter out pairs with fewer than 64 characters. After this transformation, the general pre-training corpus has 110M pairs.
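A sketch of this transformation follows; the sentence splitter is a placeholder, and pairing consecutive sentences with the 64-character threshold applied to each utterance is our reading of the filtering rule.
\begin{verbatim}
def paragraph_to_pairs(paragraph, split_sentences, min_chars=64):
    # Consecutive sentences become (input, response) pairs,
    # mimicking the conversational setup expected by ConveRT.
    sents = split_sentences(paragraph)
    return [(a, b) for a, b in zip(sents, sents[1:])
            if len(a) >= min_chars and len(b) >= min_chars]
\end{verbatim}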
\subsubsection{Conversational Dataset}
We also consider a Dutch conversational dataset, for which we downloaded comments from around 200 Dutch subreddits. Non-Dutch comments were filtered out. After language filtering, we arrive at 400K pairs of utterances.
\subsection{Pre-training}
We followed the training procedure of ConveRT, except for the number of epochs and the batch size. For the general pre-training, we trained the model for 8 epochs. To facilitate the training, we used other examples from the batch as negative examples.
To increase the difficulty of the training, we doubled the batch size at every second epoch. The batch size increased from 128 at the first epoch to 2048 at the last epoch. The larger the batch size, the harder the task becomes, as the model has to select the correct response amongst more negative examples.
For the conversational pre-training, we trained for 10 epochs with a fixed batch size of 2048.
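One schedule consistent with this description (doubling at every even-numbered epoch, which is an assumption about the exact timing) is sketched below.
\begin{verbatim}
def batch_size(epoch, base=128, cap=2048):
    # Double the batch size at every second epoch (1-indexed).
    return min(base * 2 ** (epoch // 2), cap)

# 8 epochs: 128 at the first, 2048 at the last, as reported.
assert [batch_size(e) for e in range(1, 9)] == [
    128, 256, 256, 512, 512, 1024, 1024, 2048]
\end{verbatim}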
\begin{table*}
\centering
\begin{tabular}{lcccccc}
\hline
\textbf{model} & \textbf{split 1} & \textbf{split 2} & \textbf{split 4} & \textbf{split 6} & \textbf{split 8} & \textbf{split 10} \\ \hline
RASA (baseline) & 22\% & 42\% & 50\% & 55\% & 61\% & 65\% \\
without pre-training & 20\% & 25\% & 33\% & 45\% & 52\% & 65\% \\
general pre-training & 30\% & 36\% & 40\% & 55\% & 58\% & 43\% \\
conversational pre-training & 40\% & 44\% & 55\% & 63\% & 66\% & 69\% \\
general + conversational pre-training & \textbf{46\%} & \textbf{57\%} & \textbf{68\%} & \textbf{69\%} & \textbf{75\%} & \textbf{79\%} \\ \hline
\end{tabular}
\caption{
Accuracy on the COVID-19 vaccination FAQ dataset per splits of increasing size. Split one has one training example per answer, while split ten has ten training examples. Pre-training ConveRT on both a general dataset, as well as a conversational dataset provides the best results on this task.
}
\label{table:results}
\end{table*}
\section{Experiments}
In this section, we fine-tune our model on a corpus of FAQs related to the COVID-19 vaccine. We then perform an ablation study to analyze which part of the pre-training has the most impact on the downstream performance. To have a better understanding of how our model would perform in the real world, we study its performance as the number of training examples increases.
\subsection{Data}
We test the performance of our model on a proprietary dataset. The dataset was collected while running a COVID-19 vaccination FAQ bot with Rasa. It consists of 1,200 questions for 76 distinct answers.
\subsection{Baseline}
As our higher objective is to use this model in a Rasa chatbot, we compare our Dutch ConveRT model to a baseline response retrieval model developed by Rasa.\footnote{Rasa does not have a published paper describing their model.} All models are trained using the same number of epochs and dropout probability.
\subsection{Low Data Scenario}
When starting out, FAQ bots usually have a one-to-one mapping between questions and answers (one question for one answer). As the number of users increases, the number of available questions per answer also increases. To evaluate the generalization capabilities of our model in a low data scenario, we artificially create datasets of increasing sizes, which we call splits. The first split has one training example per answer (the same as when someone starts a new FAQ chatbot), the second split has two training examples per answer, and so on until split ten. We also generate a test set by randomly selecting (and removing from the training set) one example per answer.
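A sketch of this split construction follows, under the assumption that the held-out test question is removed before the training splits are drawn.
\begin{verbatim}
import random

def make_splits(questions_by_answer, max_split=10, seed=0):
    # questions_by_answer: dict mapping each answer to its questions.
    rng = random.Random(seed)
    test, pools = [], {}
    for answer, questions in questions_by_answer.items():
        qs = list(questions)
        rng.shuffle(qs)
        test.append((qs.pop(), answer))  # one held-out question
        pools[answer] = qs
    splits = {n: [(q, a) for a, qs in pools.items() for q in qs[:n]]
              for n in range(1, max_split + 1)}
    return splits, test
\end{verbatim}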
\subsection{Results}
Results in Table \ref{table:results} confirm our intuition that the baseline accuracy of the Rasa model radically improves with the number of training examples. In our analysis, the accuracy increases by a factor of 3 from split 1 to split 10. The results also show that a ConveRT model without any pre-training underperforms the baseline, on every split. General pre-training modestly improves the model's performance, but the results are not significantly different from the baseline. Conversational pre-training alone (without any general pre-training) shows a consistent improvement over the baseline. The gain is more visible in the low data regime than in the high data regime. The Dutch ConveRT model reveals its true power when pre-trained on a general corpus and a conversational corpus as it outperforms the baseline by a wide margin on every split.
\section{Conclusion}
We have successfully pre-trained, fine-tuned, and evaluated a Dutch ConveRT model. This model consistently outperforms a baseline response selector from Rasa on a COVID-19 vaccine FAQ dataset.
Conversational datasets for non-English languages are scarce. Our two-phase pre-training procedure bypasses this problem by first pre-training on a general corpus, then pre-training on a smaller conversational corpus.
In future work, we plan on extending the two-stage training to additional languages and additional domains.
\section{Acknowledgments}
This research received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. We also thank the reviewers for their helpful comments.
\bibliographystyle{plain}
|
2,877,628,091,243 | arxiv | \section{Introduction}
\begin{flushright}
\begin{quote}
\vspace{0.5cm}
\hspace{4.5cm} My days sprint past me like runners,\\
\hspace{4.5cm} I will never see them again.\footnote{{\sl The Book of Job}, translated by Stephen Mitchell, HarperPerennial, New York, 1992.}\\
\vspace{0.5cm}
\hspace{8.5cm} Job
\end{quote}
\end{flushright}
\vspace{0.7cm}
There is something striking about the world. It {\em changes}. The past seems to be quite different from the future. We can remember the former and, sometimes, predict the latter. We grow older, not younger. The universe was hotter in the past, and very likely it will become colder in the future. The disorder around us seems to increase. All these facts and many others of the kind are expressed in terms of the Second Law of Thermodynamics: {\em The entropy of a closed system never decreases}. If entropy is denoted by $S$, this law reads:
\begin{equation}
\frac{dS}{dt}\geq 0.
\end{equation}
In the 1870s, Ludwig Boltzmann argued that the effect of randomly moving gas molecules was to ensure that the entropy of a gas would increase, until it reaches its maximum possible value. This is his famous {\em H-theorem}. Boltzmann was able to show that macroscopic distributions of great inhomogeneity (i.e. of high order or low entropy) are formed from relatively few microstate arrangements of molecules, and were, consequently, relatively improbable. Since physical systems do not tend to go into states that are less probable than the states they are in, it follows that any system would evolve toward the macrostate that is consistent with the largest number of microstates. The number of microstates and the entropy of the system are related by the fundamental formula:
\begin{equation}
S= k \ln W,
\end{equation}
where $k=1.38\times 10^{-23}$ JK$^{-1}$ is Boltzmann's constant and $W$ is the number of microstates (the phase-space volume) that corresponds to the macrostate of entropy $S$.
More than twenty years after the publication of Boltzmann's fundamental papers on kinetic theory\cite{bol1}\cdash\cite{bol2}, Burbury\cite{bur1}\cdash\cite{bur2} pointed out that the source of asymmetry in the H-theorem is the assumption that the motions of the gas molecules are independent before they collide and not afterward, if entropy is going to increase. This essentially means that the entropy increase is a consequence of the {\em initial conditions} imposed upon the state of the system. Boltzmann's response was\cite{bol3}:
\begin{quote}
There must then be in the universe,
which is in thermal equilibrium as a
whole and therefore dead, here and
there, relatively small regions of the
size of our world, which during the
relatively short time of eons deviate
significantly from thermal equilibrium.
Among these worlds the state probability
increases as often as it decreases.
\end{quote}
As noted by Price\cite{price}: ``The low-entropy condition of our region seems to be associated entirely with a low-entropy condition in our past.'' This is called the Past Hypothesis.
The probability of the large fluctuations required for the formation of the universe, on the other hand, seems to be zero, as noted long ago by Eddington\cite{ast}: ``A universe containing mathematical physicists
at any assigned date will be in the state of
maximum disorganization which is not inconsistent
with the existence of such creatures.'' Large fluctuations are rare (the probability $P$ of an entropic variation $\Delta S$ is $P\sim \exp(-\Delta S)$); {\em extremely} large fluctuations are basically impossible. For the whole universe, $\Delta S\sim 10^{104}$ in units of $k=1$ \cite{el}. This yields, effectively, $P=0$. We are here, however, only because our region is momentarily far from thermal equilibrium.
In this paper we shall discuss a possible source for the existence of local irreversible processes that is related to the presence of cosmological horizons.
\section{Formulation of the problem}
In 1876, a former teacher of Boltzmann and later colleague at the University of Vienna, J. Loschmidt, noted that the laws of (Hamiltonian) mechanics are such that for every solution one can construct another solution by reversing all velocities and replacing $t$ by $-t$\cite{los}. Since Boltzmann's function $H[f]$ is invariant under velocity reversal, it follows that if $H[f]$ decreases for the first solution, it will increase for the second. Accordingly, the H-theorem cannot be a general theorem for all mechanical evolutions of the gas. More generally, the problem goes far beyond classical mechanics and encompasses our whole representation of the physical world. This is because {\em all formal representations of all fundamental laws of physics are invariant under the operation of time reversal}. Nonetheless, the evolution of all physical processes in the universe is irreversible.
If we accept, as mentioned in the introduction, that the origin of the irreversibility is not in the laws but in the initial conditions of the equations that represent the laws, two additional problems emerge: 1) what exactly were these initial conditions?, and 2) how can the initial conditions, which are of a global nature, enforce, at any time and any place, the observed local irreversibility?
The first problem is, in turn, related to the following one, once the cosmological setting is taken into account: in the past, the universe was hotter and at some point matter and radiation were in thermal equilibrium (i.e. in a state of maximum entropy). How is this compatible with the fact that, according to the Past Hypothesis, entropy has been increasing ever since? How can entropy still increase if it was at a maximum at some past time?
The standard answer to this question invokes the expansion of the universe: as the universe expanded, the maximum possible entropy increased with the size of the universe, but the actual entropy was left well behind the permitted maximum. On this view, the source of the irreversibility expressed by the Second Law of Thermodynamics is the tendency of the entropy to reach the permitted maximum. Accordingly, the universe actually began in a state of maximum entropy, but due to the expansion, it was still possible for the entropy to continue growing\cite{gold}.
The main problem with this line of thought is that it is not true that the universe was in a state of maximum disorder at some early time. In fact, although locally matter and radiation might have been in thermal equilibrium, this situation occurred in a regime where the local effects of gravity cannot be ignored. Penrose\cite{penrose} suggested that entropy might be assigned to the gravitational field itself. Though locally matter and radiation were in thermal equilibrium in the past, the gravitational field should have been quite far from equilibrium, since gravity is an attractive force and the universe was initially structureless. Consequently, the early universe was globally out of equilibrium, with the total entropy dominated by the entropy of the gravitational field.
In the absence of a theory of quantum gravity, a statistical measure of the entropy of the gravitational field is not possible. The study of the gravitational properties of macroscopic systems through classical invariants, however, might be a suitable approach to the problem. Penrose proposed that the Weyl curvature tensor can be used to specify the gravitational entropy\cite{penrose}. Several prescriptions have been proposed since then to estimate the entropy associated with the classical field, on the basis of scalars constructed out of different functions of the Weyl scalar\cite{scalars}.
\section{Electrodynamics and cosmology}
What makes physical processes occur in a preferred direction of space-time if the physical laws are expressed by time-invariant equations? If entropy globally increases because it was low in the past, how does this enforce local changes in a particular direction?
We suggest that there is a global-to-local relation between the conditions in the far past and future, related to the dynamical state of the universe, with the local physics that determines the way affairs occur in and around us.
The basic processes in our brain and those we perceive through our senses are of electromagnetic origin. Gravity is far too weak in comparison to electromagnetism. The other fundamental interactions, strong and weak, are of very short range. If gravitational contributions dominate the low entropy in the early universe, there should be some coupling between gravity and electromagnetism that determines the direction along which heat flows.
The electromagnetic radiation field can be described in terms of a 4-potential $A^{\mu}$, which satisfies linear equations:
\begin{equation}
\partial^{\nu}\partial_{\nu}A^{\mu}(\vec{r},\;t)=4\pi j^{\mu} (\vec{r},\;t), \label{Maxwell}
\end{equation}
where we have considered units such that $c=1$ and $j^{\mu}$ represents the 4-current. The solution $A^{\mu}$ is a functional of the sources $j^{\mu}$. This type of equation admits both retarded and advanced solutions:
\begin{equation}
A^{\mu}_{\rm ret}(\vec{r},\;t)=\int_{V_{\rm ret}}
\frac{j^{\mu} \left(\vec{r'},\;t-\left|\vec{r}-\vec{r'}\right|\right)}{\left|\vec{r}-\vec{r'}\right|}d^{3}\vec{r'} + \int_{\partial V_{\rm ret}}
\frac{j^{\mu} \left(\vec{r'},\;t-\left|\vec{r}-\vec{r'}\right|\right)}{\left|\vec{r}-\vec{r'}\right|}d^{3}\vec{r'}, \label{ret}
\end{equation}
\begin{equation}
A^{\mu}_{\rm adv}(\vec{r},\;t)=\int_{V_{\rm adv}}
\frac{j^{\mu} \left(\vec{r'},\;t+\left|\vec{r}-\vec{r'}\right|\right)}{\left|\vec{r}-\vec{r'}\right|}d^{3}\vec{r'} + \int_{\partial V_{\rm adv}}
\frac{j^{\mu} \left(\vec{r'},\;t+\left|\vec{r}-\vec{r'}\right|\right)}{\left|\vec{r}-\vec{r'}\right|}d^{3}\vec{r'}. \label{adv}
\end{equation}
The two functionals of $j^{\mu}(\vec{r}, t)$ are related to one another by a time
reversal transformation. The solution (\ref{ret}) is contributed by sources in the
past of the space-time point $p(\vec{r}, t)$ and the solution (\ref{adv}) by sources in the
future of that point. The integrals in the second term on the right side are
the surface integrals that give the contributions from i) sources outside of $V$
and ii) source-free radiation. If $V$ is the causal past ($J^{-}$) and future ($J^{+}$), the surface
integrals do not contribute since material sources both outside $V$ and on the boundary are causally disconnected from $p(\vec{r}, t)$. We also assume the Sommerfeld radiation condition, which makes the source-free radiation vanish.
The linear combinations of electromagnetic solutions are also solutions,
since the equations are linear and the Principle of Superposition holds. It is
usual to consider only the retarded potential as physically meaningful in order
to estimate the electromagnetic field at $p(\vec{r}, t)$: $F^{\mu\nu}_{\rm ret}=\partial^{\mu} A^{\nu}_{\rm ret}- \partial^{\nu}A^{\mu}_{\rm ret}.$ There seems to be no compelling reason, however, for such a choice\footnote{ See, for instance, references \refcite{Fokker}, \refcite{Dirac}, \refcite{W-F1}, \refcite{W-F2}, and \refcite{clarke}.}. We can
adopt, for instance (in what follows we use a simplified notation),
\begin{equation}
A^{\mu}(\vec{r}, t)=\frac{1}{2}\left(\int_{J^{-}} {\rm ret} +\int_{J^{+}} {\rm adv}\right) dV.
\end{equation}
If the sources in the past and future are the same, and the boundary conditions are the same, both solutions are identical. Given the dynamical state of the universe, characterized by an accelerated expansion, the causal past and future of a point $p(\vec{r},\;t)$ are not, however, necessarily symmetric with respect to the number of charges they contain.
If the space-time is curved, the null cones that determine the local causal structure will not be symmetric around the point $p(\vec{r},\;t)$. In particular, the presence of cosmological particle horizons can make the contributions of the two solutions very different. Particle horizons occur whenever a particular system never gets to be influenced by the whole space-time. If a particle crosses the horizon, it will not exert any further action upon the system with respect to which the horizon is defined.
Finding the particle horizons (if any exist at all) requires knowledge of the global space-time geometry. Particle horizons occur in systems undergoing lasting acceleration.
The radius of the past particle horizon is\cite{rindler}:
\begin{equation}
R_{\rm past}= a(t) \int^{t}_{t'=0} \frac{c}{a(t')} dt',
\end{equation}
where $a(t)$ is the time-dependent scale factor of the universe. The radius of the future particle horizon (sometimes called event horizon) is:
\begin{equation}
R_{\rm future}= a(t_0) \int^{\infty}_{t_0} \frac{c}{a(t')} dt'.
\end{equation}
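Both radii are straightforward to evaluate numerically once a scale factor is specified. The sketch below is our own illustration, not part of the argument: the exponential scale factor and the value of $H$ are assumptions chosen purely as an example of lasting accelerated expansion.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c = 1.0   # units with c = 1
H = 1.0   # assumed expansion rate for this example

def a(t):
    # de Sitter-like scale factor: lasting accelerated expansion
    return np.exp(H * t)

def past_horizon(t):
    # R_past = a(t) * int_0^t c/a(t') dt'
    val, _ = quad(lambda tp: c / a(tp), 0.0, t)
    return a(t) * val

def future_horizon(t0):
    # R_future = a(t0) * int_{t0}^infinity c/a(t') dt'
    val, _ = quad(lambda tp: c / a(tp), t0, np.inf)
    return a(t0) * val

# accelerated expansion yields a finite future horizon, here c/H
print(past_horizon(1.0), future_horizon(1.0))
\end{verbatim}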
If the universe is accelerating, as recent observations seem to suggest\cite{permu}, then $J^{+}(p)$ and $J^{-}(p)$ are not symmetric because of the presence of future horizons. This implies that $A^{\mu}_{\rm ret}$ and $A^{\mu}_{\rm adv}$ will be different.
We can then introduce a vector field $L^{\mu}$ given by:
\begin{equation}
L^{\mu}= \left[ \int_{J^{-}}{\rm ret}- \int_{J^{+}}{\rm adv}\right] dV\neq 0.
\end{equation}
If $g_{\mu\nu}L^{\mu}T^{\nu}\neq0$, with $T^{\nu}=(1,0,0,0)$, there is a preferred direction for the flux of electromagnetic energy in space-time. If the sign of $T$ is chosen in such a way that it is positive in the direction of the global expansion of the universe, the electromagnetic flux will go from what we call past to future if $L>0$, i.e. if there is a future particle horizon hiding some electromagnetic currents. The (Poynting) flux is given by:
\begin{equation}
S^{\mu}=( \vec{E}^{2}+\vec{B}^{2},\; \vec{E} \times \vec{B})=(T^{00}_{\rm EM}, \;T^{01}_{\rm EM},\; T^{02}_{\rm EM},\; T^{03}_{\rm EM}),
\end{equation}
where $\vec{E}$ and $\vec{B}$ are the electric and magnetic fields, both determined from $A^{\mu}$, and $T^{\mu\nu}_{\rm EM}$ is the electromagnetic energy-momentum tensor.
In a black hole interior the direction of the Poynting flux is toward the singularity at its center. In an expanding, accelerating universe, it is in the global future direction. Then, the fact that there is a time-like vector field along which the Poynting flux occurs indicates the existence of a future particle horizon. There is a global-to-local relation, given by the Poynting flux as determined by the curvature of space-time, that indicates the direction along which events occur. Physical processes inside a black hole occur along a different direction from those outside. The causal structure of the world is determined by the dynamics of space-time and the initial conditions. Macroscopic irreversibility\footnote{The electromagnetic flux is related to the macroscopic concept of temperature through the Stefan-Boltzmann law: $L=A\sigma_{\rm SB}T^{4}$, where $\sigma_{\rm SB}= 5.670\,400 \times 10^{-8}\ \textrm{J\,s}^{-1}\textrm{m}^{-2}\textrm{K}^{-4}$ is the
Stefan-Boltzmann constant.} emerges from fundamental reversible laws.
There is an important corollary to these conclusions. Local observations about the direction of events can provide information about global features of space-time and the existence of horizons and singularities.
\section{Causal explanations?}
Do the initial conditions of the universe, namely the fact that the gravitational entropy was extremely small, require a causal explanation? What would such an explanation be?
The causal relation is a relation between events (ordered pairs of states), not between things. Causation is a form of event generation\cite{be,bu}. The initial conditions represent a state of a thing (the universe in this case, the maximal thing) and hence have no causal power. The initial conditions are a ``state of affairs''. The causal power should be looked for in previous events, but if space-time itself, as an emergent property of basic things, has a quantum behavior, classical causality would not operate. Rather, the initial conditions should appear as a classical limit of the gravitational processes at quantum level. Final conditions can be causally explained because there are possible causes that precede final conditions; initial conditions, on the contrary, cannot be causally explained because there is no time that precedes them. The initial conditions of the universe, then, should have an explanation in terms of yet unknown dynamical laws. Such laws do not need, and likely do not have, a causal structure.
\section{Final remarks}
Time is an emergent property of changing things. It is represented by a one-dimensional continuum. Processes in space-time are anisotropic, although physical laws are invariant under time reversal. Time itself is not anisotropic, because it is not represented by a vector field. For instance, in a Friedmann-Robertson-Walker-Lema\^{\i}tre model, time is represented by the following real parameter\cite{rindler}:
$$
t=\int\frac{da}{\sqrt{F(a)}},
$$
where $a$ is the cosmic scale factor, $F(a)={C}/{a}+(\Lambda a^{2})/3 - k$, $C=(8/3)\pi G\rho a^{3}$, and the remaining symbols have their usual meaning in the literature\cite{rindler}. This time coincides with proper time along each fundamental worldline, clearly being described by a real number.
Our conclusion is that the dynamical state of space-time and the initial conditions determine the local direction of the physical processes through the electromagnetic Poynting flux. There is a global-to-local relation between gravitation and electrodynamics, between the origin and fate of the universe, and processes such as those in our brains. From the irreversibility observed around us we can infer the existence of a future cosmological particle horizon. \\
\section*{Acknowledgments}
This work has been supported by Grant CONICET PIP 0078. GER thanks Mario Bunge for stimulating comments.
|
2,877,628,091,244 | arxiv | \section*{Supplementary Information}
\section{\label{SM}Samples and experimental techniques}
For this work we used commercially obtained SrTiO$_{3}$ and Sr$_{1-x}$Ca$_{x}$TiO$_{3}$ ($x=0.0022$, 0.0045 and 0.009) single crystals. The nominal calcium concentration of two samples was checked using the Secondary Ion Mass Spectrometry (SIMS) analysis technique as detailed previously\cite{DeLima:2015}. The oxygen content has been changed by heating the samples in vacuum (pressure $10^{-6}-10^{-7}$ mbar) to temperatures of $775-1100$ $^\circ$C. In order to attain carrier densities above $\sim 4\times 10^{18}$ cm$^{-3}$, a piece of titanium has been placed next to the sample during heating. Ohmic contacts have been realized prior to oxygen removal by evaporation of gold contact pads.\\
The electrical measurements have been performed in a Quantum Design Physical Property Measurement System (PPMS) between 1.8 and 300 K as well as in a 17 T dilution refrigerator with a base temperature of 26 mK. Detailed electrical transport information on all samples presented in the main text is listed in Tab. S\ref{tab:Samples} of this supplement.\\
The ac magnetic susceptibility was measured in a homemade set-up, comprising a primary coil and a compensating pick-up coil with two sub-coils with their turns in opposite direction. A Lock-in amplifier was utilized to supply the exciting ac current and pick up the induced voltage signal. The applied ac field was as low as 10 mG with a frequency of 16 kHz.\\
The dielectric permittivity measurements were performed employing a frequency-response analyzer ({\sc Novocontrol} Alpha-Analyzer). Using silver paint, the plate-like samples were prepared as capacitors with typical electrode dimensions of $3 \times 3$~mm$^2$ and a typical thickness of 0.5-0.85~mm. For the evaluation of the as-measured data $C_{meas}$, passivated surface layers were assumed, as described in \cite{Aso:1976}. Such layers can be considered as additional capacitors $C_{surf}$ in series to the remaining bulk specimen $C_{bulk}$, which limits the total capacitance data. Therefore the data were corrected assuming a temperature-independent surface contribution, $C_{bulk}^{-1}=C_{meas}^{-1}-C_{surf}^{-1}$, which results in $\varepsilon(T)$ curves comparable to literature data on surface-etched samples \cite{Aso:1976}. Measurements of $P(E)$-hysteresis loops were performed using the same setup with an additional high-voltage module ({\sc Novocontrol} HVB1000). The actual field dependent polarization was calculated from the non-linear dielectric permittivities up to the tenth order as described in \cite{Niermann:2014}. The thermo-remanent polarization data was gained from the integrated pyro-current as collected with an electrometer (Keithley 6517) after cooling in a poling field of approximately 120~V/mm.\\
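For reference, the series correction and the conversion to a relative permittivity amount to a few lines of code. The sketch below is our own illustration; the helper names and the plate-capacitor geometry formula are assumptions, with $C_{surf}$ treated as a temperature-independent fit parameter.
\begin{verbatim}
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def bulk_capacitance(C_meas, C_surf):
    # series correction: 1/C_bulk = 1/C_meas - 1/C_surf
    return 1.0 / (1.0 / C_meas - 1.0 / C_surf)

def relative_permittivity(C_bulk, thickness, area):
    # plate capacitor: C = eps0 * eps_r * area / thickness
    return C_bulk * thickness / (EPS0 * area)
\end{verbatim}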
A home-built capacitance dilatometer has been used to detect the uniaxial length changes $\Delta L(T)$ while continuously heating the crystal from about 5 to 150~K with a rate of about $0.1$~K/min. Here, $\Delta L(T)$ was measured along the [100] directions of Sr$_{1-x}$Ca$_{x}$TiO$_{3-\delta}$ single crystals with total lengths $L_0\simeq 2$~mm and the uniaxial thermal expansion coefficient $\alpha=1/L_0 \partial \Delta L/\partial T$ has been derived numerically.\\
The Raman measurements were performed using the 532 nm line of a Diode Pumped Solid State (DPSS) laser. An incident power of 5 mW was focused on a spot of approximately 50 $\times$ 80 $\mu$m. Power dependence measurements at low temperature indicated negligible laser heating for this incident power. The inelastically scattered photons were analyzed using a triple grating spectrometer working in subtractive configuration and equipped with a nitrogen-cooled Charge Coupled Device (CCD) camera. The spectral resolution was about 1.5 cm$^{-1}$. All spectra were recorded with linearly polarized and parallel incoming and outgoing photons. The crystals were cooled using a closed-cycle optical cryostat with a base temperature of 3 K.\\
\begin{table}
\centering
\caption{\label{tab:Samples} Details of the Sr$_{1-x}$Ca$_{x}$TiO$_{3-\delta}$ samples. Hall carrier density ($n_H$), room temperature ($\rho_{\textnormal{300K}}$) and 2 K ($\rho_{\textnormal{2K}}$) resistivity, the ratio RRR$=\rho_{\textnormal{300K}}/\rho_{\textnormal{2K}}$, the Hall mobility at 2 K ($\mu_{H-\textnormal{2K}}$) and the Curie temperature seen by resistivity are specified.}
\footnotesize
\begin{tabular}{ccccccc}
\hline
$x$ & $n_H$ & $\rho_{\textnormal{300K}}$ & $\rho_{\textnormal{2K}}$ & RRR & $\mu_{H-\textnormal{2K}}$ & $T_{Curie,{\rho}_{xx}}$ \\
& $10^{18}$cm$^{-3}$ & m$\Omega$cm & m$\Omega$cm & & cm$^2$/Vs & K\\
\hline
\hline
0.0022 & 0.72 & 1840 & 2.30 & 800 & 3770 & $9.6 \pm 0.4$ \\
0.0022 & 1.1 & 1380 & 2.31 & 600 & 2456 & $ 7.8\pm 0.5$ \\
0.0022 & 3.3 & 553 & 1.08 & 512 & 1751 & $ 6.8\pm 0.5$ \\
0.0022 & 4.4 & 344 & 0.847 & 406 & 1675 & $ 3.7\pm 0.4 $\\
0.0022 & 6.8 & 87 & 0.625 & 139 & 1469 & 0\\
\hline
0.0045 & 0.66 & 1830 & 3.43 & 535 & 2760 & $ 19.1\pm 1 $ \\
\hline
0.009 & 0.83 & 1470 & 6.37 & 231 & 1183 & $24.5\pm 1$\\
0.009 & 1.5 & 857 & 4.17 & 206 & 1005 & $24.4 \pm 1$\\
0.009 & 3.8 & 394 & 2.27 & 174 & 724 & $19.8 \pm 0.4$\\
0.009 & 4.4 & 346 & 2.17 & 159 & 654 & $ 18.1 \pm 0.4$\\
0.009 & 7.2 & 217 & 1.48 & 147 & 586 & $17.7 \pm 0.8$\\
0.009 & 13 & 132 & 0.941 & 140 & 510 & $9.1 \pm 0.4$\\
0.009 & 20 & 91.3 & 0.656 & 139 & 476 & 0\\
0.009 & 26 & 70.4 & 0.542 & 130 & 443 & 0\\
\hline
\end{tabular}
\normalsize
\end{table}
\section{\label{Tdep}Temperature dependence of resistivity}
Figures S\ref{FigSI1} and S\ref{FigSI2} plot the resistivity $\rho_{xx}$ as well as its derivative $d\rho_{xx}/dT$ measured on Sr$_{1-x}$Ca$_{x}$TiO$_{3-\delta}$ samples with Ca contents of $x=0.0022$ and 0.009, respectively, as a function of temperature $T$. The arrows mark the temperatures associated with the resistivity anomaly and the Curie temperature $T_{Curie,\rho_{xx}}$ seen in resistivity plotted in Fig. 2c of the main text (see also Tab. S\ref{tab:Samples}). The temperatures $T_{Curie,\rho_{xx}}$ have been taken as the temperature at which $d\rho_{xx}/dT$ shows a kink.\\
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{FigureSI3a.pdf}
\caption{Temperature dependence of resistivity $\rho_{xx}$ and its first derivative $d\rho_{xx}/dT$ in metallic Sr$_{0.9978}$Ca$_{0.0022}$TiO$_{3-\delta}$. The arrows mark the temperatures associated with the resistivity anomaly and the Curie temperature $T_{Curie,\rho_{xx}}$ seen in resistivity.}
\label{FigSI1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{FigureSI3b.pdf}
\caption{Temperature dependence of resistivity $\rho_{xx}$ and its first derivative $d\rho_{xx}/dT$ in metallic Sr$_{0.991}$Ca$_{0.009}$TiO$_{3-\delta}$.}
\label{FigSI2}
\end{figure}
\clearpage
\section{\label{US}Temperature dependence of thermo-remanent polarization}
Fig. S\ref{FigSI5tp} displays the thermo-remanent polarization in the system Sr$_{0.991}$Ca$_{0.009}$TiO$_{3}$. A theoretical description of the quantum ferroelectric regime according to the transverse Ising model yields values for the saturation polarization reaching up to 20\,mC/m$^2$ for a comparable Ca concentration \cite{Guo:2012}. However, as we are dealing only with the remanent polarization, the maximum value at low temperatures lies around 20\,mC/m$^2$, smaller due to domain formation but still of the same order of magnitude. Upon heating above the ferroelectric ordering temperature $T_{Curie}$ near 25\,K, $P(T)$ shows a steep fall, but does not vanish completely. Above $T_{Curie}$, there is a finite frozen polarization, as seen by the opening of the $P(E)$ hysteresis loops shown in the inset of Fig. S\ref{FigSI5tp}. Obviously, polar entities like ferroelectric clusters or polar structural domain walls persist to temperatures well above the ferroelectric transition. The latter is in accord with results gained from ultrasound experiments even in pure STO \cite{Salje:2013}. A corresponding characterization of the oxygen-reduced samples obviously cannot be made as the macroscopic polarization features are shielded by the metallic background.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{Fig_STO09CaPyro.pdf}
\caption{Thermo-remanent polarization measured in an insulating sample of Sr$_{0.991}$Ca$_{0.009}$TiO$_{3}$ after cooling in a poling field of 120\,V/mm. The inset shows $P(E)$ loops measured at various temperatures between 5 and 250\,K.}
\label{FigSI5tp}
\end{figure}
\section{\label{US}Detection of the phase transition with sound-velocity measurements}
The sound velocity was measured in transmission geometry using longitudinal 10~MHz PZT-transducers as emitter and detector. A network analyzer (Rohde \& Schwarz ZVB4) with time domain option was used to determine the transit time through plate-like samples with a length of typically 5 mm\cite{Balashova:1996}. The FFT-representation of the response carries an absolute time resolution of the reciprocal resonance frequency, i.e., approximately 100 ns. However, relative changes of the transit time can be determined with much higher resolution.\\
The sound velocity as a function of temperature in an insulating ($\delta = 0$) and metallic ($\delta \neq 0$) Sr$_{0.991}$Ca$_{0.009}$TiO$_{3-\delta}$ sample is shown in Fig. S\ref{FigSI7}. In both cases the phase transition gives rise to an anomaly near the Curie temperature.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{FigureSI7.pdf}
\caption{Sound velocity as a function of temperature in insulating ($\delta = 0$) and metallic ($\delta \neq 0$, $n=1.4 \times 10^{18}$ cm$^{-3}$) Sr$_{0.991}$Ca$_{0.009}$TiO$_{3-\delta}$ samples. The arrow depicts the Curie temperature.}
\label{FigSI7}
\end{figure}
\section{\label{QO}Quantum oscillations}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{FigureSI5.pdf}
\caption{Quantum oscillations detected at low temperatures on metallic Sr$_{0.991}$Ca$_{0.009}$TiO$_{3-\delta}$. Each panel plots the oscillating part $\Delta\rho_{xx}$ of the magnetoresistance, obtained after subtracting a smooth background, as a function of inverse magnetic field $1/B$.}
\label{FigSI3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{FigureSI6.pdf}
\caption{Oscillation period F detected on Sr$_{0.991}$Ca$_{0.009}$TiO$_{3-\delta}$ as a function of carrier density together with data obtained on SrTiO$_3$ \cite{Lin:2014}. The doping levels $n_{c1}$ and $n_{c2}$ correspond to the threshold for the filling of a second and third band.}
\label{FigSI4}
\end{figure}
Quantum oscillations in SrTiO$_{3-\delta}$ have been systematically studied for a large range of carrier densities ($10^{17} - 10^{20}$ cm$^{-3}$) using electric and thermoelectric measurements \cite{Lin:2013,Lin:2014}. Fig. S\ref{FigSI3} plots the oscillating part $\Delta\rho_{xx}$ of the magnetoresistance, obtained after subtracting a smooth background, as a function of inverse magnetic field $1/B$ for metallic Sr$_{1-x}$Ca$_{x}$TiO$_{3-\delta}$ ($x=0.009$) samples. The detected oscillation periods $F$ are plotted as a function of carrier density in Fig. S\ref{FigSI4} and compared with data obtained on SrTiO$_{3-\delta}$ \cite{Lin:2014}. Lin et al. identified two critical doping levels $n_{c1}\approx 1.5\times 10^{18}$ and $n_{c2}\approx 3 \times 10^{19}$ cm$^{-3}$ that correspond to the thresholds for the filling of a second and third band and which are associated with the appearance of a second or third oscillation period, respectively. As seen in Fig. S\ref{FigSI4}, the detected periods obtained on Sr$_{1-x}$Ca$_{x}$TiO$_{3-\delta}$ agree well with those obtained on Ca-free STO. However, due to the $3-5$ times lower mobility in the Ca-doped samples compared to pure STO, only the lowest oscillation period could be resolved with certainty for $n>n_{c1}$.\\
Even for low carrier concentrations below $n_{c1}$ the Fermi surface of n-doped SrTiO$_{3-\delta}$ is not a perfect sphere but an ellipsoid squeezed along the c-axis due to the tetragonal distortion of the lattice. As done in \cite{Lin:2014} we estimated the carrier concentration $n_{SdH}$ from the oscillations using the magnitude of the tetragonal distortion reported by Allen et al. \cite{Allen:2013}. The values of $n_{SdH}$ given in the main text were calculated as $n_{SdH}=1.26\times n_{sphere}$ with $n_{sphere}=k_{F}^{3}/3\pi^2$ and $k_{F}=\sqrt{2eF/\hbar}$ with the oscillation period $F$.
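For convenience, the conversion from an oscillation period $F$ (in tesla) to $n_{SdH}$ can be written out explicitly. The sketch below is our own; the constants are SI values and the factor 1.26 is the geometric correction quoted above.
\begin{verbatim}
import numpy as np

e = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34  # reduced Planck constant, J s

def n_sdh(F_tesla, ellipsoid_factor=1.26):
    # k_F = sqrt(2 e F / hbar);  n_sphere = k_F^3 / (3 pi^2)
    k_F = np.sqrt(2.0 * e * F_tesla / hbar)   # m^-1
    n_sphere = k_F**3 / (3.0 * np.pi**2)      # m^-3
    return ellipsoid_factor * n_sphere

print(n_sdh(20.0) * 1e-6, "cm^-3")  # convert from m^-3 to cm^-3
\end{verbatim}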
|
2,877,628,091,245 | arxiv | \section{Introduction}
Distinguishing entanglement from separability is one of the most
important question in the theory of quantum entanglement, and the
positive partial transpose (PPT) criterion \cite{choi-ppt,peres}
gives a simple but strong necessary condition for separability. The
PPT condition is actually equivalent to the separability if the rank
of a given PPT state is sufficiently low by \cite{hlvc}. In the case
that the rank is not so high, it turns out that the local geometry is
quite useful to distinguish and construct entanglement among PPT
states. The basic idea is to consider the smallest faces determined by a
given separable state in the convex sets of all separable states and
all PPT states respectively, and compare those. See
\cite{ha+kye_unique_decom,ha+kye_2x4} for recent progresses in this
direction.
In this paper, we turn our attention to the global geometries for separable and PPT states,
and look for the differences.
We denote by $\mathbb S_{m,n}$ the convex set of all $m\otimes n$ separable states,
and by $\mathbb T_{m,n}$ the convex set of all $m\ot n$ PPT states.
For the convex set $\mathbb S_{m,n}$, it is easy to see that a convex combination
of two extreme points is always on the boundary of the convex set. Actually, the line
segment between two extreme points of $\mathbb S_{2,2}$ is already a nontrivial face of the convex set, in most cases.
See \cite{kye_trigono} for more details for the convex geometry of $\mathbb S_{2,2}$.
More generally, the convex hull of $\max\{m,n\}$ extreme points of $\mathbb S_{m,n}$
is a face of the convex set, in most cases by \cite{alfsen,kirk}.
Therefore, it is natural to ask how these properties are retained for the convex set $\mathbb T_{m,n}$.
For a convex compact set $C$ in a finite dimensional real vector space,
we introduce the number $\nu(C)$, the smallest natural number $k$ such that the convex combination
of $k$ extreme points of $C$ may be an interior point of $C$. We recall that the interior of a convex set
is defined with respect to the relative topology induced by the affine manifold generated by the set itself.
Sometimes, it is more convenient to consider the convex cone $\tilde C$ generated by the convex compact set $C$.
For example, $\tilde{\mathbb S}_{m,n}$ ($\tilde{\mathbb T}_{m,n}$, respectively)
is the convex cone of all $m\otimes n$ unnormalized separable (PPT, respectively) states.
In this case, we may replace extreme points by extreme rays to get the same number $\nu(C)$.
Recall that a point $x$ of a convex compact set $C$ is an extreme point of $C$ if and only if $x$ generates an extreme ray of
the convex cone $\tilde C$, whenever the hyperplane generated by $C$ does not contain the origin.
It is easy to see that
$$
\nu(\mathbb S_{m,n})=mn,
$$
for every $m,n=2,3,\dots$. The main purpose of this note is to show that
$$
\nu(\mathbb T_{3,3})=2,
$$
to see the geometric difference between $\mathbb S_{3,3}$ and $\mathbb T_{3,3}$.
In other words, we can take just two extreme points of $\mathbb T_{3,3}$ whose convex combinations
belong to the interior of $\mathbb T_{3,3}$.
To do this, we consider the following $3\otimes 3$ states
$$
\varrho_{b,\theta}
=\left(
\begin{array}{ccccccccccc}
p_\theta &\cdot &\cdot &\cdot &-e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} \\
\cdot &\frac 1b &\cdot &-e^{-i\theta} &\cdot &\cdot &\cdot &\cdot &\cdot \\
\cdot &\cdot &b &\cdot &\cdot &\cdot &-e^{i\theta} &\cdot &\cdot \\
\cdot &-e^{i\theta} &\cdot &b &\cdot &\cdot &\cdot &\cdot &\cdot \\
-e^{-i\theta} &\cdot &\cdot &\cdot &p_\theta &\cdot &\cdot &\cdot &-e^{i\theta} \\
\cdot &\cdot &\cdot &\cdot &\cdot &\frac 1b &\cdot &-e^{-i\theta} &\cdot \\
\cdot &\cdot &-e^{-i\theta} &\cdot &\cdot &\cdot &\frac 1b &\cdot &\cdot \\
\cdot &\cdot &\cdot &\cdot &\cdot &-e^{i\theta} &\cdot &b &\cdot \\
-e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} &\cdot &\cdot &\cdot &p_\theta
\end{array}
\right)
$$
for a given positive number $b>0$ and a real number $\theta$, where $\cdot$ denotes zero and
$$
p_\theta=\max\{ e^{i(\theta-\frac 23 \pi)}+e^{-i(\theta-\frac23 \pi)},
e^{i\theta}+e^{-i\theta},
e^{i(\theta+\frac 23 \pi)}+e^{-i(\theta+\frac 23 \pi)}\}
$$
is the smallest positive number $a$ so that the following $3\times
3$ matrix
$$
\left(
\begin{matrix}
a & -e^{i\theta} & -e^{-i\theta}\\
-e^{-i\theta} & a & -e^{i\theta}\\
-e^{i\theta} & -e^{-i\theta} & a
\end{matrix}
\right)
$$
is positive, as it was discussed in Section 2 of \cite{ha+kye_Choi}.
We note that $1\le p_\theta\le 2$. Therefore, it is immediate to see that $\varrho_{b,\theta}$ is a PPT state.
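This claim is easy to confirm numerically. The sketch below is our own verification code (assuming numpy); it builds the unnormalized matrix $\varrho_{b,\theta}$ and checks that both it and its partial transpose are positive semidefinite.
\begin{verbatim}
import numpy as np

def p_theta(theta):
    return max(2*np.cos(theta - 2*np.pi/3), 2*np.cos(theta),
               2*np.cos(theta + 2*np.pi/3))

def rho(b, theta):
    # unnormalized rho_{b,theta}; index 3*i + k encodes |i> (x) |k>
    E, p = np.exp(1j*theta), p_theta(theta)
    R = np.zeros((9, 9), dtype=complex)
    np.fill_diagonal(R, [p, 1/b, b, b, p, 1/b, 1/b, b, p])
    for i, j, v in [(0,4,-E), (0,8,-E.conj()), (4,8,-E),
                    (1,3,-E.conj()), (2,6,-E), (5,7,-E.conj())]:
        R[i,j], R[j,i] = v, np.conj(v)
    return R

def partial_transpose(R):
    # transpose the first tensor factor of M_3 (x) M_3
    return R.reshape(3,3,3,3).transpose(2,1,0,3).reshape(9,9)

R = rho(2.0, np.pi/6)
print(np.linalg.eigvalsh(R).min() >= -1e-12)                     # True
print(np.linalg.eigvalsh(partial_transpose(R)).min() >= -1e-12)  # True
\end{verbatim}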
These PPT states have been constructed in \cite{kye_osaka} for $-\pi/3<\theta<\pi/3$. The main point is to
extend this construction for the full range of $\theta$.
We check that they are extreme points of $\mathbb T_{3,3}$ in most cases, with a few exceptions.
Note that the case of $b=2$ and $\theta=\pi/6$ has been checked to be extreme in \cite{chen_dj_3x3}.
If we divide the parameter $e^{i\theta}$ into three arcs and take any two extreme points from different arcs then their
convex combinations lie in the interior of $\mathbb T_{3,3}$. We see that some of them turn out to be even
in the interior of $\mathbb S_{3,3}$.
It had been asked in \cite{chen_dj_extreme_ppt} whether the sum of two PPT entangled extreme states can be separable,
and the authors \cite{ha+kye_unique_decom} gave an affirmative answer. More precisely, it was shown that
the sum of two extreme PPT entangled states with rank four may be separable. Our construction in this paper shows that
the sum of two extreme PPT entangled states with rank five may even be a diagonal matrix with positive diagonal entries.
Let $M_n$ be the $C^*$-algebra consisting of all $n\times n$ matrices over the complex field.
We also consider the same question for the convex cone $\mathbb P_{m,n}$ (respectively $\mathbb D_{m,n}$) of all
positive maps (respectively decomposable positive maps) from $M_m$ into $M_n$, to show that
$\nu(\mathbb D_{m,n})\ge m+n-2$.
In the case of $n=m=3$, we have $\nu(\mathbb D_{3,3})=4$ and $\nu(\mathbb P_{3,3})=2$.
By the Jamio\l kowski-Choi isomorphism \cite{choi75-10,jami}, the cones $\mathbb P_{m,n}$ and $\mathbb D_{m,n}$
are considered as subsets of $M_m\ot M_n$, and we have the relation
$$
\tilde{\mathbb S}_{m,n}\subset \tilde{\mathbb T}_{m,n}\subset \mathbb D_{m,n}\subset\mathbb P_{m,n}.
$$
We also recall \cite{eom-kye} that $\tilde{\mathbb S}_{m,n}$ and $\mathbb P_{m,n}$ (respectively $\tilde{\mathbb T}_{m,n}$ and $\mathbb D_{m,n}$)
are dual to each others with respect to the bilinear pairing
\begin{equation}\label{pairing}
\langle \rho,\phi\rangle=\text{\rm Tr}(\rho C_{\phi}^{\rm t}),\qquad
\rho\in \tilde{\mathbb S}_{m,n},\ \phi \in \mathbb P_{m,n},
\end{equation}
where $C_{\phi}$ is the Choi matrix of $\phi$ defined by
$\sum |i\rangle \langle j|\otimes \phi(|i\rangle \langle j|)$, and interior points of these convex sets can be characterized by the above duality:
\begin{itemize}
\item $\rho$ is an interior point of $\mathbb S_{m,n}$ if and only if $\langle \rho,\phi\rangle >0$ for all nonzero $\phi\in \mathbb
P_{m,n}$.
\item $\phi$ is an interior point of $\mathbb P_{m,n}$ if and only if $\langle \rho,\phi\rangle>0$ for all $\rho\in \mathbb
S_{m,n}$.
\end{itemize}
In this characterization of interior points of convex sets, we note that
it suffices to check the positivity of the pairing only for extreme points (rays) of the dual convex set (convex cone).
See Propositions 5.1 and 5.4 of \cite{kye_ritsu}.
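In explicit computations, the Choi matrix and the pairing \eqref{pairing} take a very concrete form. The sketch below is our own helper code (assuming numpy), included only for orientation.
\begin{verbatim}
import numpy as np

def choi(phi, m, n):
    # C_phi = sum_{i,j} |i><j| (x) phi(|i><j|)  in  M_m (x) M_n
    C = np.zeros((m*n, m*n), dtype=complex)
    for i in range(m):
        for j in range(m):
            Eij = np.zeros((m, m), dtype=complex)
            Eij[i, j] = 1.0
            C[i*n:(i+1)*n, j*n:(j+1)*n] = phi(Eij)
    return C

def pairing(rho, phi, m, n):
    # <rho, phi> = Tr(rho * C_phi^t)
    return np.trace(rho @ choi(phi, m, n).T).real

# the normalized identity, an interior separable state, pairs
# strictly positively with the transpose map, a positive map on M_3
print(pairing(np.eye(9) / 9, lambda X: X.T, 3, 3))  # 1/3
\end{verbatim}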
In the next section, we examine the properties of the states $\varrho_{b,\theta}$ and how to choose two of them
to get an interior point by their convex combination. In Section 3, we show that they are extreme points in
$\mathbb T_{3,3}$, in most cases. We consider the convex cones $\mathbb P_{3,3}$ and $\mathbb D_{m,n}$ in Section 4,
and close this note with discussions in Section 5.
\section{Separable states and PPT states}
The facial structures for the convex set $\mathbb T_{m,n}$ are well understood \cite{ha_kye_04}.
Every face of $\mathbb T_{m,n}$ is of the form
$$
\tau(D,E)=\{\varrho\in\mathbb T_{m,n}: {\mathcal R}\varrho\subset D,\ {\mathcal R}\varrho^\Gamma \subset E\},
$$
for subspaces $D$ and $E$ of $\mathbb C^m\ot\mathbb C^n$, and its interior is given by
$$
\inte\tau(D,E)=\{\varrho\in\mathbb T_{m,n}: {\mathcal R}\varrho= D,\ {\mathcal R}\varrho^\Gamma = E\},
$$
where ${\mathcal R}\varrho$ denotes the range space of $\varrho$, and $\varrho^\Gamma$ is the partial
transpose of $\varrho$. Especially, a PPT state $\varrho$ is an interior point
of $\mathbb T_{m,n}$ if and only if the ranges of $\varrho$ and $\varrho^\Gamma$ are full spaces.
Extreme points of the convex set $\mathbb S_{m,n}$ are nothing but product states by the definition of separability.
If we take $k$ product states with $k<mn$ and form a separable state $\varrho\in\mathbb S_{m,n}$ with their
convex combination then
the range space of $\varrho$ is never the full space, and so $\varrho$ is on the boundary of $\mathbb T_{m,n}$.
By the relation $\mathbb S_{m,n}\subset\mathbb T_{m,n}$, we conclude that $\varrho$ is also on the boundary
of $\mathbb S_{m,n}$. Therefore, we have $\nu(\mathbb S_{m,n})\ge mn$. Since the identity matrix is in the interior
of $\mathbb S_{m,n}$, we conclude that $\nu(\mathbb S_{m,n})= mn$.
In fact, it is easy to see that every diagonal matrix with strictly positive diagonal entries is an interior point
of the convex set $\mathbb S_{m,n}$, by the duality between separable states and positive maps.
Now, we proceed to examine the properties of the states $\varrho_{b,\theta}$. We also consider the PPT states
defined by
$$
\sigma_{b,\theta}=
\left(
\begin{array}{ccccccccccc}
p_\theta &\cdot &\cdot &\cdot &-e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} \\
\cdot &\frac 1b &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\
\cdot &\cdot &b &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\
\cdot &\cdot &\cdot &b &\cdot &\cdot &\cdot &\cdot &\cdot \\
-e^{-i\theta} &\cdot &\cdot &\cdot &p_\theta &\cdot &\cdot &\cdot &-e^{i\theta} \\
\cdot &\cdot &\cdot &\cdot &\cdot &\frac 1b &\cdot &\cdot &\cdot \\
\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\frac 1b &\cdot &\cdot \\
\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &b &\cdot \\
-e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} &\cdot &\cdot &\cdot &p_\theta
\end{array}
\right).
$$
If $0<|\theta|<\frac\pi 3$
then $\sigma_{b,\theta}$ is nothing but the PPT entangled edge state of type $(8,6)$ constructed in \cite{kye_osaka}.
We recall that a PPT state $\varrho$ is said to be of type $(p,q)$ if the ranks of $\varrho$ and $\varrho^\Gamma$ are $p$ and $q$,
respectively. We note that the state $\varrho_{b,\theta}$ defined in Introduction is given by
$$
\varrho_{b,\theta}
=\sigma_{b,\theta}+\sigma_{b,\theta}^\Gamma -{\text{\rm Diag}}\, \sigma_{b,\theta},
$$
which is block-wise symmetric.
If $\theta=0$ then $\sigma_{b,0}$ was shown to be separable for each $b>0$ in \cite{kye_osaka}.
On the other hand, if $\theta=\pi$ then $\sigma_{b,\pi}$ was shown \cite{ha+kye_sep} to be separable if and only if $b=1$.
When $b\neq 1$, we note that $\sigma_{b,\pi}$ is nothing but the PPT entangled state given by St\o rmer \cite{stormer82} in the early eighties.
We also know that both $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$ are PPT entangled edge states for $0<|\theta|<\frac\pi 3$ by \cite{kye_osaka}.
Now, we turn our attention to the state $\varrho_{b,\pi}$.
We note that $\varrho_{1,\pi}$ is the separable state given by the following four product vectors
$$
\begin{aligned}
&(1,1,1)^{\text{\rm t}} \otimes (1,1,1)^{\text{\rm t}},\, &
&(1,1,-1)^{\text{\rm t}} \otimes (1,1,-1)^{\text{\rm t}},\\
&(1,-1,1)^{\text{\rm t}} \otimes (1,-1,1)^{\text{\rm t}},\,
&(-1,1,1)^{\text{\rm t}} \otimes (-1,1,1)^{\text{\rm t}},\\
\end{aligned}
$$
as it was shown in \cite{ha+kye_unique_decom}. The states $\varrho_{b,\pi}$ with $b\neq 1$ appear in the construction \cite{ha+kye}
of PPT entangled states of type $(4,4)$ using the duality between positive linear maps and separable states.
The special case $\varrho_{2,\pi}$ is just the first example
of a $3\ot 3$ PPT entangled state given by Choi \cite{choi-ppt}. In short, we see that $\varrho_{b,\pi}$ is separable if and only if $b=1$.
If we take the diagonal unitary $U={\text{\rm Diag}}(1,e^{-\frac 23\pi i},e^{\frac 23\pi i})$, then we have
$$
U^{-1}
\left(
\begin{matrix}
p_\theta & -e^{i\theta} & -e^{-i\theta}\\
-e^{-i\theta} & p_\theta & -e^{i\theta}\\
-e^{i\theta} & -e^{-i\theta} & p_\theta
\end{matrix}
\right)U
=
\left(
\begin{matrix}
p_\theta & -e^{i(\theta-\frac23\pi)} & -e^{-i(\theta-\frac23\pi)}\\
-e^{-i(\theta-\frac23\pi)} & p_\theta & -e^{i(\theta-\frac23\pi)}\\
-e^{i(\theta-\frac23\pi)} & -e^{-i(\theta-\frac23\pi)} & p_\theta
\end{matrix}\right),
$$
and so it follows that
\begin{equation}\label{unitary}
(I\ot U)^{-1}\varrho_{b,\theta}(I\ot U)=\varrho_{b,\theta-\frac 23\pi},\qquad
(I\ot U)^{-1}\sigma_{b,\theta}(I\ot U)=\sigma_{b,\theta-\frac 23\pi}.
\end{equation}
Therefore, the separability and PPT properties of $\varrho_{b,\theta}$ and $\sigma_{b,\theta}$
are invariant under the translation of $\theta$ by $\frac 23\pi$,
as are the types of the states.
Hence, we have the following:
\begin{theorem}
For the states $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$, we have the following:
\begin{enumerate}
\item[(i)]
If $\theta\neq \frac n3\pi$ for any integer $n$, then $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$ are
PPT entangled edge states of type $(8,6)$ and $(5,5)$, respectively.
\item[(ii)]
If $\theta=\frac n3\pi$ for an even integer $n$,
then $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$ are separable states of type $(8,6)$ and $(5,5)$, respectively.
\item[(iii)]
If $\theta=\frac n3\pi$ for an odd integer $n$ and $b=1$,
then $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$ are separable states of type $(7,6)$ and $(4,4)$, respectively.
\item[(iv)]
If $\theta=\frac n3\pi$ for an odd integer $n$ and $b\neq 1$,
then $\sigma_{b,\theta}$ and $\varrho_{b,\theta}$ are PPT entangled states of type $(7,6)$ and $(4,4)$, respectively.
\end{enumerate}
\end{theorem}
The separability and entangledness of the states $\varrho_{b,\theta}$ and $\sigma_{b,\theta}$ are summarized in
Figure 1.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{b_neq_1}\hskip 2truecm
\includegraphics[scale=0.5]{b_eq_1}
\end{center}
\caption{The points on the arcs represent PPT entangled states, and the small circles represent separable states. }
\end{figure}
We note that the circle $\{e^{i\theta}:\theta\in\mathbb R\}$ is divided by three arcs by the range of the variable $\theta$:
$$
\left(-\pi,\,-\frac \pi 3\right),\qquad \left(-\frac\pi 3,\,\frac\pi 3\right),\qquad \left(\frac\pi 3,\,\pi\right).
$$
We also note that the following three vectors
$$
\begin{aligned}
w_1(\theta)=&(0,b,0\,;\,e^{i\theta},0,0\,;\,0,0,0),\\
w_2(\theta)=&(0,0,0\,;\,0,0,b\,;\,0,e^{i\theta},0),\\
w_3(\theta)=&(0,0,e^{i\theta}\,;\,0,0,0\,;\,b,0,0),
\end{aligned}
$$
belong to the kernel of $\varrho_{b,\theta}$, regardless of the values of $b$ and $\theta$. There are extra kernel vectors:
$$
\begin{aligned}
w_-&=(1,0,0\,;\,0,e^{\frac 23\pi i},0\,;\,0,0,e^{-\frac 23\pi i}),\qquad &-\pi<&\theta<-\frac\pi 3,\\
w_0&=(1,0,0\,;\,0,1,0\,;\,0,0,1),\qquad &-\frac\pi 3<&\theta<+\frac\pi 3,\\
w_+&=(1,0,0\,;\,0,e^{-\frac 23\pi i},0\,;\,0,0,e^{\frac 23\pi i}),\qquad &\frac\pi 3<&\theta<\pi.
\end{aligned}
$$
If we take $(b,\theta)$ and $(c,\tau)$ so that $e^{i\theta}$ and $e^{i\tau}$ belong to the different arcs, then it is clear that
the kernels of $\varrho_{b,\theta}$ and $\varrho_{c,\tau}$ have the trivial intersection. This means that
any nontrivial convex combination $\rho$ of these two states has the full range space, as does its partial transpose.
Therefore, we conclude that this PPT state $\rho$ belongs to the interior of the convex set $\mathbb T_{3,3}$.
In the next section, we will show that each state
$\varrho_{b,\theta}$ is an extreme point in the convex set
$\mathbb T_{3,3}$ consisting of all $3\ot 3$ PPT states, whenever $\theta\neq \frac n3\pi$ for an integer $n$.
From this, we conclude that $\nu(\mathbb T_{3,3})=2$.
If we take $(b,\theta)$ and $(c,\tau)$ so that $e^{i\theta}$ and $e^{i\tau}$ belong to the same arc, then
we note that the convex combinations
of $\varrho_{b,\theta}$ and $\varrho_{c,\tau}$ are on the boundary.
For a given $e^{i\theta}$, we take the antipodal point $e^{i(\theta+\pi)}=-e^{i\theta}$ then we see that
$$
\dfrac12(\varrho_{b,\theta}+\varrho_{c,\theta+\pi})
$$
is a diagonal matrix, and so it is separable. Actually, it is an interior point of $\mathbb S_{3,3}$,
since there is no zero entry in the diagonal.
This shows that the convex combination of two extreme PPT states may be in the interior of the convex set $\mathbb S_{3,3}$
of all separable states. We note that $p_\theta+p_{\theta+\pi}>2$ for each $\theta$,
and so we may take $b>0$ so that $b+\frac 1b=p_\theta+p_{\theta+\pi}$.
Then we see that the sum $\varrho_{b,\theta}+\varrho_{1/b,\theta+\pi}$ of two extreme PPT states is a scalar multiple of the identity matrix.
\section{Extremeness}
First, we briefly explain the method \cite{leinass_mo, ha_ext,augusiak_gkl} to check if a given face $\tau(D,E)$
is an extreme point or not,
where $D$ and $E$ are subspaces of $\mathbb C^m\otimes \mathbb C^n$. Let $(M_m\otimes M_n)_h$ be the real Hilbert space of all
$mn\times mn$ hermitian matrices in $M_m\otimes M_n$ with the inner product $\langle X,Y\rangle=\text{Tr}(YX^{\rm t})$,
and orthogonal projections $P_D$ and $P_E$ in
$(M_m\otimes M_n)_h$ onto $D$ and $E$, respectively. We define real linear maps $\phi_D$ and $\phi_E$ between $(M_m\otimes M_n)_h$ by
\[
\phi_D(X)=P_D X P_D-X,\quad \phi_E(X)=(P_E X^{\Gamma}P_E)^{\Gamma}-X, \quad X\in (M_m\otimes M_n)_h.
\]
Then we see that $\tau(D,E)\subset \text{Ker} \,\phi_D \cap \text{Ker}\,\phi_E$, where $\text{Ker}\,\phi_D$
denotes the kernel space of $\phi_D$. Therefore,
if $\text{Ker} \,\phi_D \cap \text{Ker}\,\phi_E$ is one-dimensional then $\tau(D,E)$ must be an extreme
point. It is not so difficult to see that the converse also holds.
Thus, we can conclude that $\tau(D,E)$ is an extreme point if and only if the condition
\[
\text{dim}(\text{Ker} \,\phi_D \cap \text{Ker}\,\phi_E)=1
\]
holds.
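This criterion reduces extremeness to plain linear algebra: represent $\phi_D$ and $\phi_E$ as real-linear operators on the space of hermitian matrices and compute the dimension of their common kernel. The rough numerical sketch below is our own (assuming numpy, and reusing rho and partial_transpose from the earlier sketch).
\begin{verbatim}
import numpy as np

def herm_basis(d):
    # real basis of the d x d hermitian matrices (d*d elements)
    B = []
    for i in range(d):
        for j in range(i, d):
            S = np.zeros((d, d), dtype=complex)
            S[i, j] = S[j, i] = 1.0
            B.append(S)
            if i < j:
                A = np.zeros((d, d), dtype=complex)
                A[i, j], A[j, i] = 1j, -1j
                B.append(A)
    return B

def projector(R, tol=1e-9):
    # orthogonal projection onto the range of a hermitian matrix
    vals, vecs = np.linalg.eigh(R)
    V = vecs[:, vals > tol]
    return V @ V.conj().T

def kernel_intersection_dim(R, tol=1e-7):
    d = R.shape[0]
    P, Q = projector(R), projector(partial_transpose(R))
    cols = []
    for X in herm_basis(d):
        phiD = P @ X @ P - X
        phiE = partial_transpose(Q @ partial_transpose(X) @ Q) - X
        cols.append(np.concatenate([phiD.ravel(), phiE.ravel()]))
    A = np.array(cols).T                 # columns = images of basis
    A = np.vstack([A.real, A.imag])      # real-linear system
    return d*d - np.linalg.matrix_rank(A, tol)

print(kernel_intersection_dim(rho(2.0, np.pi/6)))  # 1, hence extreme
\end{verbatim}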
Now, we proceed to show that $\varrho_{b,\theta}$ is an extreme point in the convex set $\mathbb T_{3,3}$,
whenever $0<|\theta|<\frac \pi 3$. Let $D=\mathcal R \varrho_{b,\theta}$ and $E=\mathcal R \varrho_{b,\theta}^{\Gamma}$.
We note that $\varrho_{b,\theta}=\varrho_{b,\theta}^{\Gamma}$, so we see that $P_D=P_E$.
Applying the Gram-Schmidt process to linearly independent vectors of $\mathcal R \varrho_{b,\theta}$,
we can compute the orthogonal projection $P_D$ as follows:
\[
P_D=P_E=\begin{pmatrix}
\frac 23 & 0 & 0 & 0 & -\frac 13 & 0 & 0 & 0 & -\frac 13\\
0 &\frac 1{1+b^2} & 0 &-\frac{b e^{-i \theta}}{1+b^2} & 0 & 0 & 0 & 0 & 0\\
0 & 0 & \frac{b^2}{1+b^2} & 0 & 0 & 0 &-\frac{b e^{i \theta}}{1+b^2} & 0 & 0\\
0 & -\frac{b e^{i \theta}}{1+b^2} & 0 & \frac{b^2}{1+b^2} & 0 & 0 & 0 & 0 & 0\\
-\frac 13 & 0 & 0 & 0 & \frac 23 & 0 & 0 & 0 & -\frac 13\\
0 & 0 & 0 & 0 & 0 &\frac 1{1+b^2} & 0 &-\frac{b e^{-i \theta}}{1+b^2} & 0 \\
0 & 0 & -\frac{b e^{-i \theta}}{1+b^2} & 0 & 0 & 0 & \frac{1}{1+b^2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-\frac{b e^{i \theta}}{1+b^2} & 0 & \frac{b^2}{1+b^2} & 0 \\
-\frac 13 & 0 & 0 & 0 & -\frac 13 & 0 & 0 & 0 & \frac 23\\
\end{pmatrix}.
\]
By a direct computation, we can show that both $\text{Ker}\,\phi_D$ and $\text{Ker}\, \phi_E$ are twenty-five dimensional real linear subspaces.
Let $\{E_{ij}\}$ be the usual matrix units in $M_9$. Then,
we can find a basis $\{X_i:1\le i\le 25\}$ of real linear space $\text{Ker}\,\phi_D$,
which consists of hermitian matrices including the following vectors:
\[
\begin{aligned}
X_1&=E_{11}+E_{55}-E_{15}-E_{51}\\
X_2&=E_{11}+E_{99}-E_{19}-E_{91}\\
X_3&=E_{55}+E_{99}-E_{59}-E_{95}\\
X_4&=i (E_{19}-E_{15}-E_{59})-i(E_{91}-E_{51}-E_{95})\\
X_{5}&=e^{-i\theta}E_{24}+e^{i\theta}E_{42}-bE_{44}-\frac 1b E_{22},\\
X_{6}&=e^{-i\theta}E_{68}+e^{i\theta}E_{86}-bE_{88}-\frac 1b E_{66},\\
X_{7}&=e^{-i\theta}E_{73}+e^{i\theta}E_{37}-bE_{33}-\frac 1b E_{77}.
\end{aligned}
\]
We also see that $\text{Ker}\, \phi_E=\text{span}\{Y_i:1\le i \le 25\}$ with hermitian matrices $Y_i$'s.
Here, we just write down the list of $Y_i$ for $i=1,2,\cdots,7$, as follows:
\[
\begin{aligned}
Y_{1}&=E_{11}+E_{55}-E_{24}-E_{42},\\
Y_{2}&=E_{11}+E_{99}-E_{37}-E_{73},\\
Y_{3}&=E_{55}+E_{99}-E_{68}-E_{86},\\
Y_{4}&=i(E_{37}+E_{42}+E_{86})-i(E_{73}+E_{24}+E_{68}),\\
Y_{5}&=e^{-i\theta}E_{19}+e^{i\theta}E_{91}-bE_{33}-\frac 1 bE_{77},\\
Y_{6}&=e^{-i\theta}E_{51}+e^{i\theta}E_{15}-bE_{44}-\frac 1 bE_{22},\\
Y_{7}&=e^{-i\theta}E_{95}+e^{i\theta}E_{59}-bE_{88}-\frac 1 bE_{66}.
\end{aligned}
\]
For the full list of vectors $X_i$ and $Y_i$ for $8\le i \le 25$, see the appendix.
By solving the linear equation $\sum_{i=1}^{25} x_i X_i=\sum_{j=1}^{25}y_j Y_j$ with respect to $x_i$'s and $y_j$'s, we see that the subspace
$\text{Ker}\, \phi_D\cap \text{Ker}\, \phi_E$ is generated by $\varrho_{b,\theta}$.
In fact, we have
\[
\begin{aligned}
\varrho_{b,\theta}&=\cos\theta(X_1+X_2+X_3)+\sin\theta X_4-X_5-X_6-X_7\\
&=\cos\theta(Y_1+Y_2+Y_3)-\sin\theta Y_4-Y_5-Y_6-Y_7.
\end{aligned}
\]
Therefore, we see that $\varrho_{b,\theta}$ is an extreme point in $\mathbb T_{3,3}$ for $0<|\theta|<\frac {\pi}3$.
We note that $\varrho_{b,\theta-\frac 23 \pi}$ is extreme if and only if $\varrho_{b,\theta}$ is so by the relation \eqref{unitary}.
Consequently, we may conclude that $\varrho_{b,\theta}$ is extreme whenever $\theta\neq \frac n3\pi$ for an integer $n$.
\section{Decomposable and positive maps}
In order to see that $\nu(\mathbb P_{3,3})=2$, we recall the
positive linear map $\Phi_{\theta}(t)$ considered in
\cite{ha+kye_exposed}, which maps a $3\times 3$ matrix $X=(x_{ij})$
to the following $3\times 3$ matrix
\[
\begin{pmatrix}
a(t) x_{11}+b(t) x_{22}+c(t) x_{33} & -e^{i\theta} x_{12} & -e^{-i\theta}x_{13}\\
-e^{-i\theta}x_{21} & c(t)x_{11}+a(t)x_{22}+b(t)x_{33} & -e^{i\theta}x_{23}\\
-e^{i\theta}x_{31} &-e^{-i\theta}x_{32} & b(t)x_{11}+c(t) x_{22}+a(t) x_{33}
\end{pmatrix},
\]
where
\[
a(t)=1-\frac{(p_{\theta}-1)t}{1-t+t^2},\quad b(t)=\frac{(p_{\theta}-1)t^2}{1-t+t^2},\quad
c(t)=\frac {(p_{\theta}-1)}{1-t+t^2},
\]
with $0<t<\infty$. It was shown that $\Phi_{\theta}(t)$ generates an exposed ray of the convex cone $\mathbb P_{3,3}$,
and so generates an extreme ray of $\mathbb P_{3,3}$,
whenever the condition
$$
\theta \neq\frac {2n-1}3\pi,\qquad (\theta, t)\neq \left(\frac {2n}3\pi,\, 1\right)
$$
holds. It is now clear that if we take the convex combination of two antipodal maps $\Phi_\theta(t)$ and $\Phi_{\theta+\pi}(s)$
then we get a positive map whose Choi matrix is a
diagonal matrix with positive diagonal entries, and so we see that this map is an interior point of $\mathbb P_{3,3}$ by duality.
It remains to consider the convex cone $\mathbb D_{m,n}$ consisting of all decomposable maps from $M_m$ into $M_n$. We first note that
every decomposable map is the convex combination of the maps
$$
\phi_V:X\mapsto V^*XV,\qquad \phi^W: X\mapsto W^*X^\ttt W,\qquad X\in M_m,
$$
for $m\times n$ matrices $V$ and $W$, where $X^\ttt$ denotes the transpose of $X$.
Therefore, every decomposable map from $M_m$ into $M_n$ is of the form
\begin{equation}\label{decom}
\phi_{\mathcal V}+\phi^{\mathcal W}= \sum_i \phi_{V_i}+\sum_j \phi^{W_j},
\end{equation}
for finite sets $\mathcal V=\{V_i\}$ and $\mathcal W=\{W_j\}$ of $m\times n$ matrices.
We also note that
if the map (\ref{decom}) is on the boundary of the cone $\mathbb P_{m,n}$ then it is also on the boundary of
the cone $\mathbb D_{m,n}$. For a product vector $|z\rangle=|\xi\rangle \otimes |\eta\rangle$,
the pairing in \eqref{pairing} is given by
$$
\langle |z\rangle \langle z|,\phi_{\mathcal V}+\phi^{\mathcal W}\rangle
=\sum_i \left| \langle \xi |V_i|\bar \eta\rangle\right|^2+\sum_j |\langle \bar \xi| W_j|\bar \eta\rangle |^2.
$$
Therefore, the map (\ref{decom}) is on the boundary of $\mathbb P_{m,n}$ if and only if the equation
$$
\langle \xi |V_i|\bar \eta\rangle=0,\qquad \langle \bar \xi| W_j|\bar \eta\rangle=0
$$
has a common solution $|\xi\rangle\otimes |\eta\rangle \in\mathbb C^m\ot\mathbb C^n$.
If we put $k=\dim\spa{\mathcal V}$ and $\ell=\dim\spa{\mathcal W}$ then
it was shown in \cite{kye-prod-vec} that
\begin{enumerate}
\item[(i)]
If $k+\ell < m+n-2$, then there exists a
solution.
\item[(ii)]
If $k+\ell = m+n-2$ and
$$
\sum_{r+s=m-1}(-1)^r \binom kr\binom \ell s \neq 0,
$$
then there exists a solution.
\item[(iii)]
If $k+\ell > m+n-2$, then the existence of solutions is not guaranteed.
\end{enumerate}
Therefore, we have the following:
\begin{theorem}\label{ukyilhg}
For given natural numbers $m,n=2,3,\dots$, consider the equation
\begin{equation}\label{Krawtchouk}
k+\ell = m+n-2,\qquad \sum_{r+s=m-1}(-1)^r \binom kr\binom \ell s = 0
\end{equation}
with unknowns $k$ and $\ell$. Then, we have the following:
\begin{enumerate}
\item[(i)]
We have $\nu(\mathbb D_{m,n})\ge m+n-2$ in general.
\item[(ii)]
If the equation {\rm (\ref{Krawtchouk})} has no solution then $\nu(\mathbb D_{m,n})\ge m+n-1$.
\end{enumerate}
\end{theorem}
The polynomial in the Diophantine equation (\ref{Krawtchouk}) is called the Krawtchouk polynomial
which plays an important role in coding theory. See \cite{MWS} and \cite{vint}. The
equation (\ref{Krawtchouk}) has not yet been completely solved.
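Solutions for small $m$ and $n$ can nevertheless be enumerated by brute force. The sketch below is our own; it simply lists all pairs $(k,\ell)$ satisfying (\ref{Krawtchouk}).
\begin{verbatim}
from math import comb

def krawtchouk_solutions(m, n):
    # all (k, l) with k + l = m + n - 2 and
    # sum_{r+s=m-1} (-1)^r C(k,r) C(l,s) = 0
    sols = []
    for k in range(m + n - 1):
        l = m + n - 2 - k
        total = sum((-1)**r * comb(k, r) * comb(l, m - 1 - r)
                    for r in range(m))
        if total == 0:
            sols.append((k, l))
    return sols

print(krawtchouk_solutions(2, 3))  # []       : no solution, n odd
print(krawtchouk_solutions(2, 4))  # [(2, 2)] : the pair (mu, mu)
print(krawtchouk_solutions(3, 3))  # [(1, 3), (3, 1)]
\end{verbatim}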
In order to get an upper bound for $\nu(\mathbb D_{m,n})$, we have to construct decomposable maps
in the interior of the cone $\mathbb D_{m,n}$. By the duality
between decomposable maps and PPT states with respect to the pairing \eqref{pairing},
we see that the map in (\ref{decom}) lies on the boundary of the cone $\mathbb D_{m,n}$
if and only if there exists a PPT states $\sigma$ such that the ranges
of $\sigma$ and the partial transpose $\sigma^\Gamma$ coincide with
${\mathcal V}^\perp$ and ${\mathcal W}^\perp$, respectively.
We consider the case $m=2$, and take an $(n-1)$-dimensional subspace $D$ of $\mathbb C^2\ot \mathbb C^n$
with no product vectors \cite{wallach,parth}. Then it is clear that there is no PPT state $\sigma$ such that ${\mathcal R}\sigma=D$ and
${\mathcal R}\sigma^\Gamma=\mathbb C^2\ot\mathbb C^n$. Indeed, if we assume that there is such a state $\sigma$ then $\sigma$ must be separable
by \cite{2xn}, but this state violates the range criterion \cite{p-horo}. Therefore, if we take a basis ${\mathcal V}$
in $D^\perp$ then the map $\phi_{\mathcal V}$ is an interior point of the cone $\mathbb D_{2,n}$. This shows that
$\nu(\mathbb D_{2,n})\le n+1$. In the case of $m=2$, it was shown in \cite{kye-prod-vec} that the equation (\ref{Krawtchouk}) has a solution
if and only if $n$ is an even number. This proves the odd case of the following:
\begin{equation}\label{2xxn}
\nu(\mathbb D_{2,n})=
\begin{cases}
n+1,\quad &n\ {\text{\rm is odd}},\\
n,\quad &n\ {\text{\rm is even}}.
\end{cases}
\end{equation}
When $n=2\mu$ is an even integer, the equation (\ref{Krawtchouk}) has the unique
solution $(k,\ell)=(\mu,\mu)$, and one can construct
${\mathcal V}=\{V_1,\dots,V_{\mu}\}$ and ${\mathcal W}=\{W_1,\dots,W_{\mu}\}$ so that the decomposable map
(\ref{decom}) is an interior point of $\mathbb D_{2,n}$,
following the argument in \cite{kye-prod-vec}.
To do this, we consider the $2\times 2\mu$ matrix $V_i$ whose $i$-th $2\times 2$ block is the identity
matrix and other entries are all zero. We also consider the $2\times 2\mu$ matrix
$W_i$ whose $i$-th block is $\left(\begin{matrix}0&-1\\1&0\end{matrix}\right)$ and other entries are all zero. Then we see that
$$
\sum_{i=1}^\mu\phi_{V_i}+\sum_{i=1}^\mu\phi^{W_i}
$$
is just the trace map sending $X\in M_2$ to $\text{\rm Tr}(X) I \in M_{2\mu}$,
which is an interior point of $\mathbb D_{2,2\mu}$. This shows the above equality (\ref{2xxn}) when $n$ is even.
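In fact, for $m=2$ the sum in (\ref{Krawtchouk}) collapses to a single difference, and the dichotomy in (\ref{2xxn}) can be read off directly (a check we record for convenience):
$$
\sum_{r+s=1}(-1)^{r}\binom{k}{r}\binom{\ell}{s}
=\binom{k}{0}\binom{\ell}{1}-\binom{k}{1}\binom{\ell}{0}
=\ell-k,
$$
which vanishes subject to $k+\ell=n$ precisely when $n=2\mu$ is even and $(k,\ell)=(\mu,\mu)$.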
For the $2\otimes 2$ system, the whole facial structures of the cone $\mathbb D_{2,2}$
have been characterized in \cite{byeon-kye}.
In the case of $m=3$, we know that the equation (\ref{Krawtchouk}) has a solution if and only if $n$ is of the form $n=\mu(\mu+2)$,
with the solution $(k,\ell)=(\binom{\mu+1} 2,\binom{\mu+2}2)$. Especially, in the $3\ot 3$ case, we have the solution $(k,\ell)=(1,3)$.
In this case, we see that the map
$$
\phi_I+\phi^{E_{12}-E_{21}}+\phi^{E_{23}-E_{32}}+\phi^{E_{31}-E_{13}}
$$
is exactly the trace map, which is an interior point of $\mathbb D_{3,3}$. Therefore, we have
$$
\nu(\mathbb D_{3,3})=4.
$$
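As a direct check, easily verified by hand: $(k,\ell)=(1,3)$ gives $k+\ell=4=m+n-2$ and
$$
\sum_{r+s=2}(-1)^{r}\binom{1}{r}\binom{3}{s}
=\binom{1}{0}\binom{3}{2}-\binom{1}{1}\binom{3}{1}+\binom{1}{2}\binom{3}{0}
=3-3+0=0,
$$
so the equation (\ref{Krawtchouk}) is indeed satisfied.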
\section{Discussion}
For a given convex set, we have considered the smallest number of extreme points with which we may get
an interior point by their convex combinations. For $3\otimes 3$ PPT states and positive maps, these numbers turned out to be just $2$.
This means that there exist \lq antipodal\rq\ extreme points. Our limited knowledge of extreme PPT states and extreme positive maps prevents the authors
from extending these results to higher dimensions.
For the cases of separable states and decomposable maps, these numbers exceed $2$. This means that there exist no
\lq antipodal\rq\ extreme points. This might reflect the facts that the notions of separability and decomposability are defined as
convex hulls of prescribed extreme points, and that no easy intrinsic characterizations of these notions are known.
The equality $\nu(\mathbb S_{m,n})=mn$ tells us that the number $mn$
is the minimum of the lengths of interior points of $\mathbb
S_{m,n}$. Recall that the length of a separable state $\varrho$ is
given by the minimum number of product states with which $\varrho$
can be expressed as a convex combination. It seems to be an
interesting question to ask if every interior point of $\mathbb
S_{m,n}$ has the length $mn$. The authors
\cite{ha+kye_unique_decom,ha+kye_2x4} have recently constructed
separable states in $\mathbb S_{m,n}$ whose lengths exceed the
number $mn$, for the cases $(m,n)=(3,3)$ and $(2,4)$. All of those
are boundary points of the convex set $\mathbb S_{m,n}$. See also \cite{chen_dj_semialg}. For some
faces of $\mathbb S_{m,n}$, it is possible to characterize the
interior by lengths. For example, this is clearly the case if a face
of $\mathbb S_{m,n}$ is affinely isomorphic to a simplex. See
\cite{ha+kye_unique_decom,ha+kye_2x4} for constructions of such
faces in the $3\otimes 3$ or $2\otimes n$ cases. This is also the
case \cite{kye_trigono} for a face of $\mathbb S_{2,n}$ which is
affinely isomorphic to the convex set generated by trigonometric
moment curve.
\section{introduction}
\label{sec:intro}
\input{intro}
\section{analysis}
\label{sec:analysis}
\input{analysis}
\section{OpenStreetCab: A mobile app for cheap taxi fare discovery}
\label{sec:app}
\input{application}
\section{analysis of potential savings}
\label{sec:evaluation}
\input{evaluation}
\section{surge pricing}
\label{sec:surge}
\input{surge}
\section{related works}
\label{sec:related}
\input{related}
\section{conclusion and future work}
\label{sec:discussion}
\input{discussion}
\small
\bibliographystyle{plain}
\section{Introduction}
The Arnowitt--Deser--Misner (ADM) formalism \cite{ADM} in general relativity (GR)
expresses the Einstein field equations in canonical form, thus permitting a solution of
particular field/matter configurations formulated as initial value problems.
As a canonical Hamiltonian formulation that splits four-dimensional (4D) spacetime into three-dimensional (3D) space and a selected time
direction, ADM provides insight into general features of relativity, but is not always the
most convenient of the 3+1 formulations for computation, especially numerical simulation.
In this paper we borrow techniques from the 3+1 formalism in
order to generalize the Stueckelberg--Horwitz--Piron (SHP) theory of
classical electrodynamics \cite{Stueckelberg-1,Stueckelberg-2,HP,saad,rel-qm,RCM}
to SHP GR \cite{SHPGR,SHPGR2}.
The~SHP framework is a covariant canonical approach to relativistic classical and quantum
mechanics, in which 4D spacetime events are defined with respect to coordinates $x^\mu$
$(\mu = 0,1,2,3)$ and an external evolution parameter $\tau$.
Events trace out particle worldlines as functions $x^\mu (\tau)$ or $\psi (x,\tau)$ under
the monotonic advance of $\tau$, producing five
$\tau$-dependent gauge fields $a_\alpha (x,\tau)$
carrying the interaction between events.
(Here and throughout the SHP literature, Greek indices $\alpha,\beta, \gamma, \dots , \eta $
take the values $0,1,2,3,5$, while~$\lambda, \mu, \nu, \dots $ run from 0 to 3.)
The result is an integrable electrodynamics, instantaneous in the external time $\tau$, but
recovering Maxwell theory in a $\tau$-equilibrium limit.
At numerous stages of analysis in SHP, an apparent five-dimensional (5D) symmetry arising from the five variables
$x^\mu,\tau$ must be judiciously broken to 4+1 representations of
O(3,1), because the $x^\mu$ are coordinates while $\tau$ is an external parameter.
In this paper we apply the lessons of SHP electrodynamics to a 4+1 theory of
a local metric $g_{\alpha\beta}(x,\tau)$.
As we shall see, this approach differs from a 3+2 or (3+1)+1
formalism, in that we do not split 4D spacetime into space and time, maintaining
the manifest spacetime covariance of the underlying physical picture at each
step.
Rather, we construct a purely formal \hbox{4+1 $\longrightarrow$ 5D} manifold as a
guide to formulating field equations that, under the \hbox{5D $\longrightarrow$ 4+1} foliation, describe a spacetime metric $\gamma_{\mu\nu}(x,\tau)$ evolving with $\tau$ and
preserving the required spacetime symmetries.
\subsection{Motivation: The Problem of Time}
In summarizing Einstein gravity as ``Spacetime tells matter how to move; matter tells
spacetime how to curve,'' Wheeler \cite{wheeler_bio} touched on certain general issues in
relativity known collectively as the problem of time.
In nonrelativistic mechanics, space is viewed as the ``arena'' of physical motion, a
manifold with given background metric in some coordinate system, while time is an external
parameter introduced to mark the coordinate evolution that characterizes the motion of
objects in space.
In contrast, time in general relativity retains its traditional Newtonian role as
evolution parameter, but also serves as a coordinate, and thus, through the metric, plays
a structural role in the spacetime ``arena'' itself.
This dual role is complicated by the principal features of general relativity: the~diffeomorphism invariance that eliminates any \textit{a priori} distinction between space
and time coordinates, and the background independence that regards gravitation as
equivalent to motion in the spacetime determined by the local metric.
Because the metric is itself determined by the time parameterized motion of matter,
practical~approaches to problems in gravitation generally pose the Einstein field
equations and the equations of motion for matter as an initial value problem.
Beginning with a consistent spacetime geometry at some time, one may solve for the evolution of spacetime and the motions of matter over time.
Known as a 3+1 formalism, this approach singles out a time direction, as in standard
Hamiltonian formulations of field theory, and so the equations are not manifestly
covariant, although general covariance is preserved at each step \cite{ADM,isham,kiefer}.
On the one hand, a configuration of matter and spacetime that satisfies the equations of GR represents a 4D block universe, given once and describing all space, past, present, and future.
On the other hand, we may find such solutions by integrating forward in time from
consistent initial conditions at some time.
In Wheeler's words \cite{Superspace}, ``A decade and more of work by Dirac, Bergmann,
Schild, Pirani, Anderson, Higgs, Arnowitt, Deser, Misner, DeWitt, and others has taught us
through many a hard knock that Einstein's geometrodynamics deals with the dynamics of
geometry: of 3-geometry, not 4-geometry.''
Unsurprisingly, the foliation of spacetime into
three-geometries of simultaneous points in space further complicates the interpretation of
time.
Because time is only felt in the evolution from one 3D submanifold to another, the
Hamiltonian is constrained to vanish when restricted to any given equal-time
three-geometry~\cite{kiefer}.
Moreover, there is no preferred criterion for choosing a functional of canonical
variables that might be used as an intrinsic time parameter.
While one may consider a physical clock that measures the proper time in some reference
frame, the proper time depends on a spacetime trajectory that is only known after the
equations of motion have been solved.
While~such a system may be well-posed in classical GR \cite{isham}, this is less obvious
if the metric is subject to quantum~fluctuations.
\subsection{Stueckelberg-Horwitz-Piron (SHP) Theory}
\label{SHP}
Stueckelberg--Horwitz--Piron (SHP) theory
is a covariant approach to relativistic classical and quantum mechanics developed to
address the problem of time as it arises in electrodynamics.
In~1937 Fock proposed using proper time as the evolution parameter for a Newton-like force
law, succinctly expressing a manifestly covariant formulation of electrodynamics \cite{Fock}.
But, four years later, Stueckelberg proposed \cite{Stueckelberg-1,Stueckelberg-2}
to interpret antiparticles as particles moving backward in time, and~showed that neither the coordinate time $x^0 = ct$ nor the proper time of the motion could serve as
evolution parameter for particle/antiparticle pair processes.
Because $ds^2 = \eta_{\mu\nu} dx^\mu dx^\nu$ cannot remain constant during such processes, he
introduced an external time $\tau$ and argued that $ds^2 = \eta_{\mu\nu} \dot x^\mu \dot
x^\nu d\tau^2$ can be a $\tau$-dependent dynamical quantity, even in flat space.
In 1973, Horwitz and Piron \cite{HP} were similarly led to use an external time in
formulating a manifestly covariant relativistic mechanics with interactions, in order to
overcome \textit{a priori} constraints on the 4D phase space that conflict with canonical
structure.
Thus, writing the eight-dimensional (8D) unconstrained phase space
\vspace{-6pt}
\begin{equation}
x^\mu(\tau), \ \dot x^\mu(\tau) \qquad \qquad \dot x^\mu = \frac{dx^\mu}{d\tau}
\qquad \qquad \lambda, \mu,\nu, \ldots = 0,1,2,3
\end{equation}
the O(3,1)-symmetric action for a particle in Maxwell theory
\begin{equation}
S_{\text{Maxwell}} =\int d\tau \left[ \frac{1}{2}M\dot{x}^{\mu }\dot{x}_{\mu }+\frac{e}{c}\dot{x}
^{\mu }A_{\mu }\left( x^{\lambda }\right)\right] \mbox{\qquad}\mu ,\lambda =0,1,2,3
\label{action-1}
\end{equation}
leads to the Lorentz force in the covariant form found by Fock. However, because the potential $A_\mu$ is produced by a Maxwell current
\begin{equation}
J^\mu (x) = \int d\tau \p \dot X^\mu (\tau) \p \delta^4 \left( x-X(\tau)\right)
\end{equation}
depending on the trajectory $X^\mu(\tau)$ that is only given {\em after} the
equations of motion have been solved, the~system may not be well-posed.
To overcome this conflict, Horwitz, Saad, and Arshansky \cite{saad} extended the action
(\ref{action-1}) by adding $\tau $-dependence to the vector potential, along with a new scalar
potential, to~obtain the action
\begin{eqnarray}
S_{\text{Maxwell}} \longrightarrow S_{\text{SHP}}
\eq \int d\tau ~\frac{1}{2}M\dot{x}^{\mu }\dot{x}_{\mu }+\frac{e
}{c}\dot{x}^{\mu }a_{\mu }\big( x^{\lambda },\tau \big) +\frac{e}{c}
c_{5}a_{5}\big( x^{\lambda },\tau \big)
\label{SM} \\
\eq \int d\tau ~\frac{1}{2}M\dot{x}^{\mu }\dot{x}_{\mu }
+\frac{e }{c}\dot{x}^{\beta }a_{\beta }\big( x^{\lambda },\tau \big)
\label{SSHP}
\end{eqnarray}
where $\alpha,\beta,\gamma = 0,1,2,3,5$, and in analogy to $x^0 = c t$, we write $x^5 = c_5
\tau$.
Compatibility of SHP electrodynamics with Maxwell theory requires
$c_5 \ll c$ and we will neglect $(c_5 / c)^2$ where appropriate.
If we take the potential to be pure gauge, as $a_\alpha = \partial_\alpha
\Lambda (x,\tau)$, then the interaction term is just the total $\tau$-derivative of
$\Lambda$, showing that this theory is the most general U(1) gauge theory on the
unconstrained phase space (see also \cite{beyond}).
Variation with respect to $x^\mu$ leads to the Lorentz force \cite{lorentz} in the form
\begin{eqnarray}
M\ddot{x}_{\mu } \eq \frac{e}{c}\left( \dot{x}^{\nu }f_{\mu \nu }+c_{5}f_{\mu
5}\right) = \frac{e}{c}\dot{x}^{\beta }f_{\mu \beta }
\label{L-1}
\\
\frac{d}{d\tau }\left( - \frac{1}{2}M\dot{x}^{\mu }\dot{x}_{\mu }\right)
\eq c_{5}\frac{e}{c}\dot{x}^{\beta }f_{5 \beta }
\label{L-2}
\end{eqnarray}
where the field
\begin{equation}
f_{\alpha\beta} = \partial_\alpha a_\beta - \partial_\beta a_\alpha
\end{equation}
is made a dynamical quantity by addition of a kinetic term of the type
\begin{equation}
S_{\text{field}} = \int d\tau \p d^4x f^{\alpha\beta} (x,\tau) f_{\alpha\beta} (x,\tau)
\label{f-kin}
\end{equation}
to the total action.
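For clarity, we record the one-line step connecting (\ref{L-1}) to (\ref{L-2}): contracting (\ref{L-1}) with $\dot{x}^{\mu }$ gives
\begin{equation}
\frac{d}{d\tau }\left( \frac{1}{2}M\dot{x}^{\mu }\dot{x}_{\mu }\right)
=M\dot{x}^{\mu }\ddot{x}_{\mu }
=\frac{e}{c}\dot{x}^{\mu }\dot{x}^{\beta }f_{\mu \beta }
=c_{5}\frac{e}{c}\dot{x}^{\mu }f_{\mu 5}
=-c_{5}\frac{e}{c}\dot{x}^{\beta }f_{5\beta }
\end{equation}
where $\dot{x}^{\mu }\dot{x}^{\nu }f_{\mu \nu }=0$ by antisymmetry, and the $\beta =5$ term may be restored in the last expression because $f_{55}=0$; this is precisely (\ref{L-2}).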
Because the apparent 5D symmetry of the interaction term
$\dot{x}^{\beta }a_{\beta }\left( x,\tau \right)$ in the
action (\ref{SSHP}) is broken to 4+1 in (\ref{SM}), SHP
electrodynamics differs in significant ways from 5D Maxwell theory.
We~notice that (\ref{L-2}) permits the exchange of mass between particles and fields, and
indicates the condition for non-conservation of proper time.
It has been shown \cite{lorentz} that the total mass, energy, and~momentum of particles and fields are conserved.
These equations of motion, along with the $\tau$-dependent field equations, have been used to calculate \cite{pair} the Bethe--Heitler mechanism for
electron-positron production in classical electrodynamics.
A positron (an electron with $\dot x^0 = c\dot t <0$) propagates backward in coordinate time until entering the bremsstrahlung field produced by another electron scattering off a heavy
nucleus.
This field leads to $\ddot t >0$, so the particle gains energy $E = Mc^2 \p \dot
t < 0 $ continuously (and~thus $\dot x^\mu \dot x_\mu $ changes sign twice) until
emerging as an electron propagating forward in coordinate time with $E = Mc^2 \p
\dot t > 0 $.
At coordinate times prior to the particle's turn-around (when $E = Mc^2 \p \dot t = 0$) no
particles will be observed, but two particles will be observed for subsequent coordinate times, implementing Stueckelberg's picture of pair creation.
A physical event $x^\mu (\tau)$ in SHP is an irreversible occurrence at time $\tau$ with
spacetime coordinates $x^\mu$.
The formalism thereby implements the two aspects of time as distinct physical quantities:
the~coordinate time $x^0 = ct$ describing the locations of events, and the external
Stueckelberg time $\tau$ describing the chronological order of event occurrence.
This eliminates grandfather paradoxes because for $\tau_2 > \tau_1$ an event $x^\mu
(\tau_2)$ at some spacetime point $x^\mu$ occurs {\em after} the event $x^\mu (\tau_1)$
and cannot affect it.
Similarly, the 4D block universe $\mathcal{M}(\tau)$ occurs at $\tau$, representing the 4D
manifold of general relativity, comprising all of space and coordinate time $x^0$.
A Hamiltonian $K$ generates evolution of $\mathcal{M}(\tau)$ occurring at $\tau$ to an
infinitesimally close 4D block universe $\mathcal{M}(\tau + d\tau)$ occurring at
$\tau + d\tau$.
The~configuration of spacetime, including the past and future of $x^0 = ct$, may thus
change infinitesimally from chronological moment to moment in $\tau$.
Thus, it is not unreasonable to expect that $\mathcal{M}(\tau)$ will be endowed with a
$\tau$-dependent metric $\gamma_{\mu\nu}(x,\tau)$ whose dynamics we explore in this paper.
By contrast, a 4D metric given for all $\tau$ would have the character of an absolute
background field in this formalism, in violation of the goals of general relativity.
For the kinetic term (\ref{f-kin}) we formally raise the five-index of $f_{\alpha\beta}$
although we understand the Lagrangian density as
\begin{equation}
f^{\alpha\beta} (x,\tau) f_{\alpha\beta} (x,\tau) =
f^{\mu\nu} (x,\tau) f_{\mu\nu} (x,\tau) + 2 \sigma f^\mu_{\ 5} (x,\tau) f_{\mu 5} (x,\tau)
\end{equation}
with $\sigma = \pm 1$ simply the choice of sign for the
vector-vector term.
That is, we bear in mind that in this notation the $\beta = 5$ index is
a formal convenience, indicating O(3,1) scalar quantities, not an
element of a 5D tensor, and not a timelike coordinate.
In particular, $\dot x^5 = c_5$ is constrained to be a constant scalar, identical in
all reference frames, and $x^5 = c_5 \tau$ must not be treated as a dynamical variable.
Nevertheless, the contraction on indices $\alpha,\beta$ suggests a
formal 5D symmetry, possibly
O(4,1) or O(3,2), that breaks to O(3,1) in the presence of matter, and
for convenience we write
\begin{equation}
\eta_{\alpha\beta} = \text{diag} \left( -1,1,1,1,\sigma \right)
\end{equation}
in the form of a 5D flat space metric.
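With this notation, the index split of the kinetic term quoted above follows in one line (a consistency check we record for convenience):
\begin{equation}
f^{\alpha\beta}f_{\alpha\beta}
= f^{\mu\nu}f_{\mu\nu} + f^{\mu 5}f_{\mu 5} + f^{5\nu}f_{5\nu}
= f^{\mu\nu}f_{\mu\nu} + 2\,\eta^{55} f^{\mu}_{\ 5}\, f_{\mu 5}
= f^{\mu\nu}f_{\mu\nu} + 2\sigma f^{\mu}_{\ 5}\, f_{\mu 5}
\end{equation}
since $f^{5\nu}f_{5\nu}=f^{\nu 5}f_{\nu 5}$ by antisymmetry and $f^{\mu 5}=\eta^{55}f^{\mu}_{\ 5}=\sigma f^{\mu}_{\ 5}$.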
Although the higher symmetry is non-physical for matter, it~appears in wave equations, much as the wave equations for nonrelativistic acoustics appear to possess a Lorentz
symmetry not associated with the physics.
In developing an SHP approach to general relativity, we will similarly exploit this
notation as a guide to the appropriate extension of GR while respecting the
non-dynamical character of $x^5$.
Classical and quantum SHP particle mechanics in a spacetime with
a $\tau$-independent local metric
$\gamma_{\mu\nu}(x)$ has been studied extensively by Horwitz \cite{SHPGR,SHPGR2} and will not
be discussed at length here.
Our~goal in this paper is to find a consistent prescription for extending general
relativity to accommodate a metric $g_{\alpha\beta}(x,\tau)$ (where $\alpha,\beta =
0,1,2,3,5$) satisfying $\tau$-dependent Einstein equations on a
formal 5D manifold whose meaning
is explored through particle mechanics and field equations.
As in standard approaches to GR, the study of embedded hypersurfaces is central to this program.
But, while the 3+1 formalism begins with a 4D block universe $\cm$ and defines a foliation
into embedded spacelike hypersurfaces of equal coordinate time
$t$, the 4+1 formalism begins with a
parameterized family of 4D spacetimes $\cm(\tau)$ embedded as hypersurfaces into a 5D
pseudo-spacetime.
Because the evolution of $\cm(\tau)$ is determined by an O(3,1) scalar Hamiltonian $K$,
with $\tau$ as an external parameter (Poincar\'e~invariant by definition), there is no
conflict with the diffeomorphism invariance of general relativity.
This approach will guide us toward the formal structures of a 5D manifold $\cm_5$ with
coordinates $(x,\tau)$ on which we may perform a 4+1 foliation by choosing $\tau$ as the
unambiguously preferred time direction (see \cite{pitts1,pitts2} for discussion of
general 5D spacetime with preferred foliation).
We refer to $\cm_5$ as a pseudo-spacetime to emphasize that despite
the formal manifold structure,
in specifying the physics we treat $\tau$ as a parameter and not a coordinate.
Moreover, $\cm_5$ represents an admixture of symmetries: 4D spacetime geometry within each
$\cm(\tau)$, and canonical dynamics between any pair $\cm(\tau_1)$, $\cm(\tau_2)$.
We expect no general diffeomorphism invariance for $\cm_5$.
\subsection{Organization of This Paper}
The remainder of this paper is organized, as follows: in Section \ref{particle},
we formulate the particle mechanics for an event in 5D pseudo-spacetime, derive
the 5D mass-energy-momentum tensor for non-thermodynamic dust, and pose the
Einstein field equations generalized to 5D. We obtain a general solution for
the associated weak field equations, and consider a source event of slightly
varying mass (time acceleration in a co-moving frame). This leads to a small
nonrelativistic modification to Newtonian gravity in which the mass variation of
the source is transferred through the metric to induce varying mass motion in a
test event. In Section \ref{field}, we formalize the foliation of the 5D
pseudo-spacetime into the 4+1 hypersurface geometry, and by projecting onto
tangent and normal components, express~5D Einstein equations as a set of
coupled partial differential equations in the intrinsic and extrinsic curvature
of the hypersurface.
In Section \ref{ADM}, we complete the 4+1 ADM formalism by transforming the
differential equations to covariant canonical Hamiltonian
form. Finally, in Section~\ref{SG} we apply the 4+1 formalism to two possible generalizations of Schwarzschild
geometry. In the first, we~include a non-trivial fifth component in the
diagonal metric, which is seen to be constrained to satisfy a 4D wave equation.
A test event moving in the resulting field evolves with mass that depends on its
distance from the source. In the second, we allow for the mass parameter in the
standard Schwarzschild metric to be $\tau$-dependent and find the conditions of
the mass-energy-momentum tensor that lead to such a solution.
The presented examples were chosen because they can be solved in closed form.
Realistic applications of this formalism will necessarily require numerical
solutions beyond the scope of this~paper.
\section{Particle Mechanics}
\label{particle}
\vspace{-6pt}
\subsection{Particle Lagrangian in Standard GR}
Regarding the spacetime manifold $\cm$ as a 4D block universe, general relativity begins with consideration of the squared interval
\begin{equation}
\delta x^2 = \gamma_{\mu\nu} \delta x^\mu \delta x^\nu = \left( x_2 - x_1 \right)^2
\label{interval}
\end{equation}
between two neighboring points of $\cm$.
The invariance of this interval, viewed as an instantaneous displacement in the block
universe, is a geometrical statement referring to the freedom that is permitted in
assigning a coordinate map to the manifold.
To extract dynamics from geometry, one considers the spacetime trajectory of a material
event (some appropriate abstraction of a point mass, which in GR
would necessarily be a black hole), described as a mapping of an arbitrary parameter
$\zeta$ to a continuous sequence of events $x^\mu(\zeta)$ in $\cm$.
Because the interval between any two points on a trajectory must be timelike, the proper
time $s$ may be taken as parameter, and ``motion'' along the trajectory is observed
through advances in the time coordinate $x^0(s)$ for advancing values of $s$.
The invariant interval (\ref{interval}) can be written
\begin{equation}
\delta x^2 = \gamma_{\mu\nu} \delta x^\mu \delta x^\nu
= \gamma_{\mu\nu} \frac{dx^\mu}{ds}\frac{dx^\nu}{ds} \delta s^2
= \gamma_{\mu\nu} \dot x^\mu \dot x^\nu \delta s^2
\label{interval-2}
\end{equation}
suggesting \cite{DiracGR} a dynamical description of the trajectory by the action
\begin{equation}
S = \int dx = \int ds \ \sqrt{-\gamma_{\mu\nu} \dot x^\mu \dot x^\nu }
\label{sqrt}
\end{equation}
and leading to geodesic equations of motion as an expression of the equivalence
principle.
The geodesic equations can also be derived from the action
\begin{equation}
S = \int ds \ \frac{1}{2} \ \gamma_{\mu\nu} \dot x^\mu \dot x^\nu
\end{equation}
which removes the constraint $\dot x^2 = -c^2$ associated with (\ref{sqrt}).
\subsection{Particle Lagrangian in SHP GR}
To extend the SHP classical mechanics of a free particle to a manifold with a
$\tau$-dependent local metric, we begin by considering the
interval
\begin{equation}
dx^\mu = x^\mu_1 (\tau_1 ) - x^\mu_2 (\tau_2)
\end{equation}
between an event $x^\mu_1 \in \cm(\tau_1)$ and an event $ x ^\mu_2 \in \cm(\tau_2) $.
Writing these events as
\begin{equation}
X_1 = (x_1,c_5 \tau_1) \qquad \qquad X_2 = ( x_2 , c_5 \tau_2)
\end{equation}
we introduce a notion of 5D distance by combining the {\em geometrical distance} $\delta x$
between any two arbitrary points
in $\cm(\tau)$, with the {\em dynamical distance} between events generated by a Hamiltonian
that evolves $\cm(\tau) \longrightarrow \cm(\tau + \delta \tau)$.
The geometrical distance is characterized by the squared relativistic interval
(\ref{interval}). Taking $ \tau_2 = \tau_1 + \delta \tau$, we have
\begin{equation}
x_2 (\tau_1 + \delta \tau) - x_1(\tau_1) \simeq x_2 (\tau_1)+ \frac{dx (\tau)}{d\tau}
\delta \tau - x_1(\tau_1) =
\delta x + \frac{dx (\tau)}{d\tau} \delta \tau
\end{equation}
and we write the difference in the form
\begin{equation}
X_2 - X_1 = \left( \delta x + \frac{d{ x} (\tau)}{d\tau} \delta \tau ,
c_5 \delta \tau \right)
\label{difference}
\end{equation}
which motivates the notion of a 5D invariant interval through
\begin{equation}
dX^2 = \gamma_{\mu\nu} \left( \delta x^\mu + \frac{d{ x}^\mu (\tau)}{d\tau} \delta \tau
\right) \left( \delta x^\nu + \frac{d{ x}^\nu (\tau)}{d\tau} \delta
\tau \right)
+ \sigma c_5^2 \delta \tau^2
= g_{\alpha\beta} \left( x,\tau\right) \delta x^\alpha \delta x^\beta
\label{5D-interval}
\end{equation}
referred to $x_1$ coordinates at $\tau =\tau_1$.
Because the manifold $\cm(\tau) $ evolves, the spacetime metric
$\gamma_{\mu\nu}$ must depend on $x$ and $\tau$ in some manner to be determined.
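Reading off the components of (\ref{5D-interval}) referred to the event trajectory (an observation we record here for orientation; it is made systematic in Section \ref{field}), one finds
\begin{equation}
g_{\mu\nu} = \gamma_{\mu\nu}
\qquad
g_{\mu 5} = \frac{1}{c_5}\,\gamma_{\mu\nu}\,\frac{dx^{\nu}}{d\tau}
\qquad
g_{55} = \sigma + \frac{1}{c_5^2}\,\gamma_{\mu\nu}\,\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau}
\end{equation}
anticipating the lapse-shift decomposition of Section \ref{field}, here with unit lapse and shift vector $\dot{x}^{\mu}/c_5$.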
As in 4D general relativity, the squared interval (\ref{5D-interval}) suggests the
Lagrangian
\begin{equation}
L=\frac{1}{2}Mg_{\alpha\beta}\big( x^\mu,x^5\big) \dot{x}^{\alpha}\dot{x}^{\beta}
\qquad \lambda,\mu,\nu=0,1,2,3
\qquad \alpha,\beta,\gamma=0,1,2,3,5
\label{5-lag}
\end{equation}
from which we may find equations of motion in the space determined by the local metric
$g_{\alpha\beta}$.
\subsection{Equations of Motion}
Before examining particle dynamics in SHP GR, we consider
a straightforward extension of GR to unbroken 5D, with coordinates $x^\alpha$, for $\alpha
= 0,1,2,3,5$ and external evolution parameter $\tau$.
Naively~applying the Euler--Lagrange equations to the action (\ref{5-lag}),
posing no fixed relationship between $x^5$ and $\tau$, we find
\begin{equation}
0 = \frac{d}{d\tau }\frac{\partial L}{\partial \dot{x}^{\gamma}}-\frac{\partial L
}{\partial x^{\gamma}}
= \frac{d}{d\tau }\left( g_{\alpha\gamma}\dot{x}^{\alpha}\right) -\frac{1}{2}\frac{
\partial }{\partial x^{\gamma}}g_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}
\end{equation}
leading to the five geodesic equations
\begin{equation}
0 =\frac{D\dot{x}^{\gamma}}{D\tau}=\ddot{x}^{\gamma}+\Gamma _{\alpha\beta}^{\gamma}\dot{x}^{\alpha}\dot{x} ^{\beta}
\label{motion-1}
\end{equation}
where $D/D\tau$ is the absolute derivative (in the notation of Weinberg
\cite{Weinberg}) and
\begin{equation}
\Gamma _{\alpha \beta}^{\gamma }=g^{\gamma \delta}\Gamma _{\delta \alpha
\beta}=\frac{1}{2}g^{\gamma \delta}\left( \partial _{\alpha }g_{\delta
\beta}+\partial _{\beta}g_{\delta \alpha}-\partial _{\delta
}g_{\beta\alpha}\right)
\label{connection}
\end{equation}
is the standard Christoffel symbol in 5D. Writing the canonical momentum
\begin{equation}
p_{\alpha} = \frac{\partial L}{\partial \dot{x}^{\alpha}}=Mg_{\alpha\beta}\dot{x}^{\beta} \qquad
\longrightarrow \qquad
\dot{x}^{\alpha} = \frac{1}{M}g^{\alpha\beta}p_{\beta}
\end{equation}
the Hamiltonian
\begin{equation}
K=\dot{x}^{\alpha}p_{\alpha}-L = \frac{1}{2M}g^{\alpha\beta}p_{\alpha}p_{\beta} = L
\end{equation}
is conserved, as seen directly through
\begin{equation}
\frac{d}{d\tau}\left(
\frac{1}{2}Mg_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}\right)
=Mg_{\alpha\beta}\dot{x}^{\alpha}\frac{D\dot{x}^{\beta}}{D\tau}=0
\end{equation}
where we used metric compatibility
\begin{equation}
\frac{Dg_{\alpha\beta}}{D\tau}=0 .
\end{equation}
Time independence of the Hamiltonian may also
be found from the canonical equations of motion
\begin{equation}
\dot{x}^{\alpha}=\frac{dx^{\alpha}}{d\tau }=\frac{\partial K}{\partial p_{\alpha}}\qquad
\qquad \dot{p}_{\alpha}=\frac{dp_{\alpha}}{d\tau }=-\frac{\partial K}{\partial x^{\alpha}}
\end{equation}
and the Poisson bracket
\begin{equation}
\left\{ F,G\right\} =\frac{\partial F}{\partial x^{\alpha}}\frac{\partial G}{
\partial p_{\alpha}}-\frac{\partial F}{\partial p_{\alpha}}\frac{\partial G}{\partial
x^{\alpha}}
\end{equation}
so that
\begin{equation}
\frac{d}{d\tau}\left( \frac{1}{2M}g^{\alpha\beta}p_{\alpha}p_{\beta} \right)
=\frac{dK}{d\tau}=\left\{ K,K\right\} +\frac{\partial K}{\partial \tau}=\frac{1
}{2M} p_{\alpha}p_{\beta} \ \frac{\partial g^{\alpha\beta}}{\partial \tau} =0
\end{equation}
because the metric is not explicitly dependent on $\tau$, which
in this case bears no specific
relationship with $x^5$. As seen in SHP
electrodynamics, the equation
\begin{equation}
0 =\frac{D\dot{x}^{5}}{D\tau}=
\ddot{x}^{5}+\Gamma _{\alpha\beta}^{5}\dot{x}^{\alpha}\dot{x} ^{\beta}
\label{unphysical}
\end{equation}
cannot generally be made consistent with the SHP condition
$x^5 =c_5 \tau \ \Rightarrow \ \ddot{x}^{5} = 0$.
Rather, the SHP formalism defines $x^5$ to be a scalar, in
which case the absolute derivative reduces to the total derivative, so that
\begin{equation}
\frac{D\dot{x}^{5}}{D\tau}= \frac{d\dot{x}^{5}}{d\tau}= 0
\label{physical}
\end{equation}
will replace (\ref{unphysical}).
To obtain the correct equations of motion for SHP,
we must break the 5D symmetry of (\ref{5-lag})
to 4+1 prior to applying the Euler--Lagrange equations and not treat $x^5$ as a dynamical
quantity.
Expanding
\begin{equation}
L=\dfrac{1}{2}Mg_{\alpha\beta}(x,\tau)\dot{x}^{\alpha}\dot{x}^{\beta}
= \dfrac{1}{2}Mg_{\mu\nu}\; \dot{x}^{\mu}\dot{x}^{\nu}
+ Mc_5 \; g_{\mu 5}\dot{x}^{\mu}
+ \dfrac{1}{2}Mc^2_5 \; g_{55}
\label{EL-2}
\end{equation}
the equations of motion have four components
\begin{equation}
\ddot{x}^{\mu}+\Gamma _{\lambda \sigma
}^{\mu }\dot x^\lambda \dot x^\sigma +2c_5\Gamma _{5\sigma
}^{\mu }\dot x^\sigma +c^2_5\Gamma _{55}^{\mu } = 0
\label{eq-m}
\end{equation}
and, because $x^5$ is not a dynamical quantity, it has no conjugate momentum.
Thus, while (\ref{eq-m}) is identical to (\ref{motion-1}) for $\mu =
0,1,2,3$, we understand (\ref{unphysical}) in the sense of (\ref{physical}).
The breaking of 5D symmetry is expressed here in that $\Gamma
_{\alpha\beta}^{5}$ can be calculated, but it plays no part in the equations of
motion.
The~4-momentum is
\vspace{-6pt}
\begin{equation}
p_{\mu} = \frac{\partial L}{\partial \dot{x}^{\mu}}=Mg_{\mu\nu}\dot{x}^{\nu} + Mc_5 \; g_{\mu 5}
\qquad \longrightarrow \qquad \dot{x}_{\mu}
= \frac{1}{M} \left( p_{\mu} - Mc_5 \; g_{\mu 5}\right)
\label{b-mom}
\end{equation}
allowing us to write the Hamiltonian in the form
\begin{equation}
K = p_{\mu }\dot{x}^\mu -L
= \left( Mg_{\mu\nu}\dot{x}^{\nu} + Mc_5 \; g_{\mu 5} \right) \dot{x}^\mu -L
= \frac{1}{2}Mg_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu} - \frac{1}{2}Mc^2_5 g_{55}
\label{Ham-1}
\end{equation}
which, unlike the Hamiltonian for unbroken 5D symmetry, is not equal to the
Lagrangian (the~difference is precisely the term $p_5 \dot x^5$ that would be
present in the Legendre transformation if we had taken $x^5$ to be dynamical).
Taking the total $\tau$-derivative of (\ref{Ham-1}) and inserting the equations of motion
(\ref{eq-m}) leads to
\begin{equation}
\frac{d K}{d \tau } =
-\frac{1}{2}M\dot{x}^{\mu }\dot{x}^{\nu }\frac{\partial g_{\mu
\nu }}{\partial \tau }-\frac{1}{2}Mc_{5}^{2}\frac{\partial g_{55}}{
\partial \tau }
\label{non-cons}
\end{equation}
showing that this Hamiltonian is not conserved for a $\tau$-dependent metric.
Using (\ref{b-mom}) to eliminate $\dot{x}_{\mu}$, we put the Hamiltonian into
the form
\begin{equation}
K = \frac{1}{2M}g^{\mu\nu}p_{\mu}p_{\nu} - c_5 g^{\mu}_{\ 5} p_\mu + \frac{1}{2}M c_5^2
\left( g^\mu_{\ 5} g_{\mu 5} - g_{55} \right)
\label{Ham-2}
\end{equation}
and find its non-conservation from the Poisson bracket
\begin{equation}
\frac{d K}{d \tau } = \{K,K\} +
\frac{\partial K}{\partial \tau } = - \frac{1}{2M}p^{\mu}p^{\nu}
\frac{\partial g_{\mu\nu}}{\partial \tau} - c_5 p^\mu \frac{\partial g_{\mu 5}}{\partial \tau}
+ \frac{1}{2}M c_5^2 \left( 2 g^\mu_{\ 5} \frac{\partial g_{\mu 5}}{\partial \tau}
- \frac{\partial g_{55}}{\partial \tau} \right)
\label{non-cons-2}
\end{equation}
where we used
\begin{equation}
\frac{ \partial g^{\mu \nu }}{\partial \tau } = - g^{\mu \rho }g^{\nu \sigma }\frac{\partial
g_{\rho \sigma }}{\partial \tau } \ .
\end{equation}
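This identity follows by differentiating $g^{\mu \rho }g_{\rho \sigma }=\delta _{\sigma }^{\mu }$ with respect to $\tau $ (a step we record for completeness):
\begin{equation}
0=\frac{\partial }{\partial \tau }\left( g^{\mu \rho }g_{\rho \sigma }\right)
=\frac{\partial g^{\mu \rho }}{\partial \tau }g_{\rho \sigma }
+g^{\mu \rho }\frac{\partial g_{\rho \sigma }}{\partial \tau }
\ \longrightarrow \
\frac{\partial g^{\mu \nu }}{\partial \tau }
=-g^{\mu \rho }g^{\nu \sigma }\frac{\partial g_{\rho \sigma }}{\partial \tau }
\end{equation}
after contraction with $g^{\sigma \nu }$.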
When $g_{\alpha 5} = 0$, the Hamiltonian (\ref{Ham-2}) is
seen to generalize the nonrelativistic expression ${\mathbf p}^2/2m$ for the
energy of a free particle. Because $K$ is a Lorentz scalar, SHP theory associates this
Hamiltonian with the dynamical mass of the particle motion.
Section \ref{weak} provides an example of a test particle evolving with variable mass in a $\tau$-dependent local metric.
\subsection{Mass-Energy-Momentum Tensor}
When considering non-thermodynamic dust, we define $n(x,\tau)$ to be the number of events
per spacetime volume,
and
\begin{equation}
j^{\alpha }\left( x,\tau \right) =\rho(x,\tau) \dot{x}^{\alpha }(\tau) =M
n(x,\tau)\dot{x}^{\alpha }(\tau)
\end{equation}
is the five-component event current. The continuity equation in flat space is
\begin{equation}
\partial_\alpha j^\alpha = \partial_\mu j^\mu + \partial_5 j^5
= \partial_\mu j^\mu + \frac{\partial \rho}{\partial \tau} =0
\end{equation}
and with a local metric is generalized to
\begin{equation}
\nabla _{\alpha }j^{\alpha } =0
\end{equation}
where (in the notation of Wald \cite{Wald}), the covariant derivative for a vector is
\begin{equation}
\nabla_\alpha X^\beta = \frac{\partial X^\beta}{\partial x^\alpha} +
X^\gamma \Gamma^\beta_{\gamma\alpha} \ .
\label{cov-div}
\end{equation}
But again, since $j^5$ is a scalar (the number density is a scalar on physical
grounds) for which the covariant derivative is just the partial derivative, we must have
\begin{equation}
\nabla_5 j^5 = \frac{\partial \rho}{\partial \tau}
\label{no-g5-2}
\end{equation}
so the continuity equation becomes
\begin{equation}
\frac{\partial \rho}{\partial \tau} + \nabla _{\mu }j^{\mu } =0 .
\end{equation}
Generalizing the 4D stress-energy-momentum tensor to 5D, we write the
mass-energy-momentum tensor \cite{antenna} as
\begin{equation}
T^{\alpha \beta } = \rho \dot{x}^{\alpha } \dot{x}^{\beta } \ \longrightarrow \ \left\{
\begin{array}{l}
T^{\mu \nu }=\rho \dot{x}^{\mu } \dot{x}^{\nu } \strt{8} \\
T^{5\beta} =c_{5}j^\beta
\end{array}
\right.
\end{equation}
where, in addition to the 4D components $T^{\mu\nu}$, we have the current density $
T^{5\beta}=\dot{x}^{5}\dot{x}^{\beta}\rho =c_{5}j^\beta$.
The~conservation equation is
\begin{equation}
0= \nabla_\beta T^{\alpha \beta }=\nabla_\beta \left( \rho \dot{x}^{\alpha }
\dot{x}^{\beta }\right)
=\dot{x}^{\alpha }\nabla_\beta \left( \rho
\dot{x}^{\beta }\right)+\rho \dot{x}^{\beta }\nabla_\beta \dot{x}^{\alpha }
=\dot{x}^{\alpha }\nabla_\beta j^{\beta }
+\rho \dot{x}^{\beta }\nabla_\beta \dot{x}^{\alpha }
\end{equation}
which vanishes by virtue of the continuity and geodesic equations
\begin{equation}
\nabla _{\alpha }j^{\alpha } =0 \qquad \qquad
\dot{x}^{\beta }\nabla_\beta \dot{x}^{\alpha }=\frac{D\dot{x}
^{\alpha }}{D\tau }=0
\end{equation}
when the equations of motion (\ref{motion-1}) are
evaluated in the sense of (\ref{physical}).
\subsection{Weak Field Approximation}
\label{weak}
As a first step in obtaining field equations for $g_{\alpha \beta }$, we extend
the Einstein equations to 5D as
\begin{equation}
G_{\alpha \beta} =
R_{\alpha \beta} - \frac{1}{2} Rg_{\alpha \beta} = \frac{8\pi G}{c^4} T_{\alpha \beta}
\end{equation}
where the Ricci tensor $R_{\alpha \beta}$ and scalar $R$ are obtained by contracting
indices of the 5D curvature tensor $R_{\gamma \alpha \beta}^\delta$.
The weak field approximation (see for example
\cite{MTW,DiracGR,AS})
is generalized to SHP GR by introducing a
perturbation $h_{\alpha \beta }$ to the flat metric, such that
\begin{equation}
g_{\alpha \beta }=\eta _{\alpha \beta }+h_{\alpha \beta } \ \longrightarrow \ \partial_\gamma g_{\alpha\beta}
= \partial_\gamma h_{\alpha\beta} \qquad \qquad \left( h_{\alpha \beta }\right) ^{2}\approx 0
\end{equation}
leading to the Ricci tensor
\begin{equation}
R_{\alpha \beta }\simeq \frac{1}{2}\left(
\partial_{\beta }\partial_{\gamma }h_{\alpha }^{\gamma } + \partial_{\alpha
}\partial_{\gamma }h_{\beta }^{\gamma }- \partial^{\gamma }\partial_{\gamma
}h_{\alpha \beta } - \partial_{\alpha }\partial_{\beta }h
\right)
\qquad R\simeq \eta ^{\alpha \beta }R_{\alpha \beta }
\qquad h\simeq\eta ^{\alpha \beta }h_{\alpha \beta }
\end{equation}
which naturally contains only the perturbation.
Defining $\bar{h}_{\alpha \beta }=h_{\alpha \beta }-\frac{1}{2}\eta _{\alpha
\beta }h$, the Einstein equations~become
\begin{equation}
\frac{16\pi G}{c^{4}}T_{\alpha \beta }=
\partial_{\beta }\partial_{\gamma }\bar{h}_{\alpha }^{\gamma } + \partial_{\alpha
}\partial_{\gamma }\bar{h}_{\beta }^{\gamma }- \partial^{\gamma }\partial_{\gamma
}\bar{h}_{\alpha \beta } - \partial_{\alpha }\partial_{\beta }\bar{h}
\end{equation}
which take the form of a wave equation
\begin{equation}
\frac{16\pi G}{c^{4}}T_{\alpha \beta }
=-\partial ^{\gamma }\partial _{\gamma }\bar{h}_{\alpha \beta }
=-\left( \partial ^{\mu }\partial _{\mu } + \frac{\eta_{55}}{c_5^2}
\partial_\tau^2 \right) \bar{h}_{\alpha \beta }
\end{equation}
by imposing the usual gauge condition
$\partial_\lambda \bar{h}^{\alpha \lambda } =0$.
The principal part Green's function \cite{green} for this wave equation is
\begin{equation}
G(x,\tau ) = -{\frac{1}{{2\pi }}}\delta (x^{2})\delta (\tau )-{\frac{c_5}{{
2\pi ^{2}}}}{\frac{\partial }{{\partial {x^{2}}}}}{\theta (-\eta_{55}g_{\alpha
\beta }x^{\alpha }x^{\beta })}{\frac{1}{\sqrt{-\eta_{55}g_{\alpha \beta
}x^{\alpha }x^{\beta }}}}
\end{equation}
in which the first term is dominant at long distance, leading to the solution
\begin{equation}
\bar{h}_{\alpha \beta }\left( x,\tau \right)
=\frac{4G}{c^{4}}\int d^{3}x^{\prime }\frac{T_{\alpha \beta }\left( t-
\frac{\left\vert \mathbf{x}-\mathbf{x}^{\prime }\right\vert }{c},\mathbf{x}
^{\prime },\tau \right) }{\left\vert \mathbf{x}-\mathbf{x}^{\prime
}\right\vert }
\end{equation}
relating
the field $\bar{h}_{\alpha \beta }\left( x,\tau \right)$ to the
source $T_{\alpha \beta }\left( x,\tau \right)$.
As a simple example,
we consider a source $X = (cT(\tau),{\mathbf 0})$ in a co-moving
frame, so that $\dot T \ne $ constant corresponds to a variation in energy without
corresponding variation in momentum, producing a variation in mass.
The non-zero components of the mass-energy-momentum tensor are
\begin{equation}
T^{00} = mc^{2}\dot{T}^{2}\delta ^{3}\left( \mathbf{x} \right)
\rho \left( t-T\left( \tau \right) \right)
\qquad \
T^{\alpha i} = 0
\qquad \
T^{55} = \frac{c_5^2}{c^2} T^{00} \approx 0
\end{equation}
where we neglect $c_5^2 / c^2 \ll 1$ and have written $M(\tau)
= m \, \rho \left( t-T\left( \tau \right) \right)$ to represent a slowly varying density
function (the source is sharply located in space but smeared
along the $t$-axis).
The perturbed metric is found to be
\begin{equation}
\bar{h}^{00}\left( x,\tau \right) = \frac{4GM}{c^{2}R} \dot T^2
\qquad
\bar{h}^{\alpha i}\left( x,\tau \right) = 0
\qquad
\bar{h}^{55}\left( x,\tau \right) = 0
\label{pert-1}
\end{equation}
so using
$h_{\alpha \beta }=\bar{h}_{\alpha \beta }-\frac{1}{2}\eta _{\alpha \beta }
\bar{h}$, we see that $h^{00}=\bar{h}^{00}$.
Since $ g^{\alpha\beta} h_{\beta\gamma} \simeq \eta^{\alpha\beta} h_{\beta\gamma}$
the non-zero Christoffel symbols are
\begin{equation}
\Gamma _{00}^{\mu } = -\dfrac{1}{2}\eta ^{\mu \nu }\partial _{\nu }h_{00}
\qquad \qquad
\Gamma _{0i}^{\mu } =\dfrac{1}{ 2}\eta ^{\mu \nu }\partial _{i}h_{\nu 0}
\qquad \qquad
\Gamma _{50}^{\mu } =\dfrac{1}{2c_{5}}\eta ^{\mu 0}\partial _{\tau }h_{00}
\end{equation}
and the equations of motion for a distant test particle split into
\begin{equation}
\ddot{t} = \left( \partial _{\tau }h_{00}\right) \dot{t}+
\mathbf{\dot{x}}\cdot \left( \nabla h_{00}\right) \dot{t}^{2}
\qquad \qquad
\mathbf{\ddot{x}} = \frac{c^{2}}{2}\left( \nabla h_{00} \right)
\dot{t}^{2}
\end{equation}
where the factor $ \partial _{\tau }h_{00}$ distinguishes these equations
from the Newtonian model.
We write the space part in spherical coordinates, putting $\theta =\pi /2$, so
that the angular and radial equations become
\begin{equation}
2\dot{R}\dot{\phi}+R\ddot{\phi}=0 \ \longrightarrow \ \dot{\phi}=\frac{L}{MR^{2}}
\ \longrightarrow \
\ddot{R}-\frac{L^{2}}{M^{2}R^{3}}=-\frac{GM}{R^{2}}\dot{t}^{2}\dot{T}^{2}
\end{equation}
where $L$ is a constant of integration with units of angular
momentum, the first arrow following from $\frac{d}{d\tau }\big( MR^{2}\dot{\phi}\big) =MR\big( 2\dot{R}\dot{\phi}+R\ddot{\phi}\big) =0$. Introducing $\alpha \left( \tau \right)$ through
\begin{equation}
\dot{T} = 1+\frac{\alpha \left( \tau \right)}{2} \ \longrightarrow \ \dot{T}^{2}\simeq
1+\alpha \left( \tau \right) \ \longrightarrow \ \dot{T}\ddot{T} \simeq
\left( 1+\frac{\alpha \left( \tau \right)}{2}\right) \frac{\dot{
\alpha}\left( \tau \right)}{2}
\end{equation}
the relationship between $t$ and $\tau$ becomes
\begin{equation}
\ddot{t}=\frac{2G\partial _{\tau }M}{c^{2}R}\dot{t}+\frac{4GM}{c^{2}R}\dot{T}
\ddot{T}\dot{t}-\frac{2GM}{R^{2}c^{2}}\dot{R}\dot{T}^{2}
\approx \frac{2GM}{c^{2}R} \left( 1+\frac{\alpha \left( \tau \right)}{2}\right)
\dot{ \alpha}\left( \tau \right)\dot{t}
\end{equation}
where we neglect the nonrelativistic velocity $\dot R /c \approx 0$
and the slow variation in the source distribution $\partial_\tau \rho \approx 0$.
In the absence of the mass perturbation, we have
$\alpha = 0 \longrightarrow \dot t = 1$, recovering a Newtonian
notion of time, but this $t$ equation has the solution
\begin{equation}
\dot{t} = \exp \left[ \frac{2GM}{c^{2}R}\left( \alpha
+\frac{1}{4}\alpha ^{2} \right) \right]
\ \longrightarrow \ \dot{t}^{2}\dot{T}^{2}
\simeq 1+\left( 1+\frac{4GM}{c^{2}R}\right) \alpha
\end{equation}
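(The second relation follows by expanding to first order in $\alpha$: $\dot{t}^{2}\simeq 1+\frac{4GM}{c^{2}R}\,\alpha$ and $\dot{T}^{2}\simeq 1+\alpha$, so their product is $1+\big( 1+\frac{4GM}{c^{2}R}\big)\alpha$ up to terms of second order.)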
indicating a more complicated relationship between $t$ and $\tau$.
Since $4GM / c^2 R \ll 1$, this leads finally to a radial equation in the form
\begin{equation}
\frac{d}{d\tau }\left\{
\frac{1}{2}\dot{R}^{2}+\frac{1}{2}\frac{L^{2}}{
M^{2}R^{2}}-\frac{GM}{R}\left[ 1+\alpha \left( \tau \right) \strt{5}
\right] \right\} = \frac{dK}{d\tau} =-\frac{GM}{R}\frac{d}{d\tau }\alpha \left( \tau \right) .
\end{equation}
We recognize $K$ on the LHS as the Hamiltonian of the test particle moving in this
local metric, recovering~the Newtonian expression when the perturbation
$\alpha(\tau) $ vanishes.
The mass fluctuation of the point source is seen to induce a fluctuation in the mass of
the distant test particle, acting through the field $g_{\alpha\beta} (x,\tau)$ in order to produce
a small modification of Newtonian gravity.
\section{Field Equations}
\label{field}
In a 3+1 formalism such as ADM, a spacetime trajectory is defined with respect to a
foliation of $\cm$.
For any point $x^\mu \in \cm$, we define a time function $t(x)$ on $\cm$ whose level sets
\begin{equation}
\Sigma (t_0) = \left\{ x^{\mu } \ \big\vert \ t(x) = t_0 \right\}
\label{E-255}
\end{equation}
are hypersurfaces of constant time.
The hypersurface $\Sigma (t_0) \subset \cm $ is homeomorphic to a spacelike 3D
submanifold $\hat \Sigma$ with coordinates $x^i$, $i=1,2,3$, and the homeomorphism forms
an embedding of $\hat \Sigma$ into $\cm$, which may be expressed as
\begin{equation}
x^\mu_{t_0} = x^\mu ({\mathbf x},t_0)
\end{equation}
for fixed $t_0$.
The trajectory
\begin{equation}
x_{{\mathbf x}_0}^{\mu }\left( t \right) = x^{\mu }\left( {\mathbf x}_{0},t \right)
\end{equation}
associated with this embedding connects the point ${\mathbf x}_0$ with fixed 3D
coordinates on different hypersurfaces, suggesting a notion of time evolution from one
hypersurface to the next.
We extend these ideas to SHP general relativity, taking advantage of the analogy with
the 3+1 formalism \cite{ADM,isham,Bertschinger,zilhao}
and employing its standard notation.
Roughly following the tutorial exposition of 3+1 numerical relativity that is given in
\cite{Gourgoulhon,Blau}, we decompose the Einstein field equations
into spacetime and $\tau$ sectors, leading to a set of coupled partial
differential equations for the phase space variables of the field theory,
$\gamma_{\mu\nu} ( x,\tau ) $ and $ \dot \gamma_{\mu\nu} ( x,\tau ) = \partial
\gamma_{\mu\nu} ( x,\tau ) / \partial\tau$.
Although the general presentation is familiar, it differs in
certain details, because the foliation is natural and the field theory is presumed to carry the
factor $\sigma$ associated with objects carrying a five-index.
With appropriate initial conditions for the metric and the matter distribution, this
poses an initial value problem that can be integrated forward in $\tau$ to solve for
evolving spacetime configurations.
\subsection{Embedding and Foliation}
\label{embedding}
The first step is to introduce a 5D pseudo-spacetime by defining the injective mapping
\begin{equation}
\Phi: \mathcal{M} \ \longrightarrow \ \mathcal{M}_5 = \mathcal{M} \times R
\qquad \qquad \qquad X = \Phi(x,\tau) = (x,c_5\tau)
\end{equation}
with coordinates $X^\alpha \in \mathcal{M}_5 $, for $\alpha=0,1,2,3,5$.
This structure admits the natural foliation defined by level surfaces of the
scalar field $\tau(X) = \tau$
\begin{equation}
\Sigma (\tau_0) = \left\{ X \in \cm_5 \ \big\vert \ \tau(X) = X^5 / c_5 = \tau_0 \right\}
\end{equation}
which is homeomorphic to $\cm(\tau_0)$ for any $\tau_0$ (and so we drop reference to
$\tau_0$ in referring to the hypersurfaces).
We take
\begin{equation}
E_{\mu }^{\alpha } = \left( \frac{ \partial X^{\alpha }\left( x,\tau \right) }{\partial
x^{\mu }}\right) _{\tau _{0}}
\qquad \qquad \mu = 0,1,2,3
\label{E-6}
\end{equation}
as the four basis elements $E_\mu = \partial_\mu $ for $\ct\left( \Sigma \right) $,
the tangent space of $\Sigma $. Thus, when restricted to $X \in \Sigma $, the squared interval becomes
\begin{equation}
\left. dX^2 \right\vert_{\Sigma} = \left. g_{\alpha \beta }dX^{\alpha }dX^{\beta
}\right|_{\Sigma } = g_{\alpha \beta } \frac{\partial X^{\alpha }
}{\partial x^{\mu }}\frac{\partial X^{\beta }}{\partial x^{\nu }}dx^{\mu }dx^{\nu }
= \gamma_{\mu \nu } dx^{\mu }dx^{\nu }
\label{E-14}
\end{equation}
where we identify $ \gamma _{\mu \nu } = g_{\alpha \beta }E_{\mu }^{\alpha }E_{\nu
}^{\beta }$, the induced metric on $\Sigma $,
with the 4D spacetime metric we began with.
For a vector in the time direction of $\ct(\cm_5) $, we write
\begin{equation}
\partial_\alpha \p \tau(X) = \delta_\alpha^5 \p \partial_5 \p \tau(X)
= \delta_\alpha^5 \p \frac{1}{c_5} \p \partial_\tau \p \tau(X)
\label{E-268}
\end{equation}
which is normal to the tangent space of $ \Sigma $ in the sense that $\tau
\left( X \right) = \tau_0 $ is constant throughout $\Sigma (\tau_0)$.
Thus, in $\ct(\cm_5) $, the vector $(E_5)_\alpha = \partial_\alpha \p \tau(X)$ points out
of $\ct\left( \Sigma \right) $ in the direction of time evolution.
The~unit normal $n_\alpha$ in the time direction is defined as
\begin{equation}
n = \sigma \frac{1}{\sqrt{\left\vert g^{55}\right\vert}} E_5 \ \longrightarrow \
n^2 = \frac{1}{\left\vert g^{55}\right\vert}
g^{\alpha \beta} (E_5)_\alpha(E_5)_\beta
= \frac{1}{\left\vert g^{55}\right\vert} g^{55} = \sigma
\end{equation}
so that
\begin{equation}
n^\alpha = g ^{\alpha \beta }n_{\beta }=g ^{\alpha \beta }\sigma
\frac{1}{\sqrt{\left\vert g ^{55}\right\vert }}\delta _{\beta
}^{5}=\sigma g ^{\alpha 5}\frac{1}{\sqrt{\left\vert g
^{55}\right\vert }} \ .
\label{E-44}
\end{equation}
For any vector $A \in \ct \left( \cm_5 \right) $ in the tangent space of $\cm_5$ we can
project onto parallel and normal~components
\begin{equation}
A_{\parallel } = \sigma \left( A\cdot n \right) n \qquad \qquad \qquad
A_{\perp } = A-\sigma \left( A\cdot n \right) n
\end{equation}
and so define the normal projection operator
\begin{equation}
\Pi_{\alpha \beta }= \sigma n_{\alpha }n_{\beta } \qquad \qquad \qquad
\Pi_{\alpha \gamma }\Pi^{\gamma \beta }
= \sigma^2 n^2 \; n_{\alpha }n^{\beta }= \Pi_{\alpha }^{\beta }
\end{equation}
and the tangent projection operator
\begin{equation}
P_{\alpha \beta }=g_{\alpha \beta }-\sigma n_{\alpha }n_{\beta } \qquad P^{\alpha\beta}
=g^{\alpha\beta} -\sigma n^{\alpha }n^{\beta } \qquad P_{\alpha \gamma }P^{\gamma \beta
}=P_{\alpha }^{\beta } = \delta_\alpha^\beta -\sigma n_{\alpha }n^{\beta }
\label{projector}
\end{equation}
along with the completeness relation
\begin{equation}
g_{\alpha\beta} = P_{\alpha \beta } + \sigma n_{\alpha }n_{\beta }
\qquad \qquad
\delta^\alpha_\beta = P^{\alpha }_{\beta } + \sigma n^{\alpha }n_{\beta } \ .
\label{complete}
\end{equation}
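As a quick verification of the relations in (\ref{projector}), using $n^{2}=\sigma$ and $\sigma ^{2}=1$,
\begin{equation}
P_{\alpha \gamma }P^{\gamma \beta }
=\left( g_{\alpha \gamma }-\sigma n_{\alpha }n_{\gamma }\right)
\left( g^{\gamma \beta }-\sigma n^{\gamma }n^{\beta }\right)
=\delta _{\alpha }^{\beta }-2\sigma n_{\alpha }n^{\beta }
+\sigma ^{2}n^{2}\, n_{\alpha }n^{\beta }
=\delta _{\alpha }^{\beta }-\sigma n_{\alpha }n^{\beta }
\end{equation}
while $P_{\alpha }^{\beta }n_{\beta }=n_{\alpha }-\sigma n^{2}n_{\alpha }=0$, so the tangent projector annihilates the normal direction.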
For any vector $V \in \ct\left( \cm_5 \right) $, the vector $V^\alpha_\perp =
P^\alpha_\beta V^\beta $ is in $\ct\left( \Sigma \right), $
and so there is some vector $v \in \ct \left( \cm \right) $, such that
\begin{equation}
V_\perp^\alpha =v^\mu E_\mu^\alpha
\end{equation}
which entails
\begin{equation}
v_\mu = \gamma_{\mu\nu} v^\nu = g_{\alpha\beta} E^\alpha_\mu E^\beta_\nu v^\nu
= g_{\alpha\beta} E^\alpha_\mu V_\perp^\beta = E^\alpha_\mu V^\perp_\alpha
= E^\alpha_\mu P_\alpha^\beta V_\beta = E^\beta_\mu V_\beta
\end{equation}
since $ E^\alpha_\mu \in \ct\left( \Sigma \right)$.
In particular, expressing the metric in terms of (\ref{complete}), we find
\begin{equation}
\gamma _{\mu \nu }=g_{\alpha \beta
}E_{\mu }^{\alpha }E_{\nu }^{\beta }=\left( P_{\alpha \beta }+\sigma
n_{\alpha }n_{\beta }\right) E_{\mu }^{\alpha }E_{\nu }^{\beta }=P_{\alpha
\beta }E_{\mu }^{\alpha }E_{\nu }^{\beta } = P_{\mu \nu }
\end{equation}
so that the projector $P_{\alpha \beta }$ when restricted to $\Sigma $ acts
precisely as the 4D metric $\gamma_{\mu \nu }$.
Generalizing the characterization of 5D distance that is expressed in (\ref{difference}),
we write
\begin{equation}
X_2 - X_1 = \left( \delta x^\mu + N^\mu \delta x^5 , N \delta x^5 \right)
\qquad \qquad
\delta X^\alpha = \left( \delta x^\mu + N^\mu \delta x^5 \right) E^\alpha_\mu +
N n^\alpha \delta x^5
\label{difference-1}
\end{equation}
where $N$ is a lapse function and $N^\mu$ is a shift four-vector.
The 5D squared invariant interval now takes the~form
\begin{eqnarray}
dX^2 \eq g_{\alpha\beta} \big( x,\tau\big) \delta X^\alpha \delta X^\beta \notag \\
\eq g_{\alpha\beta} \big( x,\tau\big)
\big[ \big( \delta x^\mu + N^\mu \delta x^5 \big) E^\alpha_\mu +
N n^\alpha \delta x^5\big]
\big[ \big( \delta x^\nu + N^\nu \delta x^5 \big) E^\beta_\nu + N n^\beta \delta
x^5\big] \notag \\
\eq \gamma_{\mu\nu} \big( x,\tau\big) \big( \delta x^\mu + N^\mu \delta x^5 \big)
\big( \delta x^\nu + N^\nu \delta x^5 \big)
+ \sigma N^2 \big( \delta x^5\big) ^2 \notag \\
\eq \gamma_{\mu\nu} \big( x,\tau\big) \delta x^\mu \delta x^\nu
+ 2 \gamma_{\mu\nu} \big( x,\tau\big) N^\nu \delta x^\mu \delta x^5
+\big( \gamma_{\mu\nu} \big( x,\tau\big) N^\mu N^\nu + \sigma N^2 \big)
\big( \delta x^5\big) ^2
\label{5D-interval-2}
\end{eqnarray}
allowing us to decompose the 5D metric
\vspace{-12pt}
\begin{equation}
g_{\alpha \beta } = \left[
\begin{array}{cc}
\gamma _{\mu \nu } & N_{\mu } \\
N_{\mu } & \sigma N^{2}+\gamma _{\mu \nu }N^{\mu }N^{\nu }
\end{array}
\right]
\qquad \qquad
g^{\alpha \beta }=\left[
\begin{array}{cc}
\gamma ^{\mu \nu }+\sigma \dfrac{1}{N^{2}}N^{\mu }N^{\nu } & -\sigma \dfrac{1}{
N^{2}}N^{\mu } \strt{12} \\
-\sigma \dfrac{1}{N^{2}}N^{\mu } & \sigma \dfrac{1}{N^{2}}
\end{array}
\right]
\end{equation}
into the spacetime and $\tau$ sectors.
Once again, on any 4D SHP spacetime $\cm (\tau)$, the induced
metric $\gamma_{\mu\nu} ( x,\tau ) $ is just the local metric we assumed to
exist at the outset.
In this decomposition, the unit normal $n_\alpha$ becomes
\begin{equation}
n_\alpha = \sigma \frac{1}{\sqrt{\left\vert g^{55}\right\vert}} \partial_\alpha \p \tau(X)
= \sigma N \delta^5_\alpha \ .
\label{unit_normal}
\end{equation}
One can easily establish that $\sqrt{g} = \sqrt{\gamma }N$ by writing
\begin{equation}
\left[
\begin{array}{cc}
g_{\mu \nu } & g_{\mu 5} \\
g_{\mu 5} & g_{55}
\end{array}
\right] =\left[
\begin{array}{cc}
\gamma _{\mu \nu } & N_{\mu } \\
N_{\mu } & \sigma N^{2}+\gamma _{\mu \nu }N^{\mu }N^{\nu }
\end{array}
\right]
= \left[
\begin{array}{cc}
I & 0 \\
N^{\mu } & 1
\end{array}
\right] \left[
\begin{array}{cc}
\gamma _{\mu \nu } & 0 \\
0 & \sigma N^{2}
\end{array}
\right] \left[
\begin{array}{cc}
I & N^{\nu } \\
0 & 1
\end{array}
\right] \ .
\label{E-330}
\end{equation}
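Indeed, the outer factors in (\ref{E-330}) are triangular with unit determinant, so taking determinants of the block factorization gives (a step we record for completeness)
\begin{equation}
\det g = \sigma N^{2} \det \gamma
\ \longrightarrow \
\sqrt{\left\vert \det g \right\vert} = \sqrt{\left\vert \det \gamma \right\vert}\; N
\end{equation}
in the notation $\sqrt{g} \equiv \sqrt{\left\vert \det g \right\vert}$.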
\subsection{Intrinsic and Extrinsic Geometry}
With compatible connection (\ref{connection}) the covariant derivative
(\ref{cov-div}) on $\cm_5$ obeys $\nabla _\gamma g_{\alpha \beta } = 0$, leading~to the standard Ricci identity
\begin{equation}
\left[ \nabla _{\beta } , \nabla _{\alpha } \right] X_{\delta } =
X_{\gamma }R_{\delta \alpha \beta }^{\gamma }
\label{F-curvature}
\end{equation}
with Riemann tensor
\begin{equation}
R_{\delta \alpha \beta }^{\gamma }=\frac{\partial }{\partial x^{\alpha }}
\Gamma _{\delta \beta }^{\gamma }-\frac{\partial }{\partial x^{\beta }}
\Gamma _{\delta \alpha }^{\gamma }+\Gamma _{\sigma \alpha }^{\gamma }\Gamma
_{\delta \beta }^{\sigma }-\Gamma _{\sigma \beta }^{\gamma }\Gamma _{\delta
\alpha }^{\sigma }
\label{E-113}
\end{equation}
and associated Bianchi relations.
To find the corresponding structures on the hyperspaces defined through foliation we
examine their projections onto $\ct(\Sigma)$.
For a vector $V=V^{\perp }\in \ct\left( \Sigma \right) $ we define the projected covariant
derivative $ \overline{\nabla }_{\alpha }$ in which the projected derivative acts on the
projected vector.
Thus,
\begin{equation}
\overline{\nabla }_{\alpha }V_{\beta }^{\perp }=\overline{\nabla }_{\alpha
}\left( P_{\beta }^{\delta }V_{\delta }\right) =P_{\beta }^{\delta }
\overline{\nabla }_{\alpha }\left( V_{\delta }\right) =P_{\beta }^{\delta
}\left( P_{\alpha }^{\gamma }\nabla _{\gamma }\right) V_{\delta }=P_{\alpha
}^{\gamma }P_{\beta }^{\delta }\nabla _{\gamma }V_{\delta } \ .
\label{E-139}
\end{equation}
We justify the second equality by noting that the full 5D covariant derivative of
the projector is
\begin{equation}
\nabla _{\alpha }P_{\beta \gamma }=\nabla _{\alpha }\left( g_{\beta \gamma
}-\sigma n_{\beta }n_{\gamma }\right) =-\sigma \nabla _{\alpha }\left(
n_{\beta }n_{\gamma }\right) =-\sigma \left[ \left( \nabla _{\alpha
}n_{\gamma }\right) n_{\beta }+n_{\beta }\nabla _{\alpha }n_{\gamma }\right]
\label{F-grad_P}
\end{equation}
and so the projected covariant derivative of the projector is
\begin{eqnarray}
\overline{\nabla }_{\alpha }P_{\beta \gamma } \eq -\sigma P_{\alpha }^{\alpha
^{\prime }}P_{\beta }^{\beta ^{\prime }}P_{\gamma }^{\gamma ^{\prime
}}\left( \left( \nabla _{\alpha ^{\prime }}n_{\gamma ^{\prime }}\right)
n_{\beta ^{\prime }}+n_{\beta ^{\prime }}\nabla _{\alpha ^{\prime
}}n_{\gamma ^{\prime }}\right)
\notag
\label{E-141} \\
\eq -\sigma P_{\alpha }^{\alpha ^{\prime }}P_{\gamma }^{\gamma ^{\prime
}}\left( \nabla _{\alpha ^{\prime }}n_{\gamma ^{\prime }}\right) \left(
P_{\beta }^{\beta ^{\prime }}n_{\beta ^{\prime }}\right) -\sigma P_{\alpha
}^{\alpha ^{\prime }}P_{\gamma }^{\gamma ^{\prime }}\left( P_{\beta }^{\beta
^{\prime }}n_{\beta ^{\prime }}\right) \nabla _{\alpha ^{\prime }}n_{\gamma
^{\prime }}=0
\label{E-142}
\end{eqnarray}
which follows from $P_{\beta }^{\delta }n_{\delta }\equiv 0$.
This compatibility justifies regarding $\overline{\nabla }_{\alpha }$ as the intrinsic
covariant derivative on $\ct\left( \Sigma \right) $, denoted as
\begin{equation}
D_{\alpha }=\overline{\nabla }_{\alpha }=P_{\alpha }^{\gamma }\nabla
_{\gamma }\mbox{\qquad}D_{\mu }=E_{\mu }^{\alpha }D_{\alpha }=E_{\mu
}^{\alpha }P_{\alpha }^{\gamma }\nabla _{\gamma }=E_{\mu }^{\gamma }\nabla
_{\gamma }
\label{E-143}
\end{equation}
and satisfying $ D_{\mu }\gamma _{\lambda \rho }=0$.
That is, for $V_\alpha^\perp \in \ct\left( \Sigma \right) $ and $v_\nu \in \ct\left(
\cm\right) $ with $v_\mu = E_\mu^\alpha V_\alpha^\perp $
we have
\begin{equation}
E_{\mu }^{\alpha }E_{\nu }^{\beta }\left( D_{\alpha }V_{\beta }^{\perp
}\right) =E_{\mu }^{\alpha }D_{\alpha }E_{\nu }^{\beta }\left( P_{\beta
}^{\delta }V_{\delta }\right) =E_{\mu }^{\alpha }D_{\alpha }\left( E_{\nu
}^{\beta }P_{\beta }^{\delta }\right) V_{\delta }=E_{\mu }^{\alpha
}D_{\alpha }E_{\nu }^{\delta }V_{\delta }=D_{\mu }v_{\nu } \ .
\label{E-148}
\end{equation}
The projected curvature $ \bar{R} _{\lambda \mu \nu }^{\rho }$ is defined through
\begin{equation}
\left[ D_{\nu },D_{\mu }\right] X_{\lambda }=X_{\rho }\bar{R}
_{\lambda \mu \nu }^{\rho }
\label{E-146}
\end{equation}
and will be examined below.
Restricted to $\ct\left( \Sigma \right) \subset \ct\left( \cm_5
\right) $
the Weingarten map $\chi $ associates to a
tangent vector $V\in \ct\left( \Sigma \right) $ the variation of the $\tau$-like
unit vector $n$ along $V$. Thus,
\begin{equation}
\chi \left( V\right)
=\nabla _{V}n=V\cdot \left( \nabla n\right)
\qquad \qquad
\chi ^{\alpha }\left( V\right) = V^{\beta }\nabla _{\beta }n^{\alpha }
\end{equation}
and
\begin{equation}
U\cdot \left( \nabla _{V}n\right) =V\cdot \left( \nabla _{U}n\right) \ .
\label{E-130}
\end{equation}
The extrinsic curvature on $\ct\left( \Sigma \right) $ is
\begin{equation}
K:\ct\left( \Sigma \right) \times \ct\left( \Sigma \right) \rightarrow R
\label{E-131}
\end{equation}
defined as the projection onto a vector $U$ of the Weingarten map along a vector $V$
\begin{eqnarray}
K\left( U,V\right) \eq -U\cdot \chi \left( V\right) =-U\cdot \nabla
_{V}n=-g_{\alpha \gamma }V^{\alpha }U^{\beta }\nabla _{\beta }n^{\gamma }
\label{E-132} \\
K_{\alpha \beta } \eq -g_{\alpha \gamma }\nabla _{\beta }n^{\gamma }=-\nabla
_{\beta }n_{\alpha } \ .
\label{E-133}
\end{eqnarray}
Using the projector $P_{\alpha \beta }$, we extend this definition to the full manifold
$\ct(\cm_5)$ as
\begin{eqnarray}
K\left( U_{\perp },V_{\perp }\right) \eq K\left( PU,PV\right) =-g_{\gamma
\alpha }\left( P_{\varepsilon }^{\gamma }V^{\varepsilon }\right) \left(
P_{\phi }^{\beta }U^{\phi }\right) \nabla _{\beta }n^{\alpha }
\label{E-135}
\\
K_{\phi \varepsilon }U^{\phi }V^{\varepsilon } \eq V^{\varepsilon }U^{\phi
}\left( -g_{\gamma \alpha }P_{\varepsilon }^{\gamma }P_{\phi }^{\beta
}\right) \nabla _{\beta }n^{\alpha }
\label{E-136} \\
K_{\alpha \beta } \eq -P_{\alpha }^{\gamma }P_{\beta }^{\delta }~\nabla
_{\delta }n_{\gamma }
\label{E-137}
\end{eqnarray}
where we recall that $\nabla _{\delta }n_{\gamma } $ may have both normal and tangent
components with respect to $\ct\left( \Sigma \right) $.
Because $n^{\delta }\nabla _{\gamma }n_{\delta }=\frac{1}{2}\nabla _{\gamma
}\left( n\cdot n\right) =0$, the gradient of $n$ has no normal component on
its second index, and we can write
\begin{equation}
P_{\beta }^{\delta }\left( \nabla _{\gamma }n_{\delta }\right) \equiv \nabla
_{\gamma }n_{\beta }
\label{E-170}
\end{equation}
leading to the identity
\begin{equation}
K_{\alpha \beta } =-P_{\alpha }^{\gamma }P_{\beta }^{\delta }\nabla
_{\gamma }n_{\delta }=-P_{\alpha }^{\gamma }\nabla _{\gamma }n_{\beta
}=-\left( \delta _{\alpha }^{\gamma }-\sigma n_{\alpha }n^{\gamma }\right)
\nabla _{\gamma }n_{\beta }
= -\nabla _{\alpha }n_{\beta }+\sigma n_{\alpha }\left( n^{\gamma }\nabla
_{\gamma }n_{\beta }\right)
\label{F-K_ident}
\end{equation}
and the contracted form
\begin{equation}
K=\gamma ^{\alpha \beta }K_{\alpha \beta }=-\gamma ^{\alpha \beta }P_{\alpha
}^{\gamma }P_{\beta }^{\delta }\nabla _{\gamma }n_{\delta }=-\gamma ^{\gamma
\delta }\nabla _{\gamma }n_{\delta }=-\nabla _{\alpha }n^{\alpha } \ .
\label{E-155}
\end{equation}
Using (\ref{unit_normal}) for the unit normal $n_\alpha$, we may expand
\begin{eqnarray}
\left( n^{\gamma }\nabla _{\gamma }n_{\beta }\right)
\eq \sigma n^{\gamma }\nabla _{\gamma }\left( N\nabla _{\beta
}\tau \right)
= \sigma n^{\gamma }\left( \nabla _{\gamma }N\right) \frac{n_{\beta }}{
\sigma N}+\sigma n^{\gamma }N\nabla _{\beta }\left( \frac{n_{\gamma }}{
\sigma N}\right)
\label{E-365} \notag \\
\eq \frac{1}{N}\left[ n^{\gamma }n_{\beta }\nabla _{\gamma }N-\sigma \delta
_{\beta }^{\gamma }\nabla _{\gamma }N\right]
= -\sigma \frac{1}{N}\left[
\delta _{\beta }^{\gamma }-\sigma n^{\gamma }n_{\beta }\right] \nabla
_{\gamma }N
\label{E-368a} \notag \\
\eq -\sigma \frac{1}{N}P_{\beta }^{\gamma }\nabla _{\gamma }N
= -\sigma \frac{1}{N}D_{\beta }N
\label{E-369}
\end{eqnarray}
to put (\ref{F-K_ident}) into the form
\begin{equation}
K_{\alpha \beta } = - \nabla_\alpha n_\beta - n_{\alpha }\frac{1}{N}D_{\beta }N
\ .
\label{F-K_ident-2}
\end{equation}
If $V \in \ct(\cm_5) $ has components both tangent and normal to
$\ct(\Sigma) $, and so can be written as
\begin{equation}
V^{\beta } = E_{\lambda }^{\beta }v^{\lambda }+\sigma \left( n\cdot
V\right) n^{\beta } \ \longrightarrow \
\nabla _{\alpha }V^{\beta } = \nabla _{\alpha }\left( E_{\lambda }^{\beta
}v^{\lambda }\right) +\sigma \nabla _{\alpha }\left[ \left( n\cdot V\right)
n^{\beta }\right]
\end{equation}
we see that
\begin{equation}
D_{\mu }v_{\nu }=E_{\mu }^{\alpha }E_{\nu }^{\beta }\nabla _{\alpha
}V_{\beta }+\sigma \left( n\cdot V\right) K_{\mu \nu }
\label{E-162}
\end{equation}
in which the first term represents the tangential part of the covariant
derivative, and the second term expresses the connection for the normal
components of $V$ in the full covariant derivative.
\subsection{Evolution of the Hypersurface $\Sigma $}
From (\ref{difference-1}) we see that the variation of $X \in \Sigma$ for a
small time variation $\delta x^5$ at a given point $x_0 \in \cm$~is
\begin{equation}
\delta X^{\alpha }=\left( \dfrac{ \partial X^{\alpha }}{\partial
x^5}\right)_{x_0} \delta x^5 =\left( \dfrac{ \partial X^{\alpha }}{\partial
\tau}\right)_{x_0} \delta \tau \longrightarrow E^\alpha_5 = \left( \partial_5
\right)^\alpha = N n^\alpha + N^{\mu }E_{\mu }^{\alpha }
\end{equation}
Defining $m^\alpha = N n^\alpha $ we write $E_5$ as $\partial_5 = m + {\mathbf N} $
and characterize time evolution through the Lie derivative in the time direction
\begin{equation}
{\mathcal{L}}_5 = {\mathcal{L}}_m + {\mathcal{L}}_{\mathbf N} \ .
\end{equation}
For the metric $ \gamma _{\alpha \beta }$, the Lie derivative is
\begin{equation}
{\mathcal{L}}_{m}\,\gamma _{\alpha \beta } = m^{\gamma }\nabla _{\gamma
}\gamma _{\alpha \beta }+\gamma _{\gamma \beta }\nabla _{\alpha }m^{\gamma
}+\gamma _{\alpha \gamma }\nabla _{\beta }m^{\gamma }
\label{lie-1}
\end{equation}
which we may evaluate by using (\ref{projector}) for $ P_{\alpha \beta } =
\gamma_{\alpha \beta }$ in the first term and using (\ref{F-K_ident-2}) to obtain
\begin{equation}
\nabla _{\beta }m_{\alpha }= N\nabla _{\beta }n_{\alpha } +n_{\alpha } \nabla _{\beta } N =
-NK_{\beta \alpha }-n_{\beta }D_{\alpha }N+n_{\alpha }\nabla _{\beta }N
\label{nab-m}
\end{equation}
in the remaining terms. Notice that ${\mathcal{L}}_{m}\,P^\alpha_\beta$ is
the derivative in the normal direction of the projector onto the tangent space,
so that a direct calculation using (\ref{F-K_ident-2}) and (\ref{nab-m}) provides
\begin{equation}
{\mathcal{L}}_{m}\,P_{\ \,\beta }^{\alpha }
= m^{\gamma }\nabla _{\gamma }(\delta^\alpha_\beta -\sigma n^\alpha
n_{\beta })-\gamma_{\ \,\beta }^{\gamma }\nabla _{\gamma }m^{\alpha
}+\gamma_{\ \,\gamma }^{\alpha }\nabla _{\beta }m^{\gamma } = 0
\end{equation}
expressing compatibility of ${\mathcal{L}}_{m}$ with $P_{\ \,\beta }^{\alpha }$.
As a result, if $V \in \ct(\cm_5)$ is tangent to $\Sigma$, its Lie
derivative in the time direction is tangent to $\Sigma$, and so tangent vectors
propagate as tangent vectors as $\tau$ advances monotonically.
Equation (\ref{lie-1}) then simplifies to
\begin{equation}
{\mathcal{L}}_{m}\,\gamma _{\alpha \beta } = -2NK_{\alpha \beta }
\end{equation}
leading to
\begin{equation}
{\mathcal{L}}_5\,\gamma _{\alpha \beta } -
{\mathcal{L}}_{{\mathbf N}}\,\gamma _{\alpha \beta } = -2NK_{\alpha \beta }
\ \longrightarrow \
{\mathcal{L}}_5\,\gamma _{\mu \nu } - {\mathcal{L}}_{{\mathbf N}}\,\gamma _{\mu \nu } = -2NK_{\mu \nu }
\end{equation}
as the evolution equation for the metric.
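As a simple illustration, assuming adapted coordinates can be chosen with
unit lapse $N=1$ and vanishing shift $N^{\mu }=0$ (the analog of Gaussian
normal coordinates), the evolution equation reduces to
\[
\partial _{5}\gamma _{\mu \nu }=\frac{1}{c_5}\partial _{\tau }\gamma _{\mu
\nu }=-2K_{\mu \nu }
\]
so that the extrinsic curvature directly measures the rate of change of the
induced metric under $\tau $-evolution.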
\subsection{Decomposition of the Riemann Tensor}
\label{de-R}
The 4+1 decomposition of $R_{\ \,\delta \alpha \beta }^{\gamma } $ is accomplished by
projecting onto $\Sigma $ and $n$. Using the completeness relation (\ref{complete}) to
write
\begin{equation}
R_{\ \,\delta \alpha \beta }^{\gamma }=\left( P_{\alpha }^{\alpha ^{\prime
}}+\sigma n_{\alpha }n^{\alpha ^{\prime }}\right) \left( P_{\beta }^{\beta
^{\prime }}+\sigma n_{\beta }n^{\beta ^{\prime }}\right) \left( P_{\gamma
^{\prime }}^{\gamma }{}\!+\sigma n^{\gamma }n_{\gamma ^{\prime }}\right)
\left( P_{\delta }^{\delta ^{\prime }}~+\sigma n_{\delta }n^{\delta ^{\prime
}}\right) R_{\ \,\delta ^{\prime }\alpha ^{\prime }\beta ^{\prime }}^{\gamma
^{\prime }}
\label{E-397}
\end{equation}
we obtain products of the type
\begin{equation}
R_{\delta \alpha \beta }^{\gamma }= \delta_{\alpha }^{\alpha ^{\prime }}
\p\p \delta_{\beta }^{\beta ^{\prime }} \p\p \delta_{\gamma ^{\prime }}^{\gamma }
\p\p \delta_{\delta }^{\delta ^{\prime }}\p\p R_{\delta ^{\prime }\alpha ^{\prime }\beta
^{\prime }}^{\gamma ^{\prime }} \longrightarrow
\left\{
\begin{array}{l}
E_\mu^\alpha
\p\p E_\nu^\beta
\p\p E_\gamma^\lambda
\p\p E_\sigma^\delta
\p\p P_\alpha^{\alpha^\prime}
\p\p P_\beta^{\beta^\prime}
\p\p P_{\gamma^\prime}^\gamma
\p\p P_\delta^{\delta^\prime}
\p\p R_{\ \,\delta^\prime \alpha^\prime \beta^\prime}^{\gamma^\prime}
= R_{\sigma \mu \nu }^\lambda \strt{12} \\
E_\mu^\alpha
\p\p E_\nu^\beta
\p\p E_\gamma^\lambda
\p\p P_{\gamma^\prime}^\gamma n^\delta
\p\p P_\alpha^{\alpha^\prime}
\p\p P_\beta^{\beta^\prime}
\p\p R_{\delta \alpha^\prime \beta^\prime}^{\gamma^\prime}
= \sigma N \p\p R_{5\mu \nu }^\lambda \strt{12} \\
E^{\alpha \mu }
\p\p E_\nu^\beta
\p\p P_{\alpha \alpha^\prime} \p\p n^\delta
\p\p P_\beta^{\beta^\prime} \p\p n^\gamma
\p\p R_{\delta \beta^\prime \gamma}^{\alpha^\prime} = N^2 \p\p R_{5\nu 5}^\mu
\end{array}
\right.
\end{equation}
where $ \ R_{\delta \alpha \beta }^{\gamma } \p\p n^\delta \p\p n^\alpha \p\p
n^\beta = 0 \ $ by the antisymmetry of the Riemann tensor in its last two indices.
For the projected curvature defined in (\ref{E-146}), we write
\begin{equation}
D_{\alpha }D_{\beta }V^{\gamma }=D_{\alpha }\left( D_{\beta }V^{\gamma
}\right) =P_{\alpha }^{\alpha ^{\prime }}P_{\beta }^{\beta ^{\prime
}}P_{\gamma }^{\gamma ^{\prime }}\nabla _{\alpha ^{\prime }}\left( D_{\beta
^{\prime }}V^{\gamma ^{\prime }}\right)
\label{E-183}
\end{equation}
Expanding and using (\ref{F-grad_P}), we obtain
\begin{equation}
D_{\alpha }D_{\beta }V^{\gamma }=\sigma K_{\alpha \beta }P_{\gamma ^{\prime
}}^{\gamma }n^{\beta ^{\prime }}\nabla _{\beta ^{\prime }}V^{\gamma ^{\prime
}}+\sigma K_{\alpha }^{\gamma }K_{\beta \delta }V^{\delta }+P_{\alpha
}^{\alpha ^{\prime }}P_{\beta }^{\beta ^{\prime \prime }}P_{\gamma ^{\prime
\prime }}^{\gamma }(\nabla _{\alpha ^{\prime }}\nabla _{\beta ^{\prime
\prime }}V^{\gamma ^{\prime \prime }})
\label{E-206}
\end{equation}
so that
\begin{equation}
\left[ D_{\alpha },D_{\beta }\right] V^{\gamma }=
\bar{R}_{\delta \alpha \beta }^{\gamma }V^{\delta } =
-\sigma \left(
K_{\alpha \delta }K_{\ \,\beta }^{\gamma }-K_{\beta \delta }K_{\ \,\alpha
}^{\gamma }\right) V^{\delta }+P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\
\,\beta }^{\beta ^{\prime }}P_{\ \,\gamma ^{\prime }}^{\gamma }{}\!R_{\
\,\delta ^{\prime }\alpha ^{\prime }\beta ^{\prime }}^{\gamma ^{\prime
}}P_{\ \,\delta }^{\delta ^{\prime }}V^{\delta }
\label{E-208}
\end{equation}
which, by the quotient theorem on $\Sigma $, leads to
\begin{equation}
P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime }}P_{\
\,\gamma ^{\prime }}^{\gamma }{}\!P_{\ \,\delta }^{\delta ^{\prime }}R_{\
\,\delta ^{\prime }\alpha ^{\prime }\beta ^{\prime }}^{\gamma ^{\prime }} =
\bar{R}_{\delta \alpha \beta }^{\gamma }-\sigma \left( K_{\alpha }^{\gamma
}K_{\beta \delta }-K_{\beta }^{\gamma }K_{\alpha \delta }\right)
\label{F-gauss}
\end{equation}
This is known as the Gauss relation.
Acting on this expression with
$E_{\gamma }^{\mu }E_{\nu }^{\delta }E_{\lambda }^{\alpha }E_{\rho }^{\beta
} $
we find
\begin{equation}
R_{\ \,\nu \lambda \rho }^{\mu }=\bar{R}_{\ \,\nu \lambda \rho }^{\mu
}-\sigma \left( K_{\lambda }^{\mu }K_{\rho \nu }-K_{\rho }^{\mu }K_{\lambda
\nu }\right)
\label{E-216}
\end{equation}
providing an expression for the pull-back
$R_{\ \,\nu \lambda \rho }^{\mu } $ of the 5D curvature in terms of the projected
curvature $ \bar{R}_{\ \,\nu \lambda \rho }^{\mu }$ and the extrinsic curvature
$ K_{\rho \nu }$.
Contracting on $\alpha $ and $\gamma $ in (\ref{F-gauss}) leads to
\begin{equation}
P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime }}R_{\
\,\alpha ^{\prime }\beta ^{\prime }}-\sigma P_{\alpha \alpha ^{\prime
}}n^{\delta ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime }}n^{\gamma \prime
}R_{\ \,\delta ^{\prime }\beta ^{\prime }\gamma ^{\prime }}^{\alpha ^{\prime
}}=\bar{R}_{\alpha \beta }-\sigma \left( KK_{\alpha \beta }-K_{\alpha
}^{\delta }K_{\beta \delta }\right)
\label{E-221}
\end{equation}
and contracting on $\alpha $ and $\beta $ gives
\begin{equation}
R-2\sigma R_{\alpha \beta }n^{\alpha }n^{\beta } = \bar{R}-\sigma
\left( K^{2}-K^{\alpha \beta }K_{\alpha \beta }\right)
\label{F-scalar_gauss}
\end{equation}
called the scalar Gauss relation.
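In particular, if $\cm_5$ is flat, so that $R=0$ and $R_{\alpha \beta }=0$,
the scalar Gauss relation reduces to
\[
\bar{R}=\sigma \left( K^{2}-K^{\alpha \beta }K_{\alpha \beta }\right)
\]
and the intrinsic scalar curvature of $\Sigma $ is determined entirely by its
extrinsic curvature, in analogy with the theorema egregium for surfaces
embedded in Euclidean space.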
Applying the Ricci identity (\ref{F-curvature}) to the vector $n$ as
\begin{equation}
\left( \nabla _{\beta }\nabla _{\alpha }-\nabla _{\alpha }\nabla _{\beta
}\right) n^{\gamma }=R_{\gamma ^{\prime }\alpha \beta }^{\gamma }n^{\gamma
^{\prime }}
\label{E-228}
\end{equation}
projecting the LHS onto $\Sigma $ as
\begin{equation}
P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime
}}P_{\gamma ^{\prime }}^{\gamma }\left( \nabla _{\alpha ^{\prime }}\nabla
_{\beta ^{\prime }}-\nabla _{\beta ^{\prime }}\nabla _{\alpha ^{\prime
}}\right) n^{\gamma ^{\prime }}
\label{E-229}
\end{equation}
and using the identity (\ref{F-K_ident}) leads us to
\begin{equation}
D_{\beta }K_{\alpha }^{\gamma }-D_{\alpha }K_{\beta }^{\gamma }=P_{\gamma
^{\prime }}^{\gamma }n^{\delta }P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\
\,\beta }^{\beta ^{\prime }}R_{\delta \alpha ^{\prime }\beta ^{\prime
}}^{\gamma ^{\prime }}
\label{F-Codazzi}
\end{equation}
which is called the Codazzi relation.
Mapping to the coordinates of $\Sigma $ and using the symmetries of the Riemann tensor produces
\begin{equation}
n_{\delta }R_{\mu \nu \lambda }^{\delta }=D_{\lambda }K_{\nu \mu }-D_{\nu
}K_{\lambda \mu } \ .
\label{E-239}
\end{equation}
Using (\ref{unit_normal}) for the unit normal $n_\alpha$ provides an
interpretation of this expression as
\begin{equation}
n_{\delta }R_{\mu \nu \lambda }^{\delta }=\sigma N\delta _{\delta
}^{5}R_{\mu \nu \lambda }^{\delta }\longrightarrow R_{\mu \nu \lambda
}^{5}=\sigma \frac{1}{N}\left( D_{\lambda }K_{\nu \mu }-D_{\nu }K_{\lambda
\mu }\right)
\label{E-252}
\end{equation}
recalling the role of the extrinsic curvature $ K_{\mu \nu }$ in describing how
the hypersurface $\Sigma $, representing $\cm $, is embedded in the larger manifold $\cm_5 $.
Returning to the Ricci identity for $n^\alpha$, we apply (\ref{F-K_ident-2}) twice
to the terms $\nabla _{\beta }\nabla _{\gamma }n^{\alpha }$ in (\ref{E-228})
and project with
$P_{\alpha \alpha ^{\prime }}n^{\gamma ^{\prime }}P_{\beta }^{\beta ^{\prime }}$ to obtain
\begin{equation}
-K_{\alpha \gamma }K_{\ \,\beta }^{\gamma }+\frac{1}{N}D_{\beta }D_{\alpha
}N +
P_{\ \,\alpha }^{\alpha^\prime }P_{\ \,\beta }^{\beta^\prime }\,n^{\gamma }\nabla
_{\gamma }K_{\alpha^\prime \beta^\prime }
=
P_{\alpha \alpha ^{\prime }}n^{\gamma ^{\prime }}P_{\beta }^{\beta ^{\prime
}}~R_{\delta \beta ^{\prime }\gamma ^{\prime }}^{\alpha ^{\prime }}n^{\delta
} \ .
\label{E-406}
\end{equation}
Again using (\ref{nab-m}) in the Lie derivative of $K_{\alpha \beta }$
to write
\begin{equation}
{\mathcal{L}}_{m}\,K_{\alpha \beta }=Nn^{\gamma }\nabla _{\gamma }K_{\alpha
\beta }-2NK_{\alpha \gamma }K_{\ \,\beta }^{\gamma }-K_{\alpha \gamma
}D^{\gamma }Nn_{\beta }-K_{\beta \gamma }D^{\gamma }Nn_{\alpha }
\label{E-409}
\end{equation}
we combine the last two equations as
\begin{equation}
\frac{1}{N}{\mathcal{L}}_{m}\,K_{\alpha \beta }+\frac{1}{N}D_{\alpha }D_{\beta
}N+K_{\alpha \gamma }K_{\ \,\beta }^{\gamma } = P_{\alpha \alpha ^{\prime
}}\,n^{\delta }P_{\ \,\beta }^{\beta ^{\prime }}\,n^{\gamma }\,\!R_{\ \,\delta
\beta ^{\prime }\gamma }^{\alpha ^{\prime }}
\label{F-proj3}
\end{equation}
to provide an evolution equation for $K_{\alpha \beta }$.
Rewriting (\ref{E-221}) as
\begin{equation}
P_{\alpha \alpha ^{\prime }}n^{\delta ^{\prime }}P_{\ \,\beta }^{\beta
^{\prime }}n^{\gamma \prime }R_{\ \,\delta ^{\prime }\beta ^{\prime }\gamma
^{\prime }}^{\alpha ^{\prime }} = \sigma P_{\ \,\alpha }^{\alpha ^{\prime
}}P_{\ \,\beta }^{\beta ^{\prime }}R_{\ \,\alpha ^{\prime }\beta ^{\prime }}
-\sigma \bar{R}_{\alpha \beta }+KK_{\alpha \beta }-K_{\alpha }^{\delta
}K_{\beta \delta }
\end{equation}
we can put (\ref{F-proj3}) into the form
\begin{equation}
P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime }}R_{\
\,\alpha ^{\prime }\beta ^{\prime }} = \sigma
\frac{1}{N}{\mathcal{L}}_{m}\,K_{\alpha \beta }+\sigma \frac{1}{N} D_{\alpha
}D_{\beta }N+\bar{R}_{\alpha \beta }-\sigma KK_{\alpha \beta }+\sigma 2K_{\alpha
}^{\delta }K_{\beta \delta }
\label{F-ricci_proj}
\end{equation}
in which only $P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\ \,\beta }^{\beta ^{\prime }}R_{\
\,\alpha ^{\prime }\beta ^{\prime }}$ on the LHS refers to the 5D geometry of
$\cm_5$.
\subsection{Decomposition of the Einstein Equation}
The Einstein equations
\begin{equation}
G_{\alpha \beta }=R_{\alpha \beta }-\frac{1}{2}g_{\alpha \beta }R=\frac{8\pi
G}{c^{4}}T_{\alpha \beta }
\label{E-446}
\end{equation}
can be written
\begin{equation}
R_{\alpha \beta }=\frac{8\pi G}{c^{4}}\left( T_{\alpha \beta }-\frac{1}{2}
g_{\alpha \beta }T\right)
\label{E-447}
\end{equation}
where $T=g^{\alpha \beta }T_{\alpha \beta }$.
As above, we decompose the field equations by projecting onto $\Sigma $ and $n$
as
\begin{equation}
T_{\alpha \beta } = T_{\alpha ^{\prime }\beta ^{\prime }}
\left( P_{\alpha }^{\alpha ^{\prime }}+\sigma n^{\alpha ^{\prime }}n_{\alpha }\right) \left(
P_{\beta }^{\beta ^{\prime }}+\sigma n^{\beta ^{\prime }}n_{\beta }\right)
= S_{\alpha \beta }-\sigma \left( n_{\alpha }p_{\beta }+n_{\beta }p_{\alpha
}\right) +n_{\alpha }n_{\beta }\kappa
\end{equation}
where
\begin{equation}
S_{\alpha \beta } = P_{\alpha }^{\alpha ^{\prime }}P_{\beta }^{\beta
^{\prime }}T_{\alpha ^{\prime }\beta ^{\prime }}
\qquad \qquad
p_{\beta } = -n^{\alpha ^{\prime }}P_{\beta }^{\beta ^{\prime }}T_{\alpha
^{\prime }\beta ^{\prime }}
\qquad \qquad
\kappa = n^{\alpha }n^{\beta }T_{\alpha \beta }
\end{equation}
so that $ S_{\mu \nu }$ corresponds to the 4D
energy-momentum tensor $ T_{\mu \nu }$, $ p_{\mu }$ corresponds
to the mass current in the $\mu$ direction $ T_{5\mu }$,
and $ \kappa$ corresponds to the mass density $ T_{55}$.
It is useful to regard mass in this context as being related to the
difference between energy and momentum, a dynamical quantity in the SHP
framework.
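As a quick check on this decomposition, using $n\cdot n=\sigma $,
$n\cdot p=0$, and the tangency of $S_{\alpha \beta }$, the projections
recover the definitions that follow:
\[
n^{\alpha }n^{\beta }T_{\alpha \beta }=\sigma ^{2}\kappa =\kappa
\mbox{\qquad}
n^{\alpha }P_{\beta }^{\beta ^{\prime }}T_{\alpha \beta ^{\prime }}=-\sigma
^{2}p_{\beta }=-p_{\beta }
\mbox{\qquad}
P_{\alpha }^{\alpha ^{\prime }}P_{\beta }^{\beta ^{\prime }}T_{\alpha
^{\prime }\beta ^{\prime }}=S_{\alpha \beta } \ .
\]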
The trace is
\begin{equation}
T = g^{\alpha \beta }T_{\alpha \beta }=g^{\alpha \beta }\left( S_{\alpha
\beta }-2\sigma n_{\alpha }p_{\beta }+n_{\alpha }n_{\beta }\kappa \right)
= S-2\sigma g^{\alpha \beta }n_{\alpha }p_{\beta }+g^{\alpha \beta
}n_{\alpha }n_{\beta }\kappa
= S+\sigma \kappa
\label{E-458}
\end{equation}
where we used
\begin{equation}
g^{\alpha \beta }n_{\alpha }p_{\beta }=n\cdot p=0
\label{E-459}
\end{equation}
which follows from
\begin{equation}
p_{\beta }= -P_{\beta }^{\beta ^{\prime }}\left( n^{\alpha }T_{\alpha \beta
^{\prime }}\right) \in \ct\left( \Sigma \right) \ .
\label{E-460}
\end{equation}
Thus, projecting the field equations (\ref{E-447}) onto $\Sigma$
with $P_{\alpha }^{\alpha ^{\prime
}}P_{\beta }^{\beta ^{\prime }} $ leads to
\begin{equation}
P_{\alpha }^{\alpha ^{\prime }}P_{\beta }^{\beta ^{\prime }}\left( T_{\alpha
^{\prime }\beta ^{\prime }}-\frac{1}{2}g_{\alpha ^{\prime }\beta ^{\prime
}}T\right) =S_{\alpha \beta }-\frac{1}{2}\gamma _{\alpha \beta }\left(
S+\sigma \kappa \right)
\end{equation}
on the RHS, while the LHS is $P_{\ \,\alpha }^{\alpha ^{\prime }}P_{\
\,\beta }^{\beta ^{\prime }}R_{\ \,\alpha ^{\prime }\beta ^{\prime }}$, which
from (\ref{F-ricci_proj}) provides
\begin{equation}
{\mathcal{L}}_{m}\,K_{\mu \nu }=-D_{\mu }D_{\nu }N+N\left\{ -\sigma \bar{R}
_{\mu \nu }+KK_{\mu \nu }-2K_{\mu }^{\lambda }K_{\nu \lambda }+\sigma \frac{
8\pi G}{c^{4}}\left[ S_{\mu \nu }-\frac{1}{2}\gamma _{\mu \nu }\left(
S+\sigma \kappa \right) \right] \right\}
\label{F-evolve}
\end{equation}
as the evolution equation for $K_{\mu \nu }$.
The double projection onto the time direction $n$ is
\begin{equation}
\left( R_{\alpha \beta }-\frac{1}{2}g_{\alpha \beta }R\right) n^{\alpha
}n^{\beta } = \frac{8\pi G}{c^{4}}T_{\alpha \beta }n^{\alpha }n^{\beta }
\qquad \longrightarrow \qquad
R_{\alpha \beta }n^{\alpha }n^{\beta }-\frac{1}{2}\sigma R = \frac{8\pi G}{
c^{4}}\kappa
\label{E-465}
\end{equation}
and using the scalar Gauss relation (\ref{F-scalar_gauss}), we obtain
\begin{equation}
\bar{R}-\sigma \left( K^{2}-K^{\mu \nu }K_{\mu \nu }\right) = -\sigma \frac{
16\pi G}{c^{4}}\kappa
\label{E-471}
\end{equation}
This expression, called the Hamiltonian constraint, contains no $\tau $-derivatives
and so constrains the initial conditions; as shown below, once it is satisfied by
the initial conditions, it remains satisfied at all times.
We observe that this constraint applies to the mass density of
the gravitational field, not the energy density as in 4D GR.
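For example, in vacuum, where $\kappa =0$, the Hamiltonian constraint becomes
the purely geometric condition
\[
\bar{R}=\sigma \left( K^{2}-K^{\mu \nu }K_{\mu \nu }\right)
\]
so that even in the absence of matter the intrinsic and extrinsic curvatures
of the initial hypersurface cannot be specified independently.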
The mixed projection with $P_{\ \,\beta }^{\beta ^{\prime }}n^{\alpha } $
\begin{equation}
n^{\alpha }P_{\ \,\beta }^{\beta ^{\prime }}\left( R_{\alpha \beta ^{\prime
}}-\frac{1}{2}g_{\alpha \beta ^{\prime }}R\right) =n^{\alpha }P_{\ \,\beta
}^{\beta ^{\prime }}\frac{8\pi G}{c^{4}}T_{\alpha \beta ^{\prime }}
\longrightarrow
P_{\ \,\beta }^{\beta ^{\prime }}n^{\alpha }R_{\alpha \beta ^{\prime }}-
\frac{1}{2}g_{\alpha \beta ^{\prime }}n^{\alpha }P_{\ \,\beta }^{\beta
^{\prime }}R = -\frac{8\pi G}{c^{4}}p_{\beta }
\label{E-475}
\end{equation}
is combined with the contraction of the Codazzi relation (\ref{E-239}) and
$g_{\alpha \beta ^{\prime }}n^{\alpha }P_{\ \,\beta }^{\beta ^{\prime
}}=n^{\alpha }P_{\alpha \beta }=0$ to obtain
\begin{equation}
D_{\mu }K_{\nu }^{\mu }-D_{\nu }K = \frac{8\pi G}{c^{4}}p_{\nu }
\label{E-476}
\end{equation}
which is called the momentum constraint, referring to the flow
of mass into the field, and it also has no $\tau $-derivatives.
We notice that the evolution equation contains only objects defined on $\Sigma $
and, thus, includes no factors of $\Gamma _{\mu \nu }^{5}$.
Any such factors can only appear in the constraint equations.
Writing the Einstein tensor as
\begin{equation}
G_{\alpha \beta } = R_{\alpha \beta }-\frac{1}{2}g_{\alpha \beta }R
\end{equation}
the Bianchi relations are
\begin{equation}
\nabla _{\alpha }G^{\alpha \beta }=\partial _{\alpha }G^{\alpha \beta }+
\text{Christoffel Symbols}\times G^{\alpha \beta }=0
\label{E-477}
\end{equation}
forming a set of relations among the field entities.
These five relations in 5D reduce the number of independent components of
$G^{\alpha \beta }$ from fifteen to ten, and are understood as constraints on the
initial conditions of the evolution equations.
Because the Einstein equations are second order in $\tau$ derivatives of the
metric, a solution requires that the initial conditions include $\gamma_{\mu \nu
}$ and $\partial_{\tau }\gamma_{\mu \nu }$ at the initial time.
Expanding $\partial _{\alpha }G^{\alpha \beta } = \partial_{\mu }G^{\mu
\beta } + \partial_{5 }G^{5 \beta }$ to rewrite the Bianchi relations as
\begin{equation}
\frac{1}{c_5}\partial _{\tau }G^{5\beta }=-\partial _{\mu }G^{\mu \beta }-\text{
Christoffel Symbols}\times G^{\alpha \beta }
\label{E-478}
\end{equation}
and noticing that the LHS cannot be more than second order in $\partial
_{\tau }$, we see that $G^{5\beta }$ cannot be more than first
order in $\partial _{\tau }$.
Therefore, the expressions contained in $G^{5\beta }$ must be part of the initial conditions.
The Einstein equations
\begin{equation}
G^{\alpha \beta }=8\pi G_{N}T^{\alpha \beta }
\label{E-479}
\end{equation}
thus split into components
\begin{eqnarray}
G^{\mu \beta } \eq 0 \ \longrightarrow \ \text{ten equations of second order in
} \partial_\tau
\label{E-480} \\
G^{5\beta } \eq 0 \ \longrightarrow \ \text{five relations among the
initial conditions of first order } \partial_\tau \ .
\label{E-481}
\end{eqnarray}
Moreover, the constraints are propagated to future times because
\begin{equation}
G^{5\beta }|_{\tau _{0}}=0\Rightarrow \partial _{\beta }G^{5\beta }|_{\tau
_{0}}=0\Rightarrow \partial _{\tau }G^{5\beta }|_{\tau _{0}}=0
\label{E-482}
\end{equation}
and so they do not change.
\subsection{Summary of Einstein System as Differential Equations}
The decomposition of the Einstein equations into a 4+1 system of partial
differential equations permits particular structures to be solved as an initial
value problem.
The initial conditions that are to be specified at some $\tau$ are the metric
$\gamma_{\mu \nu }$ and its first Lie derivative $ K_{\mu \nu }$, along with the
mass-energy-momentum distribution of matter as represented by $S_{\mu \nu } $,
$p_{\nu }$, and $\kappa $.
The initial conditions must satisfy the Hamiltonian constraint,
\textcolor{dRed}{a constraint on mass rather than energy,}
\begin{equation}
\bar{R}-\sigma \left( K^{2}-K^{\mu \nu }K_{\mu \nu }\right) = -\sigma \frac{
16\pi G}{c^{4}}\kappa
\label{einstein-3}
\end{equation}
and the momentum constraint
\begin{equation}
D_{\mu }K_{\nu }^{\mu }-D_{\nu }K = \frac{8\pi G}{c^{4}}p_{\nu }
\label{einstein-4}
\end{equation}
at the initial time.
Because these constraints contain no $\tau$ derivatives, they are guaranteed to
be satisfied at subsequent times.
Given appropriate initial conditions, the metric is found at subsequent times by
integrating forward---analytically or numerically---the coupled evolution equations
\begin{equation}
\frac{1}{c_5}{\mathcal{L}}_\tau\,\gamma _{\mu \nu } -
{\mathcal{L}}_{{\mathbf N}}\,\gamma _{\mu \nu } = -2NK_{\mu \nu }
\label{einstein-1}
\end{equation}
and
\begin{eqnarray}
\left( \frac{1}{c_5}{\mathcal{L}}_\tau - {\mathcal{L}}_{{\mathbf N}} \right)K_{\mu \nu }
\eq -D_{\mu }D_{\nu }N \vspace{-4pt} \notag \\ && \hspace{-44pt} +N\left\{ -\sigma \bar{R}
_{\mu \nu }+KK_{\mu \nu }-2K_{\mu }^{\lambda }K_{\nu \lambda }+\sigma \frac{
8\pi G}{c^{4}}\left[ S_{\mu \nu }-\frac{1}{2}\gamma _{\mu \nu }\left(
S+\sigma \kappa \right) \right] \right\}
\label{einstein-2}
\end{eqnarray}
We note that the lapse $N$ and shift $N^\mu$ are not dynamical variables,
but they are part of the metric specified at the initial time.
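To make the structure of this initial value problem concrete, the following Python
sketch integrates the two evolution equations at a single point in the simplest
conceivable setting: vacuum, $N=1$, $N^{\mu }=0$, and homogeneity, so that
$\bar{R}_{\mu \nu }=0$ and $D_{\mu }D_{\nu }N=0$ (a Kasner-like toy, not a solver
for the full system; all names and parameter values are illustrative).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c5 = 1.0  # illustrative value of the dimensional constant c_5

def rhs(tau, y):
    # unpack the 4x4 symmetric matrices gamma_{mu nu} and K_{mu nu}
    gam = y[:16].reshape(4, 4)
    K = y[16:].reshape(4, 4)
    gam_inv = np.linalg.inv(gam)
    K_mixed = gam_inv @ K                  # K^mu_nu
    trK = np.trace(K_mixed)                # K
    # (1/c5) d_tau gamma = -2 N K    with N = 1, N^mu = 0
    dgam = -2.0 * c5 * K
    # (1/c5) d_tau K = N (K K_{mu nu} - 2 K_mu^l K_{nu l}), vacuum, Rbar = 0
    dK = c5 * (trK * K - 2.0 * K @ gam_inv @ K)
    return np.concatenate([dgam.ravel(), dK.ravel()])

gamma0 = np.diag([-1.0, 1.0, 1.0, 1.0])
# initial K chosen so that the vacuum Hamiltonian constraint K^2 = K.K holds
K0 = np.diag([0.0, 0.01, -0.02, -0.02])
y0 = np.concatenate([gamma0.ravel(), K0.ravel()])
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)

gam = sol.y[:16, -1].reshape(4, 4)
K = sol.y[16:, -1].reshape(4, 4)
Km = np.linalg.inv(gam) @ K
ham = np.trace(Km)**2 - np.trace(Km @ Km)  # K^2 - K^{mn}K_{mn}, should stay 0
print("gamma(tau_f) diagonal:", np.round(np.diag(gam), 6))
print("Hamiltonian constraint violation:", ham)
\end{verbatim}
The initial data are chosen to satisfy the vacuum Hamiltonian constraint
$K^{2}=K^{\mu \nu }K_{\mu \nu }$, and the printed violation monitors its
preservation under the evolution, in line with (\ref{E-482}).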
\section{The ADM Hamiltonian Formulation}
\label{ADM}
\textcolor{dRed}{The configuration space variable in the ADM
formalism is
$g _{\alpha \beta }=g _{\alpha \beta }\left( \gamma _{\mu \nu
},N^{\mu },N\right)$,
the full metric on $\cm_5$.
Because $N$ and $N^\mu$ are not dynamical, the phase space consists of $\gamma _{\mu \nu }$ and
$\dot \gamma _{\mu \nu }$, as given by}
\begin{equation}
\dot \gamma _{\mu \nu } = \frac{1}{c_5}{\mathcal{L}}_\tau\,\gamma _{\mu \nu } =
{\mathcal{L}}_{{\mathbf N}}\,\gamma _{\mu \nu }+2NK_{\mu \nu }
\label{einstein-1a}
\end{equation}
where we use (\ref{einstein-1}) with the sign convention for $K_{\mu \nu }$ reversed.
Contracting on $\alpha $ and $\beta $ in (\ref{F-ricci_proj}) and combining with
the scalar Gauss relation (\ref{F-scalar_gauss}), we obtain
\begin{equation}
R=\bar{R}-\sigma \left( K^{2}-K^{\alpha \beta }K_{\alpha \beta }\right)
+2\sigma \nabla _{\alpha }\left( n^{\beta }\nabla _{\beta }n^{\alpha
}-n^{\alpha }\nabla _{\beta }n^{\beta }\right)
\label{E-519}
\end{equation}
so that the Einstein-Hilbert action for GR in the absence of matter can be expanded as
\begin{equation}
S_{\textcolor{dRed}{ADM}}\left[ \gamma _{\mu \nu },\dot{\gamma}_{\mu \nu },N^{\mu },N\right]
=\int d\tau d^{4}x~\sqrt{g}\,R
=\int d\tau d^{4}x~\sqrt{\gamma }N\left[ \bar{R}-\sigma \left( K^{\mu \nu }K_{\mu
\nu }-K^{2}\right) \right]
\label{E-523}
\end{equation}
where $g = \left\vert \det g_{\alpha \beta} \right\vert$ and
$\gamma = \left\vert \det \gamma_{\mu \nu } \right\vert$
and the total gradient in (\ref{E-519}) is discarded as a boundary term.
The DeWitt metric is defined as
\begin{equation}
G^{\mu \nu \lambda \rho }=\frac{1}{2}\left( \gamma ^{\mu \lambda }\gamma
^{\nu \rho }+\gamma ^{\mu \rho }\gamma ^{\nu \lambda }-2\gamma ^{\mu \nu
}\gamma ^{\lambda \rho }\right)
\label{E-524}
\end{equation}
with inverse (in $D$ hypersurface dimensions; here $D=4$)
\begin{equation}
G_{\mu \nu \lambda \rho } = \frac{1}{2}\left( \gamma
_{\mu \lambda }\gamma _{\nu \rho }+\gamma _{\mu \rho }\gamma
_{\nu \lambda }-\frac{2}{D-1}\gamma _{\mu \nu }\gamma _{\lambda \rho
}\right)
\end{equation}
in terms of which
\begin{equation}
G^{\mu \nu \lambda \rho }K_{\mu \nu }K_{\lambda \rho } = \frac{1}{2}\left(
\gamma ^{\mu \lambda }\gamma ^{\nu \rho }+\gamma ^{\mu \rho }\gamma ^{\nu
\lambda }-2\gamma ^{\mu \nu }\gamma ^{\lambda \rho }\right) K_{\mu \nu
}K_{\lambda \rho }
= K^{\mu \nu }K_{\mu \nu }-K^{2}
\label{E-527}
\end{equation}
so that
\begin{equation}
\mathcal{L}_{\textcolor{dRed}{ADM}}\left[ \gamma _{\mu \nu },\dot{\gamma}_{\mu \nu },N^{\mu },N
\right] =\sqrt{\gamma }N\left[ -\sigma G^{\mu \nu \lambda \rho }K_{\mu \nu
}K_{\lambda \rho }+\bar{R}\right] \ .
\label{E-528}
\end{equation}
Because $K^{\mu \nu }$ is first order in derivatives, the first term has
the form of kinetic energy. The canonical conjugate momentum to
$\gamma _{\mu \nu }$ is
\begin{equation}
\pi ^{\mu \nu }=\frac{\partial \mathcal{L}_{\textcolor{dRed}{ADM}}}{\partial \dot{\gamma}_{\mu
\nu }}=-2\sigma \sqrt{\gamma }NG^{\zeta \kappa \lambda \rho }K_{\lambda \rho
}\frac{\partial K_{\zeta \kappa }}{\partial \dot{\gamma}_{\mu \nu }}
\label{E-529}
\end{equation}
so that using (\ref{einstein-1a}) to obtain
\begin{equation}
\frac{\partial
K_{\zeta \kappa }}{\partial \dot{\gamma}_{\mu \nu }}=\frac{1}{2N}\delta
_{\zeta }^{\mu }\delta _{\kappa }^{\nu }
\label{E-531}
\end{equation}
we find
\begin{equation}
\pi ^{\mu \nu }=-2\sigma \sqrt{\gamma }NG^{\zeta \kappa \lambda \rho
}K_{\lambda \rho }\frac{1}{2N}\delta _{\zeta }^{\mu }\delta _{\kappa }^{\nu
}=-\sigma \sqrt{\gamma }G^{\mu \nu \lambda \rho }K_{\lambda \rho }=-\sigma
\sqrt{\gamma }\left( K^{\mu \nu }-\gamma ^{\mu \nu }K\right)
\label{E-532}
\end{equation}
with trace
\begin{equation}
\pi = \gamma _{\mu \nu }\pi ^{\mu \nu }=\sigma \left( D-1\right) \sqrt{ \gamma }K
\ \ \longrightarrow \ \
K = \frac{\sigma }{\left( D-1\right) \sqrt{\gamma }}\pi \ .
\label{E-534}
\end{equation}
Writing $K^{\mu \nu }$ in terms of $\pi ^{\mu \nu }$
\begin{equation}
K^{\mu \nu } = -\frac{\sigma }{\sqrt{\gamma }}\left( \pi ^{\mu \nu }-\gamma
^{\mu \nu }\frac{1}{\left( D-1\right) }\pi \right)
\label{E-538}
\end{equation}
and lowering the indices of $\pi^{\mu \nu }$
\begin{equation}
G_{\mu \nu \lambda \rho }\pi ^{\lambda \rho } =
\pi _{\mu \nu }-\frac{1}{D-1}\gamma _{\mu \nu }\pi
= -\sigma \sqrt{\gamma }K_{\mu \nu }
\label{E-558}
\end{equation}
we see that $K_{\mu \nu }$ represents the momentum conjugate to $
\gamma _{\mu \nu }$.
Replacing $K_{\mu \nu }$ in (\ref{einstein-1a}), we can write the velocity as
\begin{equation}
\dot{\gamma}_{\mu \nu } =
-\sigma \frac{2N}{\sqrt{\gamma }}G_{\mu \nu \lambda \rho }\pi ^{\lambda
\rho }+{\mathcal{L}}_{{\mathbf N}}\,\gamma _{\mu \nu }
\label{E-564}
\end{equation}
in terms of the momentum and configuration variable.
Because $\bar R$ is independent of the lapse $N$ and shift $N^{\mu }$, the
Lagrangian $\mathcal{L}_{\textcolor{dRed}{ADM}}\left[ \gamma _{\mu \nu },\dot{\gamma}_{\mu \nu
},N^{\mu },N\right] $ contains no derivatives of $N$ and $N^{\mu }$, so these
act as Lagrange multipliers whose conjugate momenta vanish as primary constraints.
Thus
\begin{equation}
p_{N}=\frac{\partial \mathcal{L}_{\textcolor{dRed}{ADM}}}{\partial \dot{N}}=0\mbox{\qquad}
p_{N^{\mu }}=\frac{\partial \mathcal{L}_{\textcolor{dRed}{ADM}}}{\partial \dot{N}^{\mu }}=0
\label{E-565}
\end{equation}
and variation with respect to the lapse and shift produces
\begin{equation}
0
=-\frac{\partial }{
\partial N}\left( \sqrt{\gamma }N\left[ \bar{R}-\sigma G^{\mu \nu \lambda \rho
}K_{\mu \nu }K_{\lambda \rho }\right] \right)
= -\sqrt{\gamma }\bar{R}+\sigma \sqrt{\gamma }G^{\mu \nu \lambda \rho }\frac{
\partial }{\partial N}\left( NK_{\mu \nu }K_{\lambda \rho }\right) \ .
\label{E-567}
\end{equation}
Rewriting (\ref{einstein-1a}) as
\begin{equation}
K_{\mu \nu }=\frac{1}{2N}\left( \dot{\gamma}_{\mu \nu }-D_{\mu }N_{\nu
}+D_{\nu }N_{\mu }\right)
\ \ \longrightarrow \ \ NK_{\mu \nu }K_{\lambda \rho }\sim \frac{1}{N}
\label{E-569}
\end{equation}
the Hamiltonian constraint becomes
\begin{equation}
0 = \sqrt{\gamma }\left[ -\sigma G^{\mu \nu \lambda \rho }K_{\mu \nu
}K_{\lambda \rho }-\bar R\right]
= \sqrt{\gamma }\left[ -\sigma \left( K^{\mu \nu }K_{\mu \nu }-K^{2}\right)
-\bar R\right]
=\mathcal{H}
\label{einstein-3a}
\end{equation}
where we used (\ref{E-527}) and the momentum constraint is
\begin{equation}
0 = -\frac{\partial \mathcal{L}_{\textcolor{dRed}{ADM}}}{\partial N_{\mu }}=
2\sigma \sqrt{\gamma }D_{\nu }\left( G^{\nu \mu \lambda \rho
}K_{\lambda \rho }\right)
= 2\sigma \sqrt{\gamma } \left( D_{\nu} K^{\nu \mu }- D^\mu K \right)
=\mathcal{H}^{\mu } \ .
\label{E-571}
\end{equation}
Comparison with (\ref{einstein-3}) and (\ref{einstein-4}) shows that these
constraints correspond to the non-evolving $G_{\mu 5}$ components of the
Einstein field equations.
Using (\ref{E-532}), we can also write
\begin{equation}
\mathcal{H}^{\nu }=2\sigma \sqrt{\gamma }D_{\mu }\left( G^{\mu \nu
\lambda \rho }K_{\lambda \rho }\right) =\sigma 2D_{\mu }\pi
^{\mu \nu } \ .
\label{E-588}
\end{equation}
The Legendre transformation to the Hamiltonian density is
\vspace{-12pt}
\begin{eqnarray}
\mathcal{H}_{ADM} \eq \pi ^{\mu \nu }\dot{\gamma}_{\mu \nu }-\mathcal{L}_{\textcolor{dRed}{ADM}}
\left[ \gamma _{\mu \nu },\dot{\gamma}_{\mu \nu },N^{\mu },N\right]
\notag \\
\eq -\sigma N\sqrt{\gamma }G^{\mu \nu \lambda \rho }K_{\mu \nu }K_{\lambda
\rho }+2\pi ^{\mu \nu }D_{\mu }N_{\nu }-\sqrt{\gamma }N\bar{R}
\label{E-589}
\end{eqnarray}
where $N,N^{\mu }$ are Lagrange multipliers and do not require kinetic terms.
Integrating by parts and discarding the total gradient provides
\begin{equation}
2\pi ^{\mu \nu }D_{\mu }N_{\nu }=2D_{\mu }\left( \pi ^{\mu \nu }N_{\nu
}\right) -N_{\nu }\left( 2D_{\mu }\pi ^{\mu \nu }\right) =N_{\nu }\mathcal{H}
^{\nu }
\label{E-601}
\end{equation}
and using (\ref{einstein-3a}), we arrive at
\begin{equation}
\mathcal{H}_{\textcolor{dRed}{ADM}} = N\mathcal{H}+N_{\nu }\mathcal{H}^{\nu } \ .
\label{E-606}
\end{equation}
Writing the Hamiltonian in the form
\begin{equation}
\mathcal{H}_{\textcolor{dRed}{ADM}}=\pi ^{\mu \nu }\dot{\gamma}_{\mu \nu }+\dot{N}p_{N}
+\dot{N}^{\mu }p_{N^{\mu }}-\mathcal{L}_{\textcolor{dRed}{ADM}}
\label{E-609}
\end{equation}
the Hamiltonian and momentum constraints $\mathcal{H}=0$ and $\mathcal{H}^{\nu
}=0$ are seen to be
secondary constraints arising from the requirement that the primary constraints
$p_{N}=0$ and $p_{N^{\mu }}=0$
are preserved under time evolution,
\begin{equation}
\dot{p}_{N}=\left\{ p_{N},\mathcal{H}_{\textcolor{dRed}{ADM}}\right\} =0\mbox{\qquad}\dot{p}
_{N^{\mu }}=\left\{ p_{N^{\mu }},\mathcal{H}_{\textcolor{dRed}{ADM}}\right\} =0 \ .
\label{E-610}
\end{equation}
The remaining Einstein equations---the evolution equations $G^{\mu \nu }=0$
--- then follow from
\begin{equation}
\dot{\gamma}_{\mu \nu }=\left\{ \gamma _{\mu \nu },\mathcal{H}_{\textcolor{dRed}{ADM}}\right\}
\mbox{\qquad}\dot{\pi}^{\mu \nu }=\left\{ \pi ^{\mu \nu },\mathcal{H}
_{\textcolor{dRed}{ADM}}\right\}
\label{E-611}
\end{equation}
for the canonical variables
\begin{equation}
\left\{ \gamma _{\mu \nu } ,\pi ^{\lambda \rho }
\right\} =\frac{1}{2}\left( \delta _{\mu }^{\lambda }\delta _{\nu
}^{\rho }+\delta _{\mu }^{\rho }\delta _{\nu }^{\lambda }\right)
\label{E-612}
\end{equation}
The equation for $\dot{\gamma}_{\mu \nu }$ reproduces the definition of $\pi
^{\mu \nu }$, since $\bar{R}$ does not contain $\dot{\gamma}_{\mu \nu }$ and so $
\left\{ \gamma _{\mu \nu },\bar{R}\right\} =0$. The equation for $\dot{\pi}^{\mu
\nu }$ is thus equivalent to $G^{\mu \nu }=0$.
\section{Perturbations to Schwarzschild Geometry}
\label{SG}
\textcolor{dRed}{To get a feel for some simple possibilities in this formalism,
we pose a Schwarzschild-like interval in an empty pseudo-spacetime $\cm_5$}
\begin{equation}
ds^{2}=-c^{2}Bdt^{2}+Adr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta d\phi
^{2}+\sigma N^{2}c_{5}^{2}d\tau ^{2}\mbox{\qquad} \qquad T_{\alpha \beta }=0
\label{E-627}
\end{equation}
where $ N = N(x,\tau) $ and we allow the mass parameter $M = M(\tau)$ in the coefficients
\begin{equation}
B\left( r,\tau \right) = A^{-1}\left( r,\tau \right) =\left( 1-\frac{
\textcolor{dRed}{2}GM\left( \tau \right) }{rc^{2}}\right) \qquad \qquad
N^{2} = N^{2}\left( t,r,\tau \right)
\label{E-629}
\end{equation}
to be \textcolor{dRed}{$\tau$}-dependent. Although the 4D connection and curvature on $\cm$ may
now depend on $\tau$, these~generalizations do not change \textcolor{dRed}{their} structure.
The 4+1 metric is
\begin{equation}
\gamma _{\mu \nu }=\text{diag}\left( -B,A,r^{2},r^{2}\sin ^{2}\theta \right)
\mbox{\qquad}g_{5\mu } = N_\mu =0\mbox{\qquad}g_{55}=\sigma N^{2}
\label{E-631}
\end{equation}
and so the Einstein equations reduce to
\begin{equation}
\partial _{5}\gamma _{\mu \nu }=-2NK_{\mu \nu } \qquad \qquad
\partial _{5}K_{\mu \nu }=-D_{\mu }D_{\nu }N+N\left( -\sigma \bar{R}_{\mu
\nu }+KK_{\mu \nu }-2K_{\mu }^{\lambda }K_{\nu \lambda }\right)
\label{E-645}
\end{equation}
with constraints
\begin{equation}
\bar{R}-\sigma \left( K^{2}-K^{\mu \nu }K_{\mu \nu }\right) = 0
\qquad \qquad \qquad \qquad
D_{\mu }K_{\nu }^{\mu }-D_{\nu }K = 0
\label{E-648}
\end{equation}
\subsection{Constant Mass Source}
Taking $M\left( \tau \right) =\textcolor{dRed}{m}=\text{constant}$, we find
\begin{equation}
\partial _{5}\gamma _{\mu \nu }=0=-2NK_{\mu \nu } \ \longrightarrow \ K_{\mu \nu
}=0 \ \longrightarrow \ \bar{R}_{\mu \nu } = -\sigma \frac{1}{N
}D_{\mu }D_{\nu }N
\label{E-650}
\end{equation}
for the dynamical equations, as expected for a $\tau$-independent 4D Schwarzschild geometry.
The~momentum constraint is trivially satisfied, and the Hamiltonian constraint reduces to
$\bar{R}=0$.
Therefore, we must have
\begin{equation}
\textcolor{dRed}{\bar{R}=\gamma^{\mu \nu }\bar{R}_{\mu \nu }=-\sigma \frac{1}{N}\gamma^{\mu \nu }D_{\mu
}D_{\nu }N=0\longrightarrow \gamma^{\mu \nu }D_{\mu }D_{\nu }N=0 }
\label{E-655}
\end{equation}
meaning that $N$ can be any solution to the source-free 4D wave equation, which
in Schwarzschild geometry is
\begin{equation}
\left[ \partial _{0}^{2}-\frac{B}{r^{2}}\partial _{r}\left( r^{2}B\partial
_{r}\right) \right] N (t,r,\tau) =0
\label{S-wave}
\end{equation}
where $B$ is given in (\ref{E-629}). Writing the Lagrangian for a test
particle as
\begin{equation}
L =\frac{1}{2}M g_{\alpha \beta} \dot x^\alpha \dot x^\beta
=\frac{1}{2}M\left[ -c^{2}B\left( r,\tau \right) \dot{t}^{2}+A\left( r,\tau
\right) \dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\sin ^{2}\theta \dot{\phi}
^{2}+\sigma c_{5}^{2}N^{2}\right]
\end{equation}
the equations of motion are
\begin{eqnarray}
0 \eq \ddot{t}+\frac{\partial _{r}B}{B}\dot{r}\dot{t}+c_{5}
\frac{\partial _{5 }B}{B}\dot{t}+\frac{1}{2}\sigma \frac{c_{5}^{2}}{c^{2}
}\frac{\partial _{t}N^{2}}{B} \\
0 \eq \ddot{r}+\frac{1}{2}\frac{\partial _{r}A}{A}\dot{r}^{2}+\frac{1}{2}c^{2}
\frac{\partial _{r}B}{A}\dot{t}^{2}-\frac{1}{A}r\dot{\theta}^{2}-\frac{1}{A}
r\sin ^{2}\theta \dot{\phi}^{2}+c_{5}\frac{\partial _{5 }A}{A}\dot{r}
-c_{5}^{2}\frac{1}{2}\sigma \frac{\partial _{r}N^{2}}{A} \\
0 \eq r^{2}\ddot{\theta}+2r\dot{r}\dot{\theta}-r^{2}\sin \theta \cos \theta
\dot{\phi}^{2} \\
0 \eq \ddot{\phi}+2\frac{1}{r}\dot{r}\dot{\phi}+2\cot \theta \dot{\theta}\dot{
\phi}
\end{eqnarray}
which are simplified using the rotational symmetry to put $\theta =\pi /2$.
Writing $\partial _{5 }= (1 / c_5) \partial _{\tau }$, these~become
\begin{eqnarray}
0 \eq \ddot{t}+\frac{\partial _{r}B}{B}\dot{r}\dot{t}+
\frac{1}{2}\sigma \frac{
c_{5}^{2}}{c^{2}}\frac{\partial _{t}N^{2}}{B} \label{t} \\
0 \eq \ddot{r}+\frac{1}{2}\frac{\partial _{r}A}{A}\dot{r}^{2}+\frac{1}{2}c^{2}
\frac{\partial _{r}B}{A}\dot{t}^{2}-\frac{1}{A}r\dot{\phi}^{2}-c_{5}^{2}
\frac{1}{2}\sigma \frac{\partial _{r}N^{2}}{A} \label{r} \\
0 \eq \ddot{\phi}+2\frac{1}{r}\dot{r}\dot{\phi}
\end{eqnarray}
where we used $\partial_\tau B = 0$. The angular equation has the standard solution
\begin{equation}
0 = \ddot{\phi}+2\frac{1}{r}\dot{r}\dot{\phi} \ \longrightarrow \
0=\frac{\ddot{ \phi}}{\dot{\phi}}+2\frac{\dot{r}}{r}
\ \longrightarrow \ r^{2}\dot{\phi} = J
\label{J}
\end{equation}
with constant $J$. In the equation for $t$ we recognize
\begin{equation}
\ddot{t}+\frac{\partial _{r}B}{B}\dot{r}\dot{t}
=\frac{1}{B}\left( B\frac{d\dot{t}}{d\tau }+\frac{dB
}{d\tau }\dot{t}\right) =\frac{1}{B}\frac{d\left( \dot{t}B\right) }{d\tau }
\end{equation}
and so (\ref{t}) becomes
\begin{equation}
0 = \frac{d\left( \dot{t}B\right) }{d\tau }+\frac{1}{2}\sigma
\frac{c_{5}^{2}}{c^{2}}\partial _{t}N^{2}
\label{t-2}
\end{equation}
leading to a perturbation in the evolution of the $t$ coordinate, which recovers
the usual relation
\begin{equation}
\dot{t} =\textcolor{dRed}{\left( 1-\frac{ 2Gm }{rc^{2}}\right)^{-1}}
\label{t-dot}
\end{equation}
for $ \partial _{t}N^{2} \rightarrow 0$.
It is convenient to rewrite (\ref{t-2}) as
\begin{equation}
\dot{t} = \frac{1}{B}\left( 1-\sigma \frac{c_{5}^{2}}{c^{2}}\frac{1}{2}
\int^\tau d\tau ~\partial _{t}N^{2}\right) \ .
\label{t-3}
\end{equation}
Using (\ref{J}) and (\ref{t-3}), the radial equation becomes
\begin{equation}
0 = \ddot{r}+\frac{1}{2}\frac{\partial _{r}A}{A}\dot{r}^{2}+\frac{1}{2}c^{2}
\frac{\partial _{r}B}{B}\left( 1-\sigma \frac{c_{5}^{2}}{c^{2}}\frac{1}{2}
\int^\tau d\tau ~\partial _{t}N^{2}\right) ^{2}-\frac{1}{A}\frac{J^{2}}{r^{3}}
-c_{5}^{2}\frac{1}{2}\sigma \frac{\partial _{r}N^{2}}{A}
\end{equation}
which we multiply by $2A\dot{r}$ and use
\begin{eqnarray}
\frac{d}{d\tau }\left( A\dot{r}^{2}\right) \eq \textcolor{dRed}{2A\dot{r}\ddot{r}+
\dot{r}^{3}\partial _{r}A} \\
\frac{d}{d\tau }\left( \frac{J^{2}}{r^{2}}\right) \eq
\textcolor{dRed}{-2\dot{r}\frac{J^{2}}{r^{3}}} \\
\frac{d}{d\tau }\left( -\frac{1}{B}\right) \eq
\textcolor{dRed}{\dot{r}\frac{\partial _{r}B}{B^{2}}} \\
\frac{d}{d\tau }N^{2} \eq \dot{r}\partial _{r}N^{2}+\dot{t}\partial _{t}N^{2}
+\partial_\tau N^{2}
\end{eqnarray}
to obtain
\begin{equation}
0 = \frac{d}{d\tau }\left[ A\dot{r}^{2}-c^{2}\frac{1}{B}+\frac{J^{2}}{r^{2}}
-c_{5}^{2}\sigma N^{2}\right] +\sigma c_{5}^{2}\left[ \left( \frac{d}{d\tau }
\frac{1}{B}\right) \int^\tau d\tau ~\partial _{t}N^{2}+\frac{1}{B}\partial _{t}N^{2}
+\partial_\tau N^{2} \right]
\label{r-2}
\end{equation}
where we dropped terms in $c_{5}^{4}$. Integrating by parts
\begin{equation}
\left( \frac{d}{d\tau }\frac{1}{B}\right) \int^\tau d\tau ~\partial _{t}N^{2}
=\frac{d}{d\tau }\left( \frac{1}{B}
\int^\tau d\tau ~\partial _{t}N^{2}\right) -\frac{1}{B}~\partial _{t}N^{2}
\end{equation}
and so the radial equation (\ref{r-2}) becomes
\begin{equation}
0 = \frac{d}{d\tau }\left[ A\dot{r}^{2}-c^{2}\frac{1}{B}+\frac{J^{2}}{r^{2}}
-c_{5}^{2}\sigma \left( N^{2}-\frac{1}{B}\int^\tau d\tau ~\partial _{t}N^{2}\right)
\right] +c_{5}^{2}\sigma \partial_\tau N^{2} \ .
\end{equation}
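The perturbed equations of motion (\ref{t}), (\ref{r}) and (\ref{J}) are readily
integrated numerically. The following sketch does so for an illustrative
localized pulse in $N^{2}$, taking $\sigma =+1$ and placeholder parameter
values; the pulse is not required here to solve the wave equation
(\ref{S-wave}), and serves only to exhibit the drift of the otherwise conserved
combination $\frac{1}{2}\left( A\dot{r}^{2}-c^{2}/B+J^{2}/r^{2}-\sigma
c_{5}^{2}N^{2}\right) $, identified with $K/M$ just below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c, c5, sigma = 1.0, 0.05, 1.0          # placeholder units and constants
rs = 2.0                                # 2Gm/c^2
J = 4.0                                 # r^2 dphi/dtau

def B(r):  return 1.0 - rs / r
def dB(r): return rs / r**2

# toy pulse modeling N^2(t, r, tau); purely illustrative -- a self-consistent
# perturbation would in addition solve the wave equation (S-wave)
eps, t0, rp, w, tau0 = 0.1, 6.0, 10.0, 5.0, 5.0
def N2(t, r, tau):
    return (eps * np.exp(-((t - t0)**2 + (r - rp)**2) / w**2)
                * np.exp(-(tau - tau0)**2))
def dN2_dt(t, r, tau): return -2.0 * (t - t0) / w**2 * N2(t, r, tau)
def dN2_dr(t, r, tau): return -2.0 * (r - rp) / w**2 * N2(t, r, tau)

def rhs(tau, y):                        # equations (t), (r) with A = 1/B
    t, td, r, rd = y
    b, db = B(r), dB(r)
    A, dA = 1.0 / b, -db / b**2
    tdd = -(db / b) * rd * td - 0.5 * sigma * (c5 / c)**2 * dN2_dt(t, r, tau) / b
    rdd = (-0.5 * (dA / A) * rd**2 - 0.5 * c**2 * (db / A) * td**2
           + J**2 / (A * r**3) + 0.5 * sigma * c5**2 * dN2_dr(t, r, tau) / A)
    return [td, tdd, rd, rdd]

r_init = 10.0
y0 = [0.0, 1.0 / B(r_init), r_init, 0.0]   # t-dot = 1/B, cf. (t-dot)
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

for tau in np.linspace(0.0, 40.0, 9):   # monitor K/M: constant only if N^2 = 0
    t, td, r, rd = sol.sol(tau)
    KoverM = 0.5 * (rd**2 / B(r) - c**2 / B(r) + J**2 / r**2
                    - sigma * c5**2 * N2(t, r, tau))
    print(f"tau = {tau:5.1f}   K/M = {KoverM:+.8f}")
\end{verbatim}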
Using (\ref{Ham-1}) the Hamiltonian in these coordinates is
\begin{equation}
K=\frac{1}{2}Mg_{\mu \nu }\dot{x}^{\mu }\dot{x}^{\nu }-\frac{1}{2}
Mc_{5}^{2}g_{55}=\frac{1}{2}M\left( A\dot{r}^{2}-c^{2}\frac{1}{B}+\frac{J^{2}
}{r^{2}}\right) -\frac{1}{2}Mc_{5}^{2}\sigma N^{2}
\end{equation}
and the radial equation is now
\begin{equation}
0 = \frac{d}{d\tau }\left[\textcolor{dRed}{\frac{2K}{M}}
+c_{5}^{2}\sigma \frac{1}{B}\int^\tau d\tau ~\partial _{t}N^{2}
\right] +c_{5}^{2}\sigma \partial_\tau N^{2}
\label{ham-f}
\end{equation}
showing that the Hamiltonian, and thus the dynamical mass of the test
particle, is not conserved.
If, \textcolor{dRed}{for example}, we consider a very short perturbation, so that
\begin{equation}
N(t,r,\tau) = \alpha (\tau) W(t,r)
\end{equation}
with $\alpha (\tau)$ a narrow distribution centered on $\tau = \tau_0$, then
writing
\begin{equation}
\textcolor{dRed}{\Delta M = \sigma c_{5}^{2}\int d\tau ~\partial _{t}N^{2} \simeq \sigma
c_{5}^{2}\partial_{t}\textcolor{dRed}{W^2}(t(\tau_0),r(\tau_0))\int d\tau \alpha^2 (\tau)}
\end{equation}
we may integrate (\ref{ham-f}) to obtain
\begin{equation}
\textcolor{dRed}{\frac{2K}{M}} + \Delta M \p \textcolor{dRed}{\left( 1-\frac{ 2Gm
}{rc^{2}}\right)^{-1}} \textcolor{dRed}{\simeq - c_{5}^{2}\sigma W^2 (t(\tau_0),r(\tau_0))
\left[ \alpha^2 (\infty) - \alpha^2 (-\infty)\right] }
= \kappa = \text{constant}
\end{equation}
describing a distance-dependent shift in the mass of the test particle.
Thus, while the mass parameter \textcolor{dRed}{$m$} that is associated with a source mass
remains constant, the addition of a $g_{55} $ component to the metric induces
mass transfer in the gravitational field.
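For completeness, the sourceless wave equation (\ref{S-wave}) obeyed by $N$ is
straightforward to evolve on a radial grid outside the horizon. A minimal
finite-difference sketch (evolving in $t$ at fixed $\tau $, with placeholder
grid parameters and crude boundary conditions) is:
\begin{verbatim}
import numpy as np

rs = 2.0                                # 2Gm/c^2, placeholder
r = np.linspace(3.0, 60.0, 1200)        # radial grid outside the horizon
dr = r[1] - r[0]
B = 1.0 - rs / r
dt = 0.25 * dr                          # CFL-safe: effective wave speed B < 1

N = np.exp(-((r - 15.0) / 2.0)**2)      # initial pulse, initially at rest
Np = N.copy()                           # N at the previous time step

def step(N, Np):
    rf = 0.5 * (r[1:] + r[:-1])         # cell faces
    Bf = 1.0 - rs / rf
    F = rf**2 * Bf * (N[1:] - N[:-1]) / dr        # flux r^2 B dN/dr
    rhs = np.zeros_like(N)
    rhs[1:-1] = (B[1:-1] / r[1:-1]**2) * (F[1:] - F[:-1]) / dr
    Nn = 2.0 * N - Np + dt**2 * rhs     # leapfrog for d_t^2 N = rhs
    Nn[0], Nn[-1] = Nn[1], Nn[-2]       # crude boundary conditions
    return Nn, N

for _ in range(2000):
    N, Np = step(N, Np)
print("max |N| after evolution:", float(np.abs(N).max()))
\end{verbatim}
Any static solution of (\ref{S-wave}) is preserved exactly by this scheme,
consistent with the $\partial _{t}N^{2}\rightarrow 0$ limit discussed above.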
\subsection{Variable Mass Source}
As a second example, we put $N=1$ and consider a $\tau$-dependent variation in
the mass parameter $M$ of the metric, as given by
\begin{equation}
M\left( \tau \right) =m\left[ 1+\alpha \left( \tau \right) \right]
\end{equation}
where the perturbation is small and so
\begin{equation}
\alpha ^{2} \ll 1 \ \longrightarrow \
B = A^{-1} = 1 - \Phi_0 \left[ 1+\alpha \left( \tau \right) \right]
\end{equation}
\textcolor{dRed}{where by comparison with (\ref{E-629}) we have $\Phi_0 = 2Gm / rc^2$.}
The 4D connection is now $\tau$-dependent, but retains its unperturbed form
with respect to the coordinates $x^\mu$, so the space remains Ricci flat with $ \ \bar{R} = 0$.
\textcolor{dRed}{We may ask what kind of mass-energy-momentum configuration would
give rise to a Schwarzschild geometry with $\tau$-varying mass parameter, and if
such a configuration can be made consistent with the Hamiltonian and momentum
constraints.}
The dynamical equation for the metric (neglecting terms in $\alpha^2$ and
$\Phi_0^2$) is
\begin{equation}
\partial _{5}\gamma _{\mu \nu } = -2NK_{\mu \nu } \ \longrightarrow \
K_{\mu \nu } = -\dfrac{1}{2c_{5}}\partial _{\tau }\gamma _{\mu \nu }
=-\dfrac{\Phi _{0}\dot{\alpha} \left( \tau \right) }{2c_{5}}\text{diag}\left( 1,\dfrac{1}{B^{2}},0,0\right)
\label{K-eqn}
\end{equation}
and raising one index, we find
\begin{equation}
K_{\nu }^{\mu } = \gamma ^{\mu \lambda }K_{\lambda \nu } = -\dfrac{\Phi
_{0}\dot{\alpha}\left( \tau \right) }{ 2c_{5}\textcolor{dRed}{B}}\text{diag}\left( -1,1,0,0\right)
\ \longrightarrow \ K = K_{\mu }^{\mu } = 0 \ .
\end{equation}
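These expressions can be checked symbolically. The following sketch
(illustrative, using SymPy) differentiates the perturbed metric and confirms
both (\ref{K-eqn}) and the traceless mixed form above; the matrix identities
hold here exactly, without the $\alpha ^{2}$, $\Phi _{0}^{2}$ truncation.
\begin{verbatim}
import sympy as sp

tau, c5, Phi0 = sp.symbols('tau c_5 Phi_0', positive=True)
r, theta = sp.symbols('r theta', positive=True)
alpha = sp.Function('alpha')(tau)

B = 1 - Phi0 * (1 + alpha)
gamma = sp.diag(-B, 1 / B, r**2, r**2 * sp.sin(theta)**2)

# K_{mu nu} = -(1/(2 c5)) d_tau gamma_{mu nu}   (N = 1, N^mu = 0)
K = -gamma.diff(tau) / (2 * c5)
expected = -(Phi0 * alpha.diff(tau) / (2 * c5)) * sp.diag(1, 1 / B**2, 0, 0)
assert sp.simplify(K - expected) == sp.zeros(4, 4)

# mixed components K^mu_nu = gamma^{mu la} K_{la nu} and the vanishing trace
K_mixed = sp.simplify(gamma.inv() * K)
expected_mixed = (-(Phi0 * alpha.diff(tau) / (2 * c5 * B))
                  * sp.diag(-1, 1, 0, 0))
assert sp.simplify(K_mixed - expected_mixed) == sp.zeros(4, 4)
assert sp.simplify(K_mixed.trace()) == 0
print("extrinsic curvature of the perturbed Schwarzschild slice verified")
\end{verbatim}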
Using $ \bar{R} = 0$, $ N = 1 $, $N^\mu = 0$, $ \left( K_{\mu \nu}
\right)^2 \propto \alpha^2 \approx 0 $, along with (\ref{K-eqn}), the evolution
equation for the extrinsic curvature can be written
\begin{equation}
\dfrac{1}{c_{5}}\partial _{\tau }K_{\mu \nu } = -\dfrac{1}{2c_{5}^{2}}\Phi
_{0}\ddot{\alpha}\left( \tau \right) \text{diag} \left(
1,\dfrac{1}{B^{2}},0,0\right) =\sigma \dfrac{8\pi G}{c^{4}} \left[ S_{\mu \nu
}-\dfrac{1}{2}\gamma _{\mu \nu }\left( S+\sigma \kappa \right) \right]
\label{k-ev}
\end{equation}
which can be solved for $\alpha (\tau)$ if the energy-momentum tensor is known.
Because $\bar{R} = K = 0$ and $K^{\mu \nu }K_{\mu \nu } \propto \alpha^2
\Phi_0^2 \approx 0$, we may take the Hamiltonian constraint
\begin{equation}
\bar{R}-\sigma \left( K^{2}-K^{\mu \nu }K_{\mu \nu }\right) = -\sigma \frac{
16\pi G}{c^{4}}\kappa
\end{equation}
as the statement that the mass density $\kappa $ is approximately zero.
Thus, the evolution equation (\ref{k-ev}) for $K_{\mu \nu}$ can be satisfied by
\begin{equation}
S_{00} = \textcolor{dRed}{B^2} S_{11} = \left( -\sigma \dfrac{c_{5}^{2}}{c^{2}}\dfrac{16\pi G}{c^{2}}\right)
^{-1}\Phi _{0} \p \ddot{\alpha}\left( \tau \right) \qquad \qquad
S_{22} = S_{33} = 0 \ \longrightarrow \ S = 0
\end{equation}
\textcolor{dRed}{describing} a $\tau$-dependent energy density $ S_{00}$ and an
energy-momentum $S_{11} $ flowing in the radial direction.
Using the relevant nonzero Christoffel symbols for the Schwarzschild metric
\begin{equation}
\Gamma _{10}^{0}=\frac{1}{2}\frac{\partial _{r}B}{B}\mbox{\qquad}\Gamma
_{00}^{1}=\frac{1}{2}\frac{\partial _{r}B}{A}
\label{E-2004}
\end{equation}
\begin{equation}
\Gamma _{11}^{1}=\frac{1}{2}\frac{\partial _{r}A}{A}=-\frac{1}{2}\frac{
\partial _{r}B}{B}\mbox{\qquad}\Gamma _{22}^{1}=-\frac{1}{A}r\mbox{\qquad}
\Gamma _{33}^{1}=-\frac{1}{A}r\sin ^{2}\theta
\label{E-2005}
\end{equation}
\begin{equation}
\Gamma _{12}^{2}=\frac{1}{r}\mbox{\qquad}\Gamma _{13}^{3}=\frac{1}{r}
\label{E-2006}
\end{equation}
the momentum constraint
\begin{equation}
p_{\nu } = D_{\mu }K_{\nu }^{\mu
}-D_{\nu }K = \partial _{\mu }K_{\nu }^{\mu }+K_{\nu }^{\lambda }\Gamma
_{\lambda \mu }^{\mu }-K_{\lambda }^{\mu }\Gamma _{\nu \mu }^{\lambda }
\end{equation}
has components
\begin{eqnarray*}
p_{0} \eq \partial _{r}K_{0}^{1}+K_{0}^{0}\Gamma _{0\mu }^{\mu }-K_{\lambda
}^{\mu }\Gamma _{0\mu }^{\lambda }= K_{0}^{0}\Gamma _{0\mu }^{\mu }-K_{0}^{0}\Gamma
_{00}^{0}-K_{1}^{1}\Gamma _{01}^{1}=0
\\
p_{1} \eq \partial _{r}K_{1}^{1}+K_{1}^{\lambda }\Gamma _{\lambda \mu }^{\mu
}-K_{\lambda }^{\mu }\Gamma _{1\mu }^{\lambda } = -\frac{1}{2}\frac{1}{c_{5}r}\Phi _{0}\dot{
\alpha}\left( \tau \right) \\
p_{2} \eq p_3 = 0
\end{eqnarray*}
which corresponds to a mass current $p_1$ flowing in the radial direction, driving
the varying mass parameter $M(\tau)$ in the metric.
Ascribing $M(\tau)$ to a $\tau $-dependent mass distribution that produces the
energy density $ S_{00}$, energy-momentum $S_{11} $, \textcolor{dRed}{and mass
current $p_{1}$},
we see once again that a
variation in source mass will be transferred across spacetime by the induced
gravitational field and, in turn, this field will lead to geodesic motion
corresponding to varying mass in a test event.
\section{Discussion}
Stueckelberg--Horwitz--Piron (SHP) theory is a
covariant approach to relativistic dynamics developed to address the problem of
time as it arises in electrodynamics.
In order to account for pair creation/annihilation processes in particular,
and to remove from kinematics any {\em a priori} constraints that may lead to
formal difficulties in describing relativistic interaction in general, SHP poses a theory of
spacetime events $x^\mu $ occurring irreversibly at a chronological time $\tau$.
By working through the implications of gauge theory at the classical and quantum
levels, SHP introduces five $\tau$-dependent electromagnetic potentials that reduce to
Maxwell fields at $\tau$-equilibrium.
The equations of SHP electrodynamics suggest a formal 5D symmetry structure
that must be broken to 4+1 representations of O(3,1) on physical grounds.
The resulting interactions form an integrable system in which event
evolution generates an instantaneous current defined over spacetime at $\tau$,
and, in turn, these currents induce $\tau$-dependent fields that act on other
events at $\tau$.
In this paper, we extend these ideas into general relativity, posing a 5D
pseudo-spacetime coordinatized by $(x^\mu, \tau)$ and possessing a formal 5D
general diffeomorphism symmetry, which must similarly break to 4+1
representations of geometrical and dynamical symmetries.
This approach makes SHP general relativity naturally amenable to an unambiguous
4+1 foliation, permitting a $\tau$-dependent generalization of gravitation
that can be decomposed to a set of 4D curvature and matter distribution
structures that evolve in $\tau$.
We have shown that the 15 Einstein equations in 5D decompose into five constraints on
initial conditions and 10 unconstrained evolution equations for the
gravitational field, equivalent to removing the {\em a priori} constraints from
the 10 Einstein equations in 4D.
\textcolor{dRed}{It is the removal of these constraints that permit mass transfer in SHP
gravitation, just as the absence of a mass-shell constraint permits the exchange
of particles and fields in SHP electrodynamics.}
We~completed the transformation of this system to
an ADM-like canonical system, although computation is
generally simpler in the system defined by the intrinsic and
extrinsic curvatures.
In analogy to SHP electrodynamics, the resulting formulation of general relativity
describes an instantaneous distribution of mass and energy at $\tau$ expressed
through $T_{\alpha\beta} (x, \tau)$, inducing a local metric $g_{\alpha\beta}
(x, \tau)$, which, in turn, determines geodesic equations of motion for any
particular event at~$x^\mu (\tau)$.
As a simple first example of this method, we obtained a nonrelativistic
generalization of Newtonian gravitation in the weak field approximation, by
considering a $\tau$-dependent massive source.
We saw that the non-constant source mass induces a $\tau$-dependent metric, that, in
turn, leads to geodesic motion for a test event associated with
non-conservation of the Hamiltonian function and, thus, mass variation.
We then considered two generalizations of the Schwarzschild solution. In the
first, we introduced a non-trivial metric component $g_{55}$ and saw that it
must satisfy a 4D sourceless wave equation. This generalized plane wave
similarly has the effect of inducing a mass shift in a test event.
In a second generalization, we treated the mass parameter in the standard
components of the Schwarzschild metric as $\tau$-dependent and solved for the
matter distribution that would produce this perturbation.
We found that the mass density of the matter distribution effectively vanishes,
while the energy density and momentum density into the radial direction drive
the variation of $M$.
These~mass effects may be compared to Equation (\ref{L-2}), in which we saw that
the SHP electromagnetic field component $f_{5 \alpha } $ permits the exchange of
mass between particles and fields, and is, thus, the condition for
non-conservation of proper time.
The first term of
\begin{equation}
f_{5 \alpha} = \partial_5 a_\alpha - \partial_\alpha a_5
\end{equation}
induces mass exchange through $\tau$-dependence of the electromagnetic field
$a_\alpha$, in analogy with the $\tau$-dependent gravitational field
$\gamma_{\mu\nu} (x,\tau)$ seen in (\ref{pert-1}) and (\ref{E-629}).
The second term induces mass exchange through a non-trivial fifth field
component $a_5$, in analogy with the $g_{55}$ metric component used in (\ref{S-wave}).
Beyond the theoretical interest in modified gravity, the
4+1 formalism in SHP general relativity offers a potentially significant tool
for the calculation of complex dynamics in numerical relativity.
For~example, the weak field approximation for a single source
event given in Section \ref{weak} may be extended by introducing a second source moving at
nonrelativistic velocity toward the first.
\textcolor{dRed}{Although the equations of motion for a test particle in the resulting field are
not amenable to closed-form solution, a straightforward numerical
solution will take account of the nonlinear evolution of the particle's
coordinate time $t$. By comparison, a solution using $x^0 = ct$ as the evolution
parameter for the particles and fields will necessarily involve significantly
more computational complexity.}
\textcolor{dRed}{It has been shown \cite{speeds} that Maxwell theory emerges from
SHP electrodynamics by taking $c_5 \rightarrow 0$ and, in this sense, can be seen
as an equilibrium limit as $\tau$-evolution of the field reaches a steady state.
Alternatively, this limit is obtained, under appropriate boundary conditions,
by integration \cite{saad} of the fields and currents
over $\tau$, with the effect of summing at each spacetime point the
contributions from all values of $\tau$.
At equilibrium, the electromagnetic fields become $\tau$-independent and the
fields associated with the potential $a_5$ decouple from matter, so that, while
proper time remains unconstrained, it behaves as a classical conserved quantity.
Thus, by restricting his electromagnetic formalism to the $\tau$-independent
four-vector potential $A^\mu(x)$, Fock remained within standard Maxwell theory.
This~restriction was also applied in the formulations of quantum electrodynamics (QED) by
Schwinger \cite{Schwinger} and Feynman \cite{Feynman-1}, leading to fixed masses
for asymptotic particle states.}
\textcolor{dRed}{As we saw in Equation (\ref{E-480}) the Einstein tensor in SHP
can be split into ten unconstrained components and five constraints among
initial conditions.
It remains to be shown that the unconstrained components correspond to the
ten components of the Einstein tensor in 4D, and the five constraints
permit mass exchange but conserve the total mass of matter and fields.
We further expect that, as in electrodynamics, an appropriate restriction in SHP
GR will lead to a decoupling of field components and the conservation of four
of the ten components of the Einstein tensor.
This restriction should produce a $\tau$-parameterized formulation of standard
4D GR, analogous to Fock's proper-time formulation of Maxwell theory.
While it is evident from (\ref{EL-2}) that taking $c_5$ to zero recovers
the standard spacetime geodesic Equation (\ref{eq-m}), the nonlinearity of the
Einstein field equations makes the problem of extracting a $\tau$-parameterized
field theory considerably more difficult.
These aspects of the theory will be reported in future work.
Such a theory would }
be especially \textcolor{dRed}{useful} in cases of strong gravitational fields.
We thus expect that calculations of black hole collisions and radiation from
stellar collapse may be improved by posing the initial value problem with
respect to the external evolution parameter $\tau$.
Numerical calculations of this type are beyond the scope of this paper and
they will be discussed in future~work.
\section{Acknowledgment}It is my pleasure to dedicate this paper to Lawrence P.
Horwitz---teacher, collaborator, and~inspiration---on the occasion of his
90th birthday.
\section*{References}
\bibliographystyle{iopart-num}
\section{Comments on misalignments sub-samples}
\label{sec:subsamples}
As explained in Section~\ref{sec:misalignments}, all 20$k$ samples of
each scenario considered have been produced as 10 sub-samples of 2$k$ events,
each of which uses a different one of the 10 sets of the particular
misalignment scenario.
This procedure was chosen to avoid any potentially ``friendly'' or
``catastrophic'' set of misalignments.
Figures~\ref{fig:pr_0sigma},~\ref{fig:pr_velo},~\ref{fig:pr_ot}
and~\ref{fig:pr_all} exemplify the distributions (over the 10 sets) of
pattern recognition efficiencies for the \texttt{Matching} and the \texttt{Forward}
algorithms respectively for the 0$\sigma$ case and for the $5\sigma$
misalignments cases of the VELO, the T-stations, and the VELO and
T-stations.
Pattern recognition efficiency variations of the order of a few percent are
observed on a per sub-sample basis.
Furthermore, regarding the measurement of resolutions it can be argued that when averaging over 10 samples of different misalignments one measures not only the average resolution but also a contribution coming from the spread of a potential bias in the single samples.
To assess the size of this effect the resolutions and biases have been measured on each of the 10 samples.
Thereafter, the average resolution was compared to the spread of the observed biases by taking a ratio of these quantities.
In the ideal case of a negligible bias this ratio would take values much larger than 1.
Over all the resolutions under study this quantity varied between 14 and 36 in the $0\sigma$ case and between 3 and 15 in the $5\sigma$ case.
This means that even in the worst case the contribution of the variation in the bias is a factor 3 smaller than the average resolution.
When combining these numbers, by naively adding them in quadrature, one arrives at the conclusion that a variation in the bias of the single misalignment samples should account for at most $10\%$ of the measured resolution.
Hence the method of averaging over 10 samples of different misalignments is valid and does not lead to significantly wrong results.
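A short calculation makes the quadrature estimate explicit; the following
sketch (illustrative numbers only) propagates the worst observed ratio of 3 and
shows that the bias spread accounts for roughly 10\% of the measured variance.
\begin{verbatim}
# worst case observed in the 5 sigma samples: resolution = 3 x bias spread
ratio = 3.0
sigma_meas = 1.0                    # measured width, arbitrary units
sigma_bias = sigma_meas / ratio     # spread of the per-sample biases

# fraction of the measured variance attributable to the bias spread
frac = sigma_bias**2 / (sigma_bias**2 + sigma_meas**2)
print(f"bias contribution to the variance: {100 * frac:.0f}%")   # ~10%
\end{verbatim}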
\section{Introduction and motivation}
\label{sec:int}
An accurate and efficient tracking system is of crucial importance to
the success of the LHCb experiment \cite{LHCb,optimTDR}.
Its alignment must therefore be well understood, as
misalignments potentially cause losses in both tracking and physics
performance.
The importance of the alignment of the tracking detector planes in track
reconstruction can be schematically demonstrated
with the simple example shown in Figure~\ref{fig:event}.
Here, a particle passes through a misaligned detector (left panel), but is
fitted assuming the uncorrected geometry (right panel).
As a consequence, wrong hit positions are assigned to the track and the
tracking performance deteriorates.
\begin{figure}[htbp]
\vspace*{0.3cm}
\begin{center}
\scalebox{0.6}{\includegraphics{eps/res2.eps}}
\end{center}
\caption{The basic alignment problem:
a particle passes through a misaligned detector (left) but is
fitted using the uncorrected geometry (right).
Both the ``true'' track (full line) and the ``mis-reconstructed''
track (dashed line) are visible.}
\label{fig:event}
\end{figure}
First studies of the deterioration of the LHCb tracking and (software) trigger
performance due to residual misalignments in the Vertex Locator (VELO) were
discussed in~\cite{petrie}~\footnote{Note that these studies relate to
a rather old and obsolete version of the trigger.}.
A study of the consequences of a misaligned Outer Tracker on the
signal and background separation of $B^0_{(s)} \to h^+h^{'-}$ decays
can be found in~\cite{jacopothesis}.
Here, the effects of misalignments of both the
Vertex Locator and the tracking T-stations on the analysis of the
$B^0_{(s)} \to h^+h^{'-}$ decays are investigated in detail.
Not only the effects on the pattern recognition, but also on the
event selection and reconstruction performance are described.
It is important to emphasise that the systematic study presented in this
note aims at assessing the effect of misalignments purely based on their size.
The next section details the implementation of misalignments and the
data samples used for the study. Section~\ref{sec:b2hh} presents the
impact of misalignments on the analysis of $B^0_{(s)} \to h^+h^{'-}$ decays separately for
several ``classes'' of misalignments of the tracking detectors VELO,
Inner Tracker (IT) and Outer Tracker (OT).
The conclusion and areas of future work are discussed in Section~\ref{sec:conclusions}.
\section{Implementation of misalignments}
\label{sec:misalignments}
\subsection{Misalignment scales}
\label{sec:misalignments_scales}
As mentioned in the previous section, here the effects
of misalignments based solely on their magnitude are considered.
In particular, no assumptions are made based on the quality of the metrology
or the expected performance of the alignment algorithms.
The misalignment effects are looked at as a function of a
``misalignment scale''.
The scales were chosen to be roughly $1/3$ of the detector single-hit
resolution -- called ``1$\sigma$''.
Misalignments were then applied to each VELO module and sensor,
each IT box and OT layer following a Gaussian distribution with a sigma
corresponding to the 1$\sigma$ values.
The list of these misalignment 1$\sigma$ scales is summarised in
Table~\ref{tab:scales}.
Figure~\ref{fig:db} graphically confirms the Gaussian distributions
with some examples.
\begin{table}
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
Detector & \multicolumn{3}{|c||}{Translations ($\mathrm{\mu m}$)} & \multicolumn{3}{|c|}{Rotations ($\mathrm{mrad}$)}\\
& $\Delta_x$ & $\Delta_y$ & $\Delta_z$ & $R_x$ & $R_y$ & $R_z$\\
\hline\hline
VELO modules & 3 & 3 & 10 & 1.00 & 1.00 & 0.20 \\
VELO sensors & 3 & 3 & 10 & 1.00 & 1.00 & 0.20 \\
IT boxes & 15 & 15 & 50 & 0.10 & 0.10 & 0.10 \\
OT layers & 50 & 0 & 100 & 0.05 & 0.05 & 0.05 \\
\hline
\end{tabular}
\caption{Misalignment ``1$\sigma$'' scales for the VELO modules and sensors, the IT boxes and OT layers.}
\label{tab:scales}
\end{center}
\vspace{0.5cm}
\end{table}
For each sub-detector 10 sets of such 1$\sigma$ misalignments were generated,
to avoid any potentially ``friendly'' or ``catastrophic'' set of misalignments.
Likewise, this procedure was repeated with the creation of 10 similar sets
for each VELO module and sensor and each IT box and OT layer with
misalignment scales increased by factors of 3 (3$\sigma$) and 5 (5$\sigma$).
Each of these 10 misalignment sets were implemented and stored in dedicated
(conditions) databases.
In total 9 databases were produced, corresponding to the 1$\sigma$, 3$\sigma$
and 5$\sigma$ misalignments for the VELO, IT and OT detectors.
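As an illustration of the procedure (the actual conditions databases are
produced within the LHCb software framework; the structure below is purely
schematic), one such set of Gaussian misalignments can be drawn from the scales
of Table~\ref{tab:scales} as follows.
\begin{verbatim}
import numpy as np

# "1 sigma" scales (cf. Table tab:scales): translations in mm, rotations in mrad
SCALES = {
    "VELO_module": ((0.003, 0.003, 0.010), (1.00, 1.00, 0.20)),
    "VELO_sensor": ((0.003, 0.003, 0.010), (1.00, 1.00, 0.20)),
    "IT_box":      ((0.015, 0.015, 0.050), (0.10, 0.10, 0.10)),
    "OT_layer":    ((0.050, 0.000, 0.100), (0.05, 0.05, 0.05)),
}

def misalignment_set(scale_factor=1, seed=None):
    """Draw one Gaussian misalignment set (scale_factor = 1, 3 or 5)."""
    rng = np.random.default_rng(seed)
    return {element: {
                "translation_mm": rng.normal(0.0, scale_factor * np.array(t)),
                "rotation_mrad":  rng.normal(0.0, scale_factor * np.array(rot)),
            } for element, (t, rot) in SCALES.items()}

# 10 statistically independent sets per scenario, one per sub-sample
sets_5sigma = [misalignment_set(scale_factor=5, seed=s) for s in range(10)]
print(sets_5sigma[0]["OT_layer"])
\end{verbatim}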
\subsection{Data samples}
\label{sec:misalignments_samples}
The study was performed with a 20$k$ sample of $B^0 \to \pi^{+}\pi^{-}$\hspace{1.0mm}
events~\footnote{For the sake of simplicity only one of the $B^0_{(s)} \to h^+h^{'-}$ family
of decays was considered, as their different final states and
$B$-mother are not relevant in the present study.} for each scenario:
\begin{itemize}
\item perfect alignment (denoted 0$\sigma$ in the rest of the note);
\item 1$\sigma$, 3$\sigma$ and 5$\sigma$ misalignments for the
following cases:\\
VELO misalignments, IT and OT misalignments, and misalignments of
VELO, IT and OT.
\end{itemize}
Each 20$k$ sample consists in reality of 10 sub-samples of 2$k$ events,
each of which was processed with a different one of the 10 sets of a
particular misalignment scenario.
In addition, the effects of a systematic change in the VELO $z$-scale
have also been studied.
\subsection{Event processing}
\label{sec:misalignments_processing}
All the events were generated and digitized with a perfect geometry
(\texttt{Gauss} generation program version v25r8 and
\texttt{Boole} digitization program version v12r10).
Starting always from the same digitized data samples,
the misalignments were only introduced at reconstruction level, where pattern
recognition, track fitting, primary vertexing and particle identification
are performed. The version \texttt{v32r2} of the \texttt{Brunel}
reconstruction software was used for this task.
Physics analysis was later performed with the \texttt{DaVinci} program
version \texttt{v19r9}.
\section{Impact of misalignments on the analysis\\of $B^0_{(s)} \to h^+h^{'-}$ decays}
\label{sec:b2hh}
In this section the direct effect of misalignments on the selection of the
$B^0_{(s)} \to h^+h^{'-}$ decays (as extensively described
in~\cite{dc04b2hh_selection}) is analysed.
First, the misalignments of the VELO are considered
(in Section~\ref{sec:b2hh_velo}), then the ones of the IT and OT tracking
stations (in Section~\ref{sec:b2hh_t}). Next, the combined effects
of both the VELO and the tracking stations are considered
(in Section~\ref{sec:b2hh_all}).
The last subsection, \ref{sec:b2hh_z-scale}, is devoted to the analysis
of effects of $z$-scaling of the VELO detector.
In all four studies, first the effect of misalignments on the pattern
recognition algorithms efficiencies is described, subsequently the effect
on the event selection is considered and every single cut is described
in detail.
Finally, the variations of momentum and $B^0$ mass, vertex and proper time
resolutions are shown as a function of the misalignments.
In Table~\ref{tab:shh} the selection cuts applied to the $B^0_{(s)} \to h^+h^{'-}$
channels are shown. A more detailed explanation of all cuts can be
found in~\cite{dc04b2hh_selection}.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|l|c|}
\hline
$B^0_{(s)} \to h^+h^{'-}$ selection parameter & cut value \\
\hline \hline
smallest $p_t$($\mathrm{GeV}$) of the daughters & $>$ 1.0\\
largest $p_t$($\mathrm{GeV}$) of the daughters & $>$ 3.0 \\
$B^0_{(s)}$ $p_t$($\mathrm{GeV}$) & $>$ 1.2\\
\hline
smallest $IP/\sigma_{IP}$ of the daughters & $>$ 6 \\
largest $IP/\sigma_{IP}$ of the daughters & $>$ 12\\
$B^0_{(s)}$ $IP/\sigma_{IP}$ & $<$ 2.5\\
\hline
$B^0_{(s)}$ vertex fit $\chi^2$ & $<$ 5\\
$L/\sigma_L$ & $>$ 18\\
$|\Delta m|$ ($\mathrm{MeV}$) & $<$ 50\\
\hline
\end{tabular}
\caption{Selection cuts applied to the $B^0_{(s)} \to h^+h^{'-}$ channels.}
\label{tab:shh}
\end{center}
\vspace{0.5cm}
\end{table}
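Schematically, the selection is a conjunction of boolean requirements on the
candidate variables. The following sketch illustrates the logic of
Table~\ref{tab:shh} on toy arrays of candidates; the column names and toy
distributions are illustrative and do not correspond to actual \texttt{DaVinci}
variables.
\begin{verbatim}
import numpy as np

def select_b2hh(c):
    """Boolean mask implementing the cuts of Table tab:shh."""
    pt_min  = np.minimum(c["pt_h1"], c["pt_h2"])    # smallest daughter pT (GeV)
    pt_max  = np.maximum(c["pt_h1"], c["pt_h2"])    # largest daughter pT (GeV)
    ips_min = np.minimum(c["ips_h1"], c["ips_h2"])  # smallest IP/sigma_IP
    ips_max = np.maximum(c["ips_h1"], c["ips_h2"])
    return ((pt_min > 1.0) & (pt_max > 3.0) & (c["pt_B"] > 1.2)
            & (ips_min > 6.0) & (ips_max > 12.0) & (c["ips_B"] < 2.5)
            & (c["vtx_chi2"] < 5.0) & (c["fd_signif"] > 18.0)
            & (np.abs(c["delta_m_MeV"]) < 50.0))

rng = np.random.default_rng(1)
n = 1000                                            # toy candidates
cand = {
    "pt_h1": rng.exponential(2.0, n),  "pt_h2": rng.exponential(2.0, n),
    "pt_B":  rng.exponential(3.0, n),
    "ips_h1": rng.exponential(10.0, n), "ips_h2": rng.exponential(10.0, n),
    "ips_B": rng.exponential(2.0, n),
    "vtx_chi2": rng.chisquare(3, n),   "fd_signif": rng.exponential(25.0, n),
    "delta_m_MeV": rng.normal(0.0, 40.0, n),
}
mask = select_b2hh(cand)
print(f"selected {mask.sum()} / {n} toy candidates")
\end{verbatim}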
\input lhcbnote_text_VELO.tex
\input lhcbnote_text_T.tex
\input lhcbnote_text_VELO_and_T.tex
\input lhcbnote_text_VELO_zScale.tex
\clearpage
\section{Conclusions}
\label{sec:conclusions}
In this note an extensive study of the effects of misalignments of the
tracking stations on the analysis of B decays has been presented,
illustrated by the example channel of $B^0_{(s)} \to h^+h^{'-}$.
The effects of the VELO and of the downstream tracking
stations IT and OT are rather decoupled. A summary for various quantities
related to the $B^0$ candidates is given in Table~\ref{tab:summary}.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|l||c|c|}
\hline
& Affected by & Affected by\\
& VELO misalignments & T misalignments\\
\hline\hline
$B^0$ daughters momentum & no & yes\\
$B^0$ mass & no & yes\\
$B^0$ vertex & yes & no\\
$B^0$ impact parameter & yes & no\\
$B^0$ proper time & yes & no\\
\hline
\end{tabular}
\caption{Summary of the combined effect of misalignments in the VELO and
the T-stations on various physics quantities involved in the selection of
$B^0_{(s)} \to h^+h^{'-}$ events.}
\label{tab:summary}
\end{center}
\vspace{0.5cm}
\end{table}
This study has shown that misalignments of the order of one third of the detector
single-hit resolutions (our ``1$\sigma$'' scales) have little or negligible
effects on the quality of the reconstruction
and of the analysis of $B^0_{(s)} \to h^+h^{'-}$ decays.
The impact of misalignments on the performance of the pattern recognition
algorithms and on the primary vertex resolutions has also been assessed
for the first time.
It is important to realise that these quantitative results obtained are
rather general and not restricted to $B^0_{(s)} \to h^+h^{'-}$ decays -- they are independent
of the actual B decay.
A natural follow-up study would involve the assessment of residual effects
after application of the several dedicated alignment algorithms being developed
at present.
Preliminary studies with the latest versions of the algorithms seem to
indicate a performance leading to residual misalignments of the order of
our ``1$\sigma$'' scales. A detailed discussion is left for a future analysis.
\subsection{Misalignments in the T stations}
\label{sec:b2hh_t}
\subsubsection{Effect on the pattern recognition}
In Table~\ref{tab:pr_ot} both the \texttt{Matching}\ and the \texttt{Forward}\ pattern
recognition efficiencies are shown for the 0$\sigma$, 1$\sigma$, 3$\sigma$ and
5$\sigma$ scenarios. Again, all efficiencies are quoted for all long tracks
in the event with no momentum cut applied.
It can be seen that for the set of misalignments considered there is a relative
loss, for the 5$\sigma$ case, of 0.6\% for the \texttt{Forward}\ efficiency and of
5\% for the \texttt{Matching}\ efficiency.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Misalignment & \texttt{Forward} & \texttt{Matching} \\
scenario & efficiency (\%) & efficiency (\%) \\
\hline\hline
0$\sigma$ & $85.9 \pm 0.2$ & $81.1 \pm 0.2$\\
1$\sigma$ & $85.8 \pm 0.2$ & $81.0 \pm 0.2$\\
3$\sigma$ & $85.6 \pm 0.2$ & $79.9 \pm 0.4$\\
5$\sigma$ & $85.4 \pm 0.3$ & $77.2 \pm 1.3$\\
\hline
\end{tabular}
\caption{\texttt{Matching}\ and \texttt{Forward}\ pattern recognition efficiencies
for various misalignment scenarios of the T-stations.}
\label{tab:pr_ot}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on the event selection}
In this section the effect of misalignments of the T-stations on the
various discriminating variables shown in Table~\ref{tab:shh} is analysed.
In Figures~\ref{fig:sel_ot} and \ref{fig:sel_ot_2} the distributions of the
discriminating variables are shown for the 0$\sigma$, 1$\sigma$, 3$\sigma$
and 5$\sigma$ cases (refer to Section~\ref{sec:b2hh_velo_sel} for an
explanation of the distributions, lines and cuts applied).
None of the discriminating variables is strongly affected
by the misalignments considered. This explains the relatively small
-- compared to the VELO case -- loss in number of selected events, shown in
Table~\ref{tab:sel_ot}, which amounts to 4.2\% in the worst-case scenario.
\subsubsection{Effect on resolutions}
In Figures~\ref{fig:resolution_ot} and~\ref{fig:resolution_ot_2} the
$B^0$ daughters' momentum, $B^0$ mass and proper time and primary vertex
and $B^0$ vertex resolutions are shown for the 0$\sigma$, 1$\sigma$,
3$\sigma$ and 5$\sigma$ cases. The values of the resolutions (the sigmas of
single-Gaussian fits) are summarised in
Tables~\ref{tab:resolution_ot} and~\ref{tab:resolution_ot_2}.
\begin{table}[hbtp]
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Misalignment & Number of \\
scenario & selected events \\
\hline\hline
0$\sigma$ & 4141 (100\%) \\
1$\sigma$ & 4131 (99.8\%)\\
3$\sigma$ & 4095 (98.9\%)\\
5$\sigma$ & 3969 (95.8\%)\\
\hline
\end{tabular}
\caption{Number of selected events after running the $B^0_{(s)} \to h^+h^{'-}$ selection
for the different T-stations misalignment scenarios.}
\label{tab:sel_ot}
\end{center}
\vspace{0.3cm}
\end{table}
\begin{table}[hbtp]
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
& Momentum & Mass & Proper time \\
\raisebox{1.5ex}{Misalignment} & resolution & resolution & resolution \\
\raisebox{1.75ex}{scenario} & (\%) & ($\mathrm{MeV}$) & ($\mathrm{fs}$) \\
\hline\hline
0$\sigma$ & 0.49 & 22.5 & 37.7\\
1$\sigma$ & 0.50 & 22.6 & 37.4 \\
3$\sigma$ & 0.54 & 23.4 & 37.4 \\
5$\sigma$ & 0.59 & 25.8 & 38.8 \\
\hline
\end{tabular}
\caption{Values of the resolutions on the daughters' momentum, the $B^0$ mass
and the $B^0$ proper time for the different T-stations misalignment scenarios.
The resolutions correspond to the sigmas of single-Gaussian fits.
The errors on all resolutions are around 1--1.5\%.}
\label{tab:resolution_ot}
\end{center}
\vspace{0.3cm}
\end{table}
\begin{table}[hbtp]
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
Misalignment & \multicolumn{3}{|c||}{Primary vertex} & \multicolumn{3}{|c|}{$B^0$ vertex}\\
scenario & \multicolumn{3}{|c||}{resolutions ($\mathrm{\mu m}$)} &
\multicolumn{3}{|c|}{resolutions ($\mathrm{\mu m}$)} \\
& $x$ & $y$ & $z$ & $x$ & $y$ & $z$\\
\hline\hline
0$\sigma$ & 9 & 9 & 41 & 14 & 14 & 147 \\
1$\sigma$ & 9 & 9 & 42 & 14 & 14 & 146 \\
3$\sigma$ & 9 & 9 & 42 & 14 & 14 & 145 \\
5$\sigma$ & 9 & 9 & 42 & 14 & 14 & 142 \\
\hline
\end{tabular}
\caption{Values of the position resolutions on the primary and the $B^0$ decay
vertices for the different T-stations misalignment scenarios.
The errors on all resolutions are around 1--2\%.}
\label{tab:resolution_ot_2}
\end{center}
\vspace{0.3cm}
\end{table}
Hardly any effect is visible on the primary and $B^0$ decay vertex
resolutions or on the $B^0$ proper time resolution. This is expected because
the resolution on these quantities is dominated by the VELO resolutions
and alignment. On the contrary, a 17\% effect can be seen on the momentum
resolution of the daughters and a 15\% effect on the $B^0$ invariant mass.
Being a two-body decay, the $B^0$ invariant mass resolution is dominated by
the momentum resolution of the daughters, which explains the effect on
the $B^0$ mass resolution.
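The resolutions quoted in this and the following tables are extracted as the
sigma of a single-Gaussian fit to the corresponding residual distribution. A
minimal version of such a fit (illustrative only, using a binned least-squares
fit rather than the actual analysis code) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, A, mu, sigma):
    return A * np.exp(-0.5 * ((x - mu) / sigma)**2)

def fit_resolution(residuals, nbins=100):
    """Sigma (and its error) of a single-Gaussian fit to the residuals."""
    y, edges = np.histogram(residuals, bins=nbins)
    x = 0.5 * (edges[:-1] + edges[1:])
    p0 = [y.max(), residuals.mean(), residuals.std()]
    popt, pcov = curve_fit(gauss, x, y, p0=p0)
    return abs(popt[2]), np.sqrt(pcov[2, 2])

# toy pseudo-data mimicking a 22.5 MeV mass resolution
rng = np.random.default_rng(2)
res = rng.normal(0.0, 22.5, 20000)      # reconstructed minus true mass (MeV)
sigma, err = fit_resolution(res)
print(f"fitted resolution: {sigma:.1f} +- {err:.1f} MeV")
\end{verbatim}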
The effect of misalignments on the proper time error has also been studied.
Figure~\ref{fig:tau_ot} shows the distribution of the proper time error for
the various misalignment scenarios as well as the respective pull
distributions. As already shown in the VELO case,
Section~\ref{sec:b2hh_velo_res}, it can be observed that even in the
aligned case there is a bias in the estimation of the proper time
and that the errors are under-estimated (see Table~\ref{tab:tau_t}).
No significant worsening is observed in case of misalignments.
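For reference, the pull is built per candidate as the difference between the
reconstructed and true proper time divided by the estimated per-event error.
The sketch below uses toy inputs chosen to mimic the numbers of
Table~\ref{tab:tau_t} (a small bias and errors under-estimated by some 15\%);
it is purely illustrative.
\begin{verbatim}
import numpy as np

def proper_time_pulls(tau_rec, tau_true, sigma_tau):
    """Pull = (reconstructed - true) / estimated per-event error."""
    return (tau_rec - tau_true) / sigma_tau

# toy inputs (fs): a 5% bias and 15% under-estimated errors
rng = np.random.default_rng(3)
n = 10000
tau_true  = rng.exponential(1500.0, n)
sigma_tau = np.full(n, 33.0)            # estimated per-event error
tau_rec   = tau_true + rng.normal(0.05 * 33.0, 1.15 * 33.0, n)

pulls = proper_time_pulls(tau_rec, tau_true, sigma_tau)
print(f"pull mean  = {pulls.mean():+.2f}  (bias)")
print(f"pull sigma = {pulls.std():.2f}  (> 1: errors under-estimated)")
\end{verbatim}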
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
Misalignment scenario & Mean & Sigma\\
\hline\hline
0$\sigma$ & $0.06\pm{}0.02$ & $1.14\pm{}0.01$\\
1$\sigma$ & $0.06\pm{}0.02$ & $1.14\pm{}0.01$\\
3$\sigma$ & $0.06\pm{}0.02$ & $1.15\pm{}0.01$\\
5$\sigma$ & $0.05\pm{}0.02$ & $1.17\pm{}0.01$\\
\hline
\end{tabular}
\caption{Values for the mean and sigma of the proper time pulls for the different T-stations misalignment scenarios.}
\label{tab:tau_t}
\end{center}
\vspace{0.5cm}
\end{table}
\subsection{Misalignments in the VELO}
\label{sec:b2hh_velo}
\subsubsection{Effect on the pattern recognition}
Once the misalignments are introduced at the reconstruction level, as explained
in Section~\ref{sec:misalignments}, their effects need to be studied both on
the pattern recognition (track finding efficiencies) and on the event
selection (efficiency for finding the correct decay).
The pattern recognition algorithms~\footnote{For more details
about the definitions of the pattern recognition efficiencies
see~\cite{tracking}.} considered are the ones that find:
\begin{itemize}
\item tracks in the VELO detector in $r$-$z$ and 3D-space. The algorithms are
hereafter denoted by \texttt{VeloR}\ and \texttt{VeloSpace}, respectively;
\item tracks that traverse the whole LHCb detector (called ``long tracks'').
The two existing long tracking algorithms are hereafter denoted
\texttt{Forward}\ and \texttt{Matching}.
\end{itemize}
In Table~\ref{tab:pr_velo} the efficiencies for the \texttt{VeloR}, \texttt{VeloSpace},
\texttt{Forward}, and \texttt{Matching}\ pattern recognition algorithms are shown for the
0$\sigma$, 1$\sigma$, 3$\sigma$ and 5$\sigma$ scenarios.
All efficiencies are quoted for all long tracks in the event with
no momentum cut applied.
A clear relative degradation of $6.1\%$ for the \texttt{VeloSpace}\ track finding
efficiency is observed in the $5\sigma$ scenario.
For the $1\sigma$ scenario there is hardly any effect.
The deterioration in the \texttt{Forward}\ and \texttt{Matching}\ long tracking efficiencies
can be fully attributed to the worsening in the VELO part.
Hence, these algorithms are not directly affected by VELO misalignments,
as expected.
All pattern recognition efficiencies quoted throughout the note refer to
averages obtained from the 10 sub-samples explained in
Section~\ref{sec:misalignments}. Further details are discussed in
Appendix~\ref{sec:subsamples}.
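Concretely, each quoted efficiency is the mean of the 10 per-sub-sample
efficiencies; the following sketch (with invented numbers) shows the averaging,
together with the spread over the sets and the error on the mean, either of
which may serve as the quoted uncertainty.
\begin{verbatim}
import numpy as np

# toy per-sub-sample efficiencies (%) for one misalignment scenario
eff = np.array([91.8, 90.2, 92.5, 89.4, 91.1, 90.7, 92.0, 89.9, 91.5, 90.6])

mean   = eff.mean()
spread = eff.std(ddof=1)                # spread over the 10 sets
print(f"mean = {mean:.1f} %, spread = {spread:.1f} %, "
      f"error on the mean = {spread / np.sqrt(len(eff)):.1f} %")
\end{verbatim}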
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
Misalignment & \texttt{VeloR} & \texttt{VeloSpace} & \texttt{Forward} & \texttt{Matching} \\
scenario & efficiency (\%) & efficiency (\%) & efficiency (\%) & efficiency (\%) \\
\hline\hline
0$\sigma$ & $98.0 \pm 0.1$ & $97.0 \pm 0.1$ & $85.9 \pm 0.2$ & $81.1 \pm 0.2$\\
1$\sigma$ & $98.0 \pm 0.1$ & $96.7 \pm 0.1$ & $85.6 \pm 0.2$ & $80.9 \pm 0.2$\\
3$\sigma$ & $98.0 \pm 0.1$ & $93.9 \pm 0.8$ & $83.1 \pm 0.6$ & $78.3 \pm 0.6$\\
5$\sigma$ & $97.7 \pm 0.2$ & $91.1 \pm 1.7$ & $80.1 \pm 1.6$ & $75.5 \pm 1.5$\\
\hline
\end{tabular}
\caption{\texttt{VeloR}, \texttt{VeloSpace}, \texttt{Forward}\ and \texttt{Matching}\ pattern recognition
efficiencies for a perfectly aligned VELO and various VELO
misalignment scenarios.}
\label{tab:pr_velo}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on the event selection}
\label{sec:b2hh_velo_sel}
In this section the effect of misalignments of the VELO on the
various discriminating variables shown in Table~\ref{tab:shh} is analysed.
In this instance it is important to realise that the selection cuts on these
discriminating variables were optimized for a perfect alignment. It may be
that the deteriorations reported hereafter can be reduced to some extent
by re-optimizing the cuts for the non-perfectly aligned scenarios.
This same comment holds for all the studies summarised in the remainder of
the note.
In Figures~\ref{fig:sel_velo} and~\ref{fig:sel_velo_2} the distributions of the
discriminating variables are shown for the 0$\sigma$, 1$\sigma$, 3$\sigma$
and 5$\sigma$ cases.
In each plot the full line represents the $0\sigma$ case; the dashed line
represents the $1\sigma$ case; the dotted line represents the $3\sigma$ case
and the dot-dashed line represents the $5\sigma$ case.
Note that all plots are obtained after applying the
full selection on all the variables but the plotted one.
The cut value is indicated by a vertical line.
In case of the $p_{T}$ and impact parameter significance cuts on the pions,
where one threshold is applied to both pions and another has to be exceeded
by at least one of them, the common threshold is depicted by a solid line,
while the additional threshold is shown as a dashed line.
In addition, normalised integrals are shown to give a direct comparison of
the acceptances for different misalignments~\footnote{In the case of the $p_{T}$
and impact parameter significance cuts, which have been switched off
simultaneously, the acceptance related to the second cut cannot be directly
read off the acceptance functions.}.
The most sensitive variable to VELO misalignments is the impact parameter
significance of the $B^0$ (Figure~\ref{fig:sel_velo}(a)).
The acceptance of this cut drops by a factor of two for misalignments as large
as in the $5\sigma$ case.
Other discriminating variables that are affected by VELO misalignments are
the $B^0$ decay vertex $\chi^2$ (relative loss $\approx{}30\%$) and the
daughters' impact parameter significance (relative loss $<10\%$).
The overview of the number of selected events is given in
Table~\ref{tab:sel_velo}.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Misalignment & Number of \\
scenario & selected events \\
\hline\hline
0$\sigma$ & 4141 (100\%) \\
1$\sigma$ & 3829 (92.5\%)\\
3$\sigma$ & 2194 (53.0\%)\\
5$\sigma$ & 1082 (26.1\%)\\
\hline
\end{tabular}
\caption{Number of selected events after running the $B^0_{(s)} \to h^+h^{'-}$ selection
for the different VELO misalignment scenarios.}
\label{tab:sel_velo}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on resolutions}
\label{sec:b2hh_velo_res}
After having studied the effect of misalignments on the event selection it is
equally important to evaluate their impact on physics analysis observables;
therefore, momentum and mass resolution have been studied as well as the
resolutions on the primary and secondary ($B^0$ decay) vertices and on the
proper time. These resolutions are shown in Figures~\ref{fig:res_velo}
and~\ref{fig:res_velo_2} while their values
(the sigmas of single-Gaussian fits) are summarised in
Tables~\ref{tab:resolution_velo} and~\ref{tab:resolution_velo_2}.
The momentum and mass resolutions are only affected on the
percent level by VELO misalignments, while the vertex related quantities are
strongly affected.
For the latter the resolutions worsen by roughly a factor two for the $5\sigma$
case compared to the fully aligned scenario.
Note that, in particular, the results on the precision of the reconstruction
of primary vertices are fully general and not restricted to $B^0_{(s)} \to h^+h^{'-}$ decays.
In addition to the ``bare'' resolutions, the effect of misalignments on the
assigned proper time error has been studied. Figure~\ref{fig:tau_velo} shows
the distribution of the proper time error for the various misalignment
scenarios as well as the respective pull distributions.
It is clearly visible and confirmed by the numbers in Table~\ref{tab:tau_velo}
that even in the aligned case there is a bias in the estimation of the proper
time and that the errors are underestimated.
With misalignments both the bias and the error estimation worsen significantly.
Note that the errors used in the pull distributions do not account for
the change in the effective single-hit resolution due to misalignments.
Hence, these figures should only be seen as another way
of illustrating the degrading precision as a function of misalignments.
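For reference, the pull used here is the residual of the reconstructed proper
time divided by its event-by-event error estimate. A minimal sketch of the
computation (in Python, with hypothetical input arrays rather than the actual
analysis code) is:
\begin{verbatim}
import numpy as np

# Hypothetical reconstructed/true proper times (ps) and the
# per-event error estimates assigned by the fit.
t_rec   = np.array([1.52, 0.88, 2.10, 0.47])
t_true  = np.array([1.49, 0.90, 2.01, 0.50])
sigma_t = np.array([0.04, 0.05, 0.06, 0.04])

pull = (t_rec - t_true) / sigma_t
# An unbiased estimate with correct errors gives mean ~ 0 and width ~ 1;
# the values quoted in the tables of this section show a positive mean
# and a width above 1.
print(pull.mean(), pull.std(ddof=1))
\end{verbatim}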
\begin{table}
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
& Momentum & Mass & Proper time \\
\raisebox{1.5ex}{Misalignment} & resolution & resolution & resolution \\
\raisebox{1.75ex}{scenario} & (\%) & ($\mathrm{MeV}$) & ($\mathrm{fs}$) \\
\hline\hline
0$\sigma$ & 0.49 & 22.5 & 37.7\\
1$\sigma$ & 0.50 & 22.5 & 39.4 \\
3$\sigma$ & 0.52 & 22.2 & 58.1 \\
5$\sigma$ & 0.52 & 23.5 & 82.0 \\
\hline
\end{tabular}
\caption{Values of the resolutions on the daughters' momentum, the $B^0$ mass
and the $B^0$ proper time for the different VELO misalignment scenarios.
The resolutions correspond to the sigmas of single-Gaussian fits.
The errors on all resolutions are around 1-1.5 \%.}
\label{tab:resolution_velo}
\end{center}
\vspace{0.3cm}
\end{table}
\begin{table}
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
Misalignment & \multicolumn{3}{|c||}{Primary vertex} & \multicolumn{3}{|c|}{$B^0$ vertex}\\
scenario & \multicolumn{3}{|c||}{resolutions ($\mathrm{\mu m}$)} &
\multicolumn{3}{|c|}{resolutions ($\mathrm{\mu m}$)} \\
& $x$ & $y$ & $z$ & $x$ & $y$ & $z$\\
\hline\hline
0$\sigma$ & 9 & 9 & 41 & 14 & 14 & 147 \\
1$\sigma$ & 10 & 10 & 48 & 15 & 15 & 155 \\
3$\sigma$ & 16 & 16 & 81 & 21 & 21 & 226 \\
5$\sigma$ & 23 & 27 & 147 & 28 & 29 & 262 \\
\hline
\end{tabular}
\caption{Values of the position resolutions on the primary and the $B^0$ decay
vertices for the different VELO misalignment scenarios.
The resolutions correspond to the sigmas of single-Gaussian fits.
The errors on all resolutions are around 1-2 \%.}
\label{tab:resolution_velo_2}
\end{center}
\vspace{0.3cm}
\end{table}
\begin{table}
\vspace{0.3cm}
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
Misalignment scenario & Mean & Sigma\\
\hline\hline
0$\sigma$ & $0.06\pm{}0.02$ & $1.14\pm{}0.01$\\
1$\sigma$ & $0.09\pm{}0.02$ & $1.20\pm{}0.01$\\
3$\sigma$ & $0.09\pm{}0.03$ & $1.62\pm{}0.02$\\
5$\sigma$ & $0.18\pm{}0.04$ & $2.08\pm{}0.03$\\
\hline
\end{tabular}
\caption{Values for the mean and sigma of the proper time pulls
for the different VELO misalignment scenarios.}
\label{tab:tau_velo}
\end{center}
\vspace{0.3cm}
\end{table}
\subsection{Misalignments in the VELO and T stations}
\label{sec:b2hh_all}
\subsubsection{Effect on the pattern recognition}
In Table~\ref{tab:pr_all} the \texttt{VeloR}, \texttt{VeloSpace}, \texttt{Forward}\ and \texttt{Matching}\
pattern recognition efficiencies for all long tracks in the event with
no momentum cut applied are shown for the
0$\sigma$, 1$\sigma$, 3$\sigma$ and the 5$\sigma$ scenarios.
For the set of misalignments considered there is
a relative loss of 8.6\% for the \texttt{Forward}\ efficiency and of 12.9\% for
the \texttt{Matching}\ efficiency. These numbers roughly correspond to the combined
losses due to the misalignments applied independently in the VELO
and in the T-stations, shown in the previous subsections. This indicates that
the two effects are largely uncorrelated.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Misalignment & \texttt{VeloR} & \texttt{VeloSpace} & \texttt{Forward} & \texttt{Matching}\\
scenario & efficiency (\%) & efficiency (\%) & efficiency (\%) & efficiency (\%)\\
\hline\hline
0$\sigma$ & $98.0 \pm 0.1$ & $97.0 \pm 0.1$ & $85.9 \pm 0.2$ & $81.1 \pm 0.2$\\
1$\sigma$ & $98.0 \pm 0.1$ & $96.8 \pm 0.1$ & $85.6 \pm 0.2$ & $80.8 \pm 0.2$\\
3$\sigma$ & $98.0 \pm 0.1$ & $94.3 \pm 0.4$ & $83.3 \pm 0.5$ & $77.3 \pm 0.7$\\
5$\sigma$ & $97.8 \pm 0.2$ & $90.1 \pm 1.7$ & $78.5 \pm 1.8$ & $70.6 \pm 1.9$\\
\hline
\end{tabular}
\caption{\texttt{VeloR}, \texttt{VeloSpace}, \texttt{Forward}\ and \texttt{Matching}\ pattern recognition
efficiencies for various misalignment scenarios of both the VELO
and the T-stations.}
\label{tab:pr_all}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on the event selection}
In Table~\ref{tab:selall} the number of selected events is shown for the
different misalignment scenarios of both the VELO and the T-stations.
As already shown, if only the T-stations misalignments are considered,
the loss in the number of selected events amounts to 4.2\%, while in the
VELO case, the loss in number of selected events amounts to 73.9\%.
It can be concluded that the 75.6\% loss in number of selected events,
here seen in the worst-case scenario, is mostly due to losses induced by
misalignments in the VELO.
Therefore the distributions of the single-cut variables of the selection
have not been studied again.
It should be kept in mind that, though the effects of misalignments on the
performance of the particle identification have not been studied here, the
latter is expected to be influenced mainly by T-stations misalignments.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Misalignment & Number of \\
scenario & selected events \\
\hline\hline
0$\sigma$ & 4141 (100\%) \\
1$\sigma$ & 3807 (91.9\%)\\
3$\sigma$ & 2041 (49.3\%)\\
5$\sigma$ & 1009 (24.4\%)\\
\hline
\end{tabular}
\caption{Number of selected events after running the $B^0_{(s)} \to h^+h^{'-}$ selection
for the different misalignment scenarios of both the VELO
and the T-stations considered.}
\label{tab:selall}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on resolutions}
In Figures~\ref{fig:resolution_all} and \ref{fig:resolution_all_2} the
$B^0$ daughters' momentum, $B^0$ mass and proper time and primary vertex
and $B^0$ vertex resolutions are shown for the 0$\sigma$, 1$\sigma$,
3$\sigma$ and 5$\sigma$ cases. The values of the resolutions (the sigmas of
single-Gaussian fits) are summarised in
Tables~\ref{tab:resolution_all} and \ref{tab:resolution_all_2}.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
& Momentum & Mass & Proper time \\
\raisebox{1.5ex}{Misalignment} & resolution & resolution & resolution \\
\raisebox{1.75ex}{scenario} & (\%) & ($\mathrm{MeV}$) & ($\mathrm{fs}$) \\
\hline\hline
0$\sigma$ & 0.49 & 22.5 & 37.7\\
1$\sigma$ & 0.50 & 22.3 & 40.9 \\
3$\sigma$ & 0.56 & 25.1 & 58.0 \\
5$\sigma$ & 0.63 & 25.5 & 78.6 \\
\hline
\end{tabular}
\caption{Values of the resolutions on the daughters' momentum, the $B^0$ mass
and the $B^0$ proper time for the different misalignment scenarios of both the
VELO and the T-stations.
The resolutions correspond to the sigmas of single-Gaussian fits.
The errors on all resolutions are around 1-1.5 \%.}
\label{tab:resolution_all}
\end{center}
\vspace{0.5cm}
\end{table}
Comparing these results with the ones previously shown for independent
misalignments of the VELO and of the T-stations, it can be seen that while
VELO misalignments strongly influence the primary and the $B^0$ vertex
resolutions, and consequently the proper time resolution, T-stations
misalignments have an effect on the daughters' momentum resolution and
therefore on the $B^0$ mass resolution. Both misalignments have complementary
effects.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
Misalignment & \multicolumn{3}{|c||}{Primary vertex} & \multicolumn{3}{|c|}{$B^0$ vertex}\\
scenario & \multicolumn{3}{|c||}{resolutions ($\mathrm{\mu m}$)} &
\multicolumn{3}{|c|}{resolutions ($\mathrm{\mu m}$)} \\
& $x$ & $y$ & $z$ & $x$ & $y$ & $z$\\
\hline\hline
0$\sigma$ & 9 & 9 & 41 & 14 & 14 & 147 \\
1$\sigma$ & 10 & 10 & 48 & 15 & 15 & 159 \\
3$\sigma$ & 14 & 17 & 84 & 20 & 21 & 214 \\
5$\sigma$ & 23 & 27 & 153 & 26 & 31 & 260 \\
\hline
\end{tabular}
\caption{Values of the position resolutions on the primary and the $B^0$ decay
vertices for the different misalignment scenarios of both the VELO
and the T-stations.
The errors on all resolutions are around 1-2 \%.}
\label{tab:resolution_all_2}
\end{center}
\vspace{0.5cm}
\end{table}
Finally, the effect of misalignments on the proper time error has been studied
(see Table~\ref{tab:tau_all}).
Figure~\ref{fig:tau_all} shows the distribution of the proper time error for
the various misalignment scenarios as well as the respective pull
distributions. Again, a bias is observed in the estimation of the proper
time, and the proper time errors are under-estimated.
Comparing these results with the ones previously shown in
Tables~\ref{tab:tau_velo} and~\ref{tab:tau_t},
it can be concluded that they strongly resemble the ones in
Table~\ref{tab:tau_velo}, indicating that the worsening seen here
originates from the misalignments in the VELO.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
Misalignment scenario & Mean & Sigma\\
\hline\hline
0$\sigma$ & $0.06\pm{}0.02$ & $1.14\pm{}0.01$\\
1$\sigma$ & $0.05\pm{}0.02$ & $1.22\pm{}0.02$\\
3$\sigma$ & $0.11\pm{}0.04$ & $1.63\pm{}0.03$\\
5$\sigma$ & $0.15\pm{}0.07$ & $2.10\pm{}0.06$\\
\hline
\end{tabular}
\caption{Values for the mean and sigma of the proper time pulls for the
different misalignment scenarios of both the VELO and the T-stations.}
\label{tab:tau_all}
\end{center}
\vspace{0.5cm}
\end{table}
\subsection{Effects of changes in the VELO $z$-scale}
\label{sec:b2hh_z-scale}
In addition to studying the effects of random misalignments, the change
of the VELO $z$-scale has been examined.
This is of particular interest to lifetime measurements as it potentially
directly introduces a bias in the measured proper time.
A $z$-scaling effect could be expected from an expansion due to temperature
variations of the VELO components, particularly the Aluminium base plate
onto which the individual modules are screwed.
However, the base plate is kept at a constant $20^{\circ}\mathrm{C}$ by
additional local heating.
In addition, the scaling should be limited by the carbon-fibre constraint
system that keeps the modules in place with a precision of
$100~\mathrm{\mu{}m}$ and which is less prone to temperature-induced expansion
given its material\footnote{A conservative estimate using a temperature change
of $10$ K yields a scaling in the $z$-direction of $2\times{}10^{-5}$. The $10$ K are estimated as a maximal change in the temperature of the constraint system as it has a large area contact to the base plate at $20^{\circ}$C and only a small cross-section with the VELO modules at about $-5^{\circ}$C.}.
To assess the influence of an incorrect knowledge of the VELO $z$-scale,
four scenarios with different $z$-scales have been simulated and studied.
For each scenario the $z$-position of each module has been changed according
to the equation
\begin{equation}
z_{\mathrm{module}}\rightarrow{}z_{\mathrm{module}}\cdot{}(1+\mathit{scale})\, ,
\end{equation}
where $\mathit{scale}$ takes the four values $\frac{1}{3}\times{}10^{-4}$, $10^{-4}$,
$\frac{1}{3}\times{}10^{-3}$, and $10^{-3}$ for the four scenarios, respectively.
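As an illustration of this transformation, the following minimal sketch (in
Python, with hypothetical module positions rather than the actual VELO
geometry) applies the four scalings to a set of nominal $z$-positions:
\begin{verbatim}
import numpy as np

# Hypothetical nominal module z-positions in mm (not the real geometry).
z_nominal = np.array([-175.0, -100.0, -25.0, 50.0, 125.0, 750.0])

# The four scales studied here: 1/3e-4, 1e-4, 1/3e-3, 1e-3.
for scale in [1e-4 / 3, 1e-4, 1e-3 / 3, 1e-3]:
    z_scaled = z_nominal * (1.0 + scale)
    # The induced displacement grows linearly with |z|, which is why
    # the downstream modules are affected the most.
    print(scale, z_scaled - z_nominal)
\end{verbatim}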
\subsubsection{Effect on the pattern recognition}
The first quantities to be studied with a changed VELO $z$-scale were the
pattern recognition efficiencies.
As shown in Table~\ref{tab:pr_z_scaling}, no deterioration is observed
up to a change in the $z$-scale of $1/3\times{}10^{-3}$.
This is expected for the VELO-based pattern recognition algorithms, as a
$z$-scaling effectively only changes the track slopes.
For the largest $z$-scaling under study, small losses in the VELO-based
pattern recognition efficiencies are observed.
These also propagate to the \texttt{Forward}\ and \texttt{Matching}\ efficiencies.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$z$-scale & \texttt{VeloR} & \texttt{VeloSpace} & \texttt{Forward} & \texttt{Matching}\\
& efficiency (\%) & efficiency (\%) & efficiency (\%) & efficiency (\%)\\
\hline\hline
$1.00000$ & 98.0 & 97.0 & 85.9 & 81.1\\
$1.00003$ & 98.0 & 97.0 & 85.9 & 81.2\\
$1.00010$ & 98.0 & 97.0 & 85.9 & 81.2\\
$1.00033$ & 98.0 & 96.8 & 85.7 & 81.0\\
$1.00100$ & 96.5 & 94.3 & 83.8 & 79.0\\
\hline
\end{tabular}
\caption{\texttt{VeloR}, \texttt{VeloSpace}, \texttt{Forward}\ and \texttt{Matching}\ pattern recognition
efficiencies for the various VELO $z$-scaling misalignment scenarios.}
\label{tab:pr_z_scaling}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on the event selection}
When studying the influence of the various $z$-scales on the event selection,
the situation observed for the pattern recognition performance repeats itself.
The overview of the number of selected events is given in
Table~\ref{tab:sel_z}.
The first four scales under study show only a minor loss in the number of
selected events, while a relative loss of about $20\%$ is observed for the
largest $z$-scale.
As for the studies in the previous sections, this is due to a
worsening in the resolution of the various cut variables, among which
the VELO-related quantities have shown particular sensitivity.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$z$-scale & Number of \\
& selected events \\
\hline\hline
$1.00000$ & 4141 (100.0\%)\\
$1.00003$ & 4137 (99.9\%)\\
$1.00010$ & 4142 (100.0\%)\\
$1.00033$ & 4063 (98.1\%)\\
$1.00100$ & 3273 (79.0\%)\\
\hline
\end{tabular}
\caption{Number of selected events after running the $B^0_{(s)} \to h^+h^{'-}$ selection
for the various VELO $z$-scaling misalignment scenarios.}
\label{tab:sel_z}
\end{center}
\vspace{0.5cm}
\end{table}
\subsubsection{Effect on resolutions}
The effect of an incorrectly known VELO $z$-scale on the resolutions of
various physics quantities is summarised in Tables~\ref{tab:resolution_z}
and~\ref{tab:resolutionz_2}.
The relevant resolution distributions are pictured in Figures~\ref{fig:res_z}
and~\ref{fig:res_z_2}.
For the first three $z$-scaling scenarios the observed
changes in the resolutions are minimal.
Only for the two largest $z$-scaling cases does one observe a sizeable
deterioration, in particular of the proper time and vertex resolutions.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
& Momentum & Mass & Proper time \\
$z$-scale & resolution & resolution & resolution \\
& (\%) & ($\mathrm{MeV}$) & ($\mathrm{fs}$) \\
\hline\hline
$1.00000$ & 0.49 & 22.5 & 37.7 \\
$1.00003$ & 0.49 & 22.2 & 37.7 \\
$1.00010$ & 0.49 & 22.1 & 37.7 \\
$1.00033$ & 0.49 & 22.0 & 38.5 \\
$1.00100$ & 0.50 & 22.0 & 46.8 \\
\hline
\end{tabular}
\caption{Values of the resolutions on the daughters' momentum, the $B^0$ mass
and the $B^0$ proper time for the various VELO $z$-scaling misalignment
scenarios.
The resolutions correspond to the sigmas of single-Gaussian fits.
The errors on all resolutions are around 1-1.5 \%.}
\label{tab:resolution_z}
\end{center}
\vspace{0.5cm}
\end{table}
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
$z$-scale & \multicolumn{3}{|c||}{Primary vertex} & \multicolumn{3}{|c|}{$B^0$ vertex}\\
& \multicolumn{3}{|c||}{resolutions ($\mathrm{\mu m}$)}
& \multicolumn{3}{|c|}{resolutions ($\mathrm{\mu m}$)} \\
& $x$ & $y$ & $z$ & $x$ & $y$ & $z$\\
\hline\hline
$1.00000$ & 9 & 9 & 41 & 14 & 14 & 147 \\
$1.00003$ & 9 & 9 & 42 & 14 & 14 & 147 \\
$1.00010$ & 9 & 9 & 42 & 14 & 14 & 145 \\
$1.00033$ & 9 & 9 & 46 & 14 & 14 & 149 \\
$1.00100$ & 11 & 11 & 72 & 16 & 15 & 184 \\
\hline
\end{tabular}
\caption{Values of the resolutions of the primary and the $B^0$ decay vertices
for the various VELO $z$-scaling scenarios.
The errors on all resolutions are around 1-1.5 \%.}
\label{tab:resolutionz_2}
\end{center}
\vspace{0.5cm}
\end{table}
Looking at the pull distributions for the reconstructed proper time shown in
Figure~\ref{fig:tau_z} and their summary in Table~\ref{tab:tau_z},
it appears that there is no significant change in the proper time bias due to
a change in the $z$-scale.
This is expected as, even for the largest $z$-scale under study, the estimated effect on the pull mean is of the order of its uncertainty.
\begin{table}[hbtp]
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
$z$-scale & Mean & Sigma\\
\hline\hline
$1.00000$ & $0.06\pm{}0.02$ & $1.14\pm{}0.01$\\
$1.00003$ & $0.06\pm{}0.02$ & $1.15\pm{}0.01$\\
$1.00010$ & $0.07\pm{}0.02$ & $1.15\pm{}0.02$\\
$1.00033$ & $0.07\pm{}0.02$ & $1.15\pm{}0.01$\\
$1.00100$ & $0.05\pm{}0.02$ & $1.35\pm{}0.02$\\
\hline
\end{tabular}
\caption{Values for the mean and sigma of the proper time pulls
for the various VELO $z$-scaling scenarios.}
\label{tab:tau_z}
\end{center}
\vspace{0.5cm}
\end{table}
\section{Introduction}
The study of symmetries in physics has helped to simplify difficult problems. For example, the symmetries in the Hamiltonian dynamical evolution of a quantum system can be related to the definition of different conservation laws which, as in the classical theory, can be used to answer different questions. The use of symmetries in quantum mechanics, in particular the definition of states associated to point symmetry groups, has been covered in several works \cite{manko1,manko2,manko3,castanos}. Especially, the states carrying the symmetry of the cyclic group $C_2=\mathbb{Z}/(2 \mathbb{Z})$, also called odd and even cat states, have been of great interest in the past decades. The nonclassical properties of this kind of states have been discussed in \cite{buzek}, together with their use in fundamental quantum theory \cite{sanders,wenger,jeong,stob,wineland} and in the quantum information framework \cite{vanerick,jeong2,ralph,gilchrist,bergmann}.
For several years, it was not possible to construct a cat state with a large photon number. Instead, low-photon-number cat states, known as kitten states, were generated \cite{ourjoumtsev1}. After that, the possibility to obtain full cat states has been demonstrated in several studies: by using the reflection of a coherent pulse from an optical cavity with one atom \cite{hacker,wang}, by the use of homodyne detection on a photon number state \cite{ourjoumtsev2}, by photon subtraction from a squeezed vacuum state in a parametric amplifier \cite{neegaard}, via ancilla-assisted photon subtraction \cite{takahashi}, and by the subtraction of a specific photon number from a squeezed vacuum state \cite{gerrits}. Superpositions of coherent states have non-classical features like squeezing of the quadrature components \cite{buzek,janszky,domokos}. There exists a possible experimental implementation of these superpositions \cite{szabo}, in particular of superpositions of coherent states on a circle \cite{janszky,domokos,gonzalez}. The states adapted to this type of symmetry also have a connection to the phase-time operators in the harmonic oscillator \cite{susskind,nieto1,pegg1,pegg2}. The definition of states carrying the circle symmetry has been extended by the use of spin coherent states as in \cite{calixto}; in \cite{calixto1}, the use of $su(1,1)$ coherent states on the hyperboloid was considered.
More recently, a method to generate states with higher discrete symmetries, such as the ones defined here, has been proposed based on the dynamical evolution of a matter--field interaction described by the Tavis-Cummings model \cite{cordero1,cordero2}. There is experimental evidence for the generation of superpositions of four coherent states with a photon number of 111 \cite{vlastakis}. Also, the cluster structure of light nuclei such as $^{12}\mathrm{C}$ and $^{13}\mathrm{C}$ has been described by point symmetry groups like the ones discussed here, $D_{3h}$ and $D_{3h}^\prime$, respectively.
In this work, the generalization of the quantum states associated to the irreducible representations of the group whose elements are the symmetry rotations of the $n$-sided regular polygon, also named the cyclic group ($C_n=\mathbb{Z}/(n \mathbb{Z})$), and of the group containing the rotation and reflection symmetries of the regular polygon, i.e., the dihedral group ($D_n$), is presented. Some states of this type have previously been defined using coherent states \cite{manko1,manko2,manko3,castanos}.
In the present work, it is shown that the cyclic and dihedral states form an orthogonal set of states, which can be used to define a discrete representation of states made of superpositions of rotations, in the case of the cyclic group, and of rotations plus reflections, in the case of the dihedral group. It can also be seen that this discrete representation can simplify the calculation of quantum parameters such as the entanglement between two subsystems within a system. For these reasons, we consider that, given the applications of the cyclic and dihedral coherent states in quantum information, the generalization of such states to the noncoherent case is important.
The proposed method discussed here makes use of an initial state $\vert \phi \rangle$ which is not invariant under rotations. To define the cyclic states, the superposition of the rotated states $\vert \phi_r \rangle=\hat{R}(\theta_r)\vert \phi \rangle$ ($r=1,\ldots,n$; $\theta_r=2\pi(r-1)/n$), and the characters associated to the $\lambda$-th irreducible representation and the $r$-th element of the group ($\chi^{(\lambda)}(g_r)$), are used. The relation between the cyclic states and the renormalized states obtained from the erasure of certain photon numbers in the photon statistics of $\vert \phi \rangle$ or $\hat{\rho}$ is also discussed; e.g., the cat states associated to the cyclic group $C_2$, $\vert \xi_\pm \rangle = N_\pm (\vert \alpha \rangle\pm\vert-\alpha\rangle)$, are the renormalized states resulting from eliminating the odd and even photon number states from the coherent state $\vert \alpha \rangle$, respectively.
On the other hand, the dihedral group $D_n$ is the non-Abelian group that contains the rotations and inversions which leave the $n$-sided regular polygon invariant. The elements of the dihedral group are $D_n:\{\hat{R}(\theta_j), \hat{U}_j,\, j=1,\ldots,n \}$, with $\theta_j=2\pi(j-1)/n$, where the inversion operators in the phase space are defined by a rotation plus the complex conjugation ($\hat{C}$), i.e., $\hat{U}_j=\hat{C}\hat{R}(\theta_j)$.
In addition to pure states, non-pure cyclic and dihedral states can be defined through a density matrix. These states correspond to a quantum map of a noninvariant, arbitrary operator $\hat{\rho}$. This type of quantum map has recently become relevant in quantum information theory. In particular, quantum maps have been important for quantum error correction, as some of the studied qubit maps represent the interaction between a qubit and an environment \cite{terhal,caruso}. Furthermore, the study of the erasure map, presented here, can be important to figure out the experimental realization of the defined states, as the resulting states depend on the absorption (erasure) of certain number states.
As a reminder of some group characteristics, we establish that, given a group of $n$ elements $\{g_r;\, r=1, \ldots,n \}$, the conjugacy class of an element $g_j$ is formed by all the elements obtained through the similarity transformations $g_k^{-1} g_j g_k$, where $g_k$ runs over the group. An irreducible representation $\lambda$ is a representation of the group that cannot be decomposed further. To obtain the irreducible representations, sometimes the following procedure can be applied: if there exists a similarity transformation which brings the representation matrices of the elements $g_j$ into block-diagonal form, i.e., $C^{-1}g_j C=A_{D_j}$, where $A_{D_j}$ is made of diagonal blocks $A_{D_j^{(\lambda)}}$, then the blocks $A_{D_j^{(\lambda)}}$ form irreducible representations of the group. The character $\chi$ associated to the irreducible representation $\lambda$ is defined as the trace of the corresponding block, that is, $\chi^{(\lambda)}(g_j)={\rm Tr}\left(A_{D_j^{(\lambda)}}\right)$. Also, all the members of a conjugacy class share the same characters. In the case of the cyclic group, the character associated to the irreducible representation $\lambda$ and the element $g_r$ of the group is given by $\chi^{(\lambda)}_n(g_r)=e^{2\pi i(\lambda-1)(r-1)/n}$.
This work is organized as follows: In section 2 a review of the cyclic states constructed by means of coherent states is presented. The generalization of this type of states to non-coherent systems is then described in section 3. The correspondence between the generalized cyclic state and a renormalized state obtained through the elimination of certain photon numbers in an original system is studied in section 4. In section 5, some examples are given: the cyclic Gaussian states are defined and some of their properties are exemplified. Also, the circle symmetry states are presented as an extension of the states associated to $C_n$, with $n\rightarrow \infty$. In section 6, the idea of the pure cyclic states of $C_n$ is extended to the case of non-pure density matrices. This is done by the definition of a map of the density matrix, which can also be related to the erasure and renormalization of certain photon numbers in the initial state. The usefulness of this kind of system for the study of the entanglement in a two-mode system is shown in section 7. The dihedral states are defined in section 8. Finally, some conclusions are given.
\section{Cyclic coherent states}
In previous works, different states associated to the irreducible representations of cyclic groups \cite{manko1,manko2,manko3,castanos} have been defined using coherent states \cite{glauber,titulaer,birula,stoler}. The resulting states, called crystallized cat states, have some interesting properties such as sub-Poissonian photon statistics, squeezing, and antibunching \cite{sun1,sun2,castanos}. Also, it has been demonstrated that they can be generated by the interaction of an atom with an electromagnetic field \cite{hacker,wang}. Here, we present a summary of the definition and some properties of the coherent cyclic states.
The cyclic group $C_n$ has as elements the discrete rotations associated to the symmetries of the regular polygon of $n$ sides, i.e., $C_n=\{R(\theta_j), \theta_j = 2\pi (j-1) /n, \ {\rm with}\ (j=1,\ldots,n)\}$. The number of elements is equal to the order of the group and they can be divided into different conjugacy classes $\{g_r\}$. The character of the class $g_r$ for the irreducible representation $\lambda$ is denoted by $\chi^{(\lambda)}_n(g_r)$ and is given by the trace of the irreducible representation. It is known that in the case of the cyclic group each element forms its own class ($g_j=R(\theta_j)$) and that the characters of the classes are the $n$-th roots of the identity,
\begin{equation}
\chi^{(\lambda)}_n(g_r)=\exp \left[ \frac{2 i \pi (\lambda-1)(r-1) }{n}\right] \, , \quad {\rm with}\ \lambda,r=1,\ldots,n \, .
\label{chi}
\end{equation}
Additionally, the characters for any two irreducible representations $\lambda$ and $\lambda'$ are orthonormal, i.e.,
\begin{equation}
\frac{1}{n}\sum_{r=1}^n \chi^{(\lambda)}_n(g_r) \chi^{*(\lambda')}_n(g_r)= \delta_{\lambda \lambda'}
\label{ort1}
\end{equation}
and also the sum of the characters over all the irreducible representations $\lambda$ satisfies
\begin{equation}
\frac{1}{n}\sum_{\lambda=1}^n \chi^{(\lambda)}_n(g_r) \chi^{*(\lambda)}_n(g_{r'})= \delta_{r r'} \, .
\label{ort2}
\end{equation}
These two orthogonality conditions can be quickly checked using the rule for the sum of the roots of the identity,
\begin{equation}
\sum_{j=1}^n \mu_n^j=0\, , \quad {\rm where} \ \mu_n=\exp\left(\frac{2\pi i}{n}\right),
\label{powers}
\end{equation}
This property also leads to the following theorem.
\begin{theorem}
\label{tt1}
Let $r$ be an integer and $\mu_n=\exp(2\pi i/n )$, then $\sum_{j=1}^n \mu_n^{jr}=n\, \delta_{{\rm mod}(r,n),0}$.
\end{theorem}
\begin{proof}
It is clear that for $r$ a multiple of $n$ (${\rm mod}(r,n)=0$) one has $\mu_n^{rj}=1$, and thus the sum $\sum_{j=1}^n \mu_n^{jr}$ is equal to $n$. For $r$ not a multiple of $n$ (${\rm mod}(r,n) \neq 0$) we recall that the sum
\[
\sum_{j=1}^n x^j=x\frac{x^n-1}{x-1} \, ,
\]
which in the case of $x=\mu_n^r$, implies
\[
\sum_{j=1}^n \mu_n^{jr}=\mu_n^r \frac{\mu_n^{rn}-1}{\mu_n^r-1}=0\, ,
\]
as $\mu_n^{rn}=1$. It is important to notice that this property is satisfied for any integer $r$, in particular for $r$ a negative integer.
\end{proof}
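As a numerical cross-check of Theorem~\ref{tt1} and of the orthogonality relations in Eqs.~(\ref{ort1}) and (\ref{ort2}), one may run the following minimal sketch (in Python with NumPy; the choice $n=5$ is arbitrary):
\begin{verbatim}
import numpy as np

n = 5
mu = np.exp(2j * np.pi / n)
j = np.arange(1, n + 1)

# Theorem 1: sum_{j=1}^n mu^(j r) = n if mod(r, n) = 0, else 0.
for r in [-3, 0, 2, n, 3 * n]:
    s = np.sum(mu ** (j * r))
    assert np.isclose(s, n if r % n == 0 else 0.0)

# Characters chi_n^(lambda)(g_r) = mu^((lambda-1)(r-1)) and their
# orthogonality over the group elements.
chi = lambda lam, r: mu ** ((lam - 1) * (r - 1))
for l1 in range(1, n + 1):
    for l2 in range(1, n + 1):
        s = sum(chi(l1, r) * np.conj(chi(l2, r)) for r in j) / n
        assert np.isclose(s, 1.0 if l1 == l2 else 0.0)
\end{verbatim}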
Given the orthogonality properties in Eqs.~(\ref{ort1}) and (\ref{ort2}) one can define a macroscopic quantum state for each one of the irreducible representations of the cyclic group as follows
\begin{equation}
\left\vert \psi^{(\lambda)}_n \right\rangle=\mathcal{N}_\lambda \sum_{r=1}^n \chi^{(\lambda)}_n (g_r) \vert \alpha_r \rangle \, , \quad \sum_{r,r'=1}^n \chi^{(\lambda)}_n (g_r) \chi^{*(\lambda)}_n (g_{r'}) \langle \alpha_{r'} \vert \alpha_r \rangle = \mathcal{N}_\lambda^{-2} \, ,
\label{cats}
\end{equation}
where the coherent state parameter $\alpha_r={\rm Re}(\alpha_r)+i\,{\rm Im}(\alpha_r)$ is given by the rotation of a fixed number $\alpha$ in the complex plane,
\[
\left( \begin{array}{cc} {\rm Re}(\alpha_r) \\ {\rm Im}(\alpha_r)\end{array}\right)= R(\theta_r) \left( \begin{array}{cc} {\rm Re}(\alpha) \\ {\rm Im}(\alpha)\end{array}\right) \, .
\]
It is important to notice that all the states for the different irreducible representations form an orthonormal set with $\left\langle \psi_n^{(\lambda)} \Big\vert \psi_n^{(\lambda')} \right\rangle = \delta_{\lambda \lambda'}$. In the case of the cyclic group $C_2$ we obtain as the result the standard even and odd cat states $\vert \psi^{(1,2)} \rangle= \mathcal{N}_\pm (\vert \alpha \rangle \pm \vert - \alpha \rangle)$, which can have sub-Poissonian photon statistics, squeezing, and antibunching \cite{castanos}.
The coherent cyclic states $\vert \psi_n^{(\lambda)}\rangle$ are eigenstates of the $n$-th power of the annihilation operator $\hat{a}^n$, i.e.,
\[
\hat{a}^n \vert \psi_n^{(\lambda)}\rangle= \alpha^n \vert \psi_n^{(\lambda)}\rangle \, .
\]
Also, one can change the irreducible representation of the state by acting with the annihilation operator $\hat{a}$ on it:
\[
\hat{a}\vert \psi_n^{(\lambda)}\rangle=\alpha \frac{\mathcal{N}_{\lambda}}{ \mathcal{N}_{\lambda'}} \vert \psi_n^{(\lambda')}\rangle \, ,
\]
where the value of the new irreducible representation depends on the original one $\lambda'(\lambda)$.
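These properties can be illustrated numerically. The following minimal sketch (in Python with NumPy, in a truncated Fock basis; the choices $n=3$ and $\alpha=1.5$ are arbitrary) constructs the coherent cyclic states of Eq.~(\ref{cats}) and checks their orthonormality and the eigenvalue relation for $\hat{a}^n$:
\begin{verbatim}
import numpy as np
from math import factorial

N_F, n, alpha = 60, 3, 1.5          # Fock truncation, group order, amplitude
m = np.arange(N_F)
fact = np.array([factorial(k) for k in m], dtype=float)
coh = np.exp(-abs(alpha)**2 / 2) * alpha**m / np.sqrt(fact)  # |alpha>

mu = np.exp(2j * np.pi / n)
states = []
for lam in range(1, n + 1):
    psi = np.zeros(N_F, dtype=complex)
    for r in range(1, n + 1):
        theta = 2 * np.pi * (r - 1) / n
        # Fock amplitudes of the rotated coherent state |alpha_r>.
        psi += mu ** ((lam - 1) * (r - 1)) * np.exp(-1j * theta * m) * coh
    states.append(psi / np.linalg.norm(psi))

# Orthonormality of the n cyclic states.
G = np.array([[np.vdot(u, v) for v in states] for u in states])
assert np.allclose(G, np.eye(n), atol=1e-10)

# a^n |psi> = alpha^n |psi>  (up to truncation effects).
a = np.diag(np.sqrt(m[1:].astype(float)), k=1)
an = np.linalg.matrix_power(a, n)
assert np.allclose(an @ states[0], alpha**n * states[0], atol=1e-6)
\end{verbatim}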
\section{Generalization of cyclic states as superpositions of rotations in the phase space.}
The necessity of a generalization of the cyclic states to superpositions of arbitrary, non-coherent systems can be explained by their possible use in quantum information theory. Also, the cyclic states form an orthogonal set of states, which can lead to a finite representation of certain quantum systems.
First, let us suppose an initial quantum state $\vert \phi \rangle$ and its representation in the Fock basis
\[
\vert \phi \rangle = \sum_{m=0}^\infty A_m(\phi) \vert m \rangle \, , \quad {\rm with} \ \sum_{m=0}^\infty \vert A_m(\phi) \vert^2 =1 \, .
\]
The discrete rotations in the phase space associated to the symmetries of the regular polygon in the cyclic group $C_n$ are given by the operator $\hat{R}(\theta_j)=\exp(-i \theta_j \hat{n})$, where $\theta_j=2 \pi(j-1)/n$; $j=1, \ldots , n$, and $\hat{n}$ is the bosonic number operator. To each element of the cyclic group there then corresponds a rotation of the general state $\vert \phi \rangle$, which can be expressed as
\[
\vert \phi_j \rangle = \hat{R}(\theta_j) \vert \phi \rangle \, .
\]
\begin{mydef}
Let $\vert \phi \rangle=\sum_{m=0}^\infty A_m (\phi) \vert m \rangle$ be a quantum state with at least one mean quadrature component ($\hat{x}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$, $\hat{p}=i(\hat{a}^\dagger-\hat{a})/\sqrt{2}$) different from zero, i.e., $\langle \phi \vert \hat{x} \vert \phi \rangle \neq 0$, or $\langle \phi \vert \hat{p} \vert \phi \rangle \neq 0$. We define the general cyclic state for the irreducible representation $\lambda$ of the group $C_n$ as
\begin{equation}
\left\vert \psi_n^{(\lambda)} (\phi) \right\rangle = \mathcal{N}_\lambda \sum_{r=1}^n \chi^{(\lambda)}_n (g_r) \vert \phi_r \rangle \, ,
\label{ccy}
\end{equation}
where $\chi_n^{(\lambda)}(g_r)$ is the character associated to the irreducible representation $\lambda$ and to the element of the group $g_r \in C_n$, and where
\[
\mathcal{N}_\lambda^{-2}=\sum_{r,r'=1}^n \chi^{(\lambda)}(g_r) \chi^{*(\lambda)}(g_{r'}) \langle \phi_{r'} \vert \phi_r \rangle \, .
\]
\label{defi1}
\end{mydef}
To obtain a well-defined state we emphasize that the original state cannot be invariant under the rotations discussed above, i.e., $\vert \phi_r \rangle \neq \vert \phi \rangle$ for $r=2,\ldots,n$. This property can be satisfied when the Wigner function of the state in the phase space $W(x,p)$ is given by a non-symmetric distribution or when the state is not centered at the origin of the phase space, i.e., $\int dx \, dp \, x \, W(x,p)\neq 0$, or $\int dx \, dp \, p \, W(x,p)\neq 0$.
As these states carry the irreducible representation of the group $C_n$, they are invariant, up to a phase, under the discrete rotations $\hat{R}(\theta_j)$. To prove this property, let us consider the action of the rotation $\hat{R}(\theta_l)$, $1\leq l \leq n$, on the state $\left\vert \psi_n^{(\lambda)} (\phi) \right\rangle$,
\[
\hat{R}(\theta_l) \left\vert \psi_n^{(\lambda)} (\phi) \right\rangle= \mathcal{N}_\lambda \sum_{r=1}^n \chi_n^{(\lambda)} (g_r) \hat{R}(\theta_{r+l-1}) \vert \phi \rangle \, ,
\]
where we used $\theta_l+\theta_r=\theta_{r+l-1}$. As the character of the representation $\lambda$ satisfies
\[
\chi_n^{(\lambda)} (g_r)=\mu_n^{(\lambda-1)(r-1)}=\mu_n^{(\lambda-1)(r+l-2)}\mu_n^{(1-\lambda)(l-1)} = \chi_n^{(\lambda)} (g_{r+l-1}) \, \mu_n^{(1-\lambda)(l-1)} \, ,
\]
we obtain
\[
\hat{R}(\theta_l) \left\vert \psi_n^{(\lambda)} (\phi) \right\rangle= \mathcal{N}_\lambda \, \mu_n^{(1-\lambda)(l-1)} \sum_{r=1}^n \chi_n^{(\lambda)} (g_{r+l-1}) \hat{R}(\theta_{r+l-1}) \vert \phi \rangle \, .
\]
Given the periodicity of the characters and of the rotation operators ($\mu_n^{x+n}=\mu_n^x$, $\hat{R}(\theta_{j+n})=\hat{R}(\theta_{j})$), this sum gives us, up to a phase, the same state as the original, i.e.,
\begin{equation}
\hat{R}(\theta_l) \left\vert \psi_n^{(\lambda)} (\phi) \right\rangle= \mu_n^{(1-\lambda)(l-1)} \left\vert \psi_n^{(\lambda)}(\phi) \right\rangle \, .
\label{cyc_inv}
\end{equation}
It can also be seen that by the use of the explicit form of the rotated states in the Fock basis $\vert \phi_r \rangle=\sum_{m=0}^\infty A_m(\phi) e^{-i \theta_r m} \vert m \rangle$, one obtains
\[
\left\vert \psi_n^{(\lambda)} (\phi) \right\rangle = \mathcal{N}_\lambda \sum_{r=1}^n \sum_{m=0}^\infty \chi^{(\lambda)}_n (g_r) A_m(\phi) e^{-i\theta_r m}\vert m \rangle \, ,
\]
which can be also rewritten as
\begin{equation}
\left\vert \psi_n^{(\lambda)} (\phi) \right\rangle = \mathcal{N}_\lambda \sum_{r=1}^n \sum_{m=0}^\infty \mu_n^{(\lambda-1-m)(r-1)} A_m(\phi) \vert m \rangle \, .
\label{cyc1}
\end{equation}
Given the characteristics of the sum of the powers of the parameter $\mu_n$, expressed in Eq.~(\ref{powers}), one can show that the different states for the cyclic group $C_n$ form an orthonormal set. To show this, let us consider the inner product of two cyclic states with irreducible representations $\lambda$ and $\lambda^\prime$, i.e.,
\[
\left\langle \psi_n^{(\lambda^\prime)} (\phi) \right\vert \psi_n^{(\lambda)} (\phi) \Big\rangle= \mathcal{N}_\lambda \mathcal{N}_{\lambda^\prime} \sum_{r,r'=1}^n \sum_{m,m'=0}^\infty A_m(\phi) A_{m'}^*(\phi)\, \mu_n^{(\lambda-1-m)(r-1)} \mu_n^{(1-\lambda^\prime+m')(r'-1)} \delta_{m',m} \, ,
\]
performing first the sums over the parameter $r'$, we have
\[
\left\langle \psi_n^{(\lambda^\prime)} (\phi) \right\vert \psi_n^{(\lambda)} (\phi) \Big\rangle= \mathcal{N}_\lambda \mathcal{N}_{\lambda^\prime} n \sum_{m=0}^\infty \sum_{r=1}^n \vert A_m (\phi) \vert^2 \mu_n^{\lambda^\prime-1-m}\, \mu_n^{(\lambda-1-m)(r-1)} \delta_{{\rm mod}(1-\lambda^\prime+m,n),0} \, .
\]
As established by Theorem~\ref{tt1}, this sum is different from zero when $1-\lambda^\prime+m=s n$ (with $s\in \mathbb{Z}$). This leads to the condition $m=sn-1+\lambda^\prime$, for which the residual phase $\mu_n^{\lambda^\prime-1-m}=\mu_n^{-sn}=1$. From this, we can change the sum over $m$ to a sum over $s$, obtaining
\[
\left\langle \psi_n^{(\lambda^\prime)} (\phi) \right\vert \psi_n^{(\lambda)} (\phi) \Big\rangle= \mathcal{N}_\lambda \mathcal{N}_{\lambda^\prime} \, n \sum_{s=0}^\infty \sum_{r=1}^n \vert A_{ns-1+\lambda^\prime} (\phi) \vert^2 \mu_n^{(\lambda-\lambda^\prime)(r-1)}\, .
\]
Similarly to the previous step, the sum over the parameter $r$ is different from zero when $\lambda-\lambda'=s' n$ with $s' \in \mathbb{Z}$. As the parameters satisfy $1\leq \lambda,\lambda' \leq n$, the only possible value is $\lambda-\lambda'=0$, so
\[
\left\langle \psi_n^{(\lambda^\prime)} (\phi) \right\vert \psi_n^{(\lambda)} (\phi) \Big\rangle= \mathcal{N}_\lambda \mathcal{N}_{\lambda^\prime}\, n^2 \sum_{s=0}^\infty \vert A_{ns-1+\lambda} (\phi) \vert^2 \delta_{\lambda,\lambda'} \, ,
\]
which in the case $\lambda\neq \lambda'$ is equal to zero and, by the expression for the normalization constant in Definition~\ref{defi1}, is equal to one when $\lambda= \lambda'$. Finally, we arrive at the expression
\[
\left\langle \psi_n^{(\lambda^\prime)} (\phi) \right\vert \psi_n^{(\lambda)} (\phi) \Big\rangle=\delta_{\lambda,\lambda'} \, .
\]
Other important properties of the cyclic states are addressed in the next section.
\section{State erasure as a quantum map and the cyclic states.}
In this section, the connection between the erasure map and the cyclic states is studied. This correspondence can lead to the experimental implementation of the cyclic states, as these states can be seen as coming from the absorption (or erasure) of certain photon numbers.
The general cyclic states defined above can also be obtained as the result of a selective loss of information in a quantum system, that is, from the erasure of a subset of states of an original state $\vert \phi \rangle=\sum_{m=0}^\infty A_m(\phi) \vert m \rangle$.
As an example, one can consider the selective erasure of the probability amplitudes $A_m(\phi)$ for all odd values of $m$, after which a renormalization of the state is performed. In that case, one obtains the following state made of only even number states
\begin{equation}
\vert \psi_{even} \rangle = N \sum_{m \ even} A_m (\phi) \vert m \rangle \, ,
\label{even}
\end{equation}
where $N$ is the normalization constant $N^{-2}=\sum_{m \ even} \vert A_m(\phi) \vert^2$. Let us compare the previous expression with the cyclic state for $n=2$, $\lambda=1$: $\left\vert \psi_2^{(1)} (\phi)\right\rangle$. This state is given by
\[
\left\vert \psi_2^{(1)} (\phi) \right\rangle=\mathcal{N}_1 \sum_{r=1}^2 \sum_{m=0}^\infty \mu_2^{r m} A_m (\phi) \vert m \rangle, \quad \mu_2=-1\, .
\]
By performing the sum over $r$, we then obtain
\[
\left\vert \psi_2^{(1)} (\phi)\right\rangle=\mathcal{N}_1 \sum_{m=0}^\infty (1+(-1)^m) A_m (\phi) \vert m \rangle=\mathcal{N}_1 \sum_{m \ even} 2 A_m (\phi) \vert m \rangle \, ,
\]
which is the same expression as Eq.~(\ref{even}) with $N=2\mathcal{N}_1$. The same can be done for the state resulting from the elimination of the even states, which is equal to the cyclic state with $n=2$, $\lambda=2$, i.e., $\left\vert \psi_2^{(2)}(\phi) \right\rangle=N \sum_{m \ odd} A_m (\phi) \vert m \rangle$. In general, the equality between the cyclic states and the states resulting from the elimination of certain number states can be established in the following theorem.
\begin{theorem}
Let $n$ and $\lambda$ be two positive integers with $\lambda\leq n$, and let $\vert \Psi_{n,\lambda} (\phi) \rangle$ be the renormalized state obtained after the elimination of the number states $\vert m \rangle$ in $\vert \phi \rangle=\sum_{m=0}^\infty A_m (\phi) \vert m \rangle$ which do not satisfy the condition ${\rm mod}(m-\lambda+1,n)=0$; then $\vert \Psi_{n,\lambda} (\phi)\rangle$ is equal to the cyclic state $\left\vert \psi_n^{(\lambda)}(\phi)\right\rangle$.
\label{teo2}
\end{theorem}
\begin{proof}
The state after the erasure map, $\vert \Psi_{n,\lambda} (\phi) \rangle$, has the following expression
\[
\vert \Psi_{n,\lambda} \rangle=N_{\lambda,n} \sum_{m} A_m (\phi) \delta_{{\rm mod}(m-\lambda+1,n),0} \vert m \rangle
\]
which only contains the number states that satisfy $m-\lambda+1=ln$ (with $l$ a nonnegative integer), then
\begin{equation}
\vert \Psi_{n,\lambda} (\phi) \rangle=N_{\lambda,n} \sum_{l} A_{\lambda-1+ln} (\phi) \vert \lambda-1+ln \rangle \, .
\label{elim}
\end{equation}
On the other hand, by using the property of the roots of the identity ($\mu_n$) given in Theorem~\ref{tt1} ($\sum_{j=1}^n \mu_n^{j l}=n \, \delta_{{\rm mod}(l,n),0}$) in the definition of $\left\vert \psi_n^{(\lambda)}(\phi)\right\rangle$ in Eq.~(\ref{cyc1}), we can show that
\[
\sum_{r=1}^n \mu_n^{(\lambda-1-m)(r-1)}=n \, \delta_{{\rm mod}(\lambda-1-m,n),0} \, ,
\]
where the residual phase $\mu_n^{m+1-\lambda}$ coming from the shift of the summation index equals one on the support of the Kronecker delta. This means that only the states with $m=\lambda-1+l n$ (with $l$ a nonnegative integer) are part of $\left\vert \psi_n^{(\lambda)}\right\rangle$, i.e.,
\begin{equation}
\left\vert \psi_n^{(\lambda)}(\phi)\right\rangle=n \, \mathcal{N}_\lambda \sum_{l=0}^\infty A_{\lambda-1+l n} (\phi)\vert \lambda-1+l n \rangle \, .
\label{cyc2}
\end{equation}
Finally, when comparing Eqs.~(\ref{elim}) and~(\ref{cyc2}) we arrive at the conclusion
\begin{equation}
\vert \Psi_{n, \lambda} \rangle = \left\vert \psi_n^{(\lambda)} (\phi)\right\rangle \, ,
\end{equation}
with the relation between the normalization constants being $n\, \mathcal{N}_\lambda=N_{\lambda,n}$.
\end{proof}
Given this identification, it can be seen that the photon number statistics for the state $\left\vert \psi_n^{(\lambda)} (\phi) \right\rangle$ contain only the photon numbers which satisfy ${\rm mod}(m-\lambda+1,n)=0$.
The correspondence between the cyclic states and the states resulting from the quantum erasure map can lead to the experimental realization of the cyclic states. One can, for example, think of an initial noninvariant state $\vert \phi \rangle$ with a small mean photon number ($\langle \phi \vert \hat{n} \vert \phi \rangle \approx 0$). If one has a process where the number states $\vert 1 \rangle$ or $\vert 2 \rangle$ are erased, e.g., by the absorption of one or two photons of the electromagnetic field, then one can expect that the resulting state will be similar to a cyclic state.
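Theorem~\ref{teo2} can also be checked numerically; the following minimal sketch (in Python with NumPy, with an arbitrary random initial state) compares the superposition construction of Eq.~(\ref{ccy}) with the erasure-and-renormalization map:
\begin{verbatim}
import numpy as np

N_F, n, lam = 30, 3, 2
m = np.arange(N_F)
mu = np.exp(2j * np.pi / n)
rng = np.random.default_rng(2)
phi = rng.normal(size=N_F) + 1j * rng.normal(size=N_F)
phi /= np.linalg.norm(phi)

# Cyclic state built as a superposition of rotated copies of |phi>.
psi = sum(mu ** ((lam - 1) * (r - 1))
          * np.exp(-2j * np.pi * (r - 1) * m / n) * phi
          for r in range(1, n + 1))
psi /= np.linalg.norm(psi)

# Erasure map: keep only the amplitudes with mod(m - lam + 1, n) = 0.
erased = np.where((m - lam + 1) % n == 0, phi, 0)
erased /= np.linalg.norm(erased)

# Both constructions give the same state.
assert np.isclose(abs(np.vdot(erased, psi)), 1.0)
\end{verbatim}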
\section{Examples.}
\subsection{Cyclic Gaussian states.}
Here we define different superpositions of Gaussian states associated to the cyclic groups. These superpositions are connected with the squeezed states defined in \cite{nieto,hillery}. As an example of the general procedure described above, one can define cyclic states using Gaussian wavepackets as initial systems.
Suppose a general one dimensional Gaussian state in the position basis
\begin{equation}
\psi (x)= \left(\frac{a+a^*}{\pi}\frac{1+2a}{1+2a^*}\right)^{1/4} \exp\left\{ -\frac{b^2+b b^*}{4(a+a^*)}\right\} \exp \left\{ -a x^2+b x\right\}\, , \quad a_R>0\, , \ b\neq0 \, ,
\label{gaus}
\end{equation}
with $a=a_R+ia_I$, $b=b_R+i b_I$. This state can be characterized by the mean values of the quadrature components $(\hat{p},\hat{x})$ and the corresponding covariance matrix $\sigma$, which, in the case of the state (\ref{gaus}), are
\begin{equation}
\langle \hat{x} \rangle=\frac{b+b^*}{2(a+a^*)}, \quad \langle \hat{p} \rangle= \frac{i (a b^*-a^* b)}{a+a^*}\, , \quad \sigma=\frac{1}{2(a+a^*)}\left(\begin{array}{cc}
4 \vert a \vert^2 & i(a-a^*) \\ i(a-a^*) & 1
\end{array}\right) \, .
\end{equation}
When this state is rotated in the phase space using the propagator $\langle x \vert \hat{R}(\theta) \vert y \rangle$, where $\hat{R}(\theta)=\exp(-i\theta \hat{n})$ is the rotation operator, the obtained state is still Gaussian with new parameters $a(\theta)$, $b (\theta)$ given in terms of the original Gaussian parameters $a$, and $b$, as follows
\begin{eqnarray}
a (\theta)&=&\frac{2 i a \cos \theta-\sin \theta}{2 (i \cos \theta-2 a \sin \theta)} \, , \nonumber \\
b (\theta)&=& \frac{b}{\cos \theta+2 i a \sin \theta} \, .
\end{eqnarray}
The cyclic Gaussian state for the irreducible representation $\lambda$ of the group $C_n$ is then given by the expression
\begin{equation}
\Psi_n^{(\lambda)}(x)= \mathcal{N}_\lambda \sum_{r=1}^n \chi_n^{(\lambda)} (g_r) \psi_r (x)\, ,
\label{gcic}
\end{equation}
with a value of $\psi_r (x)$ analogous to the initial state of Eq.~(\ref{gaus})
\begin{eqnarray}
\psi_r (x)=\left(\frac{a(\theta_r)+a^*(\theta_r)}{\pi}\frac{1+2a(\theta_r)}{1+2a^*(\theta_r)}\right)^{1/4} \exp\left\{ -\frac{b^2(\theta_r)+b(\theta_r) b^*(\theta_r)}{4(a(\theta_r)+a^*(\theta_r))}\right\} \times \nonumber \\
\exp \{ -a(\theta_r) x^2+b (\theta_r) x \} \, .
\end{eqnarray}
Given this expression one can construct then the cyclic states using Eq.~(\ref{gcic}).
For the cyclic group $C_2$, the cyclic states can be described by the following two orthogonal states
\begin{equation}
\Psi^{(1,2)}_2 (x)=\mathcal{N}_{1,2} \, e^{-a x^2}(e^{b x} \pm e^{-b x}) \, , \quad \mathcal{N}_{1,2}=\left( \frac{a+a^*}{\pi}\frac{1+2a}{1+2a^*}\right)^{1/4} \frac{e^{-\frac{b^2+b b^*}{4(a+a^*)}}}{\sqrt{2}\left(1\pm e^{-\frac{b b^*}{a+a^*}}\right)^{1/2}} \, ,
\label{goga}
\end{equation}
\begin{figure}
\centering
(a)\includegraphics[scale=0.25]{a14go}
(b)\includegraphics[scale=0.25]{a12go}
(c)\includegraphics[scale=0.25]{a1go}
(d)\includegraphics[scale=0.25]{a14ga}
(e)\includegraphics[scale=0.25]{a12ga}
(f)\includegraphics[scale=0.25]{a1ga}
\caption{Mandel parameter $M_Q$ as a function of the real and imaginary parts of the parameter $b=b_R+i b_I$, for the states associated to the cyclic group $C_2$, $\Psi_2^{(1)}(x)$ with (a) $a=1/4$, (b) $a=1/2$, (c) $a=1$; and for the state $\Psi_2^{(2)}(x)$ for (d) $a=1/4$, (e) $a=1/2$, and (f) $a=1$ . \label{mandel}}
\end{figure}
which have specific properties. In Fig.~\ref{mandel}, the Mandel parameter \cite{mandel} $M_Q=\langle (\Delta \hat{n})^2 \rangle/ \langle \hat{n} \rangle$ is shown for the cyclic Gaussian states of $C_2$ given in Eq.~(\ref{goga}), for three different values of the parameter $a$.
A Mandel parameter $M_Q<1$ indicates sub-Poissonian photon statistics, as opposed to Poissonian ($M_Q=1$) or super-Poissonian ($M_Q>1$) statistics. As can be seen in the figure, the cyclic states can have sub-Poissonian distributions for certain regions of the parameter $b=b_R+i b_I$; this behavior is more prominent in the states associated to the second irreducible representation of the group, $\Psi^{(2)}_2(x)$.
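A minimal sketch of the $M_Q$ computation for a state given by its Fock amplitudes is shown below (in Python with NumPy; for brevity it is illustrated on the $C_2$ coherent cat states rather than on the Gaussian states of Eq.~(\ref{goga}), and the value of $\alpha$ is arbitrary):
\begin{verbatim}
import numpy as np
from math import factorial

def mandel(psi):
    # M_Q = <(Delta n)^2> / <n> from the photon-number distribution.
    p = np.abs(psi) ** 2 / np.sum(np.abs(psi) ** 2)
    m = np.arange(len(psi))
    n_mean = np.sum(m * p)
    return (np.sum(m**2 * p) - n_mean**2) / n_mean

N_F, alpha = 40, 1.2
m = np.arange(N_F)
fact = np.array([factorial(k) for k in m], dtype=float)
coh = np.exp(-alpha**2 / 2) * alpha**m / np.sqrt(fact)

even = np.where(m % 2 == 0, coh, 0.0)   # lambda = 1 (even cat)
odd  = np.where(m % 2 == 1, coh, 0.0)   # lambda = 2 (odd cat)
print(mandel(even), mandel(odd))        # the odd cat is sub-Poissonian
\end{verbatim}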
Similarly to the states above, the ones associated to the cyclic group $C_3$ can be obtained. In Fig.~\ref{wignerc3}, the plots and contours of the Wigner function \cite{wigner}, $W_\psi(x,p)=\int \, dy \, \psi^*(x+y) \psi (x-y) e^{2ipy}/\pi$, can be seen. In the contour plots in the phase space ($p,x$) one notices the symmetry of the states under the rotations by angles $0$, $2\pi/3$, and $4\pi/3$ with respect to the $x$ axis. It is also important to say that the Wigner functions depicted in the figure do not have inversion symmetry, as they are only invariant under the rotations contained in the $C_3$ group.
\begin{figure}
\centering
\includegraphics[scale=0.25]{c31wig-1}
\includegraphics[scale=0.25]{c33wig-1}
\includegraphics[scale=0.25]{c32wig-1}
\includegraphics[scale=0.22]{c31con-1}
\includegraphics[scale=0.22]{c33con-1}
\includegraphics[scale=0.22]{c32con-1}
\caption{Wigner functions and their contour plots for the cyclic Gaussian states associated to $C_3$ for the irreducible representations $\lambda=1$ (left), $\lambda=3$ (center), and $\lambda=2$ (right). For these figures the chosen parameters were $a=1$ and $b=\sqrt{2}(1+i)$. The black lines in the contour plots depict the symmetry axes associated to the $C_3$ group. \label{wignerc3}}
\end{figure}
\subsection{Circle symmetric states}
When one increases the order of the cyclic group, the states obtained by our method must be invariant under more and more rotations in the phase space. It is known \cite{janszky} that there exists a correspondence between the circle symmetric states in the coherent case and the Fock number states. This leads us to the question: what do the generalized cyclic states associated to a very large number of symmetries look like? E.g., when the order of the cyclic group tends to infinity ($n\rightarrow \infty$), can they also be associated to the Fock states? To answer these questions, one can notice that Definition~\ref{defi1} of the cyclic states allows us to make a generalization in the case where the angle $\theta$, which determines the rotations $\hat{R}(\theta)$, becomes a continuous variable. In that case, the definition of the cyclic states becomes
\begin{equation}
\vert \psi_\infty^{(\lambda)}\rangle= \mathcal{N}_\lambda \int_0^{2\pi} d\theta \, e^{i\theta (\lambda-1)} e^{-i \theta \hat{n}} \vert \phi \rangle \, ,
\end{equation}
where we have an infinite number of irreducible representations, i.e., $\lambda \in \mathbb{Z}^+$. By means of the photon number decomposition $\vert \phi \rangle=\sum_m A_m (\phi) \vert m \rangle$, one obtains
\[
\vert \psi_\infty^{(\lambda)}\rangle= \mathcal{N}_\lambda \sum_{m=0}^\infty \int_0^{2\pi} d\theta \, A_m (\phi) e^{i\theta (\lambda-1-m)} \vert m \rangle \, ,
\]
as the integral is equal to $2\pi$ times the Kronecker delta $\delta_{\lambda-1,m}$, we arrive at the result
\[
\vert \psi_\infty^{(\lambda)}\rangle= 2\pi \mathcal{N}_\lambda A_{\lambda-1}(\phi) \vert \lambda-1 \rangle \, ,
\]
which, after renormalization, corresponds to the number state
\begin{equation}
\vert \psi_\infty^{(\lambda)}\rangle=\vert \lambda-1 \rangle \, .
\end{equation}
We point out that this expression for the circle cyclic states is consistent with the erasure map of the state $\vert \phi \rangle$, as in principle we need to erase all the states but the one that satisfies the condition $m-\lambda+1=0$. This result leads us to the conclusion that the cyclic superposition ($n \rightarrow \infty$) of any state which is noninvariant under any rotation in the phase space is equal to a Fock state, regardless of the initial, noninvariant state $\vert \phi \rangle$ that we take into consideration. We would like to emphasize that, in order for this property to be true, the state under consideration $\vert \phi \rangle$ must be noninvariant under all possible rotations in the phase space. This implies that $\vert \phi \rangle$ must be expressed by an infinite sum of the photon number states $\vert m \rangle$ with nonzero probability amplitudes $A_m(\phi)$. To show this we can take as an example the $C_2$ group. In order for a state $\vert \phi \rangle$ to be noninvariant under the $C_2$ rotation, it should be made of the superposition of at least two states $\vert m_1 \rangle$ and $\vert m_2 \rangle$, with $m_1$ even and $m_2$ odd ($m_1,m_2\in \mathbb{Z}^+$). In the case of $C_3$ we need at least three states $\vert m_1 \rangle$, $\vert m_2 \rangle$, and $\vert m_3 \rangle$ such that mod$(m_1,3)=0$, mod$(m_2,3)=1$, and mod$(m_3,3)=2$ ($m_1,m_2,m_3\in \mathbb{Z}^+$). By the extension of this argument, we need an infinite number of photon states in order for $\vert \phi \rangle$ to be a noninvariant state under $C_\infty$. As examples of this type of states one can name the coherent, the squeezed coherent, the non-centered Gaussian, and any noninvariant, continuous variable state.
To show that the superposition of several rotations of an initial continuous variable system can form a Fock state, one can take as an example the Gaussian state of Eq.~(\ref{gaus}) with $a=1$, $b=\sqrt{6}+2 i$. In Fig.~\ref{wigcirc}, the Wigner functions and their contours are shown for the cyclic states associated to the first irreducible representation of $C_n$ for $n=10$ (left), $n=15$ (center), and $n=20$ (right). Here, one can see how the cyclic states of increasing group order look more and more like the vacuum state $\vert 0 \rangle$. Additionally, it can be checked that, for a given irreducible representation of the cyclic group, a different photon number state is obtained for a sufficiently large order $n$ of the cyclic group.
\begin{figure}
\centering
\includegraphics[scale=0.30]{wig10-1}
\includegraphics[scale=0.30]{wig15-1}
\includegraphics[scale=0.30]{wig20-1}\\
\includegraphics[scale=0.35]{wig10-1con}
\includegraphics[scale=0.35]{wig15-1con}
\includegraphics[scale=0.35]{wig20-1con}
\caption{Wigner functions and their contour plots for the cyclic Gaussian state of $C_n$ for the irreducible representation $\lambda=1$ for (a) $n=10$ (left), (b) $n=15$ (center), and (c) $n=20$ (right). In all the plots we took the parameters $a=1$ and $b=\sqrt{6}+2 i$. \label{wigcirc}}
\end{figure}
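The convergence towards a Fock state can also be checked directly in the erasure picture of Theorem~\ref{teo2}; the sketch below (in Python with NumPy) computes the fidelity of the $\lambda=1$ cyclic state with $\vert 0 \rangle$, using a coherent initial state as a stand-in for the Gaussian state of Eq.~(\ref{gaus}):
\begin{verbatim}
import numpy as np
from math import factorial

N_F, alpha = 80, 1.5
m = np.arange(N_F)
fact = np.array([factorial(k) for k in m], dtype=float)
A = np.exp(-alpha**2 / 2) * alpha**m / np.sqrt(fact)  # amplitudes of |phi>

for n in [2, 5, 10, 20]:
    # lambda = 1: keep only the amplitudes with mod(m, n) = 0.
    kept = np.abs(A[m % n == 0]) ** 2
    fidelity = kept[0] / kept.sum()   # overlap with |0> after renormalization
    print(n, fidelity)                # approaches 1 as n grows
\end{verbatim}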
\section{Cyclic group density matrices.}
The previous discussion about the properties of the erasure map and its relation with the states associated to the cyclic groups can be extended to any kind of state which is not invariant under the rotation operation. For example, one can think of a density matrix which may correspond to a mixed state $\hat{\rho}$ and define the following cyclic density matrices
\begin{mydef}
Let $\hat{\rho}$ be a density matrix with at least one of its mean quadrature components ($\hat{x}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$, $\hat{p}=i(\hat{a}^\dagger-\hat{a})/\sqrt{2}$) different from zero, i.e., ${\rm Tr}(\hat{\rho}\, \hat{x})\neq 0$, or ${\rm Tr}(\hat{\rho}\, \hat{p})\neq 0$. Then the state associated to the irreducible representation $\lambda$ of the cyclic group $C_n$ is defined as
\begin{equation}
\hat{\rho}^{(\lambda)}_n=\mathcal{N}_\lambda \sum_{r,s=1}^n \chi_n^{(\lambda)} (g_r) \chi_n^{*(\lambda)}(g_s) \hat{R}(\theta_r) \hat{\rho} \hat{R}^\dagger(\theta_s) \, ,
\label{rhoo}
\end{equation}
where $\chi_n^{(\lambda)} (g_r)$ is the character for the group element $g_r$, $\hat{R}(\theta_r)=\exp{(-i \theta_r \hat{n})}$, and
\[
\mathcal{N}_\lambda^{-1}=\sum_{r,s=1}^n \chi_n^{(\lambda)} (g_r) \chi_n^{*(\lambda)}(g_s) \, {\rm Tr}(\hat{R}(\theta_r) \hat{\rho} \hat{R}^\dagger(\theta_s)) \, .
\]
\end{mydef}
These density matrices have the same properties as the cyclic states, being invariant up to a phase under the rotations of the cyclic group. Also, they have a photon distribution where not all photon numbers are present, as they can be obtained by the elimination of certain Fock states. To show this, one can follow a procedure analogous to that of Theorem~\ref{teo2}. Let us suppose $\hat{\rho}=\sum_{m,m'=0}^\infty A_{m,m'}(\hat{\rho}) \vert m \rangle \langle m' \vert$, with ${\rm Tr}(\hat{\rho})=\sum_{m=0}^\infty A_{m,m}(\hat{\rho})=1$. This expression together with Eqs.~(\ref{chi}) and (\ref{rhoo}) allows us to rewrite $\hat{\rho}_n^{(\lambda)}$ as follows
\[
\hat{\rho}_n^{(\lambda)}=\mathcal{N}_\lambda \sum_{m,m'=0}^\infty A_{m,m'}(\hat{\rho})\sum_{r,s=1}^n \mu_n^{(\lambda-1)(r-1)} \mu_n^{(\lambda-1)(1-s)} e^{-i \theta_r m} e^{i \theta_s m'} \vert m \rangle \langle m' \vert \, ,
\]
by using the definition $\theta_j=2\pi (j-1)/n$ and Theorem \ref{tt1}, we can perform the sums over the parameters $r$ and $s$; these sums are
\begin{eqnarray}
\sum_{r=1}^n \mu_n^{(\lambda-1-m)r}&=&n \, \delta_{{\rm mod}(\lambda-1-m,n),0} \, , \nonumber \\
\sum_{s=1}^n \mu_n^{-(\lambda-1-m')s} &=& n \, \delta_{{\rm mod}(\lambda-1-m',n),0} \, ,
\end{eqnarray}
so that we can finally write the cyclic density matrices as follows
\[
\hat{\rho}_n^{(\lambda)}=\mathcal{N}_\lambda \, n^2 \sum_{m,m'=0}^\infty A_{m,m'}(\hat{\rho}) \, \mu_n^{m'-m}\, \delta_{{\rm mod}(\lambda-1-m,n),0} \, \delta_{{\rm mod}(\lambda-1-m',n),0} \, \vert m \rangle \langle m' \vert \, ,
\]
as the delta functions imply that $\lambda-1-m$ and $\lambda-1-m'$ are multiples of $n$, we can write $\lambda-1-m=\eta n$ and $\lambda-1-m'=\xi n$, so that $m'-m=(\eta-\xi)n$ is also a multiple of $n$. From this property, we conclude that $\mu_n^{m'-m}=1$ and finally arrive at the expression for the cyclic density matrix
\begin{equation}
\hat{\rho}_n^{(\lambda)}=\mathcal{N}_\lambda \, n^2 \sum_{m,m'=0}^\infty A_{m,m'}(\hat{\rho}) \, \delta_{{\rm mod}(\lambda-1-m,n),0} \, \delta_{{\rm mod}(\lambda-1-m',n),0} \, \vert m \rangle \langle m' \vert \, .
\label{rhot}
\end{equation}
This property is summarized in the following theorem:
\begin{theorem}
Let $n$ and $\lambda$ be two positive integers with $\lambda\leq n$, and let $\hat{\rho}_{n,\lambda}$ be the renormalized state obtained after eliminating, in $\hat{\rho}=\sum_{m,m'=0}^\infty A_{m,m'} (\hat{\rho}) \vert m \rangle \langle m' \vert$, the number-state operators $\vert m \rangle \langle m' \vert$ that do not satisfy the conditions ${\rm mod}(\lambda-1-m,n)=0$ and ${\rm mod}(\lambda-1-m',n)=0$. Then $\hat{\rho}_{n,\lambda}$ is equal to the cyclic state $\hat{\rho}_n^{(\lambda)}$.
\label{teo3}
\end{theorem}
It is noteworthy that, from Eq.~(\ref{rhot}) and the property that $m'-m$ is a multiple of $n$, we can immediately show that the cyclic density matrices are invariant under the rotations of the cyclic group. In other words, the density matrix $\hat{\rho}_n^{(\lambda)}$ after the rotation $\hat{R}(\theta_j)$, i.e.,
\[
\hat{R}(\theta_j)\hat{\rho}_n^{(\lambda)} \hat{R}^\dagger (\theta_j)=\mathcal{N}_\lambda \, n^2 \sum_{m,m'=0}^\infty A_{m,m'}(\hat{\rho}) \, \delta_{{\rm mod}(\lambda-1-m,n),0} \, \delta_{{\rm mod}(\lambda-1-m',n),0} \, \mu_n^{(m'-m)(j-1)}\, \vert m \rangle \langle m' \vert \, ,
\]
is equal to the initial density matrix, so finally one can establish
\[
\hat{R}(\theta_j)\hat{\rho}_n^{(\lambda)} \hat{R}^\dagger (\theta_j)=\hat{\rho}_n^{(\lambda)} \, .
\]
As in the case of the pure cyclic states, the photon number distribution of the cyclic density matrices contains only some of the number states. Given that the different states associated to the cyclic group $C_n$ are made of different photon number states, we can conclude that the cyclic density matrices form an orthogonal set.
\section{Example: Calculation of the entanglement in a bipartite state.}
As an example of the applications of the cyclic states, we show that they can be used to describe a continuous variable system in a discrete way, and that this discrete form can lead to an easier calculation of quantities such as the entanglement between the parts of a bipartite system. Consider a two-mode state made entirely of the rotated states $\{\vert \phi_r \rangle_1, \vert \varphi_r \rangle_2 ; r=1, \ldots, n\}$ for modes 1 and 2, respectively, e.g., the state
\begin{equation}
\vert T\rangle= \sum_{r=1}^n c_r \vert \phi_r \rangle_1 \vert \varphi_r \rangle_2 \, , \quad \sum_{r,r'=1}^n c_r c_{r'}^* \langle \phi_{r'} , \varphi_{r'} \vert \phi_r, \varphi_r\rangle =1 \, .
\label{tstate}
\end{equation}
Since the states $\vert \phi_r \rangle=\hat{R}(\theta_r)\vert \phi \rangle$, $\vert \varphi_r \rangle=\hat{R}(\theta_r)\vert \varphi \rangle$ are general, they need not be orthogonal. On the other hand, the cyclic states generated by these states form an orthogonal set. Most importantly, since there are as many cyclic states $\vert \psi_n^{(\lambda)} (\phi)\rangle$ and $\vert \psi_n^{(\lambda)}(\varphi) \rangle$ as rotated states $\vert \phi_r \rangle$ and $\vert \varphi_r \rangle$, one can express the rotated states in terms of the cyclic, orthogonal ones. To obtain these expressions, one must invert the relation of Eq.~(\ref{ccy})
\[
\vert \psi_n^{(\lambda)}(\phi) \rangle=\mathcal{N}_\lambda \sum_{r=1}^n \mu_n^{(\lambda-1)(r-1)} \vert \phi_r \rangle \, ,
\]
to do that, one can treat the characters of the group as a matrix $M_{jk}=\mu_n^{(j-1)(k-1)}$, whose inverse is $M_{jk}^{-1}=\mu_n^{(1-j)(k-1)}/n$. This yields the inverse relation
\begin{equation}
\vert \phi_r \rangle = \frac{1}{n \, \mathcal{N}_\lambda} \sum_{\lambda=1}^n \mu_n^{(1-r)(\lambda-1)} \vert \psi_n^{(\lambda)}(\phi) \rangle \, .
\label{innv}
\end{equation}
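The claimed inverse can be verified directly; a minimal numerical check (the group order is an illustrative choice) is:
\begin{verbatim}
import numpy as np

n = 5                                       # illustrative group order
mu = np.exp(2j * np.pi / n)
j, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1),
                   indexing="ij")
M = mu**((j - 1) * (k - 1))                 # character matrix M_{jk}
Minv = mu**((1 - j) * (k - 1)) / n          # claimed inverse
print(np.allclose(M @ Minv, np.eye(n)))     # True
\end{verbatim}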
By substituting this expression and an analogous expression for $\vert \varphi_r \rangle$ into the two-mode state $\vert T \rangle$, one obtains
\[
\vert T \rangle =\frac{1}{n^2}\sum_{r=1}^n c_r \sum_{\lambda,\lambda'=1}^n \frac{1}{\mathcal{N}_\lambda \mathcal{N}_{\lambda'}}\mu_n^{(1-r)(\lambda-1)} \mu_n^{(1-r)(\lambda'-1)} \vert \psi_n^{(\lambda)}(\phi) \rangle_1 \vert \psi_n^{(\lambda')}(\varphi) \rangle_2 \, .
\]
From this expression it is possible to calculate the partial density matrices of each mode in the bipartite state. For this, we compute the total density matrix and perform the partial trace operation, finally arriving at
\begin{eqnarray*}
\hat{\rho}(1)=\sum_{r,s,\lambda,\lambda', \mu=1}^n D_{r,\lambda,\lambda'} D_{s,\mu,\lambda'}^* \, \vert \psi_n^{(\lambda)} (\phi) \rangle \langle \psi_n^{(\mu)} (\phi) \vert ,\nonumber \\
\hat{\rho}(2)=\sum_{r,s,\lambda,\lambda', \mu'=1}^n D_{r,\lambda,\lambda'} D_{s,\lambda,\mu'}^* \, \vert \psi_n^{(\lambda')} (\varphi) \rangle \langle \psi_n^{(\mu')} (\varphi) \vert \, ,
\end{eqnarray*}
where $D_{r,\lambda,\lambda'}=\frac{ \mu_n^{(1-r)(\lambda+\lambda'-2)}}{n^2 \mathcal{N}_\lambda \mathcal{N}_{\lambda'}} c_r$. After this, one can calculate the entanglement between the modes, quantified here by the linear entropy of the partial density matrices, which gives
\begin{equation}
S_L(1)=1- \sum_{\lambda,\mu=1}^n \vert F_{\lambda,\mu}\vert^2\, , \quad F_{\lambda,\mu}=\sum_{r,s,\lambda'=1}^n D_{r,\lambda,\lambda'} D_{s,\mu,\lambda'}^* \, .
\end{equation}
The quantification of the entanglement using the decomposition of the two-mode system in terms of cyclic states is easier than working directly with the expression of the state $\vert T \rangle$ in Eq.~(\ref{tstate}). Several other quantities, such as the mean values and the covariance matrix of the system, can be calculated using this decomposition.
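A minimal numerical sketch of this entanglement computation, assuming the coefficients $c_r$ and the normalizations $\mathcal{N}_\lambda$ have already been obtained (the values below are purely hypothetical placeholders that ignore the normalization of Eq.~(\ref{tstate}), and the orthonormality of the cyclic basis is taken for granted), reads:
\begin{verbatim}
import numpy as np

n = 4
mu = np.exp(2j * np.pi / n)
c = np.array([0.5, 0.3, 0.1, 0.1], dtype=complex)  # hypothetical c_r
Nlam = np.ones(n)                                  # hypothetical N_lambda

r = np.arange(1, n + 1)
lam = np.arange(1, n + 1)
# D[r,l,l'] = mu^{(1-r)(l+l'-2)} c_r / (n^2 N_l N_l')
D = (mu**((1 - r)[:, None, None]
          * (lam[None, :, None] + lam[None, None, :] - 2))
     * c[:, None, None] / (n**2 * np.outer(Nlam, Nlam)[None, :, :]))

G = D.sum(axis=0)            # sum over r
F = G @ G.conj().T           # F_{lam,mu} = sum_{l'} G[lam,l'] G*[mu,l']
S_L = 1 - np.sum(np.abs(F)**2)
print(S_L)                   # linear entropy S_L(1)
\end{verbatim}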
\section{Generalized dihedral states}
The dihedral group of $n$-th order ($D_n$) is a non-Abelian group which contains all the symmetry operations of the $n$-sided regular polygon. In other words, it contains the rotations of the cyclic group $C_n$ and the inversion operators $\hat{U}_r$, $r=1,\ldots,n$. The inversions in the phase space are defined by a rotation followed by the complex conjugation operator $\hat{C}$, i.e., $\hat{U}_r=\hat{C}\hat{R}(\theta_r)$, with $\theta_r=2\pi(r-1)/n$. In order to obtain a state associated to the dihedral group, one must impose the condition that the state be invariant under both the rotations and the inversions contained in $D_n$. Inspired by the cyclic states, one can use a superposition of all the rotations and inversions of a noninvariant state $\vert \phi \rangle$, that is, a superposition of the states $\hat{R}(\theta_r)\vert \phi \rangle$ and $\hat{U}_r \vert \phi \rangle$. As seen in Sections 3 and 4, the superpositions with probability amplitudes given by the characters of the cyclic group $\chi_n^{(\lambda)} (g_r)$ are orthogonal, as they contain different photon numbers. Given these arguments, we define a set of $n$ dihedral states, each corresponding to an irreducible representation of the cyclic subgroup $C_n$, as follows
\begin{mydef}
Let $\vert \phi \rangle=\sum_{m=0}^\infty A_m (\phi) \vert m \rangle$ be a quantum state with at least one mean quadrature component ($\hat{x}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$, $\hat{p}=i(\hat{a}^\dagger-\hat{a})/\sqrt{2}$) different from zero, i.e., $\langle \phi \vert \hat{x} \vert \phi \rangle \neq 0$, or $\langle \phi \vert \hat{p} \vert \phi \rangle \neq 0$. The general dihedral state for the irreducible representation $\lambda$ of the subgroup $C_n$ is defined as
\begin{equation}
\left\vert \gamma_n^{(\lambda)} (\phi) \right\rangle = \mathcal{N}_\lambda \sum_{r=1}^n (\chi^{(\lambda)}_n (g_r) \vert \phi_r \rangle+\chi^{*(\lambda)}_n (g_r) \vert \phi^*_r \rangle) \, ,
\label{ccy}
\end{equation}
where $\chi_n^{(\lambda)}(g_r)$ is the character associated to the element of the group $g_r$ of the cyclic group, $\vert \phi^*_r \rangle=\hat{U}_r \vert \phi \rangle=\sum_{m=0}^\infty A^*_m (\phi) e^{i \theta_r m} \vert m \rangle$ ($\theta_r=2\pi (r-1)/n$), and where
\[
\mathcal{N}_\lambda^{-2}=\sum_{r,r'=1}^n (\chi_n^{*(\lambda)}(g_{r'}) \langle \phi_{r'} \vert+\chi_n^{(\lambda)}(g_{r'})\langle \phi^*_{r'} \vert)( \chi_n^{(\lambda)}(g_{r}) \vert \phi_r \rangle+\chi_n^{*(\lambda)}(g_{r})\vert \phi^*_r \rangle) \, .
\]
\label{defi3}
\end{mydef}
We would like to emphasize that this is the first time that an orthogonal set of states has been associated to the dihedral group. This set of states is invariant, up to a phase, under the application of all the dihedral group elements. As the construction of a dihedral state corresponds to the sum of two cyclic states, one with initial state $\vert \phi \rangle=\sum_{m=0}^\infty A_m (\phi) \vert m \rangle$ and the other with initial state $\vert \phi^* \rangle=\sum_{m=0}^\infty A^*_m (\phi) \vert m \rangle$, the invariance under rotations follows from the invariance (up to a phase) of the cyclic states in Eq.~(\ref{cyc_inv})
\[
\hat{R}(\theta_l)\vert \gamma^{(\lambda)}_n (\phi) \rangle=\mu_n^{(1-\lambda)l}\vert \gamma^{(\lambda)}_n (\phi) \rangle \, ,
\]
from this correspondence one can obtain an expression for the inversions acting on the dihedral states $\hat{U}_l \vert \gamma_n^{(\lambda)}\rangle$ ($\hat{U}_l=\hat{C}\hat{R}(\theta_l)$):
\begin{eqnarray*}
\hat{U}_l \vert \gamma_n^{(\lambda)}\rangle&=&\hat{C}\mu_n^{(1-\lambda)l}\vert \gamma^{(\lambda)}_n (\phi) \rangle \, \\
&=&\mu_n^{(\lambda-1)l} \vert \gamma_n^{(\lambda)} \rangle \, ,
\end{eqnarray*}
and thus one concludes that the dihedral state $\vert \gamma_n^{(\lambda)}\rangle$ in Def.~\ref{defi3} is invariant, up to a phase, under all the elements of the dihedral group $D_n$.
As we can see in Def.~\ref{defi3}, the dihedral states can be defined using the sum of a noninvariant state $\vert \phi \rangle$ and its conjugate $\vert \phi^* \rangle$. This implies that the cyclic state $\vert \psi_n^{(\lambda)}\rangle$ is also a dihedral state $\vert \gamma_n^{(\lambda)}\rangle$ when the initial state has only real photon number probability amplitudes $A_m(\phi)\in \mathbb{R}$, so that $\vert \phi \rangle=\vert \phi^* \rangle$. One can also notice that the dihedral states correspond to the erasure map of the state $(\vert \phi \rangle+\vert \phi^*\rangle)/\sqrt{2}$ since, as stated before, a dihedral state corresponds to the sum of the cyclic states for $\vert \phi \rangle$ and $\vert \phi^* \rangle$.
As stated before, the sum $ \chi^{(\lambda)}_n (g_r) \vert \phi \rangle+ \chi^{*(\lambda)}_n (g_r) \vert \phi^*\rangle$ used to obtain the dihedral superpositions is a state with real probability amplitudes: since $\vert \phi \rangle=\sum_{m=0}^\infty A_m(\phi) \vert m \rangle$, one has $ \chi^{(\lambda)}_n (g_r) \vert \phi \rangle+ \chi^{*(\lambda)}_n (g_r) \vert \phi^* \rangle=2 \sum_{m=0}^\infty {\rm Re}( \chi^{(\lambda)}_n (g_r)\, A_m(\phi)) \vert m \rangle$. An analogous procedure to define dihedral states can be carried out using the imaginary part of the probability amplitudes $ \chi^{(\lambda)}_n (g_r) A_m(\phi)$, e.g., by using the subtraction $ \chi^{(\lambda)}_n (g_r) \vert \phi \rangle- \chi^{*(\lambda)}_n (g_r)\vert \phi^* \rangle$ instead of the sum. The states associated to the subtraction are also invariant, up to a phase, under all the transformations contained in the dihedral group; however, they are not orthogonal to the states defined in Def.~\ref{defi3}. Nevertheless, they can still be helpful as they carry the dihedral symmetry.
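To make the invariance statements concrete, the following sketch builds a dihedral superposition from a randomly generated truncated state standing in for $\vert \phi\rangle$ (the truncation and the labels are illustrative choices) and checks that both a rotation and an inversion reproduce it up to a phase:
\begin{verbatim}
import numpy as np

dim, n, lam, l = 40, 5, 2, 3        # illustrative truncation and labels
rng = np.random.default_rng(0)
A = rng.normal(size=dim) + 1j * rng.normal(size=dim)  # generic |phi>

mu = np.exp(2j * np.pi / n)
m = np.arange(dim)
gamma = np.zeros(dim, dtype=complex)
for r in range(1, n + 1):
    theta = 2 * np.pi * (r - 1) / n
    chi = mu**((lam - 1) * (r - 1))
    gamma += chi * A * np.exp(-1j * theta * m)           # chi |phi_r>
    gamma += np.conj(chi) * A.conj() * np.exp(1j * theta * m)  # chi*|phi*_r>
gamma /= np.linalg.norm(gamma)

theta_l = 2 * np.pi * (l - 1) / n
rot = np.exp(-1j * theta_l * m) * gamma          # R(theta_l)|gamma>
inv = np.exp(1j * theta_l * m) * gamma.conj()    # U_l = C R(theta_l)
idx = np.argmax(abs(gamma))
for name, vec in (("rotation", rot), ("inversion", inv)):
    phase = vec[idx] / gamma[idx]
    print(name, np.allclose(vec, phase * gamma))  # True: up to a phase
\end{verbatim}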
In Fig.~\ref{wignerd3}, the Wigner functions and their contour plots for each of the three states associated to the dihedral group $D_3$ are shown. To construct this figure, the Gaussian state of Eq.~(\ref{gaus}) with $a=1$ and $b=1+i$ was used to generate the states of $D_3$. In all cases one can notice that, in addition to the rotational symmetry of the $C_3$ subgroup, the inversion invariance is also present.
\begin{figure}
\centering
\includegraphics[scale=0.28]{wigd31}
\includegraphics[scale=0.28]{wigd32}
\includegraphics[scale=0.28]{wigd33}
\includegraphics[scale=0.38]{wigd31con}
\includegraphics[scale=0.38]{wigd32con}
\includegraphics[scale=0.38]{wigd33con}
\caption{Wigner functions and their contour plots for the dihedral Gaussian states associated to the different irreducible representations $\lambda$ of the group $D_3$ for $\lambda=1$ (left), $\lambda=2$ (center), and $\lambda=3$ (right). For these figures, the initial Gaussian state was chosen with parameters $a=1$ and $b=1+i$. \label{wignerd3}}
\end{figure}
\section*{Summary and conclusions}
A general procedure to obtain a set of $n$ orthogonal pure states (or density matrices) associated to each of the irreducible representations of the cyclic group $C_n$ and the dihedral group $D_n$ was proposed. This procedure can be summarized as follows: given any state $\vert \phi \rangle$ which is not invariant under the rotations of the cyclic group, the cyclic states can be obtained from the weighted superposition of the phase-space rotations of the initial state $\hat{R}(\theta_j)\vert \phi \rangle$ ($j=1,\ldots,n$), where the weight of each rotated state is given by the characters of the corresponding irreducible representation. This procedure was then extended to density matrices, where the weighted superpositions are made of the elements $\hat{R}(\theta_r)\hat{\rho} \hat{R}^\dagger (\theta_s)$, with $\hat{\rho}$ the initial noninvariant density matrix. Additionally, it was shown that the resulting states associated to $C_n$ provided by our method are invariant, up to a phase, under any element of the group. The states associated to the dihedral group $D_n$ are defined through the rotations of the original noninvariant state $\vert \phi \rangle$ and its complex conjugate $\vert \phi^* \rangle$. In the case of the dihedral states, this is the first time that an orthogonal set of states has been associated to the dihedral group.
The correspondence between the cyclic states of $C_n$ and the renormalized states obtained after the erasure of certain photon numbers was established and discussed. In particular, it was shown that the cyclic state corresponds, up to a phase, to the renormalized state in which the photon number states $\vert m \rangle$ that do not satisfy the condition ${\rm mod}(m-\lambda+1,n)=0$ are erased. In an analogous way, the cyclic density matrices obtained by our method correspond to the renormalized matrices where the photon number operators $\vert m \rangle \langle m' \vert$ which do not satisfy the conditions ${\rm mod}(\lambda-m-1,n)=0$ and ${\rm mod}(\lambda-m'-1,n)=0$ are eliminated. On the other hand, the dihedral states correspond to the sum of the cyclic states defined with the states $\vert \phi \rangle$ and $\vert \phi^* \rangle$; for this reason, they correspond to the erasure map of the state $(\vert \phi \rangle+\vert \phi^* \rangle)/\sqrt{2}$.
As an example of the procedure, the general cyclic Gaussian states were defined. It was shown, using the Mandel parameter $M_Q=\langle (\Delta \hat{n})^2 \rangle/ \langle \hat{n} \rangle$, that these states can present sub-Poissonian photon number statistics. The symmetry properties of the cyclic Gaussian states associated to $C_3$ were also checked using the Wigner function. Also, the correspondence between the circle symmetric states of $C_n$ ($n\rightarrow \infty$), $\vert \psi_\infty^{(\lambda)}\rangle$, and the Fock states $\vert \lambda-1 \rangle$ was demonstrated.
Also, as an example of the use of the cyclic states, the calculation of the entanglement between subsystems in a two-mode state was presented. This calculation takes advantage of the orthogonality of the cyclic states to define a finite representation of particular bipartite states.
The possible experimental realization of these states was briefly discussed, given the evidence presented in \cite{cordero1,cordero2} for the generation of cyclic states in the atom-field interaction, and in \cite{vlastakis}, where this type of superposition can be obtained using a superconducting transmon coupled to a cavity resonator.
\section*{Acknowledgments}
This work was partially supported by DGAPA-UNAM (under project IN101619).
\section{Introduction}
Nonlocal models have recently received great attention due to their apparent ability to capture novel effects, for instance in mechanics \cite{teodor2014fractional} and in particular in peridynamics \cite{silling2000,ha2010studies}, turbulence \cite{chen2006speculative}, biophysics \cite{bueno2014fractional} and image denoising \cite{gatto2015numerical}, to mention a few.
In most applications, the types of nonlocal interactions differ and their scaling laws are unknown. Initiated by the work in \cite{AO18}, an algorithm was proposed and analyzed in \cite{antil2016optimization} to identify the fractional power $s \in [s_{\min},s_{\max}]$ governing the state equation in an optimization framework. As expected, the algorithm exploits the smoothness of the map $s \mapsto (-\Delta)^{-s} f$ and requires many (costly) evaluations of $(-\Delta)^{-s} f$ for different $s \in (0,1)$. Several numerical methods are available to approximate $(-\Delta)^{-s} f$, and we refer to the surveys \cite{BBNOS18,lischke2018fractional} for the description of different fractional Laplacians along with their numerical approximations. Here, for $f \in L^2(\Omega)$ and $\Omega$ a Lipschitz domain of $\mathbb R^d$, $d=1,2,3$, we set
\begin{equation} \label{def:prob}
u(s):=(-\Delta)^{-s} f:=\sum_{k=1}^{\infty}\lambda_k^{-s} f_k\psi_k,
\end{equation}
where $\{\lambda_k,\psi_k\}_{k\in\mathbb{N}} \subset \mathbb{R}^+ \times H^1_0(\Omega)$ are the eigenpairs of $(-\Delta)$ and $f_k := \int_\Omega f \psi_k$. The eigenfunctions $\{\psi_k\}_{k\in \mathbb N}$ are chosen orthogonal in $H_0^1(\Omega)$ and orthonormal in $L^2(\Omega)$. In \eqref{def:prob} the fractional operator is referred to as the spectral fractional Laplacian and is the one considered in this work. It is worth mentioning that the methodology proposed here is not limited to the Laplacian operator and can be easily extended to regularly accretive operators as in \cite{BP17}.
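For intuition, on $\Omega=(0,1)$ the eigenpairs are explicit, $\lambda_k=(k\pi)^2$ and $\psi_k(x)=\sqrt{2}\sin(k\pi x)$, so \eqref{def:prob} can be evaluated by truncating the series. The following Python sketch (the truncation level and quadrature grid are illustrative choices) provides such a reference evaluation:
\begin{verbatim}
import numpy as np

def u_spectral(f, x, s, K=500):
    """Truncated series (-Delta)^{-s} f on (0,1), Dirichlet BCs."""
    k = np.arange(1, K + 1)
    lam = (k * np.pi)**2                       # eigenvalues
    psi = np.sqrt(2) * np.sin(np.outer(x, k))  # eigenfunctions on the grid
    w = np.full(x.size, x[1] - x[0])           # trapezoid weights
    w[[0, -1]] /= 2
    fk = (w * f(x)) @ psi                      # f_k = int_0^1 f psi_k
    return psi @ (lam**(-s) * fk)

x = np.linspace(0.0, 1.0, 2049)
u = u_spectral(lambda x: np.ones_like(x), x, s=0.5)
print(u[x.size // 2])                          # value at x = 1/2
\end{verbatim}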
In this work, we follow the approach proposed in \cite{BP15} to approximate \eqref{def:prob}, see also \cite{BP17}, which is based on the Dunford-Taylor-Balakrishnan representation
\begin{equation*}
u(s)=\frac{\sin(s\pi)}{\pi}\int_{-\infty}^{\infty}e^{(1-s)y} w(y)dy,
\end{equation*}
where $w(y) \in H^1_0(\Omega)$ solves
\begin{equation}\label{eq:wy}
(e^y I -\Delta)w(y) = f.
\end{equation}
Originally introduced in \cite{BP15} and later improved in \cite{BLP18}, a sinc quadrature coupled with a standard finite element method is used to approximate the integral in the variable $y$. For $k>0$, it reads
\begin{equation} \label{def:uk}
u(s) \approx u_{h,k}(s)=\frac{k\sin(s\pi)}{\pi}\sum_{l=-M_s}^{N_s}e^{(1-s)y_l}w_h(y_l)
\end{equation}
with $y_l:=lk$,
\begin{equation} \label{def:NsMs}
M_s:=\left\lceil \frac{\pi^2}{(1-s)k^2} \right\rceil, \quad N_s:=\left\lceil \frac{\pi^2}{sk^2} \right\rceil
\end{equation}
and where $w_h(y_l)\in \mathbb V_h$ are standard finite element approximations of $w(y_l)$.
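As a concrete illustration of \eqref{def:uk}, the sketch below assembles $u_{h,k}(s)$ on $\Omega=(0,1)$ with piecewise linear finite elements on a uniform mesh; the mesh size and the quadrature parameter are illustrative, and the standard 1D stencils stand in for a general finite element assembly:
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def u_hk(s, h=1/256, k=0.5, f=lambda x: np.ones_like(x)):
    """Sinc-quadrature/P1-FEM approximation u_{h,k}(s), Dirichlet BCs."""
    N = round(1 / h) - 1                                # interior nodes
    x = np.linspace(h, 1 - h, N)
    K = diags([-1, 2, -1], [-1, 0, 1], (N, N)) / h      # stiffness matrix
    M = diags([1, 4, 1], [-1, 0, 1], (N, N)) * (h / 6)  # mass matrix
    F = M @ f(x)                  # load vector (P1 interpolant of f)
    Ms = int(np.ceil(np.pi**2 / ((1 - s) * k**2)))
    Ns = int(np.ceil(np.pi**2 / (s * k**2)))
    u = np.zeros(N)
    for l in range(-Ms, Ns + 1):
        y = l * k
        w = spsolve((K + np.exp(y) * M).tocsc(), F)     # w_h(y_l)
        u += np.exp((1 - s) * y) * w
    return x, (k * np.sin(s * np.pi) / np.pi) * u

x, u = u_hk(0.5)
print(u[u.size // 2])                                   # value near x = 1/2
\end{verbatim}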
The numerical approximation of $(-\Delta)^{-s} f$ requires $M_s+N_s+1$ finite element solves to determine $w_h(y_l)$, $y_l\in[-M_sk,N_sk]$. This can become prohibitive when the computation of $(-\Delta)^{-s} f$ is needed for many values of $s$, such as within an optimization loop as mentioned above. The reduced basis method is a natural approach to reduce the computational cost of approximating the parametrized reaction-diffusion problems \eqref{eq:wy}. In fact, the reduced basis method for this type of one-dimensional parametric elliptic partial differential equation has already been partially analyzed in \cite{maday2002priori,MPT02} and recently in \cite{DS19}, from which part of our analysis is inspired.
A (\emph{weak}) greedy strategy is advocated (\emph{offline stage}) to iteratively select snapshots $w_h(y^l)$, \mbox{$l=1,...,n <\!\!< \dim(\mathbb V_h)$}, defining the $s$-independent reduced basis space $\Vn := \Span\{w_h(y^1),\ldots,w_h(y^n)\}$. Galerkin approximations $w_h^n(y_l) \in \Vn$ of $w_h(y_l)$ can then be easily computed (\emph{online stage}) to produce a reduced basis approximation of $u_{h,k}(s)$
\begin{equation}\label{def:ukn}
u_{h,k}^n(s) := \frac{k\sin(s\pi)}{\pi}\sum_{l=-M_s}^{N_s}e^{(1-s)y_l}w_h^n(y_l).
\end{equation}
We point out that one of the difficulties faced in this study is that the approximation of the parametric elliptic partial differential equation~\eqref{eq:wy} is required for $y$ in the parametric domain $[-M_sk,N_sk]$, whose length increases as the sinc quadrature parameter $k$ decreases (i.e., as the precision of the algorithm improves).
The proposed algorithm provides an approximation of the entire map $s \mapsto u(s)$, $s\in[s_{\min},s_{\max}]$ using the same reduced basis space $\Vn$. Our main result is Theorem~\ref{cor:err_D} which guarantees an exponential convergence of the reduced basis approximation $u_{h,k}^n(s)$ toward $u_{h,k}(s)$ in a wide range of Sobolev norms, uniformly in the fractional power $s\in[s_{\min},s_{\max}]$.
We end this introduction by noting that the idea of using the reduced basis method for fractional problems has been recently proposed, for instance in \cite{DS19} and \cite{DACCN19}. In \cite{DS19}, the reduced basis is used for the approximation of interpolation norms, as well as for evaluations of both types $s \mapsto (-\Delta)^s u$ with $u$ fixed and variable $s\in(0,1)$, and $u \mapsto (-\Delta)^s u$. A reduced basis space based on best rational approximations for a reaction-diffusion problem similar to the one satisfied by $w_{h}$ is proposed, and exponential convergence of the approximation with respect to the dimension of the reduced basis space is obtained. Worth mentioning, the numerical method is based on the extension method \cite{NOS15} but seemingly applies to other approximation techniques. Actually, this method boils down to the approximation of several reaction-diffusion problems as in \cite{BP15}. We take advantage of the technology developed in \cite{DS19} to derive an exponential decay in the approximation of \eqref{def:uk} by \eqref{def:ukn}. In \cite{DACCN19}, a similar approximation $u_{h,k}(s)$ is proposed for a different quadrature. Exponential decay of the reduced basis error is observed numerically but without analysis. In some sense, this work provides a mathematical justification of the experimental observations in \cite{DACCN19}. Finally, we mention that the reduced basis method has also been used in \cite{BG19} to approximate the parametric PDEs $(-\Delta)^s u=f$, where $s$ is the parameter, in the case of the integral fractional Laplacian.
The rest of the paper is organized as follows. In Section~\ref{sec:frac}, we describe the numerical approximation of $u(s)$ by $u_{h, k}(s)$. Section 3 describes the construction of the reduced basis space and its corresponding error analysis, the main result of this work. Section 4 provides numerical experiments to illustrate the performance of the proposed methodology.
\section{Spectral Fractional Laplacian and its Numerical Approximations}\label{sec:frac}
We start with some notations. Let $\mathbb{H}^r(\Omega)$ be the interpolation space defined by
\begin{equation} \label{def:Hr}
\mathbb{H}^r(\Omega):=\left\{\begin{array}{ll}
\left(L^2(\Omega),H_0^1(\Omega)\right)_r & \mbox{for } r\in [0,1] \\
H_0^1(\Omega)\cap H^r(\Omega) & \mbox{for } r\in (1,2],
\end{array}
\right.
\end{equation}
where $(\cdot,\cdot)_r$ denotes interpolation using the real method.
Notice that for the particular case $r=1$, we have
\begin{equation*}
\|v\|_{H_0^1(\Omega)}:=\|v\|_{\mathbb{H}^1(\Omega)}=\|\nabla v\|_{L^2(\Omega)} \quad \forall v\in H_0^1(\Omega),
\end{equation*}
which is equivalent to the $H^1(\Omega)$ norm thanks to the Poincar\'e inequality
\begin{equation} \label{def:Poincare}
\|v\|_{L^2(\Omega)}\leq C_P\|\nabla v\|_{L^2(\Omega)} \quad \forall v\in H_0^1(\Omega).
\end{equation}
To simplify the notation, when $r=0$ we write $\|\cdot\|:=\|\cdot\|_{L^2(\Omega)}=\|\cdot\|_{\mathbb H^0}$. Moreover, $a\lesssim b$ means that $a\leq Cb$ for a constant $C$ that does not depend on $a$, $b$ or the discretization parameters, and whose value might change at each occurrence. Also, $a\approx b$ indicates $a\lesssim b$ and $b\lesssim a$.
\subsection{Dunford-Taylor Representation}
The function $u(s) \in L^2(\Omega)$ in \eqref{def:prob} has the following representation \cite{MR1336382}
\begin{equation} \label{u:Dunford}
u(s) =\frac{1}{2\pi i}\int_{\mathcal{C}}z^{-s}(zI+\Delta)^{-1}fdz,
\end{equation}
where $\mathcal{C}$ is a Jordan curve oriented to have the spectrum of $-\Delta$ to its right. Deforming the contour $\mathcal C$ to the negative real axis, we obtain the Balakrishnan formula, valid for $s\in(0,1)$,
\begin{equation} \label{u:Balak}
u(s)=\frac{\sin(s\pi)}{\pi}\int_0^{\infty}\mu^{-s}(\mu I-\Delta)^{-1}fd\mu.
\end{equation}
The numerical integration of the above improper integral relies on a sinc quadrature method after the change of variable $y=\ln(\mu)$, leading to
\begin{equation} \label{def:u}
u(s)=\frac{\sin(s\pi)}{\pi}\int_{-\infty}^{\infty}e^{(1-s)y}(e^y I -\Delta)^{-1}fdy.
\end{equation}
\subsection{Finite Element Approximation}
We assume that $\Omega$ is a polyhedral domain and we consider a sequence $\{\Th\}_{h>0}$ of conforming and shape-regular partitions of $\Omega$ into $d$-simplices with maximal mesh size $h<1$. Let $\Vh$ be the space of continuous and piecewise linear finite element functions associated with $\Th$. The finite element approximation of \eqref{def:u} is then defined by
\begin{equation} \label{def:uh}
u_h(s):=\frac{\sin(s\pi)}{\pi}\int_{-\infty}^{\infty}e^{(1-s)y}w_h(y)dy,
\end{equation}
where $w_h(y)\in\Vh$ is the solution to
\begin{equation} \label{def:pb_wh}
a(w_h(y),v_h;y) = F(v_h), \quad \forall v_h\in\Vh.
\end{equation}
Here we used the notation
\begin{equation}\label{def:pb_ay}
a(w,v;y) := a_0(w,v)+e^ya_1(w,v):= \int_{\Omega}\nabla w\cdot\nabla v +e^{y}\int_{\Omega}wv
\end{equation}
for $w,v\in H_0^1(\Omega)$ and
\begin{equation} \label{def:pb_F}
F(v) := \int_{\Omega}fv,
\end{equation}
for $v\in H_0^1(\Omega)$.
The Poincar\'e inequality \eqref{def:Poincare} implies that for $v,w \in H^1_0(\Omega)$,
\begin{equation}\label{e:coerc_cont}
\| v \|_{H^1_0(\Omega)}^2 \leq a(v,v;y) \quad \textrm{and} \quad a(v,w;y) \leq (1+C_P^2e^y)\| v \|_{H^1_0(\Omega)}\| w \|_{H^1_0(\Omega)},
\end{equation}
which guarantees that \eqref{def:pb_wh} has a unique solution for any parameter $y \in \mathbb R$ by the Lax-Milgram lemma.
We now collect some estimates for $w_h(y)$, which will be used in the analysis later.
\begin{lemma} \label{lem:tmp_res}
Let $C_P$ be the Poincar\'e constant in \eqref{def:Poincare}. For any $y,\bar y\in\mathbb{R}$, we have
\begin{equation} \label{apriori_wh}
\|\nabla w_h(y)\|\leq C_P\|f\|, \quad \|w_h(y)\|\leq e^{-y}\|f\| ,
\end{equation}
\begin{equation} \label{error_y_small}
\|\nabla (w_h(y)-w_h(\bar y))\|\leq C_P^3|e^y-e^{\bar y}|\|f\| ,
\end{equation}
\begin{equation} \label{error_y_large_1}
\|w_h(y)-w_h(\bar y)\|\leq e^{-y}|e^{y-\bar y}-1|\|f\|,
\end{equation}
and
\begin{equation} \label{error_y_large_2}
\|\nabla (w_h(y)-w_h(\bar y))\|\leq \frac{1}{2}e^{-\frac{y}{2}}|e^{y-\bar y}-1|\|f\|.
\end{equation}
\end{lemma}
\begin{proof}
Choosing $v_h=w_h(y)$ in \eqref{def:pb_wh} yields
\begin{equation*}
\|\nabla w_h(y)\|^2+e^y\|w_h(y)\|^2=\int_{\Omega}fw_h(y)\leq \|f\|\|w_h(y)\|,
\end{equation*}
from which the two relations in \eqref{apriori_wh} can be easily deduced. From \eqref{def:pb_wh} we get
\begin{equation*}
\int_{\Omega}\nabla(w_h(y)-w_h(\bar y))\cdot\nabla v_h+e^y\int_{\Omega}(w_h(y)-w_h(\bar y))v_h = (e^{\bar y}-e^y)\int_{\Omega}w_h(\bar y)v_h \quad \forall v_h\in\Vh.
\end{equation*}
We now choose $v_h=w_h(y)-w_h(\bar y)$ to get
\begin{equation} \label{error_y_tmp}
\|\nabla(w_h(y)-w_h(\bar y))\|^2+e^y\|w_h(y)-w_h(\bar y)\|^2 \leq |e^{\bar y}-e^y|\|w_h(\bar y)\|\|w_h(y)-w_h(\bar y)\|.
\end{equation}
This, the Poincar\'e inequality \eqref{def:Poincare} and \eqref{apriori_wh} with $y=\bar y$ yield \eqref{error_y_small}.
The estimate \eqref{error_y_large_1} follows from \eqref{error_y_tmp} together with \eqref{apriori_wh} with $y=\bar y$. For \eqref{error_y_large_2}, we invoke Young's inequality to estimate the right hand side of \eqref{error_y_tmp} and get
\begin{equation*}
\|\nabla(w_h(y)-w_h(\bar y))\| \leq \frac{1}{2}e^{-\frac{y}{2}}|e^{\bar y}-e^y|\|w_h(\bar y)\|.
\end{equation*}
It remains to invoke \eqref{apriori_wh} with $y=\bar y$ to derive the desired result, which ends the proof.
\end{proof}
We mention that both results in \eqref{apriori_wh} are standard, while the estimates \eqref{error_y_small}, \eqref{error_y_large_1} and \eqref{error_y_large_2} are less common yet useful in the error analysis below. Moreover, note that \eqref{apriori_wh}-left and \eqref{error_y_small} are favorable for negative $y$, while \eqref{apriori_wh}-right, \eqref{error_y_large_1} and \eqref{error_y_large_2} are favorable for positive $y$. This plays a role in our analysis below and is observed in the numerical experiments, see Section \ref{sec:numres}.
We end this section by stating the error in the finite element method derived and analyzed in \cite{BP17}. Before doing this, we define $\alpha \in (0,1]$ to be the elliptic pick-up regularity index, i.e. $\alpha$ is the largest number in $(0,1]$ such that $(-\Delta)$ is an isomorphism from $\mathbb{H}^r(\Omega)$ to $\mathbb{H}^{r+1}(\Omega)$ for all $r\in[0,\alpha]$. Notice that $\alpha >0$ for Lipschitz domains and $\alpha=1$ when $\Omega$ is convex.
\begin{thm} \label{thm:FE_error}
Let $f\in L^2(\Omega)$, $\alpha>0$ denote the elliptic regularity pick-up and $\alpha^*:=\frac{\alpha+\min(\alpha,1-r)}{2}$. Then for any $r\in[0,1]$, we have
\begin{enumerate}[label=\arabic*.]
\item If $r+2\alpha^*-2s\geq 0$ and $f\in \mathbb{H}^{r+2\alpha^*-2s}(\Omega)$ then
\begin{equation*} \label{eqn:FE_error_case1}
\|u-u_h\|_{\mathbb{H}^r(\Omega)} \lesssim \ln(h^{-1})h^{2\alpha^*}\|f\|_{\mathbb{H}^{r+2\alpha^*-2s}(\Omega)}.
\end{equation*}
\item If $r+2\alpha^*-2s\geq 0$ and $f\in \mathbb{H}^{r+2\alpha^*-2s+2\varepsilon}(\Omega)$ with $r+2\alpha^*-2s+2\varepsilon\leq 1+\alpha$ then
\begin{equation*} \label{eqn:FE_error_case2}
\|u-u_h\|_{\mathbb{H}^r(\Omega)} \lesssim h^{2\alpha^*}\|f\|_{\mathbb{H}^{r+2\alpha^*-2s+2\varepsilon}(\Omega)}.
\end{equation*}
\item If $r+2\alpha^*-2s<0$ then
\begin{equation*} \label{eqn:FE_error_case3}
\|u-u_h\|_{\mathbb{H}^r(\Omega)} \lesssim h^{2\alpha^*}\|f\|_{L^2(\Omega)}.
\end{equation*}
\end{enumerate}
\end{thm}
\subsection{Sinc quadrature approximation}
We discuss the sinc quadrature approximation leading to the fully discrete approximation $u_{h,k}$ given by \eqref{def:uk}.
Recall that $k>0$ is the sinc quadrature parameter and that $M_s$, $N_s$ are given by \eqref{def:NsMs}. This choice is dictated by the analysis of the sinc quadrature error, which is the subject of the following theorem; we refer to \cite{BLP18} for its proof.
\begin{thm} \label{thm:sinc_error}
Let $f\in L^2(\Omega)$ and $r\in[0,1]$.
\begin{enumerate}[label=\arabic*.]
\item If $s>r/2$ then
\begin{equation*} \label{eqn:sinc_error_case1}
\|u_h(s)-u_{h,k}(s)\|_{\mathbb{H}^r(\Omega)} \lesssim \left(e^{-\frac{\pi^2}{k}}+e^{-(1-s)M_sk}+e^{-sN_sk}\right)\|f\|_{L^2(\Omega)}.
\end{equation*}
\item If $s\leq r/2$ and $f\in \mathbb{H}^{r-2s+\varepsilon}(\Omega)$ with $r-2s+\varepsilon\in[0,1+\alpha]$ then
\begin{equation*} \label{eqn:sinc_error_case2}
\|u_h(s)-u_{h,k}(s)\|_{\mathbb{H}^r(\Omega)} \lesssim \left(e^{-\frac{\pi^2}{k}}+e^{-(1-s)M_sk}+e^{-sN_sk}\right)\|f\|_{\mathbb{H}^{r-2s+\varepsilon}(\Omega)}.
\end{equation*}
\end{enumerate}
\end{thm}
\section{Reduced Basis Approximation}\label{s:rb}
The computation of $u_{h,k}$ in \eqref{def:uk} involves the finite element solution $w_h(y)$ to the reaction diffusion problem \eqref{def:pb_wh} for $M_s+N_s+1$ different values of the parameter $y\in \mathcal{D}_s:=[-M_sk,N_sk]$. We propose to use the reduced basis method to approximate the entire map $y \mapsto w_h(y)$. Notice that the bilinear form $a(\cdot,\cdot;y)$ defining $w_h(y)$ in \eqref{def:pb_wh} is not affine in $y$. However, it becomes affine for $\mu=e^y$.
\subsection{Construction for a fixed $s$}\label{s:construction}
The reduced basis space
\begin{equation*}
\Vn = \Span\{w_h(y^1),...,w_h(y^n)\} \subset \mathbb V_h
\end{equation*}
is constructed using a greedy strategy \cite{BMPPT12}. Starting with $y^1=0$, $y^{m+1} \in \mathcal{D}_s$ is selected iteratively to maximize the error
\begin{equation} \label{def:err_Wm}
W_m(y):=\|w_h(y)-P_{\mathbb V_h^{m}}w_h(y)\|_{H_0^1(\Omega)}
\end{equation}
on $\mathcal{D}_s$, i.e.,
\begin{equation*}
y^{m+1}:=\argmax_{y\in\mathcal{D}_s}W_m(y).
\end{equation*}
Here $P_{\mathbb V_h^m}w_h(y)\in \mathbb V_h^m$ is the unique solution (from Lax-Milgram theory) to
\begin{equation}\label{e:galerkin}
a(P_{\mathbb V_h^m}w_h(y),v_m;y) = a(w_h(y),v_m;y), \quad \forall v_m\in\mathbb V_h^m.
\end{equation}
Notice that in view of the definition \eqref{def:pb_wh} of $w_h(y)$, the above relation is equivalent to
\begin{equation*}
a(P_{\mathbb V_h^m}w_h(y),v_m;y) = F(v_m), \quad \forall v_m\in\mathbb V_h^m.
\end{equation*}
The enrichment of the reduced basis space ends when
\begin{equation}\label{e:eps_W}
\max_{y \in \mathcal{D}_s} W_{m}(y) \leq \varepsilon \|f\|
\end{equation}
for a prescribed accuracy $\varepsilon>0$ or when a maximum number of basis functions $N_{\max}$ is reached. Note that the relations \eqref{error_y_small} and \eqref{error_y_large_2} guarantee that \eqref{e:eps_W} can always be achieved by a uniform selection of points $y$ in $\mathcal D_s$. The aim of the greedy algorithm is to provide an alternative selection performing as well but with smaller cardinality.
The error $W_{m}(y)$ defined in \eqref{def:err_Wm} is not a computable quantity and is usually replaced by an equivalent computable quantity leading to the so-called \emph{weak greedy algorithm} \cite{DPW13}. In this work, we use the residual based \emph{a posteriori} error estimate \cite{PR06}
\begin{equation} \label{def:apost}
\|r_{m}(\cdot;y)\|_{\Vh'}:=\sup_{v_h\in \Vh}\frac{r_{m}(v_h;y)}{\|v_h\|_{H_0^1(\Omega)}},
\end{equation}
where
$$r_{m}(v_h;y):=F(v_h)-a(P_{\mathbb V_h^{m}}w_h(y),v_h;y).$$
We have the following equivalence relation between the error $W_{m}(y)$ and its surrogate
\begin{equation} \label{eqn:equiv_err_est}
(1+C_P^2e^y)^{-1}\|r_m(\cdot;y)\|_{\Vh'} \leq W_m(y)\leq \|r_m(\cdot;y)\|_{\Vh'}
\end{equation}
for all $y\in \mathbb R$. This follows from
\begin{equation*}
\|r_m(\cdot;y)\|_{\Vh'}=\sup_{v_h\in\Vh}\frac{a(w_h(y)-P_{\mathbb V_h^{m}}w_h(y),v_h;y)}{\|v_h\|_{H_0^1(\Omega)}},
\end{equation*}
where $w_h(y)$ satisfies \eqref{def:pb_wh}, and the coercivity and continuity of the bilinear form $a$ \eqref{e:coerc_cont}.
Hence, selecting the samples using $\|r_{m}(\cdot;y)\|_{\Vh'}$ as surrogate for the error $W_{m}(y)$ yields
\begin{equation*}
W_m(y^{m+1})\ge \gamma_s\max_{y\in \mathcal{D}_s} W_m(y),
\end{equation*}
where
\begin{equation}\label{e:gamma_s}
\gamma_s:=\left(\max_{y\in \mathcal{D}_s} (1+C_P^2e^y)\right)^{-1}= (1+C_P^2e^{N_sk})^{-1}.
\end{equation}
The parameter $\gamma_s$ corresponds to the constant in the weak greedy algorithm, see \cite{DPW13} for more details, and will appear in the analysis below.
The dual norm can be computed using the Riesz representation theorem, see for instance \cite{PR06,EPR10} for more details. However, evaluating $\|r_{m}(\cdot;y)\|_{\Vh'}$ for every $y$ in $\mathcal{D}_s$ remains infeasible. In practice, the maximization is performed over a finite dimensional \emph{training set} $\Theta_s\subset \mathcal{D}_s$, either chosen sufficiently fine to retain the performance of the algorithm, see for instance \cite{CD15}, or based on a random selection of moderate size \cite{CDD18}.
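Schematically, the resulting offline loop can be summarized as follows (a dense Python sketch; \texttt{K}, \texttt{M} and \texttt{F} denote the stiffness matrix, mass matrix and load vector of the discretization of \eqref{def:pb_wh}, \texttt{Theta} plays the role of the training set $\Theta_s$, and the dual norm is evaluated through the Riesz representation with the $H_0^1$ Gram matrix):
\begin{verbatim}
import numpy as np
from numpy.linalg import solve, cholesky

def weak_greedy(K, M, F, Theta, eps=1e-8, n_max=40):
    """Weak greedy snapshot selection for A(y) = K + e^y M."""
    A = lambda y: K + np.exp(y) * M
    w0 = solve(A(0.0), F)                   # initial snapshot w_h(0)
    V = (w0 / np.linalg.norm(w0))[:, None]
    L = cholesky(K)                         # K = L L^T (H_0^1 Gram matrix)
    for _ in range(n_max - 1):
        best, ybest = -1.0, None
        for y in Theta:
            AyV = A(y) @ V
            c = solve(V.T @ AyV, V.T @ F)   # reduced Galerkin coefficients
            r = F - AyV @ c                 # algebraic residual
            est = np.linalg.norm(solve(L, r))  # ~ ||r_m(.;y)||_{V_h'}
            if est > best:
                best, ybest = est, y
        if best <= eps * np.linalg.norm(F):    # surrogate of (e:eps_W)
            break
        w = solve(A(ybest), F)                 # new snapshot
        V = np.column_stack([V, w / np.linalg.norm(w)])
    return V
\end{verbatim}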
The reduced basis space is constructed \emph{offline} and gives the following \emph{online} approximation of $u_{h,k}(s)$
\begin{equation} \label{def:un}
u_{h,k}^n(s)=\frac{k\sin(s\pi)}{\pi}\sum_{l=-M_s}^{N_s}e^{(1-s)y_l}w_h^n(y_l), \quad w_h^n(y_l):=P_{\Vn}w_{h}(y_l).
\end{equation}
\begin{remark}
The approximate solution \eqref{def:uk} obtained without the reduced basis method requires solving $M_s+N_s+1$ sparse finite element systems of dimension $N_h$. In comparison, the approximation \eqref{def:un} requires the solution of $M_s+N_s+1$ reduced systems, each requiring $\mathcal{O}(n^3)$ operations, where $n$ stands for the dimension of the reduced basis space. The latter is built once and for all (\emph{offline stage}) with a computational cost dominated by the solution of $n$ sparse finite element systems. The value of $n$ depends on the Kolmogorov $n$-width of the solution manifold $\{w_h(y): \, y\in\mathcal{D}_s\}$. We refer for instance to \cite{QMN2016,CD15} for a detailed complexity analysis of the reduced basis method, but note that typically for elliptic problems we have $n <\!\!< N_h$. Finally, we anticipate that the proposed reduced basis space is independent of $s$, see Section \ref{sec:universal}.
\end{remark}
\subsection{Error analysis for a fixed $s$}\label{s:error_s}
We now analyze the distortion between $u(s)=(-\Delta)^{-s}f$ and its reduced basis approximation $u_{h,k}^n(s)$ given by \eqref{def:un} in the $\mathbb{H}^r(\Omega)$ norm. In order to avoid unnecessary technicalities, we assume from now on that $r\in[0,1]$ is chosen such that $r<2s$, which includes the natural choice $r=s$ leading to the energy error and $r=0$ for the $L^2(\Omega)$ error. The discussion below can be readily extended to the case $r\geq 2s$ by accounting for the log factor $\ln(h^{-1})$ in the finite element approximation (see Theorem~\ref{thm:FE_error}).
For $u(s)\in \mathbb{H}^r(\Omega)$, we decompose the error into three parts
\begin{equation}\label{e:error_decom}
\begin{split}
\|u(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)}&\leq \|u(s)-u_h(s)\|_{\mathbb{H}^r(\Omega)}+\|u_h(s)-u_{h,k}(s)\|_{\mathbb{H}^r(\Omega)}\\
& \qquad +\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)},
\end{split}
\end{equation}
corresponding to the finite element error, the sinc quadrature error and the reduced basis error, respectively.
Given a target tolerance $\varepsilon>0$, we construct a reduced basis space such that \eqref{e:eps_W} holds. In view of Theorems \ref{thm:FE_error} and \ref{thm:sinc_error}, we select the space discretization and sinc quadrature parameters $h$ and $k$ to balance the finite element and sinc quadrature errors, i.e.,
\begin{equation} \label{eqn:scaling_h_k}
C_{\textrm{FEM}} h^{2\alpha^*} = C_{\textrm{SINC}} e^{-\frac{\pi^2}{k}} = \varepsilon
\end{equation}
for some absolute constants $C_{\textrm{FEM}}$ and $C_{\textrm{SINC}}$.
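In practice, this balancing can be inverted to read off $h$ and $k$ from a target tolerance (a small sketch; the constants are illustrative placeholders):
\begin{verbatim}
import numpy as np

def balance(eps, alpha_star=1.0, C_fem=1.0, C_sinc=1.0):
    """h and k with C_fem h^(2 alpha*) = C_sinc exp(-pi^2/k) = eps."""
    h = (eps / C_fem)**(1.0 / (2.0 * alpha_star))
    k = np.pi**2 / np.log(C_sinc / eps)
    return h, k

print(balance(1e-6))   # h = 1e-3 and k ~ 0.71 for alpha* = 1
\end{verbatim}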
We now assess the error in the reduced basis modeling by analyzing the behavior of $\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)}$ as $n$ increases. From the definitions \eqref{def:uk} and \eqref{def:un} of $u_{h,k}(s)$ and $u_{h,k}^n(s)$, respectively, we have
\begin{equation}\label{e:Es}
\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \leq \frac{k\sin(s\pi)}{\pi}\sum_{l=-M_s}^{N_s}e^{(1-s)y_l}\|w_h(y_l)-w_h^n(y_l)\|_{\mathbb{H}^r(\Omega)}.
\end{equation}
Key ingredients in our analysis are estimates for the reduced basis errors $\|w_h(y_l)-w_h^n(y_l)\|_{\mathbb{H}^r(\Omega)}$ in approximating the inner problems. We discuss this now. Recall that the reduced basis error for the inner problem is given by
\begin{equation}\label{e:error_rb_def}
\sup_{y \in \mathcal{D}_s}\| \nabla(w_h(y)-w_h^n(y)) \|=\sup_{y \in \mathcal{D}_s}\| \nabla(w_h(y)-P_{\Vn}w_h(y)) \|.
\end{equation}
The Kolmogorov $n$-width
\begin{equation} \label{eqn:Kolmo}
d_n:=\inf_{\dim(Y_n)\le n} \,\, \sup_{y\in \mathcal{D}_s} \,\, \inf_{v_n\in Y_n} \| \nabla(w_h(y)-v_n)\|, \quad n\ge 1,
\end{equation}
is the benchmark for the best achievable decay. By convention, we set
\begin{equation} \label{eqn:Kolmo_0}
d_0:=\sup_{y\in \mathcal{D}_s} \| \nabla w_h(y)\|.
\end{equation}
It is quite remarkable that the linear space $\Vn$ constructed by the (weak) greedy selection discussed in Section~\ref{s:construction} leads to an error \eqref{e:error_rb_def} equivalent to $d_n$ \cite{DPW13}. In particular, an exponential decay of the Kolmogorov $n$-width guarantees an exponential decay of the (weak) greedy error. In order to prove that the error in (\ref{e:error_rb_def}) decays exponentially with $n$, see Lemma \ref{lem:error_RB_w} below, we will thus show that the Kolmogorov $n$-width exhibits an exponential decay.
To facilitate the analysis of $d_n$, we use the notations in \eqref{def:pb_ay} and provide a representation of the finite element functions $w_h(y)$ in terms of the eigenpairs $\{\mu_i,\varphi_i\}_{i=1}^{N_h} \subset \mathbb{R}^+ \times\Vh$, $N_h:=\textrm{dim}(\Vh)$, of the generalized eigenvalue problem
\begin{equation*}
a_1(\varphi_i,v_h) = \mu_i a_0(\varphi_i,v_h), \qquad \forall v_h \in \Vh.
\end{equation*}
Without loss of generality, we assume that the $\varphi_i$ are $H_{0}^1$-orthonormal, i.e.
\begin{equation*}
a_0(\varphi_i,\varphi_j) = \delta_{ij}, \qquad 1\leq i,j\leq N_h.
\end{equation*}
The inverse inequality
\begin{equation*}
\|\nabla v_h\| \leq C_I h^{-1}\|v_h\| \quad \forall v_h\in\Vh,
\end{equation*}
together with the Poincar\'e inequality \eqref{def:Poincare}, yield
\begin{equation} \label{eqn:mu}
C_I^{-2} h^2 \leq \mu_i \leq C_P^2, \quad 1\leq i\leq N_h.
\end{equation}
With these notations, we can rewrite $w_h(y)$ in (\ref{def:pb_wh}) as
\begin{equation}\label{evrepre}
w_h(y) = \sum_{i = 1}^{N_h}\frac{f_i\varphi_i}{1 + e^y \mu_i}, \quad f_i:=F(\varphi_i).
\end{equation}
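This representation is straightforward to confirm numerically: solving the generalized eigenproblem and reassembling \eqref{evrepre} reproduces the direct solve (a dense sketch on the 1D stencils used earlier; the parameter value is arbitrary):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh, solve

h = 1 / 64
N = round(1 / h) - 1
I, J = np.eye(N), np.eye(N, k=1) + np.eye(N, k=-1)
K = (2 * I - J) / h              # stiffness matrix
M = (4 * I + J) * (h / 6)        # mass matrix
F = M @ np.ones(N)               # load vector for f = 1

mu, phi = eigh(M, K)             # a_1(phi_i,v) = mu_i a_0(phi_i,v)
y = 1.3                          # arbitrary parameter value
f_i = phi.T @ F                  # f_i = F(phi_i)
w_spec = phi @ (f_i / (1 + np.exp(y) * mu))   # representation (evrepre)
w_dir = solve(K + np.exp(y) * M, F)           # direct solve
print(np.allclose(w_spec, w_dir))             # True
\end{verbatim}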
We are now in position to assess the reduced basis approximation property.
\begin{lemma} \label{lem:error_RB_w}
For any $n\ge 1$ we have
\begin{equation} \label{eqn:exp_error_RB_w}
\sup_{y \in \mathcal{D}_s}\|\nabla(w_h(y)-P_{\Vn}w_h(y))\| \leq \gamma_s^{-1}C_1 e^{-C_2(h)n}\|f\|,
\end{equation}
where $\gamma_s$ is given by \eqref{e:gamma_s}, $C_1$ is a constant only depending on $C_P$ and
\begin{equation} \label{eqn:cst_C2}
C_2(h)\approx\frac{1}{\ln(C_P^2C_I^2h^{-2})} \qquad \textrm{when }h\to 0.
\end{equation}
\end{lemma}
\begin{proof}
We follow \cite{DS19} to construct a linear space $\Wn\subset\Vh$ with $\dim(\Wn)\leq n$ such that for some constants $c_1$, $c_2$ and $n\geq 1$ we have
\begin{equation} \label{eqn:step1}
d_n \leq \sup_{y \in \mathcal{D}_s}\inf_{v_h^n\in \Wn}\| \nabla(w_h(y)-v_h^n)\|\le c_1 e^{-c_2n}\| f\|.
\end{equation}
Let $\Wn:=\Span\{w_h(y_1),\ldots,w_h(y_n)\}$, where the $y_j$ are chosen such that the $e^{y_j}$ are the transformed Zolotar\"ev points on $[C_P^{-2},C_I^2h^{-2}]$ as in \cite{DS19}, see also \cite{gonvcar1969zolotarev,MR1328645}. Notice that this interval is dictated by the lower and upper bounds of the eigenvalues $\mu_i$, see \eqref{eqn:mu}. Now, given $y\in\mathcal{D}_s$, we define the approximation
\begin{equation} \label{eqn:vhn}
v_h^n({y}) := \sum_{j=1}^n\alpha_j(y)w_h(y_j)\in\Wn,
\end{equation}
where the coefficients $\alpha_j(y)$ are such that
\begin{equation*}
\frac{1}{1+e^{y}e^{-y_k}} = \sum_{j=1}^n\alpha_j(y)\frac{1}{1+e^{y_j}e^{-y_k}}, \qquad k=1,..,n.
\end{equation*}
The above system is a particular rational interpolation problem and has a unique solution according to Lemma 5.13 in \cite{DS19}.
Furthermore, thanks to Lemma 5.17 in \cite{DS19}, we have
\begin{equation*}
\left|\frac{1}{1+e^{y}\mu_i}-\sum_{j=1}^n\alpha_j(y)\frac{1}{1+e^{y_j}\mu_i}\right|\lesssim \frac{1}{1+e^{y}\mu_i}e^{-C^*n} , \quad i=1,\ldots, N_h,
\end{equation*}
where
\begin{equation*}
C^*=C^*(h)\approx\frac{1}{\ln(C_P^2C_I^2h^{-2})}.
\end{equation*}
Hence, the error between $w_h(y)$ in \eqref{evrepre} and $v_h^n({y})$ in \eqref{eqn:vhn} satisfies
\begin{eqnarray*} \label{eqn:err_Z2}
\|w_h(y)-v_h^n({y})\|_{H_0^1}^2 & = & \sum_{i=1}^{N_h}f_i^2\left(\frac{1}{1+e^y\mu_i}-\sum_{j=1}^n\alpha_j(y)\frac{1}{1+e^{y_j}\mu_i}\right)^2 \nonumber \\
& \lesssim & e^{-2C^*n}\sum_{i=1}^{N_h}f_i^2\left(\frac{1}{1+e^{y}\mu_i}\right)^2 \nonumber \\
& \lesssim & e^{-2C^*n}\|\nabla w_h(y)\|^2,
\end{eqnarray*}
where we have used the $H_{0}^1$-orthonormality of the $\{\varphi_i\}_{i=1}^{N_h}$.
With the help of \eqref{apriori_wh}, this implies
\begin{equation} \label{eqn:err_Z}
\|w_h(y)-v_h^n({y})\|_{H_0^1}^2 \lesssim e^{-2C^*n}C_P^2\|f\|^2.
\end{equation}
The above estimate is \eqref{eqn:step1} with $c_2=C^*$ and $c_1$ only depending on $C_P$ and the hidden constant in \eqref{eqn:err_Z}. Moreover, thanks to \eqref{apriori_wh} we also have $d_0\leq C_P\|f\|$, where $d_0$ is defined in \eqref{eqn:Kolmo_0}. Therefore, we have shown that the Kolmogorov $n$-width (see \eqref{eqn:Kolmo} and \eqref{eqn:Kolmo_0}) satisfies
\begin{equation} \label{eqn:step2}
d_n \leq c_1e^{-c_2n} \| f\|, \quad n\ge 0.
\end{equation}
To conclude, it remains to relate the error decay of the reduced basis generated by the weak greedy algorithm with parameter $\gamma_s$ (see \eqref{e:gamma_s}) to the Kolmogorov $n$-width $d_n$. Corollary 8.4 in \cite{CD15}, see also Corollary 3.3 in \cite{DPW13}, guarantees that (\ref{eqn:step2}) implies \eqref{eqn:exp_error_RB_w} with $C_2=c_2/6=C^*/6$ and $C_1=c_1\max(\sqrt{2},\gamma_s e^{C_2})\lesssim c_1$.
\end{proof}
\begin{remark} \label{rem:Maday}
We mention that an exponential decay of the reduced basis error for one-dimensional parametric problems of the form (\ref{def:pb_wh}) has already been obtained in \cite{maday2002priori,MPT02}. However, the exponential decay is only guaranteed for $n \geq n_\textrm{crit}$, for some integer $n_\textrm{crit}$ depending on the length of the parameter interval $[-M_s k,N_s k]$. We did not pursue this route as the latter restriction seems prohibitive for taking full advantage of the performance of the reduced basis method.
\end{remark}
We now use the exponential decay of the reduced basis error for $w_h$ obtained in Lemma \ref{lem:error_RB_w} to estimate the error for $u_{h,k}$ defined in \eqref{e:Es}.
\begin{lemma} \label{lem:E1}
Let $h$ and $k$ be the finite element and sinc quadrature parameters.
Let $r\in[0,1]$. For $n \geq 1$, we have
\begin{equation} \label{eqn:exp_error_RB_u}
\begin{split}
&\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \\
& \qquad \leq \frac{\sin(s\pi)}{(1-s)\pi}C_P^{1-r}\gamma_s^{-1} C_1 e^{-C_2(h)n}\left(e^{(1-s)N_s k}-e^{-(1-s)M_sk}\right)\|f\|,
\end{split}
\end{equation}
where $C_1$ and $C_2(h)$ are the constants in \eqref{eqn:exp_error_RB_w}.
\end{lemma}
\begin{proof}
Because $\mathbb{H}^r(\Omega)$ are interpolation spaces between $L^2(\Omega)$ and $H^1_0(\Omega)$, see \eqref{def:Hr}, the Poincar\'e inequality \eqref{def:Poincare} yields
\begin{equation*}
\begin{split}
\|w_h(y)-w_h^n(y)\|_{\mathbb{H}^r(\Omega)}& \leq \|w_h(y)-w_h^n(y)\|^{1-r} \| \nabla(w_h(y)-w_h^n(y)) \|^r \\
&\leq C_P^{1-r}\|\nabla(w_h(y)-w_h^n(y))\|.
\end{split}
\end{equation*}
Therefore, using the estimate \eqref{e:Es} for the error and invoking Lemma~\ref{lem:error_RB_w}, we get
\begin{equation*}
\begin{split}
&\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \\
& \qquad \leq \frac{k\sin(s\pi)}{\pi}C_P^{1-r} \gamma_s^{-1} C_1 e^{-C_2(h) n}\|f\|\sum_{l=-M_s}^{N_s}e^{(1-s)y_l} \\
& \qquad\leq \frac{\sin(s\pi)}{\pi}C_P^{1-r}\gamma_s^{-1} C_1 e^{-C_2(h)n}\|f\| \int_{-M_sk}^{N_sk}e^{(1-s)y}dy \\
& \qquad\leq \frac{\sin(s\pi)}{(1-s)\pi}C_P^{1-r}\gamma_s^{-1} C_1 e^{-C_2(h)n}\left(e^{(1-s)N_s k}-e^{-(1-s)M_sk}\right)\|f\|,
\end{split}
\end{equation*}
which is the claimed estimate.
\end{proof}
\begin{remark} \label{rem:expo_decay}
From \eqref{eqn:cst_C2}, we see that the constant $C_2(h)$ that appears in \eqref{eqn:exp_error_RB_w} and \eqref{eqn:exp_error_RB_u} tends to $0$ as $h$ tends to $0$. In other words, the performance of the reduced basis deteriorates as the target accuracy $\varepsilon$ tends to $0$. This phenomenon is observed in the numerical experiments reported in Figure \ref{fig:err_u_wrt_h} of Section \ref{sec:numres}.
\end{remark}
Using Lemma \ref{lem:E1}, we directly derive the following result, providing a sufficient condition on the dimension $n$ of the reduced space to bring the error below a specified tolerance $\delta>0$.
\begin{thm} \label{cor:err_Ds1}(offline construction of the reduced space)
Let $\varepsilon > 0$ be a given tolerance.
Assume that $h$ and $k$ are chosen so that \eqref{eqn:scaling_h_k} holds and that the reduced basis space is constructed such that \eqref{e:eps_W} holds.
Then, for any $\delta \geq \varepsilon$ we have
$$
\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \leq \delta \| f\|
$$
provided
\begin{equation} \label{eqn:n_star}
n \approx \ln( C \delta \varepsilon^{\frac{2-s}{s}}) \ln(\varepsilon^{1/\alpha^*}),
\end{equation}
where $C$ is a constant only depending on $s$ and $r$ and $\alpha^*$ is as in Theorem~\ref{thm:FE_error}.
In particular
$$
n \approx \ln(\varepsilon)^2
$$
when $\delta=\varepsilon$.
\end{thm}
\begin{proof}
The claims directly follow from Lemma~\ref{lem:E1} together with \eqref{eqn:scaling_h_k}.
\end{proof}
\subsection{Universal Reduced Basis Space} \label{sec:universal}
In the previous section we constructed a reduced basis space $\Vn$ to approximate $u_{h,k}(s)$ for a fixed $s \in (0,1)$. We now show that it is possible to take real advantage of the \emph{offline} work and construct reduced basis spaces approximating the map $s \mapsto u_{h,k}(s)$ for $s \in [s_{\min},s_{\max}]$, with $0<s_{\min} \leq s_{\max} <1$ fixed.
To see this, it suffices to adjust the constants depending on $s$ as follows. First, we let
\begin{equation} \label{def:NM}
M:=\left\lceil \frac{\pi^2}{(1-s_{\max})k^2} \right\rceil \quad \mbox{and} \quad N:=\left\lceil \frac{\pi^2}{s_{\min}k^2} \right\rceil.
\end{equation}
Then, we define the domain $\mathcal{D}:=[-Mk,Nk]$ containing $\mathcal{D}_{s}$ for all $s \in [s_{\min},s_{\max}]$ and, similarly to \eqref{e:gamma_s}, we introduce the parameter
\begin{equation}\label{e:gamma}
\gamma:=\left(\max_{y\in \mathcal{D}} (1+C_P^2e^y)\right)^{-1}= (1+C_P^2e^{Nk})^{-1}.
\end{equation}
Finally, for all $y\in\mathcal{D}$ we approximate $w_h(y)$ by $w_h^n(y)=P_{\Vn}w_h(y)$, where the reduced basis space $\Vn$ is constructed as detailed in Section~\ref{s:construction} upon replacing $M_s$, $N_s$, $\mathcal{D}_s$ and $\gamma_s$ by $M$, $N$, $\mathcal{D}$ and $\gamma$, respectively.
With this uniform construction, we directly obtain the universal version of Lemma~\ref{lem:E1} and Theorem~\ref{cor:err_Ds1}.
\begin{lemma} \label{lem:E2}
Let $h$ and $k$ be the finite element and sinc quadrature parameters. Let $r\in[0,1]$. For $n \geq 1$ and any $s\in[s_{\min},s_{\max}]$ we have
\begin{equation*}
\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \leq \frac{\sin(s\pi)}{(1-s)\pi}C_P^{1-r}\gamma^{-1} C_1 e^{-C_2(h)n}\left(e^{(1-s)N k}-e^{-(1-s)Mk}\right)\|f\|,
\end{equation*}
where $C_1$ and $C_2(h)$ are the constants in \eqref{eqn:exp_error_RB_w}.
\end{lemma}
\begin{thm} \label{cor:err_D}(offline construction of the universal reduced space)
Let $\varepsilon > 0$ be a given tolerance.
Assume that $h$ and $k$ are chosen so that \eqref{eqn:scaling_h_k} holds and that the reduced basis space is constructed such that \eqref{e:eps_W} holds.
Then, for any $\delta \geq \varepsilon$ we have
\begin{equation*}
\max_{s\in[s_{\min},s_{\max}]}\|u_{h,k}(s)-u_{h,k}^n(s)\|_{\mathbb{H}^r(\Omega)} \leq \delta \|f\|
\end{equation*}
provided
\begin{equation} \label{eqn:n_star_univ}
n \approx \ln( C \delta \varepsilon^{\frac{2-s_{\min}}{s_{\min}}}) \ln(\varepsilon^{1/\alpha^*}),
\end{equation}
where $C$ is a constant only depending on $s_{\min}$, $s_{\max}$ and $r$, and $\alpha^*$ is as in Theorem~\ref{thm:FE_error}. In particular
\begin{equation*}
n \approx \ln(\varepsilon)^2
\end{equation*}
when $\delta=\varepsilon$.
\end{thm}
\section{Numerical Experiments} \label{sec:numres}
We present numerical results to illustrate the performance of the reduced basis approach analyzed in the previous section. Since the focus of this paper is on the reduced basis approximation, the finite element mesh size $h$ and the sinc quadrature parameter $k$ are chosen sufficiently small not to influence the total error, unless otherwise specified. We refer to \cite{BP15,BLP18} for an extensive numerical study on the influence of the discretization parameters $h$ and $k$. The space $\Vn$ is built using a weak greedy algorithm on $[-Mk,Nk]$, starting with the \emph{snapshot} $w_h(0)$. Moreover, the \emph{training set} $\Theta$ consists of $10000$ uniformly distributed points in $[-Mk,Nk]$. Finally, we set $f=1$ in all the numerical examples.
\subsection{1D example} \label{sec:numres_1D}
We consider the case $\Omega=(0,1)$. The subdivision $\Th$ of $\Omega$ consists of a uniform partition of $[0,1]$ with subintervals of length $h=2^{-12}$. The sinc quadrature parameter is fixed to $k=0.5$ and the fractional power $s$ varies from $s_{\min}=0.1$ to $s_{\max}=0.9$. In this setting, we have $N=M=395$ and $[-Mk,Nk]=[-197.5,197.5]$.
\subsubsection{Reduced basis error for $w_h$}
We provide in Figure \ref{fig:err_w}-left the evolution of
\begin{equation*}
e_w(n):=\sup_{y_l \in \Theta} \|w_h(y_l)-P_{\Vn}w_h(y_l)\|_{H_0^1(\Omega)}
\end{equation*}
versus $n\geq 1$ as indicator of the error
\begin{equation*}
\sup_{y \in [-Mk,Nk]} \|w_h(y)-P_{\Vn}w_h(y)\|_{H_0^1(\Omega)}.
\end{equation*}
The observed exponential decay matches the estimate of Lemma~\ref{lem:error_RB_w}.
Moreover, Figure \ref{fig:err_w}-right reports the values of the selected parameters $y^n$ by the weak greedy procedure. We observe that except for $y^2$, they are all located in the interval $[0,20]$. This behavior can be in part explained by the estimates provided in Lemma~\ref{lem:tmp_res}, indicating the robustness of $w_h(y)$ for small values of $y$ and the smallness of $\|w_h(y)\|$ for $y$ large. The fact that no negative $y$ is selected is also attributable to the choice of the initial snapshot, namely $w_h(0)$, which already provides a good approximation of $w_h(y)$ for $y\leq 0$. We mention that similar results are obtained when changing the range for $s$, for instance setting $s_{\min}=0.01$ and $s_{\max}=0.99$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{error_H10_w_1D_0109_MN_ref}
\includegraphics[width=0.45\textwidth]{selected_snapshots_MN}
\caption{Left: reduced basis error $e_w(n)$ versus $n$. Right: parameters $y^n$ selected by the weak greedy procedure during the construction of the reduced basis space.} \label{fig:err_w}
\end{figure}
We comment on the use of an \emph{a posteriori} error estimate (weak greedy) in place of the \emph{true} error (greedy), see \eqref{eqn:equiv_err_est}. For this, we compare in Figure \ref{fig:err_w_comparison} the performance of both algorithms and we can conclude that very little efficiency is lost in using the computable \emph{a posteriori} error estimator.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{comparison_greedy_weak_greedy}
\includegraphics[width=0.45\textwidth]{selected_snapshots_weak_greedy}
\caption{Comparison of the greedy and weak greedy strategies. Left: error $e_w(n)$ associated with the greedy and weak greedy strategies. The dashed line represents the equivalent quantity $\max_{y\in\Theta}\|r_n(\cdot;y)\|_{\Vh'}$, see \eqref{eqn:equiv_err_est}. Right: selected parameters $y$ for both strategies.} \label{fig:err_w_comparison}
\end{figure}
\subsubsection{Reduced basis error for $u_{h,k}$}
We now turn our attention to the approximation of $u_{h,k}(s)$ by $u_{h,k}^n(s)$. Figure \ref{fig:err_u} depicts the evolution of
\begin{equation*}
e_{u(s)}(n):=\|u_{h,k}(s)-u_{h,k}^n(s)\|
\end{equation*}
for various values of the fractional power. In agreement with Theorem~\ref{cor:err_D}, exponential decay is observed in all cases when using the universal reduced basis space. Notice that in this experiment the sinc quadrature requires $440$ points for $s=0.1,0.9$, $190$ points for $s=0.3,0.7$ and $159$ points for $s=0.5$ to guarantee a sinc quadrature error of the order $e^{-\pi^2/k}\approx 2.7\times 10^{-9}$ for $k=0.5$. A comparable reduced basis accuracy in the $L^2(\Omega)$ norm is achieved with only $n=20$ for $0.5\leq s\leq 0.9$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{error_L2_u_1D_0109_MN_ref}
\caption{Error $e_{u(s)}(n)$ with respect to $n$ for various values of $s$ in $[0.1,0.9]$. Exponential decay is observed in all cases using the universal reduced basis space.} \label{fig:err_u}
\end{figure}
Finally, we study numerically the behavior of the constant $C_2(h)$ given by (\ref{eqn:cst_C2}). We set $s=0.1$ and consider a sequence of uniform partitions of $[0,1]$ with subintervals of length $h=2^{-j}$ for $j=6,8,10,12$. The reduced basis error $e_{u(0.1)}(n)$ versus $n$ is reported in Figure \ref{fig:err_u_wrt_h} for each finite element discretization. As predicted by Theorem~\ref{cor:err_D}, the exponential decay encoded in $C_2 = C_2(h)$ deteriorates as $h \to 0$. The $L^2(\Omega)$ norm of the error behaves like $e^{-1.4n}$ for $h=2^{-6}$ and $e^{-0.7n}$ for $h=2^{-12}$.
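The reported decay coefficients can be extracted, for instance, by a least-squares fit of $\log e_{u(0.1)}(n)$ against $n$. The following sketch uses synthetic stand-in data (the measured errors would be used in practice):
\begin{verbatim}
# Estimate the exponential decay coefficient from error samples:
# fit log(err) ~ intercept + slope * n, so err ~ exp(slope * n).
import numpy as np

ns = np.arange(1, 31)
rng = np.random.default_rng(0)
errs = np.exp(-0.7 * ns) * np.exp(0.05 * rng.standard_normal(ns.size))

slope, intercept = np.polyfit(ns, np.log(errs), 1)
print(f"error ~ exp({slope:.2f} * n)")   # slope approximately -0.7 here
\end{verbatim}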
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{error_L2_u_1D_0109_MN_wrt_h_ref}
\caption{Effect of the space discretization parameter $h$ in the exponential decay of the error $e_{u(0.1)}(n)$. In accordance with Theorem~\ref{cor:err_D}, the exponential decay coefficient deteriorates as $h$ decreases. } \label{fig:err_u_wrt_h}
\end{figure}
\subsection{2D examples}
We now consider two dimensional domains: a square domain $\Omega=(0,1)^2$ and an L-shaped domain $\Omega = (0,1)^2\setminus ([0,0.5]\times[0.5,1])$. The space discretization consists of a Delaunay triangulation with $22968$ elements for $\Omega=(0,1)^2$ and $17190$ elements for $\Omega = (0,1)^2\setminus ([0,0.5]\times[0.5,1])$. In both cases, the elements in the triangulation have diameters between $0.005$ and $0.01$. All the other parameters are the same as in Section \ref{sec:numres_1D}.
The evolution of the reduced basis error $e_{u(s)}(n)$ for different values of $s$ is reported in Figure \ref{fig:err_u_2D}. As in the one-dimensional case, exponential decay is observed for all values of $s \in [0.1,0.9]$, irrespective of the shape of the domain.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{error_L2_u_2D_Square_0109_MN_ref}
\includegraphics[width=0.45\textwidth]{error_L2_u_2D_Lshape_0109_MN_ref}
\caption{Error $e_{u(s)}(n)$ with respect to $n$ for various values of $s$ in $[0.1,0.9]$. Left: unit square domain. Right: $L$-shape domain.} \label{fig:err_u_2D}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}\label{ss:intro}
As learning algorithms increasingly get deployed to inform decision making, a growing literature has emerged that seeks to provide valid estimation of treatment effects in high dimensions \citep{li2021bounds,mozer2020,chernozhukov2018double,yoon2018ganite,wager2018estimation,shalit2017estimating,hill2011bayesian,schneeweiss2009high, Belloni2014, alexanderOverlapObservationalStudies2021, daoudStatisticalModelingThree2020}. Yet there is a lack of methodological frameworks for causal estimation when confounding is induced by patterns or objects observed in an image \citep{castroCausalityMattersMedical2020}.
Images are prevalent not only in social media, but also in the health setting and humanitarian context. For instance, after analyzing an X-ray image of a cancer patient, a doctor may alter their treatment protocol for this patient; the diagnostic information in the image is also correlated with the survival outcome \citep{castroCausalityMattersMedical2020}. Likewise, a policymaker may consult maps to evaluate where to allocate aid to villages that otherwise may remain poor \citep{holmgrenSatelliteRemoteSensing1998a,bediMorePrettyPicture2007}. Besides annotated maps---where objects such as forests, roads, and cities are named on the map---policymakers are increasingly reliant on raw satellite images where no annotations exist. For example, since 2000, policymakers frequently rely on raw satellite images to evaluate damage due to natural disasters or war \citep{voigtGlobalTrendsSatellitebased2016, burkeUsingSatelliteImagery2021, kinoScopingReviewUse2021}.
Based on these images, they decide where to intervene helping the poor \citep{borieMappingResilienceCity2019,daoudUsingSatellitesArtificial2021a}.
These disparate examples share a common structure: an actor examines an image $I$, looking for certain latent patterns $U$ which guide the choice of intervention $T$. These observed patterns indicate the existence of real-world objects which directly influence the outcome of treatment $Y$, thereby inducing confounding if not adjusted for. However, as image patterns $U$ are often unlabeled, even when $I$ is observed, these patterns and objects are difficult to adjust for directly \citep{voigtGlobalTrendsSatellitebased2016}.
In this article, we study observational causal inference in the presence of confounding due to latent image patterns. We describe several causal structures plausibly consistent with real-world applications and discuss their implications on identification and estimation. We focus on the conditions under which the image alone is sufficient to adjust for the confounding introduced by $U$. This holds, for example, when $U$ may be derived deterministically from $I$. An important special case of image pattern confounding occurs when decision makers make choices based on the (translation-invariant) existence of a pattern in the image, which motivates adjustment techniques based on convolutional models.
We study finite-sample estimation of average treatment effects in the fully identified case by conducting a simulation in which confounders are derived from convolution of a 2D filter with the observed image. In this setting, we investigate the impact of model misspecification on estimates. Finally, we demonstrate the use of the proposed estimation framework in an application in which we evaluate the effect of international aid programs on poverty by estimating treatment propensity in a geographic region with a convolutional neural network, motivated by our identification results.
There are several important contributions on how algorithms may discover causal structures from images \citep{chalupkaMultiLevelCauseEffectSystems2016,chalupkaUnsupervisedDiscoveryNino2016,chalupkaVisualCausalFeature2015,scholkopfCausalRepresentationLearning2021,yiCLEVRERCoLlisionEvents2020,dingDynamicVisualReasoning2021}, but there is a lack of systematic analysis of how to use images for causal inference, where latent objects in the image may affect treatment and outcome \citep{castroCausalityMattersMedical2020}. Image data is omnipresent, from online advertisement to precision agriculture. Thus, developing techniques for \emph{causal inference under image pattern confounding} would open up new avenues for observational studies \citep{pawlowskiDeepStructuralCausal2020, singlaExplanationProgressiveExaggeration2020, kaddour2021causal}.
\section{Related Work and Contribution}
While the literature on generalized linear regression modeling is well-developed in articulating the bias in model parameter estimation due to spatial dynamics \citep{paciorek2010importance}, there have been fewer works on the nature of image-based confounding in the causal inference context \citep{castroCausalityMattersMedical2020, reddyCANDLEImageDataseta, duAdversarialBalancingbasedRepresentation2021}. Some examples include work on estimating counterfactual outcomes under spatially defined counterfactual treatment strategies \citep{papadogeorgou2020causal}, on accounting for spatial interdependence in causal effect estimation \citep{reich2021review}, and on balancing covariate representations using adversarial networks and image data \citep{kallus2020deepmatch}. Other works address images as treatments \citep{kaddour2021causal}, counterfactual inference and interpretability \citep{pawlowskiDeepStructuralCausal2020, singlaExplanationProgressiveExaggeration2020}, and image-based treatment effect heterogeneity \citep{jerzak2022image}.
Our work is related to the literature on identification via proxies~\citep{tchetgen2020introduction} or drivers~\citep{pearl2013linear} of confounders. For the former, \citeauthor{louizosCausalEffectInference2017a} developed the Causal Effect Variational Autoencoder (CEVAE), which uses proxies to infer the distribution of the latent confounder and uses this in adjustment. In contrast, our approach adjusts for an observed variable---the image. Our article formalizes key assumptions required for the correctness of this method and provides a general framework for conducting causal inference using images, where unlabeled objects in the image may affect both treatment and outcome \citep{castroCausalityMattersMedical2020}. This image-based confounding bias might in some circumstances be equivalent to traditional spatial interdependence, but differs insofar as the confounding is defined with reference to unlabeled entities in the image \citep{paciorek2010importance}. Relying on our formalization and model implementation, we analyze aid interventions (treatment) and poverty (outcome) in Africa.
As discussed in \S\ref{ss:intro}, policymakers often rely on satellite images for aid intervention \citep{voigtGlobalTrendsSatellitebased2016, bediMorePrettyPicture2007}. Thus, our approach also facilitates future policymaking---hopefully contributing to the use of AI for social good.
\section{Causal Effect Estimation Under Image Pattern Confounding}\label{ss:theory}
\paragraph{Motivation}
Because each image is defined at a more granular resolution than the unit of analysis, we can use it to potentially reconstruct some of the unobserved confounders by learning the function generating the confounder from the image. For example, assume that researchers seek to analyze the effect of a village-level treatment. From a government census, they obtain mean income information for the village ($s$) and then perform an analysis assuming $Y_s(0),Y_s(1)\perp T_s|X_s$, where $X_s$ contains the income data, $T_s$ is the treatment, and $Y_s(t)$, the potential outcome under $t\in\{0,1\}$. Yet, unless the mean income is the true confounder, the analysis will still be biased, which would occur if, in fact, \emph{minimum} income drove the decision to allocate $T_s$. However, using satellite images for each scene, we seek to reconstruct the minimum income signal based on our access to the higher-resolution data.
In other words, we can weaken the assumption used in many empirical analyses that the variables measured at the scene-level in fact contain the true confounders, when in fact there may be highly non-linear functions which use more granular information in generating the confounding structure. With image data, we, in principle, can hope to reconstruct some of those factors using advances in image-based machine learning models effective in the prediction domain (e.g., \citet{sun2013deep}).
\paragraph{Baseline Confounding Model}
With this motivation in mind, we study identification and estimation of the average treatment effect (ATE) of $T$ on a real-valued outcome of interest, $Y$, based on observational data. With $Y(1)$ being the potential outcome~\citep{rubin2005causal} under intervention with $T=1$ and $Y(0)$ for $T=0$,
$$
\text{ATE} = \mathbb{E}[Y(1) - Y(0)]~.
$$
The ATE can represent, for example, the difference in average wealth in villages after anti-poverty interventions and after no intervention, respectively. In our setting, historical interventions were determined by a decision-maker in the context of patterns derived from an image $I$ using an unknown process. Although we focus on a decision-maker being affected by patterns in an image (that is $I \to U$), our approach could be generalized to the case where a phenomenon (e.g., a natural disaster) is imprinted in an image ($U \to I$) and that phenomenon affects the outcome (poverty) and the probability of humanitarian aid (treatment). In the former case the image acts as a \textit{driver} of the confounder and in the latter as a \textit{proxy}, yet both enable the same type of analysis \citep{pearlLinearModelsUseful2013} that is found in the proxy literature \citep{kurokiMeasurementBiasEffect2014,louizosCausalEffectInference2017a}. To understand estimation dynamics of the ATE from observational images, and adjust for induced confounding bias, we analyze the data generating process next.
\label{ss:structure}
As a baseline, consider the causal graph in Figure~\ref{fig:SimpleDag}, which depicts a classical confounding relationship where a treatment of interest ($T_{swh}$), such as an anti-poverty intervention, is associated with factors (such as the presence of mineral extraction sites), that affect both the treatment and the outcome ($Y_{swh}$). Observed confounding variables are grouped in $X_{swh}$; unobserved confounders in $U_{swh}$.
\begin{figure}[t]
\begin{subfigure}{0.48\textwidth}
\centering
\vspace{1em}
\begin{center}
\tikzstyle{main node}=[circle,draw,font=\sffamily\small\bfseries]
\tikzstyle{sub node}=[circle,draw,dashed,font=\sffamily\small\bfseries]
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,thick]
\node[sub node] (1) {$U_{swh}$};
\node[main node] (3) [below right of=1] {$T_{swh}$};
\node[main node] (4) [right of=3] {$Y_{swh}$};
\node[main node] (5) [right of=1] {$X_{swh}$};
\path[every node/.style={font=\sffamily\small}]
(1) edge node {} (3)
(1) edge node {} (4)
(3) edge node {} (4)
(5) edge node {} (3)
(5) edge node {} (4);
\end{tikzpicture}
\end{center}
\vspace{2.5em}
\caption{Baseline DAG. }\label{fig:SimpleDag}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\tikzstyle{main node}=[circle,draw,font=\sffamily\small\bfseries]
\tikzstyle{sub node}=[circle,draw,dashed,font=\sffamily\small\bfseries]
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,thick]
\node[sub node] (1) {$U_{swh}$};
\node[main node] (2) [below of=1] {$I_{sh'w'}$};
\node[main node] (3) [below right of=1] {$T_{swh}$};
\node[main node] (4) [right of=3] {$Y_{swh}$};
\node[main node] (5) [right of=1] {$X_{swh}$};
\node[rectangle,draw=gray, fit=(2),inner sep=2mm,label=below:{$w',h'\in \Pi_s(hw)$}] {};
\path[every node/.style={font=\sffamily\small}]
(1) edge node {} (3)
(1) edge node {} (4)
(2) edge node {} (1)
(3) edge node {} (4)
(5) edge node {} (3)
(5) edge node {} (4);
\end{tikzpicture}
\caption{DAG with image-based confounding. }\label{fig:ConvDag}
\end{subfigure}
\caption{Causal DAGs representing variables associated with a scene $s$. In our running example, $U_{swh}$ represents unobserved confounders, $X_{swh}$ observed confounders, $T_{swh}$ treatment and $Y_{swh}$ the outcome, all at location $w,h$ in scene $s$. In the right-hand DAG, latent confounders $U_{swg}$ are determined by a neighborhood $\Pi_s(hw)$ of the location $h, w$ in the image representing scene $s$.}
\end{figure}
Here, $s$ denotes the image scene (e.g., village), $w$ and $h$ denote horizontal and vertical location in that scene. It is important to emphasize that $s$ need not be spatially defined, as in the case of X-ray data, where the scene index could refer to a patient or body parts.
\paragraph{Pixel-based Image Confounding}
We now turn to the causal model in Figure~\ref{fig:SimpleDag}, which depicts the kind of confounding dealt with in much of observational research \citep{rosenbaum1983central}. We extend this model to describe image-based confounding. First, we define this at the local level, where treatments are implemented at specific locations $h,w$ (in, for example, the precision agriculture context; \citet{liaghat2010review}).
To give a precise example, we introduce the following notation. Let $\Pi_s(hw)\subseteq \mathbb{N}^{2}$ denote the set of location indices involved in the spatial neighborhood analysis at $swh$. For example, in the case of a $z\times z$ square filter applied to the image ($z\in \mathbb{N}$), %
\begin{equation*}
{
\Pi_s(hw) = \big\{h-\lfloor z/2\rfloor,...,h+\lfloor z/2\rfloor\big\} \times
\big\{w-\lfloor z/2\rfloor,...,w+\lfloor z/2\rfloor\big\}
}
\end{equation*}
With this notation, we can define the confounding structure induced by image-based confounding at the neighborhood level in Figure~\ref{fig:ConvDag}. Figure \ref{fig:ConvDag} is a formulation of spatial interdependence, as the image information for indices in $\Pi_s(hw)$ affects the confounder $U_{swh}$. Conditional on the value of the confounder, the treatment decision for each unit is made. This is illustrated by a decision maker who scans a scene looking for similarity of the neighborhood around $swh$ to some mental image, defined, for example, by an image filter $l$, and makes a decision on this basis.
We illustrate this process on satellite image data from Landsat (see \S\ref{ss:data} for details). We let a single convolutional filter in the form of a diagonal matrix represent the image pattern pursued by a decision maker, as depicted in Figure \ref{fig:conv}. After applying $f_l$ to the raw image shown in the center panel of Figure \ref{fig:conv}, we obtain the resulting image-derived confounder values shown in the right panel. This is a simple illustration of how the presence of objects in images (such as a diagonal line) can generate the confounder values.
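To make this construction concrete, a minimal Python sketch of the confounder-generating step is given below; the $3\times 3$ diagonal kernel and random image are illustrative stand-ins for the filter of Figure \ref{fig:conv} and the Landsat reflectance data.
\begin{verbatim}
# Sketch: pixel-level confounder U_{swh} = f_l(I_{s,Pi_s(hw)}) obtained
# by correlating a diagonal z-by-z kernel with the image (zero padding
# at the border); stand-ins, not the exact filter/data of the paper.
import numpy as np
from scipy.signal import correlate2d

z = 3
kernel = np.eye(z)                         # diagonal filter l
rng = np.random.default_rng(0)
image = rng.random((64, 64))               # stand-in for scene image I_s

U = correlate2d(image, kernel, mode="same", boundary="fill")
\end{verbatim}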
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[width=.8\textwidth]{./ConfoundingConvolutionKernel.pdf}
\vspace{-1em}
\label{fig:convKern}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.65\textwidth}
\centering
\includegraphics[width=.85\textwidth]{./ConvolutionIllustrate.pdf}
\end{subfigure}
\caption{\emph{Left.} The kernel filter used to generate $U_{swh}$ in the right-hand image. \emph{Center, right.} Illustration of image-based confounding using Landsat data for Nigeria. The center panel depicts the raw reflectance; the right panel depicts the transformed values after convolution with the filter, values which enter the model for treatment/outcome to generate confounding.}\label{fig:conv}
\end{figure}
\paragraph{Scene-based Image Confounding}
In practice, however, multiple scales are at play: the image is defined at one resolution, but the treatment, outcome, and confounder potentially at another. To take one example, the treatment and outcome data (e.g., mortality) are defined in the medical context at the patient level, but the X-ray data itself, of course, contains additional height and width dimensions. To take another example, a policymaker may examine an entire village, looking for the best maximum or average similarity to some target pattern: the village is the unit to which treatment is allocated and the confounder is defined at that level, but is created using more granular information. Thus, our approach accommodates situations where the confounder, treatment and outcome are measured at different scales. For the sake of clarity, we will focus on the case where these variables are measured at the same granularity, as this is one of the most common settings in applied research.
Hence, we define the following causal model. Let $\Pi_s \subseteq \mathbb{N}^{2}$ denote the height and width indices (locations) used when aggregating up information to the final scene-based unit of analysis, $s$. This scene-based causal model is significant because, while some studies are able to obtain resolved (e.g., household-level) outcome data,
this data may in other cases be costly or even impossible to obtain due to privacy reasons. In these situations, data can only measured at the scene level.
\begin{figure}[t]
\begin{subfigure}{.48\textwidth}
\vspace{2.8em}
\begin{center}
\tikzstyle{main node}=[circle,draw,font=\sffamily\small\bfseries]
\tikzstyle{sub node}=[circle,draw,dashed,font=\sffamily\small\bfseries]
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.8cm,thick]
\node[sub node] (0) {$U_{swh}$};
\node[sub node] (1) [right of=0] {$U_s$};
\node[main node](2)[below of =0, yshift=2mm]{$I_{sh'w'}$};
\node[main node] (3) [below right of=1] {$T_s$};
\node[main node] (4) [right of=3] {$Y_s$};
\node[main node] (5) [right of=1] {$X_s$};
\node[rectangle,draw=gray, fit=(0) (2),inner sep=7mm,label=below:{$w,h\in \Pi_{s}$}] {};
\node[rectangle,draw=gray, fit=(2),inner sep=1mm,label=below:{\tiny $w',h'\in$ \newline $\Pi_s(hw)$}] {};
\path[every node/.style={font=\sffamily\small}]
(1) edge node {} (3)
(1) edge node {} (4)
(2) edge node {} (0)
(0) edge node {} (1)
(3) edge node {} (4)
(5) edge node {} (3)
(5) edge node {} (4) ;
\end{tikzpicture}
\end{center}
\caption{Image-based confounding at the scene-level.}\label{fig:ComplexConvDag}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\begin{center}
\tikzstyle{main node}=[circle,draw,font=\sffamily\small\bfseries]
\tikzstyle{sub node}=[circle,draw,dashed,font=\sffamily\small\bfseries]
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.8cm,thick]
\node[sub node] (0) {$U_{swh}$};
\node[sub node] (1) [right of=0] {$U_s$};
\node[sub node] (6) [above of=1] {$R_s$};
\node[main node](2)[below of =0, yshift=2mm]{$I_{sh'w'}$};
\node[main node] (3) [below right of=1] {$T_s$};
\node[main node] (4) [right of=3] {$Y_s$};
\node[main node] (5) [right of=1] {$X_s$};
\node[rectangle,draw=gray, fit=(0) (2),inner sep=7mm,label=below:{$w,h\in \Pi_{s}$}] {};
\node[rectangle,draw=gray, fit=(2),inner sep=1mm,label=below:{\tiny $w',h'\in$ \newline $\Pi_s(hw)$}] {};
\path[every node/.style={font=\sffamily\small}]
(1) edge node {} (3)
(6) edge node { Objects not in image} (1)
(1) edge node {} (4)
(2) edge node {} (0)
(0) edge node {} (1)
(3) edge node {} (4)
(5) edge node {} (3)
(5) edge node {} (4);
\end{tikzpicture}
\end{center}
\caption{Some confounding not observed in image.}\label{fig:ResidualBias}
\end{subfigure}
\caption{}
\end{figure}
\paragraph{SUTVA} Before discussing how we identify the causal effect of interest, we discuss how the Stable Unit Treatment Value Assumption (SUTVA) affects our problem formulation. This assumption entails that treating any unit \textit{i} should not affect the treatment status or outcome of another unit \textit{j}. In other words, each unit is i.i.d., as defined by our DAG. However, in spatial analysis, it may be harder to defend SUTVA. Units that are closer in space may affect each other (via ``spillover effects''). For example, a policymaker allocating aid to village \textit{i} may unintentionally affect aid in village \textit{j}. But SUTVA violations, and other forms of dependence (e.g., spatial clustering), can in principle be accounted for by specifying an appropriate variance-covariance structure \citep{sinclairDetectingSpilloverEffects2012}. Still, we will focus on the simplest case where we assume SUTVA holds as a first approximation.
\paragraph{Identification}\label{ss:identification}
We confirm that, under image-based confounding as formalized in Figures \ref{fig:ConvDag} and \ref{fig:ComplexConvDag}, treatment effects may be identified by adjusting for the image $I$. To start, we assume that the confounder $U$ is a deterministic function of $I$ and return to a case where $U$ has multiple causes later. This is justified, for example, in applications where confounding is based on the existence of an object, either because the policymaker scanned $I$ for the object, prompting them to allocate a treatment in that area of interest, or because the object can be identified from $I$ without error.
As $U$ is determined fully by $I$, ruling out other potential noise sources, there exists a deterministic function $f$ such that $U = f(I)$. The aforementioned case of $U$ being the (pooled) convolution of a 2D filter with the image $I$ satisfies this assumption.
Here, we suppress dependence on the indexes $s,w,$ and $h$ since the same argument applies if $T/U/Y$ are defined at the $swh$ (pixel) or $s$ (scene) levels.
\begin{proposition}\label{prop:identification}
Suppose the confounder $U$ is deterministic given the image $I$, such that $U = f(I)$, (with $f$ unknown), and that the causal structure obeys either of Figures \ref{fig:ConvDag} \& \ref{fig:ComplexConvDag}. Then, $p(Y(t))$ and therefore ATE of $T$ on $Y$ is identifiable from $p(I, X, T, Y)$.
\end{proposition}
\begin{proof} For simplicity of exposition, we give the proof for the case without additional confounding variables $X$. The proof generalizes readily to non-empty $X$, by marginalization and conditioning. The claim follows from $U$ being a deterministic function of $I$. By the backdoor criterion applied to the graphs in Figures \ref{fig:ConvDag} \& \ref{fig:ComplexConvDag}, $X, U$ is an adjustment set for the effect of $T$ on $Y$, which implies exchangeability of potential outcomes: $Y(t) \perp T \mid U, X$, see e.g., \citet{hernanbook}. In the case with empty $X$,
\[
p(Y(t)) = \sum_{u} p(Y|T=t,U=u)p(U=u).
\]
Since $U$ is a deterministic, but not necessarily invertible function of $I$, $U=f(I)$, we have that
\begin{align}
p(Y\mid T=t, U=u) &= p(Y\mid T=t, I \in f^{-1}(u)) \;\;\mbox{ and }\;\;
p(U=u) = \sum_{i \in f^{-1}(u) } p(I=i) \label{eq:marginal-u}
\end{align}%
where $f^{-1}$ is the inverse map of $f$, so that
\begin{align*}
p(Y(t)) &= \sum_u \sum_{i \in f^{-1}(u) } p(Y|T=t,I=i)p(I=i) = \sum_i p(Y|T=t,I=i)p(I=i)~.
\end{align*}
Hence, $I$ is also an adjustment set for $T$ on $Y$. From a similar proof, we see that $X, I$ is an adjustment set in the case of non-empty $X$. From here, standard arguments \citep{rosenbaum1983central} show that
$$
\mathbb{E}[Y(t)] = \mathbb{E}\left[Y \frac{p(T=t)}{p(T=t\mid I)} \ \bigg| \ T=t \right]
$$
which justifies use of inverse-propensity weighting with respect to $I$.
\end{proof}
\paragraph{Image an Imperfect Confounder Proxy}\label{ss:imperfect}
The argument of Proposition~\ref{prop:identification} rests on the assumption that the image contains all information about the latent confounder. When treatment decisions are made based on object detection, this assumption would be satisfied if the image contains all objects that are relevant to the outcome and treatment decision. This is violated if, for example, unlabeled objects, depicted as $R_s$ in Figure \ref{fig:ResidualBias}, that are themselves the driver of the treatment decision but are not possible to reconstruct from the image data. If the image data imperfectly depicts those objects, full identification is no longer possible as there is a possibility of residual confounding. Specifically, the inverse map $f^{-1}$ in \eqref{eq:marginal-u} is no longer uniquely or well defined as a set of images; $R_s$ must be adjusted for as well. In this imperfect case, the image becomes a \textit{driver} of the confounding, and thus, has similar properties to proxies \citep{pearlLinearModelsUseful2013,penaMonotonicityNondifferentiallyMismeasured2020a}.
\paragraph{Central Role of Resolution} For satellite imagery data of particular relevance to social and health science applications, resolution is a key driver of this residual confounding. We can only adjust for confounder objects in the image that can be resolved. Smaller confounder objects therefore introduce residual bias, indicating how technological improvements to sensor technology play a critical role in improving image-based causal inference methods.
\paragraph{Finite Sample Estimation Considerations}\label{ss:estimation}
We have shown how, under assumptions, the image is itself an adjustment set for estimating the effect of programs on outcomes in the context of image-based confounding. However, non-parametric inference is difficult in the image context because no two images are the same. Thus, the probability of seeing comparable treatment/control images is zero, violating overlap assumptions necessary for model-free inference \citep{d2019comment}. Machine learning models for the image may seek to estimate $U$, forming latent representations for the image. In this lower-dimensional space, there is more likely to be empirical overlap between treatment/control, justifying the use of modeling approaches like the ones discussed in \S\ref{ss:experiments} and \S\ref{sec:mu_empirical}. Thus, while adjusting directly for $U$ would fulfill the overlap assumptions optimally, this is infeasible; when adjusting for $I$ instead, a critical argument for our approach to work is that the propensities depend only on aspects of $I$ that capture $U$, aspects assumed to be compressible to a lower-dimensional representation. As such, the situation is more benign than if propensities depended freely on all patterns in $I$.
\section{Experiments Illustrating Performance Under Model Misspecification}\label{ss:experiments}
Although the identification results described in \S\ref{ss:identification} are general, they are also theoretical and, as described in \S\ref{ss:estimation}, there are several practical considerations that will affect performance in real data. In order to understand these dynamics, we use simulation, since (in practice) true causal targets are unknown. In these simulations, the image data is observed, but the confounding features are not, and must be estimated via machine learning, as is the case in real applications.
\paragraph{Simulation Design} In the first simulation, pixel-level confounding is generated via convolution applied to our set of Landsat images from Nigeria:
\[
U_{swh} = f_l(I_{s\Pi_s(hw)}),
\]
where $f_l(\cdot)$ denotes the linear kernel function whose single kernel filter, $l$, is a diagonal matrix shown in Figure \ref{fig:conv}. The set of images used in the simulation is drawn from the corpus of Landsat satellite images that we later use in the application (see \S\ref{ss:data}).
Because the confounder, $U_{swh}$ enters the equation both for the treatment probability and the outcome, we have confounding:
\begin{align*}
\Pr(T_{swh}=1 \mid I_s) &= \textrm{logistic}( \beta U_{swh} + \epsilon_{swh}^{(W)}) \;\;\mbox{ and }\;\; Y_{swh} = \gamma U_{swh} + \tau \, T_{swh} + \epsilon_{swh}^{(Y)},
\end{align*}
where the error terms, $\epsilon_{swh}^{(W)}$ and $\epsilon_{swh}^{(Y)}$, are drawn i.i.d. from $N(0,0.1)$ and $\textrm{logistic}(x)=1/(1+\exp(-x))$ maps real numbers to values between 0 and 1.
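A compact sketch of this data-generating process follows; the coefficient values are illustrative, not those used in our experiments, and $U$ stands in for the convolution output of the earlier snippet.
\begin{verbatim}
# Sketch of the simulation DGP: logistic treatment model and linear
# outcome model, both driven by the image-derived confounder U.
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(size=(64, 64))          # stand-in for convolution output
beta, gamma, tau = 1.0, 1.0, 0.5       # illustrative coefficients

eps_w = rng.normal(0.0, 0.1, size=U.shape)
eps_y = rng.normal(0.0, 0.1, size=U.shape)

p_treat = 1.0 / (1.0 + np.exp(-(beta * U + eps_w)))  # Pr(T=1 | I)
T = rng.binomial(1, p_treat)                         # pixel treatment
Y = gamma * U + tau * T + eps_y                      # pixel outcome
\end{verbatim}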
We vary the dimensions of the convolutional filter, $l$, keeping the structure of the filter pattern fixed (see Figure \ref{fig:conv}) but normalizing as we vary the dimensionality so that the convolution output values have the same mean and variance. By varying the filter dimensions, we alter the size of the neighborhood used in calculating the pixel-level confounder. A larger filter size means that the treatment allocator scans a larger region around each point in assessing proximity to the target pattern.
\paragraph{Estimators}
We compare three estimators. First, we examine the difference-in-means estimator, $\hat{\tau}_{\textrm{0}}$, defined as the difference in conditional outcome means for the two treatment groups. Because of confounding, this quantity is biased for $\tau$. We use this value as a baseline and report our evaluation metrics relative to those from this estimator in order to account for the fact that, as we vary parameters in the data-generating process, the natural scale of the estimates varies as well.
We also report results from two Inverse Propensity Weighting (IPW) estimators \citep{austin2015moving}, one using the true (``oracle'') propensities that are not known in practice and another using a single-layer ConvNet trained via backpropagation with the binary cross-entropy loss. With estimated propensities $\hat{\pi}(I_s)=\widehat{\textrm{Pr}}(T_s=1|I_s)$, the IPW estimator is $\widehat{\tau} = \frac{1}{n}\sum_{s=1}^n \left\{ \frac{T_s Y_s}{\hat\pi(I_s)} -\frac{(1-T_s)Y_s}{1-\hat\pi(I_s)}\right\}$ for the scene analysis and is defined analogously at the pixel level. We report results from the Hajek IPW estimator, in which the weights are normalized, reducing estimator variance at the cost of some finite-sample bias \citep{skinner2017introduction}.
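A minimal sketch of the Hajek estimator, given outcomes, treatments, and estimated propensities, is shown below; the propensity clipping is our own safeguard against extreme weights, not part of the estimator's definition.
\begin{verbatim}
# Hajek (normalized) IPW estimate of the ATE from outcomes y,
# treatments t, and estimated propensities pi_hat.
import numpy as np

def hajek_ipw(y, t, pi_hat, clip=1e-3):
    pi = np.clip(pi_hat, clip, 1.0 - clip)  # guard extreme propensities
    w1 = t / pi                             # weights for treated units
    w0 = (1 - t) / (1 - pi)                 # weights for control units
    # normalizing within each arm trades some bias for lower variance
    return (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
\end{verbatim}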
\paragraph{Evaluation Metrics}
For our evaluation, we compare the true $\tau$ with the estimated values. We report the absolute bias of $\hat{\tau}$ and the Root Mean Squared Error (RMSE):
\begin{equation*}
\textrm{Absolute Bias} = \big|\mathbb{E}[\hat{\tau} - \tau]\big|; \;\;
\textrm{RMSE} = \sqrt{\mathbb{E}[(\hat{\tau} - \tau)^2 ]},
\end{equation*}
where expectations are taken over the data-generating process and are approximated via Monte Carlo. As noted, these evaluation metrics are then normalized using those from the baseline $\hat{\tau}_0$.
\paragraph{Simulation Results}
First, in Figure~\ref{fig:PixelResultsVaryGlobality}, we show performance where the treatment and outcome are defined at the pixel-level. The bias and RMSE panels of Figure \ref{fig:PixelResultsVaryGlobality} look similar because the variance of estimation is small due to the large number of pixel realizations from which to estimate parameters.
As expected, we also find that the absolute bias and RMSE are minimized when the kernel width used in estimation is the same as the kernel width used to generate the true confounder. The bias and RMSE grow larger when the true kernel width is greater than the estimating kernel width. This finding is likely due to the fact that, when the neighborhood size used in estimation is larger than that used to create the confounder, the extra, irrelevant information can be ignored by the model. Conversely, when the neighborhood size generating the true confounding is larger than what the model can capture, bias is more severe because the model cannot account for this extra information. We find similar results for the scene-level simulations (see \ref{ss:AdditionalSims}).
\begin{figure}[t]
\centering
\centering
\includegraphics[width=0.52\linewidth]{./BiasVarRMSE_varyGlobality_pixel.pdf}
\caption{Pixel-level results varying extent of the spatial confounding. Results indicated with black circles are obtained using the estimated propensities; results indicated with gray circles are obtained using the true (oracle) values. The estimating kernel width is fixed at 8; the true kernel width varies.}\label{fig:PixelResultsVaryGlobality}
\end{figure}
\section{Empirical Illustration: The Impact of Local Aid Programs in Nigeria}\label{sec:mu_empirical}
Having examined finite-sample performance via simulation, in this section, we demonstrate our method in Nigeria, Africa's largest economy and a country that is projected to be the world's second most populous by 2100 \citep{vollset2020fertility}. Despite an average economic growth rate of around 3\% since 2000, about 40 percent of the Nigerian population lives below the poverty line (\$2 per day). In response, governments and NGOs have deployed a variety of local aid programs to the country. However, the causal impact of these programs is difficult to estimate \citep{roodman2008through}.
While geo-temporal data on poverty, $Y$, and interventions, $T$, are readily available, there is a lack of geo-temporal data on potential confounders, $U$, at the local level. While some of these confounders may be difficult to capture directly using images (such as the quality of political institutions), there may be information present in remote sensing imagery about other confounder objects related to infrastructure or agriculture \citep{schnebele2015review,steven2013applications}. We therefore use satellite images of Nigerian communities in order to estimate the impact of local aid programs.
\paragraph{Data Description}\label{ss:data}
Our outcome data on poverty is drawn from the Demographic and Health Surveys (DHS), which are conducted by a non-profit organization, ICF International, with funding from USAID, WHO, and other international organizations \citep{rutstein2006guide}. The DHS surveys are conducted at the household level for randomly selected clusters that aggregate to geographically explicit scenes of about 300 households for de-identification purposes. Our outcome measure is the International Wealth Index (IWI) from 2018, which is a principal-components-derived summary of 12 variables including households' access to public services and possession of consumer products such as a phone. Its scale is between 0 and 100, with higher values indicating greater wealth.
Our treatment data is drawn from a data set on international aid programs to Nigeria compiled by AidData \citep{aiddata2016} used under an Open Data Commons License. The aid programs we examine took place after 2003. The aid providers include entities such as the World Bank and WHO, as well as domestic governments such as the United States. The programs we examine are diverse, focusing on infrastructure (e.g., support for road development) and agriculture (e.g., support for small-scale farmers), among other things. For the simplicity of our presentation, we take the presence of a geographically-specific aid program within 7000 meters of a DHS point as our treatment.
Our pre-treatment image data is drawn from Landsat, a satellite imagery program operated by NASA/USGS. We use the Orthorectified ETM+ pan-sharpened data derived from the raw satellite imagery captured between 1998 and 2001; the raw data have been processed to contain minimal cloud-cover and to be correctly geo-referenced. Resolution is 14.25 meters; reflectance in the green, near-infrared, and short-wave infrared bands is recorded. Example images of treated/control DHS points are shown in the left two panels of Figure \ref{fig:InsideConvnet} (top/bottom, respectively).
\paragraph{Model Design}\label{ss:model}
Our image model for the treatment is built using three convolutional layers (32 filters each) with max pooling. After the application of the filters, we project the channel dimension into a one-dimensional space to facilitate interpretability via a single image post-convolution. Batch normalization is used on the channel dimensions and before the final projection layer. Weights are learned using Adam with Nesterov momentum and cosine learning rate decay (baseline rate = 0.005; \citet{gotmare2018closer}). We compare performance across a variety of filter sizes. We attempt to limit overfitting by randomly reflecting each image along each dimension with probability $1/2$ during training. We assess out-of-sample performance using three random training/test splits (1277/20), averaging over this sampling process and reporting 95\% confidence intervals from the three test set assessments.
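A PyTorch sketch consistent with this description follows; kernel sizes, the number of input bands, and the scheduler horizon are illustrative assumptions and do not reproduce our exact configuration.
\begin{verbatim}
# Sketch of the propensity architecture: three conv layers (32 filters)
# with batch norm and max pooling, a 1x1 channel projection, and a
# scalar logit head; NAdam (Adam with Nesterov momentum) with cosine
# learning-rate decay. Hyperparameters here are illustrative.
import torch
import torch.nn as nn

class PropensityNet(nn.Module):
    def __init__(self, in_bands=3):
        super().__init__()
        def block(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 32, kernel_size=3, padding=1),
                nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2))
        self.features = nn.Sequential(block(in_bands), block(32), block(32))
        self.project = nn.Conv2d(32, 1, kernel_size=1)  # channel projection
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(), nn.Linear(1, 1))

    def forward(self, x):                  # x: (batch, bands, H, W)
        return self.head(self.project(self.features(x)))  # logit

model = PropensityNet()
opt = torch.optim.NAdam(model.parameters(), lr=0.005)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)
loss_fn = nn.BCEWithLogitsLoss()           # binary cross-entropy on logits
\end{verbatim}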
\paragraph{Empirical Results}\label{ss:ApplicationResults}
First, in the left panel of Figure \ref{fig:ApplicationResults}, we assess propensity model performance. We find that the image model always improves on the baseline out-of-sample loss value obtained by random guessing of the dominant class (the control class, comprising 71\% of the data sample). Performance is stable across values of the estimating kernel width.
\begin{figure}[t]
\begin{subfigure}{.52\textwidth}
\centering
\includegraphics[width=1.\textwidth]{./hatTau_varyGlobality_scene.pdf}
\caption{The left panel shows out-of-sample binary cross entropy loss compared to the baseline loss when guessing the dominant class. The right panel shows the IPW estimate values across the range of estimating kernel width values compared to the baseline difference in conditional means.}\label{fig:ApplicationResults}
\end{subfigure}
\hfill
\begin{subfigure}{.46\textwidth}
\centering
\includegraphics[width=.7\textwidth]{./FigWLines.png}
\caption{ The left two panels depict the raw data for a treatment (top) and control (bottom) DHS point. The right two panels show the final convolutional layer for each case.}\label{fig:InsideConvnet}
\end{subfigure}
\caption{}
\end{figure}
Next, in the right panel of Figure \ref{fig:ApplicationResults}, we analyze the estimated treatment effects. We find that, across the estimating kernel width range, adjusted estimates are positive but smaller in magnitude than the baseline difference in means. This hints at the importance of confounder adjustment, as these programs may be given to areas already primed for growth.
We also analyze the model output. In Figure \ref{fig:InsideConvnet}, we visualize data for the out-of-sample treated unit with the highest treatment probability (top) and control unit with the lowest probability (bottom). The left panels show the raw data and the right panels show the final hidden convolutional layer pre-flattening. The low treatment probability site is in the remote desert city of Machina (pop. 62,000); the high probability site is from the city of Katsina (pop. 429,000) near a large agricultural basin. The output of this model would seem to resonate with the fact that many of the projects undertaken by global actors are specifically designed to assist farmers and agriculture more generally.
Finally, we assess the performance of the estimated propensity scores on reducing covariate imbalance between treated and control groups. The same absence of rich covariate information in places such as sub-Saharan Africa that motivates this paper also makes this assessment task difficult. Still, we can analyze differences in longitude/latitude between treated/control groups. We find a raw difference of (-1.17, -1.13). After weighting, this difference decreases in magnitude (e.g., to (-0.63,-0.30) in a representative run), indicative of improved counterfactual comparisons.
\section{Discussion and Future Work}\label{s:discussion}
In this paper, we show how investigators can use machine-learning innovations for image prediction to adjust for confounding due to latent objects present in images, objects that affect both outcome and treatment decisions. We formally show that, even though these objects are latent, we can adjust for them using the image information alone. We illustrate this approach in order to understand the causal effect of aid programs on reducing poverty in Africa's largest but still poverty-affected economy.
Our approach has some limitations. For example, as described in \S\ref{ss:imperfect}, confounding may be due to objects that cannot be resolved in the image data, and, as a result, full bias reduction will not occur when conditioning on the image information. In this context, the collection and application of imagery with higher spatial, temporal, and spectral resolution is a priority. Spatial and temporal resolution may both be improved, for example, using imagers mounted on ground-based infrastructure (e.g., \citet{johnston2021measuring}); spatial resolution and extent could be optimized with drone- or airplane-based instruments (e.g., \citet{gray2018integrating}). Future considerations should examine privacy and fairness issues with causal analyses based on passive sensor technologies.
Second, while in some cases, such as healthcare, the scene-level unit of analysis is clearly defined (e.g., the patient), in other contexts, the scene-level unit of analysis is more ambiguous. The researcher therefore has choices about how to define the scene (e.g., at the street, village, or region level), a choice that could introduce systematic bias into the analysis. This issue is known as the modifiable areal unit problem \citep{fotheringham1991modifiable}, and the approach described here is vulnerable to it as well. Future work should therefore focus on the development of image-confounding methods that have theoretical guarantees on the robustness of the results to the scale of action examined. This issue of scale is also related to the question of capturing the treatment and outcome at different scales.
Third, our model learns the filters that best predict treatment assignment. More research is needed to connect these patterns, learned inductively, with the mental processes of real actors as they consult images in decision making. This path of research could forge interesting links between cognitive science, machine learning, and causal inference. \hfill $\square$
\medskip
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:introduction}
Consider two zero-mean, unit-variance jointly normal random variables, and the expectation of their maximum. Should they be uncorrelated, the expectation of the maximum is $\frac{1}{\sqrt{\pi}}$. With correlation $\frac{1}{2}$ the expectation is $\frac{1}{\sqrt{2\pi}}$, and with correlation $-\frac{1}{2}$ the expectation is $\frac{\sqrt{3}}{\sqrt{2\pi}}$. For arbitrary jointly normal random variables $Z_1,Z_2$ the expectation of the maximum is as follows (see~\cite{nadarajah2008exact}):
\begin{eqnarray}
\mathbb{E}\left[\max \{Z_1,Z_2\}\right] =&
\mathbb{E}
\left[
Z_1
\right]
\Phi
\left(
\frac
{
\mathbb{E}[Z_1] - \mathbb{E}[Z_2]
}
{\sqrt{\sigma^2(Z_1) + \sigma^2(Z_2) - 2 \mathrm{cov}(Z_1, Z_2)}}
\right)
+ \nonumber \\
& \mathbb{E}
\left[
Z_2
\right]
\Phi
\left(
\frac
{
\mathbb{E}[Z_2] - \mathbb{E}[Z_1]
}
{\sqrt{\sigma^2(Z_1) + \sigma^2(Z_2) - 2 \mathrm{cov}(Z_1, Z_2)}}
\right) + \nonumber \\
& \sqrt{\sigma^2(Z_1) + \sigma^2(Z_2) - 2\mathrm{cov}(Z_1, Z_2)}
\phi
\left(
\frac
{
\mathbb{E}[Z_1] - \mathbb{E}[Z_2]
}
{\sqrt{\sigma^2(Z_1) + \sigma^2(Z_2) - 2 \mathrm{cov}(Z_1, Z_2)}}
\right),& \nonumber
\end{eqnarray}
where $\Phi$ and $\phi$ are the c.d.f. and p.d.f. of a standard normal random variable, respectively. The evaluation of the expectation requires the calculation of the highly nonlinear c.d.f. and p.d.f. with arguments that are composed of ratios of linear expressions over square roots of linear expressions (and even the product of these expressions). Consider now an optimization problem, where the objective function consists of the expression above,
but the selection of $Z_1$ and $Z_2$ are subject to decisions in the problem. This paper studies this decision making setting, describes real-world problems that can be modeled within this context, and explores an efficient algorithm for solving the resulting optimization problems. More formally, we investigate a class of stochastic optimization problems defined over a multivariate Gaussian distribution. In particular, we study the problem of optimizing the expected value of the maximum of two linear functions defined on the component random variables of a multivariate Gaussian distribution, allowing for correlation among the component random variables.
Real-world problems that can be modelled within this context abound, especially if we consider the connection to order statistics, which is used to model types of auctions where the final prices are determined by the first- or second-highest bidder \citep{brown1986using};
insurance premium determination, where companies use order statistics to determine policies for joint-life insurance \citep{city20705}; failure models for wireless communication networks~\citep{yang_alouini_2011}; and risk management \citep{Koutras2018}.
We study two applications in this paper\textemdash (1) expected score maximization in Daily Fantasy Sports (DFS) and (2) makespan minimization for 2 parallel machines with stochastic processing times. These two applications highlight how both objectives (maximizing and minimizing) are important in different contexts.
Designing exact algorithms for optimizing expressions composed of the maximum of two different stochastic functions
is challenging due to the complexity of evaluating or even estimating the expectation of order statistics \citep{david2004order,Bertsimas:2006:TBE:1187913.1187922, evans2006distribution}. Specific to the problem class studied in this paper, closed-form expressions for the expected value of the maximum and the minimum are known (see~\cite{nadarajah2008exact}).
We show how one can formulate the problem as a binary optimization problem and investigate exact computational approaches for solving the model. Our approach consists of
an extension of the integer L-shaped method
combined with linearization and discretization techniques tailored for the problem.
Our contributions are the following:
\begin{itemize}
\item We show that the underlying optimization problem is NP-hard even in scenarios where the set of solutions is unconstrained;
\item We develop an exact optimization algorithm that allows for the incorporation of linear constraints and establish theoretical results on the quality of the upper bounds obtained;
\item We study two featured applications, aimed at exhibiting the generality of the class of problems and to showcase the scalability of the proposed algorithms.
\item For~$P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$, a machine scheduling problem investigated in this article, we show that a single iteration of our algorithm can deliver a solution with constant-factor approximation guarantees,
and we also prove that this problem
is equivalent to its deterministic counterpart (where processing times are replaced by the mean of the respective distributions); and
\item We conduct a comprehensive computational study both on synthetic problem instances and real-world data in order to show the computational performance and the quality of the results delivered by our algorithms.
\end{itemize}
The paper is organized as follows.
Section~\ref{sec:probDesc} formally defines the class of problems we study. Section~\ref{sec:complexity} presents computational complexity results. In Section~\ref{sec:Exact} we present an exact cutting-plane algorithmic framework. Numerical experiments on synthetic instances are presented and discussed in Section \ref{sec:compExperiments}. Sections \ref{sec:DFF} and \ref{sec:scheduling} present our two featured applications. Section \ref{sec:conclusion} concludes the paper and discusses future work.
\section{Problem Description and Featured Applications}
\label{sec:probDesc}
We consider a collection of random variables $\pmb{Y} = \left( Y_1, \ldots, Y_n \right)^T$ following a multivariate Gaussian distribution $\pmb{Y} \sim \mathcal{N}(\pmb{\mu},\pmb{\Sigma})$. The random variables $Y_1, \ldots, Y_n$ will be referred to as \emph{component} random variables which have component-wise expectations $\mu_1, \ldots, \mu_n$. The $n \times n$ covariance matrix $\pmb{\Sigma}$ has elements $\pmb{\Sigma}_{i,j} = \textnormal{Cov}(Y_i, Y_j)$.
We study the class of problems
\begin{equation}
\tag{P}
\label{eqn:optProblem}
\max\limits_{x \in \Omega \subseteq \{0,1\}^{2 \times n}} \textnormal{(or $\min$)} \quad \mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right],
\end{equation}
where for $i=1,2$, $Z_i(x) = \sum_{j=1}^n Y_j x_{i,j} $ with $x_{i,j} \in \set{0,1}$ for $j = 1, \ldots, n$. The set $\Omega$ defines the feasible region for the decision variables, assumed in this paper to be composed solely of linear constraints.
In order to exhibit the expansive generalizability of this class of problems, we present two featured applications.
\noindent \textbf{Featured Application 1: Daily Fantasy Sports}
In certain DFS competitions, participants select up to~2 fantasy entries, with each entry being composed of a set of players who will participate in upcoming sporting events. The selection of players is subject to roster and budget constraints; more details about these and other constraints are presented in Section~\ref{sec:DFF}. Each fantasy entry receives points based on the actual performance of the players selected in the sporting events. Entries are ranked according to the total fantasy points scored, and the payout of each entry depends on its position in this rank. The payout structures are top-heavy, with a small fraction of the entries receiving substantial amounts; the top-scoring entry receives approximately 25\% of the total entry fees paid.
Suppose there are $n$ players that can be selected for any of a participant's $2$ entries. Each player will score a random number of fantasy points $Y_j$. Letting $x_{i,j}$ indicate the selection of player $j$ for entry $i$, for $i = 1, 2$ and $j = 1, \ldots, n$, one can model the selection problem for a participant as \begin{align}
& \max && \mathbb{E} \left[ \max \left\{ Z_1(x), Z_2(x) \right\} \right] \label{fa:dff} \tag{DFS} \\
& \textnormal{s.t.} && Z_i(x) = \sum_{j=1}^n Y_j x_{i,j} && i = 1, 2 \nonumber \\
&&& x \in \Omega &&\nonumber
\end{align}
The correlation between the points received for a pair of players can be significant, for example between a quarterback and a wide receiver in football, since most fantasy points that a wide receiver receives will generally be associated with fantasy points for the quarterback on the same team. The constraints in $\Omega$ define conditions which enforce what configurations of players make a legal entry, which will be explained in more detail later and can be modeled through linear constraints.
Algorithmic sports betting has recently become a topic of interest in the operations management and operations research literature~\citep{KapGar2001,ClaLet07,haugh2021play}. Some examples involve selecting multiple entries for maximizing the expected score of the maximum-scoring entry in both National Football League (NFL) survival pools (\cite{bergman2017surviving}) and DFS (\cite{hunter2016picking,haugh2021play}).
\noindent \textbf{Featured Application 2: Makespan Minimization with Stochastic Processing Times}
Consider a set of $n$ jobs with processing times~$Y_j$ drawn from a multivariate Gaussian distribution to be partitioned for execution on~$2$ parallel (identical) machines such that the makespan (the time at which the last job completes) is minimized; this problem is represented by~$P2|p \sim \N(\mu,\Sigma)|\mathbb{E}[C_{max}]$
in the notation of~\cite{graham1979optimization}.
Letting binary variable $x_{i,j}$ indicate if job $j$ is assigned to machine $i$ for $i=1, 2$ and $j=1, \ldots, n$, the problem can be formulated as
\begin{align}
& \min && \mathbb{E} \left[ \max \left\{ Z_1(x),Z_2(x) \right\} \right] \label{fa:ms} \tag{MS} \\
& \textnormal{s.t.} && Z_i(x) = \sum_{j=1}^n Y_j x_{i,j} && i = 1, 2 \nonumber \\
&&& x_{1,j} + x_{2,j} = 1 && j=1, \ldots, n \nonumber \\
&&& x_{i,j} \in \set{0,1} && i=1, 2, \ j = 1, \ldots, n. \nonumber
\end{align}
The objective seeks to minimize the expected maximum completion time over the machines. Note that no assumption is made on the correlation between the processing times of the jobs. We incorporate $\rho_{j,j'}=0$ into the notation to represent scenarios where processing times are uncorrelated.
There is a vast literature on stochastic scheduling~(see~\cite{nino2009stochastic}). In particular, \cite{coffman1987minimizing} characterize optimal solutions for makespan minimization in scenarios involving 2 or 3 machines and jobs with exponentially distributed processing times. \cite{pinedo2005planning}
shows the connection between a stochastic flow shop scheduling problem and a deterministic traveling salesman problem; we present a similar result involving~$P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$ in this~work.
\section{Computational Complexity}
\label{sec:complexity}
Consider random variables $\pmb{Y} = \left( Y_1, \ldots, Y_n \right)^T$ following a multivariate Gaussian distribution $\pmb{Y} \sim \mathcal{N}(\pmb{\mu},\pmb{\Sigma})$ and note that all component random variables~$Y_{j}$ are normally distributed and potentially correlated, i.e.,
$Y_j \sim \N(\mu_{j},\sigma^2_j)$,
with mean $\mu_j$ and variance $\sigma^2_j = \pmb{\Sigma}_{j,j}$, for all $j \in \set{1, \dots, n}$.
\begin{theorem}\label{thm:hard}
Optimization problem \ref{eqn:optProblem} is NP-hard even if $\Omega = \{0,1\}^{2 \times n}$.
\end{theorem}
Before proceeding with the proof, we recall some known results that will be relevant in this section and throughout the manuscript. Since $\pmb{Y}$ follows a multivariate Gaussian distribution, $Z_i(x) = \sum\limits_{j=1}^n Y_j x_{i,j} $ is normally distributed for $i=1,2$. Moreover, for any $x \in \{0,1\}^{2 \times n}$, the means and variances of~$Z_{1}(x)$ and~$Z_{2}(x)$, as well as their covariance, are given~by:
\begin{align}
& \mathbb{E} \left[Z_i(x) \right] = \sum_{j = 1}^n \mu_j x_{i,j} && \forall i \in \{1,2\} \label{expectedVal} \\
& \sigma^2 \left(Z_i(x) \right) =
\sum_{j=1}^n \sigma^2_j x_{i,j}
+
2 \sum_{1 \leq j < j' \leq n} \mathrm{cov}(Y_{j} , Y_{j'} ) x_{i,j} x_{i,j'}
&&
\forall i \in \{1,2\} \label{variance}
\\
&
\mathrm{cov} \left( Z_{1} (x) , Z_{2} (x) \right) =
\sum_{j = 1}^{n} \sum_{j' = 1}^n \mathrm{cov} \left( Y_{j} , Y_{j'} \right) x_{1,j} x_{2,j'}.
&&
\label{covariance}
\end{align}
Let $\phi(\cdot)$ and $\Phi(\cdot)$ be the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.), respectively, of a standard normal random variable; i.e., for $w \in (- \infty , \infty )$,
\begin{align*}
& \phi (w) = \frac{1}{\sqrt[]{2 \pi}} e ^ {\frac{-w^2}{2}},&
& \Phi (w) = \int_{- \infty}^w \phi (u) du.
\end{align*}
An exact expression is known for $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ (see~\cite{nadarajah2008exact} and~\cite{clark1961greatest}):
\begin{equation}\label{eq:objective}
\mathbb{E}
\left[
Z_1 (x)
\right]
\Phi
\left(
\frac
{
\delta(x)
}
{\theta(x)}
\right)
+
\mathbb{E}
\left[
Z_2 (x)
\right]
\Phi
\left(
\frac
{
-\delta(x)
}
{\theta(x)}
\right) +
\theta(x)
\phi
\left(
\frac
{
\delta(x)
}
{\theta(x)}
\right),
\end{equation}
where
\begin{equation*}
\delta(x) =
\mathbb{E}
\left[
Z_1 (x)
\right]
-
\mathbb{E}
\left[
Z_2 (x)
\right],
\end{equation*}
and
\begin{equation}
\theta(x)\label{eq:theta}
=
\sqrt
{
\sigma^2(Z_1(x))
+
\sigma^2(Z_2(x))
-
2 \mathrm{cov} \left(Z_1(x) , Z_2(x)\right)
}.
\end{equation}
In order to simplify notation, we assume without loss of generality that $\mathbb{E}\left[Z_1 (x)\right] \geq \mathbb{E} \left[Z_2 (x)\right]$.
As we can see from Expression~\eqref{eq:objective}, due to the (possible) dependency between the random variables $Z_1(x)$ and $Z_2(x)$, the expectation of their maximum is a highly nonlinear function. Note that if $\theta(x)$ is 0, the ratio in the argument of the c.d.f. may be undefined, but the expression holds true if we assume $\Phi\left( \frac{a}{0} \right) $ to be 0 if $a < 0$, $\frac{1}{2}$ if $a = 0$, and 1 if $a > 0$.
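To make the evaluation of~\eqref{eq:objective} concrete, the following Python sketch (our own illustration, not part of the formulation; the function name \texttt{expected\_max} is ours) computes $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ for a given assignment~$x$ directly from~$\pmb{\mu}$ and~$\pmb{\Sigma}$, including the degenerate case $\theta(x) = 0$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def expected_max(mu, Sigma, x1, x2):
    """E[max{Z_1(x), Z_2(x)}] via the closed form of Clark (1961).

    mu    : (n,) vector of means of Y
    Sigma : (n, n) covariance matrix of Y
    x1,x2 : (n,) 0/1 selection vectors (the rows of x)
    """
    m1, m2 = mu @ x1, mu @ x2            # E[Z_1(x)], E[Z_2(x)]
    delta = m1 - m2                      # delta(x)
    # theta(x)^2 = var(Z_1) + var(Z_2) - 2 cov(Z_1, Z_2)
    theta2 = x1 @ Sigma @ x1 + x2 @ Sigma @ x2 - 2 * (x1 @ Sigma @ x2)
    theta = np.sqrt(max(theta2, 0.0))
    if theta == 0.0:                     # Z_1 - Z_2 is a.s. constant
        return max(m1, m2)
    w = delta / theta
    return m1 * norm.cdf(w) + m2 * norm.cdf(-w) + theta * norm.pdf(w)
\end{verbatim}
A Monte Carlo average of $\max\{Z_1(x),Z_2(x)\}$ over samples drawn from $\mathcal{N}(\pmb{\mu},\pmb{\Sigma})$ agrees with this closed form up to sampling error, which provides a convenient sanity check.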
Equipped with this expression, we proceed with the proof of Theorem~\ref{thm:hard}.
\begin{proof}{Proof of Theorem~\ref{thm:hard}.}
The result follows from a reduction from the minimum weighted cut problem, which is known to be NP-hard (\cite{mccormick2003easy}). Let $G = (V,E)$ be an undirected graph with integer weights $w_e$ on each edge $e \in E$. An $(S,T)$-\emph{cut} of $G$ is a 2-partition of $V$, and its \emph{weight}~$w(S,T)$ is the sum of the weights of the edges ``crossing'' the cut, i.e., $w(S,T) = \sum\limits_{e : e \cap S, e \cap T \neq \emptyset} w_e$. In the decision version of the problem, we are also given a constant $K$ and wish to know if there exists an~$(S,T)$-cut of~$G$ such that $w(S,T) \leq K$.
We create an instance of \ref{eqn:optProblem} as follows. Assuming there are $n$ vertices in $G$, every vertex $j \in V$ is associated with a normally distributed random variable~$Y_j \sim \N(0,1)$, for $j = 1, \ldots, n$. The covariance between $Y_{j}$ and $Y_{j'}$ is $\mathrm{cov}(Y_{j},Y_{j'}) = \frac{w_{\{j,j'\}}}{4 M+1}$, where $M = \sum\limits_{e \in E} |w_e|$. As a result, we have that $\sum\limits_{j = 1}^n \sum\limits_{\substack{j' = 1\\j' \neq j}}^n |\mathrm{cov}(Y_{j},Y_{j'})| \leq \frac{2M}{4M+1} < \frac{1}{2}$ and $\sigma^2_{j} \geq \sum\limits_{j' \neq j}|\mathrm{cov}(Y_{j},Y_{j'})|$ for all~$j= 1, \ldots, n$. One can then construct a symmetric and diagonally dominant (consequently, positive semi-definite) matrix~$\Sigma$ whose columns and rows are indexed by the variables~$Y_j$. It follows that~$\Sigma$ is a valid covariance matrix.
Finally, let $\Omega = \{0,1\}^{2 \times n}$, i.e., the set of feasible solutions is unconstrained.
By construction, $\mathbb{E}[Z_1(x)] = \mathbb{E}[Z_2(x)] = 0$, so the expression for $\mathbb{E}\left[\max \{ Z_1(x),Z_2(x) \}\right]$ reduces to~$\frac{\theta(x)}{\sqrt{2 \pi }}$.
As~$n \geq 1$ and all variances are equal to 1, it follows that at optimality $\theta(x) \geq 1$ (this value is achieved if we assign one item to the first set and leave the second set empty), so any~$x$ that maximizes $\theta(x)^2$ also maximizes $\theta(x)$. Therefore, our problem is equivalent to
\[
\max_{x \in \Omega} \ \theta(x)^2 = \sigma^2(Z_1(x)) + \sigma^2(Z_2(x)) - 2 \mathrm{cov} (Z_1(x),Z_2(x)).
\]
By expanding the terms of the last expression and replacing all variances with 1, $\theta(x)^2$ becomes
\begin{equation}
\begin{aligned}\label{thm:theta2}
&\sum_{j = 1}^n (x_{1,j} + x_{2,j} - 2x_{1,j}x_{2,j} ) \, + \\
& \quad \quad 2
\left[
\sum_{j = 1}^{n-1} \sum_{j' = j + 1}^n \mathrm{cov}(Y_{j}, Y_{j'}) (x_{1,j} x_{1,j'}
+ x_{2,j} x_{2,j'})
- \sum_{j = 1}^{n} \sum_{\substack{j' = 1\\j' \neq j}}^n \mathrm{cov}(Y_{j}, Y_{j'})x_{1,j} x_{2,j'}
\right]
\end{aligned}
\end{equation}
\begin{claim}\label{claim1}
In any optimal solution for $\max_{x \in \Omega} \theta(x)^2 $, $x_{1,j} + x_{2,j} = 1$ for $j = 1, \ldots, n$.
\end{claim}
\begin{Proof}
Let~$A(x)$ denote the sum within the brackets in~\eqref{thm:theta2};
we have
$|A(x)| \leq \sum\limits_{j = 1}^n \sum\limits_{\substack{j' = 1\\j' \neq j}}^n |\mathrm{cov}(Y_{j},Y_{j'})|$,
because each covariance term appears at most twice with a positive coefficient and at most twice with a negative coefficient.
By construction, $ \sum\limits_{j = 1}^n \sum\limits_{\substack{j' = 1\\j' \neq j}}^n |\mathrm{cov}(Y_{j},Y_{j'})| < \frac{1}{2}$, so
we have that $-1 < 2A(x) < 1 $. Therefore, any optimal solution to $\max_{x \in \Omega} \ \theta(x)^2 $ also optimizes
\[
\max_{x \in \Omega} \left\{ \sum_{j = 1}^n x_{1,j}
+ \sum_{j = 1}^n x_{2,j}
-
2 \sum_{j=1}^n x_{1,j} x_{2,j} \right\},
\]
since the absolute value of each of the coefficients in this expression is greater than or equal to 1.
Finally, note that this expression is maximized if and only if $x_{1,j} + x_{2,j} = 1$, as desired.
$\blacksquare$
\end{Proof}
It follows from Claim~\ref{claim1} that
$\sum\limits_{j = 1}^n
(
x_{1,j}
+ x_{2,j}
-
2 x_{1,j} x_{2,j} )
= n$, so the problem reduces to
\begin{align}
\label{eq:rewrite1}
\max_{x \in \Omega}\quad &
\sum\limits_{j = 1}^{n-1} \sum\limits_{j' = j + 1}^n \mathrm{cov}(Y_{j}, Y_{j'}) (x_{1,j} x_{1,j'}
+ x_{2,j} x_{2,j'})
- \sum\limits_{j = 1}^{n} \sum\limits_{\substack{j' = 1\\j' \neq j}}^n \mathrm{cov}(Y_{j}, Y_{j'})x_{1,j} x_{2,j'} &\\
\textnormal{s.t.}\quad & x_{1,j} + x_{2,j} = 1 & j \in [n] \\
& x_{i,j} \in \{0,1\} & i \in [2],j \in [n].
\end{align}
Consider the following optimization problem:
\begin{equation}
\label{eq:rewrite2}
\begin{aligned}
& \min && h(x) = \sum_{j = 1}^{n} \sum\limits_{\substack{j' = 1,j' \neq j}}^n \mathrm{cov}(Y_{j}, Y_{j'})x_{1,j} x_{2,j'} \\
& \textnormal{s.t.} &&
x_{1,j} + x_{2,j} = 1 && j \in [n] \\
&&& x_{i,j} \in \{0,1\} && i \in [2], \ j \in [n].
\end{aligned}
\end{equation}
\begin{claim}
An optimal solution to optimization problem~(\ref{eq:rewrite2}) is also optimal to
problem~(\ref{eq:rewrite1}).
\end{claim}
\begin{Proof}
Both optimization problems have the same set of feasible solutions~$\Omega'$. Let $x'$ and $x''$ be two feasible solutions with $ h(x') < h(x'') $. Showing that $\theta(x')^2 > \theta(x'')^2$ establishes the claim. To show this, we first note that for any feasible solution $\tilde{x}$,
\begin{equation}
\label{eq:constant}
\begin{aligned}
& \sum_{j = 1}^{n-1} \sum_{j' = j + 1}^n \mathrm{cov}(Y_{j}, Y_{j'})\tilde{x}_{1,j} \tilde{x}_{1,j'}
+ \sum_{j = 1}^{n-1} \sum_{j' = j + 1}^n \mathrm{cov}(Y_{j}, Y_{j'})\tilde{x}_{2,j} \tilde{x}_{2,j'} \\
& + \sum_{j = 1}^{n} \sum_{\substack{j' = 1\\ j' \neq j}}^n \mathrm{cov}(Y_{j}, Y_{j'})\tilde{x}_{1,j} \tilde{x}_{2,j'} = \sum_{j = 1}^{n-1} \sum_{j'=j + 1}^{n} \mathrm{cov}(Y_{j}, Y_{j'}).
\end{aligned}
\end{equation}
This follows because for any two indices $j \neq j'$, the covariance term $\mathrm{cov}(Y_{j},Y_{j'})$ is counted in exactly one of the three terms on the left-hand side of equation~(\ref{eq:constant}):
\begin{enumerate}
\item If $\tilde{x}_{1,j} = \tilde{x}_{1,j'} = 1 \rightarrow$ $\mathrm{cov}(Y_{j}, Y_{j'})$ is counted only in the first term;
\item If $\tilde{x}_{2,j} = \tilde{x}_{2,j'} = 1 \rightarrow$ $\mathrm{cov}(Y_{j}, Y_{j'})$ is counted only in the second term;
\item If $\tilde{x}_{1,j} = 1, \tilde{x}_{2,j'} = 1 \rightarrow$ $\mathrm{cov}(Y_{j}, Y_{j'})$ is counted only in the third term; and
\item If $\tilde{x}_{2,j} = 1, \tilde{x}_{1,j'} = 1 \rightarrow$ $\mathrm{cov}(Y_{j}, Y_{j'})$ is counted only in the third term.
\end{enumerate}
Finally, because $\tilde{x}_{1,j} + \tilde{x}_{2,j} = 1$ and $\tilde{x}_{1,j'} + \tilde{x}_{2,j'} = 1$, it follows that the list above is exhaustive and contains all possible assignments of~$j$ and~$j'$. Therefore,
\begin{eqnarray*}
h(x') < h(x'') &\implies&
-h(x') > -h(x'') \implies \\
\sum_{j = 1}^{n-1} \sum_{j'=j+1}^{n} \mathrm{cov}(Y_{j}, Y_{j'}) - h(x')
&>&
\sum_{j = 1}^{n-1} \sum_{j'=j+1}^{n} \mathrm{cov}(Y_{j}, Y_{j'}) - h(x''),
\end{eqnarray*}
which implies that $\theta(x')^2 > \theta(x'')^2$,
as desired.
$\blacksquare$
\end{Proof}
We conclude that an optimal solution to~(\ref{eq:rewrite2}) is also optimal to the original problem.
We now show a one-to-one mapping between solutions of~(\ref{eq:rewrite2}) and $(S,T)$-cuts of $G$. Namely, for a feasible solution $x'$, we associate it with the $(S(x'),T(x'))$-cut
defined by $x'_{1,j} = 1 \leftrightarrow j \in S(x')$ and $x'_{2,j} = 1 \leftrightarrow j \in T(x')$. Additionally, $w(S(x'),T(x')) \leq K$ if and only if $h(x') \leq \frac{K}{4M + 1}$. This follows because
\begin{equation*}
\begin{aligned}
w(S(x'),T(x')) & = \sum_{j \in S(x')} \sum_{j' \in T(x')} w_{\{j,j'\} } =
\sum_{j \in S(x')} \sum_{j' \in T(x')}
\left( 4M + 1 \right)
\mathrm{cov}\left(Y_{j}, Y_{j'}\right) \\
& =
\left( 4M + 1 \right)
\sum_{j = 1}^n \sum_{\substack{j' = 1\\ j' \neq j}}^n
\mathrm{cov}\left(Y_{j}, Y_{j'}\right) x'_{1,j} x'_{2,j'} =
\left( 4M + 1 \right) h(x').
\end{aligned}
\end{equation*}
It follows that~(\ref{eq:rewrite2}) is equivalent to the minimum weighted cut problem, as desired.
$\blacksquare$
\end{proof}
\section{Exact Solution Algorithm} \label{sec:Exact}
We propose an exact solution algorithm for problem \ref{eqn:optProblem} and focus on the maximization of the objective function. We discuss in the Appendix how to adapt the algorithm for the minimization case. Omitted proofs can be found in the Appendix.
\subsection{A Cutting-Plane Framework}
\label{sec:Algorithm}
The objective function presented in \eqref{eq:objective} is highly nonlinear, thus making the application of direct formulations combined with off-the-shelf solvers unlikely to be successful. We present an exact cutting-plane algorithm based on a mixed-integer linear program (MILP) referred to as the \emph{relaxed master problem}. This MILP provides upper bounds on the optimal objective value and is iteratively updated through the inclusion of cuts. Lower bounds are obtained through the evaluation of~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ on the feasible solutions obtained from the optimization of the relaxed master problem. The algorithm stops once it finds a provable optimal solution.
Our approach to solve the problem is presented in Algorithm~\ref{a1}. A key component of our algorithm is the construction of an upper-bounding function for~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ defined over the set~$\Omega$ of feasible solutions. Namely, we wish to work with a function~$g(x)$ such that
$
g(x) \geq \mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ for each~$x$ in~$\Omega$.
Given~$g(x)$, the relaxed master problem can be stated as
\begin{equation}
\tag{RMP}
\label{eqn:RMP}
\bar{z} = \max\limits_{x \in \Omega} g(x),
\end{equation}
and $\bar{z}$ provides an upper bound on the optimal objective value of problem~\ref{eqn:optProblem}.
Algorithm~\ref{a1} solves problem RMP iteratively, adding \emph{no-good constraints} (or simply \emph{cuts}) to a set~$\cuts$, which are incorporated into RMP in order to prune previously explored solutions~\citep{balas72}. RMP$(\cuts)$ denotes RMP further constrained by~$\cuts$, so that a solution~$x$ for RMP$(\cuts)$ belongs to $\Omega$ and satisfies all no-good constraints of~$\cuts$. In each iteration, after finding an optimal solution~$\hat{x}$ for RMP$(\cuts)$, Algorithm~\ref{a1} adds the following cut~$c(\hat{x})$ to~$\cuts$:
\begin{align}\label{noGood}
\sum_{i\in\set{1,2}}\sum_{j\in \set{1,\dots,n}:\hat{x}_{i,j}=1} x_{i,j} -\sum_{i\in\set{1,2}}\sum_{j\in \set{1,\dots,n}:\hat{x}_{i,j}=0} x_{i,j} \leq \sum_{i\in\set{1,2}}\sum_{j\in \set{1,\dots,n}} \hat{x}_{i,j}-1.
\end{align}
The only solution in~$\Omega$ that violates~$c(\hat{x})$ is~$\hat{x}$. Therefore, every solution in~$\Omega$ is explored at most once, and as~$\Omega$ is finite, Algorithm~\ref{a1} terminates after a finite number of steps.
Our cutting-plane algorithm keeps a lower bound~\textit{LB} and an upper bound~\textit{UB} for~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$,
which are iteratively updated based on solutions~$\hat{x}$ of RMP($\cuts$). Upper bounds are given by~$g(\hat{x})$; these bounds are non-increasing, as any solution~$x'$ such that $g(x') > g(\hat{x})$ must have been explored (and cut off) in a previous iteration of Algorithm~\ref{a1}. Similarly, lower bounds are obtained through the exact evaluation of $\mathbb{E} \left[ \max \{ Z_1(\hat{x}),Z_2(\hat{x}) \} \right]$ (i.e., using Equation~\eqref{eq:objective}). Observe that $\mathbb{E} \left[ \max \{ Z_1(\hat{x}),Z_2(\hat{x}) \} \right]$ may be smaller than~$\mathbb{E} \left[ \max \{ Z_1(x'),Z_2(x') \} \right]$ for some previously explored solution~$x'$, so Algorithm~\ref{a1} needs to store the largest~$LB$ found in previous iterations. Algorithm~\ref{a1} terminates when $\textit{LB}$ becomes equal to~$\textit{UB}$ or when RMP($\cuts$) becomes infeasible.
Algorithm \ref{a1} can be seen as an extension of the integer L-shaped method \citep{Laporte93}, dealing with the added difficulty of a highly nonlinear objective function. Similar cutting-plane algorithms have been extensively used in the context of two-stage stochastic programming with integer recourse \citep{sen2006decomposition,angulo2016} and are closely related to the logic-based Benders' decomposition algorithm \citep{hooker2003logic}. As a result, our main contribution lies in the proposed linear upper bounding function described in Section \ref{sec:Bounds}.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Set \textit{LB}$ = -\infty$, \textit{UB}$ = \infty$, $\mathcal{C} = \emptyset$, and incumbent solution $\bar{x} = 0$.
\State Optimize RMP$(\mathcal{C})$ to obtain~$\hat{x}$; if the problem is infeasible, go to Step 6.
\State
Set \textit{UB} = $g(\hat{x})$.
\State
If $\mathbb{E} \left[ \max \{ Z_1(\hat{x}),Z_2(\hat{x}) \} \right]$ $>$ \textit{LB}, set \textit{LB} = $\mathbb{E} \left[ \max \{ Z_1(\hat{x}),Z_2(\hat{x}) \} \right]$ and update incumbent $\bar{x} = \hat{x}$.
\State If \textit{LB} = \textit{UB}, go to Step 6. Otherwise, set~$\mathcal{C} = \mathcal{C} \cup \{c(\hat{x})\}$ and
return to Step 2.
\State If \textit{LB} = $-\infty$, the original problem is infeasible. Otherwise, terminate with the optimal solution $\bar{x}$.
\end{algorithmic}
\caption{A Cutting-Plane Algorithm}
\label{a1}
\end{algorithm}
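For concreteness, the loop of Algorithm~\ref{a1} can be sketched in Python as follows; this is a schematic skeleton of our own, with \texttt{solve\_rmp}, \texttt{g}, and \texttt{exact\_obj} standing in for the MILP solver call, the upper-bounding function, and the exact evaluation of~\eqref{eq:objective}, respectively (all three names are ours):
\begin{verbatim}
import math

def cutting_plane(solve_rmp, g, exact_obj):
    """Skeleton of Algorithm 1 (maximization case).

    solve_rmp(cuts) -> a maximizer of g over Omega excluding `cuts`,
                       or None if RMP(cuts) is infeasible
    g(x)            -> upper-bounding value of x
    exact_obj(x)    -> E[max{Z_1(x), Z_2(x)}] via the closed form
    """
    lb, ub = -math.inf, math.inf
    cuts, incumbent = [], None
    while True:
        x_hat = solve_rmp(cuts)
        if x_hat is None:              # RMP(cuts) infeasible: search exhausted
            break
        ub = g(x_hat)                  # upper bounds are non-increasing
        val = exact_obj(x_hat)         # exact value is a valid lower bound
        if val > lb:
            lb, incumbent = val, x_hat
        if lb >= ub:                   # optimality certificate
            break
        cuts.append(x_hat)             # no-good cut pruning only x_hat
    return incumbent, lb
\end{verbatim}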
\subsection{Baseline Approach for Obtaining Upper Bounds on the Objective Function}
\label{sec:Baseline}
The performance of our cutting-plane algorithm is directly tied to the quality of the bounds obtained from~$g(x)$
and the difficulty of solving the RMP. A bounding function that delivers accurate overestimations of $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ but requires an impractical amount of time to solve the resulting RMP is likely to result in poor performance of the cutting-plane algorithm, as the RMP is solved at every iteration of the algorithm. On the other hand, a bounding function for which the resulting RMP is easily solved but provides poor quality upper bounds is likely to result in a large number of iterations before closing the optimality gap.
We remark that, in general, $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] \not= \max \{\mathbb{E} \left[Z_1(x)\right] ,\mathbb{E} \left[Z_2(x)\right] \}$; since the maximum is a convex function, Jensen's inequality gives $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] \geq \max \{\mathbb{E} \left[Z_1(x)\right] ,\mathbb{E} \left[Z_2(x)\right] \}$. As a result, a simple bounding function is given by the following linear expression:
\begin{equation} \label{eq:simple}
\max \{\mathbb{E} \left[Z_1(x)\right] ,\mathbb{E} \left[Z_2(x)\right] \} + M,
\end{equation}
where $M$ is a sufficiently large constant such that
\begin{equation*}
M \geq \theta(x) \phi \left(\frac{\diff(x)}{\theta(x)} \right).
\end{equation*}
We formulate the corresponding RMP as a linear mixed-integer program (MIP) that can be solved with any off-the-shelf optimization software. Unfortunately, such a simple function yields poor quality bounds and virtually requires a
complete enumeration of the solution space. In preliminary computational experiments, the cutting-plane algorithm is not able to solve to optimality a single problem instance from our test bed using \eqref{eq:simple} as the bounding function, resulting in high optimality gaps at the end of the time limit. As a result, we investigate more complex linear bounding functions, aiming to achieve a balance between the difficulty of solving the RMP (which we cast as a linear MIP) and the quality of the upper bounds obtained.
A challenging task involved in obtaining bounds on the optimal objective is the evaluation of~$\theta(x)$, a nonlinear expression that appears in all terms of \eqref{eq:objective}, including the denominators of the c.d.f. and the p.d.f. of the standard normal distribution. To avoid the technical issues involved in the evaluation of~$\theta(x)$, we propose a baseline approach that evaluates~$\theta(x)^2$ exactly and defines an upper bounding function for~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$, based on a discretization of $\theta(x)^2$, where the value of $\theta(x)^2$ is
\begin{equation}
\label{thetaSquared}
\theta(x)^2 = \sigma^2(Z_1(x)) + \sigma^2(Z_2(x))
- 2 \mathrm{cov} \left(Z_1(x) , Z_2(x)\right).
\end{equation}
The exact value of~$\theta(x)^2$ is computed via a McCormick linearization technique \citep{McCormick1976}. Let function $u_\theta(x)$ be such that $0 \leq \theta(x) \leq u_\theta(x)$. Our baseline upper-bounding function is given in Proposition \ref{Pbase}.
\begin{proposition}
\label{Pbase} For every~$x \in \Omega$,
\begin{equation} \label{ineq:base}
\begin{aligned}
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]
&
=
\mathbb{E}[ Z_1(x) ]
\Phi\left(\frac{\diff(x)}{\theta(x)}\right)
+
\mathbb{E}[ Z_2(x) ]
\Phi\left(\frac{-\diff(x)}{\theta(x)}\right)
+
\theta(x) \phi \left( \frac{\diff(x)}{\theta(x)} \right) \\
& \leq \mathbb{E}\left[Z_1 (x)\right]
+
u_\theta(x) \frac{1}{\sqrt[]{2\pi}}.
\end{aligned}
\end{equation}
\end{proposition}
\begin{Proof} From the symmetry of the standard normal distribution, it follows that
\begin{equation*}
\Phi
\left(\frac{\diff(x)}{\theta(x)}\right)
+
\Phi\left(\frac{-\diff(x)}{\theta(x)}\right) = 1.
\end{equation*}
Since $\mathbb{E}\left[Z_1 (x)\right] \geq \mathbb{E}\left[Z_2 (x)\right]$ by assumption, we have
\begin{equation} \label{ineq:base1}
\mathbb{E}\left[Z_1 (x)\right] \geq \Phi\left(\frac{\diff(x)}{\theta(x)}\right)\mathbb{E}\left[Z_1 (x)\right]
+
\Phi\left(\frac{-\diff(x)}{\theta(x)}\right) \mathbb{E}\left[Z_2 (x)\right],
\end{equation}
which constitutes an upper bounding expression for the first two terms of~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$.
Noting that $\phi\left(\frac{\diff(x)}{\theta(x)}\right) \leq \frac{1}{\sqrt[]{2\pi}}$, and $\theta(x) \leq u_\theta(x)$ by assumption, we have
\begin{equation}\label{ineq:base2}
u_\theta(x) \frac{1}{\sqrt[]{2\pi}} \geq \theta(x) \phi\left(\frac{\diff(x)}{\theta(x)}\right).
\end{equation}
Finally, Inequality~\eqref{ineq:base} follows from the addition of Inequality~\eqref{ineq:base1} with Inequality~\eqref{ineq:base2}. $\blacksquare$
\end{Proof}
We propose a discretization approach to obtain $u_\theta(x)$ in Section \ref{discretization1} and present the mathematical model for the baseline RMP in the Appendix. We use this baseline bounding function as a benchmark to measure the improvements in computational performance added by the proposed enhanced bounding techniques in the following sections.
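As an aside, for binary~$x$, expanding~\eqref{thetaSquared} with equations~\eqref{variance} and~\eqref{covariance} shows that $\theta(x)^2 = (x_1 - x_2)^T \pmb{\Sigma} \, (x_1 - x_2)$, where $x_i$ denotes the $i$-th row of~$x$. The short sketch below (our own illustration) uses this identity together with Proposition~\ref{Pbase} to evaluate the baseline bound for a given upper bound $u_\theta$:
\begin{verbatim}
import numpy as np

def theta(Sigma, x1, x2):
    """theta(x) via the identity theta(x)^2 = (x1 - x2)^T Sigma (x1 - x2)."""
    d = x1 - x2
    return np.sqrt(max(d @ Sigma @ d, 0.0))

def baseline_bound(mu, x1, x2, u_theta):
    """Baseline bound of Proposition 1: max{E[Z_1], E[Z_2]} + u_theta/sqrt(2 pi),
    valid whenever theta(x) <= u_theta."""
    return max(mu @ x1, mu @ x2) + u_theta / np.sqrt(2 * np.pi)
\end{verbatim}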
\subsection{Enhanced Upper-Bounding Function}
\label{sec:Bounds}
Our enhanced approach is based on a joint discretization of $\theta(x)^2$ and $\delta(x)$. The proposed enhanced RMP formulation relies on the following proposition, which is valid for any functions $u_\theta(x), \ l_\theta(x)$, $u_\delta(x)$, and $l_\delta(x)$ such that $0 \leq l_\theta(x) \leq \theta(x) \leq u_\theta(x)$ and $0 \leq l_\delta(x) \leq \delta(x) \leq u_\delta(x)$:
\begin{proposition}
\label{P2} For every~$x \in \Omega$,
\begin{equation}\label{ineq:ubfunction2}
\mathbb{E}\left[Z_1 (x)\right] \Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
+
\mathbb{E}\left[Z_2 (x)\right] \left(1-\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)\right)
+
u_\theta(x) \phi\left(\frac{l_\delta(x)}{u_\theta(x)}\right)
\geq \mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right].
\end{equation}
\end{proposition}
\begin{Proof}
This follows because
\begin{equation*}
\begin{aligned}
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]
& \leq
\mathbb{E}\left[Z_1 (x)\right] \Phi\left(\frac{\delta(x)}{\theta(x)}\right)
+
\mathbb{E}\left[Z_2 (x)\right] \left(1-\Phi\left(\frac{\delta(x)}{\theta(x)}\right) \right)
+
u_\theta(x)\phi\left(\frac{\delta(x)}{\theta(x)}\right) \\
&
\leq
\mathbb{E}\left[Z_1 (x)\right] \Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
+
\mathbb{E}\left[Z_2 (x)\right] \left(1-\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)\right)
+
u_\theta(x) \phi\left(\frac{l_\delta(x)}{u_\theta(x)}\right).
\end{aligned}
\end{equation*}
The first inequality follows from the symmetry of the c.d.f. and because $u_\theta(x) \geq \theta(x)$.
For the second inequality, first note that, for any constants $a,b$ with $a \geq b$, $a \lambda_1 + b (1-\lambda_1) \leq a \lambda_2 + b (1-\lambda_2)$, for every $0 \leq \lambda_1 \leq \lambda_2 \leq 1$.
As the
c.d.f. is non-decreasing on its domain and $\frac{\delta(x)}{\theta(x)} \leq \frac{u_\delta(x)}{l_\theta(x)}$, and
as
$\phi\left( y \right)$ is non-increasing for $y \geq 0$ and $\frac{\delta(x)}{\theta(x)} \geq \frac{l_\delta(x)}{u_\theta(x)}$, the result follows. $\blacksquare$
\end{Proof}
Suppose we are given $d$ intervals $\left\{[\theta^2_q,\theta^2_{q+1}]\right\}_{q=1}^{d}$ and $l$ intervals $\left\{[\delta_h,\delta_{h+1}]\right\}_{h=1}^{l}$, with $\theta(x)^2 \in [\theta^2_1,\theta^2_{d+1}]$ and $\delta(x) \in [ \delta_1 , \delta_{l+1} ]$ for every $x \in \Omega$.
Furthermore, let $\theta_q$ and $\theta_{q+1}$ denote a lower and upper bound of $\theta(x)$, respectively, for $\theta(x)^2 \in \left[\theta^2_q,\theta^2_{q+1}\right]$. Using these intervals we construct the following enhanced RMP formulation:
\begin{align}
\max \quad &U+U' \label{RMPObj}\\
\text{s.t. } &u_1 = \sum_{j = 1}^n \mu_j x_{1,j}; \ u_2 = \sum_{j = 1}^n \mu_j x_{2,j}; \ u_1 \geq u_2 \label{RMP1}\\
&s = \sum_{i=1}^2\left(\sum_{j=1}^n \sigma^2_j x_{i,j} +
2 \sum_{1 \leq j < j' \leq n} \mathrm{cov}(Y_{j} , Y_{j'}) v_{i,j,j'} \right)-2\sum_{j = 1}^{n} \sum_{j' = 1}^n \mathrm{cov} \left( Y_{j} , Y_{j'} \right) r_{j,j'}
\label{RMP2}\\
&v_{i, j,j'} \leq x_{i,j}; \ v_{i,j,j'} \leq x_{i,j'} \hspace{140pt} \forall j,j'\in \set{1,\dots,n}, \ i \in \set{1,2} \label{RMP3}\\
&v_{i, j,j'} \geq x_{i,j}+x_{i,j'}-1 \hspace{154pt}
\forall j,j' \in \set{1,\dots,n}, \ i \in \set{1,2} \label{RMP4}\\
&r_{j,j'} \leq x_{1,j}; \ r_{j,j'} \leq x_{2,j'} \hspace{149pt}
\forall j,j' \in \set{1,\dots,n} \label{RMP5}\\
&r_{j,j'} \geq x_{1,j}+x_{2,j'}-1 \hspace{158pt}
\forall j,j' \in \set{1,\dots,n} \label{RMP6}\\
& \sum_{q=1}^{d}w_q = 1; \ \sum_{h=1}^{l} y_h = 1; \ s' = \sum_{q=1}^{d} \theta_{q+1} w_q \label{srmp:2} \\
& \theta^2_q w_q \leq s \leq \theta^2_{q+1} + \theta^2_{d+1}(1-w_q)
\hspace{148pt} q = 1, \ldots, d \label{srmp:3} \\
& \delta_h y_h \leq u_1 - u_2 \leq \delta_{h+1}+ \delta_{l+1}(1-y_h) \hspace{125pt}
h = 1, \ldots, l \label{srmp:6} \\
& U \leq
u_1 \Phi \left( \frac{\delta_{h+1}}{\theta_q} \right)
+
u_2 \left(1- \Phi \left( \frac{\delta_{h+1}}{\theta_q} \right) \right)
+
M
\left(
2 - w_q - y_h
\right)
\hspace{10pt} q = 1, \ldots, d, \ h = 1, \ldots, l \label{srmp:7} \\
& U' \leq
\theta_{q+1} \phi\left(\frac{\delta_{h}}{\theta_{q+1}}\right)
+
M
\left(
2 - w_q - y_h
\right)
\hspace{108pt} q = 1, \ldots, d, \ h = 1, \ldots, l \label{srmp:7.5} \\
&v \in \set{0,1}^{n \times n \times 2}; \ r \in \set{0,1}^{n \times n}; \ w \in \{0,1\}^{d}; \ y \in \{0,1\}^{l}; \ x \in \Omega. \label{RMP7}
\end{align}
Binary variables~$w_q$, $q = 1, \ldots, d$, and~$y_h$, $h = 1, \ldots, l$, indicate which interval~$\theta(x)^2$ and~$\delta(x)$ belong to, respectively. Variable $u_1$ ($u_2$) denotes $\mathbb{E}\left[Z_1 (x)\right]$ ($\mathbb{E}\left[Z_2 (x)\right]$) and $s$ ($s'$) represents~$\theta(x)^2$ ($u_\theta(x)$). Binary variable $v_{i,j,j'}$ takes a value of 1 iff $x_{i,j}=x_{i,j'}=1$. Similarly, $r_{j,j'}$ equals 1 iff $x_{1,j}=x_{2,j'}=1$. Variable $U$ captures the first two terms of expression \eqref{ineq:ubfunction2} and $U'$ captures the third term.
The objective function \eqref{RMPObj} maximizes the upper bounding function defined by Proposition \ref{P2}. Constraints \eqref{RMP1} define the $u$-variables according to equation \eqref{expectedVal} and impose the symmetry-breaking condition $u_1 \geq u_2$. Constraint \eqref{RMP2} imposes $s=\theta(x)^2$ as described by equation \eqref{thetaSquared}, where $\sigma^2(Z_1(x))$, $ \sigma^2(Z_2(x))$, and $\mathrm{cov} \left(Z_1(x) , Z_2(x)\right)$ are computed according to equations \eqref{variance} and \eqref{covariance}, respectively. Constraints \eqref{RMP3}--\eqref{RMP6} are the McCormick linearization constraints. Constraints~(\ref{srmp:2}) ensure that exactly one interval is chosen for $\theta(x)^2$ and $\delta(x)$, and set $s'$ equal to the upper bound of $\theta(x)$ for the interval that $\theta(x)^2$ belongs to. Constraints (\ref{srmp:3})--(\ref{srmp:6}) select the correct intervals for $\theta(x)^2$ and $\delta(x)$. Constraints~(\ref{srmp:7}) are only active for the selected intervals and enforce that~$U$ is bounded by a linear combination of $u_1$ and $u_2$, defined by the evaluation of the c.d.f. at appropriately chosen constants associated with the intervals that~$\theta(x)^2$ and~$\delta(x)$ lie in, where $M$ is a sufficiently large constant. Similarly, constraints~(\ref{srmp:7.5}) enforce $U'$ to equal the third term of expression \eqref{ineq:ubfunction2} for the corresponding intervals. Constraints~\eqref{RMP7} define the domains of the variables appropriately.
\subsubsection{Discretization of~$\theta(x)^2$\\} \label{discretization1}
We obtain an upper bound~$\theta^2_{d+1}$ for~$\theta(x)^2$ by solving $\max\{s \ | \ (x,v,r,u_1,u_2,s) \in \Psi\}$, where $\Psi$ is the space defined by constraints \eqref{RMP1}--\eqref{RMP6} and \eqref{RMP7}. This problem is computationally challenging (NP-hard from Theorem~\ref{thm:hard}), so
our strategy consists of solving the resulting MILP for a limited amount of time in order to obtain a relatively refined upper bound.
We define $d$ intervals for $\theta(x)^2$ as follows: $\theta^2_1 = 0, \ \theta^2_2 = 1$, and $\theta^2_q =\theta^2_{q-1}+ \frac{\theta_{d+1}^2-1}{d-1}$ for $q = 3, \ldots, d+1$. The first interval is different from the others since $\sqrt{a^2} \geq a^2$ for $0\leq a \leq 1$. Note that for any~$x$ in $\Omega$, $0 \leq \theta(x)^2 \leq \theta^2_{d+1}$,
so $\theta(x)^2$ must belong to $[\theta^2_{q},\theta^2_{q+1}]$ for some
$q = 1,...,d$.
Given these intervals, upper bounds and lower bounds for~$\theta(x)$ are given by
\[
\theta_{q+1} =
\begin{cases}
1, & q = 1 \\
\sqrt{\theta^2_{q+1} } , & q = 2, \ldots, d,
\end{cases}
\qquad
\theta_q =
\begin{cases}
0, & q = 1 \\
\sqrt{\theta^2_{q}}, & q = 2, \ldots, d.
\end{cases}
\]
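A minimal sketch of this discretization (our own illustration, assuming the upper bound $\theta^2_{d+1}$, denoted \texttt{theta2\_max} below, has already been computed) is:
\begin{verbatim}
import numpy as np

def theta_breakpoints(theta2_max, d):
    """Breakpoints theta^2_1, ..., theta^2_{d+1} for theta(x)^2; the first
    interval is [0, 1], the rest partition [1, theta2_max] uniformly."""
    assert theta2_max >= 1 and d >= 2
    theta2 = np.concatenate(
        ([0.0, 1.0], 1.0 + (theta2_max - 1.0) * np.arange(1, d) / (d - 1)))
    theta_lo = np.sqrt(theta2[:-1])                           # theta_q
    theta_hi = np.concatenate(([1.0], np.sqrt(theta2[2:])))   # theta_{q+1}
    return theta2, theta_lo, theta_hi
\end{verbatim}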
\subsubsection{Discretization of~$\delta(x)$\\} \label{discretization2}
The procedure is analogous to the one described in Section~\ref{discretization1}. Namely, we obtain an upper bound~$\delta_{l+1}$ for $\delta(x)$ by solving problem~$\max\{u_1 - u_2 \ | \ (x,v,r,u_1,u_2,s) \in \Psi \}$ for a limited amount of time. Given~$\delta_{l+1}$,
we generate $l$ discretization intervals defined by $\delta_h = \frac{h-1}{l} \delta_{l+1}$ for $h=1, \ldots, l+1$. By construction, $\delta(x)$ belongs to one of the intervals defined by the values~$\delta_h$, as desired.
\subsection{Tightness of the Upper-Bounding Function}\label{sec:ub_tightness}
We now investigate $\Delta(x) = g(x) - \mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$, the difference between the upper bound
\begin{equation}\label{eq:bound}
g(x) =
\mathbb{E}\left[Z_1 (x)\right] \Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
+
\mathbb{E}\left[Z_2 (x)\right] \left(1-\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)\right)
+
u_\theta(x) \phi\left(\frac{l_\delta(x)}{u_\theta(x)}\right),
\end{equation}
and the exact expression for~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$, which is given by
\begin{eqnarray*}
\Delta(x)
&=&
\mathbb{E}\left[Z_1 (x)\right]
\left(
\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
-
\Phi\left(\frac{\delta(x)}{\theta(x)}\right)
\right)
+
\mathbb{E}\left[Z_2 (x)\right] \left(
\Phi\left(\frac{\delta(x)}{\theta(x)}\right) -
\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
\right) + \\
&& \quad
u_\theta(x) \phi\left(\frac{l_\delta(x)}{u_\theta(x)}\right)
-
\theta(x) \phi\left(\frac{\delta(x)}{\theta(x)}\right) \\
&=&
\delta(x) \left(
\Phi\left(\frac{u_\delta(x)}{l_\theta(x)}\right)
-
\Phi\left(\frac{\delta(x)}{\theta(x)}\right)
\right) +
u_\theta(x) \phi\left(\frac{l_\delta(x)}{u_\theta(x)}\right)
-
\theta(x) \phi\left(\frac{\delta(x)}{\theta(x)}\right).
\end{eqnarray*}
The inflection points of~$\Phi$ and~$\phi$ are 0 and $\pm 1$, respectively, so their first derivatives are bounded in absolute value by
$\frac{1}{\sqrt{2\pi}}$ and~$\frac{1}{\sqrt{2e\pi}}$, respectively. As
$
\frac{l_\delta(x)}{u_\theta(x)}
\leq
\frac{\delta(x)}{\theta(x)}
\leq
\frac{u_\delta(x)}{l_\theta(x)}
$,
we have
\[
\Phi\left(
\frac{u_\delta(x)}{l_\theta(x)}
\right)
\leq
\Phi\left(\frac{\delta(x)}{\theta(x)}
\right)
+
\frac{1}{\sqrt{2\pi}}
\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{\delta(x)}{\theta(x)}
\right).
\]
Similarly,
\[
\phi\left(
\frac{l_\delta(x)}{u_\theta(x)}
\right)
\leq
\phi\left(\frac{\delta(x)}{\theta(x)}
\right)
+
\frac{1}{\sqrt{2e\pi}}
\left(
\frac{\delta(x)}{\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)}
\right).
\]
Therefore, we have
\begin{eqnarray}\label{deltaBound}
\Delta(x)
&\leq&
\frac{\delta(x)}{\sqrt{2\pi}}\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{\delta(x)}{\theta(x)}
\right)
+
\frac{\theta(x)}{\sqrt{2e\pi}}
\left(
\frac{\delta(x)}{\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)}
\right)
+
(u_\theta(x) - \theta(x))\phi\left(\frac{\delta(x)}{\theta(x)}\right) \nonumber \\
&\leq&
\frac{\delta(x)}{\sqrt{2\pi}}\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)}
\right)
+
\frac{\theta(x)}{\sqrt{2e\pi}}
\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)}
\right)
+
\frac{u_\theta(x) - \theta(x)}{\sqrt{2e\pi}} \nonumber \\
&=&
\left( \frac{\delta(x)}{\sqrt{2\pi}} + \frac{\theta(x)}{\sqrt{2e\pi}}
\right)
\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)}
\right)
+
\frac{u_\theta(x) - \theta(x)}{\sqrt{2e\pi}}
\end{eqnarray}
First, note that if~$l_\theta(x) = \theta(x) = u_\theta(x)$ and~$l_\delta(x) = \delta(x) = u_\delta(x)$, $\Delta(x) = 0$, as expected. For the element of the first expression above that does not depend on the discretization, we have for every~$x$
\begin{eqnarray}
\frac{\delta(x)}{\sqrt{2\pi}} + \frac{\theta(x)}{\sqrt{2e\pi}}
&\leq&
\frac{\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]}{\sqrt{2\pi}}
+
\frac{\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]}{\sqrt{e}} \nonumber \\
&\leq&
\frac{\sqrt{2\pi}+\sqrt{e}}{\sqrt{2\pi e}}
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] \approx 1.005\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]
\end{eqnarray}
Therefore, the quality of~$g(x)$ as an estimator of~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ depends on~$\left(
\frac{u_\delta(x)}{l_\theta(x)}
-
\frac{l_\delta(x)}{u_\theta(x)} \right)$ and~$u_\theta(x) - \theta(x)$. We leverage this result in order to obtain an approximation guarantee delivered by the enhanced RMP for a special case of our featured machine scheduling application in Section~\ref{sec:scheduling}.
\subsection{Strengthening Inequalities for the RMP}
\label{inequalities}
We show next how estimates on~$\diff(x)$ and~$\theta(x)$ can be coupled via supervalid inequalities (SVIs), which potentially eliminate integer solutions without removing all the optimal ones \citep{IsraeliWood02}. We propose SVIs of the form
\begin{equation}\label{SVIs}
\delta(x) \in \left[\delta_h, \delta_{h+1}\right] \implies \theta(x) \geq \underline{\theta}^h \quad \forall h=1,\dots,l,
\end{equation}
where
$\underline{\theta}^h$ establishes a lower bound on~$\theta(x)$ for every~$x$ such that~$\delta(x) \in \left[\delta_h, \delta_{h+1}\right]$. First, we show in Proposition \ref{P3} how to obtain a lower bound~$\underline{\theta}(\delta,z^{LB})$ for~$\theta(x)$ for all solutions~$x$ such that $\delta(x) = \delta$ when~$z^{LB}$ is a known lower bound for the exact problem, which can be obtained using any primal heuristic.
Proposition \ref{P4} extends this result to the case in which $\delta(x)$ belongs to a given interval. Both propositions use value~$\bar{\mu} = \max_{x \in \Omega}\mathbb{E}[Z_1(x)]$, which
typically can be quickly computed.
\begin{proposition} \label{P3}
Given~$\delta$ and~$z^{LB}$, let~$\Omega\left(\diff,z^{LB}\right)$ be the set of solutions~$x$ such that~$\diff(x)=\diff$ and~$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] \geq z^{LB}$. A lower bound~$\underline{\theta}(\delta,z^{LB})$ of~$\theta(x)$ for all $x$ in~$\Omega\left(\diff,z^{LB}\right)$ is given~by
\begin{align}
\label{auxProb}
\underline{\theta}(\delta,z^{LB})
= \min_{\theta \geq 0}\left\{ \theta \ | \ \bar{\mu}+\theta \phi\left(\frac{\delta}{\theta}\right) \geq z^{LB} \right\}.
\end{align}
\end{proposition}
\begin{proposition}
\label{P4}
Given~$\diff_1$, $\diff_2$, and~$z^{LB}$ such that
$\diff_1 \leq \diff_2$, we have
$\underline{\theta}(\delta_1,z^{LB}) \leq \underline{\theta}(\delta_2,z^{LB})$.
\end{proposition}
\begin{corollary}\label{cor:lb}
If $\delta(x) \in [\delta_h,\delta_{h+1}]$ and
$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] \geq z^{LB}$, we have
$\underline{\theta}\left(\delta_h,z^{LB}\right) \leq \theta(x)$.
\end{corollary}
By setting~$\underline{\theta}^h = \underline{\theta}\left(\delta_h,z^{LB}\right)$,
we obtain the following valid inequality for the RMP:
\[
s \geq \left( \underline{\theta}^h \right)^2 y_h, \quad \quad \forall h=1,\dots,l.
\]
These inequalities tighten the bounds by relating the choices of $h$ for $y_h = 1$ to the best-known solution. Similarly to~$z^{LB}$, each~$\underline{\theta}^h$ is computed at the initialization of Algorithm~\ref{a1}.
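Since the map $\theta \mapsto \bar{\mu} + \theta\,\phi(\delta/\theta)$ is increasing in~$\theta$ for $\theta > 0$, problem~\eqref{auxProb} can be solved numerically by bisection. A minimal sketch (our own illustration, assuming $\bar{\mu}$ and $z^{LB}$ are given) is:
\begin{verbatim}
from scipy.stats import norm

def theta_lower_bound(delta, mu_bar, z_lb, hi=1e6, tol=1e-9):
    """Smallest theta >= 0 with mu_bar + theta * phi(delta / theta) >= z_lb
    (Proposition 3); the left-hand side is increasing in theta."""
    f = lambda t: mu_bar + t * norm.pdf(delta / t) if t > 0 else mu_bar
    if f(hi) < z_lb:        # no theta in [0, hi] certifies the bound
        return hi
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) >= z_lb else (mid, hi)
    return hi
\end{verbatim}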
\section{Performance Evaluation}
\label{sec:compExperiments}
We present results of an extensive evaluation on synthetic instances to assess the performance of the enhanced algorithm compared to the baseline approach. The models and algorithms are implemented in CPLEX version 12.7.1 \citep{cplex}
through the Java API. We utilize Python 3.6 with \texttt{Scikit-learn} functions~\citep{scikit-learn} to generate the random instances. All experiments are conducted on an Intel(R) Xeon(R) CPU E5-1650 v4 at 3.60GHz with 32GB of memory, and we impose a maximum time limit of 10 minutes. Source code and synthetic instances are available upon request.
The problem used for evaluation is the following: \begin{align}
& \max && \mathbb{E} \left[ \max \left\{ Z_1(x),Z_2(x) \right\} \right] \label{fa:kp} \tag{KP} \\
& \textnormal{s.t.} && Z_i(x) = \sum_{j=1}^n Y_j x_{i,j} && i = 1, 2 \nonumber \\
&&& \sum_{j=1}^n a_j x_{i,j} \leq b_i && i=1,2 \nonumber \\
&&& x_{1,j} + x_{2,j} \leq 1 && j=1, \ldots, n \nonumber \\
&&& x_{i,j} \in \set{0,1} && i=1, 2, \ j = 1, \ldots, n. \nonumber
\end{align}
We generate random problem instances by considering two knapsack constraints defined over a set of $n$ items. Each item has an integer weight $a_j$, drawn independently from $\mathcal{U}(1,19)$, and the profit associated with each item, $Y_j$, follows a normal distribution with mean sampled from $\mathcal{U}(15,25)$. The right-hand side of the constraints is fixed to~$b_1=b_2=40$. The objective is to maximize the expected value of the knapsack with maximum profit, where the component random variables $Y_j$ correspond to the uncertain item profits.
To generate variances and correlations for the profits, we use \texttt{Scikit-learn} functions to generate a random positive semidefinite (PSD) matrix, which is then multiplied by a constant factor~$\alpha$. We generate 5 instances for each configuration of~$(n,\alpha)$, where $n \in \{15,20,25,30\}$ and $\alpha \in \{50,100,150,200,250\}$, for a total of 100 instances. The $\alpha$ multiplier allows us to evaluate how sensitive the algorithms are to increasing variance and correlation.
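For reproducibility, a sketch of the instance generator is shown below; the text does not pin down the exact \texttt{Scikit-learn} routine, so the use of \texttt{make\_spd\_matrix} here is our assumption:
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_spd_matrix

def make_instance(n, alpha, seed=0):
    """Random (KP) instance: integer weights a_j in {1,...,19}, mean profits
    mu_j ~ U(15,25), covariance = alpha times a random SPD matrix
    (make_spd_matrix is an assumed stand-in for the unspecified generator)."""
    rng = np.random.default_rng(seed)
    a = rng.integers(1, 20, size=n)       # item weights
    mu = rng.uniform(15, 25, size=n)      # mean profits
    Sigma = alpha * make_spd_matrix(n, random_state=seed)
    return a, mu, Sigma, 40, 40           # b_1 = b_2 = 40
\end{verbatim}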
Figure~\ref{fig:CDPP_Knapsack} presents a performance plot comparing solution times and optimality gaps for the baseline and enhanced algorithm. On the left half of the graph, the line plots report the number of instances solved by a given time limit, up to 600 seconds. On the right half of the graph, the line plots report the number of instances solved to within an optimality gap threshold, up to 18.3\%.
\begin{figure}[h!]
\centering
\caption{Cumulative distribution plot of performance comparing the baseline and the enhanced optimization algorithms.}
\label{fig:CDPP_Knapsack}
\includegraphics[scale=0.6]{CDPP_Knapsack.png}
\end{figure}
Figure~\ref{fig:CDPP_Knapsack} provides strong evidence for (1) the efficacy of the proposed enhancements and (2) the complexity of solving these problems to optimality. The baseline model solves only 3 instances within 10 minutes, and results in significant optimality gaps at time limit for the remaining 96 instances, at 9.3\% on average and up to 18.3\%. With our proposed enhancements, 55 instances are solved within 10 minutes, with 32 solved within 100 seconds, and much smaller optimality gaps for those instances that are not solved to optimality, at 0.9\% on average and 3\% at most. Regarding the number of cuts required to prove optimality, the baseline algorithm generates on average 270.5 cuts, while the enhanced algorithm generates 18.3 cuts, thus demonstrating the superior quality of the upper bounds obtained by the enhanced bounding function.
\renewcommand{\arraystretch}{0.6}
\begin{table}[h!]
\centering
\caption{Tabular comparison of solution times and gaps for baseline and enhanced algorithm.}
\label{tab:baselineVenchanced}
\begin{tabular}{cc|cc|cc}
\multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Baseline} &
\multicolumn{2}{c}{Enhanced} \\
$n$ & $\alpha$ & Time (s) & Gap (\%) & Time (s) & Gap (\%) \\ \hline
15 & 50 & $ 254^{(2)} $ & 1.7 & $ 28^{(5)} $ & 0.0 \\
15 & 100 & $ 81^{(1)} $ & 2.6 & $ 44^{(5)} $ & 0.0 \\
15 & 150 & $ -^{(0)} $ & 4.7 & $ 102^{(5)} $ & 0.0 \\
15 & 200 & $ -^{(0)} $ & 4.6 & $ 49^{(5)} $ & 0.0 \\
15 & 250 & $ -^{(0)} $ & 4.8 & $ 28^{(5)} $ & 0.0 \\ \hline
20 & 50 & $ -^{(0)} $ & 7.5 & $ 132^{(5)} $ & 0.0 \\
20 & 100 & $ -^{(0)} $ & 9.4 & $ 87^{(2)} $ & 0.5 \\
20 & 150 & $ -^{(0)} $ & 9.8 & $ 109^{(4)} $ & 0.2 \\
20 & 200 & $ -^{(0)} $ & 10.6 & $ 201^{(4)} $ & 0.1 \\
20 & 250 & $ -^{(0)} $ & 10.8 & $ 111^{(5)} $ & 0.0 \\ \hline
25 & 50 & $ -^{(0)} $ & 10.6 & $ -^{(0)} $ & 1.0 \\
25 & 100 & $ -^{(0)} $ & 11.3 & $ -^{(0)} $ & 0.8 \\
25 & 150 & $ -^{(0)} $ & 12.2 & $ 415^{(2)} $ & 0.6 \\
25 & 200 & $ -^{(0)} $ & 12.5 & $ 367^{(1)} $ & 0.5 \\
25 & 250 & $ -^{(0)} $ & 11.9 & $ 213^{(3)} $ & 0.3 \\ \hline
30 & 50 & $ -^{(0)} $ & 10.5 & $ 30^{(1)} $ & 0.9 \\
30 & 100 & $ -^{(0)} $ & 11.0 & $ 71^{(1)} $ & 0.7 \\
30 & 150 & $ -^{(0)} $ & 13.4 & $ -^{(0)} $ & 1.4 \\
30 & 200 & $ -^{(0)} $ & 13.3 & $ 391^{(1)} $ & 1.1 \\
30 & 250 & $ -^{(0)} $ & 12.7 & $ 306^{(1)} $ & 0.5
\end{tabular}
\end{table}
Table~\ref{tab:baselineVenchanced} reports more detailed information concerning the comparison of the baseline and enhanced algorithms. For each configuration of $(n,\alpha)$ and for both the baseline and enhanced algorithms, we report the average solution time for those instances solved to optimality within 10 minutes, with the number of instances solved to optimality in superscript, and the average percent gap over all instances. This table makes it even more clear that the enhanced algorithm provides significant improvement and suggests that instances become significantly harder to solve for larger values of the variance and correlation.
\section{Daily Fantasy Sports}
\label{sec:DFF}
In daily fantasy football, contests are arranged based on the starting times of each week's slate of NFL games, and only players in those games are eligible for inclusion on a fantasy roster. Further, not all players
are eligible for rosters because the fantasy scoring system only rewards points for specific tasks. Namely, only the quarterbacks (QB), kickers (K), and offensive ``skill position'' players\textemdash wide receivers (WR), running backs (RB), and tight ends (TE)\textemdash are eligible as single players; other players can be
selected collectively as ``team'' defenses (DEF).
Generally the individual players receive points for gaining yardage and scoring points in the actual game (via touchdowns, extra points, two-point conversions, and field goals) and the team defense earns points by preventing the opposing team from scoring or by scoring game points itself.
\texttt{DraftKings} is one of the two major DFS providers. Different types of
contests are offered on the betting platform, including showdowns, classics, tiers, and others. We focus exclusively on showdown contests in this application. Showdown contests only include the players in a single NFL game, and entries consist of six players, regardless of position, with one designated as the captain. The captain costs 1.5 times the normal salary and earns 1.5 times the normal
points. Each player may appear no more than once on a given entry, and different contests allow a different number of entries per participant (some as high as 150). Additionally, each entry requires at least one player from each team in the contest. We focus here on smaller, high-entry-fee contests that limit each participant to two or three entries (we always enter just two).
DFS contests align directly with the focus of this paper. First, when assembling a collection of entries, a natural goal for a participant would be to have one of his entries score very high. This is because of the payout structure\textemdash most of the payout goes to the highest few entries in the competitions, as we will discuss below. This therefore can be modeled as a problem of maximizing the expected value of the higher of the two entries.
\subsection{Problem definition}
We model the two-entry selection problem in a showdown contest as a special case of Problem~\ref{eqn:optProblem}. Let~$n'$ be the number of players, and $n = 2 n'$. The first~$n'$ players represent standard versions of the players (``flex'' in the lingo used by \texttt{DraftKings}), whereas the next $n'$ represent their ``captain" versions, in a common order. As shown in formulation \eqref{fa:dff}, the uncertain player scores correspond to the random variables in~\ref{eqn:optProblem}. We define the feasible space $\Omega$ via the following constraints. Let $\mathcal{I}_1$ and $\mathcal{I}_2$ be the sets containing the players from the first and second team, respectively. For $i \in \{1,2\}$ and $j \in [n]$, binary variable~$x_{i,j}$ indicates the selection of player~$j$ for roster~$i$. The following constraints apply to each roster:
\begin{itemize}
\item Exactly 5 flex players and 1 captain must be selected:
\[
\sum\limits_{j=1}^{n'}
x_{i,j} = 5, \sum\limits_{j=n'+1}^{n} x_{i,j} = 1, \quad i = 1, 2.
\]
\item The same player cannot be selected both for a flex and the captain positions:
\[
x_{i,j} + x_{i,j+n'} \leq 1, \quad i = 1, 2, \ j=1,\dots,n'.
\]
\item At least one player from each team appears on each roster (see the sketch after this list):
\[
\sum\limits_{j \in \mathcal{I}_t} x_{i,j} \geq 1, \quad i = 1, 2, \ t = 1, 2.
\]
\end{itemize}
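The sketch below (our own illustration) checks these roster constraints for a single entry; the index convention follows the ordering above, with flex versions first and captain versions offset by~$n'$:
\begin{verbatim}
import numpy as np

def is_feasible_roster(x_i, team_idx, n_prime):
    """Check the showdown constraints for one entry x_i in {0,1}^(2n').

    team_idx : list of two index arrays (flex positions of each team);
               captain versions are handled through the offset n_prime.
    """
    x_i = np.asarray(x_i)
    flex, captain = x_i[:n_prime], x_i[n_prime:]
    if flex.sum() != 5 or captain.sum() != 1:      # 5 flex players, 1 captain
        return False
    if np.any(flex + captain > 1):                 # no player used twice
        return False
    for team in team_idx:                          # >= 1 player of each team
        if flex[team].sum() + captain[team].sum() < 1:
            return False
    return True
\end{verbatim}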
We also limit the set of players under consideration to those with an expected score of at least 5 fantasy points. Players with expectations below this threshold are never selected by our algorithm anyway. Moreover, as previously mentioned, the assumption of normality in player scores becomes less likely to be rejected as projected scores increase.
\subsection{Data Sources and Estimation}
There are several parameters that need to be estimated\textemdash in particular, $\forall j \in [n']$, the parameters $\mu_j$ and $\sigma_j$ defining the normal distribution of the points scored by player~$j$; and, $\forall j, j' \in [n']$, the correlation $\rho_{j,j'}$ between the performances of players~$j$ and~$j'$. We used a training set consisting of historical data from four NFL seasons (2014\textendash2017) to estimate these parameters. We briefly discuss the process for each in turn below. We then compare our algorithm's performance against a heuristic over 16 competitions in the 2018 season. More details on the evaluation can be found in the Appendix.
\subsubsection{Expected Value Estimation} Due to the growth of the fantasy sports industry, estimated DFS points for players is the topic of many non-academic articles and websites (e.g., \url{https://rotogrinders.com}). For the purposes of this paper, we do not generate our own player points projections, but rather use the data from \url{https://fantasydata.com}. Their projections are consistent and reliable across the years analyzed.
\subsubsection{Variance, Correlation, and Covariance Estimation} We also used the data from
\url{https://fantasydata.com} in order to learn variances, correlations, and covariances for player scores using a nearest-neighbor-like algorithm. For a player $j$, his variance is estimated as the variance of the actual scores of the 50 players that share a common position with player $j$ in the training set data and whose expected value is as close as possible (in terms of squared difference) to player~$j$. For example, if a RB is \textit{projected} to score 20.5 fantasy points, we select the 50 RBs in the training set with \textit{projected} fantasy score as close as possible to~20.5 measured by squared difference, and use the \textit{actual} fantasy scores of those 50 players to calculate the variance of that player's fantasy score.
We estimate the correlation of players $j$ and $j'$ in a similar way to our single-player variance estimates. Since we are only reporting results for showdown contests, all available player pairs are either on the same or opposing teams. For teammates, we find the 50 pairs of players and games in the training set for which the players play on the same team, played the same positions as $j$ and $j'$, and have expected values as close as possible to that of $j$ and $j'$, and we use the sample correlation of their actual game scores to estimate their correlation. For example, if a QB and WR pair are on the same team and are expected to score 30 and 15 points, respectively, we find the 50 instances in the training set of QB and WR teammates with the sum of squared differences from 30 and 15 in expected values as low as possible. We follow the same process for players on opposing teams. Finally, a small correction is made to ensure that the covariance matrix is PSD, if needed.
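A minimal sketch of the single-player variance estimator described above (our own illustration, assuming flat arrays of historical projected scores, actual scores, and positions) is given below; the correlation estimator is analogous, operating on pairs instead of single players:
\begin{verbatim}
import numpy as np

def estimate_variance(proj_j, pos_j, hist_proj, hist_actual, hist_pos, k=50):
    """Sample variance of the actual scores of the k same-position players
    whose projections are closest (in squared difference) to proj_j."""
    mask = hist_pos == pos_j
    proj, actual = hist_proj[mask], hist_actual[mask]
    nearest = np.argsort((proj - proj_j) ** 2)[:k]
    return actual[nearest].var(ddof=1)
\end{verbatim}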
\subsubsection{Normality Assumption} The assumption of normality for players' fantasy point production is well-grounded. Using the Shapiro-Wilk test \citep{ShapiroWilk1965}, the null hypothesis of a normal distribution of player scores cannot be rejected for 76\% of the QBs (by far the most valuable position) from the 2016\textendash2018 seasons. In addition, the null hypothesis of a normal distribution cannot be rejected nearly half the time when considering players from any position over that same time frame with an expected score of at least 10 points. As the expected score increases, the likelihood of rejecting the assumption of normally distributed actual scores falls across all players. So while some of the less important positions, which score fewer points, may not follow a normal distribution, we cannot reject the normal distribution for the vast majority of the QBs and, in general, for the players projected to score a reasonably high number of points. These and other results regarding the normal distribution of player scores are available from the authors by request.
\subsection{Benchmark Heuristic} We used entries generated by a simple heuristic that would select a first entry with maximum expected value and a second entry that differs from the first by at least one player and otherwise again maximizes the expected value. The set of players available was the same for both our algorithm and the heuristic. Note that this heuristic is similar to the online tools many fantasy participants pay to use, such as \url{https://www.fantasycruncher.com/}.
\subsection{Results}
Over a collection of 16 contests in the 2018 NFL season, the net realized profit of employing our exact two-entry model using the inputs described above would have been over \$5,000. On the same collection of contests, employing the heuristic and the same inputs would have resulted in a net loss of over \$4,000. This application therefore highlights how important the joint decision-making over the entries is in order to get a high scoring entry.
We provide detailed results in the Appendix. One important takeaway is that the two entries selected often score on opposite sides of their expectations, showing how correlation is exploited to elevate the expected score of the maximum. As an example,
in the game \texttt{Redskins vs. Saints} played on 10-8, the two entries selected had expectations of 104.74 and 102.74, respectively. Their actual scores were 65.55 and 140.80, and the second entry would have won the competition and resulted in a substantial payout. With the heuristic, the two entries selected had expectations of 109.51 and 108.88, and scored 78.95 and 114.20. The score of 114.20 would have resulted in a positive payout, but a very marginal one. This example shows how important optimizing exactly can be in practice.
\section{Makespan Minimization with Stochastic Processing Times}
\label{sec:scheduling}
We applied our theoretical and algorithmic results to~$P2|p \sim \N(\mu,\Sigma)|\mathbb{E}[C_{max}]$, a machine scheduling problem involving the minimization of the makespan on two parallel machines for jobs with processing times drawn from a multivariate Gaussian distribution. We present two theoretical results for the case where the processing times of the jobs are uncorrelated. First, we show that $P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$ is equivalent to $P2|p_{j} = \mu_j|C_{max}$, the deterministic version of the problem where the means are used as processing times.
\begin{theorem}\label{thm:Sched}
$P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$ is equivalent
to $P2|p_{j} = \mu_{j}|C_{max}$.
\end{theorem}
\begin{Proof} Because there is no correlation between the random variables, we have $\theta(x) =
\sqrt{\sigma^2(Z_1(x))+\sigma^2(Z_2(x))}$ and
$\sigma^2(Z_i(x)) = \sum\limits_{j=1}^n \sigma^2_j x_{i,j}$, $i = 1,2$,
and as $x_{1,j} + x_{2,j} = 1$ for each~$j \in [n]$, we have
$\theta(x) = \sqrt{\sum\limits_{j=1}^n \sigma^2_j}$,
i.e., $\theta(x)$ is actually constant and can simply be written as~$\theta$.
If we set~$\diff_{\theta} = \diff_{\theta}(x) = \frac{
\delta(x)}{\theta}$, we can rewrite
$\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ as follows:
\begin{align*}
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]
=
\mathbb{E}\left[Z_1 (x)\right]
\Phi\left(\diff_{\theta}\right)
+
\mathbb{E}\left[Z_2 (x)\right]
\Phi\left(-\diff_{\theta}\right)
+
\theta\,
\phi\left(\diff_{\theta}\right).
\end{align*}
The c.d.f. of the standard normal distribution can be written as
\[
\Phi(x) =
\frac{1}{2} + \frac{1}{\sqrt{2\pi}}e^{\frac{-x^2}{2}}\left[ x + \frac{x^3}{3} + \frac{x^5}{5\cdot 3} + \ldots + \frac{x^{2n+1}}{(2n+1)!!} + \ldots \right],
\]
where $n!! = n (n-2) (n-4)\ldots(((n-1) \mod 2)+1)$ is the double factorial of $n$. Therefore,
\begin{eqnarray*}
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] &=&
\mathbb{E} \left[ Z_1(x) \right] \left( \frac{1}{2} + \frac{1}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[\diff_{\theta} + \frac{\diff_{\theta}^3}{3!!} + \frac{\diff_{\theta}^5}{5!!} + \ldots \right]\right) + \\
&&
\mathbb{E} \left[ Z_2(x) \right]\left( \frac{1}{2} - \frac{1}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[\diff_{\theta} + \frac{\diff_{\theta}^3}{3!!} + \frac{\diff_{\theta}^5}{5!!} + \ldots \right]\right) +
\frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}} \\
&=&
\frac{\mathbb{E} \left[ Z_1(x) \right]+\mathbb{E} \left[ Z_2(x) \right]}{2} + \left( \frac{
\theta \diff_{\theta}}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[\diff_{\theta} + \frac{\diff_{\theta}^3}{3!!} + \frac{\diff_{\theta}^5}{5!!} + \ldots \right]\right) + \frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}} \\
&=&
\frac{\mathbb{E} \left[ Z_1(x) \right]+\mathbb{E} \left[ Z_2(x) \right]}{2} +
\left( \frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[1 + \diff_{\theta}^2 + \frac{\diff_{\theta}^4}{3!!} + \frac{\diff_{\theta}^6}{5!!} + \ldots \right]\right).
\end{eqnarray*}
By taking the first derivative in~$\diff_{\theta}$, we obtain
\begin{eqnarray*}
\frac{d \mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right] }{d\diff_{\theta}} &=&
\left( \frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[2\diff_{\theta} + \frac{4\diff_{\theta}^3}{3!!} + \frac{6\diff_{\theta}^5}{5!!} + \ldots \right]\right)
-
\left( \frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[\diff_{\theta} + \diff_{\theta}^3 + \frac{\diff_{\theta}^5}{3!!} + \frac{\diff_{\theta}^7}{5!!} + \ldots \right]\right) \\
&=&
\frac{\theta}{\sqrt{2\pi}}e^{\frac{-\diff_{\theta}^2}{2}}\left[\diff_{\theta} + \frac{\diff_{\theta}^3}{3!!} + \frac{\diff_{\theta}^5}{5!!} + \ldots \right],
\end{eqnarray*}
which is strictly positive for~$\diff_{\theta} > 0$ (and vanishes at $\diff_{\theta} = 0$). Therefore, $\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$ is increasing in~$\diff_{\theta}(x)$ and attains its minimum (maximum) for every~$x$ such that~$\diff_{\theta}(x)$ is minimum (maximum). As~$\theta$ is constant and $\mathbb{E}[Z_1(x)] + \mathbb{E}[Z_2(x)] = \sum_{j=1}^n \mu_j$ is also constant, minimizing~$\diff_{\theta}(x)$ is equivalent to minimizing $\max\{\mathbb{E}[Z_1(x)],\mathbb{E}[Z_2(x)]\}$, i.e., to solving the deterministic problem $P2|p_j = \mu_j|C_{max}$. $\blacksquare$
\end{Proof}
This framing of $P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$ allows one to see that the minimization of the makespan is equivalent to the maximization of the idlest machine's load. Similar arguments hold if one wishes to optimize~$\mathbb{E} \left[ \min \{ Z_1(x),Z_2(x) \} \right]$; in the machine scheduling setting, this problem consists of minimizing the load of the idlest machine (or maximizing the load of the busiest machine), which admits a trivial optimal solution, where all jobs are assigned to one machine.
The next result leverages our results in Section~\ref{sec:ub_tightness} on the tightness of the upper-bound function to show that RMP delivers solutions with a constant-factor approximation guarantee for~$P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$.
\begin{theorem}
RMP delivers a 2.005-approximation for~$P2|p \sim \N(\mu,\Sigma),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$.
\end{theorem}
\begin{Proof} As $\theta(x) = \theta$ when the variables are uncorrelated, the second term in~\eqref{deltaBound} vanishes and we have
\[
\Delta(x)
=
\left( \frac{\delta(x)}{\sqrt{2\pi}} + \frac{\theta}{\sqrt{2e\pi}}
\right)
\left(
\frac{u_\delta(x) - l_\delta(x)}{\theta}
\right).
\]
Also, because variances play no role in this setting (from Theorem~\ref{thm:Sched}), one may set all discretization intervals of~$\delta$ to have the same length and scale the variances such that $\theta = u_\delta(x) - l_\delta(x)$ for every~$x$. Therefore,
$ g(x) \leq
\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]\left(1 +
\frac{\sqrt{2\pi}+\sqrt{e}}{\sqrt{2\pi e}}\right)
\approx 2.005\mathbb{E} \left[ \max \{ Z_1(x),Z_2(x) \} \right]$.
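Here we have used that
\[
\frac{\sqrt{2\pi}+\sqrt{e}}{\sqrt{2\pi e}}
=\frac{1}{\sqrt{e}}+\frac{1}{\sqrt{2\pi}}
\approx 0.6065+0.3989
\approx 1.005.
\]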
$\blacksquare$
\end{Proof}
We also conducted a computational evaluation of our algorithm using synthetic instances of~$P2|p \sim \N(\mu,\Sigma)|\mathbb{E}[C_{max}]$, generated using the same procedures as~\cite{ranjbar2012two} and~\cite{stec2019scheduling}; details about the generation of these instances are presented in the Appendix (see Section~\ref{sec:experiments_makespan}). Overall, our algorithm performed well, achieving an average optimality gap of approximately 0.12\% within the time limit of 10 minutes; in particular, 98 out of 180 instances were solved to optimality. In comparison with a deterministic heuristic that ignores correlations and uses the average processing times of the jobs, the average improvement was roughly~1.7\%; slightly larger differences were observed when the processing times had higher variances.
\section{Conclusion}
\label{sec:conclusion}
We investigate a class of challenging discrete stochastic optimization problems in which the objective function is given as the expected value of the maximum of two functions of the component random variables of a multivariate Gaussian distribution. We show that our problem is NP-hard and present two real-world applications that can be modeled within our setting.
From a computational perspective, the main difficulty in solving these problems comes from the highly nonlinear expression describing the objective function, which involves the evaluation of both the c.d.f. and the p.d.f. of a standard normal distribution with arguments given as functions of the decision variables. We propose an exact cutting-plane algorithm based on a linear function that provides upper bounds on the nonlinear objective. We investigate strengthening techniques for the bounding function, and our computational results show a considerable improvement in performance as a result of the proposed techniques.
For the featured applications, computational results show that our algorithm provides a clear advantage over deterministic heuristics that do not consider correlations and covariances. We also present results of theoretical relevance for a stochastic makespan minimization problem with two machines. We prove the equivalence of $P2|p_{i,j} \sim \N(\mu_{i,j},\sigma_{i,j}^2),\rho_{j,j'}=0|\mathbb{E}[C_{max}]$ and its deterministic counterpart and show that optimizing over our bounding function gives a 2.005-approximation for the stochastic version. Real-world settings of the problem, such as DFS, allow for scenarios where three or more functions (entries, in this case) can be selected. The resulting problems are challenging from both a mathematical and a computational perspective, and investigating them is an exciting avenue for future work.
\bibliographystyle{plainnat}
We study a reformulation (following Constantin \cite{Const_EL_Local_2000}) of the incompressible Euler equations on a domain ${\TT^n}:=\RR^n/2\pi\ZZ^n$ in the absence of external forcing. The Euler equations model the flow of an incompressible inviscid fluid and are (classically) formulated in terms of a divergence-free vector field $u$ (i.e. $\nabla\cdot u=0$) as follows:
\eqnb\label{eqEulerClassical}
\frac{\p u}{\p t}+(u\cdot\nabla)u+\nabla p =0
\eqne
where $p$ is a scalar potential representing internal pressure (as opposed to physical pressure at a boundary). The divergence-free condition reflects the incompressibility constraint.
In two and particularly in three dimensions, these equations continue to be of great interest; some recent surveys include \cite{Const_Euler_2007,Gibbon_2008, Yudovich_2006}. As an illustration of the challenge posed by these equations we note that, unlike the Navier--Stokes equations, where global weak solutions have been known to exist since 1934 due to Leray \cite{Leray_1934}, existence of global weak solutions of the Euler equations (on periodic domains) was not proved until 2011 by Wiedemann \cite{Wiedemann_2011}, following the work of De Lellis and Sz\'ekelyhidi \cite{DeLellis_Szekelyhidi_2010}. On the spatial domain $\RR^3$, more regular local solutions ($u\in C^0([0,T];H^s)\cap C^1([0,T];H^{s-2})$ with $s>5/2$) have been known to exist since the 1970s due to Kato and others; see, for example, \cite{Kato_1972, Kato1974}.
In the study of the Navier--Stokes equations, results such as those found in \cite{JCR_WS_2009} motivate us to approach the classical equations of fluid mechanics from a more Lagrangian viewpoint. In that paper, Robinson and Sadowski show that if $u$ is a suitable weak solution of the Navier--Stokes equations in 3D in the sense of Caffarelli, Kohn and Nirenberg \cite{C_K_N_1982}, then almost every particle trajectory is unique and $C^1$ in time. The argument there is based on showing that almost all trajectories avoid the set of points $(x,t)$ where singularities could develop, which in turn relies on the fact that the set of such points has box-counting dimension at most $5/3$.
Constantin has studied a form for the Euler equations that involves both the classical velocity field and the so called back-to-labels map $A$ which is defined to be the inverse of the trajectory map $X$ at each time $t$. More precisely, for an evolving vector field $u$ defined on ${\TT^n}\x[0,T]$, the trajectory map solves
\eqnb\label{eqXdefn}
\left\{
\begin{array}{l}
\dfrac{\d X}{\d t}(y,t)=u(X(y,t),t)\\
\\
X(y,0)=y
\end{array}
\right.
\eqne
for each $y\in{\TT^n}$. If $u$ is divergence-free and sufficiently regular then $X$ is well defined and $X(\cdot,t)$ is bijective for each $t$. In this case we can define the back-to-labels map $A$ by setting
\eqnb\label{eqAdefn}
A(\cdot,t)\coloneqq X^{-1}(\cdot,t),
\eqne
where we consider $X$ as a map $X(\cdot,t):{\TT^n}\rightarrow{\TT^n}$ for each $t\in[0,T]$. For the \textit{Eulerian-Lagrangian} form, as we shall continue to call it, Constantin \cite{Const_EL_Local_2000} proved local existence and uniqueness results in certain H\"older spaces on $\RR^3$ for solutions that are periodic, or satisfy suitable decay conditions.
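To give a computational picture of these definitions (purely illustrative; the steady divergence-free field $u$ below is an arbitrary example, not a solution of the Euler equations), the following Python sketch approximates the trajectory map $X(\cdot,t)$ of \re{eqXdefn} by Runge--Kutta integration and recovers the back-to-labels map $A(\cdot,t)$ of \re{eqAdefn} by integrating backwards in time:
\begin{verbatim}
import numpy as np

def u(x):
    # x has shape (..., 2); a divergence-free cellular flow on the 2-torus
    return np.stack([-np.sin(x[..., 0]) * np.cos(x[..., 1]),
                      np.cos(x[..., 0]) * np.sin(x[..., 1])], axis=-1)

def rk4_flow(y, t0, t1, n_steps=200):
    # Flow the points y from time t0 to t1 (t1 < t0 integrates backwards).
    h = (t1 - t0) / n_steps
    x = np.array(y, dtype=float)
    for _ in range(n_steps):
        k1 = u(x)
        k2 = u(x + 0.5 * h * k1)
        k3 = u(x + 0.5 * h * k2)
        k4 = u(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.mod(x, 2 * np.pi)          # represent points in [0, 2*pi)^2

labels = np.random.default_rng(1).uniform(0, 2 * np.pi, size=(5, 2))
X = rk4_flow(labels, 0.0, 1.0)           # trajectory map X(., 1)
A = rk4_flow(X, 1.0, 0.0)                # back-to-labels map A(., 1)
err = np.abs(A - labels)
print(np.max(np.minimum(err, 2 * np.pi - err)))  # torus distance: small
\end{verbatim}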
As Yudovich \cite{Yudovich_2006} has noted, a similar combination of Eulerian and Lagrangian approaches was used to investigate the Euler equations in H\"older spaces, by G\"unther and Lichtenstein independently, as early as the 1920s (\cite{Lichtenstein_1927}, \cite{Gunther_1926}).
First we will review the Eulerian-Lagrangian formulation and discuss how it is formally equivalent to the usual Euler equations. We then turn to the main topic of this paper, which is the proof of an existence and uniqueness result for the Eulerian-Lagrangian formulation in $C^0([0,T];H^{s}(\TT^n))$ with $s>\frac{n}{2}+1$ in dimension $n\geq2$. The proof is self-contained, in the sense that it neither appeals to results about the classical Euler equations, nor to the problem in H\"older spaces.
\section{The Eulerian-Lagrangian form of the equations}
The Eulerian-Lagrangian form of the Euler equations comprises the following system:
\eqnb\label{eqELA}
\p_tA+(u\cdot\nabla)A=0,
\eqne
\eqnb\label{eqELu}
u=\PP((\nabla A)^\ast v),
\eqne
\eqnb\label{eqELv}
\p_tv+(u\cdot\nabla)v=0.
\eqne
Given an initial divergence-free velocity $u_0$ for the classical equations, we choose initial conditions for the above system as follows:
\eqnb\label{eqELICA}
A(x,0)=x,
\eqne
\eqnb\label{eqELICu}
u(x,0)=v(x,0)=u_0(x).
\eqne
We use the notation $\PP$ for the Leray projector onto the space of divergence-free functions. For a matrix $M$, $M^\ast$ denotes the transposed matrix. The vector field $v$ is called the \textit{virtual velocity} and represents the initial velocity transported by the flow.
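On the torus the Leray projector acts diagonally in Fourier space: $\widehat{\PP v}(k)=\hat{v}(k)-k\,(k\cdot\hat{v}(k))/|k|^2$ for $k\neq 0$, with the mean mode left unchanged. The following Python sketch (a minimal illustration, not used in the analysis below) implements this on a grid and checks that gradients are annihilated:
\begin{verbatim}
import numpy as np

def leray_project(v):
    # v: array of shape (2, N, N), a vector field sampled on a uniform grid
    N = v.shape[-1]
    k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    vh = np.fft.fft2(v)                  # componentwise FFT over grid axes
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                       # avoid 0/0; the k = 0 mode is untouched
    div_h = (kx * vh[0] + ky * vh[1]) / k2
    out = np.stack([vh[0] - kx * div_h, vh[1] - ky * div_h])
    return np.real(np.fft.ifft2(out))

# Sanity check: the projection of a gradient field is (numerically) zero.
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
grad = np.stack([np.cos(X) * np.sin(Y), np.sin(X) * np.cos(Y)])  # grad(sin x sin y)
print(np.max(np.abs(leray_project(grad))))   # of order 1e-15
\end{verbatim}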
It will often be convenient to treat $A$ as a perturbation of the identity map on ${\TT^n}$. In this case we use the notation $\eta(x,t)\coloneqq A(x,t)-x$ and replace \re{eqELA} and \re{eqELICA} with the equations
\eqnb\label{eqELeta}
\p_t\eta +(u\cdot\nabla)\eta+u=0,\oneChar \eta(x,0)=0
\eqne
respectively. We do this because the identity map (hence $A$) does not have sufficient Sobolev regularity when considered as a function on the torus with values in $\RR^n$ (i.e. without accounting for the topology of the target torus).
The following proposition encapsulates the derivation of \re{eqELu} (sometimes called the Weber formula) which can be found in \cite{Const_EL_Local_2000}.
\begin{proposition}\label{propWeber}
Let $n\geq 2$ and consider $u\in C^1((0,T)\x{\TT^n})$ with $u(0)\in C^1({\TT^n})$. If $u$ is divergence-free and satisfies \re{eqEulerClassical} for some $p$, with spatially periodic boundary conditions, then $A\in C^1((0,T)\x{\TT^n};{\TT^n})$ and $u$ satisfies \re{eqELu} with $v(x,t)=u_0(A(x,t))$.
\end{proposition}
\begin{proof}
From the regularity assumptions on $u$ and periodicity of the domain we deduce that the trajectories $X(y,\cdot)\in C^2(0,T)$ and $\nabla X(y,\cdot)\in C^1(0,T)$ for all $y\in{\TT^n}$; we also have $X, \frac{\p X}{\p t}\in C^1((0,T)\x{\TT^n})$. It follows from the divergence-free condition that $\det \nabla X \equiv 1$, so $X$ is volume preserving and locally injective, hence bijective, given that ${\TT^n}$ has finite volume. By the inverse function theorem we see that $A$ exists and is an element of $C^1((0,T)\x{\TT^n})$. We now have enough regularity to make the following calculations rigorous.
From \re{eqEulerClassical} and \re{eqXdefn} we obtain
\eqnbs
\frac{\p^2X}{\p t^2}(y,t)=-\nabla p(X(y,t),t),
\eqnes
which is of course just a Lagrangian interpretation of the Euler equations.
Setting $\tilde p(y,t)=p(X(y,t),t)$ this becomes
\eqnbs
\frac{\p^2X}{\p t^2}=-((\nabla X)^\ast)^{-1}\nabla\tilde p(y,t).
\eqnes
Multiplying through by $(\nabla X)^\ast$ and changing the order of differentiation yields
\eqnb\label{propWeber:eq3}
\frac{\p}{\p t}\left[\frac{\p X_j}{\p t}\frac{\p X_j}{\p y_i}\right]=\frac{\p}{\p y_i}\left[-\tilde p +\frac{1}{2}\left|\frac{\p X}{\p t}\right|^2\right]
\eqne
for $i=1,\ldots,n$, where there is an implicit sum over $j=1,\ldots,n$ and $X_j$, $y_i$ denote the components in $\RR^n$ of $X$, $y$ respectively. Integrating \re{propWeber:eq3} in time, multiplying the corresponding vector equation by $(\nabla A)^\ast$ and evaluating at $A(x,t)$ gives
\eqnb\label{propWeber:eq4}
u(x,t)=\frac{\p X}{\p t}(A(x,t),t)=(\nabla A)^\ast u_0(A(x,t))-\nabla n
\eqne
where
\eqnbs
n(x,t)=\int_0^t \tilde p(A(x,t),s)-\frac{1}{2}\left|\frac{\p X}{\p t}(A(x,t),s)\right|^2 \d s.
\eqnes
As gradients lie in the kernel of the Leray projector, applying $\PP$ to \re{propWeber:eq4} shows that $u$ satisfies \re{eqELu} as required. Note that $v(x,t)=u_0(A(x,t))$ satisfies \re{eqELv}, hence solutions to the Euler equations indeed solve the Eulerian-Lagrangian form.
\end{proof}
The converse is a little more technical.
\begin{proposition}
Let $s>\frac{n}{2}+1$ and $u$, $v$, $\eta\in C^0([0,T];H^s)\cap C^1([0,T];H^{s-1})$ satisfy \re{eqELu}, \re{eqELv}, \re{eqELICu} and \re{eqELeta}. Then $u$ solves \re{eqEulerClassical} for some $p\in C^0([0,T];H^s)$.
\end{proposition}
\begin{proof}
Since $H^{s-1}({\TT^n})$ is a Banach algebra that embeds into $L^\infty({\TT^n})$, we have that if $f,g\in H^{s-1}$ (scalar valued) then
\[
\p_{x_i}(fg)=(\p_{x_i}f)g + f(\p_{x_i}g)
\]
as an equality of $L^2$ functions, for $i=1,2,\ldots, n$. Therefore, denoting the material derivative by $\D_t\coloneqq\p_t+(u\cdot\nabla)$, for $f,g\in C^0([0,T];H^{s-1})\cap C^1([0,T];H^{s-2})$ we have
\eqnb\label{prop2:eqDistr}
\D_t (fg)= (\D_tf)g+f(\D_tg).
\eqne
Moreover, if $f\in H^s$,
\[
(u\cdot\nabla )\nabla f = \nabla((u\cdot\nabla)f)- (\nabla u)^\ast \nabla f.
\]
Hence the classical commutation relation
\eqnb\label{eqComtnRel}
\D_t\nabla f=\nabla \D_t f-(\nabla u)^\ast\nabla f
\eqne
holds as an equality in $L^2$, when $f\in C^0([0,T];H^{s})\cap C^1([0,T];H^{s-1})$.
Since $u$ satisfies \re{eqELu}, we may write
\eqnb\label{eqELuAlt}
u(x,t)= v+(\nabla\eta)^\ast v-\nabla n
\eqne
for some real-valued $n$. Then by \re{prop2:eqDistr} and \re{eqComtnRel} the following calculations are justified:
\eqnb\label{eqReverseDeriv}
\begin{aligned}
\D_t u&=\D_tv+(\D_t\nabla \eta)^\ast v+(\nabla \eta)^\ast \D_tv-\D_t\nabla n\\
&=(\nabla \D_t \eta)^\ast v - (\nabla u)^\ast(\nabla \eta)^\ast v -\nabla \D_t n +(\nabla u)^\ast \nabla n\\
&=-(\nabla u)^\ast[v+(\nabla \eta)^\ast v-\nabla n]-\nabla \D_t n\\
&=-(\nabla u)^\ast u-\nabla \D_t n\\
&=-\nabla p
\end{aligned}
\eqne
where $p=\frac{1}{2}|u|^2+\D_tn$; in the final step we used the identity $(\nabla u)^\ast u=\frac{1}{2}\nabla|u|^2$.
\end{proof}
\section{An Existence and Uniqueness Theorem}
For $r\geq 0$, we will use the notation $H^r$ variously for scalar or vector valued functions in $H^r(\TT^n)$ (componentwise), where this does not cause ambiguity.
We will often consider functions in spaces of the form $C^0([0,T];(H^{s}(\TT^n))^n)$. To simplify notation we define $\Sigma_s(T)$ (usually denoted $\Sigma_s$) for $T \geq 0$ and $s\geq 0$ by
\eqnbs
\Sigma_s(T):=C^0([0,T];(H^{s}(\TT^n))^n).
\eqnes
We consider the natural norm on $\Sigma_s$:
\[
\|u\|_{\Sigma_s}=\sup_{t\in[0,T]}\|u(t)\|_{H^{s}}.
\]
The aim of the rest of this paper is to prove the following theorem.
\begin{theorem}\label{thmExistUniq}
If $n\geq 2$, $s>\frac{n}{2}+1$ and $u_0\in H^{s}$ is divergence-free
then there exists $T>0$, such that the system (\ref{eqELA}--\ref{eqELv}) with initial conditions \re{eqELICA} and \re{eqELICu} has a unique solution $A,u,v$ such that $\eta,u,v\in\Sigma_{s}(T)\cap C^1([0,T];H^{s-1})$ where $\eta(x,t)=A(x,t)-x$. Moreover $A \in C^1([0,T]\x{\TT^n})$ as a map into the torus.
\end{theorem}
We will prove this by constructing a contracting iteration scheme using the equations \re{eqELu},\re{eqELv} and \re{eqELeta}. More precisely, given $u\in\Sigma_s(T)$ we find $v, \eta\in\Sigma_s \cap C^1([0,T]\x{\TT^n})$, solutions of
\[
\p_t \eta + (u\cdot\nabla) \eta=-u, \;\eta(x,0)=0
\]
and
\[
\p_t v+ (u\cdot\nabla) v=0,\;v(x,0)=u_0(x).
\]
We then construct the next iterate of $u$, using
\[
u'=\PP[(\nabla A)^\ast v]
\]
and show that $u\mapsto u'$ is a contraction on a certain subset of $\Sigma_s$.
In the case of H\"older spaces, Constantin constructed an iteration scheme that was instead a contraction with respect to $A$. This involves controlling differences between candidate virtual velocities ($v_1$ and $v_2$, say) in terms of the difference between the respective back-to-labels maps ($A_1$ and $A_2$). This can be achieved using the fact that $v_i = u_0 (A_i)$ is a solution to \re{eqELv}. In the H\"older setting this is a natural way to proceed; however, relying on this \textit{a posteriori} knowledge about the solution introduces an extra technicality when we work in Sobolev spaces. For this reason we will proceed as described above, relying only on \textit{a priori} estimates. Following the proof, we shall see how the argument differs if the contraction is with respect to $A$; in particular we obtain an alternative proof under the additional assumption that $s\in\ZZ$.
We begin the proof of Theorem \ref{thmExistUniq} by stating two inequalities concerning the advection term $(u\cdot\nabla)v$, using the notation $B(u,v)\coloneqq(u\cdot\nabla)v$. Both of these results can be proved following the steps in \cite{ConstFoias,JCR_Sad_Sil_2012} (the only difference being that $B$ here does not include a Leray projection).
\begin{lemma}\label{lemBilBndHs}
For $s>\frac{n}{2}$ there exists $C_1>0$ such that if $u\in H^s$ and $v\in H^{s+1}$ then $B(u,v)\in H^s$ and
\eqnb\label{lemBilBndHs:bound}
\|B(u,v)\|_{H^s}\leq C_1\|u\|_{H^s}\|v\|_{H^{s+1}}.
\eqne
\end{lemma}
This is really just the fact that $H^s$ is a Banach algebra. For the second lemma the assumption that $u$ is divergence-free allows us to ``save a derivative'' by means of the identities
\[
(B(u,(-\Lap)^{r/2}v),(-\Lap)^{r/2}v)_{L^2}=0
\]
for $r\in[0,s]$.
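For completeness we recall why these identities hold: integrating by parts and using $\nabla\cdot u=0$ gives, for sufficiently smooth $w$,
\[
(B(u,w),w)_{L^2}
=\frac{1}{2}\int_{\TT^n} u\cdot\nabla |w|^2 \,\d x
=-\frac{1}{2}\int_{\TT^n} (\nabla\cdot u)\, |w|^2 \,\d x
=0,
\]
which we apply with $w=(-\Lap)^{r/2}v$.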
\begin{lemma}\label{lemTrilBndHs}
If $s>\frac{n}{2}+1$ there exists $C_2>0$ such that for $u\in H^s$, $v\in H^{s+1}$ with $u$ divergence-free we have
\eqnb\label{lemTrilBndHs:bound}
|(B(u,v),v)_{H^s}|\leq C_2\|u\|_{H^s}\|v\|^2_{H^s}.
\eqne
\end{lemma}
We use the following shorthand for closed balls in $\Sigma_{s}$:
\eqnbs
B_M = \overline{B_{\|\cdot\|_{\Sigma_s}}(0,M)},
\eqnes
i.e. $B_M$ is the closed ball of radius $M>0$ centred at the origin with respect to the norm $\|\cdot\|_{\Sigma_s}$. Where ambiguity could arise we write $B_M(T)$ for the closed ball in $\Sigma_s(T)$.
\begin{lemma}\label{lemW}
If $s>\frac{n}{2}+1$ and $\eta,v\in \Sigma_{s}(T)$ then $\PP[(\nabla \eta)^\ast v]\in \Sigma_{s}$ and there exists a constant $C_3>0$ (independent of $\eta$, $v$, $t$ and $T$) such that for fixed $t$,
\eqnb\label{lemW:bound1}
\|\PP[(\nabla \eta)^\ast v]\|_{H^r}\leq C_3\|\eta\|_{H^{s}}\|v\|_{H^r},
\eqne
where $r=s$ or $r=s-1$. Furthermore, there exists $C_3'>0$ such that for any $M>0$ and $T>0$, the following bounds hold uniformly with respect to $t\in[0,T]$ for any $ \eta_1,\eta_2,v_1,v_2 \in B_M(T)$:
\eqnb\label{lemW:LipBound}
\|\PP[(\nabla \eta_1)^\ast v_1-(\nabla \eta_2)^\ast v_2]\|_{X}\leq C_3'M(\| \eta_1- \eta_2\|_{X}+ \|v_1-v_2\|_{X}),
\eqne
where $X$ is $L^2(\TT^n)$ or $H^{s-1}$.
\end{lemma}
\begin{proof}
For continuity into $H^{s-1}$ we use the fact that $H^{s-1}$ is a Banach algebra. More precisely, we see that
\eqnb\label{lemW:eq1}
\begin{aligned}
\|\PP[(\nabla \eta_1)^\ast v_1-(\nabla \eta_2)^\ast v_2]\|_{H^{s-1}}&\leq C\| \eta_1- \eta_2\|_{H^{s}}\|v_1+v_2\|_{H^{s-1}}\\
&\oneChar+C\|\nabla \eta_1+\nabla \eta_2\|_{H^{s-1}}\|v_1-v_2\|_{H^{s-1}},
\end{aligned}
\eqne
where $C>0$ is independent of the $\eta_i$ and $v_i$.
The key step in the proof of \re{lemW:bound1} when $r=s$ is that if $\eta,v\in C^2$ then for some $q\in H^{s}$,
\eqnbs
\begin{aligned}
\p_{x_i}\PP[(\nabla \eta)^\ast v]&=\p_{x_i}(\p_{x_j}\eta_kv_k)-\p_{x_i}\p_{x_j}q\\
&=\p_{x_j}(\p_{x_i}\eta_kv_k)-\p_{x_i}\eta_k\p_{x_j}v_k+\p_{x_j}\eta_k\p_{x_i}v_k-\p_{x_i}\p_{x_j}q
\end{aligned}
\eqnes
where sums are taken implicitly over $k$. The left-hand side is already divergence-free so projecting again removes the gradient terms and yields
\eqnb\label{lemW:eq0}
\p_{x_i} \PP[(\nabla \eta)^\ast v]=\PP[(\nabla \eta)^\ast \p_{x_i} v-(\nabla v)^\ast \p_{x_i} \eta].
\eqne
By continuity, this still holds if we only have $\eta,v\in H^{s}$.
A calculation similar to \re{lemW:eq1} applied to \re{lemW:eq0} yields continuity with respect to the $H^{s}$ norm as claimed.
The inequalities \re{lemW:bound1} for $r=s-1$ and $r=s$ are obtained by taking the $H^{s-1}$ norms of $\PP[(\nabla \eta)^\ast v]$ and \re{lemW:eq0} respectively.
To prove \re{lemW:LipBound}, we again use the fact that $\PP$ removes gradients. Indeed for $f$, $g\in H^{s}$, we have
\eqnb\label{lemW:eq2}
\PP((\nabla f)^\ast g)=\PP(\nabla(f\cdot g)-(\nabla g)^\ast f)=-\PP((\nabla g)^\ast f).
\eqne
Setting $f= \eta_1- \eta_2$, $g=v_1+ v_2$, we see that the calculations in \re{lemW:eq1} can be modified to give the required result. Note that for the $L^2$ bound we use the fact that \re{lemW:eq1} holds if we replace $H^{s}$ with $L^\infty$ and $H^{s-1}$ with $L^2$.
\end{proof}
The next lemma gives uniform bounds on the $H^s$ norms of solutions to the transport equations \re{eqELA} and \re{eqELv}. We will consider the following system:
\eqnb\label{eqTransport}
\left\{
\begin {array}{l}
\p_t f + (u\cdot\nabla) f =g\\
f(0)=f_0
\end{array}
\right.
\eqne
where $f,g:[0,T]\x{\TT^n}\to\RR^n$ and $u$ is divergence-free.
\begin{lemma}\label{lemTransport}
Let $s>\frac{n}{2}+1$ and fix $f_0\in H^{s}$, $g\in\Sigma_s$. If $u\in\Sigma_s$ is non-zero and divergence-free then there exists a unique solution $f$ to \re{eqTransport}. Furthermore, the solution $f\in \Sigma_{s}\cap C^1([0,T];H^{s-1})\cap C^1([0,T]\x {\TT^n})$ and there exists $C_4>0$ (from Lemma \ref{lemTrilBndHs}) such that if $r,t\in[0,T]$ we have:
\eqnb\label{lemTransport:bound1}
\|f(t)\|_{H^{s}}\leq \left(\|f(r)\|_{H^{s}}+\frac{\|g\|_{\Sigma_s}}{C_4\|u\|_{\Sigma_s}}\right)\exp(C_4|t-r|\|u\|_{\Sigma_{s}})-\frac{\|g\|_{\Sigma_s}}{C_4\|u\|_{\Sigma_s}}.
\eqne
\end{lemma}
\begin{proof}
By the method of characteristics we obtain a solution $f\in C^1([0,T]\x{\TT^n})$. The formal argument that follows motivates our consideration of the regularity of $f$.
Taking the $H^{s}$ product of \re{eqTransport} with $f$ yields
\[
\frac{1}{2}\frac{\d}{\d t}\|f\|^2_{H^{s}}=-(B(u,f),f)_{H^{s}}+(f,g)_{H^s}.
\]
By Lemma \ref{lemTrilBndHs}, there exists $C>0$ such that for all $t\in[0,T]$,
\eqnb\label{lemTheta:bound2}
\frac{1}{2}\frac{\d}{\d t}\|f(t)\|_{H^{s}}^2\leq C\|u(t)\|_{H^{s}}\|f(t)\|_{H^{s}}^2+\|g(t)\|_{H^s}\|f(t)\|_{H^s}.
\eqne
Now \re{lemTransport:bound1} follows from Gronwall's inequality. In the case $r>t$, this argument is applied to the time-reversed equation, that is, using the fact that for fixed $r$, $-f(r-t)$ is transported by $-u(r-t)$ with forcing $g(r-t)$.
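In more detail, setting $y(t)=\|f(t)\|_{H^{s}}$, $a=C\|u\|_{\Sigma_s}$ and $b=\|g\|_{\Sigma_s}$, \re{lemTheta:bound2} formally gives $\frac{\d}{\d t}y\leq ay+b$, whence
\[
\frac{\d}{\d t}\left[\e^{-a(t-r)}\left(y+\frac{b}{a}\right)\right]\leq 0,
\]
and integrating from $r$ to $t$ yields \re{lemTransport:bound1}.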
To properly justify this we can proceed by a Galerkin method. For each $N\in\NN$ we find a solution to the system
\eqnb\label{lemTransport:eqTrunc}
\left\{
\begin{array}{l}
\p_t f_N +P_NB(u_N,f_N)=g_N\\
f_N(r)=P_N f(r),
\end{array}
\right.
\eqne
on $[r,T]$, where $P_N$ denotes truncation up to Fourier modes of order $N$ (in space), $u_N\coloneqq P_Nu$ and $g_N\coloneqq P_N g$.
The estimate \re{lemTransport:bound1} applies to $f_N$ so by a standard argument using the Aubin-Lions lemma we obtain a weak solution $h\in L^\infty(r,T; H^s)$ such that $\p_th\in L^\infty(r,T; H^{s-1})$, hence $h\in C^0([r,T];H^{s-1})$. Using the divergence-free property we obtain uniqueness of solutions $h\in L^2(r,T;H^1)$ with time derivative $\p_t h\in L^2(r,T;L^2)$. Indeed, if $h$ and $\tilde h$ are two such solutions it follows from \re{eqTransport} that
\[
\frac{\d}{\d s}\|h-\tilde h\|^2_{L^2}=0.
\]
Therefore $f=h$, i.e. this weak solution agrees with our $C^1$ classical solution on $[r,T]$.
We now prove \re{lemTransport:bound1} in the case $r\leq t$. Since $f_N\to f$ in $L^2(r,T; H^{s-1})$, we may choose a dense countable subset $\{t_k\}_{k=1}^\infty\subset[r,T]$ such that $f_N(t_k)\to f(t_k)$ in $H^{s-1}$ as $N\to\infty$ for each $k$. The formal argument above is valid on the truncated system, thus
\eqnb\label{lemTransport:eqApproxIneq}
\|f_N(t_k)\|_{H^s}\leq \left(\|P_Nf(r)\|_{H^s}+\frac{\|g\|_{\Sigma_s}}{C\|u_N\|_{\Sigma_s}}\right)\exp(C|t_k-r|\|u\|_{\Sigma_s})-\frac{\|g_N\|_{\Sigma_s}}{C\|u\|_{\Sigma_s}}.
\eqne
Hence, passing to a subsequence of $f_N$ for each $k$ with a diagonalisation argument, we may assume that for all $k$, $f_N(t_k)$ converges weakly in $H^s$ as $N\to\infty$. Moreover, by the choice of the points $t_k$ and uniqueness of weak limits, we must have $f_N(t_k)\rightharpoonup f(t_k)$ in $H^s$. Taking the $\liminf$ of \re{lemTransport:eqApproxIneq} with respect to $N\to\infty$ yields
\eqnb\label{lemTransport:eqprebound1}
\|f(t_k)\|_{H^s}\leq \left(\|f(r)\|_{H^s}+\frac{\|g\|_{\Sigma_s}}{C\|u\|_{\Sigma_s}}\right)\exp(C|t_k-r|\|u\|_{\Sigma_s})-\frac{\|g\|_{\Sigma_s}}{C\|u\|_{\Sigma_s}}.
\eqne
To prove \re{lemTransport:bound1} and the weak continuity of $f$ into $H^s$ we will use the fact that a weakly convergent sequence in $H^{s-1}$ that is also bounded in $H^s$ must converge weakly in $H^s$ to the same limit by the Banach--Alaoglu theorem. Indeed if $x_k\rightharpoonup x$ in $H^{s-1}$ is bounded in $H^s$ then any subsequence admits a further subsequence converging weakly in $H^s$ to $x$ by the uniqueness of weak limits.
From this, \re{lemTransport:bound1} follows by the density of $\{t_k\}$ and the continuity of $f$ into $H^{s-1}$. Indeed, in the case $t\geq r$, for any subsequence $(t_{k_\ell})_{\ell=1}^\infty\subset(t_k)_{k=1}^\infty$ such that $t_{k_\ell}\to t$ we have $f(t_{k_\ell})\rightharpoonup f(t)$ in $H^s$. Applying \re{lemTransport:eqprebound1} at $t_{k_\ell}$ and taking the $\liminf$ as $\ell\to\infty$ yields \re{lemTransport:bound1} at time $t$. For $t<r$ the required bounds are obtained in the same way from the time-reversed version of \re{lemTransport:eqTrunc}.
We have shown that $\|f(t)\|_{H^s}$ is bounded uniformly, not merely almost everywhere. Therefore for any fixed $\tau\in[0,T]$ and any sequence $\{\tau_k\}\subset[0,T]$ such that $\tau_k\to\tau$ we deduce, by the continuity into $H^{s-1}$, that $f(\tau_k)\rightharpoonup f(\tau)$ in $H^s$. This says that $f$ is weakly continuous into $H^s$.
To see that $f\in \Sigma_{s}$ it is therefore enough to show that $\|f(t)\|_{H^{s}}$ is continuous. This is the case since for all $r,t\in[0,T]$, \re{lemTransport:bound1} gives bounds of the form
\[
(\|f(r)\|_{H^s}+\alpha)\e^{-\beta|t-r|} -\alpha\leq \|f(t)\|_{H^s}\leq(\|f(r)\|_{H^s}+\alpha)\e^{\beta|t-r|} -\alpha
\]
for time independent constants $\alpha,\beta>0$, where the first inequality comes from \re{lemTransport:bound1} with $r$ and $t$ interchanged.
The fact that $f\in C^1([0,T];H^{s-1})$ follows from the fact that $\p_tf\in \Sigma_{s-1}$ which can be seen from the regularity of the other terms in \re{eqTransport}.
\end{proof}
\begin{lemma}\label{lemDifferencesTransport}
For $s>n/2+1$ fix $u_1$, $u_2\in\Sigma_s$ and $f_0\in H^s$. Let $g_1=g_2=0$ or $g_i=-u_i$ for $i=1,2$. If $f_1$, $f_2$ are the solutions of \re{eqTransport} corresponding to $u_1$, $u_2$, $g_1$, $g_2$ respectively, then in the case that $g_1=g_2=0$, there exists $C_5>0$ depending only on $s$ such that
\eqnb\label{lemDifferencesTransport:bound1}
\|f_1(t)-f_2(t)\|_{L^2}\leq C_5\|f_1+f_2\|_{\Sigma_s}\|u_1-u_2\|_{\Sigma_0} t
\eqne
for all $t\in[0,T]$. In the case that $g_i=-u_i$ for $i=1,2$ we instead have
\eqnb\label{lemDifferencesTransport:bound2}
\|f_1(t)-f_2(t)\|_{L^2}\leq (C_5\|f_1+f_2\|_{\Sigma_s}+1)\|u_1-u_2\|_{\Sigma_0} t.
\eqne
\end{lemma}
\begin{proof}
Using the anti-symmetry of $(B(u_1-u_2,\cdot),\cdot)_{L^2}$ we have, for $t\in[0,T]$,
\begin{multline*}
\frac{\d}{\d t} \|f_1-f_2\|_{L^2}^2\leq |(B(u_1-u_2,f_1+f_2),f_1-f_2)_{L^2}| + 2|(g_1-g_2,f_1-f_2)|\\
\leq C \|f_1+f_2\|_{H^s}\|u_1-u_2\|_{L^2}\|f_1-f_2\|_{L^2}+2\|g_1-g_2\|_{\Sigma_0}\|f_1-f_2\|_{L^2}\\
\leq C\|f_1+f_2\|_{\Sigma_s}\|u_1-u_2\|_{\Sigma_0}\|f_1-f_2\|_{L^2} +2\|g_1-g_2\|_{\Sigma_0}\|f_1-f_2\|_{L^2},
\end{multline*}
where $C$ depends on the constant of the embedding $H^{s-1}\hookrightarrow L^\infty$. Formally dividing by $\|f_1-f_2\|_{L^2}$ and integrating the resulting inequality gives \re{lemDifferencesTransport:bound1} or \re{lemDifferencesTransport:bound2}, depending on the choice of $g_1$ and $g_2$; justifying this last step is straightforward.
\end{proof}
We are now in a position to prove the main result.
\begin{proof}[Proof of Theorem \ref{thmExistUniq}.]
Fix $s>n/2+1$ and let $C_3$, $C_4$ be the constants in \re{lemW:bound1}, \re{lemTransport:bound1} (from Lemmas \ref{lemW} and \ref{lemTransport}) respectively. Fix $M>\|u_0\|_{H^s}$ and $T>0$ so that
\[
\exp(C_4TM)\|u_0\|_{H^s}\left(\frac{C_3}{C_4}[\exp(C_4TM)-1]+1\right)\leq M.
\]
Let $u\in B_M(T)$ be a divergence-free function and let $\eta$ be the solution of \re{eqTransport} for the flow $u$ with initial data $\eta_0=0$ and forcing $g=-u$. Let $v$ be the solution for initial data $v_0=u_0$ with $g=0$. Define $Su:=\PP[(\nabla \eta)^\ast v + v]$, then by Lemmas \ref{lemW} and \ref{lemTransport},
\eqnb\label{thmExistUniq:B_MBnd}
\|Su(t)\|_{H^s}\leq \exp(C_4tM)\|u_0\|_{H^s}\left(\frac{C_3}{C_4}[\exp(C_4tM)-1]+1\right)\leq M
\eqne
for all $t\in[0,T]$. Hence $S:B_M(T)\to B_M(T)$. Note that $Su(\cdot,0)=u_0$ even if $u(\cdot,0)\neq u_0$.
We next show that $S$ is a contraction on $B_M(T)$ in the $L^2$ norm if $T$ is sufficiently small. For $u_1$, $u_2\in B_M(T)$ we construct $v_i$ and $\eta_i$ from $u_i$ as above for $i=1,2$ with $v_1(\cdot,0)=v_2(\cdot,0)=u_0$. Now
\eqnb\label{thmExistUniq:Contraction}
\begin{aligned}
\|Su_1-Su_2\|_{L^2}&\leq C_a\|\eta_1-\eta_2\|_{L^2}+C_b\|v_1-v_2\|_{L^2}\\
&\leq(C_c\|v_1+v_2\|_{\Sigma_s}+C_d\|\eta_1+\eta_2\|_{\Sigma_s}+C_e) T\|u_1-u_2\|_{\Sigma_0}\\
&\leq C(u_0,M,T)\|u_1-u_2\|_{\Sigma_0},
\end{aligned}
\eqne
where $C_a,\ldots,C_e$ denote various constants arising from the application of Lemmas \ref{lemW}, \ref{lemTransport} and \ref{lemDifferencesTransport}. Keeping careful track of the constants shows that $C(u_0,M,T)$ is given by the formula
\eqnb\label{thmExistUnique:Const}
\begin{aligned}
C(u_0,M,T)&\coloneqq2T\left[\left(C_5(C_3'M+1)\|u_0\|_{H^s}+\frac{C_3'C_5M}{C_4}\right)\exp(C_4TM)\right.\\
&\left.\oneChar+C_3'M\left(\frac{1}{2}-\frac{C_5}{C_4}\right)\right],
\end{aligned}
\eqne
where $C_3'$, $C_4$ and $C_5$ are the constants from Lemmas \ref{lemW}, \ref{lemTransport} and \ref{lemDifferencesTransport}, respectively.
Taking the supremum of \re{thmExistUniq:Contraction} with respect to $t$ and choosing $T>0$ small enough, we see that $S$ is a contraction in the required sense.
We conclude that the iterates of $S$ converge to a unique fixed point $u$ in the closure of $B_M$ with respect to $\|\cdot\|_{\Sigma_0}$. Since $B_M(T)$ is convex and closed in $\Sigma_s$ it is weakly closed, hence $u\in B_M(T)$ is a fixed point of $S$. A fixed point of $S$, together with the associated back-to-labels map and virtual velocity, clearly gives a solution to the Eulerian-Lagrangian formulation of the Euler equations with the required regularity. The contraction argument gives uniqueness in $B_M(T)$ and it remains to prove that we have uniqueness in $\Sigma_s(T)$.
Since $S$ is a contraction on $B_M(\widetilde T)$ for any $\widetilde T\in(0,T]$, we have by continuity of $\|u(t)\|_{H^s}$, that if $u'$, $A'$ and $v'$ also satisfy (\ref{eqELA}--\ref{eqELv}) with $u'\in\Sigma_s(T)$, then $u(t)=u'(t)$ when $0\leq t\leq\min(T,\inf\{r: \|u'(r)\|_{H^s}=M\})$.
Now we know that for all $k\in\NN$ there exists $T_k\leq T$ such that $S$ is a contraction on $B_{M+1/k}(T_k)$ and we may assume $T_k\to T$ as $k\to\infty$. By the previous observation, this means that $u$ is the unique solution in $\Sigma_s(T-\varepsilon)$ for all $\varepsilon>0$, hence by continuity $u$ is the unique solution in $\Sigma_s$ as required.
The proof that $u\in C^1([0,T];H^{s-1})$ uses the same trick as Lemma \ref{lemW} to save a spatial derivative (we have only shown that $\nabla \eta_t\in H^{s-2}$, which might otherwise limit the regularity of $u$).
By definition $u=\PP[(\nabla \eta)^\ast v+v]$. We use \re{lemW:eq2} from the proof of Lemma \ref{lemW}. Precisely we have
\eqnbs
\begin{aligned}
&\frac{1}{h}\left\|u(t+h)-u(t)-h\PP[(\nabla \eta(t))^\ast\p_tv(t)+\p_tv(t)+(\nabla v(t))^\ast\p_t \eta(t)]\right\|_{H^{s-1}}\\
&\oneChar\leq\frac{1}{2h}\left\|\PP[(\nabla \eta(t+h)+\nabla \eta(t))^\ast(v(t+h)-v(t)- h\p_tv)]\right\|_{H^{s-1}}\\
&\oneChar\oneChar+\frac{1}{2h}\left\|\PP[(\nabla v(t+h)+\nabla v(t))^\ast(\eta(t+h)-\eta(t)- h\p_t\eta)]\right\|_{H^{s-1}}\\
&\oneChar\oneChar+\frac{1}{2}\|\PP[(\nabla \eta(t+h)-\nabla \eta(t))^\ast\p_t v(t)]\|_{H^{s-1}}\\
&\oneChar\oneChar+\frac{1}{2}\|\PP[(\nabla v(t+h)-\nabla v(t))^\ast\p_t \eta(t)]\|_{H^{s-1}}\\
&\oneChar\oneChar+\frac{1}{h}\|v(t+h)-v(t)-h\p_tv(t)\|_{H^{s-1}}.
\end{aligned}
\eqnes
Since $H^{s-1}$ is an algebra and $\eta, v\in C^0([0,T];H^{s})\cap C^1([0,T];H^{s-1})$, the right-hand side vanishes as $h\to 0$. Therefore $u\in C^1([0,T];H^{s-1})$ and
\[
\p_t u= \PP[(\nabla \eta(t))^\ast\p_tv(t)+\p_tv(t)+(\nabla v(t))^\ast\p_t \eta(t)].
\]
\end{proof}
\section{An Alternative Iteration}
Here we exhibit an alternative proof of existence and uniqueness for (\ref{eqELA}--\ref{eqELv}), which is based on contractions with respect to $A$ rather than $u$. The extra technicality in this approach is contained in the following lemma, which is proved in an appendix. We will denote the identity map on ${\TT^n}$ by $\iota$ and use the correspondence between maps $\TT^n\to\RR^n$ and $\TT^n\to\TT^n$ without comment.
\setcounter{AppendixLemmaNumber}{\value{lem}}
\begin{lemma}\label{lemComposition}
Let $s\in\ZZ$ with $s>\frac{n}{2}+1$ and fix $f, g\in H^s$. If $g+\iota$ is a volume preserving map then $f\circ (g+\iota)\in H^s$ and
\eqnb\label{lemComposition:bound1}
\|f\circ (g+\iota)\|_{H^s}\leq C_6\|f\|_{H^s}(\|g\|_{H^s}+(2\pi)^n)^s
\eqne
for some $C_6>0$ depending only on $s$ and the constants from some Sobolev embeddings.
\end{lemma}
This allows us to write a second proof of existence and uniqueness of solutions in $\Sigma_s$ for $s>n/2+1$ in the case $s\in \ZZ$.
Fix $u_0 \in H^s$ and $M>0$, and suppose $\eta\in B_M(T)$ for some $T>0$ such that $\eta(t) +\iota$ is volume-preserving for all $t\in[0,T]$. Define $u$ and $v$ via $v=u_0\circ (\eta+\iota)$ and $u=\PP[(\nabla \eta)^\ast v + v]$. Construct $\eta'$, the iterate of $\eta$, by solving
\[
\p_t \eta'+(u\cdot\nabla)\eta'=-u, \; \eta'(x,0)=0.
\]
By Lemmas \ref{lemW}, \ref{lemTransport} and \ref{lemComposition} we have
\[
\|\eta'\|_{\Sigma_{s}}\leq\frac{1}{C_4}\left[\exp(C_4C_6(C_3M+1)(M+(2\pi)^n)^s\|u_0\|_{H^s}T)-1\right].
\]
Hence for $T$ small enough, we may assume $\eta'\in B_M(T)$ and since $\nabla \cdot u=0$ we also have that $\eta' +\iota$ is volume preserving.
Now suppose that $\eta_1$, $\eta_2\in B_M(T)$ and let $\eta_1'$, $\eta_2'$ be the respective iterates then
\[
\|\eta_1'-\eta_2'\|_{\Sigma_0}\leq 2(C_5M+1)(C_3'M+(C_3'M+1)C_{\mathrm{Lip}}) T\|\eta_1-\eta_2\|_{\Sigma_0},
\]
by Lemmas \ref{lemW} and \ref{lemDifferencesTransport}. Here $C_{\mathrm{Lip}}$ is the Lipschitz constant of $u_0$. It follows that, for small enough $T$, this iteration procedure is a contraction on $B_M(T)$ in the $L^2$ norm. Existence and uniqueness of solutions now follows using the same steps as in the previous method.
\section{Conclusions}
Constantin found that $C^{1,\mu}$ initial data gives rise to unique solutions with $C^{1,\mu}$ trajectories for a short time. In contrast, we have seen that for $s>n/2+1$, there exists a local solution which is continuous in time into $H^{s}$ and $C^1$ into $H^{s-1}$ with trajectories in $C^1([0,T]\x{\TT^n})$. This regularity is enough to deduce that such solutions are also solutions of the classical Euler equations.
This paper partly prepares the ground for a similar treatment of the Navier--Stokes equations. Once again it is Constantin \cite{Const_ELNS_2001,Const_2003} who has put forward an Eulerian-Lagrangian form for the viscous case. In that formulation diffusive terms appear in the equations for the back-to-labels map and the virtual velocity, and in the aforementioned papers some a priori information about that system and its relationship to the classical Navier--Stokes equations is proved. We plan to consider a system for the Navier--Stokes equations with a non-diffusive back-to-labels map and seek to prove a local existence result analogous to the one exhibited here.